OFF THE SIDES OF THEIR DESKS: DEVOLVING EVALUATION TO NONPROFIT AND GRASSROOTS ORGANIZATIONS

by

GERALD BRUCE HINBEST

M.A. Queen’s University, 1995
B.A. University of Guelph, 1979

A DISSERTATION SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF EDUCATION in THE FACULTY OF GRADUATE STUDIES (Educational Leadership & Policy)

THE UNIVERSITY OF BRITISH COLUMBIA (VANCOUVER)

September 2008

© Gerald Bruce Hinbest, 2008

ABSTRACT

This study examines the changing context for evaluation practice, and its implications, as social program and service delivery is devolved to small nonprofit and grassroots organizations. The setting is explored through a critical reflection-on-practice of over twenty years’ experience conducting evaluation. Using a multiple case study approach, the dissertation examines nine broad themes through two composite scenarios and twenty-five detailed vignettes that portray the challenges of working as a consultant with and for small nonprofit and grassroots organizations as they grapple with growing demands for accountability through evaluation. The multiple case study analysis is complemented by an analysis of case studies in two broad areas of literature: one on the impacts of devolution in the nonprofit sector, the other on recent trends in evaluation conducted in challenging settings, including community-based and non-governmental organizations (NGOs). The five themes addressed through the case studies and literature on devolution are: 1) accountability, 2) capacity, 3) mandate drift, 4) competition, and 5) complexity. The four themes addressed through the case studies and literature on evaluation are: 1) theory-based evaluation, 2) inclusiveness (participatory approaches), 3) the changing and multiple roles of evaluators, and 4) the use of dialogue, deliberative and democratic approaches in evaluation practice.
The study contends that the ‘rough ground’ of nonprofit settings provides a useful lens for understanding broader challenges and trends in evaluation practice; and that evaluators provide more than just technical skills and knowledge, undertaking important roles in linking communities, mediating among stakeholders, fostering dialogue and deliberation about programming, and mitigating some of the more egregious impacts of devolution experienced by nonprofit and grassroots organizations. By acknowledging and supporting the development of such roles and responsibilities, the profession and evaluators working in these settings can contribute meaningfully to public discourse about the nature of accountability, the broad context of social programming, the complex capacity challenges faced by nonprofit organizations, and the role of evaluation in exacerbating or potentially mitigating such effects.

TABLE OF CONTENTS

ABSTRACT
TABLE OF CONTENTS
LIST OF TABLES
LIST OF FIGURES
LIST OF VIGNETTES
ACKNOWLEDGEMENTS
CHAPTER ONE: INTRODUCTION
  The Changing Context
  A Reflection on Practice
  Reflective Questions
  Outline of the Dissertation
CHAPTER TWO: RESEARCH METHODS: REFLECTION ON PRACTICE
  Study Rationale
  Developing the Themes
  Power and Reflexivity
  A Multiple Case Study Approach
  Multiple Case Studies
  Data for the Case Studies
  Triangulation
  Composite Scenarios
  Composite Scenario Background
  Vignette 2.1 – The Grassroots Composite Scenario
  Vignette 2.2 – The Small Nonprofit Composite Scenario
CHAPTER THREE: TRENDS IN THE NONPROFIT, VOLUNTARY AND GRASSROOTS SECTOR
  Defining the Nonprofit Sector
  Canadian Nonprofit Organizations
  Civil Society
  The Devolution of Programs and Services
  The Rationale for Devolution
  The Impacts of Devolution on Nonprofit Organizations
CHAPTER FOUR: THE ROLE OF EVALUATION IN SOCIETY
  Devolving Evaluation
  Defining Accountability
  Reflecting on Evaluation and its Rationale
CHAPTER FIVE: CASE STUDY ANALYSIS: DEVOLUTION THEMES
  Theme One: Accountability
  Accountability in the Context of Devolution
  Accountability and Performance Measurement
  Theme Two: Capacity
  Theme Three: Mandate Drift
  Theme Four: Competition
  Theme Five: Complexity
  Summary
CHAPTER SIX: CASE STUDY ANALYSIS: EVALUATION THEMES
  Theme Six: Theory-Based Evaluation
  Theme Seven: Inclusiveness
  Theme Eight: Roles
  Theme Nine: Dialogue, Deliberation & Democracy
  Summary
CHAPTER SEVEN: OFF THE SIDES OF THEIR DESKS
  Revisiting the Reflective Questions
  Implications for Practice
  Leadership in Evaluation Practice
  Distinctive Contribution of the Study
  Study Limitations
  Future Research
  Reflections on Researching my Practice
REFERENCE LIST
APPENDIX A: META-ANALYSIS TABLES
APPENDIX B: CANADIAN NONPROFIT & VOLUNTARY ORGANIZATIONS

LIST OF TABLES

TABLE A1: DEVOLUTION CASE STUDIES AND SURVEY ARTICLES
TABLE A2: EVALUATION CASE STUDIES AND SURVEY ARTICLES
TABLE B1: NONPROFIT & VOLUNTARY ORGANIZATIONS, BY PRIMARY ACTIVITY AREA, 2003

LIST OF FIGURES

FIGURE 6.2.1: PROCESS DIMENSIONS IN COLLABORATIVE INQUIRY

LIST OF VIGNETTES

VIGNETTE 2.1 – THE GRASSROOTS COMPOSITE SCENARIO
VIGNETTE 2.2 – THE SMALL NONPROFIT COMPOSITE SCENARIO
VIGNETTE 5.1.1 (NONPROFIT) – OSSIFYING LOGICS
VIGNETTE 5.1.2 (GRASSROOTS) – ELEVENTH-HOUR REFRAMING
VIGNETTE 5.2.1 (GRASSROOTS) – OFF THE SIDE OF THE DESK
VIGNETTE 5.2.2 (NONPROFIT) – TRADE OFF
VIGNETTE 5.3.1 (NONPROFIT) – THE NEW INFORMATION SYSTEM
VIGNETTE 5.3.2 (GRASSROOTS) – STEALTHY CHANGES
VIGNETTE 5.4.1 (GRASSROOTS) – DECIDING TO GROW
VIGNETTE 5.4.2 (NONPROFIT) – ANTICIPATING OPPORTUNITIES
VIGNETTE 5.5.1 (NONPROFIT) – TWO BIRDS WITH ONE …
VIGNETTE 5.5.2 (GRASSROOTS) – SWITCHING HATS
VIGNETTE 5.5.3 (NONPROFIT) – REFRAMING GOALS & ACTIVITIES
VIGNETTE 5.5.4 (GRASSROOTS) – REFRAMING SUCCESS & FAILURE
VIGNETTE 6.1.1 (NONPROFIT) – BUILDING A LOGIC
VIGNETTE 6.1.2 (NONPROFIT) – REFRAMING PROGRAM THEORY
VIGNETTE 6.1.3 (GRASSROOTS) – SHARING A COMMUNITY OF PRACTICE
VIGNETTE 6.2.1 (NONPROFIT) – PARTICIPATION REVOKED
VIGNETTE 6.2.2 (GRASSROOTS) – EVALUATOR OR PARTICIPANT … OR BOTH?
VIGNETTE 6.2.3 (NONPROFIT) – REVOLVING MEMBERSHIP
VIGNETTE 6.2.4 (NONPROFIT) – BUILDING LONG-TERM CAPACITY
VIGNETTE 6.3.1 (GRASSROOTS) – EVALUATION VS. RESEARCH
VIGNETTE 6.3.2 (NONPROFIT) – CHOOSING SIDES … OR NOT?
VIGNETTE 6.4.1 (NONPROFIT) – ENGAGING DISCUSSION, DEBATE & DIALOGUE
VIGNETTE 6.4.2 (GRASSROOTS & NONPROFIT) – COMMUNITIES OF DIFFERENCE

ACKNOWLEDGEMENTS

I have been fortunate to make this dissertation journey with a particularly well-suited and supportive committee. I am indebted to Carolyn Shields for challenging me to critically explore my evaluation practice as a context of both education and leadership. Sandra Mathison’s breadth of knowledge about evaluation and her professional efforts to reflect on its basic assumptions gave me inspiration and practical insight that can be found woven throughout the fabric of the dissertation. I am particularly grateful to Tom Sork, who, as chair of my committee, demonstrated outstanding scholarship, gentle and unfailing encouragement, and exceptional patience with my seemingly interminable process. To all three of you, thank you for believing in this work as both worthwhile and achievable.

The Ed.D. Program is pointedly about examining and renewing professional practice. Key to this process is the learning community represented by the cohort, who provide practical wisdom from practice contexts, and model in word and deed their own growth and critical reflection. I thank all members of my cohort for their substantial and ongoing support and camaraderie, and I remain particularly indebted to Rhonda Margolis, Joanna Ashworth, and Jennifer White for their inspiration, sage counsel, and companionship. I also thank David Coulter and Shauna Butterwick, who so ably represent the values of the program.

I hope that this dissertation faithfully represents the serious challenges faced by nonprofit and grassroots organizations that I came to appreciate through informative discussions with project coordinators, administrators, and participants. My thanks in particular go to Jennifer Kyffin and Jennifer White, who proved to be wellsprings of innovative ideas and exceptional sounding boards for constructive reflection.
I extend my deep appreciation for valuable comments, clarifications, resource suggestions, and encouragement generously offered via e-mail by Ernie House, Bob Stake, Jennifer Greene, Bob Williams, Michael Patton, Jennifer Clark, Angela Eikenberry, Dvora Yanow, Susan Phillips, John Shields, Ted Richmond and Bryan Evans. Colleagues near and far have contributed substantially to my understanding of issues and my process of working through how to represent them, including Lynette Harper, Doug Fraser, Chantelle Marlor, Caroline Burnley, Owen Peer, Victor Owen and John Vellacott. Linda Derksen provided all this and more, creatively walking the line between department chair, motivator and friend. The practical matter of writing the dissertation was greatly enhanced by delightful opportunities to housesit for Lynne MacFadgen & Ross MacKay, Joe & Rony Gerritsen, Lynette Harper & Bruce Finlayson, and Leigh Blaney. When not borrowing a desk from these friends, I was writing at a table in the Teahouse on Rutherford, for which I must thank the incomparable hospitality and enthusiasm of Bernice Rambo and Jaz Parmajit. Two mentors have enriched my professional life. From David McKinney I learned about balancing rigour in research with a passion for social justice. Mike Nelson taught me how to grow beyond doing evaluation, and to question what it could and should represent. It is beyond my means or the space available to acknowledge the many friends who offered me sustenance and understanding as I neglected other priorities. Please be aware that for your questions, accommodations and encouragements I remain profoundly grateful. Last, but in my heart and mind, first, I thank my family. To my daughter Maggie, thank you for your enthusiasm, love and humour, and for your tremendous help with the references. And to Jeanne, whose patience, love and wisdom sustained me and reminded me of my purpose: without your enduring support this dissertation would not have been possible. 
CHAPTER ONE: INTRODUCTION

Evaluators are drawing closer to community, making their services available there. Perhaps this is a symptom of the growing distance between western governments and their electorates, a reaction by communities and professions against centrally-imposed, coercive programmes – an attempt, perhaps, to recapture the change initiative for the grass-roots... there seems to be a proliferation of, often small-scale, studies evaluating community development projects.

Saville Kushner, 2002b, p. 20.

The phone rings. It's Laura, the director of a local grassroots organization. I've been working with her on an evaluation of one of her agency’s innovative pilot programs, and she has been reviewing the first draft of the final report I submitted before sharing it with her steering committee. Her somewhat panicked voice asks, “But where are the numbers?” Laura's question is not about the appropriateness of qualitative versus quantitative inquiry; it concerns the nature of accountability in delivering programs and services in small nonprofit organizations. It turns out that at a meeting earlier that day on an unrelated issue, the funder’s representative had offered congratulations on completing the project, and expressed anticipation for receiving the report and “seeing how the numbers turned out.” Despite spending over a year working closely with Laura and a broadly-based steering committee in jointly designing, developing and implementing the project, and working out detailed data collection plans since the project’s initial planning stages, one casual comment by a representative of the funder as the initiative nears completion is enough to have her questioning our work and progress. While I am a little taken aback by Laura's phone call, I am not completely taken by surprise. It is a phone call I have received, in one form or another, on numerous occasions.
Even with programs that have had only a dozen participants over a year-long period, and for which the primary data collection approaches involved detailed interviews, focus groups, observation or a case study, at some point the perceived needs of accountability to a funder can quickly set aside all previous planning, discussions and understandings. As I work to reassure Laura that we can and should stick to our game plan, that the funder will be satisfied with the report, and that her board of directors will appreciate this form of reporting, I am reminded that the negotiation and dialogue never end – they start before the project begins and may not finish until long after the project is completed.

In this dissertation I examine the changing context and implications for evaluation in small nonprofit and grassroots organizations. The topic is program evaluation, but my focus is not on the methods of research; rather, it is on the socio-political context in which program evaluation takes place. The dissertation represents a critical reflection-on-practice of some twenty-five years spent conducting program evaluation, as an internal evaluator within government, and as an external consultant to all levels of government and to nonprofit organizations. The dissertation grew out of my ongoing efforts to understand the sources of change I have experienced in my practice, and to understand the implications of my role as an evaluator for the organizations I have worked for and with.

The Changing Context

Changes to the program and service context have been profound. Over the past two decades, the New Public Management (NPM) movement (Kaboolian, 1998) has been associated with a wide range of reforms to the relationship between the state, the private sector and the broader public sector across much of the Western world.
Two aspects of these changes have proven particularly meaningful for the nonprofit sector – the general downsizing of government, and the concomitant shifting of responsibilities for program and service delivery to the nonprofit sector. This shifting of program and service delivery has been widely characterized as ‘devolution’; however, it encompasses a diverse range of descriptive and analytical distinctions. The language used to discuss ‘devolution’ reflects the multi-disciplinary ramifications of the phenomenon, and the terminology is both inconsistent and evolving. The more theoretical discourse in political science, sociology and other academic disciplines has used labels such as neoliberal and neoconservative transitions, marketization, privatization, and decentralization. The literature of human services and social work practitioners has tended to be more practically based, speaking of devolution, downsizing and outsourcing. The administrative disciplines have used the full range of terms, depending on the publication and the intended audience. All of these approaches discuss the issues in terms of the ‘third,’ ‘voluntary,’ ‘nonprofit’ or ‘grassroots’ sectors. More recently, the theoretical discussions have focused on ‘civil society’ as a frame for conceptualizing the sector. The terminology itself is an important facet of understanding and framing the issues faced by the sector, and will be discussed more fully in Chapter Three.

The devolution of programs and services to the nonprofit and voluntary sectors has generally involved organizations adopting many of the methods and values of the market. The New Public Management has primarily been characterized by an increasingly hierarchical approach to governance, marked by contractual and competitive project-based funding and a preoccupation with accountability.
However, recent reforms have attempted to introduce what purport to be more horizontal governance processes – working with and through networks, using cooperative approaches to decision-making and policy development, and coordinating among diverse stakeholder groups. Recent discussions have highlighted an apparent shift toward more horizontal governance models (Levasseur & Phillips, 2005; Phillips, 2004; Salamon, 2003). My interest lies in the tensions that exist and are emerging between accountability on one hand, and autonomy, collaboration and interdependence on the other – in short, the nexus of governance within nonprofit, voluntary and grassroots organizations.

To date, the accountability emphasis in hierarchical governance under the New Public Management has been on contract compliance, control of abuse, and the appropriate use of public funds. Although this is only one aspect of the broader mandate of evaluation, the demand for performance improvement and effectiveness evaluation has been growing, and there is a real need to understand how processes of devolution affect those working on the front lines of evaluation and accountability in the nonprofit sector. As program and service delivery has devolved to nonprofit and voluntary organizations, the locus and nature of much evaluation work has also changed. Devolution has broad implications for the evaluation field, reflecting shifts in the process, content, clients and stakeholders of evaluation. Similarly, the increased emphasis on public accountability associated with the devolution of programs and services is reflected in changes to how programs are designed, administered and funded, and to the kinds of programming available to clients. And yet the evaluation literature has been curiously silent on the broader impacts and implications of devolution for both evaluation theory and practice.
On the other hand, the past few years have seen many of devolution’s impacts on nonprofit organizations documented – mission drift, a reduced advocacy role for organizations and their clients, increased competition among agencies, and planning challenges arising from the contractual nature of funding (Evans & Shields, 2005; Eikenberry & Kluver, 2004; Richmond & Shields, 2004; Phillips & Levasseur, 2004). Such changes are fundamentally implicated in what evaluators do. The quote by Saville Kushner (2002b) that introduced this chapter, which spoke of evaluators drawing closer to community and of the proliferation of small-scale studies evaluating community development projects, continues as follows:

Here, the evaluator has to make no effort to discover desirable agendas. These are often values-driven projects, which grow directly out of need and are neither mediated through the logic of government nor their ethical purposes overwhelmed by distantly-conceived targets. They are often more or less pure well-springs of humanism. (p. 20)

The experience from my practice on which I reflect in this dissertation suggests to me that Kushner’s optimism here is unwarranted. While such projects at the grassroots may begin as values-driven projects, they quickly become drawn into the logic of government as soon as they begin the process of applying for or requesting financial support from the state – even if they never obtain that funding.1 This begins with how the project is framed or re-framed in initial proposals, and continues through funding processes that demand complex accountability of even the smallest and most innovative projects, and through the ongoing reshaping of community initiatives to make them appear attractive for renewed funding or to new or additional funders.
The system of contracting and financial supports under the New Public Management can quickly transform ‘well-springs of humanism,’ redefining and repackaging innovative ideas, and removing from them their contingent, situated, and locally relevant emphasis. In the process, organizations are drawn into the world of contracted program delivery, and refashion their identities within the community and beyond.

1 Indeed, it begins long before that, based on how the state defines problems, allocates funding, and decides what is within the scope of government intervention.

A Reflection on Practice

The University of British Columbia’s Doctor of Education in Educational Leadership and Policy (Ed.D.) program uses reflection on practice to help students examine root assumptions about the contexts in which we work. Such pointed reflexivity challenges our practices and enriches them. I enrolled in the Ed.D. Program because I had questions and concerns about my ongoing practice as a consultant and evaluator, and I was searching for tools to help me understand the implications of that practice. The Ed.D. is specifically designed to produce knowledge that contributes to the understanding and improvement of practice, in addition to a broader theoretical understanding of practice issues.

The dominant theme in my practice over the past twenty-five years has been program evaluation. My evaluation practice has been in federal and provincial government bureaucracies (as an internal evaluator), with two small management-consulting firms, as an independent consultant, in post-secondary institutions, and as a volunteer with community-level nonprofit organizations. Over this period I participated in or was the lead evaluator for approximately fifty program evaluation studies, ranging from large national programs to small innovative projects offered by grassroots organizations, some involving perhaps a dozen participants.
Over time, my focus shifted from large-scale programs to smaller provincial-level programs, and finally, over the past dozen years, to programs delivered in local communities by third-party deliverers. For some time I had seen these changes as reflecting my own shifting interests and ongoing frustrations with the contexts in which I have worked. Through the reflection on practice afforded by the Ed.D. program, I have come to appreciate that these changes reflect, in at least equal measure, transformations occurring in the broader context of evaluation work.

In this dissertation I reflect on these changes, describe and document the range of impacts on organizations and clients of the process of devolution to third-party deliverers, and identify the challenges and implications of devolving evaluation activities to local communities. In doing so, I draw upon my experience as a program evaluator working with a variety of organizations in many contexts. While the story I am telling involves the experience and insights of many people – collaborators and clients from my practice over the past dozen years, in particular – the reflection is of my experience as an evaluator in these contexts. It is an on-the-ground examination of devolved evaluation practice.

Reflective Questions

My reflection on practice in this dissertation focuses on the following questions:

1. How and why has my evaluation practice changed over the past twenty-five years?
2. How does the devolution of programs and services affect the practice of evaluation?
3. What are the problems and challenges of conducting evaluation in a context of devolved programs and services?
4. How does evaluation affect devolved programs and services?
5. What are the implications of devolving evaluation to nonprofit organizations for clients, programs, organizations, communities, evaluators and the broader society?
The questions as posed reflect a dual focus – the nature of devolution and its impacts on nonprofit and grassroots sector organizations, and the nature of evaluation within that context. My overriding interest is in the implications for evaluation practice, but it is the devolution context that marks the study's academic and practical significance.

Outline of the Dissertation

Chapter Two further describes the rationale for the approach taken in this reflection on practice. It presents the idea of a ‘multiple case study’ analysis for examining the diverse contexts and examples from my practice over the past dozen years. The chapter introduces and describes the broad experience base on which I reflect, the presentation format of the ‘vignettes’ within two composite scenarios, and the broad literatures used to help identify the analytic themes that frame the analysis and presentation of arguments.

Chapter Three summarizes the literature examining trends in the Canadian nonprofit, voluntary and grassroots sector. The two main sections of the chapter examine the current profile of nonprofit organizations in Canada, and some of the history of the devolution of programs and services to nonprofit organizations, respectively. Because many of the examples and case studies in the literature reflect the nonprofit context in Europe, Southeast Asia, and the United States, this chapter also discusses devolution and nonprofit organizations in an international context, and reflects on broad global trends and recent discourse concerning the relevance and importance of the concept of civil society to the sector.

Chapter Four describes some of the changes to the contexts in which evaluation work is done, and reviews key trends in evaluation, assessing the relevance of these trends for the context of devolution as a site of evaluation activity.
The devolution process involves shifts in who is conducting evaluation, for which programs, and for whom, as the clients of evaluation become more diverse and simultaneously less easily identifiable. Trends toward more democratic, deliberative, participatory, inclusive, developmental, and responsive evaluation reflect an acknowledgement of the inherent complexity and diversity of evaluation contexts. The chapter situates my own experience and understanding of evaluation within the literature by focusing on my evaluation practice from the perspective of on-the-ground constructed systemic relationships among community players. Chapter Five provides a structured approach to understanding the impacts of devolution on nonprofit organizations and other stakeholders, as well as the impacts and implications for program evaluation. The chapter focuses on five devolution themes – accountability, capacity, mandate drift, competition and complexity. These themes are constructed through examining the case studies drawn from my practice and from analysis of the literature on devolution. Chapter Six represents the evaluation counterpart to Chapter Five. It provides a structure for reflecting on four broad themes from the evaluation literature, and how they are reflected in and relevant to devolved evaluation contexts. The chapter focuses on the themes of theory-based evaluation, inclusiveness, the multiple roles of the evaluator, and dialogue, deliberation and democracy as both analytical approach and practical strategy for on-the-ground practice. In Chapter Seven I reflect on the implications of the nine themes described and discussed in Chapters Five and Six. The chapter summarizes key arguments, explores the implications for devolved program and service delivery and for evaluation in a devolved context, and reflects on the relevance of this context to the broader world of evaluation practice.
Finally, the chapter suggests a framework for future research and reflective practice.

CHAPTER TWO: RESEARCH METHODS: REFLECTION ON PRACTICE

The single case is meaningful, to some extent, in terms of other cases. The researcher and the readers of the case report are acquainted with other cases. Any case would be incomprehensible if other, somewhat similar cases were not already known. So even when there is no attempt to be comparative, the single case is studied with attention to other cases. (Robert E. Stake, 2006, p. 4)

The literatures dealing with evaluation practice and with service delivery and administration regularly use case studies to demonstrate practice issues. In examining my practice of evaluation, the issues I am most interested in articulating and examining are those that have not been discussed widely in the literature. While Khakee (2003) has spoken of the growing gap between theory and practice in evaluation, and Stame (2006) notes the divergence between approaches that attempt to simplify contexts (performance measurement) and those that acknowledge and address complexity (realistic evaluation, developmental evaluation, responsive evaluation, fourth generation evaluation), there are other gaps in the literature that have not been examined analytically. Most discussions in both the administrative/human service and evaluation literatures focus on larger organizations. When nonprofit and voluntary organizations are mentioned, it is typically larger foundations and umbrella organizations that are the focus – little has been done on the grassroots. As I have examined the evaluation literature produced over the past fifteen years, I have noted that few articles – even those that do address the reality of evaluation in nonprofit organizations – examine the grassroots as a context of practice, and fewer still look at the implications of the devolution of programs and services for evaluation practice.
Similarly, while the literature on social program and human service delivery does address nonprofit organizations, and increasingly examines issues related to devolution, its emphasis is still on larger nonprofit and voluntary organizations, leading Smith (1997) to refer to the grassroots as “the dark matter ignored in prevailing ‘flat earth’ maps of the sector” as he notes how seldom such groups have been studied. The complexity of the evaluation contexts that I want to understand and capture challenges traditional approaches to the case study. No one case could provide the opportunity to examine the breadth of inter-related issues encompassed by devolution as a context for practice. It is this complexity that I am interested in exploring, for it is in the combination and interaction of issues that the implications for communities, organizations and evaluation practice become apparent. To address this combination of breadth and complexity I use a multiple case study approach (Stake, 2006) for conducting my reflection on practice. While the review is retrospective, and does not involve new collection of data concerning the various sites of my practice over the past dozen years, it does offer a diverse combination of settings on which to reflect. Over the past dozen years I have worked primarily in consulting – most of that time as an independent consultant. During this time I undertook approximately twenty-five projects, of which sixteen were evaluations, and seven involved nonprofit organizations directly as clients. Three evaluation projects were undertaken on behalf of federal government departments, three for provincial government departments, and five for universities or research centres working through universities.
All of the evaluation work done for the provincial and university sectors involved working with third-party deliverers of programs and services, as did at least another half dozen evaluation studies undertaken previously as an internal evaluator within provincial government departments. While I refer to the latter examples on occasion, the main focus of my review of practice consists of the seven evaluations with nonprofit clients and the eight evaluation studies undertaken for provincial departments or universities, all of which were done in the period between 1995 and 2006.

Study Rationale

The literature on devolution in Canada has matured over the past ten years. It has begun to move beyond describing and documenting the nature of devolution, and has provided extensive evidence of its potentially devastating impacts. A key element of this impact has been the New Public Management emphasis on accountability, an emphasis frequently translated and expressed in the form of expectations for project or program evaluation. Further, the evaluation process itself is being devolved to nonprofit, voluntary and grassroots organizations, demanding new skills, knowledge and competence of those working in the sector. While larger nonprofits sometimes have staff who are prepared and capable of undertaking such tasks, the majority of organizations do not (Carman, 2005, 2007; Hall et al., 2003; Fine et al., 1998). In some cases, organizations hire consultants to help with such work. At other times, particularly in small organizations, the task falls to program managers, coordinators or front-line deliverers. Increasingly, it is some combination of these. In my practice as a consultant, and to a lesser extent before that as an internal evaluator in government, I have seen this process emerging over the past twenty years. And yet there is little evidence of this trend in the evaluation literature.
In part, this reflects the professional focus on evaluation of larger, long-term and established programs. Many long-term programs are still evaluated in this manner, although the changes to the context of delivery, even for large programs, have implications for how evaluation is conducted and coordinated – reflecting a much more complex operating environment – and yet these changes have had only a minor influence on larger theoretical and practical debates in the evaluation literature. My starting point in the dissertation is to document some of the changes that have transformed the context in which community-level evaluation of programs and services is happening. This builds on insights from the literature and case studies that have documented the impacts of devolution on programs and services, and helps build a frame for reflecting on case examples from my own practice over the past dozen years. I then use some of the most recent trends in evaluation practice and theory to frame additional case studies from the evaluation literature, again building on the examples of devolved program delivery from my own practice. Finally, I use these conceptual tools to reflect on the implications for nonprofit organizations, their staff and clients, communities, and evaluators working in a context of devolved evaluation. While the reflection on practice in the dissertation is presented in the form of a multiple case study, a significant component of my research and analysis consists of a comprehensive meta-analysis of case studies (Jensen & Rodgers, 2001) and other literature related to the devolution of programs and services, evaluation in nonprofit and local community contexts, and the intersection of the two. Although I did not undertake new interviews or data collection for the study, the case study format provides a useful frame for presenting examples and comparing them to other cases and studies published over the past ten years.
The latter task involved examining diverse studies in both the public administration/human service fields and the literature of evaluation practice. Occasionally, the two met in articles that attempted to address both sets of issues, although this was rare.

Developing the Themes

The starting point for my reflection on practice was examining and questioning changes occurring in the context of my work as an evaluator. I treated the specific evaluation projects on which I worked as individual cases that demonstrated some constellation of the factors and themes I saw emerging. I extracted five broad devolution themes and four evaluation themes from the case studies, using the meta-analysis as a guide to help frame the issues and define themes concerning devolution and evaluation within that context. These themes reflect some of the distinctions and conceptualizations described in the literature. However, my primary concern was accurately reflecting the breadth and diversity of experiences across the case studies, and the consistencies in the challenges I faced as an evaluator working in these contexts. The themes regarding the context of devolution consist of: 1) accountability, 2) capacity, 3) mandate drift, 4) competition, and 5) complexity. Each theme has numerous sub-themes, which overlap and offer a variety of points of convergence between the broader themes. Table A1 in Appendix A presents a listing of the case studies from the devolution literature used to help frame and contrast the cases from my practice, and to identify and develop the broad themes and sub-themes. I also extracted four broad evaluation themes from the cases, again using the evaluation literature and recent case studies of evaluation in local community contexts to help frame the issues and themes. While these articles and studies examined aspects of the devolution context, in most cases they did not articulate or identify them as such.
The articles discussed one or more aspects of community context that coincided with particular aspects of devolution, as represented in the first set of themes. The evaluation themes consist of: 1) theory-based evaluation, 2) inclusiveness, 3) roles of the evaluator, and 4) dialogue and deliberation as responses to the devolution context. Once again, the themes contain a variety of sub-themes, which are discussed in detail in Chapter Six. To date, there have been no comprehensive treatments of evaluation in nonprofit organizations that focus on the context of devolution or the ‘marketization’ of the sector, particularly for smaller community-level nonprofit or grassroots organizations. Little has been done that addresses the evolving relationships between such organizations and either government as funder or the broader society. Through the development of these nine themes, I describe, explore and analyze the intersection between these two worlds and bodies of literature, and provide a descriptive and analytical examination of devolved evaluation in practice and impact.

Power and Reflexivity

I have been drawn to the literature on democratic evaluation because it addresses issues of power within the evaluation context. I have experienced this power from several angles, and, as the evaluator working on behalf of funders and clients, have benefited from it at times. Yet in my practice I have always been aware of the element of negotiation in evaluation practice – negotiating the scope of the evaluation, resources and time-frame, control over methods and research approach, identifying ‘legitimate’ stakeholders, and defining the scope of the program, including its goals and activities. I have also been on the losing end of power struggles, and in other situations found myself a helpless bystander as program stakeholders were dealt with in what I viewed as an unfair or heavy-handed manner.
The issue of power is always present in any evaluation study, and this calls for a measure of reflexivity concerning the evaluator’s roles and responsibilities (Gergen & Gergen, 2002). I have found that deliberative democratic evaluation provides a useful frame for understanding and developing an orientation towards power within the research setting, and in particular a way to reflect on the range of perspectives among stakeholders. In the context of the present reflection on practice, my interest is not in demonstrating model examples of deliberation or democratic evaluation. There have been times when I stumbled on strategies that reflected such approaches, and found them useful. There were far more occasions when a democratic orientation would have facilitated different or more satisfactory processes or outcomes in the evaluations undertaken, but in the moment I did not recognize those opportunities. By far the most common situation, however, was finding myself wishing that I could employ such approaches in an evaluation when, for various reasons that I explore and discuss in the dissertation, the context itself militated against democratic approaches at precisely the time they would appear most appropriate. In part this reflects the challenges of doing democratic evaluation in “undemocratic settings” (MacNeil, 2000), and in part the unintended consequences of the devolution context. My own reflection on power within evaluation contexts has been influenced by Saville Kushner’s deeply personal reflection on implicit power and the social location of evaluation practice in contemporary society. In Personalizing Evaluation (2000), Kushner explores the flip side of program evaluation: what evaluation means to those who are subjected to its intrusions into their lives. He questions the program orientation of those who have the power to request and implement evaluation activities, and to impose consequences based on those evaluations.
In particular, Kushner (2000) reflects on the difficulties faced by evaluators in understanding and documenting the experiences of program participants, given

a latent source of unfairness and injustice built into the fabric of program evaluation for it tends to favour the voice of those few for whom programs are useful instruments to advance their careers and their economic power. For the majority of people implicated by or involved in a program, the concept ‘program’ is barely understood and may even be irrelevant to their lives. Or, more to the point, it represents an opportunity irrelevant to their interests and their opportunities. (pp. 10-11)

Through the reflection on practice in this dissertation, I hope to transcend the instrumental concerns implicit in most evaluation practice, and see some of the ways that ‘personalizing evaluation’ can help me to understand and address differences in power among evaluation stakeholders, particularly in a context of devolved programs and evaluation practice.

A Multiple Case Study Approach

Stake (2006) describes an approach to studying and analysing multiple case studies. This approach is not completely applicable to a reflection on practice, as it emphasizes comprehensive data collection among a variety of sites, but it provides a useful framework and starting point for examining the ‘cases’ on which my evaluation practice has been based.

Multiple Case Studies

For Stake (2006, p. 6), in a multiple case study analysis, the ‘quintain’ is the “object or phenomenon or condition to be studied – a target, but not a bull’s eye.” It is the common link among the set of cases, and the starting point for analysis of the phenomenon in its many different forms and manifestations. Stake uses the example of the “proverbial blind men describing an elephant,” in which the elephant would be the quintain. In cases in which a multi-site evaluation was being conducted, the quintain would typically be the program. For Stake (2006, p.
1) the ‘case’ of a case study is “… rather special. A case is a noun, a thing, an entity; it is seldom a verb, a participle, a functioning.” A community or agency can be a case, as could something as amorphous as a program or a training module, but not generically ‘services to clients’ or ‘training’ in themselves. (Stake’s multiple case study approach builds on and extends ideas from his classic The Art of Case Study Research (1995), in which he refers to multiple cases as ‘collective case studies.’) We examine cases in which there are “… opportunities to examine functioning, but the functioning is not the case” (p. 2). The cases we examine are bounded, although they may also be complex, situational, and connected to diverse issues, interests and even other cases. While it might be tempting to think of a case as a ‘snap-shot’ picture of a program or situation, Stake (p. 3) emphasizes viewing cases as dynamic, operating in real time, and embedded in historical contexts that involve many individuals and events, each of which could itself be a case. In this way virtually everything about a case becomes potentially relevant to studying the quintain – history, political, cultural and social contexts, activities, documents, and other efforts to examine the case, which could encompass other evaluations, previous programs, and the broader programming environment in which the organizations operate. For this study, the quintain is the practice of evaluation, and the sites are the specific nonprofit contexts of devolved program delivery with which I have worked over the past dozen years – the communities, agencies and programs in which the evaluation work took place. The evaluations on which I draw are diverse, from small and short-term studies encompassing a few months of activity to complex multi-site studies consisting of up to ten individual case study communities, with implementation of the evaluation lasting up to three years.
The communities and program contexts that represent the cases in which the evaluations took place are also extremely diverse, ranging from large urban centres (Vancouver, Victoria, Surrey) to small rural communities of a few hundred people. The purposes of case study research can be either ‘instrumental’ – going beyond the case – or ‘intrinsic’ – looking at the key elements of the case that are of enduring interest in and of themselves (Stake, 2006, p. 8; Creswell, 1998, p. 62). For multiple case study analysis, the emphasis is usually on the instrumental case, and this study is no exception. While the specific and unique attributes of cases are relevant to my analysis, the focus is on understanding evaluation practice across the cases, and seeing the larger picture to which any one case might contribute one or two pieces of the puzzle.

Data for the Case Studies

As a reflection on practice, the study examines details from cases spanning the past dozen years of my practice. Many of the people I worked with over this period – clients, program clients, program and service deliverers and administrators – are no longer associated with the organizations in which they were involved at that time, and indeed, many of the programs, projects and organizations no longer exist in the form in which I experienced them. A retrospective examination of cases is challenging but possible; a retrospective collection of information from evaluation work conducted in the past is not achievable. The case studies use existing documents and public records, my own notes and records from the evaluations, and reflective materials that represent background information used in developing and negotiating contracts and draft reports. I also use several reflective narratives describing my own efforts at grappling with devolved evaluation issues, constructed as part of my course work for the doctoral program. Stake (2006, p.
23) notes the main criteria for selecting cases:

• Relevance to the quintain,
• Diversity across contexts, and
• Cases that provide good opportunities to learn about complexity and context.

While not all of the cases from my practice over the past dozen years have equal relevance for understanding the devolution context, most have some relevance. As a group, they represent a very diverse set of contexts, although certainly not one exhaustive of all possible situations relevant to understanding the quintain for my study. Most do provide opportunities for learning about evaluation under devolution, and they are particularly appropriate for addressing the complexity of devolved evaluation in local communities.

Triangulation

One of the reasons for conducting a multiple case study is that it provides opportunities to enhance validity – in looking at multiple cases, the researcher focuses on factors that cut across several cases, and appreciates the interplay of special circumstances in contrast with those that appear to be more universal. Examining cases retrospectively – and, despite how disparate they might be, all from one evaluation practice – represents a challenge to the study’s validity. I am reflecting on my own practice, and although I have frequently discussed devolution and its impact on evaluation with other consultants and evaluators, I have not ‘interrogated’ their practices to the same depth as I have examined my own. Yet the evaluation discipline, like the practice-based literatures of program delivery and administration, has made excellent use of case studies as a means to share understanding about the context of practice. Indeed, examining these case studies has been a crucial element of my efforts to understand and develop the concepts used to make sense of my practice. I have reviewed a wide range of case studies reflecting either 1) devolution of programs and services, or 2) evaluation in community-based nonprofit organizations.
Jensen and Rodgers (2001) argue that meta-analysis of case studies allows researchers to resolve problems of knowledge cumulation and generalizability. Such meta-analysis represents a form of data triangulation as discussed by Denzin (1978) and Patton (2002a). A few of the evaluation case studies explicitly examine issues relevant to the devolution of programs and services as a context for the evaluation, but that is rare. Very few of either set of case studies address grassroots organizations specifically. Most case studies are bounded much more narrowly than the group of cases from my own practice. A few cover a fairly wide time span, such as five years (Clayson, 2002), but usually within a fairly bounded community and geographic range. These case studies provide invaluable reference points for framing and contrasting the examples from my own practice. In many cases their findings complement the observations from my own practice. In some they are at odds with my experience and interpretations, and this helps point to areas in need of further research and clarification.

Composite Scenarios

Because a key element of my reflection on practice is looking at the complexity of the contexts in which evaluation is practiced, and also because of the diversity of the contexts I link through my analysis, I present the evaluation case studies as a series of vignettes representing two broad ‘composite scenarios,’ one describing a grassroots organization and the other a small nonprofit organization. This serves several purposes for the study and for the dissertation: it addresses the complexity and breadth of the cases and makes them more comprehensible to the reader, and it addresses confidentiality concerns posed by a ‘reflection on practice.’ Stake (1995, pp.
86-87) discusses how the detailed descriptions included in case studies can assist with ‘naturalistic generalization’ – learning much that is general from individual cases, and trying to understand the implications of those cases in other contexts. Because the cases I describe are so diverse and the contexts so rich and varied, discussing each of them would quickly overwhelm the reader, and presenting the volume of contextual detail would represent a daunting challenge. Further, the links among cases and the broader themes are facilitated by such a composite approach – the factors are not independent of one another, despite the fact that no one case would provide avenues for seeing every ‘part of the elephant.’ Stake (1995) suggests that,

To assist the reader in making naturalistic generalizations, case researchers need to provide opportunity for vicarious experience. Our accounts need to be personal, describing the things of our sensory experiences, not failing to attend to the matters that personal curiosity dictates. A narrative account, a story, a chronological presentation, personalistic description, emphasis on time and place provide rich ingredients for vicarious experience. (p. 86)

Yet to be comprehensible, the detailed accounts benefit from being consistent in presentation, shorter and more concise in exposition, and, because of the number of issues to be discussed, built on continuity and connection. Presenting the cases through two broad scenarios is part of the process of providing an analytic frame for understanding the relevance of what is being discussed, and doing so in a way that simplifies the narrative arc. Kearns (2003) provides an example of such a composite scenario process, used for illustrative purposes as a way of clearly portraying a complex context. At the end of this chapter are two background vignettes that map out the parameters of the composite scenarios.
In creating these scenarios, I have conceptualized them as two points on a continuum ranging from very small grassroots organizations to large, multi-site nonprofit organizations or foundations. One scenario represents the grassroots end; the other, a small nonprofit organization that is still close to that end of the continuum, but closer to medium or larger nonprofits in its characteristics and broadening program focus. The composite scenarios are further portrayed in Chapters Five and Six through another twenty-three detailed vignettes, each facilitating reflection on a particular theme or sub-theme. Most of these vignettes can be viewed as excerpts from the multiple cases that represent the data for the study. The individuals portrayed in the vignettes are composites, using standardized names, characteristics and roles to support the clarity of the presentation and also to safeguard the identities of community members. Several of the vignettes are themselves composites in a further sense: rather than simply portraying an event or series of events from one case, multiple settings, and in at least one instance several evaluation studies, have been compiled to form one clear vignette demonstrating the point in question. Another part of the rationale for composite scenarios is the issue of confidentiality. As a retrospective approach, the reflection on practice examines issues and settings for which there exist no opportunities for obtaining permission and consent. The evaluations themselves were public processes and part of the natural course of my work, and in many cases the reports prepared for them are part of the public record. I did not conduct interviews or focus groups related to my dissertation research as part of doing this work.
However, in the process of conducting interviews and focus groups for the research projects, relevant topics arose and became part of my understanding of the program or setting being examined. Insofar as the interviews I conducted were used in constructing reports, I can use that information as a public representation of issues that is also relevant to this dissertation. Yet those conversations also contained information that was not intended by the subjects to be included as part of the evaluation studies, and in many cases this information was more relevant and interesting for my understanding of the context of the study than for the study itself. Further, in grappling with and developing these ideas over the past four years – and indeed, the past decade – I have gone through an intensive but informal process of discussion and reflection with clients, colleagues and project informants. Many of the conversations that informed my development of the ideas and understanding of the dynamics of the changes occurring in nonprofit programs, agencies and accountability relationships that I discuss in this dissertation have been ‘in the margins.’ These conversations have occurred as ‘small talk’ at the beginning and end of interviews and focus group discussions, as informants reflected on their organizations and on their own participation in ‘evaluation’ of programs. They have occurred in the negotiation of contracts and the identification of roles, responsibilities, priorities and constraints. They have occurred in the hallways between sessions at conferences, in the coffee breaks of workshops and training sessions I have attended and delivered, and in informal conversations with those developing government policy and programs. In short, they have occurred in the margins of my ‘official’ working life – not as parts of my contracts or evaluation work, nor in my official capacities as researcher or graduate student.
They helped me to understand and appreciate devolved evaluation, but they are not appropriate for direct quotation or inclusion in case studies drawn from specific cases. Even so, the people I have spoken with have been passionate about the changes affecting them, and they have often jumped at the opportunity to talk about issues central to their working lives, issues on which they find few opportunities to either reflect or take action. In my informal discussions with them, most individuals expressed interest in my observations, and were quick to offer examples of the themes and their own concerns. Such discussions were not systematic data-gathering, but they did point me towards examples that existed within the cases I was already examining, and represented ‘found’ opportunities to better understand and articulate the nature of the devolution of programs and services. These conversations typically informed my understanding of the contexts of practice for those delivering programs and services, rather than the evaluation of those programs.

Composite Scenario Background

In order to represent the various types of case I have examined in my practice, and to better distinguish the particular challenges of grassroots evaluation from those in larger nonprofit organizations, I have developed two composite scenarios from the multiple cases on which I am drawing. One is of a recently incorporated grassroots organization that has grown through the efforts of a group of parents addressing the needs of their children. The other is a small, established nonprofit organization that has been delivering programs and services in one community for approximately twenty years and, although not yet affiliated with a larger umbrella organization of deliverers, is in a position to contemplate doing so.

Vignette 2.1 – The Grassroots Composite Scenario

Laura is the director of a small grassroots organization in a community with a total population of less than ten thousand people.
The organization grew out of the efforts of a group of parents and teachers seeking to meet the needs of children with developmental disabilities. Parents and caregivers are finding it difficult to obtain recognition of their challenges, and are also finding few supports within the school system, or afterwards in the community as the children try to enter public life. The organization has a board composed of parents, teachers and some service providers, has recently gone through the process of incorporating, and the board is examining the benefits and requirements of obtaining official ‘charitable’ status. Over the past ten years the organization has received seed money from provincial and federal departments and agencies to develop a variety of public education programs and services within the community. The education is targeted broadly – to schools, parents, broader community members, and those within the program and service delivery network – other agencies and government employees with whom the parents and their children of necessity interact. The education sessions provide information about the specific form of disability in question, how it is relevant to the needs and capabilities of the young people, and the implications for service delivery to this population. The group formed on the basis of a perceived need to advocate for their children locally, provincially and nationally, and moved into public education as a means of addressing the needs of young people more broadly within the immediate community. The board and agency are at the point of growing again, as they have hired a director (Laura) and two staff people to work with the many volunteers who have been involved with the organization over the years. All of the staff members now with the agency began their involvement as volunteers with various initiatives, and indeed many still volunteer part of their time in addition to working for the organization.
The organization coordinator and board members have developed several innovative approaches to programming, which they began offering in the community using volunteers, and presented at conferences of service providers. Based on these conference presentations, they have obtained funding from federal and provincial agencies to develop their innovative approach, which fills gaps in services within the community and outlying area. The new funding stipulates that the project must evaluate its efforts, and provide ‘lessons learned’ and ‘best practices’ that can be shared with other communities. Both sets of funding are parts of broader funding envelopes that do not deliver specific government programs, but instead support innovative community-based solutions to public issues. The funding sources are diverse, representing agencies and departments with education, health, youth mental health, social services and ‘First Nations’ support program focuses. Evaluation is required for most of the projects, and a small amount of the funding has been budgeted to support it. The organization’s board has chosen to use the evaluation requirement to examine all facets of their project, as it has grown in several directions, and so has hired a consultant to coordinate this work, and to support and document their evaluation efforts. Part of the rationale for this evaluation is to enhance the organization’s ability to position itself with respect to future anticipated growth, and possible linkages with other similar organizations that are forming across the province. As the evaluation consultant to the organization, I have been asked to work with the director, staff and volunteers in a participatory approach. Part of the rationale for this is that they cannot afford to have an evaluator complete all of the work – their budget is small.
But a participatory approach also fits the organization’s rationale and orientation towards its role in the community, which emphasizes a community development focus based on developing relationships with and among diverse stakeholders, agencies, and front-line deliverers.

Vignette 2.2 – The Small Nonprofit Composite Scenario

Susan is the executive director of a small nonprofit in a northern community of 25,000 people. The agency has been active in the community for approximately twenty years, and has offered a variety of community health, mental health, health education and health promotion programs over that period. The agency has a dozen full and part-time employees, and regularly has up to forty volunteers participating in program activities in the course of a year. The organization has built up strong links with other community agencies and organizations, and membership on the boards of these agencies sometimes overlaps. Looking to stabilize the funding for community programming, Susan has recently been successful in expanding the agency’s programming after submitting proposals to provide local delivery of several provincially and federally funded programs that had not yet been established in the community. Two of these programs involve connecting with a larger network of deliverers in other communities, working through an umbrella organization that mediates between the funder and community delivery partners. Another venture is a demonstration project active in eight communities across the province, and part of a two-year initiative that is examining an innovative approach to community delivery. In order to deliver these programs, Susan has rationalized the delivery of several services that had traditionally been delivered by the agency, but funded through municipal and community fundraising efforts, and primarily delivered by volunteers.
I have been working with the demonstration project to evaluate the innovative effort in eight communities, and have been using a participatory approach through which I facilitate evaluation planning and data collection in the communities, but much of the work is done locally. I have arranged to visit each of the communities twice per year, and have their local lead contact visit Vancouver twice per year as well, to share stories and strategies for the project evaluation efforts of each of the eight communities. Susan has been my initial contact for the demonstration project evaluation, as she wrote the proposal and oversaw the development of the initiative. After attending the first gathering in Vancouver, Susan has since passed on the responsibility to the project coordinator Sally, who has been hired specifically for the project, and may or may not continue with the agency beyond its completion. The project has been established with base funding that cannot cover all of the project expenses; funding is offered with the stipulation that the agency and local community also contribute matching funding or resources representing at least a third of total funding. None of the funding offered through the project may be used for capital expenses, and evaluation of the project is a required element. My efforts to develop and support the evaluation of the project demand that I spend time with Susan discussing the information systems used for all of the agency’s programming, and not just the one project we are jointly working on. Susan has been working with provincial and two federally funded programs to develop an information system that collects basic information about participation in programs and services, and at the minimum provides output data related to core agency functions. Susan and the board are supportive of this effort, as they would like to have a better sense of what is happening with the agency as it grows.
The agency’s programs and services are currently delivered at two locations within the town, and through two other locations in nearby First Nations communities. The geographic expansion has challenged Susan’s ability to keep track of what is being done in the other offices, and she hopes that the information system being developed represents an opportunity to reduce her travel demands to other offices. The project that I am working with is being delivered through three of the four community locations, and I have made an effort to visit those offices to help put a face to the name being mentioned as ‘the evaluator.’ Most staff and many volunteers met with me on my first visit to the community as the project was starting up, and support across the organization seems high and enthusiastic. It is a time of growth in the organization, and the community appears to support the initiative, and has expectations of success and further expansion, which they are eager to document.

CHAPTER THREE: TRENDS IN THE NONPROFIT, VOLUNTARY AND GRASSROOTS SECTOR

The conventional designations of the nonprofit sector do not give us the tools we need to conceive the question. Here is why: the nonprofit sector is not really a sector, but rather a residual category. It consists of organizations that are neither government nor for-profit business. The residual nature of the term plagues similar designations: the concepts of an independent sector, a third sector, a voluntary sector, and civil society, are no more helpful.

Mark E. Warren, 2003, p. 47

This chapter summarizes and reflects on the literature examining the status of nonprofit organizations in Canada. The two main sections of the chapter examine the current profile of nonprofit organizations, and some of the history of devolution of programs and services to nonprofit organizations, respectively.
Because many of the examples and case studies in the literature reflect the nonprofit context in Europe, Southeast Asia, and the United States, I will also discuss the nonprofit sector in an international context, and reflect on some relevant broad global trends, including recent discussions on civil society. Across North America, nonprofit, voluntary and grassroots organizations are in transition. Indeed, the devolution of programs and services from all levels of government to third sector organizations has been a worldwide trend over the past two decades (Day & Devlin, 1997; Alexander, 1999 & 2000; Rekart, 1993 & 1997; Nowland-Foreman, 1998; Salamon, 1993). This trend has intensified since the early 1990s, reflecting neo-liberal political changes, as well as intensified competitive pressures associated with globalization (Richmond & Shields, 2004, p. 54). While the trends have produced significant and widespread impacts on the operations of nongovernmental organizations (NGOs) – particularly nonprofit, voluntary and grassroots organizations – there has been a curious gap in analysis and writing about these issues, not just in Canada, but also around the world (Evans & Shields, 2005; Richmond & Shields, 2004).

Defining the Nonprofit Sector

There is not a great deal of consistency in the literature regarding either the boundaries of the nonprofit sector, or the terminology used to describe it. This applies to distinguishing the sector from the public and private spheres, as well as to identifying relevant features of the nonprofit sector itself. Dreessen (2000, p. 2) attributes this to the diversity of purposes for analysis and classification to which conceptualizations of the sector have been subjected.
Similarly, Hirshhorn (1997) attributes the different terminology used in describing the sector to the diversity of disciplinary approaches to the subject matter:

… different labels – nonprofit, not-for-profit, voluntary, third or independent sector, “The Commons” – have their roots in different scholarly approaches to thinking about the sector. While, for example, sociologists have been attracted by the noncoercive aspects of participation in the “voluntary sector,” political scientists have focused on the potential benefits to pluralist democracies from the availability of an “independent” or “third sector.” By adopting the term “nonprofit” throughout the project, we have subscribed to the language of economists. (p. 5)

Given the diversity of organizations that are considered to be part of the nonprofit sector in Canada and globally, the diversity of terminology in use is not surprising. As a ‘residual category’ (as noted by Warren, 2003, p. 47), the nonprofit sector includes everything from hospitals and universities to arts organizations, social clubs, day-care centres, advocacy organizations, unions, places of worship and organizations that deliver community level services and programs. The organizations can vary in size from a few individuals to thousands of paid staff and volunteers. Nonprofit organizations and the nonprofit ‘sector’ are variously called ‘nonprofit,’ ‘non-profit,’ ‘not-for-profit,’ ‘non-governmental organization’ (NGO), ‘community-based,’ ‘charitable,’ ‘voluntary,’ ‘independent,’ and ‘third sector,’ and sometimes the whole sector is referred to as ‘civil society.’ What they typically have in common is that they are institutionally separate from government, and are not commercial businesses. They may obtain revenues from sales of products or services, but do not distribute ‘profits’ to owners or directors.
According to Statistics Canada (2004), they are self-governing (independent and self-regulating), and voluntary (using volunteers or donations of time or resources). While these features help us to define the sector, they are not conclusive. Baulderstone (2005) calls the nonprofit sector a ‘fuzzy’ sector, with blurred distinctions between public, private and third sector organizations. Usually we think that:

… private sector organizations undertake business and commercial activity, public sector organizations provide physical infrastructure and deliver services to the public, and nonprofits reflect special interests. In practice, third sector organizations may engage in commercial activities and make a profit, they may compete with private sector organizations for work, and governments contract with such agencies to provide services on their behalf. Government agencies and some private sector organizations use volunteers and both government and private sector organizations support nonprofit organizations in various ways. (p. 4)

In practical terms, the boundaries between the sectors overlap when we look at individual criteria, yet we usually do not have difficulty determining the sector of specific organizations. Some of the arguments concerning the changes to the nonprofit sector have focused on the ways in which the sector is becoming more like both the private and government sectors – the ‘marketization’ of the sector and the ‘hollowing out’ of government, respectively. I will return to these themes in the next section. For the purposes of this dissertation I will use the terms ‘nonprofit organization’ and ‘nonprofit sector,’ except where quoting other writers, or referring to specific sub-groups within the broader sector. Distinguishing between types of nonprofit organization remains the most significant challenge in defining the sector.
Salamon and Anheier (1997) developed international criteria for classifying nonprofit organizations, and in Canada this provided the framework for Statistics Canada’s recent (2004) studies on nonprofit organizations. Even so, the distinctions that this classification scheme makes do not adequately address the need to describe and analyze the varieties of organizations within the sector. In the United States, classifying the nonprofit organizations appropriate for this study would likely be an easier task, as there is a simple tax-code designation for such organizations. The category of 501(c)(3) covers charitable organizations, and 501(c)(4) describes organizations with social welfare purposes (Warren, 2003, p. 47). Similarly, in Australia, in order for charitable organizations to obtain government funding, or to compete for service delivery contracts, they must be legally incorporated (Baulderstone, 2005, p. 5). There is no comparable requirement in Canada, and at their discretion, federal or provincial departments or agencies can and do contract with unincorporated ‘grassroots’ organizations.

Canadian Nonprofit Organizations

Within Canada, several recent national level studies and initiatives have begun to document the sweeping transition of the nonprofit sector. Statistics Canada’s study Cornerstones of community: Highlights of the national survey of nonprofit and voluntary organizations (2004) identified fifteen categories of nonprofit organization, based on their primary activity areas. For this study, the main types of nonprofit organization with which I worked, and that provide community level delivery of programs and services, would be found in the ‘Social Services’ category, comprised of “organizations and institutions providing social services to a community or target population.” The 19,099 organizations identified through Statistics Canada’s study represented 11.8% of all such organizations.
And yet, on closer examination, some of the nonprofit organizations that I have worked with could easily be included under the categories of ‘Health,’ ‘Education and Research,’ ‘Development and Housing,’ or ‘Law, Advocacy and Politics.’ Further, although nonprofit organizations are categorized based on their primary activity, because they expand and contract with the number and type of program delivery contracts they take on, they can undergo substantial transformations in their range of activities, their key programming areas, and how they define what they do. Moreover, the categories are not mutually exclusive: organizations that, for example, provide health education to teens might easily choose ‘Education and Research,’ ‘Health,’ ‘Social Services,’ or ‘Development and Housing’ as the best-fitting category, depending on whether they were thinking about the activity as a health promotion, education, community development, or program delivery activity. Clearly it could be considered all of these at once, but which would get the nod as securing the identity of the agency? And which program among perhaps a dozen that might be offered by the organization? The categorization would also reflect who in the organization participated in the survey, as a program manager or front-line staff person might have a very different idea of the organization’s activities than would an executive director or board member. See the table in Appendix B for a portrayal of the complex picture of nonprofit organizations in Canada. Perhaps a larger issue for this study is that Statistics Canada’s National Survey of Nonprofit and Voluntary Organizations (NSNVO) excluded grassroots organizations and any groups that were not formally incorporated or registered with provincial, territorial or federal governments (2004, p. 7). This reflects the difficulty in identifying and locating the organizations.
Dreessen (2000) notes that with regard to unregistered nonprofit organizations:

… two conclusions are inescapable: one, noncharitable nonprofits are by no measure a negligible phenomenon; and two, we are almost completely in the dark about even the most basic data for this segment of the sector. (p. 13)

Similarly, Day and Devlin (1997, p. xx) point out that “… it has proven to be almost impossible to obtain an accurate measure of the size of the noncharitable nonprofit component of the sector.” Indeed, Smith (1997) asks why grassroots associations are so seldom studied by those studying the nonprofit sector. He suggests that perhaps it is because they look at individual organizations using a bureaucratic model rather than a more appropriate market model, which would emphasize the system of groups in the community competing with one another for resources, or linked together within a network of program and service delivery. This perspective fits with an observation that grassroots organizations are hard to find – unless you ask people in other community organizations where they are. The Voluntary Sector Initiative (VSI), a five-year joint initiative between the government of Canada (providing most of the financial support) and a variety of third sector organizations – particularly the Canadian Council on Social Development – was active between June of 2000 and mid-2005 in conducting research and preparing reports on the status of nonprofit organizations in Canada. Statistics Canada’s survey (2004) reported that Canada had over 161,000 incorporated nonprofit and voluntary organizations in 2003. The study reported findings from a survey of 13,000 of these organizations, which were identified from a database of incorporated nonprofit and charity organizations. If grassroots organizations were included, Evans & Shields (2005, p. 2) and Hirshhorn (1997, p. 8) estimate that the number of organizations would likely exceed 200,000.
The NSNVO included a diverse range of organizations, from large universities and hospitals, to small voluntary associations and agencies with annual revenues of under $30,000. Just over half (56%) were registered charitable organizations. Evans and Shields (2005, p. 3; 2002, p. 140) divide third sector organizations into four categories: 1) funding agencies, such as the United Way; 2) public benefit organizations, such as day cares and social service agencies that provide goods, services and programs for the general welfare; 3) member serving organizations, such as trade unions and business or professional organizations that serve a membership group rather than the public at large; and 4) religious organizations, such as churches involved primarily in sacred and religious activities. This approach is similar to the nonprofit groupings used prior to recent attempts to develop more complete databases about the sector. It offers a simpler categorization than the Statistics Canada survey, but it also excludes a wide range of larger nonprofit organizations that are part of the diversity of the sector, such as hospitals and universities. Some of the key findings of the Statistics Canada survey relevant to this dissertation include observations that larger organizations receive the bulk of government resources flowing to nonprofits and are more dependent on government funding, and that the largest organizations are also those that are growing – they were more likely to have reported increased revenues, more volunteers or a higher number of paid staff over the previous four years (between 2000 and 2003). By contrast, smaller organizations were more likely to report declining revenues, fewer volunteers and stable staff levels for the same period (Statistics Canada, 2004, p. 10). The majority of organizations reported capacity-based challenges in fulfilling their missions – difficulties in recruiting and retaining volunteers and board members, and difficulty obtaining funding.
In short, the report described a nonprofit sector in transition, experiencing significant challenges and stresses, yet representing an enormous presence in the Canadian economy, with revenues in excess of $112 billion, $75 billion of which is outside hospitals, universities and colleges. Nonprofit and voluntary organizations employ over two million people – although hospitals, universities and colleges, which make up less than 1% of all organizations, account for a third of this number – and engage over nineteen million volunteers who contribute over two billion hours of volunteer time, the equivalent of a million full-time jobs per year. Just under half (49%) of the funding for the sector originates directly in government grants and contracts.

Civil Society

As Warren (2003) notes, one of the more interesting ways to understand the nonprofit sector is as ‘civil society,’ which has been a focus of many of those writing about devolution (Alexander, 1999 & 2000; Eikenberry & Kluver, 2004; Brock, 2002; Evans & Shields, 2005; Nowland-Foreman, 1998; Wolch, 1999). Alexander (1999, p. 454) refers to civil society as “the space occupied by associations,” and this seems to represent the starting point for how civil society has been used in this literature. Most say they are building their conceptions of civil society on de Tocqueville’s work of over a century ago, and for these writers the idea represents a core feature of workable democracy in a capitalist society – a structural element of society representing the point of overlap and convergence between the state and the private sector; part of neither, but mediating the effects of both. For many writers, such as Rifkin (1995), civil society represents the next realm of opportunity in the face of technological impacts – and a source of hope.
Edwards (2005) takes issue with the lack of clarity and rigour among those using the term ‘civil society,’ arguing that it is used in so many ways that it is in danger of becoming meaningless. He argues that if we look at civil society as a ‘part’ of society – nonprofit or voluntary organizations – then we lose the original and still useful meaning of the concept: civil society as a kind of society, identified with the ideals of political equality and peaceful coexistence. Edwards contrasts three ways of looking at civil society: 1) as associational life, 2) as the good society, and 3) as the public sphere. Civil society as associational life emphasizes a structural approach, viewing the nonprofit sector as the context in which the good society develops. Like Putnam (2000), Edwards notes the danger of this perspective – that nonprofit organizations are home to competing and anti-democratic values as well as those that support democracy. He also notes, in terms of expecting this sector to be a saviour, that we actually spend a small part of our total lives in such associations, and that it is through their relationships with families, schools and other institutions that nonprofit organizations have their key impacts on society. Civil society as the good society – as the idea of what kind of society we want to live in – inevitably leads to the question of how we decide what is good. Civil society as public sphere – “the places where citizens argue with one another about the great questions of the day and negotiate a constantly-evolving sense of the ‘common’ or ‘public’ interest” (Edwards, 2005) – provides a process for getting there: a public space in which we can deliberate, negotiate, and work out our definitions of problems and questions, and our solutions to both. Edwards argues that civil society remains a useful concept when we think of it as the arena in which we can do this kind of deliberation.
The more structural definitions of civil society help us to identify “gaps and weaknesses in associational life” (Edwards, 2005), but de-emphasize the relationships between sectors, and promote a view of the nonprofit sector that overemphasizes its role and capability. This view of civil society resonates with the work of Lyons, who contrasts the nonprofit sector and civil society. Lyons (1996, pp. 3-6) notes that the literature on the ‘nonprofit sector’ has grown out of work by economists, who have emphasized quite structural definitions of the sector based on the American tax-status distinctions noted earlier. This has led to the exclusion of some types of third sector organizations, notably mutual aid organizations, cooperatives and self-help groups – types of association that might inhabit the grassroots sector: nonprofit organizations in the early stages of becoming part of the broader social service delivery web. In contrast, Lyons notes that ideas about civil society have grown out of the sociological literature, encompassing conceptual rather than concrete and pragmatic concerns, and have focused specifically on relationships among people and organizations. For example, the nonprofit literature views volunteering as philanthropy – self-interested giving as part of a reciprocal and symmetrical exchange – while civil society views volunteering as membership, and as a form of collective action that can contribute to the development of social capital (Lyons, 1996, p. 12). While much of the economics-based nonprofit literature seems comfortable with viewing nonprofit organizations as a special form of firm, using corporate models of governance, and addressing the needs of consumers as ‘customers,’ the civil society literature focuses on democratic forms of governance, and the capacity of the sector to encourage participation in social life – viewing participants as citizens rather than strictly as consumers of services (Lyons, 1996, pp. 12-13).
In essence, the two literatures refer to different organizations and types of organization, start from differing assumptions about the form of the sector, and reflect potentially conflicting views of the roles, activities and purposes of the organizations they encompass. The most common interpretations of civil society in the devolution literature remain fairly structural, reflecting their overlap with the broader literature on the ‘nonprofit’ sector. In view of the diversity of perspectives on civil society, and the lack of conceptual clarity among many who use the term within the broader discussions growing out of the literature on nonprofit organizations, for the purposes of this dissertation I will continue to use the term ‘nonprofit’ when discussing the sector, unless quoting a specific author.

The Devolution of Programs and Services

While solid statistical information on the history of growth in the nonprofit sector in Canada is quite limited, it is clear that the 1980s and 1990s were a time of significant growth. Rekart (1997, p. 3) notes that both the federal and provincial governments became increasingly involved in providing services directly during the middle decades of the last century, in effect taking over and professionalizing the delivery of the welfare state’s safety net from charitable organizations – particularly religious organizations – and municipal governments that had undertaken such responsibilities in the recent past. The evolving relationships between the federal and provincial governments have provided a constantly shifting and negotiated backdrop to the delivery of programs and services, with the long-term trend being a movement to government initially, then a gradual shift from federal to provincial jurisdiction, and finally devolution back to the nonprofit and voluntary sector over the past two decades (see Hall & Reed, 1998).
For the latter devolution, this involved concurrent shifts in funding by government, as well as in control over budgets and products.

The Rationale for Devolution

The growing literature about the impacts of nonprofit sector restructuring remains diverse and somewhat disjointed, although there is a growing body of research on the process and impacts of devolution. Theoretical work has tended to look at the origins of restructuring in terms of the neoliberal foundations that are transforming government, private sector and third sector financing. The work has also begun to explore the transitions in governance forms that have been associated with devolved program and service mandates (Evans & Shields, 2005; Eikenberry & Kluver, 2004; Richmond & Shields, 2004; Brock, 2000 & 2002; Alexander, 1999 & 2000; Rekart, 1993 & 1997; Nowland-Foreman, 1998; Salamon, 1989, 1993, 1997 & 2003; Pulkingham, 1989; Wolch, 1999). On a practical front, the issues of fiscal reform and the impacts of devolution on nonprofit organizations and their clients have been hot topics in conferences and gatherings of those working on the front lines. The specific impacts are commonly discussed – but often in isolation, with little connection to the broader societal trends in which they are embedded (Richmond & Shields, 2004; Phillips & Levasseur, 2004) or to other stakeholders who might have useful ideas or observations to contribute. As such, there exists a clear need for a more comprehensive and interdisciplinary treatment of devolution in the Canadian context, one that can bring together the various strands of the current debate in a meaningful and useful way for those affected. Although this remains a worthwhile goal, it is beyond the means of this dissertation to provide such a comprehensive examination of devolution.
Even so, as a precursor to understanding devolved evaluation, in this dissertation I do provide a framework for conceptualizing the impacts of devolution, particularly focusing on grassroots organizations. In the United States, much of the literature concerning devolution has grown out of welfare reform and the shift of welfare delivery out of government. This shift has involved two facets that are not present in the Canadian context. The first is that the delivery of welfare programming itself has not been devolved in Canada. While many ancillary and complementary programs and services, such as employment training and support services, have been devolved – inconsistently across the different provinces – for the most part the administration of the welfare system is still undertaken by civil servants. The second facet of devolution that differs between the two countries is the degree to which the private sphere has been introduced into program delivery. In the United States the marketization of welfare and other programs has introduced direct competition between nonprofit and private sector deliverers. While there has been some growth of private sector organizations delivering programs in Canada, and certainly in the private school market, direct competition between the sectors has remained more a ‘myth’ among nonprofit practitioners than a practice or strategy employed by government funders. Indeed, the Canadian nonprofit sector has been watching the process of devolution in the United States, and in recent years conversations have been rife with portents of future private sector competition for program delivery. Lester Salamon (1989, 1993, 1997 & 2003) has been at the forefront of writing and researching the devolution trend in the United States for the past twenty years.
In discussing a theory of government-nonprofit relations in the welfare state, Salamon (1989) describes some of the practical advantages that make the nonprofit sector inviting as a deliverer of human services. These advantages include:
• a significant degree of flexibility, resulting from the relative ease with which agencies form and disband and the closeness of governing boards to the field of action;
• existing institutional structures in a number of program areas, because voluntary agencies often begin work in an area before government becomes involved;
• a generally smaller scale of operation, providing greater opportunity for tailoring services to client needs;
• a greater capacity to avoid fragmented approaches and to concentrate on the full range of needs that families or individuals face – to treat the person or the family instead of the isolated problem; and
• greater access to private charitable resources and volunteer labour, which can enhance the quality of service provided and leverage public dollars.
Salamon (1993) discusses the marketization of welfare and how it is changing nonprofit and for-profit roles in American program delivery. He defines marketization as “the penetration of the essentially market-type relationships into the social welfare arena” (1993, p. 17), and notes that as for-profit firms enter the social market, they will inevitably siphon off the more affluent customers, leaving nonprofit firms with the most difficult, and least profitable, cases. In general, the stated rationale for the devolution of programs and services to the nonprofit and voluntary sector under the New Public Management has included reducing the costs of public services, improving efficiency, and reducing the size of the state (Evans & Shields, 2005; Eikenberry & Kluver, 2004; Alexander, 1999 & 2000; Brock, 2000; Rekart, 1993 & 1997; Trebilcock, 1995; Ferris, 1993; Salamon, 1993; Pulkingham, 1989).
Some of the more recent discussions also identify such factors as finding ways to bypass public sector unions, and managing results with a customer service orientation (Phillips & Levasseur, 2004). The emphasis on introducing competition into the delivery of programs and services – marketization – was particularly seen as a means of introducing efficiencies, and this has also been the rationale for privatizing programs and services. While much of the literature has focused explicitly on the nonprofit sector, and has been concerned with the impacts on organizations taking on new roles and responsibilities, a complementary literature has examined the impact of devolution on the state. Rekart (1997), Nowland-Foreman (1998) and Wolch (1999), among others, have examined the ‘hollowing out’ of the state sector. Their concern is with the expansion of the nonprofit sector as a ‘shadow state’ – taking on the roles discarded by the state – and with the reduced capacity of the state to intervene in the public interest, combined with the questionable capacity of the nonprofit sector to successfully take on such responsibilities. Miller (1998), in examining the history of devolution in Canada, identifies the 1995 federal budget as a turning point for Canadian nonprofit organizations, as federal transfer payments to the provinces were simultaneously restructured and reduced. As a result, provincial funds flowing to nonprofit organizations were also cut or terminated, and began to be restructured as service contracts continued to replace grants. He examines one argument suggesting that the vulnerability of nonprofit organizations in this environment has grown in part from their over-dependence on government during the previous 25 years. Burnley et al. (2005), in a study of local agencies delivering programs and services for children and families, examined the impacts of devolution in Nanaimo, B.C.
They note that in response to provincial and federal deficits (referencing Brock, 2000), and in the face of rising public demand for services, both levels of government proceeded to devolve delivery of many social programs to nonprofit organizations. In British Columbia, in response to an administrative review of government practices (the 1993 Korbin Commission), the provincial government initiated a contract reform project “intended to reduce fragmentation in its approach to contracting services with the social and community services sector. The stated outcome objectives of the project were to establish long-term relationships with eligible contractors, improve consistency and coordination of contracting practices, streamline administration, and increase accountability” (p. 71). More recently, the provincial government has developed “province-wide standards for locally delivered programs, the establishment of accountability and performance management mechanisms, and the increased use of output and outcome-based contracts for service” (p. 71). The discussion by Burnley et al. (2005) highlights a key aspect of the transition to the era of New Public Management: it has brought a variety of reforms to how programs and services are funded. Devolution involves a move to contracting for the delivery of programs and services: from providing unconditional grants to organizations, to using either contracts (purchase-of-service contracting) or contribution agreements that require nonprofit organizations to produce concrete deliverables that can be specified in advance, and are thus in many ways virtually indistinguishable from contracts (Phillips & Levasseur, 2004, p. 453). Indeed, the changes that have most often been characterized as ‘devolution’ in the literature have been financial in nature. Devolution has involved a shift from long-term supports for operational funding or grants, to short-term, contingent contracting and contribution agreements.
These contracts have involved a loss of core funding, and explicit expectations that agencies or communities make ‘in-kind’ contributions, or obtain joint funding for initiatives or programs from other levels of government (Phillips & Levasseur, 2004; Scott, 2003a; Hall & Reed, 1998). The new contracting processes have also involved increasing accountability demands, and the pre-specification of products and contract deliverables. The process of devolution has been accompanied by a complementary rationale that emphasizes increasing democratization of policies, programs and services, local relevance, and customized implementations by and for communities. This ‘spin’ on the rationale for devolution has not simply been hollow rhetoric; it clearly represented an effort to reverse some of the more egregious negative impacts of modernization – the loss of community and personal contact that the “business of helping” experienced in being taken over by governments and other secular institutions, as well as the professionalization undertaken in many helping occupations over the past century. Even so, the bottom line for all such discussions has been the overarching rationale of financial efficiency, and reducing costs in the interest of the long-term sustainability of programs and services. From the perspective of the nonprofit and voluntary sector organizations that have been taking on the task of delivering programs and services, this process of devolution has held both promise and peril. Government contracts were initially seen by many organizations as a potential source of stable funding for the longer term, and an answer to the perennial challenge of fundraising to address the issues they perceived to be important. The contracts were also perceived as a form of vindication: a recognition that these organizations were better able to identify and meet the needs of individuals who kept falling through the cracks in the government-delivered system.
Government bureaucracies were described as inefficient and focused on finding ways to limit services and expenditures through eligibility criteria and program regulations – not just by the political critics who were demanding reform, but also by the nonprofit sector advocacy organizations representing the interests of clients. The sector actively promoted the view that community agencies had a better on-the-ground sense of client needs, could deliver better services, and could do so more efficiently. And yet the peril for nonprofit organizations is very real. They risk cooptation of their mandates, increased dependence on government funding to maintain a stable work setting for staff and consistent programs for clients, and new accountability requirements that demand increased professionalization of staff and administration. In short, devolution heightens the tension between nonprofit organizations’ accountability expectations and their autonomy.

The Impacts of Devolution on Nonprofit Organizations

The recent literature on the devolution of programs and services has begun to document and highlight a variety of impacts that have evolved, often in unintended ways, from the changes to the process of contracting.
These immediate impacts include increased competition among agencies and with the private sector (Evans & Shields, 2005; Eikenberry & Kluver, 2004; Scott, 2003a); moving organizations away from their core goals and historical emphases – mission drift (Evans & Shields, 2005; Eikenberry & Kluver, 2004; Scott, 2003a; Brock, 2000); making it more difficult for community level agencies to advocate on behalf of clients (Evans & Shields, 2005 & 2002; Eikenberry & Kluver, 2004; Scott, 2003a; Brock, 2000); funding and capacity stresses on human resources, such as difficulties in doing long-term planning, and staff turnover (Eikenberry & Kluver, 2004; Scott, 2003a; Alexander, 2000; Evans & Shields, 2002; Hall & Reed, 1998); and longer term impacts on the sustainability of programs and organizations (Evans & Shields, 2005; Eikenberry & Kluver, 2004; Phillips & Levasseur, 2004; Scott, 2003a; Alexander, 2000 & 1999; Brock, 2000). Indeed, the most recent emphasis has been on the challenges that devolution raises for the continued viability of civil society, and for the ability of the nonprofit sector to play its important role in contributing to public dialogue about the type of society in which we want to live (Evans & Shields, 2005 & 2002; Eikenberry & Kluver, 2004; Alexander, Nank & Stivers, 1999; Hall & Reed, 1998). When thinking about the impacts of devolution, it is useful to contrast how that impact is experienced by clients, by organizations, and by social service provision within communities. For example, most emphases in the literature speak to organization-level impacts, such as those noted above – volatility in the contracting environment, mission drift, competition among agencies, and reduced opportunities for advocacy.
Less often discussed are the impacts for clients: ‘creaming’ of those clients most likely to succeed, with reduced access to programming for those most in need; the loss of personal advocacy by one agency on behalf of services obtained from another; and the depersonalization people experience as they become defined by their characteristics and perceived program eligibilities rather than having their ‘problem’ viewed from a holistic perspective. The same holds true for system-level analysis, as the literature rarely discusses broader impacts such as widening gaps in the program and service net, as each agency takes on increasingly specialized roles that do not overlap with those of other agencies; the shift from long-term to short-term interventions, emphasizing goals of reducing client dependence on support; and an overall bureaucratization of the system, whereby each agency has less flexibility to provide person-centred treatment. Chapter Five will provide a structured approach to understanding the impacts of devolution on nonprofit organizations and other stakeholders, and the impacts and implications for program evaluation as well. But first, Chapter Four will review key trends in evaluation, and assess their relevance for the context of devolution as a site of evaluation activity.

CHAPTER FOUR: THE ROLE OF EVALUATION IN SOCIETY

In the United States and around the world, our practice of program evaluation is shaped not by the discipline of our profession, but by the corporate and accountability notions of our clients and program managers. We are a service profession, dedicated less to the well-being of stakeholders and communities, more to those with funding and a status quo to preserve. And the future seems to hold more of the same. Robert E. Stake, 2001, p. 349.

Khakee (2003) discusses an emerging gap between evaluation research and practice – between the emerging consensus in evaluation theory, and evaluation as it is practiced ‘in the trenches’.
He describes:

… the growing convergence in evaluation research towards stakeholder-oriented, communicative, disaggregated and multi-dimensional methods. On the other hand, public agencies still demand of their policy evaluators quantitative, aggregated, (often uni-dimensional) expert products… the increasing gap between evaluation research and evaluation practice poses some major challenges to politicians, policy makers and public sector managers. (p. 349)

While this gap has been growing for the whole discipline of evaluation, when it comes to evaluation in grassroots and small nonprofit organizations, the gap is a veritable grand canyon. In part, this relates to the capacity of nonprofit organizations to conduct evaluation, but it also reflects a certain myopia within the evaluation literature about the actual contexts in which evaluation work is done. In the nonprofit sector, most evaluation work is not done by ‘evaluators,’ and as such remains outside of the purview of the discipline.

Devolving Evaluation

When government employees were the primary deliverers of programs and services, it was intended that evaluation be done regularly and holistically, examining the program as a whole. As program delivery has devolved to third-party deliverers in the nonprofit sector, the program evaluation function initially remained a centralized activity, overseen or at least contracted by program management and administration within funding organizations – usually federal and provincial governments. More recently, this process has begun to be replaced by a devolved responsibility for evaluation, whereby individual deliverers are responsible for evaluating, or having evaluated, the programs and services they provide. In the case of larger nonprofit organizations, this may be done by staff dedicated to such activities, or by external consultants contracted to do the work.
For small nonprofit and grassroots organizations, this responsibility is often passed along to program coordinators to carry out at the same time as they are delivering the program or service, perhaps with some assistance from an external consultant or a volunteer with specific related experience or expertise. All of these developments have profound implications for how program evaluation work is done, and by whom. Hall et al. (2003) examined how evaluation is conducted in Canadian nonprofit organizations, and who is doing it. They note that only 8% of nonprofit organizations say they hire consultants to help with evaluation work, and another 15% use volunteers. This means that for many nonprofit organizations, the evaluation they are undertaking is being done by staff of the agency – in many cases, program coordinators and deliverers. Although the literature on devolution is fragmented and just in the beginning stages of “maturation” (Scott, 2005, p. 155), the evaluation literature examining the context and implications of devolution is extremely limited. There is a growing and increasingly sophisticated literature on how to do evaluation in the nonprofit sector, which has two major emphases: 1) research methods considered appropriate for the nonprofit sector, including a variety of agency-developed manuals and resources (Chalmers et al., 2001; United Way of Greater Toronto, 2001), and 2) discussions about the rationale for organizations in the sector to do evaluation (Chalmers, 2003; Behn, 2003; Hoefer, 2000; Fine et al., 2000). The latter often amount to exhortations to “do more” evaluation, and to build evaluation and accountability into the everyday activities of the organization. Indeed, there is a growing body of literature that focuses on resistance to evaluation, which emphasizes how to convince people of the value of evaluation to organizations, and even how to reduce fear and anxiety (Taut & Brauns, 2003; Donaldson, Gooler & Scriven, 2002).
A related literature focuses on evaluation capacity building (Compton et al., 2002), although it addresses a wide range of evaluation contexts and is not restricted to the third sector (McDonald et al., 2003; Cousins et al., 2003). Capacity remains a significant issue not just for devolution, but also for evaluation, and is examined in detail in Chapter Five, which presents the devolution themes. The evaluation literature that does address issues relevant to understanding the impacts of devolution and the specific evaluation context of smaller nonprofit, voluntary and grassroots organizations usually addresses specific issues, rather than the context as a whole. For example, writers examine such factors as having multiple contract funders (Koppell, 2005; Markeiwicz, 2005; Mohan et al., 2002), stakeholder participation and inclusion (Mathie & Greene, 1997; Thayer & Fine, 2001; Mathison, 2001), stakeholder conflict (Abma, 2000b), multi-site evaluation (Straw & Herrell, 2002), and the challenge of evaluating collaborative and community-based initiatives (Hughes & Traynor, 2000; Page, 2004). The existing literature dealing with evaluation in nonprofit organizations generally frames evaluation as a positive activity: it helps organizations focus on results, it keeps them aware of outcomes and impacts, and it provides opportunities for stakeholders to contribute to program and policy decisions, among other things. Few studies or papers address the possible conflicts and inequalities that have resulted from the devolution of programs and services. Fewer still address how evaluation can contribute to exacerbating the problematic impacts of devolution. One of the forms of professionalization explicitly encouraged by the marketization of the nonprofit sector is improved ‘accountability’ – although, like many other aspects of the process of devolution, what the term actually refers to can be problematic.
While the New Public Management insists on accountability as a way of imitating the purported successes of the private ‘market’ sphere, in actuality the defining characteristics of the accountability being employed clearly reflect the recognition of the nonprofit sector’s special relationship with government. Aucoin and Heinzman (2000) and Phillips and Levasseur (2004) argue that the state tends to focus on control of abuse and providing assurance that resources are used appropriately, rather than viewing accountability as organizational learning. They state that organizational learning is the area of accountability most relevant to innovation and change, but also the least emphasized element of accountability, which is “overwhelmingly focused on control and demonstrating that the rules, particularly those related to financial reporting, have been followed” (Phillips & Levasseur, 2004, p. 454). Most of those writing about the accountability emphasis of the New Public Management have this same focus – accountability as contract compliance, avoiding abuse, and justifying appropriate expenditures of scarce resources (Good, 2003; Brock, 2000; Martin, 1995; Ferris, 1993). While the public administration, policy analysis and political science literatures have examined accountability almost exclusively through the eyes of the audit culture rather than evaluation, the evaluation literatures have been notably silent on this context, with the exception that various ‘challenges’ are occasionally identified and discussed in terms of methodological fixes that might address them. The article by Phillips and Levasseur (2004) is among the first to argue that many of the impacts of devolution experienced by nonprofit organizations are due to the over-riding emphasis on accountability in the New Public Management. Although their emphasis is on accountability broadly defined, they do briefly discuss program evaluation.
Another key limitation of this work – in common with most of the existing literature examining accountability in the nonprofit sector – is that it only examines the federal contracting environment. It does not address the full spectrum of devolution impacts experienced by nonprofit organizations, and it does not address the context of small, unincorporated and grassroots organizations, which are experiencing many of the same impacts as the largest nonprofit organizations. It is clear that both the context and process of evaluation work are changing – not just for large federally funded programs and agencies, but also for small, unincorporated nonprofit, voluntary and grassroots organizations that operate in one location in a local community. Federal and provincial governments still conduct evaluations of existing and large scale programs, often through a tendered contracting process. Yet increasingly, purchase-of-service orders, contracts and contribution agreements contain not just contract compliance accountability provisions, but expectations for program evaluation focusing on performance measurement and outcome assessment. This work is often done by program delivery staff, sometimes by volunteers or administrative staff in larger organizations, and less often by private consultants or academic evaluators. Increasingly, some combination of these options is employed. Phillips and Levasseur (2004, pp. 455-460) describe five broad ways that the culture of accountability impacts on nonprofit organizations, and most of these relate to the capacity of nonprofit organizations to cope with increased accountability and evaluation demands. The first of their themes is how the quest for accountability demands significant time and resources, which are often not funded through projects, but intended to be an in-kind contribution from the administrative budget of the organization.
This accountability demand for time and resources encompasses preparing, negotiating, managing and reporting on contracts and contribution agreements. The second theme they describe relates to how they see the contracting environment as risk averse; requiring deliverables to be specified in advance removes flexibility, innovation and creativity from proposals. In order to get funded, agencies submit more conservative proposals than they would otherwise consider, and focus on features that are most easily monitored. This is exacerbated by a context in which multi-year funding is very difficult to obtain, and concrete deliverables must be attained within a one-year project mandate, regardless of how realistic that might be. The third theme described by Phillips and Levasseur consists of time delays introduced by the increased bureaucratization of the contracting processes. With new layers of approval for funding, significant delays common, and no possibility of retrospective funding even when project approval is not obtained until several months after start-up, nonprofit organizations end up subsidizing projects. In the current environment, in which the majority of projects are expected to have numerous project partners, and often co-funders (usually other levels of government), nonprofit organizations risk losing both community good will and in-hand funding from other sources by delaying projects. This ends up being a massive juggling act. Such juggling creates a wide range of stresses for staff in nonprofit organizations, who have no guarantees that funding will come through, which contributes to rapid turnover of staff (and turnover among public servants as well, who can quickly tire of the ‘policing’ role that the new accountabilities demand of them).
Even for those staff who remain with nonprofit organizations, there is a high degree of burnout from working to meet the needs of contracts that do not fund central activities demanded of them, including accountability, evaluation, project development and proposal writing. To complicate matters even further, staff turnover (in both nonprofit organizations and in government) leads to a loss of corporate memory – what projects were done, by whom, and how they were conducted. Finally, Phillips and Levasseur describe the newest pressures – measuring outcomes, regardless of how relevant or feasible the process is for the project at hand. Staff thus address such needs without sufficient expertise, off the sides of their desks (as the contract does not officially compensate this work), and usually with insufficient resources combined with unrealistic expectations. In such a context nonprofit organizations often try to keep things as simple as possible; they submit ‘safe’ proposals, and keep innovation and creativity either underground, or out of the picture entirely. The impacts of accountability identified by Phillips and Levasseur are relevant to understanding the reciprocal impacts of devolution on evaluation, and of evaluation on nonprofit organizations, but they are not exhaustive. The range of people doing evaluation activity is expanding, as is the nature of the work itself. Increased competition among community agencies also increases the challenge of establishing cooperation among agencies for conducting research and evaluation, establishing steering committees, and generally implementing inclusive approaches to client support. Having multiple funders brings multiple accountabilities – often with conflicting reporting requirements and reporting overload.
From the client and agency perspective, the reporting done in local level evaluations often involves thick descriptions of program activities, and with small programs this can entail serious risks of lost privacy for clients and staff. The other side of this issue is that reports from locally conducted evaluations are not necessarily treated with the same criteria as provincial and federal level documents – they may not be made public. This introduces a system level impact of devolved evaluation: a reduction in the transparency that has recently been attained within the evaluation community, whereby official evaluation reports are public documents, and thus cannot simply be shelved if program management or funders do not like the results. This reduction in transparency is closely linked to another key impact of devolved evaluation – that local agencies can lose the opportunity for dialogue and discussion about their experience delivering programs and services, and how it compares with the experience of other agencies. The community loses the macro perspective of how the program or service links to the broader network of program and service delivery, exacerbating the impacts of the ‘silos’ of individual ministries and government departments that two decades of activity aimed at re-inventing government have been trying to address. The evaluation process itself has impacts for programs and nonprofit organizations. It can rigidify program definitions and eligibility criteria, and make it challenging to provide flexible supports to individuals. This is further exacerbated when funders identify program goals and criteria for measuring outcomes – it intensifies the tendency towards mission drift within agencies.
While staff burnout and turnover obviously have an impact on the process of conducting evaluations, the process of doing evaluation work has also begun to compensate for this situation, as the evaluation work itself – and in some cases the external evaluators – have become the new repositories of corporate memory for nonprofit organizations. By documenting programs and decision-making processes, evaluation work can provide a long-term record for a rapidly changing staff complement. It can also fulfil its official rationale – helping organizations to question what they want programs to do, and to determine whether their activities are actually helping them achieve their goals. Evaluation retains the capability to ask the broad questions that address context and program rationale, but given the challenging context, whether that actually happens is not guaranteed.

Defining Accountability

As Phillips and Levasseur noted, accountability is perhaps the most consistently identified point of intersection across the devolution and evaluation literatures. With the growth of contract-based funding, expectations that contractors should be held accountable for what they deliver have grown far beyond the expectations that existed in the era of grants-based funding of nonprofit organizations (Phillips & Levasseur, 2004). In the literature on devolution, accountability has come to represent the vertical hierarchical relationships of the New Public Management: a tool of hegemonic and unilateral control. While this certainly represents a key aspect of what accountability means within the world of nonprofit program delivery, such a portrayal does not encompass the many subtleties of the accountability dance. The evaluation literature has held an uneasy truce with the idea of accountability. In a sense, accountability is part of the very rationale for evaluation, which encompasses a diverse range of approaches, purposes and intended uses.
Evaluation as a pedagogical activity (Schwandt, 2002), or as a self-learning empowerment approach (Fetterman, 2001), still aims to foster mutual understanding, and to help us identify and discuss the moral and political contexts in which programs occur. This can encompass notions of accountability. Perhaps more importantly, as Stake (2001) has noted, evaluation is inextricably tied up with the accountability notions of the clients and program managers who contract for evaluation work to be done. So no matter what our intentions regarding the evaluation work we do, those intentions are situated and embedded within specific social and institutional practices (House & Howe, 2000), and as such are subject to the definitions and expectations of others, not just evaluators. To be sure, we can ask the question, “Is this accountability expectation ‘evaluation’?” Some would say that it is not – and would try to draw the distinction between evaluation and audit, another uneasy long-term relationship with which evaluators have struggled over time. Yet the overlap and accommodation between the domains of audit and evaluation has persisted because, depending on the context, the line between them shifts, and the line is drawn not just within the two disciplines, but by those who use and contract for this work. Further, while the emphasis of audit tends to remain on the ‘outputs’ of activities, reflecting a more short-term preoccupation, as audit has grown to encompass the long-term implications of those short-term practices, it has begun to focus on outcomes and on the broader societal benefits of programming – traditionally the terrain of evaluation. In many Canadian federal and provincial settings, evaluation activity has been centralized into audit departments, in part as a way of reducing the potential for managerialism possible when the evaluation function is held within the ministry or department overseeing the program to be evaluated.
At a basic level, accountability is about being held to ‘account’ for something, and this implies a relationship with others. Fry (1995) distinguishes between feeling responsible for something and being held accountable. We have a ‘freedom to act’ when we are responsible for something, but do not necessarily have the ‘obligation to answer’ for it, which we do in a situation of accountability. In this sense we could view responsibility as a wider and more encompassing frame, of which accountability is a subset. Even if we are not being held to account for something we feel responsible for, we can still feel an obligation to communicate about what we are doing to those with whom we interact. Considine (2002) found a high level of consistency across nations in how front-line officials felt about responsiveness, obligation and willingness to communicate as forms of accountability. It may be useful to think of responsibility as the more subjective end of a continuum, with accountability at the external, outward-directed end. Both individuals and organizations can experience this sense of responsibility or accountability. If accountability implies a relationship to others, we then need to consider who those others might be, and the nature of that relationship. For Mathison and Ross (2002), accountability implies not just a relationship, but interaction, and specifically interaction within a setting that is at once hierarchical and bureaucratic. They discuss accountability as an economic form of interaction flowing from the delegation and dispersal of power within a hierarchical system (their wording implies that this holds both within and between organizations, but this is not their focus); that “those to whom power has been delegated are obligated to answer or render an account of the degree of success in accomplishing the outcomes desired by those in power” (Mathison & Ross, 2002, 2.2).
As such, accountability can be seen as a means of controlling both procedures and outcomes in complex settings (Mathison & Ross, 2002, 6.1). Part of the broader discussion about accountability relates to how the process of delegating responsibility for action can lead to both expectations and a perceived right to an explanation, and also the right to impose sanctions related to achieving those expectations (Baulderstone, 2005, p. 28). And as we move to more complex settings – those involving multiple organizations – the formality of the accountability process increases. Koppell (2005) examines what he calls pathologies of accountability; he is concerned with the lack of precision in the way that the idea of accountability is used in much of the literature. He develops a typology with five different dimensions or approaches to understanding accountability: transparency, liability, controllability, responsibility, and responsiveness. He argues that providing such a vocabulary concerning accountability can help reduce conflicts associated with incompatible expectations between those who are demanding accountability and those who are being held to account. These distinctions are useful for thinking about the rationale for accountability, how it is enacted, and for whom accountability is relevant. In the evaluation literature, House (1993) relates the drive for accountability to authority structures. He says (1993): there are at least three mechanisms for regulating society: the state, civil society, and the market. These are reflected in political (public) authority, cultural (professional) authority, and economic (consumer) authority—power, status, and money. To some degree these forms of authority are convertible into one another. However, they serve different purposes and operate in different ways. Different information is needed to provide accountability and evaluation in each situation. (p.
38) So for House, accountability in the public sphere is managed through hierarchically organized lines of authority, and in the consumer sphere by the market – people vote with their feet (or their credit cards) based on their perception of the value of goods or services. In civil society, House’s emphasis is on professional associations – a form of member-serving third sector organization accountable primarily to its members, rather than to the broader society or to a benefactor. In his discussion, House treats these spheres as substantially independent. Yet it is clear that with the devolution of human service programs to the nonprofit sector, the civil society model that House uses is becoming less applicable, as professional and public spheres increasingly overlap. As quasi-public organizations, nonprofits are increasingly accountable directly to public bodies. In addition, devolution can decrease consumer ‘choice’ in that the processes of the New Public Management usually involve eliminating overlaps in service, so there is less choice of which organization to go to at the consumer level. Warren (2003, p. 50) examines the changes happening within the sector that make accountability an important issue. He outlines a rationale for holding nonprofit organizations accountable that is based on his perception that such organizations are becoming more powerful, which makes their actions potentially dangerous in three ways. The first is that when nonprofit organizations take over responsibilities of the public sphere, they must be held to at least the same accountabilities to the public that we expect of the state – so that we can avoid corruption. According to Warren (2003) the second is that, … when governments try to capture the virtues of nonprofits by farming out public functions, there is the danger of inequity in the provision of public goods and services.
Especially in health and welfare, the general rule is that those locales that are richer in education and wealth also tend to be richer in nonprofits… Nonprofits simply lack the capacities to compensate for inequalities in health and welfare, and so government devolution of public responsibilities to nonprofits can reinforce existing inequalities and cleavages. (p. 50) And, Warren argues, since nonprofit organizations lack the ability to respond to and compensate for broader societal-level inequalities, the state must keep track of things to safeguard the interests of equality. When the state devolves programming to local communities, its process is in many ways a passive one – it relies on the existence of local organizations to deliver programs and services. Communities that lack those organizations may go without, unless the state is able to entice an existing organization to expand into the community, or support a local grassroots organization in changing its mandate or growing to fill the gap in community capacity. The distribution of nonprofit organizations rarely corresponds neatly with community need, and reliance on such a distribution can reinforce already existing inequities. Sections two and three in Chapter Five address capacity and mandate drift, respectively, and examine some of these impacts of devolution on inequality. Warren’s third point is that the power that nonprofit organizations gain by becoming major financial players in the delivery of programs can make them influential interest groups regarding public policy. While this is a point of contention within the literature on devolution, his point is clear that with power shifts come responsibilities to account for what is done with that power. This argument is consistent with the previous discussions of Mathison and Ross (2002) and Baulderstone (2005). Warren’s (2003, p.
51) final point about this shifting context of accountability is that we can examine the nature of the power being transferred, and make decisions about the need for accountability accordingly. If it is easy for individuals to leave an organization – a voluntary, member-serving nonprofit, like a club perhaps – then the need for accountability is minimal. If we have little choice in our involvement with the organization – as with nonprofit organizations that deliver a service we need in our community – then it warrants efforts to safeguard the service and our experience with it. For larger organizations, such as professional associations – even though they may be member-serving organizations – we can look to how large the resources they can deploy are, or how consequential their actions are for people’s lives, to see how publicly accountable they should be. So, for example, provincial medical or bar associations serve their members, but their actions and decisions also have significant import for the health and well-being of the population. Thus they need to be publicly accountable. These models overlap somewhat with one described by Aucoin and Heintzman (2000), which has been widely quoted in the devolution literature (Pollitt, 1999; Boyle, 2002; Phillips & Levasseur, 2004). Aucoin and Heintzman (2000) note: The purposes that accountability are meant to serve are essentially threefold, although they overlap in several ways. The first is to control for the abuse and misuse of public authority. The second is to provide assurance in respect to the use of public resources and adherence to the law and public service values. The third is to encourage and promote learning in pursuit of continuous improvement in governance and public management. (p. 45) The third of these purposes is the most clearly applicable to evaluation, although it is the first two that often receive the most recognition in the media and the public’s eye.
Gregory (1995) focuses on the first purpose, and like Warren, examines the possibilities of corruption, and the influence of both real and potential crises, scandals and tragedies. Similarly, in Canada, David Good (2003) examined the impacts of the Human Resources Development Canada (HRDC) audit scandal from the same perspective, and noted how it changed the accountability landscape for thousands of nonprofit organizations, for which the funding environment became substantially more rigorous, painstaking and demanding. This introduces another factor into accountability relationships; we do not simply have the relationship between the delegator and the one held accountable, but also the public, or the public’s interest. In fact, as Baulderstone (2005, pp. 10-11) notes, the concept of accountability in nonprofit organizations is quite complex in several related ways. The concept has different meanings for different stakeholders, and depending on the type of organization and its operating environment, the number of stakeholders may be high. While member-serving nonprofit organizations typically have fewer stakeholders, those that provide public services in a context of devolution often have many intersecting accountabilities, and each of these stakeholders may have different interests, levels of interest, expectations, and standards for accountability. Indeed, standards for nonprofit accountability are often ambiguous, unclear, or non-existent. How the public perceives accountability further complicates this relationship, and so too does how those directly involved see the accountability relationships.
Mathison and Ross (2002, 2.2) note that “because of the diffuse nature of many hierarchical systems, accountability depends on both surveillance and self-regulation.” Surveillance in this sense represents how those in power hold up their accountability measures as something performed on behalf of the broader society – so they are watchful over the interests of the public, and safeguarding against, among other things, crises, scandals and corruption. But they must then watch over so many programs, services, organizations, and staff members that this becomes an extremely challenging technical task. So those conducting this surveillance depend also on self-regulation – “the faithful exercise of delegated authority” by those delivering programs and services, whatever their location either inside or outside the public bureaucracy. This is a powerful insight, and will be examined more fully below under several themes in Chapters Five and Six, but here it is relevant to our examination of the public’s ‘right to know.’ Mathison and Ross (2002, 2.3) note that the overall complexity of the accountability relationships serves to obfuscate the interests of various stakeholders, particularly those in power, who can cloak their interests in statements about safeguarding the public good. While Mathison and Ross are primarily discussing this relationship within an internal bureaucratic hierarchical system, Stein (2001, p. 75) describes this process as exacerbated within the context of devolution. Contracting out programs and services introduces what she calls triangular rather than bilateral relationships, with the public, state and civil society each having interests, and interacting within indirect and unclear lines of authority and accountability. In the process, the transparency of accountability is potentially compromised, and the responsiveness of programs to the needs of the public or the state becomes harder to identify and ascertain.
To summarize our understanding of accountability relevant to the evaluation of programs and services in a context of devolution:
• Accountability involves a relationship between two or more parties, typically an ‘economic’ interaction reflecting the delegation and dispersal of power within a hierarchical system.
• This may be internal to an organization, or involve multiple organizations, and implies an obligation to answer or render an account for actions taken.
• Delegating organizations may expect and have a perceived right to an explanation, and may also have the right to impose sanctions relating to those expectations.
• Accountability can involve transparency – a public obligation to be held to account. It reflects authority structures within the state, civil society and consumer spheres, can reflect authority structures that span those spheres, and protects us from the dangers of unchecked power implicit in delegated authority.
• Some contexts of delegated power warrant higher accountability concerns, particularly when members do not have a choice about participation, or when the consequences of action are sufficiently serious.
• Expectations for accountability can include controlling for potential abuse, providing assurance that resources are used appropriately, and promoting learning and continuous improvement.
• Overlapping spheres increase demands for public accountability, while offering more complex authorizing environments, which increase opportunities for obfuscation of goals and interests, and also increase the need for delegating the accountability function itself.
While this summary is not definitive, it introduces many possibilities for discussion as we examine examples of how accountability plays out in a devolved programming context.
Reflecting on Evaluation and its Rationale

The starting point for this dissertation is the realization that program evaluation is more than simply applying the latest and most rigorous social science research methods to the examination of social and educational programs. It is also a political and value-laden activity, and one that is implicated in the official decision-making processes that result in governments spending millions of dollars each year. My emphasis is not on program evaluation methodologies per se, although I examine a variety of approaches in view of the larger concerns I am dealing with. I am particularly interested in nonprofit evaluation as a specific political context in which organizations cope with demands for accountability and the changing context of governance under the New Public Management and its aftermath. Over the past fifteen years, the evaluation literature has begun to move beyond its ongoing methodological preoccupation, and has been addressing the socio-political context of evaluation work in increasing breadth and detail. For example, House and Howe (2000) state that: Evaluation always exists within some authority structure, some particular social system. It does not stand alone as simply a logic or a methodology, free of time and space, and it is certainly not free of values and interests. Rather, evaluation practices are firmly embedded in and inextricably tied to particular social and institutional structures and practices. (p. 3) Evaluation does not just reflect these institutional structures and practices, it reinforces them, and in this way it can have unintended conservative implications. House and Howe advocate for an approach that recognizes the mutually reinforcing relationship between institutions and evaluation, explicitly grounded in democratic principles and deliberation. Their new model, deliberative democratic evaluation, is intended to be reflexive, inclusive, and grounded in dialogue.
This model has grown in part as a response to evaluation’s connection to practice – to the design, implementation and evaluation of social and educational programs, and to real-world problems requiring real-world solutions. It has been proposed as one solution to the issue of values in evaluation practice. As an approach that does not represent a ‘method’ so much as an orientation to inclusion and deliberation – and perhaps implies only the recognition of a diversity of appropriate methods depending on the context – deliberative democratic evaluation provides “an explicit framework that links evaluation to the larger socio-political and moral structure” (House & Howe, 2000, p. 3). It asks such questions as:
1. Whose interests are represented in the evaluation?
2. Which stakeholders are represented, and which are missing or excluded?
3. Are there serious power imbalances?
For MacDonald (1976), one of the first to talk about ‘democratic evaluation,’ its key concepts are perhaps confidentiality, negotiation and accessibility (also noted in Kushner, 2002, p. 3), implicating multiple roles for the evaluator as protector, mediator and proponent of inclusive approaches. MacDonald’s vision of democratic evaluation encompasses the rights and obligations inherent in the process – informing the public, balanced with the obligation to protect informants and stakeholders, and to look out for their needs. The attempt by House and Howe to shift the focus of evaluation ‘theory’ discussions away from their methodological preoccupation is one also addressed by Greene (2001c).
Greene argues that for most of its brief existence, evaluation has been a “method-centered enterprise.” Yet these debates about evaluation methodology have not really been debates about “tools”: Methods debates are proxies for debates about the nature of the social world and our ability to know it, and about what’s important to know about the social world, the role of the inquirer in knowledge construction, and the positioning of science in society. (p. 1) When we read the journal articles, conference proceedings and books that make up much of the official discourse of the professional evaluator, discussions of methods predominate. Most evaluators recognize the values inherent in the act of conducting evaluation, yet they have tended to identify issues of values as challenges that can be addressed by finding methodological solutions. Even so, a growing current of discussion within the evaluation community has involved attempting to move dialogue beyond methods, in part as recognition that the methods debates do not and cannot resolve the issues. To date, the work on developing a deliberative democratic approach to evaluation practice has been promising. By incorporating a broader appreciation of the political contexts in which evaluation work is conducted, the discipline is developing better tools to understand how evaluation processes can be evaluated and improved. Deliberative democratic evaluation appears to be incorporating some of the most salient and practical of the many recent advances in evaluation theory – recognizing the value of input from all stakeholders and a generally inclusive approach to all phases of the evaluation – and seems to hold the promise of becoming an approach that effectively blends many interests and methodologies into a framework capable of cross-cutting the diversity of contexts in which program evaluation is conducted. Yet as I have noted above, this diversity of contexts is in many ways increasingly anti-democratic.
Indeed, I contend that recent impacts of the process of devolution of programs and services have made it more challenging to conduct evaluation either deliberatively or democratically. Evaluation has the potential to be a tool for program improvement and policy development. However, the performance measurement focus on accountability to externally determined standards undermines one of the original rationales for devolution: community-level responsiveness. In such a devolved setting, the public transparency efforts that have made evaluation in government increasingly part of the public debate about the worth of programs and the public good may be undermined. Devolution decreases opportunities for deliberation and debate, particularly among the stakeholders who are most affected by changes to programs and services – deliverers and their clients. While I am interested in examining factors that might be described as a political economy of evaluation, my overriding interest is in understanding how those broader systemic relationships are constructed on the ground. How is it that evaluation practice in nonprofit organizations can aim to improve and rationalize how programs are designed and delivered, and yet also contribute to the maintenance and even expansion of societal inequalities? How will knowing about the process of reproducing inequalities benefit our ability to introduce systemic change? The approach I employ in the study speaks to this challenge, focusing as a first step on understanding the nature of existing relationships and practice within the context of devolved programs and services.
Stame (2006), examining the role of evaluation in governance and in enhancing democracy, like Khakee (2003) notes the gap between evaluation ‘research’ (theory, in the North American literature) and practice: … evaluation might aim at enhancing democracy through specific evaluation approaches that reinforced participation, warranted transparency, promoted public welfare… Unfortunately the track record of evaluation studies in this regard is dismal: the debate on utilization has shown that it takes a long time for evaluations to be utilized, and this happens mostly in a cognitive, not instrumental, way; goal displacement seems to be the frequent result of systems of performance measurement; and, in general, public administrators have developed strategies of resistance to being evaluated. (p. 7) One way that Stame sees evaluation as having responded to these trends is by developing a diverse range of approaches that focus on and reflect the diversity and complexity of the operating environments in which evaluation is conducted. Building approaches that allow evaluators to examine vague or contradictory goals (goal-free evaluation), and using program ‘theories’ to make the case for evaluation (theory-based evaluation), both contribute to incorporating the context of evaluation into research design.
Indeed, a wide range of evaluation approaches focuses on such contextual issues as defining the components of the evaluation process: the range began with Guba and Lincoln’s (1989) fourth generation evaluation, and includes Stake’s (2004) responsive evaluation, Pawson and Tilley’s (1997) realistic evaluation, Patton’s (1994) developmental evaluation, Arnkil’s (2002) ‘emergent’ evaluation, and the growing literature on ‘evaluation use.’ Coping with complexity, and with the unequal power balances within any evaluation context, has become a theoretical concern for the discipline; I believe that nonprofit practice offers opportunities to reflect on the issues for which these approaches have been developed.

CHAPTER FIVE: CASE STUDY ANALYSIS: DEVOLUTION THEMES

Whether intentionally or not, involvement in government programs can threaten some of (the) inherent advantages of nonprofit agencies. For example such involvement often creates a tension for nonprofit agencies between their service role and their advocacy role, between their role as deliverers of government-funded services and their role as critics of government and private policies. Such involvement can also put a strain on other important features of the organizations, such as their reliance on volunteers, their sense of independence, their frequently informal and non-bureaucratic character, and their direction by private citizens along lines that these citizens think appropriate. Since many of these features are the ones that recommend nonprofit organizations as service providers in the first place, it would be ironic if government programs seriously compromised these features.
Lester M. Salamon, 1989, p. 44.

The range of possibilities for organizing how we think about and understand the effects of devolution on nonprofit organizations is enormous.
Most writers in the area emphasize one or more specific themes, and indeed, a problem in the literature to date is that the themes sit at many levels of generality, overlap, and in many cases themes from one writer are subsumed within different themes by another. Further, the various themes tend to reflect differing perspectives of impact – on clients, organizations, the nonprofit sector, or society as a whole – and sometimes this happens within one article or theme. A few writers (Eikenberry & Kluver, 2004; Salamon, 2003 & 1997; Alexander, 1999) attempt to develop broader schema for understanding the diversity of impacts. After examining examples from my own practice and the diversity of themes described in the literature, I have chosen five broad themes that appear to be sufficiently distinct for articulation and analysis, although they are by no means lacking in overlaps and caveats, and are not exhaustive of all possible themes. They have been chosen primarily as a means of addressing my broader goal of understanding evaluation in a devolved program context, and as such, the goal of understanding and explaining the impacts of devolution is secondary to this end. This chapter builds on these themes to better understand evaluation in the context of the devolution of program and service delivery. The broad themes are: 1) accountability, 2) capacity, 3) mandate drift, 4) competition, and 5) complexity. The themes are cross-cutting, and some of the examples used are appropriate for understanding multiple themes. This is both expected and intentional. In this chapter I examine each of the devolution themes in turn.
Each thematic section provides three types of information and discussion: 1) examples from the case studies in the form of two or more vignettes – at least one representing a small to medium-sized nonprofit and another representing a grassroots organization, 2) relevant concepts and examples from the literature, particularly focusing on case studies conducted by others, and 3) reflection on the vignettes and the examples and ideas from the literature, representing an initial analysis of the relevance of these concepts for evaluation practice.

Theme One: Accountability

Perhaps the most consistently identified issue across both the devolution and evaluation literatures is accountability. Accountability has been a disciplinary constant in discussions and disputes for the evaluation profession since its inception. However, nonprofit organizations have only relatively recently elevated the issue to a primary focus, reflecting the transition from grants to contracts as their primary sources of funding.

Accountability in the Context of Devolution

A variety of studies over the past decade have documented how devolution has increased accountability demands on nonprofit organizations. In a survey of 178 nonprofit organizations in the U.S., Fine et al. (2000) examined a variety of evaluation practices. They found that 43% of respondents to the survey indicated their rationale for conducting evaluation was to meet a funding requirement included in their contract. Larger broker organizations such as the W. K. Kellogg Foundation and the United Way consistently include evaluation and accountability requirements in funding contracts (Scott, 2003a). Poole et al. (2000) note “the drive for increased accountability in human services has put enormous pressure on nonprofit agencies to develop performance measurement systems.
A great deal of the pressure comes from government, a chief funding source of services in the not-for-profit sector.” Anderson (2001) used the increased accountability demands placed on nonprofit organizations because of social service devolution as a starting point for examining the use of information developed within those organizations. Within Canada, researchers associated with the Voluntary Sector Initiative (VSI) have documented an enormous increase in pressures related to accountability and evaluation. The VSI’s 2001 survey of 1,965 voluntary organizations and 322 funders (Hall et al., 2003) found that almost half of voluntary organizations reported an increase in funder expectations over the previous three years. Similarly, nearly half of funders said that they require funded organizations to do evaluations, and another 40% ‘suggest’ that they do so. Respondents also noted a similar level of increase in evaluation requirements by foundations providing funding. The VSI survey included a wide range of nonprofit organization types, and focused exclusively on registered charitable organizations. Those nonprofit organizations that self-described as Social Services or Community Benefits organizations were more likely to indicate (42%) that they were doing evaluations because they were required by funders. Interestingly, when asked for the main reason their organizations conducted evaluation, almost three quarters (73%) indicated that it was a decision made by the staff or board, taken primarily for internal reasons. Despite the increase in expectations for evaluation and accountability measures, the VSI surveys indicate that less than half (47%) of funders provide financial resources for evaluation activities, or allow project funding to be used for evaluation. Almost three quarters of funders provide support in the form of advice. This contrast between high expectations and low levels of support will be examined in the next section on capacity.
What does this demand for accountability look like in the nonprofit organization? Richmond and Shields (2004) argue that current contract financing approaches impose complex and burdensome accountability schemes disguised as evaluation measures. While the study by Hall et al. (2003) showed an increase in expectations about evaluation and accountability, it also showed that the task is becoming more complex, as organizations try to make the switch from examining outputs to outcomes. The study noted that many respondents from nonprofit organizations confuse outputs and outcomes, and say that they are examining outcomes when they are actually reporting outputs. A related issue for these organizations is that when they do apply evaluation techniques learned from the literature, the techniques tend to be those designed for business or government, which do not reflect the nonprofit environment. An example of this comes from contrasting my experience doing evaluation within government with my experience providing supports to nonprofit organizations. Working within government as an internal evaluator prior to the devolution of programs to nonprofit organizations, I observed the process of developing a new program. It typically involved broad political direction given to program management, who involved teams of policy and administrative staff – and occasionally evaluators – in designing programs that bridged the gap between the broad parameters offered as guidelines from the political domain and the reality of what could be developed and delivered given funding and infrastructure constraints. This process usually involved significant negotiation concerning design, resources, accountability, and responsibility.
After program areas were devolved to third party delivery organizations – in most cases individual nonprofit organizations, but sometimes broker organizations such as umbrella groups representing diverse associations of local community-level deliverers – a similar kind of discussion took place between program administrators and potential brokers and deliverers of the programs or services in question. Both tried to negotiate with the more central level (for the deliverers, the administrative staff; for the administrators, the political arm giving broad direction) as much flexibility in program design and delivery as possible, trusting their own abilities and awareness of priorities and client needs to make the best use of funding and resources. Similarly, both of the levels that had any control over funding and resources negotiated as strongly as possible to build control and accountability into their relationship with the respective point of delivery. When starting an evaluation as an internal evaluator, the first task I undertook was talking to program management and negotiating a variety of things, including the scope of the evaluation, the range of issues to be addressed, and the timing of program periods and report expectations. One of the most surprising tasks I had to undertake was negotiating the goal of the program. The program staff and management certainly had program goals that they understood, but these were often not written in a format that could be easily shared. Managers liked the flexibility of being able to adapt the program, and they feared that putting the goals in writing would mean that they were being asked to narrow the range of what they could do, and that whatever they said would be used to define their success.
If we were not able to accurately and completely articulate the goals of the program, then we might be trying to hold them accountable for things beyond their control or, worse, to inappropriate standards. When programs were first developed, they usually had some broad goal and objective statements that could be found hidden among program documentation. Initial discussions with program managers and staff to confirm, based on these statements, the rationale for the program and the range of activities undertaken under its auspices typically began with a laugh and a shake of the head – “those are way out of date” was a common theme. Program managers had an interest in keeping the program goals and definition obscure, and resisted efforts to clarify the picture of what the program entailed. Part of their rationale was that the program and context were too complex to be put into a few statements of objectives. Indeed, Abma and Noordegraaf (2003) argue that one of the reasons performance measurement systems were developed was to simplify complex situations. The evaluation teams would meet this negotiation process with an approach focused on developing logic models that portrayed the context, goals, objectives, actions, and intended outputs and outcomes of the program. It was a new process for the programs to undertake, and managers thought of it as a useful exercise once they worked through the process of developing one. At that time, it was typically done after the fact – an aid to framing the evaluation, not an aid to program definition. By contrast, the process of program development for those working in nonprofit organizations can be much more rigorous and restricting. The following vignette portrays a typical experience in which I have worked with nonprofit staff to develop programs.

VIGNETTE 5.1.1 (NONPROFIT) – OSSIFYING LOGICS

It is late March, and I am in a meeting with Susan, my main contact in the small nonprofit organization.
We are in a small meeting room, discussing our plans for the final data collection related to her project, and the unexpected challenges she has faced in obtaining access to teachers in the local school – they are telling her it is now too late in the school year, that everyone is busy, and have asked her to please come back in September. The project ends in May. We are developing our fallback plan. There is a rapid knock at the door; it bursts wide, and a question is flung into the room before the door is half open: “Is the evaluator still here?” Three staff members of the agency are in another meeting room, where they have been hard at work for several days preparing a proposal in response to a “request for proposals” (RFP) tendered by the provincial government for a new program to be delivered by local agencies. They are hoping to get funded and bring the program to their community and agency. The RFP requests that all proposals contain a detailed evaluation plan, and that the plan include a logic model portraying the program goals, objectives, outcomes, etc. There is an example logic model with the RFP materials, but it is for a very different type of program than they are proposing. They are hoping that I can help them figure out what is required, and review their draft ‘evaluation plan’ to see if it makes sense. It is not part of my official role in working with the agency, but I am almost finished my meeting, and because of the timing of flights to the community, will be in town until the following morning. One of the staff members hints broadly that if they get the contract, they will need the help of an evaluator, so it might be worth my while to help. I tell them that I don’t have any expectation of work from this, but I do agree to help with the proposal. I am interested in seeing what kinds of expectations the provincial contract has, and how those writing the proposal understand those expectations.
And while I think that evaluation is a worthwhile endeavour, I suspect that I respond favourably to their request because of the look of total exhaustion on their faces, and the chaos of the proposal writing room, with empty pizza boxes and a dozen coffee cups and mugs strewn about the area. The proposal is nearing completion, the evaluation section has been drafted, and we spend a couple of hours working on a logic model. Many of the elements that the RFP suggests should be in the logic model are unknowable at this point – the proposal is for a project that would require a great deal of background work to better understand the nature of the problem in the community; as a first stage in developing the program, such a logic model and proposal would require that those preparing it already had access to the results of the project’s first phase. I phone the provincial government contact for the RFP, and ask about the expectations for the logic model and evaluation plan, suggesting that some of the information is not yet knowable. The contact assures me that the review committee understands this, and that the draft evaluation plan and logic model are expected to be “a work in progress,” something that will be adapted over time. I am told that they include it in the RFP because they want to be assured that those writing the proposal and implementing the project “know how to do it.” I relay this information to the proposal development team, who are astonished. For them, the logic model would be part of the contract – it would be what they had agreed to do for the contract. Indeed, for other contracts this has been their experience – once put onto paper, the plan, goals, activities and expectations as first outlined develop a life of their own, and are hard to deviate from. I try to reassure them that a logic model is a tool, and should be adapted to reflect the program reality, rather than used as an inflexible roadmap.
They suggest that the expectation depends on who the project coordinator is for the contract – and that is not necessarily the person who makes the final decision about funding the contract.

REFLECTION 5.1.1

In their national study, Hall et al. (2003, p. 3) note that there is a considerable level of cynicism in the nonprofit sector about the evaluation process – a suspicion that evaluation information is not used, and that it is an empty requirement producing a filed report of no long-term significance. I suspect that my efforts to clarify the intentions of government staff only served to confirm these suspicions, although we used the opportunity to discuss why and how evaluation is an important tool for local agencies, whether or not funders expect it. It is interesting that this cynicism co-exists with fears that any evaluations that do not demonstrate positive results will automatically lead to lost funding (Hall et al., 2003). What is clear from my experience with many nonprofit organizations is that they experience the evaluation and accountability process as a ‘crap shoot’ – sometimes their government contacts are knowledgeable about evaluation and have realistic expectations, sometimes they are rigid and inflexible in their expectations and understanding of what is to be done, and sometimes they start the contract with the former, and finish it with the latter after their contact moves on to a new job. English and Kaleveld (2003) argue that

… the development of a program logic for the purposes of focusing an evaluation can be a highly politicized process. This is because stakeholders within the authorizing environment (by definition) have a stake in how programs (and their sponsoring organizations) are perceived, and in some cases may build coalitions with other stakeholders to ensure this is a shared view (positive or negative). (p.
40)

Most of the time, the process of developing a logic model portrayal of a program does serve to decontextualize the program, harden the ideas about what the program is trying to do, and remove the perception that there is flexibility to adapt the program to meet unforeseen or unexpected challenges and client needs. And yet developing such a model often serves an important purpose in clarifying what people are doing and why, helping them to identify important stakeholders and interests that need to be taken into account. There is a trade-off between the clarity gained by articulating a program in terms of goals and objectives and putting them into the format of a logic model, and the risk that those benefits will be offset by reducing the real or perceived flexibility of the program to adapt to changes in context, client needs, or implementation challenges. This trade-off is in part a reflection of the process of articulating goals, but the logic model itself is a representation of those goals in a format that becomes crystallized in a public way, and one that can be used in many ways unanticipated by those developing the models. There are alternative approaches for developing logic models, but they are not yet in widespread use among evaluators, and are little understood by those using traditional texts and RFP attachments to learn about the process and purpose of a logic model. For example, Williams (2000) and Fraser (2001) describe alternatives to the diagrammatic form of logic model that involve revisiting goals and processes, and building in an understanding of the program as dynamic, evolving, and in an environment subject to change. This discussion of how logic models are used in nonprofit evaluation addresses the broader issue of identifying the criteria on which the performance of a program will be judged. This is a political as well as practical process, and one subject to negotiation and debate.
It is part of the larger movement towards performance measurement that is still sifting down to smaller nonprofit and grassroots organizations. The movement to introduce and apply performance measurement in government and in the nonprofit sector has affected the contracting process, often in ways that are hard to document. These effects can shape the perceptions of those in government and nonprofit organizations, even when the perceived expectations do not represent actual policies of the funder.

Accountability and Performance Measurement

The evaluation literature has seen a lively debate concerning the use of performance measurement in evaluation. Perrin (1998), in discussing some of the effective uses and misuses of performance measurement in evaluation, describes eight problems and limitations, several of which appear to be particularly applicable to the nonprofit environment. These include:

• Local stakeholders can have different interpretations of the same terms and concepts than do those who are expecting or demanding performance measures,
• Goal displacement – “When indicators become the objective, they result in ‘goal displacement,’ which leads to emphasis on the wrong activities and encourages creaming and other means of making the numbers without improving actual outcomes,” (p.
371)
• Sometimes meaningless or irrelevant measures are used, reflecting the need to have a measure rather than one that is useful,
• Programs are looked at out of context and treated as if they act in isolation, which is something that programs never do,
• Sometimes measures are not situationally responsive to unique or changing contexts, in which goals are necessarily contingent,
• Measures can lack relevance for decision-making or resource allocation – the kind of situation that enhances cynicism about the process, and
• Performance measurement systems, when rooted in a hierarchical control model, can lead to less, rather than more, focus on outcomes, innovation or improvement.

Lindgren (2001) specifically examines performance measurement in the nonprofit sector. Like Perrin, Lindgren discusses goal displacement, the replacement of locally important measures, the use of measures that rely on context-dependent interpretations – which are bound to vary depending on who is interpreting them – and the strong motivation for measurement error that arises when program deliverers perceive that the information being collected is being used to evaluate their performance. Greene (1999) builds on such themes. In writing about the inequality of performance measurements, Greene makes three key observations. The first concerns the meanings of program quality: “program quality, as a manifestation of human experience, is intrinsically complex and cannot be meaningfully defined by or reduced to simple endpoints” (1999, p. 164), and further, “standards for judgments of program quality are also irreducibly pluralistic and cannot be meaningfully captured in just one rendering” (1999, p. 165). Here Greene is noting that a program’s quality is fleeting and multiple – our experiences of it can be ineffable and extremely difficult to explain to someone else. She notes (1999, p.
165):

Judgments of program quality are not meaningfully presented as objective, permanent assessments of a fixed reality, but rather must capture the inter-subjective, dynamic, dialogic potential of both the judge and that which is judged.

Greene’s second point (1999, pp. 165-199) is about representations of program quality. Here Greene is talking about how we represent our findings and our evaluation work and reports with clients, stakeholders and communities. Standard reports deny the inquirer as a knower, and social inquiry itself as a construction. This is reflected in reports written without reference to the author’s voice or position. Among other things, Greene is referring here to how we create the thick descriptions that represent the broad picture of the program reality, which usually gets lost when we write up our evaluation studies. Greene’s third point (1999, pp. 168-169) deals with conversations about program quality. Here she talks of ‘readerly’ texts – highly structured, controlled, predictable, with precise and clear directions about how to understand the text – versus ‘writerly’ texts, which invite multiple interpretations, and insist that the reader participate in meaning-making, and actively write while reading (referencing Abma, 1997). “Writerly texts seek not closure, but dialogue.” The logic model (and goal-setting) processes imposed upon nonprofit organizations represent a form of ‘readerly’ text – they try to remove the surprising or the ambiguous, and focus on simplification rather than explication. Abma and Noordegraaf (2003) note this simplification goal of performance measurement – that it is introduced in order to reduce complexity, and in response to uncertainty and ambiguity. They define ambiguity as the absence of, or contradictions among, interpretations about what needs to be done, can be done, and should be done, as well as when and where these things should happen.
They argue that performance measurement may be appropriate when ambiguity is relatively low, but that it is both difficult and potentially damaging in settings marked by a high degree of ambiguity – which would apply to virtually any small nonprofit or grassroots organization, or to most innovative pilot programs.

VIGNETTE 5.1.2 (GRASSROOTS) – ELEVENTH-HOUR REFRAMING

I have been working on the first draft of my final report for a two-year innovative pilot project that is part of a provincial funding initiative. I receive an e-mail from my client Laura, forwarded from the main provincial contact – an evaluation consultant who has been working to coordinate activities among several dozen diverse projects with a community health focus. The e-mail includes an eight-page set of detailed instructions regarding data collection and the required format for the evaluation report. At the beginning of the project, Laura and I had attended a workshop on evaluation held for representatives of each of the projects funded through this envelope. It had established the provincial contact as a resource to participating organizations – most had no budget for working with an external evaluator, and in other agencies, project coordinators were performing the required evaluation component. I was one of a few consultants who attended the workshop, and I found it encouraging to see that the emphasis was on learning from the projects, and specifically not focused on ‘accountability.’ Even so, a logic model was required as part of the process; it was developed in draft at the workshop, fleshed out in the weeks that followed, and forwarded to the provincial coordinator as requested. The eight-page set of requirements sent out near the end of the project represents a shift in emphasis.
It speaks of using the initial logic model as the basis for addressing ‘objectives achievement,’ provides a structured reporting format that will allow ‘consistency and comparison across projects,’ and clearly represents an emphasis on accountability and a logic of justification for resources expended. I am comfortable adapting the new requests for report format and content to the existing evaluation and reporting plan, which has been designed to meet the needs of the organization and the perceived community interests and expectations. I can successfully argue to both the agency board and the funder that the report submitted will adequately and more appropriately meet the project’s intent and provide meaningful information for other communities. In conversation with my client, I wonder how the other projects are coping with these last-minute change requests, as they may not have the experience to appreciate that the requests represent a shift in emphasis and expectation. Perhaps they expected this accountability emphasis from the start?

REFLECTION 5.1.2

The timing of the request, along with the shift in emphasis from a coordinator who clearly understands evaluation as a learning rather than an accountability function, appears to reflect a last-minute imposition from provincial government funders. It contradicts information previously shared with projects, and arrives too late in the implementation process to be easily accommodated by new data collection or a changed evaluation emphasis. It also reflects a de-emphasis on the emergent nature of the innovative projects – that they were intended to adapt and grow and discover ‘what works’ rather than stick to one approach without regard to changes in context, need or formative reflection on the processes being implemented.
In effect, the new request appears to represent an after-the-fact performance measurement expectation – one based on locally developed criteria and goals, to be sure – but also a requirement imposed at odds with values previously articulated by the funder’s representative, which acknowledged the emergent, community-based and complex contexts in which all of the innovative projects have been based. In this way, the funder’s request serves to “decouple accountability from ownership and responsibility, and therefore revert to accountability as regulation and control, rather than as ‘a vehicle for shared responsibility’ among all stakeholder-citizens.” Greene (1999) further notes that those responsible for identifying performance measures and implementing them are not always the ones who use them in decision-making, which “severs the crucial link between responsibility and authority” (p. 170). Indeed, it is not clear where the new requests are originating; and perhaps it does not matter, in the sense that the anonymity of the requirement itself is a perfect representation of that severed link. To summarize this discussion of performance measurement and nonprofit evaluation, many of the issues examined by Greene, Abma, Perrin and Lindgren relate to understanding accountability as control, and how that emphasis may reduce the apparent complexity of evaluation contexts. This accountability emphasis can also have unintended impacts, in part because complexity is an inherent and necessary element of how programs operate. The process of reducing this apparent complexity can also lead to a shifting emphasis – often referred to as goal displacement in the evaluation literature, and as mandate drift in the devolution literature – which represents a potentially subtle and unacknowledged reframing of program goals that has been associated with the introduction and growth of gaps in service and programming (Phillips & Levasseur, 2004).
The third thematic section in this chapter examines the issue of mandate drift in more detail, and the fifth thematic section addresses complexity. Two other cross-cutting themes that grow out of this discussion are the capacity to meet evaluation and accountability expectations, and the broad question “to whom are programs accountable – funders, community stakeholders, clients, or the broader public?” Ferris (1993) addresses capacity for evaluation by contrasting public accountability with nonprofit autonomy: “The increased role of government contracts and funding of nonprofits has heightened tensions as governments seek accountability and nonprofits seek to preserve autonomy” (p. 363). He argues that governments need to recognize that excessive intrusions into the autonomy of nonprofit organizations limit the advantages of having the nonprofit sector involved in program delivery, particularly when the demands are made in the absence of adequate resources to meet the requirements. Capacity is examined in the second thematic section of this chapter. Many writers have addressed the need for programs to remain accountable to local communities, rather than being refocused solely on the concerns of funders (Alexander, 2000; Phillips, 2000; Clayson et al., 2002; Wallerstein, 1999; Ebrahim, 2002). Certainly in the case of small or medium-sized nonprofit organizations that have their programs rolled into broader umbrella programs of either federal or provincial jurisdiction, this is a particularly crucial trade-off, and one that agencies and community representatives remain aware of in their initial deliberations concerning whether to compete for funding and join with broader delivery networks. In such trade-offs, local understanding of programming priorities can be lost in a relatively short period of time.
This is an issue addressed by Campbell (2002) in examining outcomes assessment and what he calls the paradox of nonprofit accountability:

Leaders of nonprofit organizations face a particular bind in responding to the demands for results-based accountability. If they focus only on the project-level outcomes over which they have the most control or for which indicators are already available, they risk default on the larger question of accountability to publicly valued goals. On the other hand, if they try to demonstrate the impact of their particular projects on community-wide outcomes, they risk taking credit inappropriately or shouldering the blame for indicators beyond their control. (p. 243)

Campbell also notes that community-wide results take a long time to appear, suggesting that accountability processes need to be based in much longer time frames than funders typically allow or expect. Burnley et al. (2005) also note this concern, which is examined in the fifth thematic section on complexity later in this chapter. Phillips (2000) examines community responsiveness from the perspective of what kinds of administrative or governance changes would facilitate shifting the emphasis from a hierarchical view of accountability to one focused more specifically on horizontal governance. One of her concerns is that at present, decisions about evaluation and accountability are made ad hoc and vary depending on the jurisdiction, the ministry or department, and even the branches or program streams within departments. She argues that flexibility and responsiveness need to be built into the accountability relationship, and applied across departments as a broad strategy within government, as the ad hoc approach tends to default to hierarchical accountability processes, despite the best efforts or intentions of individuals throughout the system. Finally, one of the aspects of accountability noted by Koppell (2005) is transparency.
This is also an aspect of the trade-off between local and hierarchical accountability. As I have watched the devolution of programs, and then of the evaluation of those programs, to nonprofit deliverers, I have noted that it has been accompanied by a potential loss of transparency. Twenty years ago, when I worked as an internal evaluator, evaluation reports could easily be shelved and made to ‘disappear’ if they contained findings that were considered inconvenient. In the interim, there has been a shift to increased transparency as provincial and federal departments across Canada have come to treat the products of evaluations as public documents, available to anyone who knows where to look and whom to ask for the reports. Such reports are now routinely available on request or even on-line. Yet the devolution of the evaluation function to agencies and individual programs has introduced a new way for evaluation results to become obscured. The individual reports submitted by agencies and deliverers are often not made available as public documents because of privacy concerns – they speak about local issues, name individuals, and use thick descriptions that make it difficult to protect the privacy of clients or program deliverers. As such, their availability can be restricted, and they represent a return to less transparency in evaluation. Another systemic aspect of the devolution process is the sheer number of community-level evaluation reports that are prepared and submitted to agencies and government departments. This has two types of potential impact. One is that, at this volume, it becomes extremely difficult to keep track of and reconcile the results of so many piecemeal evaluation efforts. The other is reflected in the lost opportunities to have these evaluation efforts contribute to deliberation about program and service delivery between communities.
The accountability streams are narrowly vertical, and unlike when a province-wide evaluation is conducted, the results are not necessarily linked in a cohesive manner. And if they are linked in some form of meta-analysis, that analysis is typically not shared or done cooperatively with local community participants. On the issue of transparency, Stein (2001) remains hopeful:

We have arrived at a signal moment in history, when the demands of states and citizens seem to converge, at least in part, although for very different reasons. When the state was the sole deliverer of public goods, it had every incentive as a monopolist to conceal rather than reveal. The new post-industrial state needs transparency among providers, assurances of quality, and evaluations of the effectiveness of the public goods that are delivered. So do citizens. (p. 79)

Perhaps citizens do need such transparency, but devolution appears to offer a potential respite from transparency for those who might be inclined to use it, and this may represent a serious systemic unintended consequence of devolving evaluation. This section has examined the theme of accountability, emphasizing the increased expectations of funders for accountability within the context of devolved service delivery. Examples described how the use of performance measurement and logic modelling processes can serve to obscure the inherent complexity of program contexts, actively reduce the flexibility of program delivery, separate accountability from responsibility, and reduce the flexibility of evaluation and reporting efforts by nonprofit organizations. In this way, program delivery and evaluation can become less responsive to local concerns and implementation contexts, and have unintended consequences at both the community and societal levels. The next section continues this discussion of accountability by looking at the capacity of organizations to deliver and evaluate programs on behalf of government funders.
Theme Two: Capacity

This section addresses the capacity of nonprofit organizations to effectively undertake the work of devolved program and service delivery, and to undertake evaluation activities related to this delivery. Concern about the capacity of the nonprofit sector to take on the work of government devolved to it has been one of the most consistent themes in the literature (Ferris, 1993; Hall & Reed, 1998; Alexander, 1999; Alexander et al., 1999; Wolch, 1999; Fredericksen & London, 2000; McDonald & Marston, 2002; DeVerteuil et al., 2002; Sommerfeld & Reisch, 2003; Mulroy, 2003; Eikenberry & Kluver, 2004; Burnley et al., 2005). More recently, the evaluation literature has also begun to examine the capacity of nonprofit organizations to undertake evaluation activities (Wallerstein, 1999; Botcheva et al., 2002; Hall et al., 2003; Phillips & Levasseur, 2004; Levasseur & Phillips, 2005). Both delivery and evaluation represent stressors on the sector, and contribute to work increasingly being done by nonprofit staff ‘off the sides of their desks.’ Salamon (1993, 1997, 2003) has described the nonprofit sector as growing, resilient, central to the effective operation of contemporary U.S. society, and yet experiencing a state of crisis (1997). Core elements of the crisis that Salamon describes are fiscal and economic – lost or changing income from government, and the marketization of the sector through the introduction of user fees and service charges. The fiscal challenges Salamon discusses encompass the shifts from grants to contracts discussed by Phillips and Levasseur (2004) and others, but also include overall declines in funding to the sector. The Canadian context has not seen comparably dramatic declines in funding, but instead shifts in the types of funding and the identities of the funders (as provinces have taken on program delivery).
Further, the marketization emphasis in Canada has been more subtle – particularly in terms of the lower prevalence of introducing direct ‘for-profit’ competitors into the delivery landscape. Even so, both pressures are present in the Canadian nonprofit context. A key preoccupation of the American and international contexts described in much of the literature is the devolution of responsibility for welfare program delivery. As noted earlier, although some aspects of this delivery have been devolved in Canada – notably employment-related programs and a variety of supplementary programs and services for individuals who are participating in the social welfare system in some way – significant parts of welfare program delivery remain under the jurisdiction of public servants. However, delivery has changed dramatically as individual caseloads have grown for social service workers in government, and specific hands-on tasks involving client support have been devolved to those in nonprofit organizations. The dynamic that has been most apparent in Canada involves shifts in the contracting process. As open grants have been replaced with targeted contracts, nonprofit organizations have been required to meet a variety of stringent conditions for funding. Some of these, such as the requirement to find and develop agreements with other partner delivery and funding organizations, will be examined in more detail later in this chapter in the discussion of the theme ‘competition.’ Other conditions include the requirement of finding and providing in-kind funding from within the organization or local community, and requirements that any funding provided through the contract not be used to pay for administrative costs within the agency, or for any long-term infrastructure, such as computer equipment or furniture. Such items are often referred to as in-kind, but must be funded from private fundraising efforts, or from grants from other agencies or foundations such as the United Way.
These requirements can be particularly onerous for small grassroots organizations as they entail significant amounts of legwork and community development on the part of those putting together proposals and applying for contracts. Their intent, in part, appears to be a way of separating ‘serious’ contenders for work as program deliverers from those who may be ‘dabbling,’ as well as ensuring that there is a community-level dialogue and negotiation about resources and priorities. The requirements also ensure that the nonprofit organizations end up contributing substantial time and resources towards the delivery of government programs. As Miller (2005) describes it, if a nonprofit CEO were to transfer to the private sector and have the same limitations placed on them, their experience might be:

You’re now back in the for-profit universe … and you’re the owner of a restaurant. Your paying guest comes to pay the bill, offers a credit card, and prepares to sign the charge slip. But before signing, the guest says, “I’m going to restrict my payment to the chef’s salary. He’s great, and I just want to make sure I’m paying for the one thing that makes the real difference here. I don’t want any of this payment to go for light, or heat, or your accounting department, or other overhead. They’re just not that important. The chef is where you should be spending your money!” (p. 9)

From the perspective of the government funder, the organization already has an administrative apparatus in place, and so there is no need to fund this. From the nonprofit organization’s perspective, this apparatus was capable of handling a smaller agency, and one without the significant burden of a new program, one that possibly requires a new physical space, and takes up time of senior staff, board members and volunteers. This situation is exacerbated by the requirement of writing detailed proposals and applications, and doing the networking required to ensure a high probability of funding.
And when administrative processes within government funders or co-funders are slow or delayed, nonprofit organizations can experience substantial periods in which they are subsidizing government operations, as preparation and implementation must meet external deadlines that are oblivious to the funding agency’s own processes and timelines (Phillips & Levasseur, 2004, p. 460). The following vignette illustrates how this dynamic can play out for a grassroots nonprofit involved in developing a proposal to deliver and evaluate an innovative pilot project for a government funder.
VIGNETTE 5.2.1 (GRASSROOTS) – OFF THE SIDE OF THE DESK
In my work for a university centre that is coordinating a government-funded demonstration project, I have been asked to help community agencies develop their proposals to participate in the process, and in particular, work with them to figure out how to build evaluation activities into their project proposals. The proposals are the second stage of the process; the first is a solicitation of letters of intent, asking interested agencies to submit a brief letter describing their interest and answering a couple of questions about the suitability of their community and agency. This process is established in recognition of the significant demands placed on agencies in responding to open requests for proposals, and the idea is to do an initial filter of letters of intent, and then request full proposals from only a dozen communities, of which approximately eight would be funded. In addition, all agencies invited to submit a proposal could apply for reimbursement of a little more than $1,000 towards their costs and investment in putting together the proposal. The project itself requires that participating agencies and communities contribute in-kind resources representing approximately a third of total project costs.
Part of my contribution to developing the proposals involves visiting each prospective site and meeting key individuals who would be responsible for implementing the projects if they are successful. In these site visits I strive to answer questions about which agency and community resources may count as ‘in-kind’ contributions. The funder does not appear to have a clear idea about what this entails, so the team I am working with tries to be generous and flexible in our definition of what is allowable. Early in the process, it becomes clear in discussing the proposal development process with the agencies that there is a disparity between the capabilities and resources that different agencies are able to contribute to developing the projects. A few agencies have individuals on staff who specialize in writing proposals – commonly the executive director. A couple of agencies have used the development funding to hire a local consultant to help them prepare their proposals. Most agencies have no staff to undertake such work, so they rely on volunteers – who may be board members, and who may also be working in other agencies or government departments as field workers and program delivery personnel. It is from one of these volunteers that I first hear the phrase “off the side of my desk.” It is explained to me as being so overloaded that there’s no more room to take on additional work ‘onto the desk.’ It signifies the second and third priorities that volunteer and community development work represents to those who are already over-extended in their official capacities and responsibilities. In this context, several individuals describe the evaluation component of the project development to be “off the corner of the side of my desk” – in other words, of such a low priority that it will be attended to rarely, if at all.
Despite the financial support for proposal development and the high probability of success in obtaining funding, several of the invited agencies decline to submit a proposal – the effort represents too much commitment by already taxed staff and volunteers, with a one-in-three chance of failure. One individual describes the situation as, “We just went through this proposal writing process last month, and we put a lot of our hearts and souls into a project that didn’t get funded. We’re just too burned out and discouraged to go through that again so soon.” A total of eleven proposals are received, and eight projects funded. As I contact each agency to begin working on the evaluation components of the projects, I discover that one of the contact people I had met and anticipated working with has moved into another job, and is no longer with the agency or the project team. In another agency, my contact turns out to be a different person than planned, but someone whom I had met in my first site visit. This person is assigned to the project after the contract is awarded, and learns about the agency’s commitment by reading the proposal after-the-fact. This project is one in which the proposal and evaluation plan have been developed by a consultant no longer associated with the agency. The new project coordinator begins our discussion about implementing the project with a clear vision of how both the proposal and the evaluation will have to change in order to be implemented ‘in the real world.’ Negotiation with the agency and coordinator begins.
REFLECTION 5.2.1
By the end of this project – less than one and a half years from start to finish – half of the key contacts for the eight projects have left the agency or the project prior to its completion, and others within the agency take over implementation of both the project and the evaluation activities.
Having established evaluation sub-committees in each community, I usually have other volunteers who can take on some responsibilities, but in each case there is a need to bring new people up to speed about the project and the evaluation. For the second phase of the project in eight new communities, a new precedent is established, requesting that each agency identify two contacts – a primary and a backup – and have both attend all meetings in a central community, participate in conference calls, and generally be available as a backup and alternate resource person. Because we are working with small and grassroots agencies, many of these individuals are volunteers, rather than staff. The issue of staff turnover within agencies is a serious one for nonprofit organizations and evaluators to cope with. A contributing factor to the high level of turnover is the contracting process, which can take a great deal of time, and involves a high degree of uncertainty, particularly for grassroots organizations. The volatility is experienced in several ways. When contracts are granted, there is often a very short window in which to hire needed project coordinators. Remuneration is often relatively low, and in smaller communities, the pool of workers available may not be extensive. Some individuals may jump from project to project and agency to agency over an extended period of time, but they are often looking for more long-term, secure, and higher-paying work – with benefits if possible. Agencies sometimes have such people contribute to preparing proposals and applying for contracts ‘on speculation,’ with the understanding that they will be hired to do the work if the proposal is successful. However, this means that the projects are also subject to the telegraphed and often clear ‘end-dates’ of contracts.
When there is no likelihood of repeat funding for a project, and it ends in a few months’ time, project coordinators are tempted to look to the next project and the next opportunity that comes up, and leave the job prior to the completion of the project – and in many cases before completing the project report and evaluation. In a Canadian case study of staffing, retention and government funding, Akingbola (2004) examines some of the ways that contract funding and temporary staffing can be detrimental to an agency’s services. Akingbola notes that contract-based funding leads to the hiring of temporary staff, and affects the retention of employees. “Unlike for-profit companies that use alternative staffing for contingent work, the nonprofit (sector) is forced to use temporary staff for core service delivery in government-funded programs” (p. 463). Akingbola argues that the consequences of contingent work in the sector include diminished quality of services, inability to retain acquired program competencies, program instability, employee turnover, employee distraction, and low employee morale. Even the short-term projects described in the vignette experienced substantial instability because of the brief contracting period, as well as capacity stress, regular distraction because of funding issues, and volatile morale that spanned from the highs associated with ‘winning’ the contract to the lows associated with projects ending or facing the defection of key implementation staff. Another tension present in the vignette reflects a related facet of the funding game (Bernstein, 1991) as experienced by grassroots nonprofit organizations. The dance of negotiating co-funders and community partners and involving them in ‘innovative pilot projects’ reflects funders’ reluctance to establish long-term funding for new programs (Burnley et al., 2005).
Governments will provide seed money to get things started, to see ‘what works’ or establish ‘best practices,’ but will not agree to fund projects for extended periods of time. Indeed, they are often funded with the proviso and assumption that the program, if successful, will work towards self-sustainability – being able to continue without additional government funding. Further, these innovative projects must in some way be (or appear) truly innovative – they cannot obtain repeat funding, and so must continually build upon the network of connected strategic solutions to ongoing social issues. For the organizations involved in the pilot project in the vignette, this proposal development process was complicated by an increasingly common requirement for such projects – the need to clearly document and evaluate the intervention. While this project was relatively unusual in that it allowed some of the budget to be applied towards evaluation activities, and even provided consulting support to this end, it still left agencies with a dilemma. The agencies experienced a trade-off between proposing an innovative approach that would maximize their chances of obtaining funding, and keeping the implementation as ‘easy to research’ as possible. Phillips and Levasseur (2004, p. 461) suggest that this trade-off between the risk of innovation and the safety of ‘tried and true’ approaches that are measurable and can satisfy funders’ accountability concerns is a common one, and one that often results in reduced innovation in program development. The participants in this project clearly weighed these choices as they wrote their proposals. Alexander et al. (1999), in a study of the impact of welfare reform and devolution on community-based nonprofit organizations, found that smaller agencies’ capacity to adopt the business-oriented approaches needed to handle government contracts was profoundly limited.
In part, this reflects how devolution has been implemented as policies that promote business-oriented approaches, but without the accompanying tools that businesses are routinely able to apply – such as using savings and efficiencies to invest in infrastructure, or paying for overhead out of revenues (Miller, 2005). The financial stress this puts on organizations affects most of what they undertake, including evaluation activities, as the next vignette highlights.
VIGNETTE 5.2.2 (NONPROFIT) – TRADE-OFF
One of the medium-sized nonprofit organizations involved in the eight-community demonstration project has designed and implemented a very novel approach to community education involving two local schools. I have worked with the coordinator to understand their new model, and to find a way to portray it in the report for the funder. The approach involves the development of a detailed curriculum that I have not seen beyond the initial planning stages. I am eager to see the final product as implemented and reviewed by the on-site coordinator who has been overseeing the implementation of the project and evaluation. In my third site visit to the community, I meet with the coordinator, Alicia, and express the hope articulated by those in several groups that this innovative curriculum can soon be shared with the other projects, and variations on the approach may be tried with local schools in those communities. My discussion with Alicia begins on a very positive and collegial note, but quickly takes a most puzzling turn. She tells me that she has great results from the six-month follow-up for the curriculum, which looks extremely promising. But she cannot show me the actual curriculum. I remind her that sharing and documenting the process and the outcomes is a clear part of the rationale for the project, and our contract with her agency.
Alicia explains to me that the curriculum itself was developed with that part of the resources considered by the agency as ‘in-kind,’ and so the curriculum belongs to the agency, and not the project or the funder. I express my surprise and concern about the agency’s choice, and my belief that it will affect their ability to obtain the balance of their project funding, as well as future funding through this initiative. I also make an appeal to Alicia’s sense of fairness and collegiality – how important it is to share knowledge about our successes with others. I am told that the decision is not up to her, and that if I want to know more about it, I should talk to Anne, the new and recently hired Executive Director of the agency. I am able to arrange a meeting with Anne the next day. Prior to the meeting I speak with the project’s director at the university centre, and get direction about the funder’s position on ‘ownership’ of the curriculum. When I meet Anne at a local restaurant, I find her to be pleasant, clearly excited by the potential of the curriculum, but matter-of-fact and firm – the curriculum was created by her agency’s staff, it belongs to the agency, and they intend to develop it further so that they can market it for long-term revenue generation. I suggest that the funder’s view is that they legally own the curriculum, having paid for its development. Anne’s position is that the funder is unlikely to try to force the agency to provide the curriculum over a small $30,000 contract – the legal bills would quickly outstrip the total project funding. Further, she appreciates that she will not be able to obtain the final contract payment of the 10% ($3,000) held back until project reports are submitted, but it is a risk that she and the agency are willing to take.
And as for future funding, I am reminded that the funder has a policy of not offering repeat funding, so nothing is really at stake, and she deems that existing contracts with other departments are secure and not at risk. The meeting remains cordial; I express my disappointment and hope that the agency will reconsider, but leave the community convinced that I will not be able to obtain the project information I need for my final evaluation report.
REFLECTION 5.2.2
Alexander (2000, p. 287) examined adaptive strategies of nonprofit human service organizations in the face of devolution. In coping with new expectations from funders, nonprofit organizations have developed a variety of strategies, including: 1) strategic expansion of services and client bases, 2) networking to stabilize and develop revenue streams and resources, and 3) increased use of business techniques and technology to generate outcome measures and an image of effectiveness for funders. In my work with small and medium-sized nonprofit organizations I have seen a growing and virtually unrelenting focus on trying to stabilize funding, which comes from disparate sources, never covers core agency costs, and remains short-term, even when the probability of funding being renewed is very high. Often my initial discussions with nonprofit agency clients – usually the Executive Director, but sometimes board members – revolve around the utility of conducting evaluation as a means of demonstrating to potential funders the capability of the organization, and the effectiveness of the programs and services delivered. The hope expressed to me, particularly with small and grassroots organizations, is that evaluation can help establish or confirm the organization’s legitimacy.
While funders are increasingly asking for evaluation as part of the accountability package, being able to demonstrate a willingness and capacity to undertake it is seen as a pre-condition to obtaining even initial contracts with a new funder – a willingness to self-regulate, and competence to be left to do it with minimal input or support. The underlying common experience of these grassroots and small nonprofit organizations is that they are stretched by trying to take on government contracts, even in an area in which they have been developing and offering their own programs and services in the past. Taking on new contracts leads to a juggling effort to find a balance that will allow them to maintain a consistent and stable staff complement, in part for their clients, who develop long-term relationships with staff members, and in part for the staff, who would like some certainty in knowing what their future employment situation might be. The agencies often do not see remaining the same size as a long-term option, and feel pressure to either grow or close shop. But growing slowly or evenly is a challenge. In the Canadian context, Hall and Reed (1998) address capacity by questioning how much government can download to the nonprofit sector. Their concern is that the sector is diverse but does not have the capacity to handle what is being devolved to it. They note:

The nonprofit sector also has a number of inherent limitations: an inability to generate resources consistently or on a sufficient scale; a tendency to focus on particular groups of the population, leading to gaps in coverage and the duplication of services; the vesting of influence with those in society having command of the greatest resources; and a historical association with non-professionalized approaches in coping with human social welfare problems. (p.
1)

What are sometimes noted as ‘duplication of services’ and ‘focusing on particular client groups’ are often implicated as problematic by funders, and certainly by the New Public Management, yet the experience of nonprofit organizations can be that these features are part of what makes them most effective as a community-level, ‘bottom rung’ in the social safety net. Alexander et al. (1999, p. 452) suggest that:

Nonprofit organizations play a pivotal role in ongoing efforts to devolve federal government programs and transfer public responsibilities to the local level. In the era of welfare reform, the capacity of social service organizations to serve as the public safety net in a manner implied by devolution proponents has come under question.

My case studies and vignettes help in understanding part of that mechanism. When funding comes and goes rapidly, organizations experience stress, burnout and turnover, and fluctuate in size and in the composition of their staff; the resulting transience of programming means that gaps can rapidly appear. In the past, redundancy in programming among diverse delivery organizations helped to ensure that whatever was happening in one organization would not upset the whole safety net. But as contracts have divided up the available contracting work among community agencies in a more ‘efficient’ and rational manner, the organizations’ capacity struggles and volatility can create extended programming gaps within communities – gaps that begin when one agency stops delivering a service, and close only after that agency or another has taken on the task once again. To evaluate such a program, with serial delivery by several community agencies, and possibly involving the same delivery staff, is a daunting task if undertaken by someone from outside of the organization or community. This raises an interesting aspect of how the capacity, turnover and burnout in small and grassroots nonprofit organizations can change my role as an evaluator.
When staff of an agency come and go over relatively short periods of time, the evaluator can represent the only continuity between an early implementation of a program, and subsequent iterations of it. In this sense, my work as an external evaluation consultant can make me a key part of the organization’s corporate memory. For some agencies, I’ve been around longer than most of their staff members, and I’m the only one who can speak to what was done in the past, when proposals were written, and even which other community agencies were involved as partners, competitors or players. Having worked with nonprofit organizations ranging in size from very small grassroots to medium-sized organizations with over a hundred employees working in multiple locations and even communities, I notice a strong correlation between an organization’s size and longevity and its capacity to take on and cope effectively with contract work in the devolved world of government-funded program and service provision. In a case study of some eighteen community-based nonprofit organizations, Fredericksen and London (2000, p. 233) examined possible elements of community-based organization capacity as a way to determine how much capacity exists in the nonprofit sector to take on major challenges for delivery.
These include: 1) leadership and vision (having a directing board, community participation on and support for the board, vision statements, and staff and board members representative of community demographics), 2) management and planning (the existence of formal written policies and procedures for internal operations, and evidence of planning, such as a strategic plan, written goals and objectives, and a budget), 3) fiscal planning and practice (formal financial statements, an organizational budget, and the sources and predictability of funds – self-generated, public contracts, grants, fundraising, revenues), and 4) operational support (predictable levels of staff and of skills among staff, a relative balance of staff versus volunteers, education and training of staff, compensation at a level sufficient to attract qualified staff, the role of staff in the organization, levels of infrastructure and support, and adequacy of physical space, equipment and operational funds). Of the eighteen organizations studied, representing a total of 102 employees, only one exhibited a majority of the elements of organizational capacity that Fredericksen and London were looking for. Certainly in my case studies, such factors are not consistently found or stable within organizations over time, and change based on turnover among senior administrators in the organization, the number and type of contracts in place, and the competitive contracting environment. Botcheva et al. (2002), in another study examining nonprofit capacity, surveyed twenty-five small community agencies serving children and youth in California concerning their evaluation practices. They found that although aware of the importance of outcomes evaluation, most of the agencies lacked the resources to implement it systematically. The agencies expressed interest in learning more about evaluation, and held attitudes and beliefs indicating that they understood it would be worthwhile to them. Canadian data provided by Hall et al.
(2003) support this view of the challenge that evaluation represents for nonprofit organizations, in part by contrasting the rising expectations for evaluation by funders with the lack of resources provided by those funders – the evaluation component is often viewed as part of the administrative apparatus, and is thus expected to be part of the in-kind contribution by the organization. Another facet of the issue of capacity is the impact that devolution has on nonprofits’ previous sources of revenue. Brooks (2000, p. 211) addresses this in examining what he calls the dark side of government support for nonprofit organizations. He asks the question “does government funding displace philanthropy, or encourage it?” Through a survey of the literature, Brooks notes a broad pattern of “crowding out,” particularly in social service provision and health. In terms of capacity, the concern is that potential donors to nonprofit organizations are reluctant to give to organizations that are increasingly viewed as a ‘shadow government’ – as part of the broader government program delivery network. For such donors, giving can feel like paying extra taxes, just for fun. Overall, he suggests that the claim that government funding stimulates giving appears to lack credibility, and indeed, may do the opposite. On the other hand, another facet of the impacts of devolution on the capacity of nonprofit organizations may benefit evaluation in some ways. As the delivery of programs and services becomes institutionalized in nonprofit organizations and staff become professionalized, there has been some concern that there are fewer roles for volunteers, and that the range of activities in which they can become involved in a nonprofit organization can decline.
In the case studies from my practice, I have found that inviting volunteers to participate on evaluation steering committees and workgroups provides a valuable area for contribution to the organization, while also freeing up scarce resources, such as paid staff time, and building on the knowledge and skills of long-time volunteers who often have a longer history with an organization than more recently hired professional staff. Volunteers are also more likely to have direct links with other agencies and programs, and as such, bring a broader community perspective to evaluation than do nonprofit staff, whose perspective on their program is both more ‘interested’ and more narrowly focused. One final issue concerning nonprofit capacity is how the pinch of resources affects what services are offered by an agency. DeVerteuil et al. (2002) discuss how the devolution of programs and services places a greater burden on local resources. Their study examined how a local community used a variety of strategies to limit, ration, or depress demand for programs in the face of mandated delivery. The strategies were initially articulated through low-level and indirect bureaucratic disentitlements, including quality control and spatial consolidation – reducing access by offering fewer and more remote service centres, rendering them less accessible by increasing travel costs for clients (DeVerteuil et al., 2002, p. 232). Later, disentitlement was achieved by cutting benefits, imposing time limits, and requiring workfare. What I find critical about this example is that it shows that not just delivery is devolved: the delivery agent at the local level also ends up taking on responsibility for determining who gets served, and how much.
The onus for deciding who can access programs and services shifts down to the nonprofit organizations as they compete with one another for contracts, and ironically, this can also serve to make both the process and its inadequacy invisible to funders. In this way organizations not only self-regulate, but take on the task of regulating clients, shifting from a focus on providing service to those in need to choosing or identifying those who will be denied service. This issue spans programs and delivery agencies, and goes beyond the actual time-span of any one program’s delivery period, particularly in a contracting situation. As such, it is not likely to surface in evaluations. The issue is a central part of the next theme to be addressed: how devolution can lead to mandate drift for organizations, and what this can mean for those working to evaluate the overall delivery of programs and services over time. This section has examined the capacity of the nonprofit sector to take on the work of devolved program delivery, and some of the implications that this capacity stress has for the conduct of evaluation. Such factors as the lack of core or repeat funding, marketization pressures, the need to provide local ‘in-kind’ resources or matching funds from another source, and the elimination of community-level program and service redundancies combine to elicit stress, burnout and turnover among an increasingly contingent workforce. Such turnover and stress can seriously undermine efforts to provide continuity in conducting evaluation, ensuring that it remains a low priority for agencies and program coordinators. For agency staff, evaluation remains something done “off the sides of their desks.” For someone working as a consultant with and for the organization, the focus on evaluation can be a constant reminder of scarce resources appropriated for a potentially dubious activity.
Theme Three: Mandate Drift
This section addresses the stability and continuity of programs delivered by nonprofit organizations in a devolved program and service context. The devolution of programs has been accompanied by structural reorganization of the sector, and a shift in how programming priorities and revenue generation are undertaken within nonprofit organizations. The lure of contract dollars to deliver programs similar to those that organizations are already providing can be difficult to resist. That the new revenues offer opportunities for organizations to grow and stabilize funding seems readily apparent to those in the organizations. That the contract revenues also bring service and delivery expectations that are different from those the organizations have been accustomed to can be somewhat less transparent, and certainly can seem like a detail that just needs to be worked out ‘down the road’. This section examines some of the hidden costs associated with the contracting process, and in particular, how evaluation activities can ameliorate or exacerbate these impacts. Central to this discussion is the increasing tension in nonprofit and grassroots organizations between their growing dependence on new contract funding and their lost independence, both in their relationships with communities and clients and with respect to their ability to offer unique and targeted programs and services. The vignettes in this section highlight some of the mechanisms by which mandate drift can occur in organizations, and the role of evaluation in possibly magnifying or ameliorating such change. Although many of the discussions in the literature on devolution focus on pessimistic interpretations of the implications for organizations, Shuman (1998) suggests a potential benefit of devolution – that it raises the opportunity for local organizations to try out new and innovative ways of delivering programs and services.
His interest is in the possible role of new information technologies to support decentralized delivery when combined with a variety of forms of centralized and decentralized control. Vignette 5.3.1 portrays a small nonprofit organization that is introducing a new automated reporting system requested by the funder, with hopes that it will meet both local and national information needs.

VIGNETTE 5.3.1 (NONPROFIT) – THE NEW INFORMATION SYSTEM

The northern nonprofit organization has ongoing funding from a national department, and also delivers several programs that use funding from several provincial departments. The main focus is on health and health promotion in the community, and the agency has approximately a dozen staff members and twice that number of volunteers who are directly involved with the public provision of programs and services. I have been asked to evaluate the national program envelope, which consists of five broad programming areas, delivered through the central and three regional offices scattered throughout the area in smaller communities. My client is the nonprofit organization, although the evaluation is at the request of the national department, which requires that all agencies conduct such a review every five years. In my early meetings with agency management, I am directed to use data compiled in an electronic database that has been custom designed for the organization. I am told that this is the third and final version of the software, and has been in place for staff to use for approximately a year. The software has been developed locally to meet the specifications of the national department, but based on an on-the-ground understanding of the local program and service environment.
The national department has partly funded the development, with the understanding that if it satisfactorily meets the information needs, it will be licensed for use across the province and possibly across the country – a potential source of revenue for the agency, and a ‘ground up’ approach to information systems development that the department hopes will make it more palatable to administrators in other contracting agencies. The agency executive director presses to have the data from the information system used in the evaluation. She is interested in certifying its usefulness for the national department and prospective licensees in other jurisdictions. I start examining the database, introduced to it by one of the professional staff who has responsibility for looking after it and for training staff across the agency in its use. Lilly describes the software’s features with obvious pride, taking me through nooks and crannies of how it details participation in agency activities by all clients. She explains that each local office has a computer hooked up to the database through a network, and staff have been taught how to input data about their activities, which is to be done on a weekly basis. As we start examining various data fields, and I make notes about some of the tabulations that I want Lilly to run for me, I start to notice that many of the fields have very low counts, and some are blank. When I point this out to Lilly, and ask her about it, she looks towards the open door of her office, lowers her voice almost to a whisper, and tells me that some staff have not been enthusiastic about entering data into the system. She explains that they are not yet comfortable with computers, and that they are front-line program delivery staff and professional counsellors who may resent having to do what they regard as clerical work.
I quickly realize that there are far fewer data in the system than the executive director has led me to anticipate – data entry for several programs started only a matter of weeks earlier, and the oldest data in the system are less than three months old. The consistency of data entry also appears to have been haphazard, at best, and despite Lilly’s enthusiasm, I deem it extremely unlikely that the system will provide data usable for the evaluation. My first efforts to broach the topic of the lack of data in the information system with the executive director are dismissed as a product of the newness of the system – once we ask the right questions of it, she assures me, the data will be available. Hoping that perhaps there is simply an issue of record keeping that can be resolved by devoting some dedicated data entry time by an agency clerical person or my assistant, I start my visits to the regional offices and my initial informal meetings and interviews with delivery staff. I decide to raise the database as an issue for my second stage discussions – after the first group level introductions, and when I meet individuals on a one-on-one basis. Given Lilly’s portrayal of the possibly computer-phobic response by front-line staff, I am surprised to see the apparent facility of staff with using computers to find information relevant to our discussions. They have computers readily accessible, and demonstrate a practiced hand in logging into the system and pointing to specific programs and activities I ask about. Without trying to pre-judge why the database has not been used, I start to ask staff general questions about what training they have had, how they use the information system, and what they like and don’t like about the database.
While I do not have a mandate to evaluate the database itself, it appears that I will need to develop a reasonable rationale for not using it, given the expectation by the national department and the executive director that it will have prominence as a central source of evaluation-relevant data for my study. My discussions with front-line staff uncover several themes that portray a different picture of the dynamics of introducing the information system. Staff actively resist the database because they feel that it does not accurately reflect the kind of work they do. For example, many of the programs involve delivering information, educational and clinical group sessions, and not just one-on-one counselling. The database does not have a field that allows the staff person entering the data to identify both group and individual interventions, or the kind of group or the number of people attending the group. The database does request individual names for participants – information not necessarily relevant or even known for public education events, and considered by staff to be inappropriate for clinical sessions. The latter represents another serious concern for front-line staff, who are not convinced of the data security of the database system. Clinical staff refuse to enter confidential information about clients into the system. Counsellors point out that non-clinical staff have access to records, and argue that there are no good reasons provided for putting that information into the database. The key reason given – that it would allow other (backup) counsellors to access the information if needed, or when clients participated in more than one program – overturns a previously established custom of keeping and sharing physical files in a limited way, and as needed. Counsellors argue that much of the work they do concerns sensitive information, and the confidentiality of that process is vital to their effectiveness.
Indeed, some clients had heard about the new information system, and without knowing anything about it, have refused to talk to counsellors unless they promise not to put any information about them into the system. As I talk with people across the agency, I hear stories about rivalries and competition among regional offices, mandate and policy disputes between management and staff, and efforts to change the qualification requirements of staff – particularly those doing clinical work. I am also told that the information system designers provided few opportunities for front-line program delivery staff to give input concerning their work. Indeed, when they have had the opportunity to do so, staff seem to have made this the first point of their resistance. Resistance to the database seems to be both overt and covert. My own evaluation report makes recommendations concerning how to address some of the information system implementation issues, but does not rely on information from it concerning even the most basic program output statistics. The constellation of programs and services portrayed in the organization of the information system does not appear to coincide very accurately with the activities and program definitions provided by staff. In working to reconcile the two, I feel caught between multiple parties with different interests, but also clearly recognize that I lack much of the background necessary to understand all of the rapidly evolving dynamics of the situation.

REFLECTION 5.3.1

Staff resistance to using the database appears nuanced and complex. It reflects a response to control imposed by and perceived to be imposed by the agency’s senior management, the dynamics of conflicts and competition for resources and independence among the regional offices, pre-existing trust issues in relationships among staff, and management efforts to challenge the credentials and required qualifications of staff doing a variety of clinical work.
Efforts to introduce the information system affect the definitions, activities, boundaries and mandates of program areas, which are in dispute. In part, staff resistance appears to represent resistance to participating in self-regulation. The agency is in a time of transition, and the database represents a focal point for much of that change. The national funding department is using the development of the information system to impose, if not order, then at least a sense of coherence concerning the agency’s programming. They want to understand what is happening, and in order to do this they are clearly favouring, and indicating through their requests, those activities they view as priorities. The process also serves to polarize positions between staff and management of the agency. In working to reconcile the two, my efforts seem to represent a first step in mediation and negotiation between management and front-line staff – in effect, laying the groundwork for a discussion about what efforts are considered worthwhile, and what is the appropriate work of the agency. Management appears to be using the database as a lever for change. It is not clear to me whether this change is at the official behest of the national funding department, although the trend does seem to reflect currents happening at other comparable agencies across the province – including those with primarily provincial-level funding. From what I see of the national department’s representative in the two times that we meet, the funder is not averse to having the full range of programming activities incorporated into the database – as long as the key features they are interested in are represented. They seem somewhat unaware of how their requests for specific types of information impact on dynamics within the agency. Carrilio et al.
(2003) observed that those delivering programs and services do not consistently use information systems that would help them collect data required by their funders, even when they have a great deal of support, technical assistance, and training. Other factors they noticed that influenced the use of such software included organizational leadership, attitudes, accountability expectations of the funder, and the organizational culture, including how programs collect, organize and use data within the organization. Carrilio et al. (2003) focus on the more overt and intended components of organizational funding, rather than on resistance as an underground response. The vignette highlights some of the potential unintended impacts of information systems, showing that even what are intended as benign efforts to capture a picture of what is happening in a program can lead to misunderstanding, the perception of imposed control in the organization, and a re-evaluation of basic assumptions concerning what is done, what should be done, and who should be doing it. It can lead to shifts in program and organization mandate. As the evaluator dropped into this situation, even though I tried to keep from taking sides in disputes, my endeavours were used by both parties in the polarized debate to reinforce efforts to direct the changes taking place. By trying to use the information system for the evaluation, I raised what had been an underground area of control and resistance into a public dialogue about the direction of the agency. This discussion was not part of my mandate, but once the genie was out of the bottle, I was obligated to present the information I had compiled in a way that facilitated respectful discussion. Even so, my role in this deliberative process was circumspect and limited – my evaluation work identified issues and provided an opportunity for people to talk about them, but I did not facilitate the discussion.
If I had negotiated taking on that role, I might have focused on the implications of the shifting mandates for the organization. Instead, with my more limited opportunities to provide input to this process, my report described and contrasted different versions of program goals, activities, client groups and linkages, as described to me by the funder (expectations), agency management (the new vision), agency front-line staff (what they did), and other stakeholders (clients and community partners describing what they want and need). This presented the organization and staff (who each were to receive a copy of the report) with a framework for identifying how their respective visions of the agency overlapped and differed. At the heart of these processes of imposed change, resistance and negotiation are pressures to standardize programs and services, or at the very least, to standardize the ways in which program and service activities are measured and documented. Such standardization impacts on agencies and delivery staff, but the key impacts are usually experienced by clients of programs and services. The trade-off is often between financial security for the program and agency, and the client-centeredness of the process as experienced by participants. Grassroots nonprofit organizations typically become established in relation to some issue of local concern. If they are offering a program or service, it is often in response to a perceived gap among existing programs and services within a community. This local responsiveness and knowledgeability of grassroots and small nonprofit organizations is part of what makes them attractive as potential delivery agents – they have grown out of perceived need, they often have developed based on interagency cooperation and a great deal of community effort by volunteers, and they typically represent innovative approaches to coping with community needs and priorities.
Recent research focuses on how small nonprofit and grassroots organizations that begin to deliver programs and services funded through contracts with government funders experience pressures for standardization. Schmid (2004) examines how nonprofit human service organizations, particularly those serving special needs and at-risk populations, can lose their unique identities when taking on such contract work. He notes that it affects the organizations’ roles as gatekeepers to services and as advocates for those most at risk (p. 14). Similarly, Eikenberry and Kluver (2004), Scott (2003b) and Alexander (1999) identify comparable impacts on efforts by nonprofit organizations to advocate for their clients. This is more than simple reluctance to ‘bite the hand that feeds them,’ but reflects what can be a subtle process of shifting mandate based on trying to “cobble together projects and partners to survive” (Scott, 2003b, p. 4), and simply working to be a cooperative community participant in establishing and maintaining credibility for local efforts and proposals. The following vignette explores some of the subtle ways that organizations can experience such shifts in the mandates and ‘missions’ of programs and the entire agency. It describes a variety of the factors that contribute to how the process occurs, and speaks to how change can happen rapidly and yet in such a subtle way that participants may not even be aware that it is occurring.

VIGNETTE 5.3.2 (GRASSROOTS) – STEALTHY CHANGES

I am meeting the members of a program team. I have been working with the agency for several years, and have watched as one of their main programs has grown and evolved. This counselling program has provided long-term supports to community members who are experiencing multiple problems, and have not found other programs suitable for their needs.
It has been a signature program for the agency within the community – one that has defined the organization’s place and value as offering unique services that are focused on the participant’s needs. Indeed, a key element of the program has been the intensive processes used to help clients identify what they needed, and how to build and plan their next steps.

Three years earlier, the agency obtained provincial government funding related to this program for the first time. Prior to this, the program had been funded through a combination of grants from several sources and local fundraising efforts. The new funding came as part of a broader funding envelope offered through similar agencies across the province, and in order to access this program funding, the organization was required to become affiliated with a network of service providers – a federation or ‘umbrella group’ of agencies offering similar programs and services, and through which this funding is distributed. The funding originates with the province, but the federation of agencies has recently introduced new funding guidelines. These guidelines reflect an effort to provide each of the participating agencies with comparisons based on program statistics submitted as part of the reporting process to the funder and federation. At our meeting to discuss how we could evaluate the counselling program, and what might be some of the appropriate measures for such a program, the cross-agency comparisons are raised as an example of what would be problematic for front-line staff. I am told that the average number of clients seen by counsellors in the agency has been lower than the average for other agencies in the comparison reports.
While no overt pressure has been put on this or other agencies to increase rates, funding cutbacks loom on the horizon, and the subtext of the comparison process appears to be that those agencies reporting the lowest ‘efficiency’ rates may be at risk of having their program funding cut. My first reaction is to ask about the origins of these statistics – whether they represent an average per counsellor, or by day of counselling. I am aware that most counsellors in the agency work part-time, and so this would be reflected in any global statistics about clients per counsellor, unless the numbers were standardized in some way. As we work this through, and look at the comparison tables, it appears that this is likely not an issue, but it is something that the agency’s contact with the federation will pursue to clarify how the statistics are compiled. I then ask about comparisons of apples and oranges – how does the counselling done at this agency compare with that in other agencies? One answer refers back to the previous discussion – that even when the counselling is the same, if it lasts longer at one agency, then it might get counted differently than at another. For example, if the statistics of clients per counsellor reflect only the number of clients, and not how often each is seen within a given time period, then this agency is likely to come out less well in comparisons with agencies that focus primarily on short-term counselling. This agency and this program have a history of providing counselling that lasts up to a year in duration. Nowhere else in the community is such counselling available for those who cannot pay a per-session fee. The agency has a short-term counselling program – intended to address immediate needs and rarely providing support for longer than a few visits. But this program’s long-term supports appear to differ from those offered at many of the other agencies in the federation.
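The standardization question raised here can be made concrete with a small numeric sketch. All of the figures below are invented purely for illustration (they are not drawn from the agencies described); the point is only that whether ‘clients per counsellor’ is computed per staff head or per full-time-equivalent (FTE) position changes how an agency staffed mostly by part-time counsellors fares in cross-agency comparisons.

```python
# Illustrative sketch (all figures invented): how the choice of denominator
# changes cross-agency "efficiency" comparisons of clients per counsellor.

# Agency A: three part-time counsellors (0.5 FTE each), long-term counselling model.
# Agency B: two full-time counsellors, short-term counselling model.
agency_a = {"clients": 45, "counsellors": 3, "fte": 1.5}
agency_b = {"clients": 90, "counsellors": 2, "fte": 2.0}

def per_counsellor(agency):
    """Naive rate: clients divided by a simple head count of counsellors."""
    return agency["clients"] / agency["counsellors"]

def per_fte(agency):
    """Standardized rate: clients divided by full-time-equivalent positions."""
    return agency["clients"] / agency["fte"]

# Head-count comparison makes Agency A look far less "efficient" (15 vs 45) ...
print(per_counsellor(agency_a))  # 15.0
print(per_counsellor(agency_b))  # 45.0

# ... while standardizing by FTE narrows the gap considerably (30 vs 45).
print(per_fte(agency_a))  # 30.0
print(per_fte(agency_b))  # 45.0
```

The remaining gap in the FTE-standardized figures reflects the apples-and-oranges problem discussed above: an agency counting each long-term client once per year will always report fewer ‘clients’ than one cycling through short-term cases, regardless of how the denominator is standardized.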
Although most clients do not actually undertake counselling for the full year of potential support, those few who do increase the average significantly. At this point, the agency’s new program supervisor suggests that perhaps it would make sense to change the duration of counselling for individual clients to either six or even three months. This would ensure that the agency’s statistics would improve relative to other agencies, but it would also address the issue of ongoing waiting lists to access counselling, and give a clearer rationale for transitioning clients to other supports sooner, rather than leaving clients with the impression that they all had a guarantee of twelve months of counselling. Shortening this period, even if it could be extended on an as-needed basis, would mean that most clients would be focused on shorter interventions, and not take extra counselling time that was not needed. I note that this represents a change in the agency’s mandate to accommodate the perceived accountability requirements of the funder, and ask whether this shift is one that the agency really wants to make, given its history in trying to provide supports to those in need of long-term counselling. A spirited discussion ensues. As a final point of clarification, I ask what else the counsellors do besides counselling, because if the statistics are compiled only about hours spent counselling, then other activities undertaken on behalf of the agency should not be used in constructing the statistics. For example, as this is a small agency, most counselling staff also spend some time each week working with crisis support and staffing the crisis line, doing community liaison with other agencies, and providing backup to other agency programs and services. The ensuing discussion raises the question of how to categorize or ‘count’ participation in counselling groups offered by the agency.
Depending on the type of group, these sessions last anywhere from one to three hours, and have between four and a dozen participants. Some are ‘open’ groups, using a drop-in format, and as such, some of the people attending are not part of the agency’s usual counselling clientele. It seems that this facet of the counselling program has been particularly problematic in terms of how statistics are recognized by the umbrella agency, as only ‘new’ clients can be counted, no matter how many may attend each session. I arrange with the agency contact to follow up this discussion with representatives of the federation – clarifying expectations as well as how statistics are used, and figuring out how the information about group work can be more accurately counted and reconciled with statistics about individual counselling provided by agency staff.

REFLECTION 5.3.2

A key pressure experienced by small nonprofit and grassroots organizations is to emulate the larger organizations or risk closing their doors. Taking on contracts for program delivery – even when the programs have been locally developed and established prior to obtaining external funding – introduces a variety of ways that programs can be shaped over time by the funder or its delegated representative. The example presented in Vignette 5.3.2 provides several ways of exploring the mechanisms through which mandate or mission drift occurs. One builds on the incremental changes accompanying the transitory nature of employment in the sector. As people move on to new jobs and different agencies, threads of continuity between community connections can weaken, and be replaced by those established and maintained by paying heed to the funder’s priorities. In the vignette, the agency’s new program supervisor is not aware of the history and rationale for the long duration of counselling offered through the program, and in this way I provide that connection through my historical role and association with the agency.
Newer staff see only the current reality of program funding and external requirements, without necessarily being aware of the prior rationales for program features. Rather than relying on the evaluator to be available to provide this corporate memory for the agency, evaluation can contribute such supports by providing thick, detailed description of program histories and the rationale for programs, and by developing program logics or logic models that provide a clear background for employees, new or old, who want to understand how and why a program has the form that it does. As noted in section 5.1 on accountability, logic models can serve to reduce a program’s responsiveness and flexibility, and solidify or freeze its features if the logic model is treated as prescriptive rather than analytically descriptive. Yet logic models can also provide a powerful shorthand overview of the rationale for a program, which can be used to communicate information to funders or community partners, or educate new agency personnel about programs, particularly if the evaluator or whoever develops the model builds in this awareness of the mutability of the logic: that it can change to reflect ongoing program changes and adaptations, but a