
EVALUATION FOR DECISION IN HUMAN SERVICE PROGRAMS

by

PETER K. BRUCE

B.A., University of British Columbia, 1972

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF ARTS in THE FACULTY OF GRADUATE STUDIES, School of Community and Regional Planning

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA

April, 1980

Copyright Peter K. Bruce, 1980

In presenting this thesis in partial fulfillment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the Head of my Department or by his representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

School of Community and Regional Planning
The University of British Columbia
2075 Wesbrook Place
Vancouver, Canada V6T 1W5

April 18, 1980

ABSTRACT

The fundamental purpose of program evaluation is to gather information that will increase the rationality of decisionmaking. Yet there is ample evidence in the literature that evaluation findings are often ignored by decisionmakers. Why? Some suggest that evaluations are not providing relevant information to decisionmakers and that the manner in which evaluation is being carried out is discouraging the use of evaluation results.

Practitioners in the field of evaluation have long recognized this problem of non-utilization. But attempts to correct the problem have in large measure focused on the final stage of the evaluation effort - the dissemination of results. A major premise of this thesis is that utility (as perceived by decisionmakers) must be built into each and every stage of the evaluation. It addresses the question, "How can evaluation be designed and implemented, and its findings disseminated, in a way that will make it more utilitarian to decisionmakers?"

The author identifies twenty-five concepts or 'Utility Principles' which hold promise to improve the usefulness of results to decisionmakers. To illustrate how these principles might be applied in a social action setting, a utility-focused evaluation model is designed for the Community Service Order Program of the British Columbia Corrections Branch. Finally, the limitations and opportunities of a utility-focused approach to evaluation are examined.

TABLE OF CONTENTS

ABSTRACT
LIST OF ILLUSTRATIONS
ACKNOWLEDGMENTS

Chapter
I. INTRODUCTION
     Evaluation for Decisionmaking
     Study Focus
     Evaluation - A Definition
     Thesis Methodology
     Format
II. PRINCIPLES OF UTILITY-FOCUSED EVALUATION
     Clarifying the Purpose
     Identifying and Organizing the Actors
     Formulating the Evaluation Question
     Measurement and Design
     Data Analysis and Interpretation
     Conveying the Results
     Evaluating Evaluation
     Summary
III. THE BRITISH COLUMBIA CORRECTIONS BRANCH
     An Overview
     The Vancouver Region
     Program Evaluation
IV. THE COMMUNITY SERVICE ORDER PROGRAM
     Concept and Purpose
     Historical Development
     Program Process
     Program Administration
V. APPLYING THE PRINCIPLES - UTILITY-FOCUSED EVALUATION FOR THE COMMUNITY SERVICE ORDER PROGRAM
     A Purpose for CSO Program Evaluation
     Relevant Decisionmakers
     The Evaluation Team
     A Critical Focus
     Methodology
     An Evaluation Process
     Summary
VI. UTILITY-FOCUSED EVALUATION
     Critical Assumptions of the Utility Model
     Suitable Applications of the Utility Model
     Utility-Focused Evaluation - Is it Realistic?
     A Parting Word
FOOTNOTES
BIBLIOGRAPHY

LIST OF ILLUSTRATIONS

1. Organization Chart of the British Columbia Corrections Branch, Ministry of Attorney-General, 1979
2. Estimated Expenditures for the British Columbia Corrections Branch, by Type, 1978-1979
3. British Columbia Corrections Branch Staff, by Function, 1978-1979
4. The Vancouver Region of the British Columbia Corrections Branch
5. Organization Chart of the Vancouver Region, British Columbia Corrections Branch
6. Organization Chart of the Powell River Service Delivery Unit
7. Case Months of Community Service Completed, April, 1977 to March, 1980 (Est.)
8. Entry to the Community Service Order Program, by Diversion
9. Entry to the Community Service Order Program, by Court Order
10. Location of Community Service Officers in the Vancouver Region
11. Types of Probation Orders Imposed by the Courts

ACKNOWLEDGMENTS

TO RANDI
friend, confidante and enduring martyr

Sincere thanks to my advisors, Peter Boothroyd, Nancy Cooley, Ted Harrison and Brahm Wiesman; to Rosemary Montgomery for her wonderful typing; and to my sponsor, the British Columbia Corrections Branch.

CHAPTER I

INTRODUCTION

Evaluation for Decisionmaking

Now, more than ever before, an atmosphere of economic restraint is forcing government agencies to cut costs and improve the effectiveness of their programs. Agency performance is being carefully scrutinized by the public exchequer and administrators are having to justify the cost-effectiveness of their operations with facts and figures. Gloomy economic predictions suggest this trend toward 'no frills', hard-won budgets will continue.

The plethora of literature on evaluation suggests that public agencies are responding to these 'hard times' by allocating more and more human and monetary resources to the development and implementation of program evaluation. Indeed, program evaluation units are now commonly found throughout federal and provincial governments.

But as impressive as the growth of evaluation appears, its track record has been more than a little disappointing. It is now widely accepted that evaluation has had little to no impact on public sector program decisionmaking. States Weiss:

    if we take a clear, cold look at just how useful and cost-effective program evaluation has been for improving service programs and delivery systems, we find that the record is hardly spectacular. Just as most human service programs have had indifferent success in improving the lot of clients, most program evaluations have had little impact on improving the quality of programs. Despite our high-flown rhetoric, evaluators share with program colleagues both the self-serving nature of the justification for our enterprise and the spotty record of success.

Evaluation results, it seems, are being disregarded by decisionmakers. Why?
Admittedly, part of the reason for evaluation's poor showing can be attributed to the purely methodological difficulties inherent in assessing people programs. But it is becoming increasingly evident that the underutilization of evaluation reports is largely a failure on the part of those developing and implementing evaluation to produce results that are relevant to the wants and needs of decisionmakers. (Decisionmakers may be any individuals or groups who make decisions which affect the delivery of program services. They may be a funding body, policymakers, managers, service providers, service recipients, a citizen group or any combination of these. However, the term is being used in this study to refer to those for whom the evaluation has been specifically commissioned - the primary users of evaluation results.) Thompson concludes that the failure of evaluation has resulted from the false but widely accepted belief that evaluations that are highly valid are highly utilitarian:

    Evaluations - however sound their methodologies and however skilled their researchers - do not automatically benefit decisions. This single, simple oversight explains much of the disappointing record of evaluation. Only when evaluations are commissioned in such a way that the researchers have incentives and direction guiding them to collect the information required for decisions can we be confident that they will benefit policy.

The message is clear. The fundamental purpose for evaluation in the context of the social action program must be to serve the information needs of decisionmakers (the author recognizes, however, that there may be other, secondary 'purposes' for evaluation - see Chapter VI). Suchman concurs:

    an evaluation study should be a problem-solving enterprise with a clear-cut relationship to some decision-making function. Perhaps the most crucial question to be asked before an evaluation is undertaken is, "What decisions about the program or its objectives will be affected by the results of the evaluation?"

In a similar vein, Weiss writes:

    The basic rationale for evaluation is that it provides information for action. Its primary justification is that it contributes to the rationalization of decision-making. Although it can serve such other functions as knowledge-building and theory-testing, unless it gains serious hearing when program decisions are made, it fails in its major purpose.

Results are more likely to influence program decisions, then, when the evaluation addresses the needs of decisionmakers. The task of making results useful involves more than simply arranging the data in a certain way or using a particular format for the report. Useful results are inextricably linked to the evaluation plan, the manner in which it is carried out and the way in which findings are presented (see Chapter II). Therein lies the thesis of this study:

    If there is to be reasonable assurance that evaluation will produce results that are useful and used, the information needs of decisionmakers must be addressed at each and every stage in the evaluation effort.
    That is, every major decision involved in the planning of the evaluation, in its implementation and in the dissemination of results must address the question, "Will this action increase the likelihood that the findings will be relevant to and utilized by decisionmakers?"

It is this premise, that utility must be built into the evaluation effort from start to finish, that shapes the approach taken in this study ('utility' is defined as a state of usefulness; what is useful is relative to the perceiver).

Study Focus

This thesis is a response to the need for more useful program evaluation. Its purpose is two-fold:

(1) To identify a set of 'Utility Principles' which, when applied to the development and implementation of an evaluation model, will significantly increase the probability that the evaluation will influence program decisionmaking. ('Utility Principle' is a term coined by the author. It refers to a fundamental state or characteristic of evaluation that is likely to contribute to the generation of findings that are useful to program decisionmakers.)

(2) To demonstrate how some of the Utility Principles could be applied by developing a utility-focused evaluation model for a public sector social action program. (The terms 'utility-focused evaluation' and 'utility model' are synonymous and refer to evaluation that is designed and carried out in a manner that promotes the usefulness of evaluation results to decisionmakers and which incorporates some or all of the utility principles developed in this study.)

First the study draws together suggestions for improving utility from several sources (see Thesis Methodology, this chapter). The aim is to consolidate these 'bits' of theory into a set of guidelines (Utility Principles) that the reader can then use as a basis for creating a utility-focused evaluation model to fit the context of his particular program/agency. However, the principles espoused are not intended to define a written-in-stone formula for achieving more useful evaluation. They are offered only as a conceptual framework from which decisionmakers and evaluators are invited to choose that which fits their unique needs and circumstances.

Some of the principles raised may not be acceptable to some decisionmakers or may not be possible to implement in some contexts. Indeed, one can conceive of an evaluation that incorporates none of the principles developed in this thesis that is nevertheless highly utilitarian to a particular decisionmaker (although this would not be utility-focused evaluation as defined in this study). In the final analysis, utility is relative to the beholder; each decisionmaker will have his own view of what will make evaluation a useful tool for him.

Equipped with a set of Utility Principles the author turns to examine a particular public sector agency.
The purposes of doing so are (1) to help the reader consolidate the theory of utility-focused evaluation and conceptualize its practice, (2) to demonstrate the sort of utility-oriented reasoning essential to those planning and undertaking program evaluation, and (3) to isolate the limitations and opportunities related to the application of the Utility Principles. In short, the case study attempts to bridge the gap between utility theory and practice and to explore the boundaries of usefulness of the utility concepts proposed.

The setting for the case study is the Community Service Order Program, operated by the British Columbia Corrections Branch, Ministry of the Attorney-General. In particular, the case study focuses on the Vancouver Region of that agency.

To achieve the purposes of the case study, and within the setting described, the author develops a utility-focused evaluation model. Incorporated in the model are many of the principles identified earlier in the thesis. It is worth emphasizing that the objective is not to create an operational evaluation model; the intent is solely to use the model development process to accomplish the three purposes identified above.

Whenever it seems pertinent to the study purpose, the discussion moves outside the boundaries of the Vancouver Region administration to consider relationships with other parts of the Branch and with other components of the Criminal Justice System. (The term 'Criminal Justice System' refers, in this study, to all those agencies involved in the administration of criminal justice, notably the police, crown prosecutor, courts and the British Columbia Corrections Branch. It implies that these component agencies are interdependent and that they collectively function as a whole.)

Evaluation - A Definition

Definitions of evaluation, and consequently evaluations themselves, have long been dominated by the thinking that evaluation necessarily means measuring program outcomes in relation to specified goals. Suchman's view of evaluation is characteristic of this goal orientation. He suggests that

    The key elements in... a... definition of evaluation are: (1) an objective or goal which is considered desirable or has some positive value; (2) a planned program of deliberate intervention which one hypothesizes is capable of achieving the desired goal; and (3) a method for determining the degree to which the desired objective is attained as a result of the planned program.

These "key elements" lead Suchman to propose this definition of evaluation:

    the determination... of the results... attained by some activity... designed to accomplish some valued goal or objective...

Assuming Suchman is using the terms goal and objective in reference to a final desired state (rather than intermediate objectives), his definition must be deemed inadequate for a utility-focused approach to evaluation. First, it is oriented to the collection of outcome data which may be irrelevant to decisionmakers' needs (they may only want information about program processes).
Second, there is no indication from the definition whose values and information needs are being served and, therefore, no guidance as to what data should be collected.

Lichfield et al. overcome the first deficiency by excluding any reference to outcomes or goals. Evaluation, they state, is

    the process of analyzing a number of plans or projects with a view to searching out their comparative advantages and disadvantages and the act of setting down the findings of such analyses in a logical framework. The essence of evaluation is the assessment of the comparative merits of different courses of action.

Yet Lichfield et al. make no mention of what to many is evaluation's fundamental purpose, to service decisionmaking. Riecken's definition hints at the importance of addressing decisionmaker needs but remains entrenched in the unnecessarily confining 'goal-orientation' model:

    Evaluation is the measurement of desirable and undesirable consequences of an action intended to forward some goal that the actor values.

The definition adopted for this study is that proposed by Alkin. It is broad enough to be applicable to any information needs decisionmakers may have and explicitly states evaluation's fundamental purpose - to aid decisionmaking:

    Evaluation is the process of ascertaining the decision areas of concern, selecting appropriate information, and collecting and analyzing information in order to report summary data useful to decisionmakers in selecting among alternatives.

'Evaluation' is only used in this study in reference to public sector social action programs.

Thesis Methodology

The approach to this thesis involves three basic steps: (1) the literature on evaluation research is examined to identify a set of principles to guide the development of a utility-focused evaluation model, (2) the case study context is ascertained (the B.C. Corrections Branch and the Community Service Order Program) and (3) selected Utility Principles are applied to the study context to shape the evaluation model.

The number of Utility Principles that one could propose for evaluation seems almost infinite. Indeed, every act in the evaluation effort could conceivably have such a principle attached to it. It was necessary, therefore, to select a manageable number of key principles. The principles proposed in the thesis are those which (1) are likely to have a significant impact on evaluation's usefulness, (2) appear to have general support in the literature, (3) as a set, form a consistent whole, reflecting a cohesive theory of utility-focused evaluation, and (4) as a set, provide guidance for the would-be evaluator at each and every step in the evaluation effort.

The reader will note that not all the Utility Principles put forward in Chapter II are explicitly applied to the study context. The reason is that some of the principles refer to states of mind or human attributes
(e.g. Principle 5: decisionmakers must (1) want and need information, (2) care about and be willing to share responsibility for the evaluation, and (3) have the skills to use the information effectively) that could not be determined without surveying the individuals involved. Time constraints did not permit such a survey. Consequently, only those principles that lend themselves to illustration without input from practitioners in the agency examined are used to develop the model. To make the model truly utilitarian, input from relevant decisionmakers would, of course, be necessary.

Principles, by nature, must be general enough to be of wide application. Applying the principles to the case study, therefore, requires the author to identify the specific form each principle should take, having considered the constraints and opportunities presented by the program/agency environment. This translation of principle into practice is guided by the following three questions: (1) Has the agency the time and resources (human and material) to incorporate this principle in this form? (2) Will this principle, when applied to this context, require the organization to make significant changes to its present mode of operation? (3) Does this operational form of the principle add the most to the utilization potential of the evaluation relative to other alternatives? (A sketch of these questions treated as a screening filter appears at the end of this section.)

Information for the thesis is obtained from four sources: (1) literature on evaluation research, (2) B.C. Corrections Branch documents, (3) B.C. Corrections Branch personnel, and (4) the author's personal knowledge of and experience in the B.C. Corrections Branch. From the literature come the problem statement and theme of the study (evaluation for decisionmaking), a definition of evaluation, concepts for most of the Utility Principles developed in Chapter II and conceptual frameworks for developing the structural elements of the model in Chapter V. Branch documents, together with discussions with Branch personnel, provide the descriptive data needed for Chapter III (the setting) and Chapter IV (the program). It is these two sources as well which help to reveal the contextual considerations involved in applying the Utility Principles to the case study.

For seven of the last eight years (1972 to 1980) the author has been employed as a Probation Officer for the B.C. Corrections Branch. Part of that time has been served in the Vancouver Region, specifically Squamish and Powell River. Although it is difficult to express exactly how and where this experience has been applied to the thesis, there is little doubt that it has contributed significantly to the author's understanding of (1) the dynamics of the organization, (2) the constraints and opportunities facing the organization, (3) the level of management and evaluation expertise in the organization, and (4) the relationship between the Branch and other components of the Criminal Justice System. Integrated with other information sources, the author's experience is drawn upon throughout the thesis.
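The three translation questions above can be read as a simple screen over candidate operational forms of each principle. The Python sketch below is purely illustrative: the candidate forms and their yes/no answers are invented, and it treats a 'yes' to the second question (significant operational change required) as disqualifying, which is only one possible reading of how that consideration should be weighed.

    # Illustrative screen for translating a Utility Principle into practice.
    # The candidate operational forms and their answers are hypothetical.

    def passes_screen(form):
        """A form passes only if all three translation questions come out favourably."""
        return (form["agency_has_time_and_resources"]   # question 1
                and not form["requires_major_change"]   # question 2 (one reading)
                and form["adds_most_to_utilization"])   # question 3

    candidate_forms = [
        {"name": "monthly evaluator/decisionmaker working sessions",
         "agency_has_time_and_resources": True,
         "requires_major_change": False,
         "adds_most_to_utilization": True},
        {"name": "permanent external evaluation consultant",
         "agency_has_time_and_resources": False,
         "requires_major_change": True,
         "adds_most_to_utilization": False},
    ]

    for form in candidate_forms:
        verdict = "adopt" if passes_screen(form) else "reject"
        print(f"{form['name']}: {verdict}")

In practice the answers would of course be matters of judgment reached with the decisionmakers, not boolean facts; the sketch only makes the order of the reasoning explicit.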
Format

The remainder of this thesis is divided into five chapters. Chapter II develops a set of twenty-five Utility Principles, some of which are explicitly applied in the case study. Chapters III and IV describe the case study context. They provide the unfamiliarized reader with background information about the British Columbia Corrections Branch and the Community Service Order Program. Based on a number of the guidelines proposed in Chapter II, Chapter V develops a utility-focused evaluation model for the Community Service Order Program. In the final chapter, suggestions are made about where utility-focused evaluation can best be used and the practicability of the approach is assessed.

CHAPTER II

PRINCIPLES OF UTILITY-FOCUSED EVALUATION

This chapter is based on the premise that there are some fundamental guidelines for evaluation that will enhance the potential usefulness of the results to decisionmakers. The purpose of this chapter is to identify those 'Principles of Utility.' Taken largely from the literature, twenty-five principles are proposed which relate to the three basic stages of evaluation - planning, implementation and dissemination of results. Collectively, the principles are related to the structural elements of evaluation (why evaluate, evaluation for and by whom, and what to evaluate) and to the process or 'how' of evaluation. Each section deals with both evaluation structure and process.

Clarifying the Purpose

Often, evaluations fail to produce meaningful results because inadequate attention is paid to developing a clear statement of purpose at the inception of the evaluation. Where ambiguity exists about why the evaluation is being done, conflict, wasted effort and irrelevant findings result. Gurel is emphatic on the subject:

    I have been impressed often enough and strongly enough by the problem of unclarified and dissimilar motivations that I accord it unquestioned first place among the possible reasons so much evaluation effort comes to naught.

Some motivations to evaluate take the form of 'hidden agendas' that can undermine the evaluation effort. Suchman identifies six 'abuses' of evaluation that reduce or eliminate the utility of results for program decisionmaking: (1) eye-wash - limitation of attention to favourable program aspects, (2) white-wash - avoidance of objective appraisals, (3) submarine - evaluation designed to eliminate a program, (4) posture - evaluating to appear scientific and professional, (5) postponement - using evaluation as a delay tactic, and (6) substitution - disguising program failures by focusing attention on less relevant but defensible program aspects.

The problems of dissimilar and illegitimate motivations for evaluation would seem less likely to occur if evaluator and decisionmakers worked together to develop a clear, specific statement of the evaluation's purpose.

PRINCIPLE OF UTILITY

PRINCIPLE 1: DECISIONMAKERS AND EVALUATOR SHOULD JOINTLY DEVELOP A CLEAR, SPECIFIC STATEMENT OF THE EVALUATION'S PURPOSE.

Identifying and Organizing the Actors

Traditionally, the evaluator has functioned independently of management, assuming full responsibility for all aspects of planning and undertaking the evaluation.
Presumably, the belief is that decisions surrounding evaluation are best left to the expertise of the evaluator and that excessive interaction with management might jeopardize the objectivity of the evaluator. Some authors now believe that the evaluator's isolation from management has been responsible for much of evaluation's poor performance, suggesting that the utility of evaluation may depend in large measure on the active participation of decisionmakers. Rutman comments:

    There is growing recognition that findings are more likely to be utilized if key decision makers are involved in the planning of the study, in discussing the findings, and in considering how the results of the research can be used to improve their programs.

Patton agrees and suggests that key decisionmakers must be identified from the outset. Key decisionmakers he defines as those who have the most critical information needs about the program. He argues further that decisionmakers should be chosen who are personally disposed to evaluation. They should be "1. People who can use information; 2. People to whom information makes a difference; 3. People who have questions they want to have answered; and 4. People who care about and are willing to share responsibility for the evaluation and its utilization."

Another human factor is raised by Weiss, who suggests that evaluation findings might not be used if decisionmakers lack confidence in the evaluator's knowledge and abilities. Attkisson et al. point out that decisionmakers, once armed with useful information, often lack the skills to utilize the information effectively. Typically, they observe, human service managers were once service providers who have come up through the ranks and who lack formal training as administrators. They speculate that such administrators "tend to rely on their own observations as the information base for decision-making. They tend not to be familiar with other sources of information, and therefore to underutilize them." The implication is that when managers are the recipients of evaluation findings, they are more likely to utilize findings (and utilize them effectively) if they are skilled in evaluation, planning and management.

Choice of evaluator should be made with utility in mind. There is a long-standing debate over the merits of in-house versus external evaluators. Outsiders are generally regarded as having greater objectivity, presumably due to their wider perspective and relatively autonomous position. Insiders, on the other hand, are likely to be far more cognizant of the study context and of the issues facing the program. Then too, they are probably in a better position to facilitate the utilization of results. Adams expresses a bias toward the insider, noting that

    they seem to be performing the best agency evaluations that have been accomplished in recent years; and second, they appear unusually suited to increase the utilization rate of productive research.... Moonlighting researchers from college campuses do not provide the continuity, the identification, the communications ability, nor the accumulated experience that is beginning to show up in the superior performance of some in-house units.

PRINCIPLES OF UTILITY

PRINCIPLE 2: THOSE FOR WHOM THE EVALUATION IS FOCUSED SHOULD BE IDENTIFIED FROM THE OUTSET, THEIR INFORMATION NEEDS ASCERTAINED AND THEIR ACTIVE PARTICIPATION SECURED.

PRINCIPLE 3: THE EVALUATOR SHOULD INTERACT WITH DECISIONMAKERS THROUGHOUT THE EVALUATION TO JOINTLY MAKE DECISIONS CONCERNING THE EVALUATION PLAN, ITS IMPLEMENTATION AND THE UTILIZATION OF RESULTS.

PRINCIPLE 4: DECISIONMAKERS FOR WHOM THE EVALUATION IS FOCUSED SHOULD BE THOSE WITH THE MOST CRITICAL INFORMATION NEEDS CONCERNING THE PROGRAM.

PRINCIPLE 5: DECISIONMAKERS FOR WHOM THE EVALUATION IS FOCUSED SHOULD POSSESS CERTAIN PERSONAL CHARACTERISTICS. THEY SHOULD (1) WANT AND NEED INFORMATION, (2) CARE ABOUT AND BE WILLING TO SHARE RESPONSIBILITY FOR THE EVALUATION AND ITS UTILIZATION, AND (3) POSSESS THE PERSONAL SKILLS AND INFLUENCE TO USE THE INFORMATION EFFECTIVELY.

PRINCIPLE 6: IN GENERAL, IN-HOUSE EVALUATORS SHOULD BE PREFERRED OVER EXTERNAL EVALUATORS.

Formulating the Evaluation Question

One could safely surmise that although there are countless program aspects upon which an evaluation could focus, there exists only one focus which will provide the most useful information given time and resource constraints. The onerous task facing those planning the evaluation is to uncover that optimal focus. Below, a number of authors raise suggestions as to how that might be done.

Thompson makes the obvious but oft-ignored recommendation "that the policy maker have clearly in mind the decisions for which he is commissioning the evaluation. The evaluation should gather that information set that would best guide the decision." Patton adds that the decisionmaker, having chosen a focus, should be able to say how he would use the information obtained. Others have noted that it must be possible to collect the desired data given the time and resource constraints of the agency.

Thompson observes that evaluations which attempt to examine a broad range of program aspects typically do not produce highly useful information. 'Diffuse' evaluations, he states, require a high degree of expertise to be done properly. To improve the utility of evaluation Thompson urges that more emphasis be placed on the 'specific' evaluation (defined as a "limited and directed assignment to provide specific information to the commissioner"). The warning here, of course, is that evaluations should not be so specific that understanding of the phenomenon studied is impaired or that false conclusions are reached. Furthermore, researchers have an obligation to watch for unanticipated events and should not hesitate to alter the evaluation focus (with decisionmaker agreement) if more useful information would result.

It must also be possible and desirable to take action on evaluation findings. Where potential findings suggest action that is politically, morally or economically unacceptable, the evaluation may be of limited usefulness (although sometimes the most important issues will be those that are contentious).
Chommie and Hudson stress the importance of choosing a focus which will provide a direction for action and suggest, in this regard, that more attention be given to program process. Information about outcomes alone, they note, does not tell decisionmakers how or where to make program improvements or to what program successes can be attributed. ('Outcomes' is used only in reference to final program outcomes as opposed to outcomes associated with intermediate means-end relationships.)

    Clearly, both process and outcome are proper concerns for evaluation. The authors contend, however, that giving increased attention to process rather than focusing strictly on outcome, may have significant informational payoff to the various people involved in social programs and, in turn, may lead to a clearer understanding of how and why change does or does not occur.

Patton observes that many evaluations, by focusing on program outcomes, have asked the wrong questions. He notes that the process of implementing a program always contains unknowns that change the ideal so that it looks different when and if it actually becomes operational. Outcome evaluations that assume the program has been implemented as intended may reach faulty and meaningless conclusions:

    Unless one knows that a program is operating according to design, there may be little reason to expect it to produce the desired outcomes. Furthermore, until the program is implemented and a "treatment" is believed to be in operation, there is little reason to evaluate outcomes.

A utilization-focused approach to evaluation, he concludes, must frame the evaluation question in the context of program implementation, and if desired, program outcomes.

Two authors have proposed organizing many of the above considerations into a sequential, multi-step feasibility analysis to identify high-utility foci. Thompson recommends a five-step thought process prior to commissioning an evaluation: (1) identification of the decisions for which information is required, (2) consideration of the feasibility and accuracy of possible studies, (3) analysis of possible decision consequences, (4) evaluation of policy effects, and (5) selection of the evaluation with the highest expected utility value net of cost (a small worked illustration of this selection step follows the principles below). Similarly, Wholey proposes an 'evaluability assessment' which analyzes the decisionmaking system to be served by an evaluation study and clarifies the questions to be answered. The specific task of Wholey's assessment procedure is to identify those program activities and objectives for which there exist agreed-upon measures of success and testable causal relationships. The results of the analysis are then fed back to program managers/intended users who choose the program components and objectives that are amenable to evaluation.

PRINCIPLES OF UTILITY

PRINCIPLE 7: DECISIONMAKERS SHOULD WANT AND NEED INFORMATION TO HELP THEM ANSWER THE EVALUATION QUESTION, THE INFORMATION SHOULD SERVE THEIR OWN NEEDS AND THEY SHOULD HAVE A CLEAR IDEA OF HOW THE INFORMATION WILL BE USED.

PRINCIPLE 8: IT MUST BE POSSIBLE TO BRING DATA TO BEAR ON THE QUESTION WITH THE TIME AND RESOURCES AVAILABLE AND POSSIBLE/DESIRABLE TO TAKE ACTION ON EVALUATION FINDINGS.

PRINCIPLE 9: OUTCOMES SHOULD NOT BE EVALUATED WITHOUT KNOWLEDGE ABOUT PROGRAM IMPLEMENTATION; INFORMATION ABOUT PROGRAM IMPLEMENTATION AND PROGRAM PROCESSES IS MORE LIKELY TO PROVIDE A DIRECTION FOR PROGRAM IMPROVEMENT THAN INFORMATION ABOUT PROGRAM OUTCOMES.

PRINCIPLE 10: A RATIONAL THOUGHT PROCESS SHOULD BE USED TO ASSESS THE INFORMATION VALUE OF ALTERNATE FOCI. THE ALTERNATIVE WITH THE HIGHEST NET UTILITY VALUE SHOULD BECOME THE FOCUS FOR EVALUATION.
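Thompson's fifth step and Principle 10 amount to an expected-value calculation over candidate foci. A minimal Python sketch of that reasoning follows; the foci, probabilities and dollar values are entirely hypothetical, and in practice the estimates would come from the decisionmakers themselves.

    # Principle 10 as arithmetic: pick the focus with the highest expected
    # utility net of cost. All numbers below are invented for illustration.

    candidate_foci = [
        # (focus, P(findings alter a decision), value of the improved decision, study cost)
        ("program implementation", 0.60, 40_000, 8_000),
        ("final client outcomes",  0.25, 90_000, 30_000),
        ("unit cost of service",   0.50, 25_000, 5_000),
    ]

    def net_expected_utility(p_use, decision_value, study_cost):
        """Expected benefit of the information, less what the study would cost."""
        return p_use * decision_value - study_cost

    for focus, p, v, c in candidate_foci:
        print(f"{focus:25s} net expected utility = ${net_expected_utility(p, v, c):>8,.0f}")

    best = max(candidate_foci, key=lambda f: net_expected_utility(f[1], f[2], f[3]))
    print("highest-utility focus:", best[0])

On these invented numbers the implementation focus wins, which happens to echo Principle 9's preference for implementation and process information over outcome information.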
Measurement and Design

The nature of evaluation research literature suggests that research in evaluation methodology has focused almost exclusively on the development of valid and reliable methods. It seems that little attention has been paid to the development of methods high in utility. The non-utilization of evaluation results may in part be due to the widespread acceptance of the faulty assumption that high methodological quality is synonymous with high utility. Based on a survey of decisionmakers engaged in evaluation, Patton observes that "Research quality never arises as an issue in the utilization of many evaluations; in the search for information, decisionmakers use whatever is available to help reduce uncertainty." He concludes:

    there is little in our data to suggest that improving methodological quality in and of itself will have much effect on increasing the utilization of evaluation research. No matter how rigorous the methodology and no matter how sophisticated the statistical manipulations, evaluation research will only be useful in proportion to its relevance to decisionmakers' questions (italics in original).

Patton adds that "decisionmakers were concerned that findings be at least sufficiently relevant that the data could be used to give some direction to pending action." As Patton points out, the concern for relevance does not mean that research quality should be ignored; it simply means that quality is not the primary determinant of utility.

How, then, can decisionmakers gain some assurance that the method chosen will generate relevant information? Again, Patton emphasizes the importance of obtaining the active participation of decisionmakers:

    Utilization-focused evaluation tries to combine a concern for research quality with a concern for relevance by involving decisionmakers in the making of critical methods and measurement decisions. Since no study is methodologically perfect, it is important for decisionmakers to know first-hand what imperfections exist - and to be included in deciding which imperfections they will have to live with in making the inevitable leap from limited data to incremental action.

Furthermore, decisionmakers who have a personal investment in the evaluation process, states Patton, are more likely to use its product:

    Involving identified decisionmakers and information users in measurement and design decisions is based on the assumption that utilization is enhanced if users believe in and have a stake in the data. Belief in the data is increased by understanding it; understanding it is enhanced by involvement in the painstaking process of making decisions about what data to collect, how to collect it, and how to analyze it.
In this and in every step in the utility-focused evaluation process, the evaluation is tailored to the needs, wishes and beliefs of those who will use its findings.

PRINCIPLES OF UTILITY

PRINCIPLE 11: DESIGN AND MEASUREMENT DECISIONS SHOULD BE SHARED BY EVALUATOR AND DECISIONMAKERS THROUGHOUT THE EVALUATION TO INCREASE THEIR UNDERSTANDING OF, BELIEF IN AND COMMITMENT TO EVALUATION DATA.

PRINCIPLE 12: EVALUATION DESIGNS SHOULD BE SELECTED THAT ARE CREDIBLE TO DECISIONMAKERS AND EVALUATOR.

PRINCIPLE 13: MAJOR CONCEPTS, VARIABLES AND UNITS OF ANALYSIS SHOULD BE DEFINED IN TERMS MEANINGFUL TO DECISIONMAKERS.

PRINCIPLE 14: MULTIPLE METHODS AND MEASURES SHOULD BE USED AS MUCH AS POSSIBLE TO INCREASE THE BELIEVABILITY OF FINDINGS.

PRINCIPLE 15: METHODS SHOULD BE SELECTED ON THE ASSUMPTION THAT IT IS BETTER TO HAVE AN APPROXIMATE AND HIGHLY PROBABILISTIC ANSWER TO THE RIGHT QUESTION THAN A SOLID AND RELATIVELY CERTAIN ANSWER TO THE WRONG QUESTION.

Data Analysis and Interpretation

Data analysis and interpretation is usually regarded as the task of the evaluator. He aggregates and summarizes the data in ways he believes are meaningful, arranges the findings in what he thinks is a suitable format and interprets the results in a manner that he feels best explains the phenomenon studied. The inherent risk in this approach, as has been stressed in other sections of this chapter, is that the evaluator's perception of what is an appropriate and useful way to analyze and interpret the data may result in findings that are incomprehensible and/or irrelevant to decisionmakers. Furthermore, if decisionmakers are excluded from data analysis and data interpretation, they may have difficulty believing the findings. If that is the case, decisionmakers might choose to ignore evaluation results.

Patton urges that decisionmakers get involved in data analysis and interpretation in an effort to ensure that findings are understandable and believable. Decisionmaker involvement with data analysis and interpretation, Patton continues, means that they will have an opportunity to think about and work with the data over a longer period of time. Familiarity with the data will avoid sudden surprises and defensive reactions among users:

    evaluation feedback is most useful as part of a process of thinking about a program rather than as a one-shot information input. Thus, evaluation surprises, born of the sudden release of final reports, are not likely to be particularly well received. Such surprises are more likely to increase than to reduce uncertainty.

Utility is also served, states Patton, by separating analysis from interpretation. Analysis he defines as "organizing the data, constructing appropriate statistical tables, and arranging for the data to be displayed in an orderly, usable format." Interpretation involves "making judgments about what the data mean, establishing the implications of the findings, and linking evaluation results to future action." The advantage of separating analysis from interpretation is that users can analyze, draw conclusions and make interpretations without bias introduced by judgments of the evaluator. The evaluator forms his own conclusions which are later integrated with those of decisionmakers.
This utility-focused approach to data analysis and interpretation, then, includes the judgments, conclusions and recommendations of both evaluators and decisionmakers.

Evaluation necessarily means making judgments. To make judgments, standards must be developed and applied to the data. Patton warns that problems may arise if standards are not developed before data collection and if decisionmakers are not involved in standards development:

    Many of the most serious conflicts in evaluation research are rooted in the failure to clearly specify standards of desirability in advance of data collection. This can lead both to collection of the wrong data and to intense disagreement about the standards for interpreting data that have already been collected. Without explicit criteria, data can be interpreted to mean almost anything about the program - or to mean nothing at all.

On the other hand, the advantages of early standard setting with decisionmaker participation are considerable. Patton suggests that

    going through this process ahead of time alerts participants to additional data they need in order to make sense of the evaluation.... Involving staff or other decisionmakers in such a process helps clarify the evaluation criteria that are being used. Finally, if decisionmakers are involved in establishing these criteria themselves, the evaluative process may increase their commitment to use the data for program improvement.

PRINCIPLES OF UTILITY

PRINCIPLE 16: DECISIONMAKERS SHOULD PARTICIPATE WITH EVALUATORS IN ANALYZING AND INTERPRETING DATA.

PRINCIPLE 17: DATA ANALYSIS SHOULD BE SEPARATED FROM DATA INTERPRETATION TO GIVE DECISIONMAKERS THE OPPORTUNITY TO WORK WITH THE DATA WITHOUT BIASES INTRODUCED BY THE EVALUATOR'S CONCLUSIONS.

PRINCIPLE 18: DATA SHOULD BE PRESENTED TO DECISIONMAKERS AS IT BECOMES AVAILABLE AND IN A FORM THAT IS SENSIBLE TO THEM.

PRINCIPLE 19: STANDARDS OF JUDGMENT SHOULD BE ESTABLISHED BEFORE DATA COLLECTION TO GUIDE LATER DATA INTERPRETATION AND TO IDENTIFY POTENTIAL INFORMATION GAPS.

Conveying the Results

There has been a strong tendency among those engaged in evaluation to limit the role of utility-promoting efforts to the dissemination process. This practice of putting all one's 'utility eggs' in one basket runs a much higher risk of failure than the participative approach to evaluation proposed in this chapter. In Patton's words,

    evaluators must cease to think of dissemination as the separate and only utilization component of a project. Utilization considerations enter into the evaluation at the very beginning and at every step along the way. In utilization-focused evaluation, dissemination efforts, far from being the whole cake of utilization, are little more than the frosting on the cake.

Evaluations which incorporate many of the principles discussed so far will have completed much of the task of promoting utility before results are conveyed to decisionmakers. Nevertheless, a number of things can be done to ensure that findings are disseminated to decisionmakers in a useful way.
Several authors have noted that evaluators, in their quest for highly valid studies, often fail to convey the needed information to decisionmakers in time to influence the decision. Attkisson et al. lament:

    when the highly valid data are finally available, decisions have already been made. Data or no data, decisions must be made within some time interval and untimely information, regardless of its validity, is useless if the decision has already been made.

Grobman suggests that evaluators use a formal planning procedure such as Program Evaluation and Review Technique (PERT) to assure that findings are conveyed to decisionmakers in time to be useful (a small scheduling sketch follows the principles below). Caro urges the reporting of interim findings, particularly where decisions must be made in the short term or where decisionmakers' interest in evaluation is likely to wane.

Utilization suffers also when results are presented in a form that is not meaningful to decisionmakers. Reports should avoid unnecessarily complicated and unexplained conceptualizations, technical jargon, generalities and excessive verbiage; recommendations should be specific and, preferably, operational in the short term.

There has been much written about staff resistance to evaluation and the problems it creates for data collection and, later, for program change. Evaluations can be unsettling, anxiety-producing experiences for program staff. Clearly, staff will be more receptive to evaluation, to its findings and to resulting change if their anonymity is protected whenever possible. Likert and Lippitt urge that

    staff members must receive assurances that the objective of research is to discover the relative effectiveness of different methods and principles and that the study is in no way an attempt to perform a policing function. The emphasis must be on discovering what principles work best and why, and not on finding and reporting which individuals are doing their jobs well or poorly.

Staff cooperation means greater likelihood that evaluation findings will be translated into program change. Caro points out that cooperation is fostered when administrators share evaluation findings with subordinates.

Finally, Patton suggests that the identity of the evaluator is an important factor in establishing the evaluation's credibility among users. Yet he notes that very few evaluation reports identify the author. The risk of anonymous authorship is that users who are unable to assess the credibility of the report's source may be reluctant to trust in its findings. Patton goes further. He urges that both evaluators and key persons for whom the evaluation is commissioned should be identified in all reports and presentations. By doing so, there is no question that the evaluation has the full support and participation of management.

PRINCIPLES OF UTILITY

PRINCIPLE 20: RESULTS MUST BE AVAILABLE TO DECISIONMAKERS IN TIME TO CONTRIBUTE TO THE MAKING OF DECISIONS.

PRINCIPLE 21: INTERIM FINDINGS SHOULD BE REPORTED TO DECISIONMAKERS AS SOON AS IS FEASIBLE.

PRINCIPLE 22: FINDINGS SHOULD BE CONVEYED TO USERS IN A CLEAR AND CONCISE MANNER, AND IN A FORM THAT CAN BE READILY UNDERSTOOD BY DECISIONMAKERS; RECOMMENDATIONS SHOULD BE SPECIFIC AND CAPABLE OF BEING OPERATIONALIZED.

PRINCIPLE 23: FINDINGS SHOULD PROTECT THE ANONYMITY OF STAFF MEMBERS WHENEVER POSSIBLE.

PRINCIPLE 24: BOTH EVALUATORS AND THOSE FOR WHOM THE EVALUATION IS CONDUCTED SHOULD BE IDENTIFIED IN ALL DISSEMINATION EFFORTS.
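Principles 20 and 21 are, at bottom, scheduling constraints, and Grobman's PERT suggestion can be made concrete with a small critical-path calculation. The tasks, durations and deadline in the Python sketch below are invented for illustration; the point is simply that an interim report can meet a decision deadline that a final report would miss.

    # PERT-style check of Principle 20: will findings be ready before the
    # decision is made? Tasks, durations (weeks) and the deadline are hypothetical.

    tasks = {
        # task: (duration_weeks, prerequisite tasks)
        "plan with decisionmakers": (2, []),
        "design instruments":       (3, ["plan with decisionmakers"]),
        "collect data":             (6, ["design instruments"]),
        "interim report":           (1, ["collect data"]),
        "analyze and interpret":    (2, ["collect data"]),
        "final report":             (2, ["analyze and interpret"]),
    }

    def earliest_finish(task, memo={}):
        """Earliest finish = task duration plus the latest finish of its prerequisites."""
        if task not in memo:
            duration, prereqs = tasks[task]
            memo[task] = duration + max((earliest_finish(p) for p in prereqs), default=0)
        return memo[task]

    DECISION_DEADLINE = 12  # weeks until the program decision (assumed)
    for report in ("interim report", "final report"):
        week = earliest_finish(report)
        status = "meets" if week <= DECISION_DEADLINE else "misses"
        print(f"{report}: ready in week {week} ({status} the week-{DECISION_DEADLINE} deadline)")

On these invented durations the interim report reaches decisionmakers by the deadline while the final report does not - exactly the situation Principles 20 and 21 are meant to catch.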
Evaluating Evaluation

In the aftermath of evaluation it may be apparent that some aspects of the evaluation were highly successful; it may also be clear that other aspects were inadequately planned, poorly executed or inherently dysfunctional. Some form of self-renewal mechanism will help to ensure that workable model elements are included in future evaluations (where appropriate) and that unworkable model elements are modified or deleted. In effect, evaluation itself should be evaluated to determine what was deficient, what worked well, what improvements can be made and how those improvements can be put into practice. In the final analysis, 'success' for utility-focused evaluation will be measured by the answer to the question, "Did evaluation results improve the rationality of program decisionmaking?" Smoothing the model's rough edges and ensuring it remains in step with changing agency circumstances will keep evaluation utilitarian.

PRINCIPLE OF UTILITY

PRINCIPLE 25: AS AN INTEGRAL PART OF THE EVALUATION PROCESS, EVALUATION ITSELF SHOULD BE ASSESSED, PARTICULARLY IN RELATION TO ITS PRIMARY PURPOSE - TO INCREASE THE RATIONALITY OF DECISIONMAKING.

Summary

Equipped with these guidelines for evaluation, the thesis turns next to the task of fashioning a utility-focused evaluation model for the Community Service Order Program. But first, the context of the case study is described.

CHAPTER III

THE BRITISH COLUMBIA CORRECTIONS BRANCH

An Overview

The British Columbia Corrections Branch (also referred to as the 'Corrections Branch' or the 'Branch') is a division of the provincial Ministry of the Attorney-General. Its primary functions are to provide services, programs and facilities for those in conflict with the law. It offers programs that serve as alternatives to court action (Pre-court Services), custody and supervision for those on remand from the court (Pre-Trial Services), information reports to assist the court in sentencing (Court Services), supervision of offenders placed on probation (Community Services), security facilities for juveniles and adults (Institutional Services), supervision of those returning to the community from provincial gaols (Community Re-entry Services), and informational, counselling and enforcement services in the area of Family Relations (Family and Children's Services).

Figure 1 shows the senior portion of the Branch's organization structure. The province is divided into six administrative regions.

[FIGURE 1: Organization Chart of the British Columbia Corrections Branch, Ministry of Attorney-General, 1979. The chart shows the Commissioner, Assistant to the Commissioner and Deputy Commissioner; Regional Directors of Corrections for the Vancouver, Vancouver Island, North Fraser, South Fraser, Interior and Northern regions; and centralized support services including Inspection and Standards, Information Services, Resource Analysis, Program Analysis and Evaluation, Operations and Information, Religious Programs, Provincial Classification, Staff Development, Medical Services and Psychological Services. Source: Annual Report of the Corrections Branch, Ministry of Attorney-General for Calendar Year 1978, p. 1.]
Each region is headed by a Regional Director of Corrections who assumes responsibility for providing all provincial correctional services within his geographical area. Each region has its own budget, allocates resources, hires and fires staff and develops programs in line with Branch policy and according to its particular needs and priorities. As shown in Figure 1, there are a number of centralized support services within the Office of the Commissioner.

Decisionmaking at the provincial level is largely the responsibility of the Branch Management Committee. This committee is comprised of the Commissioner, the Deputy Commissioner, the Regional Directors of Corrections, and the heads of the centralized support services (see Figure 1). It is they who determine Branch direction, set policy, establish priorities and allocate funds to the six regions.

Branch expenditures represent about one percent of the provincial total. Figure 2 illustrates the breakdown of estimated Branch expenditures for the year 1978/79, expressed in percentages and dollars. The Branch has a staff complement of two thousand sixty-four (1978 figure). A breakdown by function is shown in Figure 3. Growth in the Branch has been dramatic over the past decade. Its expenditures quintupled between 1968/69 and 1978/79, and doubled between 1975/76 and 1978/79.

[FIGURE 2: Estimated Expenditures for the British Columbia Corrections Branch, by Type, 1978-1979. Total expenditures (est.): $53.9M. Source: Annual Report of the Corrections Branch, Ministry of Attorney-General for Calendar Year 1978, p. 19.]

[FIGURE 3: British Columbia Corrections Branch Staff, by Function, 1978-1979. Institutional Services 60% (1246), Services 29% (592), Administration 11% (226); total staff: 2064. Source: Annual Report of the Corrections Branch, Ministry of Attorney-General for Calendar Year 1978, p. 19.]

The Vancouver Region

The Vancouver Region of the Corrections Branch is located principally in the heavily populated southwest corner of the province (see Figure 4). Its boundaries include the City of Vancouver, North and West Vancouver, Squamish, Pemberton, Sechelt, Powell River, Bella Coola, Bella Bella and Ocean Falls. In terms of dollars spent, the Vancouver Region is the largest of the six regions in the Branch. In 1978/79 it accounted for thirty-one percent of the total Branch budget. Located in the region is the Branch's largest provincial gaol, Lower Mainland Regional Correctional Centre (Oakalla District). Over half of the Region's budget (fifty-nine percent in 1978) is assigned to this institution, where forty percent of provincial inmates are housed. In addition, the Region administers three Community Correctional Centres, two juvenile residential correctional facilities and seventeen community-based offices offering probation and Family Court services.
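Though the thesis does not work them out, the budget figures just cited imply some useful orders of magnitude. A back-of-envelope check in Python (the derived numbers are this editor's approximations, not figures from the source):

    # Rough arithmetic on the reported Branch figures. Derived values are
    # approximations, not taken from the thesis itself.

    total_branch_1978_79 = 53.9e6    # estimated Branch expenditures (Figure 2)
    vancouver_share = 0.31           # Vancouver Region's share of the Branch budget
    oakalla_share = 0.59             # Oakalla's share of the Region's budget (1978)

    region_budget = vancouver_share * total_branch_1978_79   # about $16.7M
    oakalla_budget = oakalla_share * region_budget           # about $9.9M

    # "Quintupled" over ten years and "doubled" over three imply compound
    # annual growth of roughly 17% and 26% respectively.
    growth_decade = 5 ** (1 / 10) - 1
    growth_recent = 2 ** (1 / 3) - 1

    print(f"Vancouver Region budget: ~${region_budget / 1e6:.1f}M")
    print(f"Oakalla (LMRCC) budget:  ~${oakalla_budget / 1e6:.1f}M")
    print(f"implied annual growth: {growth_decade:.0%} (1968/69-1978/79), {growth_recent:.0%} (1975/76-1978/79)")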
In addition, the Region administers three Community Correctional Centres, two juvenile residential correctional facilities and seventeen community-based offices5 offering probation and Family Court Services. The Region is divided into four districts - North Shore, Vancouver West, Vancouver East and Oakalla. The first three districts represent geographical areas in which the program of interest, the Community Service Order Program, operates. The fourth district, Oakalla, is a provincial gaol and is of no direct interest to this study. Programs and facilities in the region that are not relevant to the CSO Program are not considered further.

FIGURE 4: THE VANCOUVER REGION OF THE BRITISH COLUMBIA CORRECTIONS BRANCH
[Map of the region; a note indicates the region includes Squamish, Sechelt, Powell River, Bella Coola, Bella Bella and Ocean Falls.]

Figure 5 shows the upper portion of the Region's organizational structure.

FIGURE 5: ORGANIZATION CHART OF THE VANCOUVER REGION, BRITISH COLUMBIA CORRECTIONS BRANCH, 1979
[Chart: the Commissioner, then the Regional Director of Corrections with regional support staff, then the District Directors for Oakalla, Vancouver West, North Shore and Vancouver East, the Director of the Youth Detention Centre and the Director of Lakeside Correctional Centre. Source: E. W. Harrison, Regional Director of Corrections, Vancouver Region.]

Figure 6 illustrates the lower portion of the regional structure using the Powell River service delivery unit as an example (in this study a service delivery unit is defined as a community-based office from which B.C. Corrections Branch services are dispensed. Each office services a specified geographical area). Each service delivery unit (with the exception of Squamish and Sechelt) is supervised by a resident Local Director. Probation Officers and the Community Service Officer are directly accountable to a Local Director. Several Local Directors report to a District Director who, in turn, answers to the Regional Director of Corrections.

FIGURE 6: ORGANIZATION CHART OF THE POWELL RIVER SERVICE DELIVERY UNIT
[Chart: the District Director, then the Local Director (with a Secretary), then the Community Service Officer and three Probation Officers. NOTE: The Sechelt Service Delivery Unit is also administered by the Powell River Local Director but is omitted for the sake of clarity.]

Region-level decisionmaking is carried out by the Regional Management Committee whose function (on the regional scale) is similar to that of the Branch Management Committee. It is made up of the Regional Director of Corrections, the four District Directors, the Director of the Youth Detention Centre, the Director of Lakeside Correctional Centre and a number of regional support staff. Decisions are normally made by group consensus. The right to veto a committee decision or rule on any issue (although rarely done) is retained by the Regional Director of Corrections. The Regional Management Committee determines all regional policy, sets regional priorities, and advises the Regional Director of Corrections on the allocation and control of the regional budget.6

At the lowest management level, Local Directors, together with the District Director, comprise the District Management Committee.
This body deals with administrative concerns at the district and service delivery level, and allocates and oversees the district budget.

Program Evaluation

Only recently has the Branch developed an administrative support unit with the specific mandate of program evaluation. The Program Analysis and Evaluation Section (see Figure 1) is about two and one half years old. It is charged with program planning and development, program policy analysis, and program evaluation. About sixty percent of its work takes the form of planning and analysis for the Branch Management Committee. The section will undertake the design of program evaluations at the regional level on the basis of one evaluation per region per year. Its program evaluation services to the regions are consultative only; responsibility for carrying out the evaluation itself rests with the region. There are five analysts in the section; each is assigned to a specific group of programs.

Also within the Commissioner's Office is the Operations and Management Information Section. This group gathers and collates information about all aspects of Branch operations and provides data to management and program staff on request. Much of the section's energies over its five years in operation have been devoted to the development of a computerized management information system which is still in the making.8

In the Vancouver Region there are three support staff potentially concerned with program evaluation - a Resources Coordinator and two Program Support persons. Primary responsibilities of the Resources Coordinator include advice-giving related to the preparation of the regional budget, monitoring regional expenditures and identifying potential cost savings. Evaluations concerned with program costs would undoubtedly involve this individual. The role of the Program Support persons is multidimensional, but includes such activities as program planning, implementation and evaluation; policy analysis; information and consultative services to management and staff; and liaison with other agencies. All three of these individuals report directly to the Regional Director of Corrections and all three sit on the Regional Management Committee in an advisory capacity.

A promising evaluation resource for the B.C. Corrections Branch is the newly formed Criminology Research Centre at Simon Fraser University, Burnaby, B.C. The Centre describes its services in the following quote:

The main function of the Centre is to facilitate cooperative research with criminal justice programs and agencies in British Columbia. The primary focus will be to provide information for use by personnel, planners, evaluators, and theoretical investigators in making policy decisions within the system.... The Centre has a Director and two Research Associates... In addition to personnel, the Centre offers an information service based on an extensive collection of research reports and books on research methodology, as well as access to the traditional sources of information in this area.
CHAPTER IV
THE COMMUNITY SERVICE ORDER PROGRAM

This chapter describes the Community Service Order Program (CSO Program) as it operates in the area under study ('Community Service Order Program' is used in a broad sense to include CSO Program-related activities of the Corrections Branch and of other agencies and persons; the term is also used in a narrow sense to refer only to program aspects under the direct control of the Corrections Branch. Which meaning applies should be apparent from the context in which it is used). It briefly examines the concept and purpose of the program; it traces the development of the program in the Branch; it explains how offenders enter and participate in the program; finally, it describes the manner in which the program is administered in the Vancouver Region.

Concept and Purpose

The basic concept of the Community Service Order Program is that offenders (principally young offenders) should perform work for the benefit of the community or the victim. It is viewed as reparation for harm done and is considered a means to achieve a fundamental kind of justice. Typical tasks undertaken by offenders include the maintenance of community parks and facilities, services to special needs groups such as the aged and the handicapped, and work for the individual or business (the victim) offended. The rationale for the program is described in this quote from the Branch Annual Report:

The Community Service Order Program continues to be a priority of the Branch.... More than many correctional programs, it epitomizes a sense of justice as not only a clear response to crime and delinquency, but also a program which allows humane and effective consequences for offences, consequences which are economical to operate, which avoid the unnecessary use of prison, and which are available to the many offenders who are unable to pay a fine.1

From this rationale the Branch developed four program objectives:

To provide an alternative to the court process and a sentencing alternative to the court, which:
1. is perceived by the community and the offender as enabling the offender to make amends for the offense by rendering an identified service to the community,
2. actively involves the community in the administration of justice,
3. is more economical in terms of social and financial costs than traditional sentencing alternatives of probation and gaol,
4. is administered and enforced pursuant to a court order or diversion agreement consistent with their intent and within the operational guidelines of the CSO Program.

Historical Development

The concept of Community Service by offenders has been operating informally in this province for many years. However, it was not until the 1970's that Community Service was proposed as a Branch program for juveniles and adults. By December, 1974, the Branch had begun to hire Community Service Officers to manage pilot projects in nine centres around the province. Later the following year, the project was expanded province-wide. For the past three years Community Service Officers have been operating in every major location in the province. Figure 7 shows the number of case months (a case month is one offender supervised for one month) of supervision on the CSO Program over a three year period.
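Since Figure 7 is denominated in case months, the measure can be made concrete with a short sketch. The Python fragment below is purely illustrative and not part of the Branch's reporting system; in particular, the rule that any partial calendar month of supervision counts as a full case month is an assumption, as the Branch's actual counting convention is not documented here.

    from datetime import date

    def case_months(start: date, end: date) -> int:
        # Count calendar months of supervision between two dates;
        # assumed rule: a partial month counts as one case month.
        months = (end.year - start.year) * 12 + (end.month - start.month)
        return max(months + 1, 1)

    # One offender supervised from mid-January to mid-April 1978
    # contributes four case months under this assumed rule.
    print(case_months(date(1978, 1, 15), date(1978, 4, 10)))  # 4

Under such a convention, the totals plotted in Figure 7 would simply be the sum of case months over all offenders supervised during the period.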
The projected figures4 for April, 1978 to March, 1980 are probably high. The graph seems to suggest that use of the program is increasing, although not dramatically.

Program Process

This section describes in simplified terms how persons gain entry to the CSO Program and relates the workings of the program itself. For the sake of clarity some nuances of the processes are not mentioned and on some points, the description is not strictly accurate. However, this does not impair one's understanding of the program.

Offenders gain entry to the CSO Program via two processes ('offender' is broadly defined in this study to refer to persons who have come to the attention of a Criminal Justice agency by allegedly committing an offence, yet have not been convicted of an offence by the court, i.e., persons diverted from the court process or those awaiting court process. It also refers to those convicted of an offence by the court. Technically, however, only convicted persons are offenders).

FIGURE 7: CASE MONTHS OF COMMUNITY SERVICE COMPLETED, APRIL, 1977 TO MARCH, 1980 (EST.)
[Graph: case months (000's, scale to 15) for the Corrections Branch and the Vancouver Region over the periods April, 1977 - March, 1978; April, 1978 - March, 1979; and April, 1979 - March, 1980 (est.).]
Source: Operations and Management Information Section, B.C. Corrections Branch, 1980.

They may enter by diversion or by court order. The flow charts in Figure 8 and Figure 9 illustrate the basic processes described below.

Entry by Diversion

Diversion is the process by which an offender is dealt with by the Criminal Justice System without court intervention. Charges against selected offenders are placed in abeyance, usually with the offender's promise that he will satisfy certain conditions, notably, that he will perform Community Service. Diversion to the CSO Program is only used if the offender admits his guilt (a requisite of any form of diversion), if the offence is minor, if he is unlikely to commit another offence and if he agrees to complete a specified amount of Community Service. It is normally reserved for those with no previous criminal record and in the main is used for juveniles (offenders under seventeen years of age). (These criteria are restated in a short sketch below.)

The process is illustrated in Figure 8. The police opt to charge the offender. The charge and the circumstances of the offence are reviewed by the prosecutor. The prosecutor must decide to pursue the matter in court or divert the offender. If diversion appears to be appropriate, the prosecutor refers the matter to the Probation Officer who interviews the offender. Based on the interview, the Probation Officer recommends court action or diversion to the CSO Program (other forms of diversion are possible but are not relevant to this discussion).

FIGURE 8: ENTRY TO THE COMMUNITY SERVICE ORDER PROGRAM, BY DIVERSION
[Flow chart with parallel court and corrections/CSO Program streams, tracing the offender's path from charge to diversion.]

With selected divertees, the Probation Officer briefs the offender on the CSO Program and asks him if he is prepared to undertake some Community Service. If he agrees, a recommendation is made to the prosecutor that the offender be diverted to the program.
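Before following the process to the prosecutor's decision, the screening criteria described above can be restated as a sketch. This is an illustration only: eligibility in practice rests on the judgment of the prosecutor and the Probation Officer, and the boolean flags below are hypothetical simplifications of that judgment.

    def eligible_for_diversion(admits_guilt: bool,
                               offence_is_minor: bool,
                               likely_to_reoffend: bool,
                               agrees_to_service: bool,
                               has_prior_record: bool) -> bool:
        # Restates the criteria in the text: admission of guilt is a
        # requisite of any diversion, and the option is normally
        # reserved for minor offences and first offenders who agree
        # to complete a specified amount of Community Service.
        return (admits_guilt
                and offence_is_minor
                and not likely_to_reoffend
                and agrees_to_service
                and not has_prior_record)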
If the prosecutor concurs with the recommendation, a contract is drawn up by the Probation Officer (in consultation with the Community Service Officer) specifying the type of work to be done, the work placement and the number of hours to be worked.

Entry by Court Process

In the alternate process, the prosecutor has weighed the circumstances of the offence and has deemed it to be in the best interests of the offender and/or the community to proceed with charges. The offender is placed before the court where he pleads guilty or not guilty to the charge. If the matter goes to trial and the accused is found not guilty, or if the case is dismissed, the accused leaves the Criminal Justice System. Following a plea of guilty or a finding of guilty by the court, and where the court opts for the Community Service Order Program, the offender is placed on a Probation Order (a court order requiring an offender to do or not to do particular things during a specified period of time) with the condition that he complete a specified number of hours of Community Service.

FIGURE 9: ENTRY TO THE COMMUNITY SERVICE ORDER PROGRAM, BY COURT ORDER
[Flow chart tracing the offender's path from charge through plea, trial and sentencing to the CSO Program.]

Program Activity

From this point forward the sequence of events is the same for offenders who enter the program by diversion or by court order. The Community Service Officer interviews the offender and consults with the Probation Officer to determine an appropriate work placement. The placement is chosen and the Community Service Officer introduces the offender to the agency, group or individual for whom he will work. A work schedule is arranged and the Community Service Officer or a designated person in the work placement supervises the offender on the job. When the offender completes his assigned number of Community Service hours his contact with the program ends. Completion of Community Service must precede or coincide with the termination of the offender's Probation Order.

Program Administration

The CSO Program is administered by the same organization structure that provides core community correctional services. The program has no specialized program director or coordinator; only Community Service Officers work exclusively on program tasks. Probation Officers provide a number of services, only one of which involves the CSO Program.

Provincial program policy is determined by the Branch Management Committee. The manner in which the program is operationalized in the region is determined by the Regional Management Committee described in Chapter III. The Regional Director of Corrections, in conjunction with District Directors and regional support staff, develops regional program policy, identifies program strategies, allocates resources to the program, monitors the program's effectiveness, and liaises with the community and other components of the Criminal Justice System. Each District Director is responsible for the overall operation of the program in his district. Each Local Director oversees the delivery of program services in his service delivery unit.
There are eleven Community Service Officers in the Vancouver Region (see Figure 10). Seven of those are in the North Shore District; two are in each of the Vancouver West District and Vancouver East District. Community Service Officers in locations (1), (2) and (3) in Figure 10 are half-time workers only. Those in locations (1) through (5) inclusive contract their services to the Branch; the remainder are regular employees. Two administrative anomalies exist in the region. In Squamish, there is no Community Service Officer and program tasks are carried out on a limited basis by the Probation Officers. In each of Bella Bella, Bella Coola and Ocean Falls there is a part-time Community Service Officer but no resident Probation Officer.

The Community Service Officer has several functions. He develops work placements (a job bank) for offenders; he assigns the offender to a suitable task; he directly or indirectly monitors the offender's progress on the job; he reports the offender's progress to the Probation Officer and/or the court as required; he completes a written progress report at the end of the Community Service.

FIGURE 10: LOCATION OF COMMUNITY SERVICE OFFICERS IN THE VANCOUVER REGION OF THE BRITISH COLUMBIA CORRECTIONS BRANCH
[Map of the region showing numbered Community Service Officer locations.]

For present purposes there are four types of Probation Orders (see Figure 11). Type 1 requires an offender to report regularly to a Probation Officer and, in addition, to undertake Community Service. Type 2 has no reporting condition but includes Community Service. Type 3 requires reporting but no Community Service. And with Type 4 Probation Orders, the offender has no obligation to report nor must he complete Community Service. Only offenders with Types 1 and 2 Probation Orders are participants in the CSO Program. Files of Type 2 offenders are often maintained by the Community Service Officer and minimal liaison with a Probation Officer is necessary. With Type 1 offenders the Probation Officer assumes primary responsibility for the case. The Community Service Officer provides the Probation Officer with periodic reports on their mutual client's progress.

Each month, each service delivery unit forwards basic program statistics to the regional office. Data includes the number of hours and type of Community Service completed, the number of offenders on the program and the means of entry. At Regional Office the data is collated and forwarded to the Branch Operations and Management Information Section for final compilation and analysis. This collated information is then made available to Region and Branch managers for management purposes.

FIGURE 11: TYPES OF PROBATION ORDERS IMPOSED BY THE COURTS

                          REPORT    NO REPORT
    COMMUNITY SERVICE     Type 1    Type 2
    NO COMMUNITY SERVICE  Type 3    Type 4

CHAPTER V
APPLYING THE PRINCIPLES - UTILITY-FOCUSED EVALUATION FOR THE COMMUNITY SERVICE ORDER PROGRAM

This chapter develops an evaluation model for the Community Service Order Program using the Utility Principles of Chapter II as a basis for design. The specific nature of each model element is determined by considering the context in which the program operates. The first five sections of the chapter, ending with 'Methodology', develop the structure of the model; the remaining section proposes a process for each stage in the evaluation.
Not all the principles from Chapter II are explicitly addressed. Those that are applied are those that lend themselves to illustration (see Thesis Methodology, Chapter I for elaboration).1 At the end of the chapter the model is summarized in relation to its utilitarian characteristics.

A Purpose for Community Service Order Program Evaluation

It was argued in Chapter I that the fundamental reason for evaluation should be to provide administrators with information that will lead to more rational program decisions. However, those commissioning evaluation would do well to have a more specific purpose in mind. Numerous reasons for evaluation have been suggested in the literature. Those proposed by two authors are discussed below and arguments are put forward for the purposes deemed appropriate for the CSO Program.

Scriven distinguishes between summative and formative evaluation. Summative evaluations attempt to determine the essential effectiveness of programs. They are used to make decisions about continuing or terminating, expanding or cutting back a program. As such, summative information is often useful to policymakers and funders. Formative evaluations focus on ways of improving and enhancing programs, not only in their early stages but throughout their lifetime. Formative evaluation is of central interest to program administrators and staff.2

Patton differentiates among evaluations of program outcomes, program theory and program implementation. Outcome evaluations are by far the most common. The object is to determine the extent to which actual program outcomes match desired outcomes or goals. Program theory evaluations attempt to validate hypothesized cause-effect relationships. Does the treatment imposed result in the desired effects? Implementation evaluations seek to determine whether or not the program has been implemented in the manner intended. If the operating program differs from the intended program, implementation evaluation might examine the effectiveness of current operations and identify program improvements where required.

Although a single evaluation could conceivably be formative as well as summative and could examine program outcomes, theory and implementation, such a focus would seem too broad to ensure useful findings. A manageable focus in the study context probably means making choices among the purposes discussed.

The first argument is that CSO Program evaluation should serve formative as opposed to summative decisions. Summative decisions, it would appear, are neither pending nor appropriate at this stage in the program's life. There is little question that the CSO Program is here to stay. The program now operates province-wide and both the Branch and the Vancouver Region have assigned the program a high priority. To discontinue or even cut back the program would be extremely difficult. To consider expansion of the program at this time is premature. Decisionmakers have not had time to iron the wrinkles out of only recently developed policies and procedures and are unlikely to have a clear idea of the most effective program strategies.
To expand before program weaknesses are identified and strengthened would make later changes more difficult, would waste limited resources and would jeopardize program effectiveness. It would seem, then, that summative information is not relevant to the current needs of regional decisionmakers. More probable is that decisionmakers will have questions about how the program is functioning and how it can be improved. The program is young. Policies and procedures have yet to be refined; improvements based on formal, systematic analyses have yet to be made. In sum, formative evaluation is more likely to produce relevant, useful information than summative evaluation.

The second argument is that information about program implementation is more pertinent to the needs of CSO Program decisionmakers than information about either program outcomes or program theory. Outcome evaluation appears inappropriate for two reasons: (1) decisionmakers must know to what extent the CSO Program has been implemented in order to assign meaning to measured outcomes (see Chapter Two), and little information currently exists about the state of CSO Program implementation; (2) outcomes information alone is of limited use in identifying needed program improvements. A similar argument leads to the rejection of program theory as a suitable purpose. Until it can first be ascertained that the desired 'treatment' is in place, it is premature to attempt to determine the validity of that treatment. Information about program implementation, therefore, is a prerequisite (or at least a co-requisite) to other kinds of evaluation efforts. In the study context, an absence of such information suggests that CSO Program evaluation should seek to determine the extent to which the program is operating as desired.

Several observations suggest that the CSO Program may, indeed, not be operating as intended and lend weight to the argument for an implementation focus: (1) the program was largely developed by trial and error in the field, (2) regional program objectives were not published until 1979; program policies and procedures were not clearly defined until 1978, (3) large disparities appear to exist among service delivery units in attitudes toward and use of the CSO Program, (4) parts of the region are physically, and to some extent, functionally isolated from the rest of the region, (5) communities within the region differ in size and socio-economic make-up, affecting the nature of the target population and influencing program opportunities, (6) the program is highly decentralized and fully integrated with other services, reducing the likelihood of standardized operations. These observations raise the possibility that the program differs significantly from one service delivery unit to the next and differs significantly from that which was intended.

The uncertainty facing program managers is not knowing whether those differences are functional or dysfunctional. Simply knowing that a program does or does not conform to that which was intended is not sufficient to be useful. CSO Program decisionmakers will want to know if existing modes of operation are functional, regardless of original intentions.
If they are not functional, information should be gathered that will enable decisionmakers to identify effective modifications.

To summarize, it is the author's contention that the purposes for evaluating the CSO Program should be: (1) to serve formative decisions, (2) to determine the extent to which the program has been implemented, (3) to ascertain the feasibility of intended and existing program operations and (4) to identify more appropriate modes of operation if and where required. The arguments presented are not to say that outcomes should never be examined or summative decisions never made. The purposes suggested are specific to this stage in the program's development and to the region's current level of knowledge about the program. As the program matures and as more information is gathered about it, future evaluations would be expected to address different purposes.

Relevant Decisionmakers

In keeping with the Utility Principle developed in Chapter II, this section seeks to identify the CSO Program decisionmakers with the most pressing information needs. Serving these individuals represents the most efficient (utilitarian) allocation of evaluation resources.

It will be recalled from Chapter III that there are four decisionmaking levels in the regional hierarchy - the Regional Management Committee, District Directors, Local Directors and service providers (in this context 'service providers' are persons who provide direct services to the public and who are employed by an agency in the Criminal Justice System). Persons at each level will make program decisions of a different order. Recognizing that, this discussion can refine its focus by asking (1) What kinds of decisions are there? (2) Which are most critical to this program/agency at this point in time? (3) Who is centrally involved in making those decisions?

Weiss distinguishes three kinds of decisions in an organization - policy, strategic (or managerial) and tactical decisions:

Policy decisions are made at the highest level: Should the program be continued or terminated, expanded or cut back...? Strategic decisions have to do with fundamental choices about modes of operation.... Which strategies of intervention should be continued, modified or initiated...? Tactical decisions have to do with day-to-day program practices, choices about operating procedures.

Matching this typology with the role descriptions of decisionmakers in the Vancouver Region, it can be seen that the Regional Management Committee makes both policy (expand or cut back) and strategy decisions; individual District Directors may make minor strategy decisions but are principally charged with implementing policies and strategies; Local Directors, in conjunction with service providers, take responsibility for tactical program decisions.

But which decisionmaker should be the primary user of evaluation results? Policy decisions, as defined by Weiss, are synonymous with Scriven's summative decisions and in the previous section were deemed a low-utility focus for CSO Program evaluation at this time. Strategy decisions, it has been said, involve "fundamental choices about modes of operation."
As such, strategy decisions would seem to be in line with the earlier proposed foci - program implementation and program improvement. Furthermore, evaluative information about the state and worth of current program strategies is absent. And clearly, lower order (tactical) decisions are better made after strategies have been determined to be effective, or have been discarded or improved. Information for tactical decisions will serve little purpose if the strategies which the tactics are designed to achieve are dysfunctional. Strategy decisions, therefore, appear to be the most critical, and information aiding those who must make strategy decisions would seem to be the most useful - in this case, information for the Regional Management Committee. The conclusion reached is that the primary group of decisionmakers for whom evaluation of the CSO Program should be focused is the Regional Management Committee.

The Evaluation Team

A major theme in this paper is that evaluation will have high utilization potential when those affected by evaluation findings are active participants in the evaluation effort. Not only is there more assurance that results will be relevant, but there is more likelihood that staff will cooperate with the evaluation, find its recommendations acceptable and participate in implementing change. Based on this premise of participative evaluation, this section identifies and discusses five parties to plan and undertake evaluation of the CSO Program.

The choice of who should evaluate springs from five considerations, some of which call to mind the Utility Principles developed in Chapter II: (1) one individual should be responsible for carrying out the substantive work of developing the evaluation plan, collecting and collating the data and writing the report, (2) primary decisionmakers should be involved in making all major decisions which define the evaluation effort, (3) secondary decisionmakers (agency and non-agency service providers) should have input into the evaluation effort, (4) recognizing the inexperience and limited skills of regional staff in program evaluation, expert advice from outside the organization might augment the utility and validity of evaluation results, (5) data collection could be expedited if more than one person were involved. These considerations suggest five parties to plan and implement the evaluation: (1) a Regional Planner/Evaluator, (2) a Task Force comprised of primary decisionmakers, (3) an Advisory Group of secondary decisionmakers, (4) an External Consultant, (5) Research Assistants. Each is discussed below.

Regional Planner/Evaluator

Currently, there are two Program Support Officers in the region whose duties include liaison, information and consultative services as well as limited planning and evaluation activities related to regional programs. It would appear that a Regional Planner/Evaluator could be created by simply reorganizing the duties of these two positions. One position would specialize in liaison, information and consultative services for both community-based and institutional programs; the other would focus solely on planning and evaluation activities.
Both would remain located in the Regional Office and would be directly accountable to the Regional Director of Corrections. The Regional Planner/Evaluator would participate in and be a major contributor to all long and short range regional planning functions and all regional evaluation functions. He would be an advisory member of the Regional Management Committee to whom he would provide guidance on planning and evaluation issues. In addition, he would serve as a member of the Evaluation Task Force (discussed next) where he would function as a co-decisionmaker, executor and group expert on evaluation matters. Based on the ideas and decisions of the Task Force as a whole, the Regional Planner/Evaluator would write up the evaluation plan, gather and collate the data, and write the preliminary and final reports.

The concept of the Regional Planner/Evaluator has several advantages: (1) broadening the scope of the CSO Program evaluator to include all regional planning and evaluation activities makes the position an economically viable proposition, (2) incorporating planning activities in the job description promotes the integration of planning and evaluation, (3) locating the Regional Planner/Evaluator at Regional Office and including him in the Regional Management Committee and in the Evaluation Task Force promotes interaction between the evaluator and program managers, (4) utilizing the existing position of Program Support Officer precludes the need for additional funds (consent from Treasury Board for a new position would be highly unlikely), (5) choosing an 'in-house' evaluator means his services are available on demand and permits regional management to define his terms of reference. Furthermore, as a regular Regional Office employee, the Regional Planner/Evaluator would have an in-depth knowledge of the region (and the CSO Program) and could develop effective working relationships with Branch and Region management and with other Criminal Justice personnel.

Evaluation Task Force

The Evaluation Task Force would include the Regional Director of Corrections, one District Director, one Local Director and the Regional Planner/Evaluator. The District Director and the Local Director would be selected by the Regional Director of Corrections based on his knowledge of their experience, analytical skills and commitment to evaluation. Their task would be to make all major decisions that would shape the evaluation plan, its implementation, its findings and the manner in which the findings are disseminated. Managers on the Task Force would work actively with the Regional Planner/Evaluator to develop and carry out the evaluation effort. At two points in time (when the evaluation plan is completed and when preliminary findings are available) the Task Force would report its efforts to the Regional Management Committee for review and approval. All Task Force members would participate in presenting findings to regional staff.
Several strong arguments in favour of the Task Force can be made: (1) of most importance, it is a means to engage the active participation of decisionmakers in the evaluation effort, (2) including each level of management in the Task Force provides a breadth of view that will facilitate useful results, (3) the presence of two major decisionmakers on the Task Force lends credibility and influence to the evaluation effort and promotes the utilization of results, (4) the four-member Task Force is small enough to be task oriented.

Advisory Group

The Advisory Group would be composed of service providers from both within and outside the agency. It could include a Local Director, Probation Officer, Community Service Officer, judge, Crown Prosecutor, a placement agency representative and an offender/participant. The group's basic function is to act as a sounding board for proposals and findings submitted to them by the Task Force. The Task Force would approach the Advisory Group with a request for specific feedback at several stages in the evaluation project. Generally speaking, the group would address three questions: (1) Is the evaluation plan complete, practicable, acceptable? (2) Are the findings reasonable? Do they appear to concur with our own observations? (3) Are the recommendations valid, reasonable, acceptable, capable of being implemented? Suggestions made by the Advisory Group would be taken under consideration by the Task Force and the evaluation modified as required. These advisors could function as a group (and provide a collective response to the Task Force), as individuals or as both.

Arguments for having an Advisory Group are two-fold: (1) it is a means of obtaining the varied perspectives of a broad spectrum of service providers, and as such, reduces the possibility of invalid findings and inappropriate action, (2) it permits service providers to be and feel a part of the evaluation effort, thereby increasing the evaluation's credibility and acceptance among staff.

External Consultant

The External Consultant is defined in this context as any person or group not employed by the Vancouver Region administration, though possibly employed elsewhere in the Branch. He/they would have two basic functions: (1) to provide high calibre advice to the Regional Planner/Evaluator and the Task Force on particularly complex matters and (2) to lend objectivity and a detached perspective to the evaluation. Three alternatives for the External Consultant are proposed, none of which are mutually exclusive: (1) the Program Analysis and Evaluation Section, Office of the Commissioner, (2) the Criminology Research Centre, Simon Fraser University, (3) a private consultant. A description of alternatives (1) and (2) is provided in Chapter III. All of these resources have been utilized by the Vancouver Region in the past and therefore have considerable familiarity with the organization and its programs. Alternatives (1) and (2) are desirable in as much as their services are cost-free. Alternative (3) has the advantage of availability on demand.

Research Assistants

One or two Research Assistants are suggested to assist the Regional Planner/Evaluator in collecting and collating the data.
Research Assistants would be employed as required and would likely be students employed for the summer or students serving practicums related to the evaluation project.

A Critical Focus

In the first section of this chapter it was stated that evaluation of the CSO Program should (1) determine the extent to which the program has been implemented, (2) ascertain what is working and what is not and (3) if necessary, identify more functional modes of operation. This section narrows the focus for CSO Program evaluation yet again. Considerations relevant to choosing the focus are reviewed. Alternative emphases within the rubric of implementation evaluation are examined. An argument is made for a particular focus - program processes. Finally, five processes within the CSO Program are identified and their relative merits as targets for evaluation are assessed.

A number of considerations relevant to choosing the focus for evaluation were raised in Chapter II. Generally, it was stated that the focus must represent the most pressing program concerns and must be chosen for its capacity to generate relevant, desirable, obtainable, usable and timely information. It was also suggested that decisionmakers assess the potential information value (in relation to the identified information need) of alternative foci, then choose the focus with the highest potential for utility. Utility Principles raised in Chapter II are restated below in the form of questions. They serve as a 'Utility Checklist' for those choosing the focus for CSO Program evaluation: (1) Do decisionmakers want and need information which this focus would provide? (2) Do decisionmakers want the information for their own purposes, not simply for someone else? (3) Can data be brought to bear on the question? (4) Is the focus manageable in the time allotted? Would a narrower or broader focus be likely to improve the usefulness of findings? (5) Has a decision already been made (overtly or covertly) concerning this question? (6) Can decisionmakers indicate how they would use the information obtained? (7) Has the agency the resources and mandate to take action on possible evaluation findings where action is indicated? (8) Is it desirable to change this aspect of the program? What consequences might result if tentatively identified solutions were implemented? (9) All things considered, will addressing this question provide useful information to program decisionmakers?

The position has been taken that evaluation of the CSO Program should examine the extent to which the program has been implemented.5 Patton notes that there are three aspects of a program upon which an implementation evaluation can focus: (1) program effort, (2) program processes and (3) treatment specification. Evaluation of program effort addresses the questions "What did you do and how well did you do it?" The object is to document the quantity and quality of activity that takes place in relation to some standard of what was expected.
Tripodi describes e f f o r t evaluation t h i s way: Evaluation of program e f f o r t refers to an assessment of the amounts and kinds of program a c t i v i t i e s considered necessary for the accomplishment of program goals within a p a r t i c u l a r stage of development. I t refers not only to s t a f f time, a c t i v i t y and commitment, but also to the a l l o c a t i o n and use of material resources - funds, space, equipment, etc.... Information such as the following might be obtained about program e f f o r t : What techniques for r e c r u i t i n g potential c l i e n t e l e have been employed; how much s t a f f time, e f f o r t , funds, etc. have been expended; what a n c i l l a r y resources have been used, e.g^, outside consultation, media, public r e l a t i o n s , etc.? Assessing program e f f o r t i s probably the easiest type of eval-uation. In Suchman's words " i t i s easier to maintain administ-g r a t i v e records than to measure success of e f f o r t s . " However, the value of e f f o r t information to decisionmakers i s apt to be limited. The second option i n implementation analysis i s to focus on program processes. The object i s to determine the strengths and weaknesses of the program by examining the way i n which i t actually operates. Process evaluations are concerned with why things happen the way they do, how program components f i t to-gether and how people perceive the program. The emphasis i s on how outcomes are produced rather than the outcomes themselves. Patton elaborates: Process evaluations search for explanations of the successes, f a i l u r e s and changes i n a program. Under f i e l d conditions i n the r e a l world, people and unforeseen circumstances shape programs and modify i n i t i a l plans i n ways that are r a r e l y 70 t r i v i a l . The process evaluator sets out to understand and document the day-to-day r e a l i t y of the setting or settings under study. He t r i e s to unravel what i s act-u a l l y happening i n a program by searching for the major patterns and important nuances that give the program i t s character. A process evaluation requires s e n s i t i v i t y to both q u a l i t a t i v e and quantitative changes i n programs throughout t h e i r development; i t means becoming intimately acquainted with the d e t a i l s of the program. Process evaluations look not only at formal a c t i v i t i e s and a n t i -cipated outcomes, but also investigate informal patterns and unanticipated consequences i n the f u l l context of program implementation and development. F i n a l l y , process evaluations usually include perceptions of people close to the program about how things are going. A variety of per-spective^ may be sought from people inside and outside the program. A t h i r d focus for the implementation evaluation i s treatment s p e c i f i c a t i o n . Specifying the program treatment involves i d e n t i f y i n g and measuring those aspects of the pro-gram (independent variables) that are expected to a f f e c t out-comes (dependent v a r i a b l e s ) . "What i s going to happen i n the program that i s expected to make a difference? How are program goals supposed to be attained? What theory do program s t a f f hold about what they have to do i n order to accomplish the results they want?^ A manageable focus for t h i s agency, i n the author's opinion, means choosing one of the three implementation emphases described. 
And i n fact, two of the choices, program e f f o r t and treatment s p e c i f i c a t i o n , do not appear to be highly useful targets for evaluation. Much i s already known about the CSO Program e f f o r t (effort s t a t i s t i c s are gathered monthly). And e f f o r t information provides l i t t l e understanding of how the program functions, of why things happen the way they do. The argument against treatment s p e c i f i c a t i o n i s simply that t r e a t -7 1 merit of the offender i s not a major objective of th i s program. ^ Beyond providing the offender with a meaningful consequency, there i s l i t t l e expectation that long-term behavioural or a t t i t u d i n a l changes w i l l r e s u l t . Information c l a r i f y i n g the extent to which the intended treatment i s operating i s there-fore not l i k e l y to be useful. Information c l a r i f y i n g the extent to which the intended treatment i s operating i s therefore not l i k e l y to be useful. Information about program processes, on the other hand, would appear to hold a great deal of poten t i a l for program im-provement. Process evaluation i s , by d e f i n i t i o n , concerned with the 'hows' and whys' of program operations - information essent-i a l to making appropriate changes i n the program and/or the agency. Furthermore, as was e a r l i e r pointed out, there i s good reason to believe that program operations may d i f f e r s i g n i f i c a n t l y from one service delivery unit to the next and from that which was intended. Process evaluation could provide d i r e c t i o n for corrective action. Process evaluation has other advantages for t h i s con-text. It does not necessarily require sophisticated methodol-ogies and high levels of evaluative expertise; i t i s an a c t i v i t y with which regional administrators are at least partly f a m i l i a r (analyses of t h i s nature are a normal management function); i t i s capable of i d e n t i f y i n g s p e c i f i c measures for change; f i n a l l y i t can be accomplished i n a r e l a t i v e l y short period of time without great expense. The conclusion i s that CSO Program processes should be the 72 Central focus of evaluation. But what processes comprise the CSO Program and what aspects of which processes are l i k e l y to be of c r i t i c a l concern to decisionmakers? The CSO Program has f i v e i d e n t i f i a b l e processes: (1) program entry, (2) placement and discharge, (3) placement development, (4) program support, and (5) program monitoring. Processes (1), (2) and (3) could be described as operational or d i r e c t service components while processes (4) and (5) are administrative or managerial i n nature. Each process i s b r i e f l y described below: (1) Program Entry. Program entry i s that set of a c t i v i t i e s that begins when the offender f i r s t comes to the attention of the Crown Prosecutor and ends with the offender's assignment to Community Service. He may gain entry to the program by diversion or by court process. (2) Placement and Discharge. This process constitutes the core a c t i v i t y of the program. A placement i s chosen for the offender and he works his a l l o t t e d number of nours. The Com-munity Service O f f i c e r supervises the work d i r e c t l y or i n d i r e c t l y and monitors the offender's performance. (3) Placement Development. In t h i s process are a l l a c t i v i t i e s undertaken by the Community Service O f f i c e r to develop Community Service placements. 
Activities would include discussions with community agencies and unions, public speaking to community groups, program promotion through the local media and contacts with private citizens.

(4) Program Support. Program Support incorporates all management and support service responsibilities related to this program. Relevant activities include staff development, goal setting, development and dissemination of policies and procedures, allocation of program resources, consultative services to 'front-line' staff, upper-level liaison with other agencies, provision of promotional materials, and program planning and evaluation.

(5) Program Monitoring. Program Monitoring refers to the ongoing process of gathering, collating and using program statistics as a management tool.

Three of the processes described - Program Entry, Placement Development and Program Support - appear to be particularly crucial to program effectiveness. With respect to these processes, those evaluating the CSO Program might address three questions: (1) Is the program being adequately used and is it reaching the intended target population? (2) Is the kind of Community Service being done that which was intended? (3) Are Community Service Officers receiving sufficient support and guidance from the organization to carry out their duties effectively? Data that supports a negative response to these questions leads to more questions of the following nature: (1) Is the intended process functional (note that 'functional' is not being defined in terms of final outcomes in this context)? (2) If not, what alternative mode of operation is being used? Why? (3) Is the alternative process functional? How can it be improved? (4) Do program processes complement each other and collectively function as an integrated whole?

This section has argued that the most utilitarian focus for the CSO Program evaluation is that of program processes. It was further suggested that three of the five processes identified - Program Entry, Placement Development and Program Support - may be keys to program effectiveness and deserve decisionmakers' special attention. The focus proposed is not intended to be the final answer to the problem of what to evaluate in this program. In the utility-focused evaluation decisionmakers must make the final choice. It is they who possess the relevant concern, it is they who must define what is useful and it is they who must live with the data collected.

Methodology

Having decided on a process focus, the next task is to choose a methodology that will gather the desired information. This section proposes a methodology suitable to a process orientation and makes several suggestions to 'build in' utility. There are two basic criteria, it would seem, by which a methodology for any evaluation must be chosen: (1) the methodology must be capable of gathering the information of interest to decisionmakers (in this case information about CSO Program processes), (2) the methodology must be within the agency's time, budgetary and knowledge limitations.
The first criterion addresses the question "What is it about the kind of information being sought, its sources and the purpose of acquiring that information that suggests a particular methodology?" Based on the implementation/process focus proposed for CSO Program evaluation, four kinds of information would be required: (1) information defining the intended processes, (2) information defining the operating processes, (3) information defining the effectiveness of intended and operating processes and (4) information identifying required changes. This kind of data would come from (1) observations of behaviour, (2) assessments of knowledge and attitudes, (3) documents. By nature, this data will be highly descriptive, subjective and qualitative. Unlike some program outcomes, the essence of human processes cannot be reduced to quantitative terms. One must ask too, "Is the purpose of examining this process to predict some social phenomenon or to understand it?" The focus chosen for CSO Program evaluation would suggest that understanding is the objective, that improvement must be based on a clear grasp of the program through 'verstehen.'

What, then, does this holistic analysis-of-human-interaction type of evaluation suggest about the nature of the required methodology? Certainly, its data gathering mechanism must be flexible, open-ended, minimally structured; it must actively engage the evaluator in 'on-site' discussions with and observations of service providers and program participants; and it must rely heavily on the interview and observation skills of the interviewer.

Applying the second criterion to the study context completes the image of the required methodology. Evaluation resources in the Vancouver Region will be limited. In a social action setting, time is perennially in short supply. Many decisions must be made in the time-frame determined by the annual budget. Evaluative expertise in the region is equally limited. This is an agency that is seriously addressing the need for evaluation for the first time. Unless a formally trained and experienced evaluator is brought in from outside the agency (a strong trend in the Branch to promote from 'within' suggests this is unlikely) a highly sophisticated methodology would seem inappropriate.
Several suggestions are put forward to help ensure a high-utility effort: (1) check that staff agree with the methodology chosen and inform them about how the information will be used; (2) establish standards of judgment prior to data collection and avoid attempting to quantify unquantifiable performance indicators; (3) protect the confidentiality of respondents by assigning numbers to names and locations; (4) whenever possible, ensure verbal statements are cross-referenced with observations of behaviour and/or documented action; (5) field test selected measures for gaps and ambiguities prior to data collection. (Note that the terms standard of judgment, performance indicator and measure mean different things. A performance indicator is a key variable that helps to explain the phenomenon studied; a measure is the device by which the presence of the performance indicator is detected; and the standard of judgment is a statement about the desired or expected state.)

An Evaluation Process

The structural model elements just proposed and many of the Utility Principles developed in Chapter II can now be drawn together to create an evaluation process for the CSO Program. In line with the major premise of this thesis, the process described calls for participation by program decisionmakers and information users throughout the evaluation. The process proposed assumes the existence of a Task Force, an Advisory Group, an External Consultant, a Regional Planner/Evaluator and a Research Assistant, as suggested earlier in this chapter. For descriptive purposes, the process is divided into three parts - the evaluation plan, data collection and analysis, and dissemination of the results. Only the major steps are noted. Although the steps are listed sequentially, it is expected that the process would be highly iterative in practice.

Developing the Evaluation Plan

(1) Task Force identifies purpose of evaluation and isolates several promising foci.
(2) Task Force tests foci against Utility Checklist; the focus with the highest net utility value is selected.
(3) Task Force identifies performance indicators, measures and standards of judgment, and develops a meaningful format for aggregating data.
(4) Evaluator refines (2) and (3) and develops a detailed methodology based on guidelines set by Task Force.
(5) Task Force scrutinizes methodology for (a) capacity to obtain required information, (b) capacity to be implemented with resources available.
(6) Evaluator develops a work schedule and reviews it with Task Force.
(7) Evaluator checks evaluation plan for integrity and puts the plan in writing.
(8) Plan is submitted to Advisory Group and External Consultant for comment; modifications are made as required.
(9) Task Force personally presents plan to Regional Management Committee for discussion and approval.

Collecting and Analyzing the Data

(10) Evaluator field tests measures and makes minor modifications as required.
(11) Evaluator and Research Assistant gather data at chosen sampling sites.
(12) Data is collated after each district is sampled. Collated data is made available to Task Force and District Directors as soon as possible.
(13) Task Force decisionmakers and evaluator interpret data separately, then integrate positions and formulate recommendations for action.

Reporting the Findings

(14) Evaluator prepares preliminary report, including recommendations and an implementation plan, and presents it to Task Force for approval.
(15) Preliminary report is distributed for review and comment to Regional Management Committee, Advisory Group, External Consultant and all staff.
(16) Task Force members personally present report to District Management Committees, Community Service Officers and Regional Management Committee.
(17) Regional Management Committee gives conditional or unconditional approval to preliminary report and, to the extent possible, takes any required action on evaluation findings.
(18) Evaluator writes final report based on feedback from all sources and presents it to Task Force for approval.
(19) Final report (bearing Task Force signatures and acknowledging consultants and advisors) is approved by Regional Management Committee and distributed to Region and Branch staff as well as to courts and Crown Prosecutors. Remaining tasks to implement program improvements are delegated.
(20) Task Force evaluates the evaluation effort with External Consultant and required changes are noted.

Summary

This chapter represents one attempt to create an evaluation model with high-utility potential. Certainly, it is not the only form that a utility-focused evaluation model could have taken in this context. Indeed, had decisionmakers from the B.C. Corrections Branch actually been involved in the planning of this model, it might have taken a decidedly different shape. Nevertheless, the model presented serves the purpose of demonstrating a number of the Utility Principles proposed. Its notable characteristics include: (1) a specific purpose and focus based (in practice) on the information needs of decisionmakers, (2) the involvement of decisionmakers who have the most critical information needs of the agency/program, (3) a team of individuals selected from within and outside the agency to plan and undertake the evaluation, (4) a methodology chosen for its acceptability to decisionmakers, capacity to gather the desired information and compatibility with the agency's resource base, (5) a written evaluation plan based on decisionmaker input and approval, (6) decisionmaker participation in data analysis and interpretation, (7) presentation of preliminary findings to decisionmakers, (8) decisionmaker participation in the dissemination of results, and (9) an evaluation 'post mortem.' The key to utility-focused evaluation lies in the careful matching of model elements to the human, financial, geographical and programmatic variables of the evaluation context. In particular, the model elements must be in keeping with decisionmakers' perceptions of utility.

CHAPTER VI

UTILITY-FOCUSED EVALUATION

In this final chapter, the usefulness of utility-focused evaluation itself is addressed. Discussion centres on its fundamental element - decisionmaker participation.
Organizations in which the model is likely to be most effective are identified; an argument is advanced to support the labour-intensive nature of utility-focused evaluation; finally, the model is viewed as a vehicle for more effective human relations in the organization.

Critical Assumptions of the Utility Model

A fundamental premise of the utility model developed is that decisionmakers must participate in the evaluation effort. But the capacity of the organization to participate is based on several assumptions - assumptions which may not apply to all organizations. Participative evaluation assumes, for example, that (1) staff are willing to participate in evaluation and that there is an atmosphere of trust, cooperation and information sharing in the organization, (2) job descriptions are sufficiently flexible to permit a collaborative effort and (3) relevant decisionmakers are in sufficient proximity to each other to make a participative approach to evaluation feasible.

Clearly, these assumptions will hold true in some organizations more than in others. Those organizations that fit the assumptions could be expected to adopt the utility model with ease and gain significant benefit from it; for other organizations, the obstacles to participation may be so great as to preclude the use of the utility model. The question, then, is which organizations are particularly suited to the utility model and which are not?

Suitable Applications of the Utility Model

The author's belief is that the utility model is better suited to (1) decentralized organizations as opposed to those that are centralized and (2) small rather than large organizations (or small components of large organizations). Decentralized agencies are likely to have broadly-based participative decisionmaking structures in place. It is likely that extensive information sharing is routine in such organizations. For staff in these agencies, engaging in participative evaluation would merely be a new application of a familiar exercise. But for staff in highly centralized, hierarchical agencies, participative evaluation may be viewed with reticence and alarm. Those used to autocratic decisionmaking and minimal information sharing may not be able to adjust to the open, intensely interactive approach required by the utility model. Today, of course, many organizations (including the British Columbia Corrections Branch) have changed to a decentralized decisionmaking structure. Those that remain centralized (in practice at least) tend to be large institutions - hospitals for the medically and mentally ill, penitentiaries, the armed forces and the police. Utility-focused evaluation in these contexts may be a difficult (although by no means impossible) undertaking. There is a clear message here also for those who would apply the utility model to other programs in the B.C. Corrections Branch. The model is more likely to get results with community-based programs than with institution-based programs.
Staff associated with the former have had a relatively long exposure to participative decisionmaking and, in all probability, practice participative decisionmaking to a greater extent than their institution-based counterparts. Paramilitary organizations, such as prisons, can be expected to be very resistant to change. In sum, where adoption of the utility model obliges staff to make significant changes in the way in which they normally function, the likelihood that it will be effectively used is minimal; where only minor changes in attitude and behaviour are required to accommodate the model, the potential for utility-focused evaluation is greater.

Secondly, small organizations (or portions of an organization) confined to a small geographical area are more likely users of the utility model than large, extensive organizations. Utility-focused evaluation assumes that key decisionmakers will be in close physical proximity to one another to permit intensive interaction. It further assumes the existence of effective working relationships among those involved in the evaluation effort. Both are more likely to occur in the small agency, where relevant decisionmakers are familiar with each other and where their physical proximity permits frequent interaction. One would surmise, for example, that the geographically small Vancouver Region of the B.C. Corrections Branch is more amenable to the utility model than the geographically extensive Northern Region, which services over half the province. By the same reasoning, one would expect the utility model to have greater effectiveness at the district and local levels of the Corrections Branch than at the regional level (provided evaluative expertise is available). As the geographical distance among those engaged in evaluation decreases, the opportunities for greater participation increase. Expanding the example, attempting to implement the utility model at the Branch level (to make summative decisions about a province-wide program, for example) is likely to be less successful than at lower levels in the organization. Key program decisionmakers (the Regional Directors of Corrections) are dispersed throughout the province, meeting only occasionally to effect management decisions. Ongoing interaction among those key decisionmakers does not seem feasible. A modified form of the utility model could be implemented but, generally speaking, the model is likely to be most effective when applied to organizational structures no larger than the regions found in the B.C. Corrections Branch (and preferably smaller) and when applied to organizations/programs which service a relatively small geographical area.

Utility-Focused Evaluation - Is it Realistic?

Without doubt, the most controversial aspect of the utility model is the time it demands of those involved. Skeptics would argue that asking already overworked decisionmakers to contribute valuable time to evaluation (which is not traditionally a line management function) is simply not realistic. But the fact remains that evaluations that have not included program decisionmakers have, in large measure, failed to influence decisionmaking.
Where decisionmakers firmly believe they do not have time to engage in participative evaluation, it may be more appropriate to forego evaluation altogether than to run the risk of wasting manpower, time and resources to produce just another unused report.

Evaluation and its parent endeavour, planning, have by and large been perceived as rather distasteful appendages to 'bona fide' administrative activities - something to be engaged in after all other more pressing duties have been duly discharged. The frequent outcome, of course, is that time is never found for evaluation or planning and the agency plods ad infinitum from one crisis to the next. The author argues strenuously that if resources are to be efficiently allocated in the public sector, administrators must extract themselves from the 'catch-22' thinking which prevents them from making a commitment to evaluation and from making more informed decisions. Plainly and simply, they must arrange their priorities to find the time to participate in evaluation. If they do not, it is reasonable to assume that ill-informed program decisions will be the rule, not the exception.

The second point to make is that there is no hard and fast rule governing how much or how little decisionmakers should participate in evaluation. There exists, presumably, a minimal level of participation below which utility might be seriously affected. Above that line there will be some room for varying the amount of participation to fit the time schedules of those involved. Finally, there is no reason to believe that evaluation cannot be highly participative yet not excessively time-consuming. Whether it is or not will depend on how well those party to the evaluation use their time. Time can be kept to a minimum if (1) those involved are task-oriented, (2) meetings are well-structured and their purposes are established ahead of time, (3) the evaluator can provide strong guidance to decisionmakers by identifying decision options and commenting on their feasibility, and (4) participants take advantage of normal meeting schedules to discuss the evaluation.

A Parting Word

The thrust of this thesis has been to develop and demonstrate a method to gather useful information for program decisionmakers. But beneath that purpose there lies a potential of another kind - a payoff in human relations. Asking staff to participate in evaluation is a way of recognizing and utilizing the value of the human beings that comprise the organization. It is an opportunity to develop and strengthen a team approach to organizational problem-solving. It is an expression of trust, a vote of confidence from staff to fellow staff. It says, "We believe in the human potential of this agency; we believe that staff at all levels have a significant contribution to make to the management of this program; we believe that only by collective effort will we find and implement the most effective, efficient means to achieve our goals. We believe in ourselves."

FOOTNOTES

Chapter I

1. Carol H. Weiss, "Alternative Models for Program Evaluation," in Evaluation and Accountability in Human Service Programs, eds. William C. Sze and June G. Hopps (Cambridge, Mass.: Schenkman Publishing, 1974), p. 113.
2. Mark S. Thompson, Evaluation for Decision in Social Programmes (Westmead, England: Saxon House Publishing, 1975), p. 6.

3. Edward A. Suchman, "Action for What? A Critique of Evaluative Research," in Evaluating Action Programs: Readings in Social Action and Education, ed. Carol H. Weiss (Boston: Allyn and Bacon, 1972), p. 55.

4. Carol H. Weiss, "Utilization of Evaluation: Toward Comparative Study," in Evaluating Action Programs: Readings in Social Action and Education, ed. Carol H. Weiss (Boston: Allyn and Bacon, 1972), p. 318.

5. Edward A. Suchman, "Action for What? A Critique of Evaluative Research," in Evaluating Action Programs: Readings in Social Action and Education, ed. Carol H. Weiss (Boston: Allyn and Bacon, 1972), p. 53.

6. Edward A. Suchman, Evaluative Research, Principles and Practice in Public Service and Social Action Programs (New York: Russell Sage Foundation, 1967), p. 31.

7. Nathaniel Lichfield, Peter Kettle and Michael Whitbread, Evaluation in the Planning Process (New York: Pergamon Press, 1975), p. 4.

8. Henry W. Riecken, "Memorandum on Program Evaluation," in Evaluating Action Programs: Readings in Social Action and Education, ed. Carol H. Weiss (Boston: Allyn and Bacon, 1972), p. 87.

9. Marvin C. Alkin, "Evaluation Theory Development," in Evaluating Action Programs: Readings in Social Action and Education, ed. Carol H. Weiss (Boston: Allyn and Bacon, 1972), p. 107.

Chapter II

1. Lee Gurel, "The Human Side of Evaluating Human Services Programs: Problems and Prospects," in Handbook of Evaluation Research, eds. Marcia Guttentag and Elmer Struening (London: Sage Publications, 1975), p. 25.

2. Edward A. Suchman, Evaluative Research, Principles and Practice in Public Service and Social Action Programs (New York: Russell Sage Foundation, 1967).

3. Michael Quinn Patton, Utilization-Focused Evaluation (London: Sage Publications, 1978).

4. Leonard Rutman, "Planning an Evaluation Study," in Evaluation Research Methods: A Basic Guide, ed. Leonard Rutman (Beverly Hills, Calif.: Sage Publications, 1977), p. 37.

5. Patton, Ibid., p. 284.

6. Carol H. Weiss, as cited in Suchman, 1967, Ibid., p. 20.

7. C. Clifford Attkisson, Timothy R. Brown, and William A. Hargreaves, "Roles and Functions of Evaluation in Human Service Programs," in Evaluation of Human Service Programs, eds. C. Clifford Attkisson, William A. Hargreaves, Mardi J. Horowitz and James E. Sorensen (New York: Academic Press, 1978), p. 64.

8. John Van Maanen, "The Process of Program Evaluation," in The Grantsmanship Centre News, no. 27 (January/February, 1979): 32.

9. Stuart Adams, Evaluative Research in Corrections: A Practical Guide (Washington, D.C.: National Institute of Law Enforcement and Criminal Justice, 1975), p. 110.

10. Mark S. Thompson, Evaluation for Decision in Social Programmes (Westmead, England: Saxon House Publishing, 1975), p. 37.

11. Patton, Ibid., p. 88.

12. Marilyn A. Biggerstaff, "The Administrator and Social Agency Evaluation," Administration in Social Work, vol. 1, no. 1 (Spring, 1977): 72.

13. Thompson, Ibid., p. 28.

14. Patton, Ibid., p. 83.

15. Peter W. Chommie and Joe Hudson, "Evaluation of Outcome and Process."

16. Patton, Ibid., p. 154.

17. Thompson, Ibid., p. 157.
18. Joseph S. Wholey, "Evaluability Assessment," in Evaluation Research Methods: A Basic Guide, ed. Leonard Rutman (Beverly Hills, Calif.: Sage Publications, 1977), p. 41.

19. Patton, Ibid., p. 253.
20. Ibid., p. 255.
21. Ibid., p. 243.
22. Ibid., p. 264.
23. Ibid., p. 258.
24. Ibid., p. 243.
25. Ibid.
26. Ibid., p. 270.

27. See Adams, Ibid., p. 112; Attkisson et al., Ibid., p. 62; and Francis G. Caro, "Approaches to Evaluative Research: A Review," Human Organization, vol. 28, no. 2 (1969), p. 95.

28. Attkisson et al., Ibid., p. 62.

29. H. Grobman, Evaluation Activities of Curriculum Projects: A Starting Point (Chicago: Rand McNally, 1968).

30. Caro, Ibid., p. 95.

31. R. Likert and R. Lippitt, "Utilization of Social Science," in Research Methods in the Behavioral Sciences, eds. L. Festinger and D. Katz (New York: Holt, 1953).

32. Francis G. Caro, "Issues in the Evaluation of Social Programs," Review of Educational Research, vol. 41, no. 2 (April, 1971): 103.

33. Patton, Ibid., p. 239.

Chapter III

1. British Columbia, Annual Report of the Corrections Branch, Ministry of Attorney-General for Calendar Year 1978 (Victoria, B.C.: Queen's Printer, 1979), p. 6.

2. From discussion with Mr. E. W. Harrison, Regional Director of Corrections (Vancouver Region), British Columbia Corrections Branch, Vancouver, B.C.

3. Derived from British Columbia, Annual Report of the Corrections Branch, Ministry of Attorney-General for Calendar Year 1978, p. 19; and British Columbia, Annual Report of the Corrections Branch, Ministry of the Attorney-General for Calendar Year 1976, p. Q22.

4. British Columbia, Annual Report of the Corrections Branch, Ministry of Attorney-General for Calendar Year 1978 (Victoria, B.C.: Queen's Printer, 1979), p. 19.

5. Information provided by the Office of the Vancouver Region, British Columbia Corrections Branch, Ministry of Attorney-General.

6. From discussion with Mr. E. W. Harrison, Regional Director of Corrections (Vancouver Region), British Columbia Corrections Branch, Vancouver, B.C.

7. From discussion with O. E. Hollands, Director, Program Analysis and Evaluation Section, Office of the Commissioner, British Columbia Corrections Branch, Victoria, B.C.

8. From discussion with G. Muirhead, Acting Director, Operations and Management Information Section, Office of the Commissioner, British Columbia Corrections Branch, Victoria, B.C.

9. From discussion with E. W. Harrison, Regional Director of Corrections (Vancouver Region), British Columbia Corrections Branch, Vancouver, B.C.

10. From information issued by the Criminology Research Centre, Simon Fraser University, Burnaby, B.C., April, 1979.

Chapter IV

1. British Columbia, Annual Report of the Corrections Branch, Ministry of the Attorney-General for Calendar Year 1976 (Victoria, B.C.: Queen's Printer, 1977).

2. Ministry of Attorney-General, British Columbia Corrections Branch (Vancouver Region), Minutes of the Regional Management Committee Meeting, April 23, 1979, Vancouver, B.C., April, 1979.

3. British Columbia, Annual Report of the Corrections Branch, Ministry of Attorney-General for Calendar Year 1978 (Victoria, B.C.: Queen's Printer, 1979), p. 27.

4. From the Operations and Management Information Section, Office of the Commissioner, British Columbia Corrections Branch, Victoria, B.C.
Chapter V

1. Michael Scriven, "The Methodology of Evaluation," in Evaluating Action Programs: Readings in Social Action and Education, ed. Carol H. Weiss (Boston: Allyn and Bacon, 1972), p. 126.

2. Michael Quinn Patton, Utilization-Focused Evaluation (London: Sage Publications, 1978).

3. From discussion with E. W. Harrison, Regional Director of Corrections (Vancouver Region), British Columbia Corrections Branch, Vancouver, B.C.

4. Carol H. Weiss, "Alternative Models for Program Evaluation," in Evaluation and Accountability in Human Service Programs, eds. William C. Sze and June G. Hopps (Cambridge, Mass.: Schenkman Publishing, 1974), p. 114.

5. Patton, Ibid., p. 166.

6. Edward A. Suchman, Evaluative Research, Principles and Practice in Public Service and Social Action Programs (New York: Russell Sage Foundation, 1967).

7. Tony Tripodi, Phillip Fellin, and Irwin Epstein, Social Program Evaluation: Guidelines for Health, Education and Welfare Administration (Itasca, Ill.: F. E. Peacock, 1971), p. 45.

8. Suchman, Ibid., p. 61.

9. Patton, Ibid., p. 166.

10. Ibid., p. 167.

11. Opinion expressed by B. Robinson, Commissioner of Corrections, B.C. Corrections Branch, on a CBC Radio broadcast concerning the Community Service Order Program (Fall, 1979).

BIBLIOGRAPHY

Adams, Stuart. Evaluative Research in Corrections: A Practical Guide. Washington, D.C.: National Institute of Law Enforcement and Criminal Justice, 1975.

Alkin, Marvin C. "Evaluation Theory Development." In Evaluating Action Programs: Readings in Social Action and Education. Edited by Carol H. Weiss. Boston: Allyn and Bacon, 1972, pp. 105-17.

Attkisson, C. Clifford; Brown, Timothy R.; and Hargreaves, William A. "Roles and Functions of Evaluation in Human Service Programs." In Evaluation of Human Service Programs. Edited by C. Clifford Attkisson, William A. Hargreaves, Mardi J. Horowitz and James E. Sorensen. New York: Academic Press, 1978, pp. 59-95.

Biggerstaff, Marilyn A. "The Administrator and Social Agency Evaluation." Administration in Social Work vol. 1, no. 1 (Spring, 1977): 71-78.

British Columbia. Annual Report of the Corrections Branch, Ministry of Attorney-General for Calendar Year 1978. Victoria, B.C.: Queen's Printer, 1979.

British Columbia. Annual Report of the Corrections Branch, Ministry of the Attorney-General for Calendar Year 1976. Victoria, B.C.: Queen's Printer, 1977.

British Columbia. Report of the Auditor General for the Year Ended 31 March, 1978. Victoria, B.C.: Queen's Printer, 1979.

Caro, Francis G. "Issues in the Evaluation of Social Programs." Review of Educational Research vol. 41, no. 2 (April, 1971): 87-114.

Chommie, Peter W. and Hudson, Joe. "Evaluation of Outcome and Process."

Grobman, H. Evaluation Activities of Curriculum Projects: A Starting Point. Chicago: Rand McNally, 1968.

Gurel, Lee. "The Human Side of Evaluating Human Services Programs: Problems and Prospects." In Handbook of Evaluation Research. Edited by Marcia Guttentag and Elmer Struening. London: Sage Publications, 1975, pp. 11-50.

Lichfield, Nathaniel, Kettle, Peter and Whitbread, Michael. Evaluation in the Planning Process. New York: Pergamon Press, 1975.

Likert, R. and Lippitt, R. "Utilization of Social Science." In Research Methods in the Behavioral Sciences. Edited by L. Festinger and D. Katz. New York: Holt, 1953.
Ministry of Attorney-General, British Columbia Corrections Branch, Vancouver Region. Minutes of the Regional Management Committee Meeting, April 23, 1979. Vancouver, B.C., April, 1979.

Patton, Michael Quinn. Utilization-Focused Evaluation. London: Sage Publications, 1978.

Riecken, Henry W. "Memorandum on Program Evaluation." In Evaluating Action Programs: Readings in Social Action and Education. Edited by Carol H. Weiss. Boston: Allyn and Bacon, 1972, pp. 85-104.

Rutman, Leonard. "Planning an Evaluation Study." In Evaluation Research Methods: A Basic Guide. Edited by Leonard Rutman. Beverly Hills, Calif.: Sage Publications, 1977, pp. 15-40.

Scriven, Michael. "The Methodology of Evaluation." In Evaluating Action Programs: Readings in Social Action and Education. Edited by Carol H. Weiss. Boston: Allyn and Bacon, 1972, pp. 123-36.

Suchman, Edward A. Evaluative Research, Principles and Practice in Public Service and Social Action Programs. New York: Russell Sage Foundation, 1967.

Suchman, Edward A. "Action for What? A Critique of Evaluative Research." In Evaluating Action Programs: Readings in Social Action and Education. Edited by Carol H. Weiss. Boston: Allyn and Bacon, 1972, pp. 52-84.

Thompson, Mark S. Evaluation for Decision in Social Programmes. Westmead, England: Saxon House Publishing, 1975.

Tripodi, Tony; Fellin, Phillip; and Epstein, Irwin. Social Program Evaluation: Guidelines for Health, Education and Welfare Administration. Itasca, Ill.: F. E. Peacock, 1971.

Van Maanen, John. "The Process of Program Evaluation." In The Grantsmanship Centre News no. 27 (Jan./Feb., 1979): 29-74.

Weiss, Carol H. "Utilization of Evaluation: Toward Comparative Study." In Evaluating Action Programs: Readings in Social Action and Education. Edited by Carol H. Weiss. Boston: Allyn and Bacon, 1972, pp. 318-26.

Weiss, Carol H. "Alternative Models for Program Evaluation." In Evaluation and Accountability in Human Service Programs. Edited by William C. Sze and June G. Hopps. Cambridge, Mass.: Schenkman Publishing, 1974, pp. 113-22.

Wholey, Joseph S. "Evaluability Assessment." In Evaluation Research Methods: A Basic Guide. Edited by Leonard Rutman. Beverly Hills, Calif.: Sage Publications, 1977, pp. 41-56.
