DEVELOPMENT AND APPLICATION OF A METHODOLOGY FOR EVALUATING ADULT BASIC EDUCATION PROJECTS

by

SHELDON ROBERT HARVEY
B.A., University of Winnipeg, 1970
B.Ed., University of Manitoba, 1975

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF ARTS in THE FACULTY OF GRADUATE STUDIES (Adult Education Division, Department of Administrative, Adult and Higher Education, Faculty of Education, University of British Columbia)

We accept this thesis as conforming to the required standard.

THE UNIVERSITY OF BRITISH COLUMBIA
July, 1981
© Sheldon Robert Harvey, 1981

ABSTRACT

In British Columbia, adult basic education (ABE) has evolved into a significant program area on the verge of gaining acceptance as a legitimate and important part of the public education system. If the resources currently committed to these programs are to be solidified and increased, it is imperative that present program impact be measured, that needs, resources, processes and outcomes be articulated, and that the benefits of increased programming be predicted. This is best accomplished through effective use of program evaluation.

The field of program evaluation is characterized by a lack of well developed theory, a series of complex models, an absence of methodology and an abundance of designs and checklists which are not tied to a sound theory or model.

The purpose of this study was threefold. Firstly, it reviewed the extant models and methodologies for the evaluation of adult education programs. Secondly, a methodology for evaluating innovative ABE programs was developed. Finally, the methodology was used to guide an evaluation of an adult basic education project conducted by a British Columbia college. While the project being evaluated accomplished few of its stated objectives, the methodology provided a useful and flexible structure to guide the evaluation process.

It is hoped that this methodology will be field tested on a variety of ABE programs and that additional research will result in an even more sophisticated methodology designed to strengthen the ties between the best theories and models and the field of practice.
TABLE OF CONTENTS

Page
ABSTRACT ii
TABLE OF CONTENTS iv
LIST OF FIGURES vi
ACKNOWLEDGEMENTS vii

CHAPTER ONE
INTRODUCTION 1
PURPOSE OF THE STUDY 3
NEED FOR EVALUATION 3
PLAN OF THE STUDY 8

CHAPTER TWO
LITERATURE REVIEW 9
Definitions of Evaluation 10
The Tylerian Model 11
Scriven's Model of Evaluation 12
Stake's Concept of Evaluation 15
Decision Management Strategies 17
COMPARISON OF EVALUATION MODELS 22

CHAPTER THREE
AN OUTLINE OF A METHODOLOGY 26
EVALUATION PROCESS ACTIVITIES 29
Defining the Boundaries of the Evaluation 30
Preparing the Program Statement 35
Creating a Specific Evaluation Plan 37
Performing the Evaluation 40
Reporting the Evaluation 41

CHAPTER FOUR
INTRODUCTION 44
SUMMARY OF PRE-EVALUATION ACTIVITIES 44
THE EVALUATION 46
Defining the Boundaries of the Evaluation 46
Preparing the Program Statement 50
Creating a Specific Evaluation Plan 51
Performing the Evaluation 56
Reporting the Evaluation Results 58
SUMMARY AND OVERVIEW 59

CHAPTER FIVE
SUMMARY 61
CONCLUSIONS 65

BIBLIOGRAPHY 70
APPENDIX A 74

LIST OF FIGURES

Page
1. Decision-Making Settings 5
2. The Pathway Comparison Model 14
3. Data Collection and Analysis 16
4. Types of Decisions 20
5. Outline of an Evaluation Methodology 28
6. Defining the Boundaries of the Evaluation 34
7. Preparing a Program Statement 36
8. Creating a Specific Evaluation Plan 39
9. Performing the Evaluation 40
10. Reporting the Evaluation Results 41
11. Sample Impact Model 55
12. Conceptualizing the Project 98

ACKNOWLEDGEMENTS

I would like to thank several members of the Department of Administrative, Adult and Higher Education, Faculty of Education. First, Dr. G. Dickinson, who offered me the opportunity to participate in an exciting series of research and development projects, one of which formed the basis of this study. He also offered support and expert, consistent advice throughout. Dr. P. Cookson spent long hours reading various drafts of this paper, wrote voluminous critiques and regularly drew my attention back to the forest whenever I insisted on staring at the trees. Dr. D. Rusnell was helpful in the development of an earlier paper on evaluation which initially drew me to the area of program evaluation. Robert and Mae Harvey, my parents, consistently and lovingly encouraged and supported me in this as well as many other projects. I would like to express appreciation to my wife, Betty Harvey, who persevered, cajoled, edited and encouraged when each was required. Carol McGillivray and Sherry Estes typed this document, often at odd hours and from a semi-legible draft. I express my sincere appreciation to all of those mentioned above.

CHAPTER I
INTRODUCTION

It is widely accepted in Canada that at least nine years of schooling are required to attain functional literacy. Nearly five million people in the population, aged fifteen and over, have less than nine years of schooling. British Columbia alone has in excess of 378,000 people, or twenty-four percent of the adult population, in this category (Dickinson, 1979, p. 1). If the definition of basic education usually applied to the pre-adult population, up to and including twelve years of schooling, is extended to adults, the number lacking basic education is even more dramatic. Many correlational studies have illustrated the relationship between a lack of education and such variables as low income, high unemployment, high welfare dependence and other social problems.
In addition, it has been well established in adult education that the best educated participate in continuing education while the undereducated participate rarely or not at all. The public educational institutions of British Columbia have provided few opportunities for the undereducated of this Province (Faris, 1979), but the need for adult basic education (ABE) programs is well documented (Faris, 1979; Dickinson, 1978; Thomas, 1976). It appears, therefore, that many of these individuals as well as society in general would benefit from a systematic approach to providing appropriate, high quality educational opportunities for the undereducated adult.

The Ministry of Education of the Province of British Columbia has not articulated a policy on ABE, but numerous reports and briefs, including the Report of the Committee on Continuing and Community Education, have described this as an area of high priority. The Report of the Committee on Adult Basic Education (Faris, 1979, p. 17) stated that:

...the main prerequisite for closing the gap between the need for and the provision of Adult Basic Education was an integrated and systematic approach. Adult Basic Education as it presently exists in British Columbia is characterized by an ad hoc approach which has resulted in uncertainty, insecurity, and inefficiency in the deployment and use of resources.

One of the strategies adopted by the Ministry to ameliorate this situation has been the sponsorship of pilot projects at various educational institutions to initiate, expand or improve ABE programs or to develop innovative approaches. Given the nature and purpose of such projects it is imperative that systematic formative evaluation be built in, both to maximize the likelihood of success and to provide data for summative evaluation and dissemination of findings to other institutions and regions.

PURPOSE OF THE STUDY

The purpose of this study was threefold. Firstly, it reviewed the extant models and methodologies for the evaluation of adult education programs. Secondly, a methodology for evaluating innovative, short-term adult education projects was developed. Finally, the methodology was used to guide an evaluation of the Adult Basic Education project conducted by a British Columbia college.

NEED FOR EVALUATION

Those charged with the responsibility for program planning and evaluation must decide which programs will benefit from an evaluation study and what resources to commit to such a study. Most educators would agree that the potential benefit of an evaluation is greater for some programs than for others. The difficulty is in identifying those programs for which an evaluation will be cost-effective. Stufflebeam has developed a conceptualization which addresses this situation.

Stufflebeam (1973) presents four different decision settings which result from the intersection of two continua. The first continuum, degree of change, refers to the expected outcome of the activity ranging from a small change to a large change. The second continuum, information grasp, refers to the quantity and quality of information available to the program planners, ranging from low to high. Figure 1 illustrates the resulting decision-making settings.

FIGURE 1
Decision-Making Settings

1. METAMORPHISM (large change, high information grasp)
   Activity: Utopian
   Purpose: Complete change
   Basis: Overarching theory

2. HOMEOSTASIS (small change, high information grasp)
   Activity: Restorative
   Purpose: Maintenance
   Basis: Technical standards and quality control

3. INCREMENTALISM (small change, low information grasp)
   Activity: Developmental
   Purpose: Continuous improvement
   Basis: Expert judgment plus structured inquiry

4. NEOMOBILISM (large change, low information grasp)
   Activity: Innovative
   Purpose: Inventing, testing, and diffusing solutions to significant problems
   Basis: Conceptualization, heuristic investigation, and structured inquiry

(From "Educational Evaluation and Decision Making" by D.L. Stufflebeam. In B.R. Worthen & J.R. Sanders, Educational Evaluation: Theory and Practice, Worthington: Charles A. Jones Publishing, 1973.)

The first decision setting, metamorphism, would produce a completely new system based upon a total understanding of need, required resources and desired outcomes. Such a situation is rarely encountered in reality. The homeostasis setting is typical of a stable, on-going program.
The decisions that are made result in small changes intended to maintain or restore the program. The risks are small since these decisions are based upon a solid grasp of the relevant information.

Incremental decision-making produces small changes based upon incomplete information. This is characteristic of small pilot projects operated within larger on-going programs. Such projects are intended to test some new concept with very little risk or expense. While homeostatic decisions tend to maintain a program, incremental decisions attempt to move a program slowly to a new position.

The fourth quadrant of Figure 1, neomobilism, results from a desire for large changes but without a solid data base. Such situations result from a pressing need for action or the perceived opportunity to make a quantum leap. The project described in this study, similar to many large scale projects, can be described as a neomobilistic setting. The project involved a relatively new institution attempting to serve a different clientele, utilizing new methods and materials without the benefit of a supportive theoretical or empirical data base. Given this situation, the need for evaluation was especially great to assist the program planning and implementation process and to document all aspects of the project so that future projects would have more information with which to work.

Evaluation is a much discussed but little practised activity in ABE. A review of published evaluation studies shows that they tend to fall into two categories - system-wide surveys and end-of-program reports. The former serves a valuable function in that it collects data on a system-wide basis. Knox, Mezirow, Darkenwald and Beder (1972) have developed a methodology called perspective discrepancy assessment which is well suited for this type of evaluation. This strategy, however, relies heavily upon comparing percentile distributions among groups and therefore is not directly applicable to the evaluation of a smaller program. In addition, the six major areas of decision making - recruitment, staffing, instruction, staff development, collaboration, goal setting - are not the most important issues in all ABE programs, especially innovative pilot projects.

The second category of evaluation studies, end-of-project reports, seldom meets the criteria of evaluation. Typically, these reports are prepared in order to satisfy the requirements of a funding organization and do little more than describe the events which transpired during the life of the project.
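Stufflebeam's two continua lend themselves to a simple restatement in code. The sketch below is a minimal illustration added here and is not drawn from the thesis; the function name and flag values are invented for the example, while the typology itself is Stufflebeam's, as shown in Figure 1.

```python
def classify_setting(degree_of_change: str, information_grasp: str) -> str:
    """Place a proposed program change in one of Stufflebeam's four
    decision-making settings (Figure 1). Arguments take the values
    'small'/'large' and 'low'/'high'; in practice the placement is a
    judgment made by the planners, not a mechanical flag."""
    settings = {
        ("large", "high"): "metamorphism",   # complete change, total understanding
        ("small", "high"): "homeostasis",    # maintenance of a stable program
        ("small", "low"): "incrementalism",  # small pilot steps, incomplete data
        ("large", "low"): "neomobilism",     # a quantum leap without a data base
    }
    return settings[(degree_of_change, information_grasp)]

# The project examined in Chapter IV sought a large change with a
# low information grasp:
print(classify_setting("large", "low"))  # -> neomobilism
```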
What is required by many ABE projects and programs is a methodology which makes evaluation an integral part of the program planning process. The data, both descriptive and judgmental, should assist the planners in modifying various aspects of the program to maximize the likelihood of achieving the stated goals or in revising the goals to better match the realities of a new situation. The methodology should also facilitate the completion of summative reports since good formative evaluation simplifies the summative evaluation process.

PLAN OF THE STUDY

The following chapter presents a review of the literature pertaining to the leading definitions and models of educational evaluation as they relate to this study. Chapter Three presents a methodology based upon one of these models. Chapter Four describes the implementation of this methodology in conducting an evaluation of an ABE pilot project. The final chapter draws conclusions based upon the information gathered and experiences resulting from this study.

CHAPTER II
LITERATURE REVIEW

The evaluation of adult basic education programs has been seriously limited by the lack of a sound theoretical base. Most evaluation plans have been implemented without the assistance of a conceptual framework to guide the process. Traditionally, evaluation has been defined in terms of participant satisfaction, participation patterns, and dropout rates, and characterized by the ad hoc collection of data which often answers questions no one is asking. Further, most evaluation reports are submitted too late to affect the project or program under study.

ABE programs tend to be in a state of constant change because of their dynamic nature and precarious funding. Given this fact and the inadequacies of current evaluation methodologies, significant benefit could be derived from the development of a methodology which takes into account the realities of ABE programming. Evaluation must become more than an inconvenience imposed by funding authorities and begin to serve a dual purpose of program improvement and demonstration of accountability.

Definitions of Evaluation

A conceptualization of the evaluation process will be significantly influenced by its definition. The Webster (1972) dictionary defines evaluation as "to examine and judge". While there is nothing technically incorrect with this definition, it offers no direction as to how it can be operationalized. Several educational evaluators have modified this basic definition in an attempt to remedy that problem. Tyler (1950) states that evaluation is essentially the process of determining to what extent the educational objectives are actually being realized by the curriculum and instruction. Stake (1967) suggests that evaluation must describe and judge, and Scriven (1972) argues that evaluation should appraise not only the achievement of goals but also judge the merits of the goals themselves. A widely accepted and useful definition has been offered by Stufflebeam. It states: "Evaluation is the process of delineating, obtaining and supplying descriptive and judgment information, concerning some object's merit, as revealed by its goals, plans, process and product and for some useful purpose such as decision making and accountability" (1976, p. 14).

While there are similarities among definitions, each seems to place the emphasis on a different aspect of the process. A brief examination of the models associated with these definitions will serve to illustrate their different emphases.

The Tylerian Model

The Tylerian Model emphasizes the degree to which programs are achieving their intended objectives.
There are five steps in the process.

1. Analysing objectives to identify and clarify their two basic dimensions. These dimensions are: (a) the behaviour expected from the student, and (b) the key content that is to be mastered.
2. Identifying situations that will give the student a chance to express the behaviour related to the content.
3. Selecting or developing instruments that will record the behaviours.
4. Analysing the amount of change that's taking place in students.
5. Analysing the strengths and weaknesses of the program in helping students achieve the objectives (Steele, 1973, p. 154).

While the attainment of objectives is an important aspect of a program to evaluate, it is not the only one. Programs may, for example, produce important side effects which are totally ignored by the official program objectives. This is especially apparent in ABE. For example, it is common for ABE students to approach their return to school with understandable trepidation. The potential is present either to structure the learning situation such that they experience frequent success, which may restore confidence and instill a desire to continue learning, or to teach the material in a manner which may reinforce their past failures. Thus, while the stated goals refer to cognitive areas there may be important attitudinal side effects. The Tylerian Model is thus too narrow an approach for evaluating ABE programs.

Scriven's Model of Evaluation

Scriven argues that in addition to determining whether objectives have been achieved, evaluation must assess the merit of the objectives. This dual assessment allows the evaluator to judge the overall merit of the program. Scriven differentiates between the goals and the roles of evaluation. The goal, he suggests, is always to arrive at an overall judgment of merit. While the roles are many and varied, they are of two types: formative evaluation, which is a developmental process aimed at program improvement, and summative evaluation, which is an objective, independent assessment of merit. For both types the evaluation process is seen as essentially one of collecting relevant information, condensing it, and judging the merit of a program.

In addition to the concepts and definitions referred to above, Scriven has developed what he calls the Pathway Comparison Model. The Pathway Comparison Model consists of a series of ten steps intended to guide an evaluation study. The first five steps represent the conceptualizing phase, that is, from data to criteria. This phase can be viewed as a process of understanding what needs to be done, gathering information and condensing the information. Steps 6-10 represent the credentialing phase, that is, from criteria to evaluative conclusion. Credentialing involves an examination of costs and alternative programs and the determination of a final judgment of merit (Scriven, 1973, p. 10). Figure 2 summarizes the steps of the Pathway Comparison Model.

This is not a linear model, nor is it intended that one proceed through the steps only once. Properly implemented, it is an interactive process often requiring recycling through the steps.

FIGURE 2
The Pathway Comparison Model

1. Characterization - what is being evaluated and at what level of generality or specificity?
2. Clarification of Conclusion with Client - what form of evaluative conclusions does the client require, e.g., system x is superior to the existing system, or system x is superior to all available systems.
3. Causation - is it an issue? If so, how is it to be dealt with?
4. Comprehensive Check of Consequences - includes looking for unintended outcomes.
5. Conceptualization (Compression) - typically using preceding data but may use some from steps 6-8.
6. Costs - since this is a comparison model one must examine the cost of a program in relation to the cost of similar or alternate programs. The opportunity cost should also be considered.
7. Consumer Characteristics - market and need analysis examines the consumers of the product and the need for the evaluation.
8. Critical Competitors - alternate pathways to the same goal (1-7 for each of them).
9. Credentialing - combining.
10. Conclusions and communications - data processing, design, writing, dissemination.

(Scriven, 1973, p. 28).

In addition to the Pathway Comparison Model and the formative-summative classification, Scriven has described an approach to evaluation called goal-free evaluation. The underlying premise is that an evaluator unaware of the intended outcomes of a program will have greater objectivity and all effects will be assessed equally, not merely the intended effects. Goal-free evaluation is intended not as a substitute for goal-based evaluation, but as a useful supplement. Thus Scriven has taken a broader approach to evaluation than the one suggested by the Tylerian Model.

Stake's Concept of Evaluation

Stake's thinking on evaluation has been influenced by the work of Scriven. While Scriven has occupied himself with the philosophical issues, Stake has attempted to develop a more specific, systematic procedure for performing evaluations. Stake differentiates between informal and formal evaluations, and supports Scriven's position that an evaluation is not complete until a judgment has been made. The framework suggested by Stake is intended to guide data collection and data analysis. Three types of information are sought - antecedent, transaction and outcome - on four different levels - intents, observations, standards and judgments. This approach to evaluation is known as the Countenance Model.

The descriptive data, intents and observations, are analysed by "finding the contingencies among antecedents, transactions, and outcomes and finding the congruence between intents and observations" (Stake, 1973, p. 117). The judgment process follows a different model. Each of the types of data may be judged according to either a set of absolute standards or relative standards.

FIGURE 3
Data Collection and Analysis

RATIONALE | INTENTS, OBSERVATIONS (the DESCRIPTION MATRIX) | STANDARDS, JUDGMENTS (the JUDGMENT MATRIX)

(From "The Countenance of Educational Evaluation" by R.E. Stake. In B.R. Worthen & J.R. Sanders, Educational Evaluation: Theory and Practice, Worthington: Charles A. Jones Publishing, 1973.)

In addition to the Countenance Model, Stake has contributed the concept of responsive evaluation. This dimension of evaluation is contrasted with preordinate evaluation. "Preordinate studies are more oriented to objectives, hypotheses and prior expectations... Preordinate evaluators know what they are looking for and design the study so as to find it. Responsive studies are organized around phenomena encountered - often unexpectedly - as the program goes along" (Stake, 1976, p. 20).
Responsive evaluation emphasizes the need to work with those who may be affected by the evaluation, to understand and be sensitive to their needs, to be willing to sacrifice some measure of precision to gain useful results, and to recognize the existence of differing value perspectives. This process can be likened to grounded theory in that many of the directions and decisions are determined as the program unfolds. In practice responsive evaluation is often more concerned with program activities than with intents and outcomes.

Decision Management Strategies

A fourth major model of educational evaluation may be characterized as decision management strategies and is best represented by the work of Stufflebeam. The conceptualization of evaluation developed by Stufflebeam is called the CIPP Evaluation Model (Context, Input, Process, Product). As Stufflebeam noted, "servicing decision making in change efforts was and is the most unique characteristic of the model. It has been extended recently, however, to provide information both for decision making and accountability in change efforts" (1974, p. 117).

Knowledge of the decision situation is the central issue in the CIPP model. Stufflebeam originally stated that, at a minimum, an evaluator must know:

1. who the decision makers are,
2. what decision questions they must answer,
3. what decision alternatives are to be considered,
4. what criteria are to be used in judging the alternatives, and
5. the projected timing of the steps in the decision process.

In his 1969 article, he refined the categories of knowledge required about the decision situation and cited the following seven types of information:

1. the locus of decision making,
2. the focus of the decisions,
3. the substance of the decisions,
4. the function of the decisions,
5. the objects of the decisions,
6. the timing of the decisions, and
7. the relative criticality of decisions.

The combination of these seven variables could be used to produce a multitude of decision situations. Stufflebeam felt the next stage of development should be a taxonomy of decision situations which was exhaustive and mutually exclusive. Such a classification scheme would then be generalizable to all situations. This was accomplished by classifying decisions as relating to either ends or means and to either intentions or actualities, as depicted in Figure 4. According to Stufflebeam, four types of decisions suggest the need for four types of evaluation; hence the terms context, input, process and product were created.

FIGURE 4
Types of Decisions

ENDS / INTENDED: PLANNING DECISIONS - to determine objectives
ENDS / ACTUAL: RECYCLING DECISIONS - to judge and react to attainments
MEANS / INTENDED: STRUCTURING DECISIONS - to design procedures
MEANS / ACTUAL: IMPLEMENTING DECISIONS - to utilize, control, and refine procedures

(From "Educational Evaluation and Decision Making" by D.L. Stufflebeam. In B.R. Worthen & J.R. Sanders, Educational Evaluation: Theory and Practice, Worthington: Charles A. Jones Publishing, 1973.)

Context evaluation facilitates planning decisions by examining such variables as the environment, the needs, and the opportunities. This corresponds closely to the needs assessment component common to most adult education program planning models. The result is information to assist in the determination of objectives.
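The pairing of Figure 4's decision types with the four CIPP evaluation types can be stated compactly. The sketch below is an illustration added to this discussion; the code is not Stufflebeam's, but the mapping it encodes is taken directly from the text.

```python
# Stufflebeam's Figure 4 crossed with CIPP: each decision type
# (ends/means x intended/actual) is served by one evaluation type.
CIPP = {
    ("ends", "intended"): ("planning decisions", "context evaluation"),
    ("means", "intended"): ("structuring decisions", "input evaluation"),
    ("means", "actual"): ("implementing decisions", "process evaluation"),
    ("ends", "actual"): ("recycling decisions", "product evaluation"),
}

for (focus, status), (decision, evaluation) in CIPP.items():
    print(f"{focus}/{status}: {decision} are served by {evaluation}")
```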
Input evaluation facilitates structuring decisions. Given the program objectives, there are numerous strategies which could be employed to achieve those objectives. These strategies are appraised in terms of resources, time, budget, relevance and potential. The resulting information guides the decision of which plan to select.

Process evaluation provides information for implementing decisions. Process evaluation has three main objectives. The first is to detect or predict defects in the procedural design or its implementation during the implementation stages, the second is to provide information for programming decisions, and the third is to maintain a record of the procedures and events as they occur (Stufflebeam, 1969). The information thus provided is useful both for making adjustments while the program is running and for interpreting the results of the program.

Product evaluation serves recycling decisions. This part of the Stufflebeam Model corresponds closely to Tyler's approach to evaluation in that the emphasis is on attainment of objectives only, to the exclusion of how or why this occurs. Essentially it is a process of operationalizing the objectives, assessing the attainment in terms of the specified criteria and comparing the results with some standard. These activities should occur during the program as well as at the conclusion. The results of the context, input and process information provide the basis on which to interpret the product evaluation.

COMPARISON OF EVALUATION MODELS

Stufflebeam (1969, p. 45) has stated that "the conceptual bases for evaluation are of fundamental importance. If these conceptions are faulty, then evaluations which are based on them must also be faulty". This chapter has reviewed several definitions and four of the leading models of evaluation. The models have similarities and differences as well as strengths and weaknesses. The Tyler model was reviewed not as a practical alternative for today's program evaluation needs, but to show the changes which have occurred in the past twenty years. Michael Scriven has provided the basis for the development of a theory of evaluation but much work needs to be done on his Pathway Comparison Model before it can be used to guide evaluation studies. As Worthen and Sanders (1973, p. 106) state, "the methods required to reliably arrive at an overall appraisal have by no means been fully specified. Thus the practical applications of Scriven's suggestions, although appealing in the abstract, have not been fully realized at this point in the development of the evaluation process".

The models proposed by both Stake and Stufflebeam have been used as the conceptual frameworks for developing methodologies. Steele (1973) has listed the problem situations in which each of these models is applicable. She groups these situations into three types: problems with programming, problems in program management and problems in evaluation, and describes their applicability as shown below.

STUFFLEBEAM MODEL IS APPLICABLE IF

I. You're having problems with programming with:
1. setting priorities,
2. choosing among program possibilities,
3. developing objectives,
4. selecting content and focus,
5. examining factors affecting participation and leaving,
6. determining how you can use scarce resources most effectively,
7. determining the results of programs.

II. You're having problems with management with:
1. understanding the kinds of decisions involved in programming,
2. getting the most out of limited resources,
3. deciding whether ideas for programs are good,
4. understanding the relationship of evaluation to decision making,
5. settling disputes when two or more methods or plans are advocated for doing the same thing,
6. designing accountability strategies,
7. developing multicourse curricula or a multiactivity program,
8. assigning resources to staff units,
9. analysing weaknesses and problems in the operating of the unit,
10. working with an advisory committee,
11. improving use of time and other resources,
12. developing budgets,
13. helping staff use evaluation as a management tool.

III. If you're having problems in evaluation:
1. improving staff attitudes toward evaluation,
2. forming judgments about programs,
3. organizing a comprehensive program review, accreditation team, etc.,
4. determining what parts of the program to involve in your evaluation,
5. determining whether an extensive research effort is appropriate,
6. determining the kind of data needed.

STAKE MODEL IS APPLICABLE IF

I. You're having problems with programming with:
1. understanding how program components interact to produce results,
2. examining the results of programs,
3. identifying key elements contributing to the success or failure of a program.

II. You're having problems with program management with:
1. using a systems approach to organizing programming,
2. considering input, process and outcome relationships of various parts of your program.

III. If you're having problems in evaluation:
1. distinguishing between describing and evaluating program results,
2. determining what parts of the program to include in your evaluation,
3. handling a good deal of unrelated data and making sense out of how it fits into patterns,
4. setting performance standards,
5. determining the kind of data needed.

(From Contemporary Approaches to Program Evaluation: Implications for Evaluating Programs for Disabled Adults, by Sara Steele. Washington: Capital Pub., 1973).

Steele's analysis indicates that Stufflebeam has produced a more comprehensive model, that is, one which has application in a greater number of situations. Stufflebeam's model also has the advantage of parsimony over that of Stake. This is particularly true for the field of adult education since Stufflebeam's Context, Input, Process, Product sequence resembles the basic adult education program planning model of needs analysis, goal setting, design and management of instruction, and evaluation. For these reasons Stufflebeam's model has been selected as the basis upon which to develop a methodology for the evaluation of ABE projects. The following chapter describes this methodology which, while based on the work of Stufflebeam, clearly has an eclectic flavour.

CHAPTER III
AN OUTLINE OF A METHODOLOGY

Chapter Two outlined a number of leading models of program evaluation. Each offers a slightly different conceptualization of the evaluation process but all tend to address themselves to the questions of "why" and "what" while ignoring an equally important "how". The importance of theory building and the refinement of models is indisputable but if evaluation is to assume its rightful role in the program planning process, a methodology or a taxonomy of methodologies must be developed. Stufflebeam (1969, p. 68) has stated:

Once an evaluator has selected an evaluation strategy, e.g., context, input, process, product, he must next select or develop a design to implement his evaluation. This is a difficult task since few generalized evaluation designs exist which are adequate to meet emergent needs for evaluation.

Grotelueschen et al.
have produced a design which can be used to guide the evaluation of programs that fit the model of a typical ABE program. This typical program is relatively traditional, classroom-based, instructional and ongoing. The important areas of such a program are those identified by the authors, namely, program emphases, program resources, program outcomes, continuing professional education and teacher evaluation. Thus whether the context be classroom, local, state or federal, this work has achieved its stated goal of examining "how and why program evaluation activities can be related to improving instruction within the local program context" (p. v). The emphasis throughout is on classroom instruction. As the authors state, this design "does not deal extensively with functions such as counselling and budgeting that are vital but less directly related to instruction" (p. v). Thus, while the approach developed by Grotelueschen et al. is valuable when evaluating the types of programs for which it was intended, it is not applicable to all the activities subsumed under the term ABE. Non-traditional activities such as many of those funded by Training Improvement Project grants (Canada Employment and Immigration Commission) and the Continuing Education Project system (Ministry of Education) do not conform to the five administrative areas which are central to the work of Grotelueschen et al. Such non-traditional programs require a broader, more generalizable methodology which is anchored in a sound theoretical base.

The following section of this paper has drawn concepts from several models in an attempt to design an evaluation methodology flexible enough to accommodate innovative and non-traditional ABE programs. This was accomplished by combining Stufflebeam's CIPP model with a series of five evaluation process activities (Figure 5).

FIGURE 5
Outline of an Evaluation Methodology

Evaluation Process Activities (vertical axis) crossed with the Focus of the Evaluation (horizontal axis: Context, Input, Process, Product):

1. Define boundaries
2. Develop program statement
3. Create a specific plan
4. Perform evaluation
5. Report findings

The five activities listed on the vertical axis result from a review of the literature regarding the steps, phases or activities which make up the evaluation process (Verner, 1964; Fitz-Gibbon, 1978; Grotelueschen, 1974). Most theorists have identified between four and eight discrete activities. The differences, however, result from variances in the definition of what is included in each activity rather than a substantive disagreement about what comprises the evaluation process. As Verner and Booth (1964, p. 96) state, "The evaluation process is the same regardless of whether it is a total program in a community or a single institutional session that is being evaluated. The only differences that appear are in terms of the variables to be considered and the scope of the task".

EVALUATION PROCESS ACTIVITIES

A fully developed evaluation methodology would require that each of the cells in Figure 5 contain a list of all activities and decisions required at that stage of the evaluation and peculiar to that focus. One advantage of this approach is that it allows the evaluator to define or limit the boundaries of the evaluation. The four types of evaluation (CIPP) may be used separately, in combination, or in total.
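One hypothetical way to operationalize Figure 5 is to treat the methodology as a grid of cells, one per (process activity, focus) pair, which an evaluator fills in and then prunes to set the boundaries of a particular study. The sketch below is illustrative only and is not drawn from the thesis; the two cell entries are invented examples, not an exhaustive list.

```python
# Figure 5 as a data structure: a cell for every combination of
# evaluation process activity and CIPP focus, each holding the tasks
# and decisions peculiar to that stage and focus.
ACTIVITIES = ["define boundaries", "develop program statement",
              "create specific plan", "perform evaluation", "report findings"]
FOCI = ["context", "input", "process", "product"]

methodology = {(activity, focus): [] for activity in ACTIVITIES for focus in FOCI}

# A fully developed methodology would fill every cell; two sample entries:
methodology[("create specific plan", "process")] += [
    "identify key interim objectives", "select credible data sources"]
methodology[("perform evaluation", "product")] += [
    "measure attainment against the stated criteria"]

# The evaluator limits the boundaries by selecting a subset of foci:
chosen_foci = {"process", "product"}
in_scope = {cell: tasks for cell, tasks in methodology.items()
            if cell[1] in chosen_foci}
print(len(in_scope), "of", len(methodology), "cells in scope")  # 10 of 20
```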
The specifics of each of the activities vary depending upon whether a context, input, process or product evaluation is being conducted. The remainder of this section provides a general description of the issues which must be addressed at each stage.

Defining the Boundaries of the Evaluation

The first activity in the evaluation process is to define the boundaries of the evaluation. In many situations this will be the negotiations between the evaluator and the sponsoring agency which lead up to an agreement or contract to evaluate. Before either side commits itself, there should be a mutual understanding of what the program is about, what is expected of the evaluation, the evaluator's philosophy of or approach to evaluation, who the evaluation is for, what the relationship between sponsoring agency, program personnel and evaluator will be, and the content and form of the evaluation reports.

The above process is comprised of three stages - collection of data, negotiation and achieving consensus. The negotiation and consensus stages consist of five steps - identifying audiences, defining focus, defining roles, specifying content and form, and checking interpretations.

Upon receiving a request for an evaluation proposal or an offer to conduct an evaluation, the evaluator should collect available data and become familiar with the (proposed) program. These data may be in the form of program proposals, annual reports, budgets, needs assessments, minutes of meetings, previous evaluations and research studies. A second source of data is the personnel associated with the program. This includes the funding agency personnel, the program personnel and present or past participants. Once these data have been gathered and the evaluator has at least a general overview of the (proposed) program, the negotiation phase can begin (Fitz-Gibbon, 1978).

The identification of the audience is an important first step of the negotiation phase. To whom will the evaluation reports be addressed and who else will receive copies? This is more than a clerical question. If the report is addressed to the funding agency, there will be at least the implication of a monitoring or accountability function. If the report is addressed to the program director, the implication is of a program improvement function. Thus, while any number of legitimate audiences may receive the results of the evaluation study, it is imperative that the primary audience be identified.

Once the issue of audience identification is resolved, the focus and role of the evaluation must be defined. Focus refers to the types of decisions which need to be made - if planning decisions, then context evaluation; if structuring decisions, then input evaluation; if implementing decisions, then process evaluation; and if recycling decisions, then product evaluation (see Figure 4). The role of evaluation may be either formative, for program improvement, or summative, for demonstration of accountability and assessment of merit. These issues are interactive rather than independent. Audience identification will influence and be influenced by the role of the evaluation and both in turn will affect the next issue - the relationship between the evaluator and the project personnel. The evaluator may be an external, objective observer or a member of the program team participating in the program planning process.

The final issue to be resolved at this stage is the form and content of the evaluation output. The evaluator may produce weekly, monthly, quarterly and/or end-of-project reports. The reports may be written or oral, formal or informal. The content may be descriptive, judgmental and/or recommendation making. Once all of the above issues have been resolved, the evaluation can proceed from a solid base of mutual understanding and common expectations. Figure 6 summarizes the process of defining the boundaries of the evaluation.

FIGURE 6
Defining the Boundaries of the Evaluation

1. Collect data and become familiar with the (proposed) program:
   a. written documentation
   b. funding agency personnel
   c. program personnel
   d. past or present participants
2. Identify the evaluation audiences:
   a. the primary audience
   b. the secondary audience
   c. those with a right to know
3. Define evaluation focus and roles:
   a. context, input, process and/or product
   b. formative and/or summative
4. Define the role of the evaluator and relationship with other project personnel.
5. Specify form and content of the evaluation output:
   a. weekly, monthly, quarterly and/or end-of-project reports
   b. written or oral, formal or informal
   c. descriptive, judgmental and/or recommendation making
6. Check final interpretation with program personnel.
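The agreement that Figure 6 describes can be captured in a simple record. The sketch below is hypothetical (the class name and fields are invented, not from the thesis); the sample values anticipate the boundaries actually negotiated in the Chapter IV study.

```python
from dataclasses import dataclass

# Hypothetical record of the agreements reached while defining the
# boundaries of an evaluation (items 2-5 of Figure 6).
@dataclass
class EvaluationBoundaries:
    primary_audience: str
    secondary_audiences: list[str]
    foci: set[str]               # subset of {"context", "input", "process", "product"}
    role: str                    # "formative" or "summative"
    evaluator_relationship: str  # external observer through program team member
    report_schedule: list[str]   # weekly, monthly, quarterly, end of project
    report_content: list[str]    # descriptive, judgmental, recommendation making

boundaries = EvaluationBoundaries(
    primary_audience="college officials implementing the proposal",
    secondary_audiences=["Ministry of Education officials"],
    foci={"process", "product"},
    role="formative",
    evaluator_relationship="between external observer and project team member",
    report_schedule=["three written reports during the first year"],
    report_content=["descriptive", "judgmental", "recommendations"],
)
```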
The content may be descriptive, judgmental and/or recommendation making. Once all of the above issues have been resolved, the evaluation can proceed from a solid base of mutual understanding and common expectations. Figure 6 summarizes the process of defining the boundaries of the evaluation. 34 FIGURE 6 Defining the Boundaries of the Evaluation 1. Collect data and become familiar with the (proposed) program: a. written documentation b. funding agency personnel c. program personnel d. past or present participants 2. Identify the evaluation audiences: a. the primary audience b. the secondary audience c. those with a right to know 3. Define evaluation focus and roles: a. context input, process and/or product b. formative and/or summative 4. Define the role of the other project personnel. 5. Specify form and content evaluator and relationship with of the evaluation output: a. weekly, monthly, quarterly and/or end of project reports b. written or oral , formal or informal c. descriptive, judgmental and/or recommendation making 6. Check final interpretation with program personnel. 35 Preparing the Program Statement The program statement is the working document on which the evaluation study is based. It contains basically the program rationale, goals, intended a c t i v i t i e s , budget and timeline. If there exists a proposal for funding, then some or a l l of this task may be complete. This is seldom the case, however, since funding proposals are often based more on what the funding agency wants to hear than on what the project personnel intend to do. It is the responsibility of the evaluator to validate the program statement i f one exists or to coordinate the preparation of the statement i f none exists. This is a corroborative task between the evaluator and the program personnel. The core of the program statement is made up of the program rationale, goals and intended a c t i v i t i e s . The rationale states the need for the program. The program goals must be clearly stated and r e a l i s t i c . Programs often begin with Utopian goals. The preparation of the program statement provides an opportunity to re-examine the goals and assure that they are attainable. The intended activities should be stated as specifically as possible. 36 One test of the adequacy of a program's rationale goals and intended activities is logical contingency (Stake, 1967). Using Stake's terminology, one asks the question, "Can the intended outcomes be expected to occur as a result of the intended inputs and transactions?" A timeline and a budget are essential components of the program statement. If well constructed they are valuable planning tools and provide guidelines for implementing the program and a c r i t e r i a against which to judge the program as i t progresses. As stated earlier, the program statement is a working document and can be expected to change as the program develops. Figure 7 summarizes the process of preparing a program statement. FIGURE 7 Preparing a Program Statement 1. Clearly state the program rationale, goals and intended act i v i t i es 2. Check the rationale, goals and intended activities for logical contingency 3. Construct a program timeline 4. Construct a program budget 5. Check final program statement with program personnel 37 C reat i ng a Specific Evaluation Plan Given an understanding of what is expected from the evaluation and what the program is about, the evaluator can now construct a specific plan for the evaluation study. 
The f i r s t step is to identify the key issues. If the focus is on a product evaluation, the- issues will take the form of specific terminal objectives. If, however, the program goals are long-term and complex, i t may not be possible to measure them objectively. In this situation, the evaluator must work from an impact model and attempt to measure the process goals and interim objectives. An impact model allows one to move backward' from an unmeasurable goal to measurable activities and objectives. For example, i f the goal of a program is to reduce the incidence of i l l i t e r a c y by twenty percent, an interim objective might be to produce a. specified, incremental increase in the reading level of a certain number of persons. The process goals describe the planned activities which must occur i f the interim objectives are to be met. An example of a process goal is to schedule and advertise a literacy course which would attract ten students and maintain eighty percent attendance. The u t i l i t y of an impact model depends upon the validity of the underlying assumption, that is that i f the 38 process goals are achieved then the interim objectives will be achieved and ultimately the program goal will be achi eved. Not a l l of the goals or activities need to be addressed by an evaluation. Decisions must be made as to where the resources of the evaluation will be most productive. The allocation of the resources of the evaluation study should reflect the importance of the goal or activity to the overall program, the relative likelihood of success, and the cost. For example, one would not allocate many resources to evaluating a relatively minor component which has a high probability of success. On the other hand, one would be willing to use a significant portion of the resources to evaluate a crucial activity which had a lower probability of succeeding. When the key issues have been identified, the next task is to select the type of data which are indicative of success or failure and which will be credible to the audiences of the evaluation study. This answers the question, "What type of data should be collected?". A timeline for the evaluation answers "when", and the 39 selection or construction of the instruments answers "how". The remaining component of the plan is the cri t e r i a to be used for interpreting the data. In some cases, the cr i t e r i a can be stated very specifically, for example, enroll at least one hundred participants or ninety percent of the students will gain at least two grade levels on a reading achievement test. In other cases, c r i t e r i a cannot be determined as specifically and may require the use of subjective standards. Figure 8 summarizes the process of creating a specific evaluation plan. FIGURE 8 Creating a Specific Evaluation Plan 1. Identify key issues 2. Select credible data to be collected (the "what") 3. Construct a timeline for the evaluation (the "when") 4. Select and/or construct the instruments (the "how") 5. Determine the cri t e r i a for judging the data 6. Check the plan with the program personnel 40 Performing the Evaluation When the preced i ng act i vit ies have been accomplished, the actual implementation of the evaluation plan should be essentially a technical task. The data are collected as per the timeline, analysed and judged according to the c r i t e r i a . This, however, is the ideal and is rarely encountered in real i ty . 
Performing the Evaluation

When the preceding activities have been accomplished, the actual implementation of the evaluation plan should be essentially a technical task. The data are collected as per the timeline, analysed and judged according to the criteria. This, however, is the ideal and is rarely encountered in reality. More likely it will be found that timelines will need to be revised, intended activities altered, goals redefined, and the budget altered. This is one of the values of evaluation as it provides the data base and mechanism for rational change. All changes should be reflected in the program statement to assure that decisions are not made arbitrarily and independently, but rather as part of the overall, integrated plan. Figure 9 summarizes the process of performing the evaluation.

FIGURE 9
Performing the Evaluation

1. Collect the data
2. Analyse the data
3. Revise the evaluation plan as required
4. Check analysis with the program personnel

Reporting the Evaluation Results

All of the preceding activities will be valueless if the results of the evaluation are not communicated to those in decision making roles. The findings must be credible to the evaluation audience. Additionally, the reports must be timely. Timeliness is especially important for formative evaluations if decisions are to be made while there is an opportunity to alter the program. The reports should have the form and content that were agreed upon during the definition of boundaries phase. The role of the evaluator in the decision making process should conform to the agreement reached then. Figure 10 summarizes the process of reporting the evaluation results.

FIGURE 10
Reporting the Evaluation Results

1. Ensure credibility
2. Ensure timeliness
3. Form and content should conform to the agreement
4. Assist decision makers in assessing required changes and examining alternatives
5. Check total evaluation process with program personnel

For ease of discussion, each of the five evaluation process activities has been dealt with separately. They are not, however, discrete or independent. The entire process is interactive. Decisions or changes made at any phase of the evaluation may affect any other phase. For example, if the focus of an evaluation is process, the role is formative, and the audience is the project personnel, then these decisions have an impact upon several other evaluation process activities. The evaluation reports must be submitted in time to permit changes to the program, data collection cannot emphasize follow-up, and the evaluation procedures and reports must be credible to the project personnel. Further, if there are significant changes to the program statement in terms of the budget, objectives or timeline, then this may necessitate changes to the evaluation timeline, budget, data collection, instrumentation, and form and content of evaluation reports.

If evaluation is to have an impact upon educational practice, it must begin to prove its usefulness by providing information for program improvement. Given this goal, formative evaluation should be given priority over summative evaluation. This is especially important in ABE programs which are evolving and developing. If evaluation results are to be utilized, it is essential that program personnel be aware and supportive of each phase of the evaluation process. The evaluator's role is to provide the technical assistance and coordination necessary to make this possible. The use of this methodology in the evaluation of an ABE project conducted during 1979-80 is described in the following chapter.

CHAPTER IV
INTRODUCTION

The methodology outlined in Chapter 3 of this paper was designed for use in guiding the evaluation of ABE programs, especially innovative or non-traditional programs.
The opportunity arose to test the utility of the methodology when the Continuing Education Division of the Ministry of Education chose to let a contract for the evaluation of a major ABE project being designed and implemented at a B.C. college. This chapter will summarize the activities which occurred prior to the commencement of the evaluation, describe the evaluation process activities and assess the utility and efficacy of the methodology in guiding the evaluation study.

SUMMARY OF PRE-EVALUATION ACTIVITIES

In 1977 the Continuing Education Division of the Ministry of Education solicited a proposal, from one of the province's colleges, for the development and implementation of an adult basic literacy delivery system. Previous to this date, the college had offered only those programs sponsored by the Canada Employment and Immigration Commission. In the area of academic upgrading this meant Basic Training for Skills Development at the II, III, and IV levels. Level II, which equates to Grades 5 through 8, was offered in only one community. Thus, given the huge geographic area (one-third of the province), the relatively high incidence of illiteracy (14%), and the traditional lack of programs, the need for a non-traditional delivery system seemed apparent.

Preliminary discussions and research began in early 1977. Official recognition of the need for such a project was given by the College Council in August, 1977 when a motion was passed to establish a special pilot project in adult basic literacy. An early draft of the project proposal was approved in principle by the council in September, 1978. The proposal was reported to be "in a state of limbo" in November 1978 pending a meeting with Ministry officials. This meeting did not occur until January 10, 1979. The College Council gave final approval to the project in March, 1979 and the funding for the 1979-80 fiscal year was included in the college operating budget.

Since the project was ambitious in intent, expensive, and fell into Stufflebeam's decision making setting of neomobilism (see p. 5), a large degree of change combined with low information grasp, the Ministry of Education decided to contract for an external, formative evaluation.

THE EVALUATION

The following section, which describes the actual implementation of the evaluation, is organized upon the five evaluation process activities described in Chapter 3 (see p. 28). Each of the process activities is referenced to the appropriate figure in Chapter 3 so that the reader may compare the methodology outlined there with the actual evaluation study being described in this chapter. The numbers and letters in parentheses in the following sections refer to the items and sub-items found on each table.

Defining the Boundaries of the Evaluation

The purpose of this activity was to focus and clarify what the evaluation was intended to do and for whom (see Figure 6). The first step for the evaluator was to
Following this information gathering exercise a series of meetings with Ministry personnel resolved the next four issues (2-5) which were audience identification, definition of focus and roles, definition of relationships, and specification of the form and content of the evaluation out put. The primary audience of the evaluation was to be the College o f f i c i a l s (2-a) charged with the implementation of the proposal. The secondary audience was to be the Ministry of Education o f f i c i a l s (2-b) charged with monitoring the project and both the Ministry and the College had the right to share the evaluation findings with others (2-c) who where judged to have a legitimate interest. 48 Recalling Stuff1ebeam1s four f o c i , context, input, process and product, the decision was made that the study would be primarily a process evaluation (3-a) with enough emphasis on the product focus to be able to place the findings in proper perspective. The Context and Input components had been completed prior to the commencement of this study. All parties agreed on the need for the development of an adult basic literacy program (context) and the necessary resources (inputs) appeared available. These resources included the commitment of the College Board as illustrated by the College Educational Plan, the enthusiasm of the College Administration and the faculty member charged with developing the proposal, and the a b i l i t y and willingness of the Ministry of Education to fund the project over a two year period. The role of the evaluation was formative (3-b). The evaluator's role was to provide consultative and technical services to the project staff. The intent was to free the staff of the time-consuming and specialized duties of designing and implementing the evaluation and to provide a mechanism for accessing the distant resources of the lower mainland in general and the University of British Columbia 49 in particular. The role of monitoring the project for accountability remained with the Ministry of Education, although some of the impressions created and decisions made were based upon data collected for the formative evaluation. Thus the role of the evaluators (4) fe l l between that of independent external observers and actual members of the project team. Since the Ministry of Education was funding the evaluation study the reports were to be sent there with copies to the College principal , the responsible college administrator and the project coordinator. It was decided that during the f irst year (5-a) of the project three written reports (5-b) would be submitted which would describe the act ivit ies during the period in question, making observations and judgments regarding the relationship between these activit ies the project goals and plan and make recommendations (5-c) intended to maximize the likelihood of the project achieving the intended goals. Due to the geographic distances between the three participants - the College the Ministry of Education and the evaluator - the decisions were discussed in a series of 50 bilateral meetings (6) but nevertheless agreed to by a l l . The evaluator was able to travel to the colleges on three occasions for a total of seven days. This made the establishment of a trusting relationship d i f f i cu l t and since by the time of the f irst visit the project was seriously behind schedule the evaluation study seemed always to be perceived as somewhat of a threat. 
Preparing the Program Statement

The program statement is the working document on which an evaluation study is based (see Figure 7). It contains the program rationale, goals, intended activities, budget and timeline. The project proposal (1) met all of the requirements of a program statement and constituted the plan which the project personnel expected to follow. In checking the logical contingency of the rationale, goals and intended activities (2), that is, "Can the intended outcomes be expected to occur as a result of the intended inputs and transactions?" (Stake, 1973), it was apparent that, despite the resources, the stated goals were overly ambitious. This perception was communicated to the project coordinator, but the decision was made to attempt to achieve all the goals. The timeline (3) and the budget (4) appeared well designed, although project activities were tightly scheduled and would therefore require precise planning and effective management of time.

It was agreed (5) that the project proposal would provide the baseline against which the evaluation would measure progress toward the goals. While the college had committed itself by contract to perform those activities stated in the project proposal, there was an informal understanding that changes could be made by mutual agreement as the project developed. This in fact was one of the responsibilities of the evaluator: to recommend revisions to the plan in order to maximize the attainment of goals.

Creating a Specific Evaluation Plan

The first step in creating a specific evaluation plan is to identify the key issues (see Figure 8). In the project in question there were three general goals (1), each with a series of related activities.

Goal 1: To increase the educational opportunities available to undereducated adults, especially those who live and work in small rural settlements or native Indian communities, or who live in a major centre but cannot attend full-time classes.

Intended activities:
1. to develop a curriculum for a tutor training course
2. to train volunteer tutors
3. to recruit 400 students
4. to match tutors and students for purposes of instruction
5. to provide support services and in-service training for the volunteer tutors
6. to establish and operate a "Laundromat Learning Centre"
7. to establish and operate an Oil Patch Camp Class

Goal 2: To develop public awareness of the need for and availability of adult basic education programs.

Intended activities:
1. to develop a questionnaire for assessing educational needs
2. to recruit and train volunteers from the Women's Institutes to administer the questionnaire
3. to compile and assess the data
4. to plan and implement a publicity/awareness campaign including a newsletter, paper-graphics advertising, media advertising, stage play, tutor signs, and tutor merit awards.
Goal 3: To develop curriculum models and materials which will improve the quality of adult basic education programs within the college region and also have utility for other regions of the province.

Intended activities:
1. to review existing curricula and learning materials and develop or adopt a rational, integrated curriculum which will be consistent throughout the college region
2. to pilot test the above-mentioned curriculum
3. to publish the revised form of the above-mentioned curriculum
4. to develop and publish other learning materials as required, in the areas of social orientation skills, foundation literacy, and work exploration and readiness.

The overall project goal was to initiate a delivery system which would significantly reduce the level of illiteracy in the college region. Given that this goal is long-term and complex, its direct measurement was not within the scope and resources of the evaluation. This fact, plus the decision to conduct a formative, process evaluation, determined that this study would concentrate primarily upon the achievement (2) of interim objectives and the actual implementation of the project proposal as planned.

The timeline (3) for the evaluation was constructed to parallel the proposed activities and development of the project. Evaluation reports were to be submitted as soon as possible following September 30, 1979, January 1, 1980 and April 1, 1980. The evaluation plan called for the first report to emphasize curriculum development, public awareness and the management of the project. The second report would emphasize tutor recruitment and training, and the final report would summarize the entire first year and assess the impact of the project on the adults registered as learners.

Given the geographic distance between the evaluator and the project locale and the necessity of conducting an unobtrusive evaluation, it was planned to use the data (4) collected as an ongoing part of the project as the basis for the evaluation; for example, using the pre- and post-tests administered by the tutors instead of administering tests solely for the purpose of the evaluation.

The final task in the creation of a specific plan is the determination of criteria for judging the data. Given that the focus was on process and that the interim objectives and activities were clearly specified, the primary criterion was the performance of the task (5). For example, the program statement called for the recruitment of 400 adult students, and the criterion was the number recruited. Thus at this level of evaluation, the criterion was the performance of the planned activities within the parameters of the budget and the timeline. At another level, the criterion became the quality of the performance and an assessment of the impact these activities had, or would have, on the long-term goal of reducing the level of illiteracy (5). This secondary level is more subjective and requires the development of an impact model. Figure 11 briefly describes the model which was designed to reflect the relationship between the project and its activities, interim objectives and goals.

FIGURE 11
Sample Impact Model

Activities:
1. recruit 400 tutors
2. recruit 400 adult learners
3. train tutors
4. match tutors and learners
5. support tutor/learner pairs

Interim Objective:
1. raise the reading level of 400 adults by two grade levels

Goal:
Reduce the level of illiteracy within the college region

The model predicts that if the activities are performed as planned then the interim objectives will be achieved, and those in turn will contribute to the achievement of the goal. The criteria, then, were the measurement of the degree to which the activities were performed and the objectives achieved, together with a subjective assessment of the validity of the underlying impact model.
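The two levels of criteria, task performance against stated targets and the predicted chain from activities to goal, can be expressed compactly. The following is an illustrative sketch only, not part of the original study; the targets are those of Figure 11 and the program statement, the actual counts are the first-year figures reported in Appendix A, and the function and field names are assumptions:

from dataclasses import dataclass

@dataclass
class Activity:
    description: str
    target: int    # planned level of performance from the program statement
    actual: int    # level actually achieved

    def performance(self) -> float:
        """Primary criterion: the degree to which the planned task was performed."""
        return self.actual / self.target

# Two activities from the sample impact model (Figure 11).
activities = [
    Activity("recruit volunteer tutors", target=400, actual=99),
    Activity("recruit adult learners", target=400, actual=93),
]

# The model predicts: activities performed as planned -> interim objective
# (raise the reading level of 400 adults by two grade levels) -> long-term
# goal (reduce illiteracy in the college region). The secondary criterion,
# the validity of this chain, remains a subjective judgment.
for a in activities:
    print(f"{a.description}: {a.performance():.0%} of target")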
Performing the Evaluation

In Chapter 3 (p. 30) it was stated that "the actual implementation of the evaluation plan should be essentially a technical task. The data are collected as per the timeline, analysed and judged according to the criteria (see Figure 9). This, however, is the ideal and is rarely encountered in the real world of education." True to this prediction, neither the project nor the evaluation proceeded as planned.

Two activities which were performed prior to the beginning of the project were successful. These were the creation of a stage play which depicted the plight of an adult illiterate in today's society and the operation of a volunteer tutor training program. Both of these activities were new to the college and the college personnel. In retrospect, the success of these high-profile activities appears to have consumed the energy and enthusiasm of those charged with the management of the project. This, combined with several developments which are described in the first evaluation report (see Appendix A), resulted in the project getting off to a slow start.

The evaluation plan called for the first report to examine curriculum development, the public awareness campaign and the management of the project. The second report was to emphasize tutor recruitment and training, while the third report was to summarize and assess the learning of the students. Because many of the planned activities did not occur, or occurred only in part, the plan of the evaluation study was altered considerably. The data collection (1) process became one of chronicling the activities which had occurred. The analysis (2) was a comparison of these activities with those of the program statement and an examination of the underlying reasons for the lack of progress. This analysis produced data which were useful for identifying alternative actions for the consideration of the project personnel and for the revision of priorities.

The type of data utilized and the source of the data differed greatly from the evaluation plan. The plan called for the evaluation to utilize data generated by the various project activities, such as pre/post tests, student profiles and dropout rates, but the fact that the project was chronically behind schedule and that many of the activities never occurred shifted the emphasis of the evaluation. The main issue became the management of the project, and the main source of data became interviews with the project staff and College and Ministry of Education officials (3). The developments described above greatly affected all aspects of the evaluation study, including the content of the evaluation reports.

Reporting the Evaluation Results

The evaluation reports (see Figure 10) were submitted in the agreed format (3) and close to the planned timeline (2). In each report, recommendations (1, 4) were made which sought to assist the project personnel in setting both short-term and long-term priorities. The reports are contained in Appendix A.

SUMMARY AND OVERVIEW

Neither the project nor the evaluation proceeded as planned. As the evaluation reports describe, the project did not accomplish many of its stated goals. The immediate cause of this situation was the failure to initiate and implement many of the planned activities. Personnel were not hired, curriculum was not developed, and the intended numbers of tutors and learners were not recruited.
The evaluation study, in part due to the flexibility of the methodology described in Chapter 3, responded to the changing conditions and produced results which did have a positive influence on the project. Some of the recommendations were acted upon and, as in the famous Hawthorne studies, the mere presence of an outside observer may have resulted in greater productivity. As stated earlier, formative evaluation facilitates summative evaluation. The evaluation reports provide, for those who have access to them, valuable information on the development and implementation of a volunteer-tutor literacy program and illustrate some of the pitfalls of decentralized curriculum development.

In conclusion, the methodology provided the conceptualization of the evaluation process as well as a description of the activities which were essential components of that process. The methodology successfully combined the strength of a sound evaluation model (Stufflebeam's) with the practicality of a series of evaluation process activities. The following chapter summarizes this study and makes recommendations for the further development of the evaluation of ABE programs.

CHAPTER V
SUMMARY AND CONCLUSIONS

This concluding chapter is divided into two major sections. The first section summarizes the purpose of the study, the development of the evaluation methodology and the pilot test. The second section draws and briefly discusses the conclusions resulting from this study and makes recommendations for further research.

SUMMARY

In British Columbia, ABE has evolved into a significant program area on the verge of gaining acceptance as a legitimate and important part of the public educational system. This development, however, has occurred in an ad hoc manner and resulted in a diverse collection of uncoordinated, ill-defined programs seeking to attain unstated or fuzzy goals. If the resources currently committed to these programs are to be solidified and increased, it is imperative that the present program impact be measured, that needs, resources, processes and outcomes be articulated, and that the benefits of increased programming be predicted. This is best accomplished through the effective use of program evaluation.

The state of development of evaluation for ABE has inhibited practitioners from taking advantage of this technology for decision-making and accountability. The field of evaluation may be viewed as a hierarchy. At the base are checklists, and as one moves up the hierarchy through designs, methodologies, models and theories there is an increased ability to apply the technology to a range of educational situations. The specificity of guidelines and evaluation activities, however, decreases as one moves up the hierarchy.

Simplistic designs and checklists are seldom linked to any of the leading models and tend to encourage the compilation and analysis of existing data without necessarily addressing important issues. Models of evaluation have been developed by several researchers including Scriven, Stake and Stufflebeam. These models, while of interest to academics and students of evaluation, provide little guidance to educational practitioners. Few methodologies exist which are useful for evaluating innovative, non-traditional ABE programs. A well-developed theory of evaluation has yet to evolve. Scriven's work on some of the more philosophical issues comes as close as any to beginning the theory development process.
Thus, the field of program evaluation is characterized by a lack of sound theory, a series of complex models, an absence of methodology and an abundance of designs and checklists which are not tied to a sound theory or model.

This study reviewed and analysed the leading models of evaluation, identified the CIPP model as being superior, and developed an evaluation methodology based upon this model. Five evaluation process activities were defined which are general enough to be applicable to most evaluation activities while providing sufficient specific guidance to be of assistance to the informed generalist. These activities were crossed with the CIPP model to produce a matrix which outlines an evaluation methodology. This outline assists in defining the focus of a proposed evaluation study, whether context, input, process, product or some combination of the four. Each of the process activities was expanded into a list of issues which should be addressed by an evaluation study. While each of the process activities was described separately, they are not discrete or independent. They are interactive, and decisions made at any phase may affect any other phase. The overall approach represented by the methodology attempts to maximize the likelihood that an evaluation study is timely, credible, supported by those involved in the program being evaluated, and useful to the decision makers identified as the primary audience.

Chapter 4 briefly described a pilot test of the methodology when it was used to evaluate an ABE project that included curriculum development, public awareness, and non-traditional delivery methods. This description was organized according to the outline presented in Chapter 3. The focus of the evaluation was primarily on process, with sufficient emphasis on the product to allow the findings to be placed in proper perspective. While the project being evaluated accomplished few of its stated objectives, the methodology provided a useful and flexible structure to guide the evaluation process.

CONCLUSIONS

The conclusions discussed below result from the development of the evaluation methodology described in Chapter 3 and from the evaluation study which piloted the methodology and is briefly described in Chapter 4.

On the positive side, it is concluded that the methodology is a useful tool for developing an overview of the evaluation process. The methodology is grounded in a model (Stufflebeam's) which is as well developed as any currently available. It further describes a series of activities which interact with various parts of the model. This combination partially fills the void which has existed to date between the developing theories and models of evaluation and simplistic checklists which are not based on any theory or model. Thus the methodology goes part of the way toward linking theory and practice.

The methodology is especially useful at the beginning and at the end of an evaluation study, that is, in focusing the evaluation plan and in reporting the results. Too often evaluations consist of collecting and summarizing available data. Using the matrix produced by crossing the evaluation foci and process activities, evaluators will be assisted in more clearly defining the nature and purpose of the study and will be better able to present their findings to the audiences of the evaluation.
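The matrix referred to above can be pictured as a simple grid: the four CIPP foci crossed with the five process activities, each cell holding the issues relevant to that combination. The sketch below is illustrative only; the single cell entry shown paraphrases this study's plan and is not the content of Figure 5:

# Illustrative sketch of the methodology matrix: Stufflebeam's four foci
# crossed with the five evaluation process activities of Chapter 3.
foci = ["context", "input", "process", "product"]
process_activities = [
    "defining the boundaries",
    "preparing the program statement",
    "creating a specific evaluation plan",
    "performing the evaluation",
    "reporting the results",
]

# Each cell holds the list of issues or decision points for that combination.
matrix = {(activity, focus): [] for activity in process_activities for focus in foci}

# The pilot study of Chapter 4 was primarily a process evaluation, so its plan
# drew mainly on the "process" column, for example:
matrix[("creating a specific evaluation plan", "process")].append(
    "compare the activities actually performed against the program statement"
)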
Evaluation, especially formative evaluation, loses much of its practicality if it is not responsive to the needs of those who will use the results and if it is not sensitive to the changes which occur in most programs as they develop. Further, if those involved in a program are not supportive, or at least tolerant, of an evaluation, the task is complicated. Properly implemented, the methodology maximizes the likelihood that a study will be responsive to needs, sensitive to changes and supported by program personnel. This approach is built into the structure and is illustrated by the concluding items in each of the process stages.

While every effort was made to communicate the purpose and plan of the evaluation, the process was still perceived as a threat by the personnel of the project on which the evaluation was conducted. Evaluation has been described as the "whore of the establishment" (Scriven, 1972, p. 1) and this attitude persists among many adult educators. It is safe to conclude that evaluators need to go to extraordinary lengths in their attempts to secure the support of all those involved in a program being evaluated. Despite the resulting difficulties, the methodology provided a flexible structure for guiding the evaluation of a program which strayed significantly from the original plan and timetable.

On the negative side, the methodology is not a fully developed package for use by those uninitiated in the literature and jargon of program evaluation. If this work is to have an impact on adult basic education programs, it could be achieved through a workshop or series of workshops involving those most likely to be charged with program evaluation. From a British Columbia perspective, this group would consist of the directors of adult basic education from fifteen community colleges and some twelve to fifteen school boards. The explanation and discussion of the methodology by such a group over a two-day period could result in significant improvement in both the quality and quantity of evaluation studies in this province.

A search for exemplary evaluation studies in the area of ABE led to the realization that few systematic evaluations are conducted and even fewer result in enhancing the degree to which informed decisions can be made or accountability documented. Given the traditionally marginal nature of adult basic education programs and the possibility of a period of prolonged economic restraint, it is imperative that evaluation move away from the case study and "happiness index" approach and toward more systematic and sophisticated evaluation plans.

Further development of the methodology should await several field tests which could facilitate the creation of a comprehensive list of questions or decision points for each of the cells of Figure 5 (see p. 28). This list would, in effect, be an expansion of each of Figures 6-10 specifically for each focus (context, input, process and product) of evaluation. Stufflebeam described such a development as producing a taxonomy of decision situations which is exhaustive and mutually exclusive. This study has taken the first step toward this end, but much work remains to be done. It is hoped that this methodology will be pilot tested on a variety of ABE programs and that additional research will result in strengthened ties between the best theories and models and the field of practice.

BIBLIOGRAPHY
"Methods and Theories of Evaluating Programs." Journal of Research and Development, Spring 1975, 8 (3), 2-15. Bolton, Elizabeth B. "Towards a Practical Approach to Evaluating Adult Education Programs: An Application of Scriven's Methodology." Community/Junior College Research Quarterly, 1977, 1_ 409-420. Dickinson, Gary. Undereducated Adults in British Columbia- 1976. Vancouver: Department of Adult Education, The University of British Columbia, 1979. Dickinson, Gary. The Undereducated of British Columbia. Vancouver: Department of Adult Education, The University of British Columbia, 1978. Dickinson, Gary. "Judgmental Aspects of Program Evaluation". Adult Training, 1977, 2 (4), 26-29. Dickinson, Gary and Lamoureux, Marvin. "Evaluating Educative Temporary Systems". Adult Education, 1975, XXV (2), 81-89. Faris, Ron (Chairman). Discussion Paper 01/79: Report of the Committee on Adult Basic Education. Information Services, MinistTy of Education, Science and Technology, Province of British Columbia, 1979. Faris, Ron (Chairman). Report of the Committee on Continuing and Community Education in British Columbia Victoria: Ministry of Education, Province of British Columbia, 1976. Farmer, James. "Strengthening a Design for Evaluating a Broad-Aimed Functional Literacy and Family Life Education Program." Literacy Discussion. 5 (3), 421-40. Fitz-Gibbon, Carol T. and Morris Lynn Lyons. Program Evaluation Kit - t i t l e s : Evaluator'sHandbood ffow to Deal with Goals and Objectives How to Design A Program Evaluation fTow~to Measure Program Implementation How to Measure Achievement How to Measure Attitudes How to Cal cu 1 ate~~Stat i st ics How to Present An Evaluation Report Beverly H i l l s : Sage Publications, Inc., 1978. 71 Forest, Laverne. "Program Evaluation For Reality". Adu1t Education, Spring, 1976, 26_ (3 ), 167-1 77. Grotelueschen, A.D., Gooler, D.D., Knox, A.B., Kemmis, S. , Dowdy, I. and Brophy, K. An Evaluation Planner. Urbana: University of I l l i n o i s , 1974. Groeleuschen, A.D., Gooler, D.D. and Knox, A. B. E yaluation in Adult Basic Education: How and Why. Danvi11e: The Interstate Printers and Publishers, Inc., 1976. Knox, A.B., Mezirow, J., Darkenwald, G.G., and Beder, H. An Evaluation Guide for Adult Basic Education Programs. New York: Center for Adult Education, Teachers College, Columbia University, 1972. Mezirow, Jack. Evaluating Statewide Programs in ABE: A Design with "Tnst rument at i on. iTew York : Center for Adult Education, Teachers College, Columbia University, 1975. Mezirow, J., Darkenwald, G., & Beder, H. An Evaluation of Adult Basic Education in the State of Iowa. New York: Columbia University, 1 975. Niemi John A. and Anderson, Darrell V. "Remedial Adult Education. An Analytical Review of Evaluation Research." Continuous Learning, 8: 90-94. Rossi, P.H. and Williams, Walter (Eds.) Evaluating Social Programs: Theory, Practice and P o l i t i c s . New York: Seminar Press, 1972. Saylor, J.G. and Alexander, W.M. Planning Curriculum for Schools. New York: Holt, Rinehart and Winston, Inc., 1966. Scriven, Michael. "The Evaluation of Educational Goals, Instructional Procedures, and Outcomes or the Iceman Cometh". ERIC Document Reproduction Service No. ED 079 394, 1972. Scriven, Michael. "The Methodology of Evaluation". In Worthen, B.R. and Sanders, J.R. (Eds.), E ducat ional Evaluation: Theory and Practice. Worthi ngton, Oh i o: Charles A. Jones Publishing Company, 1973. Scriven, Michael. "Prose and Cons About Goal-Free Evaluation". Evaluation Comment - III (4 ) 1 - 8. 
Shearon, Ronald. "Evaluating ABE Programs." Adult Leadership, 19 (1), 15-16; 20-21.

Stake, Robert. Evaluating the Arts in Education: A Responsive Approach. Columbus, Ohio: Charles E. Merrill Publishing Company, 1975.

Stake, Robert E. "The Countenance of Educational Evaluation." Teachers College Record, April 1967, 68, 623-640.

Steele, Sara. Contemporary Approaches to Program Evaluation: Implications for Evaluating Programs for Disadvantaged Adults. Washington: Capitol Publications, 1973.

Steele, Sara. "Program Evaluation - A Broader Definition." Journal of Extension, Summer 1970, pp. 5-17.

Stufflebeam, Daniel L. "Educational Evaluation: Some Questions and Answers." A paper presented at the Annual Meeting of the American Association of School Administrators, Atlantic City, New Jersey. ERIC Document Reproduction Service No. ED 128 425.

Stufflebeam, Daniel L. "Evaluation as a Community Education Process." Community Education Journal, March/April 1975, 5 (2), 7-12; 19.

Stufflebeam, Daniel L. "Alternative Approaches to Educational Evaluation: A Self-Study Guide for Educators." In James W. Popham (Ed.), Evaluation in Education: Current Applications. Los Angeles: McCutchan Publishing Corporation, 1974.

Stufflebeam, Daniel L. Educational Evaluation and Decision Making. Itasca, Illinois: F.E. Peacock Publishers, Inc., 1971.

Stufflebeam, Daniel L. "Evaluation as Enlightenment for Decision Making." In W.H. Beatty (Ed.), Improving Educational Assessment and an Inventory of Measures of Affective Behaviour. Washington, D.C.: Association for Supervision and Curriculum Development, NEA, 1969.

Thomas, Audrey M. Adult Basic Education and Literacy Activities in Canada 1975-76. Toronto: World Literacy of Canada, 1976.

Tyler, Ralph W. Basic Principles of Curriculum and Instruction. Chicago: University of Chicago Press, 1950.

Webster's Seventh New Collegiate Dictionary. Toronto: Thomas Allan and Son Limited, 1972.

Worthen, Blaine and Sanders, James. Educational Evaluation: Theory and Practice. Worthington, Ohio: Charles A. Jones Publishing, 1973.

Verner, C. and Booth, A. Adult Education. New York: The Center for Applied Research in Education, 1964.

APPENDIX A

Appendix A consists of the three reports produced as a result of the evaluation study described in Chapter 4. Names and locations have been deleted to preserve the anonymity of individuals and institutions.

FORMATIVE EVALUATION OF THE ADULT BASIC EDUCATION PROJECT
Report for the Quarter Ending September 30, 1979

The Department of Adult Education at the University of British Columbia was requested by the Division of Continuing Education of the Ministry of Education, Science and Technology to serve as external evaluator for a pilot project in adult basic education being developed by ( ) College. As specified in the agreement between the Department and the Division, this document is the first of three formative evaluation reports to be submitted. The purpose of the report is to review the progress of the project to September 30, 1979 and to make recommendations for its improvement. In preparing this report, the evaluators reviewed documents provided by the Division and the College, met with the project staff, and consulted with other College and Ministry officials. Readers of this report should bear in mind that the project is in its early stages, with six months having elapsed in a proposed two-year span.
The observations, judgments, and recommendations made herein are intended to facilitate the successful attainment of the project's objectives rather than to impede or halt its progress.

Official recognition of the need for the project was given by the College Council in August, 1977 when a motion was passed to establish a special pilot project in adult basic education, to be supported by supplementary funding from the Ministry. An early draft of the project proposal was approved in principle by Council in September, 1978 pending a meeting with Ministry officials which did not occur until January 10, 1979. College Council gave final approval to the project in March, 1979 and funding for the 1979-80 fiscal year was included in the College operating budget.

The intent of the project is in keeping with the Educational Master Plan of the College. That document recognized the fact that many of the region's residents have little formal education. The College has committed itself to serving the educational needs of all area residents through part-time as well as full-time programs, in smaller communities and rural areas as well as the major centres. Non-traditional types of program delivery are acknowledged in the Educational Master Plan as being both desirable and necessary given the geographic, climatic, and population conditions of the region. In addition, the College Council in March, 1979 identified adult basic education as the top program priority for 1979-80. It appears, then, that the project has a firm grounding of support at the policy level within the College.

The emphasis given to adult basic education programs appears justified in light of the number and percentage of residents who have not attained more than eight years of schooling. As the 1976 Census indicates, 1,100 adults (4 percent) have five or fewer years of schooling and 6,295 (22 percent) have five to eight years of schooling. Thus, there are approximately 7,500 adults who are potential participants in the project. Rural and native Indian adults were identified as the principal target groups in the project proposal, and this is consistent with the College policies noted above.

The College had engaged in several activities prior to the current project which can be viewed as preparatory. During 1978-79, the College ran adult basic education programs at eight locations. With the exception of two programs conducted on Indian reserves, the remainder were located in College facilities. A total of sixteen programs were conducted involving 254 students. The range of programs included BTSD levels II, III, and IV, BJRT, GED, BEST, EOW, REPLACE, Pre-technical, and Tutor Training. Most of the students attended full-time.

The stage play, ( ), was written and produced under a separate agreement prior to the commencement of the project, although it became a major component of the project's public awareness campaign. The play was sponsored by the Ministry and developed jointly by the College and ( ). The play was designed to introduce audiences to the concept and magnitude of adult illiteracy in society. Through a series of scenes emphasizing personal accounts, the play illustrates the dilemma faced by the non-reader in a print-oriented society. It attempts to increase public awareness of adult illiteracy and to stimulate a more positive attitude toward the person who wants and needs to learn to read.
The tour consisted of 23 performances in the period April through June, 1979. It was seen by some 2,000 students and 450 adults. The reviews of and reactions to ( ) were generally very favourable.

A third preparatory activity was the training of 29 tutors, of whom 19 completed training, in ( ) under a separate contract with the Ministry. That program was completed in April, 1979, so that the prospective tutors were available for matching with students at the time the project commenced.

The goals of the project are divided into three major categories: public awareness, curriculum development, and outreach development and delivery. Each of these goals, together with its derivative objectives and activities, is discussed below in terms of progress to date.

The goal of public awareness was aimed at increasing the level of public support for the proposed programs and encouraging adults in need of basic education to come forth and participate in project activities. The major vehicle for achieving this goal was the play ( ). Public awareness was also increased by radio and newspaper advertising, the development of a logo, posters, and speeches made to fraternal and community organizations. A proposed survey of rural areas and communities has been deferred to a later date.

The goal of curriculum development involved the rationalization of existing curricula being used throughout the College region, the development of materials in the area of social orientation skills, and the development of a tutor training curriculum. To date, the adult basic education curricula used in various College centres have been reviewed and plans have been made to integrate them. The social orientation skills materials await the appointment of a new project staff member. Several tutor training curricula are being studied with a view to adapting or developing one that is appropriate for the region.

The goal of outreach development and delivery involves the design and implementation of a program delivery system to facilitate participation, on a part-time basis, by adults in the rural areas of the region. The main component of this delivery system is a volunteer tutor program. As was noted earlier, the College had already trained 19 tutors, and those tutors were paired with registered part-time students in April, 1979. Following the summer period, 16 tutors were ready to recommence their tutoring activities. Six tutor training courses are scheduled to begin in September and October, and it is expected that 75 tutors will be trained and working with adult learners by the end of 1979.

Other proposed activities included a Laundromat Learning Centre and an Oilpatch Camp Class. There has been no visible progress to date in the establishment of either of those activities. In May, 1979 a joint Project-Community Education Services program was proposed for ( ) Lake. The intent was to supplement the Community Education Services program with project funding and support. The Request for Additional Course proposal has not been approved, however, and it is unclear whether that activity will be pursued.

Since the project commenced in April, 1979, the major substantive accomplishments have been the establishment of an advisory committee; the design of administrative procedures such as a tutor registry, student performance measures, and tutorial centres; and the hiring of a second staff member, an instructor and curriculum developer.
In addition, 19 tutor-student pairings were made and operated from April through June, 1979.

A comparison of the proposed and actual timetables shows clearly that the project is considerably behind schedule. Two project staff were scheduled to be hired in April, 1979. One was hired in June, but the other position was still vacant as of September 21. As a result of the delays in staffing the project, some eight potential person-months of productivity have been lost. Given that much of the period from April through August could have been used for planning and curriculum development, those activities must now be undertaken between September and November, which would normally be peak periods for the delivery of instruction.

The immediate reason for falling behind schedule was the lack of staff available in the early months of the project. That staffing was not completed earlier seems to be a result of a number of events within the College which led to uncertainty on the part of the project coordinator. Those events, including the resignation of the College Principal and the withdrawal of sponsorship of other adult basic education programs by CEIC, apparently left the status of the project in an ambiguous position. If the project is to approach its original objectives and timetable, it will require strong leadership and support from within the College.

Based on the observations and conclusions noted earlier, the evaluators recommend:

1. That the College appoint a senior administrator to assume responsibility for all adult basic education activities within the institution. Some of the difficulties encountered by the project seem to have stemmed from the lack of a clear line of authority and a confusion of roles and responsibilities. The level of activity and range of programs in adult basic education may be substantial enough to call for the sole attention of a senior official within the College.

2. That the staffing of the project be completed as rapidly as possible, consistent with the availability of qualified personnel. An able and dedicated staff is the keystone to the success of an adult basic education program, and this project has operated for too long without its full complement of personnel.

3. That the objectives and activities be reduced to those regarded as essential to the success of the project. This could result, for example, in the indefinite delay of such activities as the community survey, the Bookmobile, and the Oil Patch Class. Although this could reduce the overall potential impact of the project, it would enable the staff to focus on a smaller number of concerns which would have a greater probability of success.

4. That the project staff concentrate its attention on the third goal, outreach development and delivery. By this time, public awareness activities such as the play should be essentially completed, and the attention of the staff should focus on delivering instruction to the target populations.

5. That the enrolment target of 400 participants in the current year be reviewed. Only 5 percent of the projected enrolment has been attained halfway through the first year, and it may be unrealistic to expect that goal to be reached by March 31, 1980 while still maintaining a high quality of instruction.

6. That the Project Advisory Committee be reactivated and assisted in defining its functions and responsibilities more clearly.
The Committee could probably be used more effectively than it has been to date, and could serve such purposes as increasing the visibility of and support for the project in the community, recruiting participants, and improving the services provided by the project staff through the members' familiarity with the target populations.

7. That officials of the Division of Continuing Education and the College meet to review and clarify the objectives and attainments of the project to date. As this is a pilot project with implications for extension to other regions, it is important that continuous program monitoring be conducted so that all parties can learn from the experiences of this initial program.

FORMATIVE EVALUATION OF THE ADULT BASIC EDUCATION PROJECT #2
Report for the Quarter Ending December 31, 1979

INTRODUCTION

The Department of Adult Education at the University of British Columbia was requested by the Division of Continuing Education of the Ministry of Education, Science and Technology to serve as external evaluator for a pilot project in adult basic education being developed and implemented by ( ) College. As specified in the agreement between the Department and the Division, this document is the second of three formative evaluation reports to be submitted. The purpose of these reports is to review the progress of the project and to make recommendations for its improvement. In preparing this report, the evaluators reviewed documents provided by the College and the Division, interviewed project staff, volunteer tutors and a student, observed tutor training classes in operation, and consulted with other College and Ministry officials. Readers of this report should be aware that the project is approaching its mid-point, with nine months having elapsed in a proposed twenty-four month span.

PROGRESS: OCTOBER 1 - DECEMBER 31, 1979

The goals of the project are divided into three major categories: outreach development and delivery, curriculum development and public awareness. Each of these goals is discussed in the following section in terms of progress during the period October 1 - December 31, 1979.

The goal of outreach development and delivery involves the design and implementation of a delivery system to provide educational service to the target groups identified in the project proposal. On November 26 a contract person was hired for two weeks to contact those tutors trained in April and to attempt to pair them with clients. Seven tutor training classes, involving seventy-four participants, were started during the period covered by this report. The classes are located in ( , , , and ). Graduates of these classes will be ready to commence tutoring activities in late January. Plans have been made to train approximately twenty students in regular adult basic education college programs to act as tutors. Full-time college ABE instructors from ( ) Lake and ( ) have received six days of orientation to tutor training and are prepared to offer a "Tutor Training" component in the ABE curriculum.

An Outreach Tutorial Centre has been established in a trailer located on the ( ) campus. This facility is intended to provide a location for tutor-client meetings when other arrangements cannot be made, for learning materials and professional reference materials which cannot be distributed to all tutors, for independent study by clients, and for meetings between a professional tutor and volunteer tutors as well as clients.
The ( ) centre is also used as the Tutor Training classroom. Discussions are underway between project staff and representatives of the ( ) Public Library regarding the establishment of an Outreach Tutorial Centre in the library. Two library employees are currently enrolled in a Tutor Training class and are willing and qualified to work with clients who come to the library. Outreach Tutorial Centres are also being considered for other communities.

In the area of curriculum development, the project is committed to the development of a Tutor Training Resource Manual, Social Orientation Skills modules and a Foundation Literacy curriculum. The lesson plans, notes and handouts used for Tutor Training have been compiled as a first draft of the Resource Manual. This 157-page draft will be revised in light of the post-program evaluations and edited to produce the final document. The plans for the Social Orientation Skills modules have evolved into a proposed series of audio tapes and worksheets entitled "I'm In Charge Here" (INCH). A contract worker was hired for the period November 1 - December 21 to design and produce these materials. At the time of this report the materials were not available for review. No concrete progress has been reported on the development of the Foundation Literacy curriculum.

The public awareness component of the project is an ongoing process. Project staff describe the rationale, aims and activities of the project to all interested individuals and groups. Posters, bookmarkers, tutor signs, radio and television are being used to inform the public of the scope of undereducation among the adult population and of the services which are available at ( ) College.

OBSERVATIONS

During the period covered by this report the project has operated with its full complement of three full-time professional staff. This, combined with the hiring of two short-term contract people, was reflected in an increased level of activity and greater progress toward the articulated goals of the project. The following table presents the actual and projected numbers of tutors-in-training, tutors trained and active tutor-client pairs through the first year of the project. The projections indicate the maximum potential numbers, as no allowance has been made for those who do not complete tutor training or who complete it but do not commence tutoring activity. The nineteen tutors trained during April, 1979 are included in the figures.

TABLE 1
Actual and Projected Tutors In-Training, Tutors Trained, and Tutor-Client Pairs

Month       Tutors In-Training   Tutors Trained   Tutor-Client Pairs
April                0                 19                  9
May                  0                 19                  9
June                 0                 19                  4
July                 0                 19                  4
August               0                 19                  4
September            0                 19                  4
October              0                 19                  9
November            74                 19                  9
December            74                 19                 10
January             74                 19                 10
February            20                 93                 90*
March               20                 93                 90

* 3 of the original 19 trained tutors are not available for tutoring activity.
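The maximum-potential projections in Table 1 follow directly from the counts reported above. As an illustrative check only (the variable names are assumptions introduced here):

# Checking the February-March projections of Table 1.
trained_by_april = 19      # tutors trained before the project's official start
in_training_fall = 74      # participants in the seven classes begun in the fall
unavailable = 3            # of the original 19, no longer available for tutoring

projected_trained = trained_by_april + in_training_fall   # 93 tutors trained
projected_pairs = projected_trained - unavailable         # 90 tutor-client pairs

assert projected_trained == 93 and projected_pairs == 90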
The ( ) Lake activities parallel closely the intended activities of the project proposal. If the volunteer tutor concept succeeds, it will be providing educational services to a part of the population of the region that has traditionally not had access to college services. Further, ( ) Lake is a native Indian community and thus represents one of the main target groups identified in the project proposal.

The operation of the project appears to be impaired to some extent by an apparent lack of structures for cooperative planning and information sharing among staff members. The fact that two of the professional staff had not received copies of the first evaluation report two months after its submission, let alone met to discuss its implications, suggests a need for such structures.

Preliminary examination of the tutor training classes suggests that the participants possess the necessary prerequisite skills and attitudes. This, combined with the high quality of instruction provided by the project staff, should produce a corps of trained and motivated volunteer tutors.

RECOMMENDATIONS

Based upon the progress to date and the observations noted above, the evaluators make the following recommendations for the consideration of the project staff.

1. That all members of the project team meet regularly to review the goals, activities and priorities of the project. There should be a review of the project proposal, the progress to date and the two evaluation reports. Figure 12 (see p. 98) may be of assistance in conceptualizing the overall project.

2. That a revised "program statement" be written. This document could be an abbreviated but updated form of the project proposal. It could assist the project team in establishing priorities and specifying activities for the remainder of the project. It would also be of value for informing other interested groups of the pilot project.

3. That the number one priority for the remainder of the project be clearly established as the activation, maintenance and support of the maximum number of tutor-client pairs. The fact that the majority of the tutors trained in April, 1979 were not paired with clients until late November suggests that the rationale for the project, increased educational services for undereducated adults, is not receiving the necessary attention because of other college and project activities.

4. That the project staff reconsider their assumption that the majority of tutoring activity will cease during the period April to September. If this assumption is correct, the feasibility of the entire volunteer tutor concept must be questioned, as it would halve the potential impact of the project. Other programs of this nature, including those in areas similar to the ( ) region, have maintained a high level of tutoring activity on a nearly year-round basis.

5. That an additional staff member be hired on contract to complete the development of the Foundation Literacy curriculum. If one of the reasons for this development was to provide guidelines and materials for use by tutors with their clients, then it should be completed as soon as possible so as to be available to those tutors already working in the field.

6. That special effort be made to promote and support the programs in ( ) Lake and ( ) and the proposed activities in ( ), ( ) Lake and ( ). These programs will serve those groups which were identified as target groups in the project proposal. Consideration should be given to employing part-time para-professionals as community contacts to support the tutors and to staff Outreach Tutorial Centres where these are established.

7. That the project staff consider reducing the length of the tutor training program in light of the success of shorter programs elsewhere. In-service training could be used to make up for the shorter training period and would provide an opportunity to reunite the tutors to discuss and learn from their experiences as tutors.
Figure 12
CONCEPTUALIZING THE PROJECT

PLANNING STAGE
  Curriculum Development: Tutor Training Manual; Foundation Literacy; Social Orientation Skills
  Public Awareness: Marks on Paper; media; personal contacts
  Management: proposal design; job descriptions; hiring; orientation; delegation; problem solving; mutual planning; evaluation

IMPLEMENTATION STAGE
  Tutor Training
  Outreach Tutorial Centres

PAYOFF STAGE
  Tutoring Activity (increased educational opportunity for undereducated adults)

FORMATIVE EVALUATION OF THE ADULT BASIC EDUCATION PROJECT
Final Report on Project Activities for the Year Ending March 31, 1980

INTRODUCTION

The Department of Adult Education of the University of British Columbia was requested by the Division of Continuing Education of the Ministry of Education to serve as external evaluator for a pilot project in adult basic education at ( ) College. As contracted in an agreement between the Department and the Ministry, this report is the third of three formative evaluation reports to be submitted. The purpose of this report is to review the progress of the project during the period April 1, 1979 through March 31, 1980 and to make recommendations for its improvement. Readers of this report are reminded that the previous reports supplement this document and establish a context for it. In preparing this report, the evaluators reviewed documents provided by the Division and the College, met with project staff, consulted with other College and Ministry personnel, visited tutor training classes, interviewed tutors and clients, and reviewed documents and reports emanating from the project. As in the previous reports, the observations, judgments and recommendations made herein are intended to facilitate the successful attainment of the project's objectives.

NEED

There is considerable data to support the need for programs for undereducated adults in the ( ) College region. The 1976 Census indicated that 1,100 adults (4 percent) have five or fewer years of schooling and 6,295 (22 percent) have five to eight years of schooling. Thus there are approximately 7,500 adults in the region who are potential participants in the project. During 1978-79 the College offered adult basic education programs at eight locations. Two programs were conducted on Indian reserves, while the remainder were located in College facilities. A total of sixteen programs were conducted involving 254 students, an average of sixteen students per program. Since most of those programs were available only on a full-time basis in larger communities, the present project should assist the College to achieve its stated goal of serving the educational needs of all area residents through part-time as well as full-time programs, in smaller communities and rural areas as well as the major centres. Non-traditional types of program delivery are acknowledged in the Educational Master Plan as being both desirable and necessary given the geographic, climatic, and demographic conditions of the region. In response to these and other data, the College Board in March, 1979 identified adult basic education as the top program priority for 1979-80.

GOALS

The goals and objectives which guided the development of the project were in three general areas: public awareness, curriculum development, and outreach development and delivery.
Public Awareness:
1. to identify the needs of undereducated adults;
2. to recruit volunteer tutors;
3. to recruit clients (learners);
4. to increase public awareness of the need for basic education programs for adults;
5. to increase public awareness of programs available to undereducated adults.

Curriculum Development:
1. to develop basic literacy learning materials;
2. to develop "social orientation skills" learning materials;
3. to develop a Tutor Training Manual.

Outreach Development and Delivery:
1. to develop and implement an effective Volunteer Tutor Program;
2. to establish tutoring/learning centres throughout the college region (e.g. oil patch, Indian reserve, major centres).

SUMMARY OF PROJECT ACCOMPLISHMENTS

A preliminary draft of the project proposal was approved in principle by the College Board in September, 1978. Following a meeting with Ministry of Education officials in January, 1979, the Board gave final approval to the project in March, 1979 and funding for the 1979-80 fiscal year was included in the College operating budget.

Two activities which occurred as a result of separate contracts between the College and the Ministry were completed prior to the project's official start but prepared the way for it. These were the development of the stage play ( ) and one tutor training course. Although financially separate, those activities were educationally related to the project.

The play was a major component of the project's public awareness campaign. It was designed to introduce audiences to the concept and magnitude of adult illiteracy in society. Through a series of scenes emphasizing personal accounts and experiences, the play illustrates the dilemma faced by the non-reader in a print-oriented society. The tour consisted of 23 performances between April and June, 1979. It was seen by some 2,000 students and 450 adults. The reviews of ( ) were generally very favourable.

A tutor training course for 29 registrants was held in March and April, 1979. Nineteen of the registrants successfully completed the course and 16 were matched with clients.

Throughout the first year of the project there were many activities, in addition to ( ), related to the goal of increased public awareness. These activities were too numerous to describe in detail, and it is all but impossible to assess their impact. Briefly, the project staff described the rationale, aims, and activities of the project to all interested individuals and groups. Posters, bookmarks, tutor signs, radio, television, and newspapers were used to inform the public of the scope of undereducation among the adult population and of the services available at ( ) College.

As of March 31, 1980, three curriculum development projects were underway and none of them had been completed. Of a proposed eight-chapter Tutor Training Resource Manual, two chapters were completed in draft form, four chapters were nearing completion of a first draft, and two chapters were in progress. A seven-module program dealing with Social Orientation Skills and Interpersonal Communication Skills was under development, but only one module was ready for production. Four sample modules in a Foundation Literacy curriculum were being pilot tested.

The goal of outreach development and delivery involved designing and implementing a delivery system to provide educational services to the target groups identified in the project proposal. The main component of this system was a volunteer tutor program. Nine tutor training courses were conducted in six locations, including ( ) (3 courses), ( ) (2 courses), ( ), ( ) Lake, ( ) and ( ), and 99 tutors were trained. Ninety-three clients were recruited and matched with tutors, and 31 tutors were engaged in upgrading programs.
Nine tutor training courses were conducted in six locations including ( ) 3 courses), ( ) (2 courses), ( ), ( ) Lake, ( ) and ( ), and 99 tutors were trained. Ninety-three clients were recruited and matched with tutors, and 31 tutors were engaged in upgrading programs. The 93 106 clients served included 56 residents of c i t i e s , 27 of rural areas, and 14 of native Indian communities. Two other activities of the project contributed to increased public awareness and to the support of tutor-client pairs. Outreach Tutorial Centres were established in ten locations including three places in ( ) and one each i n ( ) , ( ) , ( ) , ( ) , Lake, ( ) Lake, ( ), and ( ) Lake. All of those centres have at least some materials for tutor use. In addition, five issues of a newsletter t i t l e d ( ) were produced and distributed to tutors, clients, and interested citizens. OBSERVATIONS The project f e l l far short of reaching its intended target of 400 clients, and in fact reached only one-fourth that number. In the year end report of the project staff, three problems were listed as contributing to that disappointing outcome. 1. The late hiring of staff. The delay in hiring the second and third staff members meant a loss of eight person-months of productivity from the potential of the proj ect. 107 2. Inadequate job descriptions. Provision was not made for adequate coverage of such vital areas as tutor training and tutor-client matching. 3. Unrealistic goals. The project appeared to underestimate the time and work required to identify and train tutors, match them with clients, and monitor the outcomes for 400 client-tutor pairs spread over a geographic area covering one-third of the province. Coupled with the goal of preparing curriculum materials, the goals overall seemed too much to accomplish given the finance and staffing patterns used. In addition to the problems cited above, i t should be noted that the timing of the various project activities was not properly synchronized. For example, public awareness was being raised at the time when the project had no curriculum and few tutors who could provide any service. Curriculum development, which requires special s k i l l s and is best accomplished through a team approach, did not get well underway until late in the year and is s t i l l not completed. Meanwhile, tutors are being trained and instruction is being given to clients. The period in the autumn of 1979, which would normally be that of high instructional delivery, was used for curriculum development. 108 It should also be noted that the project seemed to lack a clear focus and direction. Although the project proposal spells out some specific objectives, such as serving 400 clients, the staff of the project seemed uncertain as to how best to direct their energies in regard to the various components. As a result, the project's funds and human resources were spread thinly over a number of different and often competing a c t i v i t i e s . The component which suffered the most from this situation was the establishment and support of the volunteer tutor-client system which should have been the priority objective. To determine the nature of the service being provided to educationally disadvantaged adults, a telephone survey of clients was conducted during April, 1980. Twenty clients were contacted, of whom fifteen (75 percent) were s t i l l active in the program. The fifteen active clients met with their tutors an average of six times per month for slightly less than two hours per meeting. 
Almost all of the clients contacted indicated that they would be unable or unwilling to attend a regular class at a College centre. The most frequently stated reasons were that employment involved shift work or that young children required the client's attention. Most of the clients contacted were satisfied with the tutoring they had received, and only three indicated difficulties.

The tutor training courses, which originally were 30 hours in length but were reduced to 24 hours, provided roughly 3,000 student-contact hours of instruction (on the order of 99 trainees at 24 to 30 hours each). The clients of those tutors had received an estimated 1,500 hours of instruction by the end of the first project year.

RECOMMENDATIONS

Based upon the information gathered and observations made during the period April, 1979 through March, 1980, the following recommendations are made for the consideration of the staff as the project enters the 1980-81 year.

1. That a revised program goal statement be prepared. This statement could be an abbreviated, revised form of the project proposal and would assist the staff in establishing priorities and specifying activities for the duration of the project. It should be specific and detailed in terms of products, locations, and schedules.

2. That consideration be given to decreasing the emphasis on full-time professional staff and increasing the use of part-time programmers in smaller centres and rural areas. This would provide for increased flexibility and allow more activities to occur outside the major College centres. It would also permit better coverage of the college area and reduce the travelling required by full-time staff.

3. That the number one priority for the second year of the project be clearly established as training, activating, maintaining, and supporting the maximum number of tutor-client pairs.

4. That the curriculum development projects which are now underway be brought quickly to a close. It might be helpful to establish a firm target date, such as July 1, 1980, for completion of all curriculum development work. Those projects not completed by the target date should be abandoned.

5. That public awareness activities continue only to the extent required to attract tutors and clients, and then only when the program is operating at a higher level of efficiency.

6. That the project staff develop a systematic data gathering procedure to enable an accurate record to be kept of project activities and services. Of particular interest here is the need to keep records and lists of tutors, clients, and services provided; a minimal sketch of such a record appears after this list. Such data will enable the project staff to report on its activities during the second year.

7. That the project staff develop a plan to assess the quality of the education being provided, and follow this up with steps to ensure that clients are receiving the best quality of instruction possible.
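Recommendation 6 can be made concrete with a brief sketch, stated here in modern programming terms rather than in the project's own procedures: the following Python fragment shows one possible shape for a tutor-client service record. Every field and function name is a hypothetical illustration, not something specified in the project proposal.

    from dataclasses import dataclass, field

    @dataclass
    class ServiceRecord:
        """One record per tutor-client pair; all fields are illustrative."""
        tutor: str
        client: str
        centre: str            # e.g. the Outreach Tutorial Centre used
        matched: str           # date the pair was matched, e.g. "1980-09-15"
        meeting_hours: list[float] = field(default_factory=list)

        def hours_to_date(self) -> float:
            """Total instruction delivered to this client, for year-end reporting."""
            return sum(self.meeting_hours)

    # Example: one pair, three meetings of about two hours each.
    record = ServiceRecord("tutor A", "client B", "centre C", "1980-09-15")
    record.meeting_hours += [2.0, 1.5, 2.5]
    print(record.hours_to_date())   # prints 6.0

Even a paper equivalent of this structure, one card per pair with hours logged per meeting, would allow the staff to produce the activity and service totals that recommendation 6 calls for.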
The first year of the project produced a few significant contributions to the advancement of adult basic education in the College region. Notable among these were the production of the stage play, the extensive public awareness campaign, and the preparation of a group of trained tutors. For a variety of reasons, however, the project did not accomplish all that it might have. The second year should seek to build upon the positive accomplishments of the first year to expand greatly the opportunities for adults in the College region to attain or upgrade their basic literacy skills.