An Assessment of Illuminative Evaluation as an Approach to Evaluating Residential Adult Education Programs

by

Ruth Margaret Reiner Hasman
B.A., San Jose State College, 1967

A THESIS SUBMITTED IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF ARTS in THE FACULTY OF GRADUATE STUDIES (Department of Administrative, Adult and Higher Education)

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
December 1982
© Ruth Margaret Reiner Hasman, 1982

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Administrative, Adult and Higher Education
The University of British Columbia
1956 Main Mall
Vancouver, Canada
V6T 1Y3

ABSTRACT

The purpose of this study was to test the suitability of illuminative evaluation as a methodology for determining the value of residential adult education programs. Illuminative evaluation methodology was selected for several reasons. First, the methodology functioned independently of the program. Second, it permitted the flexibility needed to evaluate a developing program. Third, it provided a means of studying spontaneous events. Fourth, it allowed for representation of multiple viewpoints, and lastly, few studies of this methodology had been undertaken (Miles, 1981; Parlett & King, 1971). For those reasons, it seemed important to investigate the suitability of illuminative evaluation.

A residential program was determined to be particularly suitable for testing illuminative evaluation because it had some unique advantages that did not exist in other program formats. The chief advantage of the residential format over the more traditional types was that of removing the participant temporarily from his ongoing responsibilities. This made it possible for the investigator to have continuous contact with the participants, which is important for a methodology that relies on fieldwork techniques.

In this study, illuminative evaluation methodology was applied to the evaluation of a residential program at the Justice Institute of British Columbia. In order to test the suitability of the methodology, three criteria appearing frequently in the literature were judged appropriate to this study: technical adequacy, utility and efficiency. The literature suggested that an evaluation should produce technically sound information that is useful to some audience and is worth more to the audience than it costs (Grotelueschen, 1980).

Evidence of the degree to which illuminative evaluation met these criteria was collected during the program. Techniques such as interviews, questionnaires, and observations were used to collect the evidence. The evidence was analyzed using quantitative and qualitative techniques to determine whether the methodology met the standards set by the criteria. The evidence collected showed that this methodology satisfied the criteria requirements of technical adequacy and utility. Although it was weak on the efficiency criterion, the methodology compensated with particular strengths in utility and technical adequacy.
Illuminative evaluation opens several areas for further research. Further work needs to be done to develop specific tasks, questions, and procedures that could guide implementation of each stage of the illuminative evaluation methodology. Additional studies could be done to contribute to the understanding of the methodology and to determine its suitability for evaluating other adult education program formats.

TABLE OF CONTENTS

ABSTRACT
LIST OF TABLES
ACKNOWLEDGEMENT
CHAPTER I: INTRODUCTION
    The Problem
    Research Approach
    Summary
CHAPTER II: LITERATURE REVIEW
    Historical Emergence of Evaluation
    Social Sciences
    Education
    Classical Versus Naturalistic Paradigm
    Classical Paradigm
    Naturalistic Paradigm
    Illuminative Evaluation
    Summary
CHAPTER III: METHODOLOGY
    Criteria
    Technical Adequacy
    Utility
    Efficiency
    Study Site
    Summary
CHAPTER IV: ILLUMINATIVE EVALUATION STRATEGY
    Pilot Phase
    Issues Clarified
    Evaluation Process
    Questionnaire Design
    Data Collection
    Operational Program
    Summary
CHAPTER V: RESULTS
    Technical Adequacy
    Utility
    Efficiency
    Summary
CHAPTER VI: SUMMARY, CONCLUSIONS, IMPLICATIONS AND RECOMMENDATIONS
    Summary
    Conclusions
    Technical Adequacy
    Utility
    Efficiency
    Implications and Recommendations
REFERENCES
REFERENCE NOTES
APPENDIX A: Expectations Questionnaire
APPENDIX B: Mini-Session Questionnaire
APPENDIX C: Final Questionnaire
APPENDIX D: Follow-Up Questionnaire
APPENDIX E: Revised Questionnaires

LIST OF TABLES

1. Some Basic Differences Between Classical and Naturalistic Paradigms
2. Illuminative Evaluation Stages
3. List of Evaluation Costs
4. Data Collection Schedule
5. Effort and Time Spent on Illuminative Stages
6. Criteria and Standards Used for Determining Suitability of Illuminative Evaluation Methodology

ACKNOWLEDGEMENT

The support and direction provided by the members of my thesis committee served as the motivation to complete this research. I gratefully acknowledge the wisdom and encouragement of Dr. T. J. Sork and Dr. J. E. Thornton. In addition I thank Dr. J. G. Dickinson for his initial guidance in getting this study underway. I would like also to thank Mr. Henry Kennedy, Director of Land Titles, for allowing me to evaluate his Land Title School programs, and Mr. Paul Dampier, Program Director at the Justice Institute, for his help and guidance in the initial stages of this project. To Annette Buckmaster, a very special friend, I owe sincere thanks for her unfailing moral support and encouragement during my last year. Special thanks also go to MarDell Parrish, for without his help in word processing, this thesis might still be on the computer. Throughout the last five years many of my fellow students in adult education provided encouragement, advice and help. To each of them I extend thanks and appreciation. Finally, I would like to express my sincere thanks to David, who empathized with my "thesis phases," and to Janet and Brent, who kept saying "Are you finished yet, Mommy?" Without their cajolement, support and encouragement, this project would not have been completed.

CHAPTER I
INTRODUCTION

The purpose of this study was to test the suitability of illuminative evaluation as a methodology for determining the value of residential adult education programs.
The study specifically focused on the advantages and disadvantages, and the strengths and weaknesses, of this methodology.

As the numbers of people seeking adult education have grown, new methods and techniques have developed to meet the increased demand (Apps, 1979; Houle, 1971). One of the fastest growing educational developments has been the use of short-term residential group learning programs for adults. There are few people in education and business today who have not attended a residential course, conference, seminar, colloquium or workshop. These programs provide participants with a concentrated experience, a change in environment and an opportunity for close interaction and mutual problem-solving with peers (Garside, 1969; Houle, 1971; Miller, 1964; Schacht, 1960). Because of their concentrated nature, the programs are capable of providing an experience with a powerful impact. To enhance the residential program's capacity for impact, planners or evaluators should evaluate the program systematically, collecting information from many sources. This information can be used to improve effectiveness, modify ineffective procedures, and assist in designing both follow-up activities and future programs (Beckhard, 1956).

Since program evaluation in education is in the early stage of theory development, it has become an area of intense academic interest. As might be expected, a plethora of divergent views and terms have been created by those trying to describe, analyze, explain, theorize, or otherwise capture the essence of evaluation (Rusnell, Note 1). Despite interest in the process by academics, practitioners in the field have been less enthusiastic. "Among theorists evaluation is one of the most hotly debated activities in the educational process; among practitioners it is one of the most ignored" (Davis & McCallon, 1974, p. 271).

The foremost reason for reduced enthusiasm regarding evaluation is the lack of guidance provided to practitioners by the literature. In any new field, guidance is expected from the experts through their literature, but the literature about program evaluation has served more to confuse than to guide. Worthen (1974) noted that evaluation literature is badly fragmented into unrelated pieces and is as difficult to synthesize as it is to make a meaningful picture from a random handful of pieces of a jigsaw puzzle. Looking at the individual pieces is little more helpful, for the level of discourse in individual writings is often aimed at fellow evaluation theorists more than at schoolmen, thereby communicating a great deal of detail about a topic which lacks a larger context within which it could be useful. Working under this handicap, busy practitioners can hardly be faulted for not expending the necessary time to try to develop a clear picture from the current evaluation literature (p. 2).

A second reason is that evaluators have the problem of which definition to use. During its development, program evaluation has come to have many different definitions. These definitions are derived largely from the emphasis placed on quantitative versus qualitative studies. Program evaluation, according to Blackwell and Bolman (1977), should give individuals and systems some control over their mutual growth and development so that they can function optimally. It should be a "systematic" collection of information from many sources in order to improve planning effectiveness, to modify procedures where necessary, and to serve as a guide in planning future programs (Beckhard, 1956).
Bass & Vaughan (1966) suggested that evaluations should be planned at the same time as the program and should constitute an integral part of the total program from beginning to end. Evaluation must be purposeful and not done just for its own sake (Steele, 1970). Since no definition suits every situation, a definition that tries is likely to fall short in numerous ways. Against such odds, evaluators usually withdraw to their own definitions of evaluation (Stake, 1979). From the numerous definitions mentioned in the literature, two seem appropriate to this study:

By the term evaluation, we mean systematic examination of events occurring in and consequent on a contemporary program--an examination conducted to assist in improving this program and other programs having the same general purpose (Cronbach, 1980, p. 14).

Evaluation is a collection of methods, skills and sensitivities necessary to determine whether a human service is needed and likely to be used, whether it is conducted as planned, and whether the human service actually does help people in need. While doing these tasks evaluators also seek ways to improve programs (Posavac & Carey, 1980, p. 6).

The Problem

Although more than a hundred evaluation models have been developed since Tyler's objectives-centered model, evaluators are still looking for alternative models (Cronbach, 1980). They are seeking new ways to evaluate programs as well as ways to improve utilization of results. House (1972) put it this way: "Producing data is one thing! Getting it used is quite another" (p. 412). Identifying an appropriate methodology is therefore a significant concern.

Since programs evolve and change over time, alterations for improvement may need to be made during the program. Therefore, a criterion of an evaluation methodology is that it should not require the program to stand still or stay the same in order to be evaluated (Katz & Morgan, 1974; Stake, 1978). In other words, the methodology should be independent of the program being evaluated. The methodology identified that met the above requirement is "illuminative evaluation" (Parlett & Hamilton, 1977), for it involves examining the program without interfering with, manipulating or restricting the activities.

Illuminative evaluation methodology was selected for several reasons. First, the methodology functioned independently of the program. Second, it permitted the flexibility needed to evaluate a developing program. Third, it provided a means of studying spontaneous events. Fourth, it allowed for representation of multiple viewpoints, and last, few studies of this methodology had been undertaken (Miles, 1981; Parlett & King, 1971). For those reasons, it seemed important to investigate the suitability of illuminative evaluation.

Research Approach

Within educational evaluation, two distinct paradigms can be found: the classical and the naturalistic. Each has its own strategies, foci and assumptions. Most formal educational evaluation studies use the classical paradigm, which derives its methodology from experimental psychology. These studies assess the effectiveness of a program by examining whether or not it has reached required standards on pre-specified criteria. Studies of this kind are designed to yield objective numerical data that can be statistically analyzed. Recently, however, there has been increasing resistance to evaluations of this type (Parlett & Hamilton, 1976; Smith, 1976; Stake, 1978). There is a movement to use a second paradigm related to social anthropology.
This paradigm requires a fundamentally different evaluation methodology from that used with the classical paradigm. These two paradigms are discussed thoroughly in Chapter II. Frequently evaluations based on the naturalistic paradigm involve a case study of a program or project. Case study methodology according to Stake (1978) has fallen into disrepute among social scientists; however, he suggests that case studies are still needed in certain types of evaluations. For example, when the evaluation is aimed at improvement of a specific program, when the information collected is for participants and not just scientists, when the concern is for individuals rather than broad generalizations, then a case study approach that identifies unique characteristics and idiosyncracies can be invaluable (Patton, 1978). The methodology tested in this study, illuminative evaluation, is relatively new and is based on the naturalistic paradigm. It is not a standard methodological package but a general research strategy (Parlett & Hamilton, 1976). It is a dynamic evaluation process which is not tied to a single treatment, predetermined goals or outcomes, but rather focuses on the actual operations of a program over a period of time (Patton, 1978) . This process requires sensitivity to both qualitative and quantitative changes in a program throughout its 7 development, not just at some end-point in time. Since illuminative evaluation is built on diversity and adaptability, the strategies used are adaptable and eclectic. This is extremely important in program evaluation, for innovative programs are often changed as planners learn what works and what does not, and as planners experiment and change their priorities. In this study, illuminative evaluation methodology was applied to the evaluation of a residential program at the Justice Institute of British Columbia. This program is described in Chapter III. In order to determine its suitability, illuminative evaluation methodology should meet certain criteria. The literature suggests that an evaluation methodology should produce information that is: (1) technically sound, (2) useful to some audience and (3) worth more to the audience than it costs (Grotelueschen, 1980). It was decided to use the above as criteria for testing illuminative evaluation. Complete descriptions of these criteria are found in Chapter III. Evidence of the degree to which illuminative evaluation met these criteria was collected during the program. Techniques such as interviews, questionnaires, and observations were used to collect the evidence. The evidence was analyzed using qualitative and quantitative techniques to determine whether the methodology met the standards set by the criteria. 8 Summary Chapter I provided general background as well as a statement of purpose, a statement of the problem and a description of the research approach. The remainder of this thesis is organized into five chapters and appendices. The review of selected literature appears in Chapter II. Chapter III provides the research methodology while Chapter IV contains the operationalization of the illuminative strategy. The results appear in Chapter V. Chapter VI includes a summary of the previous chapters and conclusions based on the research f indings. 9 CHAPTER II LITERATURE REVIEW The review of the literature presented here contains a brief description of the historical emergence of evaluation followed by the development of evaluation in social sciences and education. 
Then the review is directed to classical and naturalistic paradigms used in evaluation studies. The final section contains a review of the illuminative evaluation model, the focus of this study. Historical Emergence of Evaluation Evaluation emerged in the 1600's (Cronbach, 1980) when natural science established itself as a powerful instrument for overturning traditional beliefs. Since its early beginnings it has developed in a variety of ways in various fields. Cronbach's (1980) review of the historical emergence of 10 evaluation "reminds us that applied social research, like other human endeavors, develops not in a steady expansion but in spurts and slumps and changes of direction" (p. 23). As one reviews the current evaluation literature in a number of substantive areas--education, training, community action, health, psychotherapy--an interesting pattern occurs. Regardless of the field, the same issues or concerns reappear. For example, common concerns include the "roles" of evaluation, the evaluation design, measurement and collection techniques, the neutrality of the evaluator, the value of observation, the function of formative evaluation, the use of objectives, the value of long-term studies, and utilization of data. Over the years, evaluation studies have been strongly influenced by the methodologies of all the social sciences. Evaluation studies have become a reflection of the diverse academic and professional identities, ideological and political outlooks and past career commitments of evaluation researchers (Freeman & Solomon, 1981). Because of evaluation's interdisciplinary nature, the methods of each discipline, and the assumptions which underlie them, have been subjected to critical scrutiny and have benefited from revisions resulting from these new perspectives (Guttentag & Saar, 1977). Because of its strong influence on educational evaluation, the history of the development of evaluation in the social sciences is reviewed below. This is followed by a discussion of the development of evaluation in the field of education. 11 Social Sciences In the 1930's, social science research changed its emphasis. Psychologists were beginning to undertake studies of an experimental character in and out of the laboratory. For example: Newcomb's (1943) study of attitude change among girls at Bennington College, Lippitt and White's study of the impact of authoritarian and democratic leadership styles on children's group relationships (Lippitt, 1940), and Kurt Lewin and his associates' studies on social influence undertaken during the 1930's and 1940's. Then, there was the monumental applied-research program carried out by Stouffer and associates on American soldiers during World War II and the famous Western Electric Studies of the 1930's that contributed "Hawthorne Effect" to the vocabulary of social science (Bernstein & Freeman,1975). In the 1950's and 1960's many social action and intervention efforts were scrutinized and evaluated by social scientists--deliquency-prevention programs, penal-rehabilitation efforts, psychotherapeutic and psychopharmacological treatments, public housing projects and community organization activities. However, it was not until the massive U.S. federal expenditures during the "Great Society" programs during the 1960's and 1970's that accountability began to mean more than assessing staff sincerity or political head counting of opponents and proponents. In the 1970's, evaluation emerged as a political tool. 
During that time, evaluations were regularly required of all 1 2 health, education, and welfare programs. The requirement for evaluation was a political response to the perceived demand for increased governmental accountability. Educat ion The field of education traditionally has had an interest in evaluation of curricula, instruction, programs, participants, and materials. This field has tended to consider its problems unique and its methods special and different from evaluation of other kinds of programs. However, as educational evaluation has followed innovative programming beyond the classroom to involve the social issues of the day, it has become almost indistinguishable from evaluation of other planned social interventions. Weiss (1972) said: "Educational evaluators have much to learn from--and to teach--those in other fields, and they have much to loose by developing special perspectives and a special vocabulary that inhibits communication and interchange of experience" (p. 13). Pooling information benefits those facing similar problems across the range of program areas. Following a relatively inactive period in the 1950's, development of educational evaluation theory was revitalized in the mid 1960's. This revitalization was influenced by Cronbach (1963), Scriven (1967), Stake (1967) and Stufflebeam (1967). The field's development was further stimulated by the evaluation requirements of U.S. federal education programs launched in 1965, and by the U.S. accountability movement that began in the early 1970's (Stufflebeam & Webster, 1981). 13 This brief sketch illuminates the growth of evaluation in two distinct fields. Although developing independently, evaluative methods in these fields have moved in the same direction. Demands from funding agencies have helped the trend towards accountability. While there has been continuity in the development of the evaluation field, a qualitative change has occurred. With the emergence of large scale national projects in the 1960's, it was found that evaluation approaches based on the classical paradigm were simply inadequate to deal with the evaluation questions and issues posed by these projects (Cronbach, 1963). Classical vs. Naturalistic Paradigm What, then, are the options available for evaluative studies? The literature reveals two paradigms that are used to guide evaluations; they are the classical and naturalistic paradigms. The classical paradigm comes from the tradition of experimentation in agriculture, which gave us many of the basic experimental techniques most widely used in evaluation. This paradigm assumes quantitative measurement, experimental design and multivariate, parametric statistical analysis. By way of contrast, the naturalistic paradigm has its roots in the fields of anthropology and ethnography. Using the techniques of interviewing and personal observation, this 1 4 paradigm relies on qualitative data, and detailed description derived from close contact with the target of study. The classical paradigm aims at prediction of phenomena, while the naturalistic paradigm aims at understanding phenomena (Patton, 1978) . Which of these paradigms—classical or naturalistic--provides best guidance for an evaluation? "There is of course no definitive answer to that question....The choice between paradigms in any- inquiry or evaluation ought to be made on the basis of the best fit between the assumptions.... and the phenomenon being studied or evaluated" (Guba & Lincoln, 1981, p. 56) . 
Although the literature has shown that neither the classical nor the naturalistic paradigm is intrinsically better than the other, the debate goes on. Kuhn (1970) has pointed out that the two sides "...will inevitably talk through each other when debating the relative merits of their respective paradigms....[E]ach paradigm will be shown to satisfy more or less the criteria that it dictates for itself and to fall short of a few of those dictated by its opponent" (p. 109-110). Since neither paradigm solves all problems, they should be viewed as alternatives from which the evaluator can choose. The evaluator should select the paradigm and the methodology that suits the type of program being evaluated and the nature of the evaluation questions, for paradigms only tell researchers what to emphasize, what to look for, what questions to be concerned with, and what standards to apply. In order to make those 15 choices, it is necessary to be aware of the assumptions of each. Although these two paradigms differ on a number of assumptions, the discussion below will be limited to nine major assumptions (Guba & Lincoln, 1981). Philosophical base. Bogdan and Taylor (1975) differentiate between the two relevant philosophical perspectives. "One, positivism...seeks the facts or causes of social phenomena with little regard for the subjective states of individuals." The second, phenomenology, "is concerned with understanding human behavior from the actor's own frame of reference. Since the positivists and the phenomenologists approach different problems and seek different answers, their research will typically demand different methodologies" (p. 2). Thus, the naturalistic investigator, a phenomenologist, is concerned with description and understanding of social phenomena, while the classical investigator, a positivist, is concerned with "scientific" facts and their relationship to one another. Inquiry paradigm. A second difference between the two approaches can be found in the guiding paradigm. The classical investigator, with his positivist leanings, tends to see the world as composed of variables. Certain variables can be manipulated to determine their effects on other variables. The naturalistic investigator, on the other hand, is concerned with description and understanding, and is guided by a paradigm based on ethnography. Purpose. A third difference between the two approaches is purpose. The classical approach tests some proposition about a 16 relationship called a hypothesis. The purpose is to verify the hypothesis by testing ideas empirically. The purpose of the naturalistic approach, on the other hand, is the discovery of relationships that can be observed rather than arranging for it to happen under controlled conditions. Framework/design. Pre-ordinate, fixed designs are one of the hallmarks of a classical approach, while emergent, variable designs are among the hallmarks of a naturalistic approach. Setting. It is clear from the above statements that the classical investigator leans toward the laboratory setting for investigations, while the naturalistic investigator carries out investigations in a natural, non-contrived, environment. Conditions. The classical investigator seeks to control conditions; the naturalistic investigator opens the investigation to uncontrolled conditions as much as possible. Treatment. The concept of treatment is extremely important in classical experimental science. 
To the naturalistic investigator the concept of treatment is very foreign since it implies some kind of manipulation or intervention.

Scope. Classical investigators must focus on a limited range of variables in order to be able to deal with them in the controlled, systematic way that characterizes this approach. Conversely, naturalistic investigators are more ready to consider any variable that appears relevant. They approach the problem from a holistic view.

Methods. Lastly, both classical and naturalistic researchers wish to be objective in their methodology, but the meaning which they ascribe to that term is quite different. The classical investigator strives for objectivity in the sense of inter-subjective agreement. The naturalistic investigator places little store in that form of objectivity and strives instead for confirmability, i.e., agreement among a variety of information sources.

The nine points of difference noted above are summarized in Table 1 (Guba, Note 2). The dimensions of the table illustrate the fundamental differences in viewpoints between classical and naturalistic approaches. Nevertheless, it would be naive to believe that every classical investigator would always conform to the points of view mentioned, just as it would be absurd to suppose that a naturalistic investigator would never deviate.

Table 1
Some Basic Differences Between Classical and Naturalistic Paradigms

Comparison item      Classical                                  Naturalistic
Philosophical base   Logical positivism                         Phenomenology
Inquiry paradigm     Experimental physics                       Anthropology
Purpose              Verification                               Discovery
Framework/design     Fixed                                      Variable
Setting              Laboratory                                 Nature
Conditions           Controlled                                 Invited interference
Treatment            Stable                                     Variable
Scope                Limited variables                          Holistic
Methods              Objective, in the sense of                 Objective, in the sense of
                     inter-subjective agreement                 factual/confirmable

Classical Paradigm

The literature reviewed confirmed the dominance of the classical paradigm with its quantitative, experimental bias. Campbell and Stanley (1963) called this paradigm "the only available route to cumulative progress" (p. 3). It was this belief in and commitment to the natural science model on the part of most prominent academic researchers that made the classical paradigm dominant (Patton, 1978). As Kuhn (1970) explained, "a paradigm governs, in the first instance, not a subject matter but rather a group" (p. 80). Those groups most committed to the dominant paradigm are found in universities, where they not only employ the scientific method in their own evaluation research but where they also nurture students in a commitment to that same methodology (Kuhn, 1970).

Like the majority of evaluative studies, evaluations of short-term residential programs belong to the group dominated by the classical paradigm. A survey of the literature yielded only evaluations relying heavily on the assumptions and characteristics of the classical paradigm described previously. No studies were identified that conformed to the naturalistic paradigm. Based on the classical paradigm, the researchers in the studies reviewed utilized either pre-experimental, true experimental or quasi-experimental designs. The following discussion is limited to short-term residential programs, since this study concerned testing an evaluation methodology on programs in this format. The discussion separates and critiques the studies on the basis of design.
One-shot Case Study

Much evaluation research in education conforms to a design in which a single group is studied only once (one-shot case study) subsequent to some treatment (conference, workshop) presumed to cause change. Three studies reviewed used the one-shot approach (Havelock, 1971; Milozarek, 1976; Scruggs, 1976). Basically, the planners in the above studies wanted to know how program participants felt at the conclusion of the program. This simple form of evaluation requires that a set of systematic observations be made of one group at some specified time. These studies implicitly compare the residential experience with other observed and/or remembered events. The inferences are based on general expectations of what the data would have been had the experience not occurred. In addition, "the many uncontrolled sources of differences between any one study and potential future ones are so numerous that justification in terms of providing a bench mark for future studies is hopeless" (Campbell & Stanley, 1963, p. 7). Where only one group is measured, interpretation of the results is difficult and often unconvincing. "This workshop was rated successful by the participants. In general, the open-ended responses were divided into two categories: outright praise, and requests for more time and depth of topic" (Havelock, 1971). Without a comparison group, it is hard to know whether the results would have been equally good with some other program, or whether the program was actually responsible for producing the results at all.

Pretest-Posttest Design

If the question an evaluator is seeking to answer cannot be addressed through one set of observations made at the completion of the program, then the next more complex research design should be used. The pretest-posttest design is used when the evaluator wants to know if participants improved, or at least did not deteriorate, while being served by a program. Nine articles were identified that used this design (Cox, 1974; Deantonio, 1973; Densmore, 1965; Dickinson & Lamoureux, 1975; Halverson & Thiesse, 1979; Pattison, 1968; Roberts & Holmes, 1971; Valla, 1975; Wohllenben, 1965). As with the one-shot case study, one cannot conclude that the program caused the improvement. The program might have caused the improvement; however, this design is not rigorous enough to permit such a conclusion.

All of the above studies used either the one-shot case study or the pretest-posttest design. These studies were highly localized. Their value was limited to the program studied and therefore not generalizable. Sutton (1966) suggested that localized studies should be appraised only in terms of their operational value to the institution making them.
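To make the logic of these one-group designs concrete, the short Python sketch below is offered purely as an illustration; it is not drawn from any of the studies cited, the scores are invented, and it assumes the SciPy library is available. It shows the paired t-test commonly used to analyze a pretest-posttest comparison for a single group.

from scipy import stats

# Hypothetical scores for eight participants before and after a residential workshop.
pretest = [52, 61, 48, 70, 55, 63, 58, 66]
posttest = [58, 64, 55, 74, 60, 61, 65, 72]

# Paired t-test on the pre/post change within the single group.
t_stat, p_value = stats.ttest_rel(posttest, pretest)
mean_gain = sum(after - before for before, after in zip(pretest, posttest)) / len(pretest)
print(f"mean gain = {mean_gain:.1f} points, t = {t_stat:.2f}, p = {p_value:.3f}")

Even a statistically significant gain in such a sketch illustrates the weakness just discussed: without a comparison group, the improvement cannot be attributed to the program itself.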
True Experimental

When evaluators want to discover the cause of changes in program participants, evaluations of greater complexity must be designed. In order to show that something caused something else, it is necessary to demonstrate that: "(1) the cause precedes the supposed effect in time; (2) the cause covaries with the effect; and (3) no other alternative explanations of the effect exist except the assumed cause" (Posavac & Carey, 1980, p. 196). Campbell and Stanley (1963) suggest that only true experimental and quasi-experimental designs will prove causal relations.

Several studies of residential programs have used the true experimental design. These studies used a pretest-posttest control group design, with participants randomly selected for the two groups (Bale & Molitor, 1978; Bunch, 1976; Conrad, 1976; Devlin, 1966; Jenkins, 1976). Occasionally researchers have used three randomly selected groups (Blaney & McKie, 1969; Peterson, 1971). In practice, evaluators are often in a position in which it is impossible to randomly select groups and manipulate conditions. In those cases, evaluators choose a quasi-experimental design, which controls some but not all "threats to validity" (Campbell & Stanley, 1963). In quasi-experimental designs, an intact group is selected as a control because of its similarity to the experimental group. Sometimes this type of group is called a "comparison group" to distinguish it from a true control group. Including the comparison group permits a distinction to be made between the effects of the program and the several alternative plausible interpretations of change. Because the comparison group is tested at the same time as the experimental group, both groups have the same amount of time to mature. Historical forces presumably affect the groups equally. Because both groups are tested twice, testing effects should be equivalent. Finally, the rates of participant loss between pretest and posttest can be examined to be sure they are similar (Posavac & Carey, 1980).

Quasi-experimental Design

The quasi-experimental design most frequently used in the studies reviewed was the nonequivalent control group. This design necessitates the use of a pretest to provide control for selection bias. Only through a pretest can intact group equivalence be demonstrated, enabling the evaluator to compare the results of the two groups. Five of the studies using this design investigated one or more aspects of a particular program (Bringle, 1967; George & Green, 1976; Stewart, 1965; Torrence, 1966; Touzel, 1975). Four studies (Edelbach, 1973; Lacognata, 1961; Smallegan, 1971; Wientge & Lahr, 1966) compared residential to non-residential programs.

The above classical, experimental and quasi-experimental studies focused on isolated psychological variables, i.e., satisfaction, anxiety, self-esteem. Such studies have not allowed insight into the complex impact that programs have on participants. Moreover, they have not always given program planners information needed to make programmatic decisions. Finally, much evaluation has been highly critical. It has been composed of negative, punitive statements which typically discourage, anger, and disappoint the evaluation audience (Patton, 1978). The problem for evaluation is that the very dominance of the classical paradigm, with its quantitative, experimental emphasis, appears to have cut off the great majority of evaluators from serious consideration of any alternative evaluation paradigm (Patton, 1978).

Naturalistic Paradigm

Recently, however, there has been increasing resistance to utilizing the classical paradigm in evaluation studies (Parlett & Hamilton, 1976; Patton, 1978; Smith, 1976; Stake, 1978). One alternative to the classical paradigm is the naturalistic paradigm. The naturalistic paradigm is not new but has its roots in ethnography and anthropology. A naturalistic inquiry is a dynamic process which is not tied to a single treatment or predetermined goals or outcomes, but rather focuses on the actual operations of a program over a period of time (Guba & Lincoln, 1981; Parlett & Hamilton, 1976; Patton, 1978).
This process requires sensitivity to both qualitative and quantitative changes in a program throughout its development, not just at some end-point in time. Hamilton (1977) has characterized those alternative models as "pluralist" evaluation models. That is, models that take account of the value positions of multiple audiences. In practical terms, pluralist evaluation models (Parlett & Hamilton, 1976; Patton, 1975; Stake, 1967) can be characterized in the following manner: Compared with the classic models, they tend to be more extensive (not necessarily centered on numerical data), more naturalistic (based on program activity rather than program intent), and more adaptable (not constrained by experimental or preordinate designs). In turn they are likely to be sensitive to the different values of program participants, to endorse empirical methods which are couched in the natural language of the recipients, and to shift the locale of 25 formal judgment from the evaluator to the participants (p. 339). There are many methodological questions that can be raised about naturalistic inquiry, ranging from basic epistomological issues to operational or procedural matters. Guba (1978) has attempted to define the difficulties that face a naturalistic inquiry. The major problem, as he saw it, is that of authenticity—the establishment of the basis for trust in the outcomes of the evaluation. Other methodological problems are setting limits to the inquiry and focusing on the categories within which the data can be assimilated and understood. Despite the difficulties just mentioned, naturalistic inquiry has begun to gain credibility. Leading evaluation theorists have been strongly interested in moving away from more classical paradigms (Cronbach, 1980; Guba & Lincoln, 1981; Patton, 1978) and practitioners have begun to apply naturalistic techniques to evaluative studies (Erickson, 1977; Fienberg, 1977; Lutz, 1974; Parlett & Hamilton, 1976; Rist, 1975). A number of evaluation models have emerged which seem especially congenial to the use of the naturalistic paradigm. Five emergent models especially compatible with naturalistic inquiry are: the Responsive Model (Stake, 1975), the Judicial Model (Wolf, 1979), the Transactional Model (Rippey, 1973), the Connoisseurship Model (Eisner, 1975), and the Illuminative Model (Parlett & Hamilton, 1977). These five models have close philosophic and operational ties with naturalistic inquiry. Their emergence at this time argues strongly for the utility of naturalistic inquiry for the field of educational evaluation, 26 and helps make the case that naturalistic inquiry should be investigated as an alternative methodology. Illuminative Evaluation The illuminative evaluation model (Parlett & Hamilton, (1977) was chosen for testing because it matched the philosophy and value system of the investigator. This model: (1) permitted the study of changing and emerging problems, (2) encouraged multiple viewpoints and perspectives, (3) focused on program activities and issues rather than outcomes and (4) provided for a means of studying spontaneous events and situations. [I]lluminative evaluation, takes account of the wider contexts in which education programs function. Its primary concern is with description and interpretation rather than measurement and prediction. It stands unam biguously within the alternative methodological paradigm. 
The aims of illuminative evaluation are to study the innovatory program: how it operates; how it is influenced by the various school situations in which it is applied; what those directly concerned regard as its advantages and disadvantages; and how students, intellectual tasks and academic experiences are most affected. In short, it seeks to address and to illuminate a complex array of questions (Parlett & Hamilton, 1977, p. 144). Characteristically illuminative evaluations have three principal stages: observation, inquiry, and explanation. The first stage is an exploratory phase during which the investigator becomes knowledgeable about the program and people involved and tries to understand and document the day-to-day 27 reality of the setting or settings under study. No attempt is made to manipulate, control or eliminate situations or program developments. Faculty, participants, planners and any other persons involved in the project are observed and interviewed. Documents are reviewed to obtain an historical perspective as well as a perspective on how people regard the innovation. The second stage is a narrowing and focusing process. It is an interactive process between evaluators and relevant decision-makers or information users. Narrowing and focusing the study means dealing with several basic concerns. What is the purpose of the evaluation? How will the information be used? What will we know after the evaluation that we do not know now? What can we do after the evaluation that we cannot do now for lack of information? What topics or concerns should be selected for intensive investigation? Narrowing and focusing are key elements because programs are so complex and have so many levels, goals, and functions. There are always more potential study topics than there are time and resources to examine. The alternatives, therefore, have to be narrowed, clarified and redefined. When the alternatives have been clarified and defined, the evaluator must determine evaluation procedures. Illuminative evaluation does not have simple, standardized procedures for that function, so the evaluator might incorporate other models that offer guidelines for operationalizing the model. For example, if the study focuses on participant reactions, the extent to which the program content was assimilated and/or the 28 change in job behavior, the evaluator might incorporate Kirkpatrick's (1967) or Hamblin's (1974) model. Both these models offer guidelines for operationalizing the evaluation. The third stage consists of seeking general principles underlying the organization of the program, spotting patterns of cause and effect, and placing individual findings within a broader explanatory context (Parlett & Hamilton, 1976). (See Table 2 for summary of the three stages.) Within the three stage framework of illuminative evaluation, the investigation can combine four different data gathering techniques permitting the program to be examined from a number of angles. These are (1) observation of the participants and events; (2) interviews with participants, resource persons, and administrators; (3) questionnaires covering many aspects of the program; and (4) historical research with existing documents. The following paragraphs describe the data gathering techniques in more detail. Observat ions are an essential part of illuminative evaluation. 
They are intended primarily to build up a continuous record of on-going events, to add interpretive comments on obvious and latent features of the program, and to uncover tacit assumptions and interpersonal relationships.

Interviews are used primarily to determine the perceptions and views of individual participants. Discovering the views of participants is crucial to assessing the impact of the program. Informal interviews often provide unique insights into program processes experienced by different people.

Table 2
Illuminative Evaluation Stages

Stage One: Observation. The investigator becomes knowledgeable about the program and the people involved.
- review or discover what is expected at the outset
- consider the questions, hypotheses or issues already raised
- look for possible studies to use as models
- review historical documents
- form initial plan of action
- anticipate key problems, events
- consider possible audiences for preliminary and final reports

Stage Two: Inquiry. The investigator narrows and focuses the study.
- arrange access to the program, negotiate plan of action
- discuss arrangements for maintaining confidentiality of data, sources and reports
- identify informants and sources of particular data
- select or develop questionnaires or standardized procedures, if any
- work out record-keeping system
- make observations, conduct interviews, use questionnaires
- keep records of activities and changes

Stage Three: Explanation. This is the analysis and interpretation phase.
- classify raw data; begin interpretations
- gather additional data; triangulate data to validate key observations
- search for patterns in the data
- seek linkages between program arrangements, activities and outcomes
- select illustrations, special interpretations
- draw tentative issues, organize according to issues
- describe the setting where the activity occurred
- draft reports
- describe methods of investigation
- revise and disseminate reports

Questionnaires and tests are included to obtain information that sustains or qualifies earlier, tentative findings. Historical research using documentary and background sources provides information about the development of events. The gathering of background information yields an historical perspective of the way the program was regarded by different people before the evaluation began. This information can be obtained from letters, minutes of meetings, and reports. The data gathered often suggest topics that need investigation and expose aspects of the program that otherwise would be missed.

The three stages of illuminative evaluation do not function separately; they overlap and are interrelated. The transition from stage to stage occurs as problem areas become progressively clarified and redefined. Beginning with an extensive data base, using the data gathering techniques mentioned above, the investigator systematically reduces the scope of the inquiry to give more concentrated attention to the emerging issues. This "progressive focusing" permits unique and unpredicted phenomena to be given due weight. It reduces the problem of data overload and prevents the massive accumulation of unanalyzed material (Parlett & Hamilton, 1976).
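As a purely illustrative aside, and not part of Parlett and Hamilton's model, the Python sketch below shows one simple way the "progressive focusing" just described could be supported in practice: tallying how often candidate issues recur in field notes so that the most persistent ones can be singled out for intensive study in stage two. The notes, issue labels and keywords are all invented for the example.

from collections import Counter

# Invented field notes from stage-one observation.
field_notes = [
    "participants unsure how the sessions relate to their daily work",
    "evening discussion kept returning to workload back at the office",
    "questions raised about the relevance of the statute review session",
    "strong praise for informal contact with instructors",
]

# Candidate issues and the keywords (also invented) used to spot them in the notes.
issues = {
    "relevance to the job": ("relate", "relevance"),
    "workload concerns": ("workload",),
    "contact with instructors": ("instructor",),
}

tally = Counter()
for note in field_notes:
    for issue, keywords in issues.items():
        if any(word in note for word in keywords):
            tally[issue] += 1

# Issues mentioned most often become candidates for concentrated attention.
for issue, count in tally.most_common():
    print(issue, count)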
Summary

This chapter illuminated the historical development of evaluation. Although evaluation processes have been developed independently, most fields have developed in the same direction--toward accountability. This trend toward accountability was noted both in social science evaluation and in educational evaluation. While there has been continuity in the development of the evaluation field, a qualitative change has occurred. With the emergence of large-scale U.S. projects in the 1960's, it was found that the classical evaluation approaches were inadequate to deal with the evaluation questions and issues posed by those projects (Cronbach, 1963). One alternative to classical evaluation that arose was naturalistic inquiry, which had its roots in the fields of anthropology and ethnography. Using the techniques of in-depth, open-ended interviewing and personal observation, this approach relied on qualitative data and detailed description derived from close contact with the target of study (Patton, 1978). The illuminative evaluation model that has emerged from the naturalistic paradigm permits the study of changing problems, encourages multiple viewpoints, focuses on program activities and provides a means of studying spontaneous events and situations.

CHAPTER III
METHODOLOGY

Illuminative evaluation methodology was selected for this study for several reasons. Firstly, illuminative evaluation is based on the assumption that evaluation should "respond" to the needs, interests, and perceptions of the participants rather than to measurement criteria established a priori. Secondly, it acknowledges that there are multiple realities and multiple truths. Thus, unlike the majority of investigative efforts, illuminative evaluation elicits, considers, and builds on the in-depth information that is provided by participants, instructors and coordinators alike. Finally, data interpretations portray similarities and differences in perceptions while describing the origins and context for such agreements and discrepancies (Guba & Lincoln, 1981).

In order to determine the suitability of this methodology for evaluating residential adult education programs, it was tested at a selected site. Then the results of the testing were compared with the standards set by pre-specified criteria. If the evidence of suitability meets the standards, the methodology can then be deemed suitable. The remainder of this chapter contains descriptions of the criteria, standards, and study site.

Criteria

Three criteria appearing frequently in the literature were judged appropriate to this study. The literature suggested that an evaluation methodology should produce information that is (1) technically sound, (2) useful to some audience and (3) worth more to the audience than it costs (Grotelueschen, 1980). These criteria, used to judge the suitability of this methodology, will be described in the following paragraphs.

Technical Adequacy

Two standards of a technically adequate methodology are objectivity and validity. Of the two standards mentioned, objectivity is probably the more controversial. "For how can an inquiry be objective if it simply 'emerges'; if it has no careful control laid down a priori; if the observations to be made or the data to be recorded are not specified in advance...." (Guba & Lincoln, 1981, p. 124). The difficulty seems to stem from the meaning given to the term objectivity. Scriven (1972) has pointed out that the terms objective and subjective are opposites, but they are widely used to refer to contrasts in two different senses: a quantitative and a qualitative one. In the quantitative sense, "'subjective' refers to what occurs to the individual subject, while 'objective' refers to what a number of subjects experience."
In the qualitative sense, "'subjective' means unreliable, biased or probably biased, a matter of opinion, and 'objective' means reliable, factual, confirmable or confirmed, and so forth" (Scriven, 1972, pp. 95-96). Basically, Scriven suggested that what one individual experiences is not necessarily unreliable, biased, or a matter of opinion, just as what a number of individuals experience is not necessarily reliable, factual, and confirmable.

Illuminative evaluation methodology, based on the naturalistic paradigm, emphasizes the objectivity of the data, while evaluation methodologies based on the classical paradigm emphasize the objectivity of the investigator. In the illuminative model, the objectivity of the data is of critical concern; it should be both factual and confirmable.

The second standard of a technically adequate evaluation methodology is validity. Illuminative methodology emphasizes validity. It is concerned with the meaning and meaningfulness of the data collected and the instrumentation employed. Does the instrument measure what it purports to measure? Do the data mean what we think they mean? (Patton, 1978). Illuminative methodology makes the issue of validity central by getting close to the data, being sensitive to qualitative distinctions, developing empathy with program participants, and attempting to establish a holistic perspective on the program. This closeness to the data suggested by Denzin (1971) and others (Campbell, 1975; Guba & Lincoln, 1981; Patton, 1978) is not the only legitimate way to understand human behavior, but it is an alternative to the distance prescribed by the dominant classical paradigm. The focus in the illuminative methodology is on a valid representation of what is happening, not at the expense of reliable measurement, but without allowing reliability to determine the nature of the data (Guba & Lincoln, 1981; Parlett & Hamilton, 1976). House (1980) pointed out that

Validity is provided by cross-checking different data sources and by testing perceptions against those of participants. Issues and questions arise from the people and situations being studied rather than from the investigator's perceptions.... In constructing explanations, the naturalist looks for convergence of his data sources and develops sequential, phase-like explanations that assume no event has single causes (p. 280).

In order to determine the technical adequacy of illuminative methodology, the data produced must be objective and valid. In other words, it must be shown that the data are both factual and confirmable and give a valid representation of events. Two procedures were used to determine the technical adequacy of illuminative evaluation; those procedures are triangulation and continuous observation.

Triangulation is a process of cross-checking findings. Cross-checking enables the investigator to determine if the data collected from multiple data sources confirm each other. Besides facilitating cross-checking, triangulation also increases the credibility of data through validation. In this study, data from informal interviews, questionnaires, and observations were cross-checked. The process of combining these data sources produced data that were objective and valid.
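A minimal Python sketch of the cross-checking idea, with invented findings rather than data from this study, may make the triangulation procedure easier to picture: a finding is treated as confirmed only when every data source supports it.

# Invented findings coded from the three kinds of data sources used in triangulation.
interview_findings = {"pace too fast", "peer discussion valued"}
questionnaire_findings = {"pace too fast", "peer discussion valued", "rooms too cold"}
observation_findings = {"peer discussion valued", "pace too fast"}

sources = [interview_findings, questionnaire_findings, observation_findings]

confirmed = set.intersection(*sources)           # supported by every source
unconfirmed = set.union(*sources) - confirmed    # needs further checking or is set aside

print("confirmed by all sources:", sorted(confirmed))
print("not yet confirmed:", sorted(unconfirmed))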
Continuous observation is also important in determining technical adequacy, because continuous observation will provide a profile of the program. In this study the investigator built a continuous record of on-going events, transactions and informal remarks. Much of the on-site observation involved recording discussions with and between participants. These provided additional information that might not otherwise be apparent from more formal interviews and questionnaire responses. The data from continuous observation were used to provide a valid picture of the program. In addition, those data were used in the triangulation process. For example, oral responses were cross-checked with written responses.

If, by using triangulation and continuous observation, the data collected could not be confirmed or validated, the illuminative evaluation methodology could not be judged technically adequate. If, on the other hand, the data were confirmed through both triangulation and continuous observation, the methodology could be judged technically adequate. Thus, a technically adequate methodology provides evidence that is both objective and valid as defined in this section.

Utility

An evaluation methodology should be flexible yet produce results that are useful. The results should be relevant, important, and credible. Once the evaluation is completed, the logical expectation is that decision-makers will use the results to make rational decisions about future programming. However, all too often the results are ignored. With all the money, time, effort and skill that went into the acquisition of information, why does it generally have so little impact? Weiss (1972) suggests several reasons: evaluation results do not match the informational needs of decision-makers, results may not be relevant to the level of decision-maker who received them, or results lack clear direction for future programming. As House (1977) says, evaluations "can be no more than acts of persuasion.... Expecting evaluation to provide compelling and necessary conclusions is to expect more than evaluation can deliver. But if it cannot produce the necessary, it can provide the credible, the plausible, and the probable" (pp. 5-6).

In this study, credibility was an important standard for judging the utility of illuminative evaluation methodology. Assurance of credibility in illuminative evaluation is probably best obtained through frequent and thorough interaction with participants as the information develops. Thus, information with limited credibility can be identified easily and either eliminated or strengthened. Of course, such a process could expose the investigator to biases. While this possibility is undoubtedly real, the investigator in this study hedged against biases through such safeguards as the triangulation and continuous observation described previously. Thus, triangulation and continuous observation are useful for determining both technical adequacy and utility.

Another approach to increasing the credibility of evaluations is "participant evaluation" (Campbell, 1979). It is a move toward using participant judgments as part of the evaluation itself to provide credibility checks:

Participants...will usually have a better observational position than... outside observers of a new program. They actually have experienced the preprogram conditions from the same viewing point as they have the special program. Their experience of the program will have been more relevant, direct and valid, less vicarious. Collectively, their greater numerosity will average out observer idiosyncrasies that might dominate the report of any one ethnographer.
While participants are asked to generate a lot of data in program evaluation, rarely are they directly asked to evaluate the program, to judge the adequacy, to advise on its continuance, discontinuance, dissemination, or modification. Rather than evaluating programs, participants are usually asked about themselves and their own adequacy. We are thus wasting a lot of well-founded opinions (Campbell, Note 3).

This study used Campbell's approach to produce credible data. The participants were asked to evaluate the program, to judge its adequacy and to advise on its modification.

Two other standards of utility are the relevance and importance of the data. In this study, the relevance and importance of data were determined through interviews and informal discussions with the program planner, director and Advisory Council members.

Flexibility is also an important standard for judging the utility of illuminative evaluation. Flexibility is extremely important in program evaluation, for innovative programs are often changed as planners learn what works and what does not, and as planners experiment and change their priorities. One of the chief advantages of illuminative evaluation is flexibility, since it does not have prescribed constraints. The flexibility of illuminative evaluation methodology allows the investigator to match the evaluation to the program. In addition, flexibility ensures that the program is not required to stand still or stay the same in order to be evaluated (Edwards & Guttentag, 1975). A flexible, personalized evaluation design built upon close observer-participant and observer-instructor interaction lends itself to the highly informal, personalized environment of adult education. The course of this study was not charted in advance, since the course was dependent on the actual operation of the program. Within the three stage framework of illuminative evaluation, an information profile was assembled using data collected from four areas: observations, interviews, questionnaires and historical documents.

To make a judgment regarding utility, illuminative evaluation methodology had to provide evidence that met the standards described above. To meet the standards set, the data collected by various means had to be credible, important and relevant. In addition, the methodology had to pose no constraints on the program.

Efficiency

The last criterion used to determine the suitability of illuminative evaluation methodology is efficiency. An efficient evaluation methodology should produce results that are worth more than their costs (Grotelueschen, 1980). This criterion is the most difficult to define precisely in terms of standards and measurements. Unlike technical adequacy and utility, which are frequently described in the literature, efficiency, "when treated at all, is treated almost tangentially" (Haller, 1974, p. 405). The following is illustrative of the reason for tangential treatment by many evaluation specialists: "It embarrasses me to admit that I do not know anything about the measurement of costs. I will have to leave that to somebody else" (Stake, 1973, p. 312).

Decisions cannot be made easily in advance as to what percentage of program resources should be expended on evaluation. On the one hand, every dollar and hour spent on evaluation is taken from other aspects of the program, and those costs become a very important factor when it comes time to make decisions (Haller, 1974).
On the other hand, evaluation can be regarded as an investment in the future of the program. The value of the investment will vary with accountability demands against the program and the value of reporting program performance. Reasonable costs for an evaluation can be decided by estimating the significance of issues and the likely impact of the evaluation. In a sense, efficiency represents the ratio of effort to effect. Although various evaluation needs will entail different expenditures of resources, some form of evaluation is possible within any budget.

To determine efficiency of a methodology, cost estimates in terms of outlay (such as supplies, space), time expenditures (such as administrative effort, interviews), and expertise needed (such as program personnel or instructors) should be examined. Cost estimates of acquiring information also should take into account hidden costs. These include time lost to the program by evaluating, alternative use of funds, and human costs such as invasion of privacy, dangers of creating negative attitudes and reactions, or generating pressure on program personnel (Grotelueschen, 1980). These may be compared with the costs of not evaluating.

There will be situations in which it is possible to assess cost in dollars and others in which it is not. When costs can be reasonably measured in dollars, it is usually desirable to do so, although it sometimes requires a little ingenuity. Dollars, as measuring devices, provide a convenient, generalizable and comparable estimate of the evaluation costs (Haller, 1974). Table 3 lists the costs to be determined during the course of the evaluation. Costs in time and/or money are collected in the following general categories: personnel, materials and equipment, participant time and evaluator time.

Table 3
List of Evaluation Costs

CATEGORY (time recorded in hours; cost recorded in dollars)
PERSONNEL
  Administrative: Director, Secretary, Clerical Staff
  Professional: Instructors, Consultants
MATERIALS & EQUIPMENT
  Supplies, Space, Equipment
PARTICIPANT TIME
  During program, After program
EVALUATOR TIME
  Before program, During program, Waiting time, After program
CONTINGENCY COSTS

The question to be answered by the criterion of efficiency is whether the same outcomes could have been achieved at less cost. As greater demands are placed on limited financial resources, questions of evaluation efficiency will demand and receive closer consideration.

The criteria of technical adequacy, utility and efficiency described above were used to determine the suitability of illuminative evaluation methodology. Evidence was collected at the site described in the next section and matched against the standards set out above.

Study Site

A residential program was determined to be particularly suitable for testing illuminative evaluation because it has some unique advantages that do not exist in other program formats.

(1) The advantage of detachment from the usual routine and the sense of freedom this imparts.
(2) The advantage of an environmental break which affords a challenge by the new environment to another pattern of behavior.
(3) The advantage of concentration on one field of work without the usual distractions.
(4) The advantage of time for assimilation and integration.
(5) The advantage of intimacy of students and tutors which reinforces new knowledge.
(6) The advantage of a community spirit which encourages tolerance and open-mindedness (Schacht, 1960, pp. 2-3).
The chief advantage of a residential program over the more traditional types was that of removing the participant temporarily from his ongoing responsibilities. This made it possible for the evaluator to have continuous contact with the participants. Continuous contact is important for a methodology that relies on fieldwork techniques.

The Justice Institute of British Columbia was selected as a site for this study since it offered numerous residential programs. The Justice Institute, as a post-secondary educational institution, is a member of British Columbia's post-secondary network of colleges and institutes. It provides leadership and coordination to support, develop and deliver a wide range of training and education programs for people working within the field of justice and public safety. These programs are designed to improve the quality of justice and public safety for the citizens of British Columbia.

The Land Title School program of the Justice Institute was identified as an appropriate residential program for testing illuminative evaluation methodology because it had the unique residential characteristics described previously. Moreover, it met Edwards & Guttentag's (1975) criteria for formal evaluation. They specified that formal evaluation was appropriate if a program was new, newly changed, or about to change. The Land Title School program met that criterion, because it was new. In addition, the program planner was seeking answers to the following types of questions. Is this program a good idea? If so, what can we do to make it work as well as possible? If not, how can we devise something better, given existing constraints? (Edwards & Guttentag, 1975; Garside, 1969; Parlett & Hamilton, 1976). For those reasons, the Land Title School was determined to be a suitable program to test illuminative evaluation methodology.

The Land Title School program was developed by the Justice Institute of British Columbia for the Land Title Branch, Ministry of Attorney General. The Director of Land Titles felt personnel working in the offices should have an opportunity to better understand the legal background of their work. The staff often had been requested to interpret various regulations and procedures which were part of the registration process. Although staff weren't obliged to offer such assistance, in reality it was often the best way to expedite individual cases. The Director felt the more knowledgeable the staff member, the easier it would be to satisfy requests for explanations.

The goal of the Land Title School program was to provide Land Title personnel with job enrichment courses in three areas: land law theory, environmental awareness and supplemental training. The most important area was land law theory. In land law courses, the legal context for the land registry process was presented. Topics included principles of British Columbia land law, law history, and the Land Title Act. The second area was environmental awareness. The work of a Land Title Office reflects and is affected by activities of the wider community. These courses aimed to provide a broader understanding of the relationship between the Land Title Office and the environment in which it operates. As an example, sessions were presented on urban land use, British Columbia land history, operation of a lawyer's office and an anthropological view of land. The third area was supplemental training. Supplemental training was not essential to prescribed job performance or successful completion of promotional exams.
Supplemental training was designed to give the participants greater insight into how to perform their various tasks. Examples of supplemental training are legal descriptions, documentation and public relations.

Three courses made up the Land Title School program. They were the Introductory, Intermediate and Advanced Courses. The three-day Introductory Course for newly hired clerks provided an overview of British Columbia's Land Title system and its legal heritage. The core legal knowledge course was presented in the two-week Intermediate Course. The intensive program gave participants an understanding of the law relating to land and the legal, social and economic implications of land use. The five-day Advanced Course concentrated on specific land registry issues.

Each course was divided into a series of half-day mini-sessions. Each mini-session was taught by a different resource person drawn from university faculty, consultants, and lawyers. The mini-session content was developed from a needs assessment conducted by the Justice Institute and conformed to the three content areas identified as essential--land law theory, environmental awareness and supplemental training.

The Land Title School program as outlined above met the criteria for an appropriate site for testing illuminative evaluation methodology. Firstly, the Land Title School was a new program. Secondly, the program coordinator was anxious to have the program evaluated. Lastly, it was offered on a residential basis, which meant that the investigator would have continuous contact with the participants.

Summary

This chapter was divided into two sections. The first section contained a description of the criteria used for judging the suitability of illuminative evaluation methodology. Three criteria appearing frequently in the literature were judged appropriate to this study: technical adequacy, utility, and efficiency. The literature suggested that an evaluation methodology should produce information that is (1) technically sound, (2) useful to some audience and (3) worth more to the audience than it costs (Grotelueschen, 1980). In this chapter, the criteria were defined, standards set, and the method for collecting evidence to judge each criterion was described. In order to meet the set standards, the methodology must first provide evidence that is both objective and valid. Second, the evidence must be credible, important, and relevant while the methodology remains flexible. Third, the evidence must be worth more than the costs.

The second section of this chapter contained a description of the site selected for testing illuminative evaluation methodology. A residential program was determined to be particularly suitable for testing this methodology because it had some unique advantages that did not exist in other program formats. The chief advantage of a residential program over the more traditional types was that of removing the participant temporarily from his ongoing responsibilities. This made it possible for the evaluator to have continuous contact with the participants, because continuous contact is important in a methodology that relies on fieldwork techniques. The Land Title School program of the Justice Institute of British Columbia was identified as an appropriate residential program for testing illuminative evaluation methodology because it had the unique residential characteristics described in this chapter. The Land Title School was a residential program designed for voluntary job enrichment.
It was intended to appeal to those who wished a greater insight into their work than was required to competently perform assigned duties. The courses designed for the program were Introductory, Intermediate and Advanced. These courses would be offered yearly on a residential basis. The fact that the Land Title School was both new and residential made it an ideal site for testing the suitability of illuminative evaluation methodology.

CHAPTER IV

ILLUMINATIVE EVALUATION STRATEGY

This chapter provides a discussion of how illuminative evaluation was employed in this study. Briefly, the illuminative methodology was utilized to evaluate each phase in the development of the Land Title School program. It was used to describe and understand relationships that could be observed in a natural, non-contrived environment under controlled conditions.

The first or pilot phase of the Land Title School program represented a trial and error period during which new approaches or procedures were tried out on a rather flexible and easily revised basis. During the development of the program, some modifications occurred. The first pilot course was the Intermediate Course (March, 1980). In order to evaluate this course, the illuminative evaluation methodology was employed. The success of this course led to the development of two more pilot courses—the Introductory Course (November, 1980) and the Advanced Course (December, 1980). For consistency, the three stage illuminative methodology was utilized again. The main objective of the evaluation of the pilot phase was to learn enough to further develop the program.

The second or operational phase of the Land Title School program consisted of modifications of all three courses. Based on what was learned in the pilot courses, these courses were modified so that they stood the greatest chance of success. This final phase was also evaluated using the illuminative methodology.

In order to operationalize the illuminative methodology, a number of steps had to be taken at the observation, inquiry, and explanation stages (see Table 2). These three stages were repeated for each course. The result was six complete evaluation cycles since each of the three courses was run twice. In the remainder of this chapter, processes used in each evaluation cycle will be described.

Pilot Phase

The first evaluation cycle of the illuminative methodology started with the pilot Intermediate Course. In the observation stage, the investigator became familiar with the Intermediate Course through analysis of background documentation and discussions with the program coordinator and Land Title Director. This familiarization process enabled the investigator to proceed to the inquiry stage.

The inquiry stage was an interactive process between evaluator and program planner. This stage consumed the major portion of the investigator's time because a number of steps had to be taken. For example, issues had to be clarified, the evaluation process determined, questionnaires designed and data collected. The following paragraphs present detailed descriptions of the steps taken during this inquiry stage.

Issues Clarified

First, the following issues were discussed and clarified--the purpose of the evaluation, what process should be used, how the information would be used, and what topics should be selected for intensive investigation. When these issues had been clarified, the specific evaluation process was determined.
Evaluation Process

Evaluation processes can be divided into a number of levels, and evaluation can be carried out at any of these levels. In this study, it was decided to concentrate on the hierarchical levels identified by Kirkpatrick (1967) and Hamblin (1974). These levels, starting from the lowest level, are:

(1) participant reactions, or how well they liked the program;
(2) learning, or the extent to which the program content was assimilated;
(3) behavior change, or the change in job behavior;
(4) results, or the change in organizational variables.

Kirkpatrick and Hamblin assumed there was a cause and effect chain linking the four levels. This hierarchical chain could break at any of its links. For example, a person could react correctly but fail to learn; or he could learn, but fail to apply his learning on the job; or he could change his job behavior, but this could have no effect on the organization. The job of the evaluator is to determine if the links in the chain hold and, if they do not, where they broke and why.

Evidence can be collected at any of these levels; however, the degree of difficulty in collecting evidence at each level increases as one ascends the hierarchy. The participant reaction level is the simplest and easiest level. As the hierarchy is climbed, the difficulty and the resources required to measure actual program outcomes generally increase (Bennett, 1975). The difficulty often starts at the behavior change and results levels because the evaluator does not usually have adequate information about or control over the non-training activities of the organization. Furthermore, the techniques which are used to evaluate at those levels will normally be those which the organization already has at its disposal and uses for other purposes. If the appropriate techniques such as productivity measurements or cost-benefit analysis don't already exist in the organization, evaluation at the higher levels will be impossible because techniques of this kind cannot be introduced for education or training purposes alone (Hamblin, 1974). Therefore, in many cases it may be impractical to evaluate at every level. In the case of this study, it was impractical to evaluate at the results level, for the Land Title Branch did not have techniques set up for evaluation at the results level. Therefore, it was decided to concentrate on the first three levels: participant reactions, learning, and behavior change.

Besides describing the levels of evaluation for this study, Kirkpatrick and Hamblin gave detailed examples and suggested procedures and techniques that could be used in most programs. This guidance was lacking in the literature on the illuminative evaluation methodology.

Questionnaire Design

In the process of designing questionnaires, the investigator discovered that some theorists claim a program should never be carried out unless it has clear objectives (Bennett, 1975; Patton, 1978; Stufflebeam, 1967). Others say that, although it is permissible to carry out such a program, it is impossible to evaluate it. However, there are people who disagree with both philosophies (Hamblin, 1974; Warr, Bird & Rackham, 1970). This study was guided by the latter authors, for this program did not have any measurable objectives. It was difficult to set measurable objectives for the Land Title School program or specific sessions even at the participant reactions and learning levels, because so little was known about the participants' previous state of learning.
In cases when objectives are not formulated in measurable terms, the best way of assessing reactions or learning changes may be simply to ask participants whether they find the course interesting, whether they think their knowledge has improved in specific areas, and/or to identify the most important or most job-relevant point they remember from a program. In the absence of behavioral objectives which specify precise evaluation criteria, the evaluation must adopt open-ended techniques. Due to the absence of measurable objectives, the questionnaires developed for this study contained mainly open-ended questions. The questionnaires developed are described below.

Expectations Questionnaire (See Appendix A). This questionnaire was used to assess participant expectations of the Land Title School prior to the start of the program and at its conclusion. The questionnaire, developed by Warr, Bird, and Rackham (1970, p. 65), was used to obtain feedback on participants' expectations regarding the usefulness, enjoyment, relevance, and importance of the course. Participants approach a course with a set of expectations which are important in determining their reactions to the program.

Mini-session Questionnaires (See Appendix B). Since the course was divided into a series of mini-sessions taught by different resource persons, a questionnaire was designed that could be used at the end of each session. The mini-session questionnaire was used to assess participants' reactions and perceived learning. Participants were asked to rate each mini-session on a scale of 1 to 7 from "not very..." to "extremely..." on interest, relevance to job, and new information gained. In other words, participants were asked to make decisions about the usefulness of specific content areas in terms of three important contexts: relevance to their own expectations, perceived application to their work situation and their previous knowledge. In addition to rating scales, participants rated the mini-sessions using open-ended responses. The questionnaire, originally developed by Warr, Rackham and Bird (1970), was adapted for use in this study by the investigator and program coordinator. The questionnaire was administered immediately after each mini-session.

Final Questionnaire (See Appendix C). This questionnaire, administered immediately following the program, was used to determine how participants felt about the program. The instrument consisted of rating scales to assess participant reactions to the program as a whole and participant beliefs about the relevance of information gained to their work. A series of open-ended questions was also included.

Follow-up Questionnaire (See Appendix D). Forty-five days after completion of the course, participants were sent a questionnaire consisting of several sections of the final questionnaire. The remainder of the instrument contained a series of open-ended questions about the relevance of information, the effect of the course on participant job behavior, and participant satisfaction with the general program.

Data Collection

Lastly, in the inquiry stage of the pilot Intermediate Course, data were collected through observation, informal interviews and questionnaires. The investigator used the following procedures to collect data through questionnaires. After the program director and Director of Land Titles officially started the pilot Intermediate Course, the investigator introduced the study, solicited cooperation and gave instructions.
All participants were instructed to select a numerical identity code that was to be used on all questionnaires. Then the expectations questionnaire was given to all participants. A packet of mini-session questionnaires was provided to participants so they could rate each session that was taught by a different resource person. These questionnaires were to be completed at the end of each mini-session and returned to the investigator. The final questionnaire was administered on the final day of the course during the evaluation session. The follow-up questionnaire was sent to participants forty-five days after course completion. Table 4 summarizes the data collection schedule.

Table 4
Data Collection Schedule

Historical Documents. Source: Program Director. Time: two weeks prior to course. Such as course proposal, minutes of meetings, letters, course brochure.

Expectations. Source: Participants. Time: prior to start of mini-sessions on the first day of course. This questionnaire assessed participant expectations prior to start of course; for example, would it be useful-useless, helpful-unhelpful, important-unimportant.

Mini-sessional. Source: Participants. Time: end of mini-session. This questionnaire concerned interest of session, information gained, relevance to job, length of session, level of session.

Observation. Source: Investigator. Time: during courses. Observation of participants during course and at breaks.

Interview. Source: Participants. Time: during courses. During breaks, before and after class.

Expectations & Final. Source: Participants. Time: final day of course. These questionnaires assessed fulfillment of participants' expectations and their feelings at the end of the course.

Follow-up. Source: Participants. Time: 45 days following last day of course. This questionnaire was concerned with relevance of information, effect of course on job behavior and satisfaction with general program.

The data collection step completed the inquiry stage of the illuminative methodology. The steps taken during the inquiry stage enabled the investigator to proceed to the third and final stage of the methodology.

In the explanation stage, all the data collected from the pilot Intermediate Course were analyzed using either qualitative or quantitative techniques. Since the class size was small (n=43), only simple analytical procedures were used. All quantitative data, obtained through questionnaires only, were arranged numerically by participant identity code. This technique enabled the investigator to trace the ratings of individual participants if required. Mean scores were calculated for each rating scale using a small table-top computer. These scores were used to develop summary and trend charts. Qualitative data obtained from questionnaires, observations, and interviews were typed to aid analysis. Data from questionnaires were arranged numerically by participant identity code; thus, each participant's comments could be cross-checked with the quantitative data. All coded responses for each mini-session were combined and typed. This procedure was followed for each open-ended question on the final and follow-up questionnaires. The procedure of combining and typing responses facilitated ease of analysis and eliminated biases created by an individual's handwriting.

All the data collected were used to establish an information profile of the program. The data were used to answer the questions posed by the program coordinator mentioned previously. Is this program a good idea? If so, what can we do to make it work as well as possible? If not, how can we devise something better, given existing constraints?
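The quantitative side of this analysis, mean ratings per scale for each mini-session that can later be assembled into summary and trend charts, can be illustrated with a short sketch. The sketch below is hypothetical: the 1-to-7 scales follow the mini-session questionnaire described earlier (interest, relevance to job, new information gained), but the data layout, field names and sample figures are assumptions introduced for illustration, not records from the study.

```python
# A minimal, hypothetical sketch of the mean-score summaries described above.
# Field names and sample data are illustrative assumptions, not study records.
from collections import defaultdict

SCALES = ("interest", "relevance_to_job", "new_information")  # 1-7 ratings

def mean_ratings(responses):
    """responses: one dict per returned mini-session questionnaire."""
    totals = defaultdict(lambda: {s: 0 for s in SCALES})
    counts = defaultdict(int)
    for r in responses:
        counts[r["session"]] += 1
        for s in SCALES:
            totals[r["session"]][s] += r[s]
    return {session: {s: round(t[s] / counts[session], 2) for s in SCALES}
            for session, t in totals.items()}

if __name__ == "__main__":
    sample = [
        {"session": "Land Law History", "interest": 6, "relevance_to_job": 5, "new_information": 6},
        {"session": "Land Law History", "interest": 4, "relevance_to_job": 5, "new_information": 3},
        {"session": "Urban Land Use", "interest": 5, "relevance_to_job": 3, "new_information": 5},
    ]
    for session, means in mean_ratings(sample).items():
        print(session, means)
```

Per-session means of this kind are the sort of figures that could then be compared across mini-sessions, across groups, and across the pilot and operational courses.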
When interpretation of the data was completed, a final report was written and sent to all participants, instructors, Land Title registrars, and Advisory Committee members (Hasman, Note 4). With the distribution of the report, the first cycle of the three stages of the illuminative methodology was complete.

The first evaluation cycle required more of the investigator's time than any of the subsequent cycles because all procedures for each stage had to be established. The investigator had to quickly become familiar with Land Title work, the personnel, and the program. Then, with the cooperation of the program coordinator, issues had to be clarified and defined and evaluation processes determined. Next, operationalization procedures had to be developed, questionnaires designed, and data collected and analyzed. Finally, results had to be reported.

The second and third cycles of the illuminative stages started after development of the pilot Introductory and Advanced Courses respectively. Since there were no changes made during these cycles, they will be described together. The issues had been clarified, evaluation processes determined and questionnaires designed during the pilot Intermediate Course, so the amount of time and effort involved in the observation and inquiry stages of the second and third cycles was decreased significantly.

A few modifications, however, were made during the observation stage. A decision was made to discontinue the identification code of each participant for two reasons. First, the group size had been reduced to a maximum of twenty; second, there was no benefit in tracing individual responses. Besides eliminating the numerical codes, some questionnaires needed alterations. For example, the sessional, final and follow-up questionnaires were modified as a result of suggestions from the program coordinator, the investigator and Advisory Council members (see Appendix E). The expectations questionnaire was replaced by an "expectations warm-up" exercise. This questionnaire was changed because the program coordinator wanted to eliminate some of the evaluation forms. The modification provided both expectations data and a group "warm-up" (see Appendix E).

After making the above modifications for both pilot Introductory and Advanced Courses, the investigator collected data by interviews, observations, and questionnaires. The data collection methods were the same as those described in the first cycle.

The explanation stage followed data collection. This stage followed the same procedures set down during the first cycle except that the responses were not coded by identity number and a desk calculator was used instead of a computer. All open-ended responses, interviews and observation notes were typed so that the investigator could interpret the results. Like the pilot Intermediate Course evaluation, a report was written and distributed (Hasman, Note 4). The second and third cycles through the illuminative stages were then complete. As mentioned, the time and effort involved during these evaluation cycles was reduced significantly due to the procedures established during the first cycle.

Operational Program

The last three evaluation cycles were completed during the operational phase of the Land Title School program. In the final cycles through the illuminative stages, the only changes made were to the program itself. After a few minor program alterations, the three courses were in final operational form.
Evaluations of each course followed the procedures established during the first three cycles of the illuminative stages. Since all three cycles were the same, they will be described together. The investigator reconfirmed the focus of the evaluation and proceeded to gather data through observations, interviews, and questionnaires. In due course, the data were organized, interpreted, and reported. Upon presentation of the report to the program coordinator, participants and Advisory Council members, the final three cycles of the illuminative stages were completed.

Summary

This chapter provided a discussion of how illuminative evaluation methodology was employed in this study. The illuminative evaluation methodology was utilized to evaluate all three courses of the Land Title School program. Since each course ran in both a pilot and an operational form, six evaluation cycles of the three-stage illuminative methodology were used.

In the first cycle of the illuminative evaluation methodology, time was equally divided between the three stages: observation, inquiry, and explanation. During the second and third cycles of the methodology, time was reduced in the observation and inquiry stages. This was due to the fact that the procedures had been established during the first cycle. Since only minor alterations were made, the investigator did not focus efforts on those stages. The explanation stage also required somewhat less time. This was due to the smaller class size, which reduced the amount of data. The last three cycles of the illuminative stages encompassed the three-course operational program. In these cycles, no changes were made to the evaluation procedures, so the majority of the investigator's time was spent on the explanation stage. The analysis took about the same time for all six cycles.

CHAPTER V

RESULTS

The literature reviewed for this study suggested that an evaluation methodology should produce information that is (1) technically sound, (2) useful to some audience and (3) worth more to the audience than it costs (Grotelueschen, 1980). These criteria, described in Chapter III, were selected for this study. Evidence used to assess the suitability of illuminative evaluation in relation to these criteria will be presented in this chapter.

Technical Adequacy

A technically adequate evaluation methodology will produce evidence which meets the standards of objectivity and validity as defined in Chapter III. In order to be judged technically adequate, the methodology must produce data that can be confirmed. In other words, the burden of proof moves from the investigator to the information itself. Illuminative evaluation methodology encouraged collection of data from multiple sources and perspectives in order to cross-check and confirm results. This would ensure objective and valid information. Procedures used in this study for confirming results were triangulation and continuous observation. The evidence presented in the paragraphs below supports the technical adequacy of the illuminative evaluation methodology.

Triangulation

Triangulation was used extensively in this study to provide evidence of objectivity and validity. Triangulation

forces the observer to combine multiple data sources, research methods, and theoretical schemes in the inspection and analysis....It forces him to situationally check the validity of his causal propositions....It directs the observer to compare his subject's theories of behavior with his emerging theoretical scheme.... (Denzin, 1971, p. 177).
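Before turning to the specific examples from this study, the cross-checking at the heart of triangulation can be sketched in a few lines of code. The sketch below is a hypothetical illustration rather than a procedure taken from the study: it pairs any mini-session whose mean rating on some scale falls below a chosen threshold with the typed comments for that session, so that the quantitative and qualitative records can be read against one another. The function name, threshold and sample values are assumptions introduced only for illustration.

```python
# Hypothetical sketch of one triangulation cross-check: low mean ratings are
# flagged and paired with the typed comments for the same mini-session so the
# two data sources can confirm, or explain, one another.

def flag_for_review(mean_scores, comments, threshold=4.0):
    """mean_scores: {session: {scale: mean rating}}; comments: {session: [text, ...]}."""
    flagged = {}
    for session, scales in mean_scores.items():
        low = {scale: mean for scale, mean in scales.items() if mean < threshold}
        if low:
            flagged[session] = {"low_scales": low,
                                "comments": comments.get(session, [])}
    return flagged

means = {"Stress and Health": {"interest": 5.8, "new_information": 3.2},
         "Legal Descriptions": {"interest": 6.1, "new_information": 5.9}}
notes = {"Stress and Health": ["This material was just covered before this course."]}

for session, detail in flag_for_review(means, notes).items():
    print(session, detail["low_scales"], detail["comments"])
```

Read together, the two records either confirm each other or, as in the examples that follow, the comments explain why a rating is low.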
The first example of triangulation involved cross-checking qualitative and quantitative data from each mini-session. The quantitative data consisted of mean scores from each mini-session while the qualitative data consisted of typed responses from each mini-session. As a result of cross-checking these two types of data, the ratings of each mini-session were confirmed. Additionally, comments from the final questionnaire confirmed and further validated the mini-session data.

Besides cross-checking and confirming, the qualitative data were used for interpreting quantitative data, because statistics cannot tell why someone rated a session high or low. For instance, several participants rated sessions low on the variable "new information gained" while the majority rated the same session high. The investigator wondered why the ratings were low. The following comments illuminated the problem.

This material was just covered by (Mr. Registrar) before this course.

Everybody 'jogs'. I am aware of stress and health hazards and I would think most of the class would be.

In another instance, a session rated poorly on "interest" and "information gained" but high on "relevance." Without the qualitative information, the investigator would not know if there were problems with content and/or instructor. The qualitative information revealed the problem:

I've waited two weeks especially for this lesson and was thoroughly disappointed. I think this specific area could prove beneficial to us all and gone into some depth. The level of 'presentation' we received was far below that which could prove useful and relevant.

In addition to cross-checking and interpretation, qualitative information provided a richness of description difficult to capture in a quantitative summary. Campbell (1975) noted that recognizing these functions of qualitative data "immediately legitimizes the 'narrative history' portion of most evaluative reports" (p. 9). He suggested that the importance of qualitative data be given formal recognition in the planning and execution of evaluations. "Evaluation studies are uninterpretable without this, and most would be better interpreted with more" (p. 9).

A second example of triangulation was comparison of responses on pre- and post-expectations questionnaires. Three variations of the expectations questionnaire were used. The first variation consisted of comparing pre-course expectations, rated on a semantic differential type scale, with open-ended comments on the final questionnaire. The second variation consisted of the following. Participants were asked to write down their expectations on the first day of class. Then, on the final day, they were asked to check their expectations from the first day and see if they were fulfilled. If their expectations were not met, they were asked to explain. For example, one participant wrote "I'm hoping not to be bored." Comments from the final questionnaire reflected that the participant had not been bored:

I came into this course expecting to be bored and was surprised about the competence of the instructors and the time they took to explain the answers to all our questions.

In some cases, participants had certain expectations as a result of talking to former participants. The following comment illustrates how participants can come with erroneous expectations and have them changed by the course.

Did not have expectations but was more informative and interesting than I was led to believe by previous students.
The third variation consisted of participants presenting their expectations orally on the first day of class. Each participant's expectations were noted and typed. On the final day, participants wrote whether their expectations had been fulfilled. Finally, after comparing the written responses to the typed ones, the participants were asked to note any differences. It was found that many participants felt the course would be job training, as reflected in this comment:

Thought it would be more in line with my work duties, but because of the way the system (L.T.O.) is structured, I can see why it was more general.

A third example of triangulation involved data from interviews. The information from interviews was used to cross-check quantitative data from the mini-sessions. The investigator checked to determine if the participants were saying one thing orally and another thing in writing.

A final example of triangulation involved unsolicited information. Two participants called the investigator to say that the information gained from the pilot Intermediate Course plus their notes had helped them pass their promotional exam. In another instance, several participants told one instructor that he should be hired by Land Title to offer his mini-session to all employees. These unsolicited comments further confirmed responses from the courses.

Continuous observation

If one way to establish the validity of data is through the use of triangulation, another way is through the use of repeated observations. Eisner (1975) makes the point that "One of the reasons why it is important... to have extended contact with an educational situation is to be able to recognize events or characteristics that are atypical. One needs sufficient time in a situation to know which qualities characterize it and which do not" (p. 218). Thus, validity is, to some extent, a function of the amount of time and effort which the investigator invests in repeated and continuous observation. Not only will the investigator be able to differentiate typical from atypical situations, or identify pervasive qualities which characterize a situation, but he will also know when to give credit to the occasional idiosyncratic observation which nevertheless carries great insight and meaning (Guba, Note 2).

Continuous observation and extensive contacts are hallmarks of illuminative evaluation methodology. Continuous observation provides a variety of information that could not have been collected by any other means. Through observation, the investigator discovered the mood of the group. The information gained through observation was passed on to upcoming speakers, for the mood of each group was different. For example, one group didn't ask any questions during presentations, although at coffee breaks they asked many questions. Upon questioning the participants, the investigator discovered that the participants were hesitant to interrupt the speaker because they felt it was either rude or disruptive. This information about the group was passed on to succeeding speakers. The investigator also was able to observe the development of the group. Some participants were shy or unaccustomed to a participative group. When this was observed, the investigator alerted the program coordinator. These insights were passed on to ensuing speakers who used the information to help the process along; for example, the speaker might use an "ice breaker" or warm-up exercise.
Through continuous observation and extensive contact, the investigator was able to observe group trends, alleviate misconceptions about the program and check how the program was being received.

The above evidence supports the technical adequacy of the illuminative evaluation methodology. Since the qualitative data were confirmed and validated by the quantitative data, this methodology met the standards of objectivity and validity as defined in Chapter III.

Utility

In order to meet the criterion of utility, the illuminative evaluation methodology must remain flexible while producing useful, valid information. This methodology should produce evidence that meets the standards of relevance, importance, credibility, and flexibility as defined in Chapter III. The following evidence is presented in support of the criterion of utility.

Relevance and Importance

Evidence of data relevance to the program planner is shown in the types of programming changes made as a result of the pilot Intermediate Course findings. For example, in subsequent courses, descriptive brochures were distributed; class size was reduced; time allocations for mini-sessions were varied; more breaks were scheduled; group discussions and field trips were used rather than straight lecture; more audio-visual aids were introduced; and resource persons with teaching experience were sought. These changes made as a result of the evaluations were used to further develop the program.

Written and oral reports by the investigator were important to members of the Land Title School Advisory Council. The members found the information provided was useful in determining future directions and policy for the Land Title School program. For instance, the investigator reported the lack of class discussion in the pilot Intermediate Course. Based on that information, the Council recommended a maximum class size of twenty which they felt would facilitate class discussion.

The qualitative information was relevant and useful to the resource people. Participant comments were forwarded to speakers, so that they could use the information to improve their courses. As evidenced through comments, a number of participants did not understand the relationship between certain mini-sessions and the work that they did. The participants found it difficult to bridge the gap between theory and practice. When an instructor was made aware of that difficulty, he tried to explain the relevance or relationship. Furthermore, as a result of participant comments, several instructors requested tours through the Land Title Office in order that they might better understand the needs of the participants.

Credibility

Assurance of credibility was obtained through involvement. In order to obtain participant and decision-maker involvement in this evaluation, it was necessary to gain their confidence by demonstrating interest in their opinions and willingness to act on their advice. The participants were asked to evaluate the course through specific, detailed comments and suggestions regarding changes, additions/subtractions, and modifications of each mini-session. The participants were told that the course was designed to suit their needs. If it was not relevant, important and/or appropriate, it was their responsibility to respond accordingly on the questionnaires. The following is an example of the way participants were encouraged to evaluate the program:

Please reflect on your experiences of the past week when answering the following items.
Be candid in expressing your feelings, whether they are positive or negative. Make your comments very specific for they will help us tremendously when we plan the next course.

In addition, participants were given copies of the evaluation report, so they would have tangible evidence that the information they generated was being read and used. For example, several courses were developed from participant ideas. When that occurred, it was mentioned to subsequent groups. All this helped establish the credibility of the evaluator and the utility of the evaluation.

The credibility of the study for the program coordinator and Advisory Council members was enhanced by involving them in the decision-making process. For example, they were involved in decisions concerning the nature, purpose, and methods of evaluation. Involvement of those persons encouraged them to keep informed by reading reports and attending meetings.

Flexibility

Flexibility is inherent in most methodologies based on the naturalistic paradigm. However, the question to be answered in this study is whether the illuminative evaluation methodology can be flexible yet produce technically adequate and useful data. One of the chief advantages of illuminative evaluation methodology is its flexibility, for it does not have prescribed constraints. The illuminative methodology allows flexibility in data collection techniques, types of data used and programming changes, to name a few. Evidence of flexibility is presented below.

Data Collection Techniques. A number of data collection techniques were used—observation, interview, questionnaire and historical documents. The illuminative evaluation methodology suggested how these techniques could be used and encouraged the use of all four. However, certain techniques were more suitable for a specific stage of the methodology than were others. For example, historical documents were used only during the observation stage, because the investigator needed to gain insight and understanding of the project's background and development. During the inquiry stage, interviews and meetings were used to determine the nature, purpose, and focus of the evaluation. Interviews as well as observations and questionnaires were used exclusively to collect data during each course. The use of these varied techniques allowed the investigator flexibility as well as a means of cross-checking and confirming results.

Questionnaire Development. Flexibility was critical during original questionnaire design and subsequent modifications. By experimenting with evaluation instruments, the investigator developed and modified questionnaires. Since neither the program nor the mini-sessions had clear-cut objectives, the investigator had to get as much information as possible by utilizing open-ended questions. Then a sifting, narrowing and focusing process was used to reject those questions that were useless and refine and improve those that were useful. The questionnaires were redesigned and modified at the end of the pilot Intermediate Course and again at the completion of the pilot Introductory and Advanced Courses. This refining process ensured that only the most relevant data would be collected. For example, after the pilot Intermediate Course, the Land Title Director wanted to delete the question which referred to participants' perceived relevance of the course.
Since the purpose of the course was not job training but job enrichment, he felt that those questions might mislead the participants into believing the course should be job relevant. Thus, the job relevance questions were eliminated in subsequent questionnaires. Another example of change concerned expectations. It was decided during review of the pilot Intermediate Course that information on prior expectations could be gained through a warm-up exercise prior to the start of the program, thereby eliminating one form.

Data Collection Methods. In some instances the method of data collection had to be modified. For example, in the pilot Intermediate Course, it was important to know if the "level of presentation" and "length of session" were appropriate to the participants. The investigator tried to measure those variables statistically, but the information obtained was not useful. For example, the ratings of "length of session" were mid-range with no significant variance; that is, the mini-sessions were neither too long nor too short. The ratings of "level of presentation" also clustered around the mid-point although there was some variance. Moreover, the participants made no written comments to clarify those ratings. The investigator, however, heard comments that seemed to contradict the mid-range ratings: "The hardest part was to stay seated because I'm always running around the office." Therefore, the investigator changed the data collection method during the pilot Intermediate Course from rating scales to observations and interviews in order to obtain valid and useful data. Through observations and interviews, it was discovered that mini-sessions were too long. More importantly, participants needed more breaks. In subsequent courses, rating scales for "length of session" and "level of presentation" were eliminated, since they provided no useful information. Information on "level" and "length" was obtained through informal interviews and observations in all succeeding courses.

Types of data. The data collected in this study were not limited to one type; both qualitative and quantitative data were collected. The quantitative format enabled the investigator to produce summaries of mini-sessions quickly and accurately. These data were used for comparing individual mini-sessions as well as for comparing the ratings of two different groups on the same mini-session. Comparisons could also be made of entire courses. For example, the pilot Intermediate Course could be compared with the operational Intermediate Course. Although reading and summarizing numerous lengthy responses to open-ended questions was a very time-consuming procedure, the qualitative data gave the investigator insight into participants' perceptions. The following is a composite of participant perceptions from the pilot Intermediate Course. These composite perceptions give far more depth, richness and feeling than numerical ratings.

I really enjoyed this course and obtained much valuable knowledge. It will give me more confidence in my day-to-day work. I realize the work involved in planning this course for us and I think it's terrific. It never hurts to learn more about your job. The more knowledgeable I am about my job, the more I'll enjoy it. Result better work! I feel we should have a course like this once a year. Let me know the data. Bravo!!

Besides providing insight, open-ended questions have other advantages that are useful for this study. First, open-ended questions permitted ventilation of participant feelings.
Participants were given the opportunity to express their exact opinions in an open-ended response. If they had been asked to simply check items, they might have felt forced into responses that did not exactly match their attitudes. For instance, participants were asked how relevant the course was to their jobs. The following are some examples of their responses:

Dealt too much on his own point of view; got the impression he was trying to flog his book.

This lecture is not relevant to our jobs as it was neither a presentation of new material nor an in-depth treatment of known information.

Although the theories involved were relevant and possibly interesting the relevance seemed too far removed from our own experiences.

Second, open-ended questions produced responses which drew the evaluator's attention to a situation or outcome that was unanticipated when the course was developed and/or when questionnaires were designed.

I had no idea of the problems faced by my fellow clerks in other offices. It highlighted my weaknesses so now I can improve them.

The calibre of the lecturers was excellent.

All knowledge that a person gains over his/her life will effect what and who that person is or becomes in that all persons continue through their life to change. That life is not as simple or as boring as it sometimes seems!

Third, open-ended questions did not limit the range of possible answers as closed-response questions would. For example, if you wanted to know about participants' salient impressions of the program, an open-ended question asking for impressions is better than a checklist of possible responses: If you were to reorganize the course, what would you change, leave the same, etc.? Explain.

Program Changes. Due to the flexibility of the evaluation methodology, the program itself could be modified without invalidating the study. This flexibility is extremely important in a developing program. For example, as a result of the evaluation of the pilot Intermediate Course, several programming changes were made. More "stretch breaks", field trips, and longer lunch breaks were a few of the changes. These changes were made because participants were not accustomed to being students and found it quite difficult to sit for long periods of time. In addition, the evaluation results of the pilot Intermediate Course showed that the group size (n=43) was too large to facilitate interaction among participants and between participants and instructors. Since the size seemed to inhibit discussion, classes were reduced to a maximum of 20. Due to the flexibility of the methodology, these changes improved the program but did not affect the validity of this study.

The above evidence supports the utility of the illuminative evaluation methodology. Since the methodology remained flexible while producing useful, valid information, it met the standards defined in Chapter III.

Efficiency

An efficient evaluation methodology should be worth more to the recipients than it costs. This is not an easy criterion to measure, for all evaluations require time and money. In order for the illuminative evaluation methodology to be judged efficient, the question to be answered by this study was whether the same outcomes could have been achieved at less cost, because the investigator is responsible for making the most of the resources available. The following evidence is presented in support of the criterion of efficiency.
It was impossible to attach a dollar value to the time invested by various people during this study, for the bookkeeping involved would have increased the costs unnecessarily. As an alternative, Table 5 was developed to summarize graphically the relative amount of time spent on each stage of the illuminative evaluation methodology during the development of the Land Title School program. This table shows that the greatest amount of time and, therefore, money was invested in the pilot Intermediate Course. The cost of the pilot Intermediate Course was high due to high developmental costs. As the time spent was reduced, so were the costs. By the time of the operational program, the costs had stabilized. The material and supplies costs remained constant throughout the program.

Table 5
Effort and Time Spent on Illuminative Stages
(The original table is a graphic summary. For the pilot and operational versions of the Intermediate, Introductory and Advanced Courses it charts the approximate effort, from minimal to average to maximum, and the approximate time, from 1/2 week to 2 weeks, devoted to the observation, inquiry and explanation stages. The course totals shown range from 5 weeks for the pilot Intermediate Course and 3 weeks for the other pilot courses down to 2 1/2 weeks for the operational courses.)

Based on the experience in this study, an investigator should expect to spend between four and five weeks on a one-week course of this kind. Investigator time is divided between the activities of the observation, inquiry and explanation stages of the methodology (see Table 2). The following explanation of Table 5 is divided according to the three stages of the illuminative methodology.

The observation stage of the pilot Intermediate Course involved considerable investigator time, minimal program coordinator time and no participant time. In this stage, the investigator became familiar with the program mainly through historical documents and discussions with the program coordinator. Being thoroughly familiar with the Land Title School program after the pilot Intermediate Course, the investigator required less time for this stage. As noted on the table, in subsequent courses, time was reduced significantly until it reached a stabilized level. Based on the experience in this context, a person should expect to spend about one week or less on this phase.

The inquiry stage of the pilot Intermediate Course also required considerable investigator, coordinator and Advisory Council member time (approximately two weeks). It was during this stage that the issues were clarified, the evaluation process determined, questionnaires designed and data collected. Participants were involved in the data collection phase.
In the following courses the class size was reduced to a maximum of twenty participants, which meant a reduction in the amount of data collected. In addition, student coding was dropped, for it did not provide useful information. These two changes reduced both secretarial and investigator time to approximately one week.

Both qualitative and quantitative data were collected during this study. Coding, typing and analyzing the qualitative information involved considerable secretarial and investigator time. In comparison, coding and analyzing the quantitative information was rapid since only mean scores were calculated. The actual analysis time was the same for all the courses after the pilot Intermediate Course (approximately one week).

Although the qualitative information required more time for collecting, coding and analyzing than the quantitative information, it was more relevant and important to the resource people, program coordinator and Advisory Council members. Evidence in support of this statement is presented in the Utility section ("Relevance and Importance") of this chapter. In addition, the evidence in "Types of Data" in the Utility section supports the utility of the more costly qualitative data.

The above evidence was presented in support of the criterion of efficiency. The evidence presented is only one aspect of the efficiency criterion; it addresses the criterion from the data cost perspective. The value of the data relative to their cost was determined by the investigator, based on comments from Advisory Council members, participants, and resource persons, not by an impartial person.

Summary

In order to determine the suitability of the illuminative evaluation methodology for evaluating residential adult education programs, it was tested on the Land Title School program. This chapter contained the results of testing illuminative evaluation methodology on the selected site. Evidence from the testing was compared with the standards set by the pre-specified criteria contained in Chapter III. Table 6 summarizes the criteria and standards that were used to determine the suitability of illuminative evaluation methodology. The results indicated that: (1) the data were confirmed through triangulation and continuous observation; (2) the data collected by various means were credible, important and relevant; and (3) the data collected could not have been obtained by less costly methods. Thus, the evidence of suitability satisfied the standards set by this study.

Table 6
Criteria and Standards Used for Determining Suitability of Illuminative Evaluation Methodology

Criterion: Technical Adequacy
    Standard: Objectivity and Validity
    Technique: Triangulation provides evidence of objectivity and validity by cross-checking qualitative and quantitative data, by aiding interpretation of quantitative data, and by comparing pre-post questionnaire results. Continuous observation provides evidence of validity through repeated observation; it also provides information unattainable by any other means, i.e., the mood or development of the group.

Criterion: Utility
    Standard: Relevance and Importance
    Technique: Written and oral reports. These were used to provide information for decision making, for revision of mini-session material, and for providing participants with evidence that the information they generated was used.

    Standard: Credibility
    Technique: This was ensured by involvement. Advisory committee members were involved in decision-making; participants evaluated the course, giving detailed comments and suggestions regarding changes.
    Standard: Flexibility
    Technique: Flexibility of this methodology must be maintained while producing technically adequate and useful data. Flexibility was demonstrated by changes made in data collection techniques, questionnaire development, types of data used and program changes.

Criterion: Efficiency
    Standard: Cost/benefit
    Technique: Time and effort invested by various people during this study; comparison of the costs of analyzing qualitative versus quantitative data; cost of data collection compared to the utility of the data as determined by the decision maker.

CHAPTER VI
SUMMARY, CONCLUSIONS, IMPLICATIONS AND RECOMMENDATIONS

Summary

For many years, adult educators have been interested in evaluating their programs. Until recently, most formal educational evaluation studies have used the classical paradigm, which derives its methodology from experimental psychology. However, there has been increasing resistance to evaluations of this type (Parlett & Hamilton, 1976; Smith, 1976; Stake, 1978), and a movement to use an alternative paradigm related to social anthropology has emerged. This alternative, the naturalistic paradigm, requires a fundamentally different evaluation methodology from that used with the classical paradigm. A number of evaluation models have emerged from the movement. These models have close philosophic and operational ties with the naturalistic paradigm. Their emergence at this time argues strongly for the utility of naturalistic inquiry for the field of education, and helps make the case that naturalistic inquiry should be investigated as an alternative methodology.

Illuminative evaluation was selected for this study from the emergent models because it is relatively new and based on the naturalistic paradigm. It is a dynamic evaluation process which is not tied to a single treatment or to predetermined goals or outcomes, but rather focuses on the actual operations of a program over a period of time (Patton, 1978). This is extremely important in program evaluation, for innovative programs are often changed as planners experiment and change their priorities.

Residential programs are particularly suitable for testing the value of the illuminative evaluation methodology because this type of program has some unique characteristics that do not exist in other program formats. For example, residential programs differ from most traditional types of programs because the participants are temporarily removed from their ongoing responsibilities. This makes it possible for the investigator to have continuous contact with the participants. Continuous contact is important for a methodology that relies on fieldwork techniques.

The Land Title School of the Justice Institute of British Columbia was identified as an appropriate residential program for testing illuminative evaluation methodology because the methodology's underlying assumptions (see Table 1) fit the evaluation needs of the Land Title School program. The methodology:

(1) allows for the study of open, changing systems and emergent problems;
(2) encourages the representation of multiple viewpoints and value perspectives;
(3) focuses on program activities and issues rather than outcomes;
(4) provides a means of studying spontaneous events, situations, and crises;
(5) is sensitive to the context and setting.

To determine the suitability of this methodology, evidence collected during the study was compared with the standards set by the pre-specified criteria: technical adequacy, utility and efficiency.
It was felt that, for the purpose of this study, the illuminative evaluation methodology should produce information that was technically sound, useful to some audience, and worth more to that audience than it costs (Grotelueschen, 1980).

Conclusions

Evidence of the degree to which illuminative evaluation met the technical adequacy, utility and efficiency criteria was collected during the Land Title School program. Techniques such as interviews, questionnaires, and observations were used to collect the evidence. The evidence was analyzed using qualitative techniques to determine whether the methodology met the standards set by the criteria. The remainder of this section provides descriptive interpretations of the evidence collected on each criterion.

Technical Adequacy

Two major criticisms of illuminative evaluation and other naturalistic inquiries appear in the literature: (1) personal interpretations cannot be objective; and (2) descriptive studies cannot be valid.

In response to the criticism of objectivity: any evaluation study—whether it conforms to the classical or the naturalistic paradigm—requires skilled human judgment. Human judgment is necessary at every stage of any study, whether it be descriptive or experimental (Guba & Lincoln, 1981; Patton, 1978). For example, it is used in choosing samples, constructing questionnaires, administering questionnaires, choosing statistical treatment, interpreting statistical data and presenting findings.

Responses to the second criticism, that of validity, are presented in the following quotes:

    a methodology, whether descriptive or inferential, experimental or non-experimental, can seldom obtain valid results unless closely associated with substantive knowledge of the process being studied. (Bennett & Lumsdaine, 1975, p. 20)

    Evaluation data are never clearcut and absolute: studies are always flawed in some way, and there are always questions of reliability and validity. Error-free instruments do not and cannot exist in the measurement of complex human, social, behavioral, and psychological phenomena. (Patton, 1978, p. 180)

Although the process of collecting data through various techniques is time consuming and expensive, it is worthwhile. This process helps ensure the technical adequacy of the findings. Once information has been confirmed by two or more techniques, the uncertainty of its interpretation is greatly reduced. Each technique contains its bit of error, perhaps sufficient to cause rejection if that were all that was available. But when a series of bits of evidence are triangulated and all evidence tends in the same direction, that direction assumes greater believability (Guba, 1968; Webb et al., 1966).

In this study a certain level of objectivity and validity was attained by cross-checking and confirming data collected from historical documents, observations, questionnaires and interviews. The data collected using these techniques were validated and confirmed through triangulation and continuous observation procedures. No study can be completely objective and valid, as pointed out in the discussion above. However, by using a variety of techniques to cross-check, confirm and validate findings, criticisms of "lack of objectivity and validity" are reduced. From the evidence presented in Chapter V, this study met the standards of objectivity and validity.

Utility

The evidence presented in Chapter V demonstrated that the illuminative evaluation methodology could be flexible yet produce useful results.
Since programs are not static, evaluation methodologies must be flexible if they are to be useful. Therefore, a model such as illuminative evaluation that unfolds through successive phases and strategies is more useful than a model based on uniformity and rigidity. In this study, the evaluation design was not formulated in advance but continuously evolved and was modified as the evaluator interacted with participants and decision-makers. Data collection instruments and techniques also evolved and were refined as the program progressed. Due to the flexibility of the model, these changes did not affect the validity or objectivity of the data.

The benefits of collecting both qualitative and quantitative data from a variety of perspectives are presented in Chapter V. Besides confirming and validating the quantitative data, the advantage of obtaining qualitative data from many perspectives is that the investigator can build on emergent insights by collecting descriptive information that gives a useful, meaningful representation of what happened. A comprehensive description of what happened greatly aids judgment, decision-making, and utility. In contrast, quantitative data are easy to code and analyze but have a number of weaknesses. It is doubtful that much can be learned by asking participants only to rate their perceptions. Scales like 5-4-3-2-1 tend to obscure facts of feeling, not to clarify them. Some participants use the scale backwards; others make a policy of "never giving a 5 or a 1". In addition, simply knowing that outcomes are high, low or different does not reveal much about what to do about them.

The results of the evaluation met the utility standards of relevance, importance and credibility. The results obtained from the evaluations were credible to the investigator, program coordinator and resource people. Because the results were credible, they were utilized. Meeting the pre-specified utility standards required the involvement of a wide range of people--participants, program coordinator, Advisory Council members and the investigator. The investigator spent considerable time establishing rapport and coordinating the involvement and feedback process. Although those processes were time consuming and therefore costly, they were important for this study for three reasons.

First, the investigator established rapport and involved participants in the evaluation process. For example, the credibility and technical adequacy of the findings were significantly enhanced by checking out information with the participants. In addition, with participant involvement in the evaluation, the investigator could be reasonably sure that the findings reflected the insights and judgments of the group.

Second, coordinator and Council member involvement helped ensure the relevance and credibility of the data collected for them. The investigator discussed the findings with the coordinator and Advisory Council members, helped them draw implications and recommendations for action from the data, and monitored the results of modifications made on the basis of the evaluation.

Third, the involvement of the coordinator and Council members ensured the utilization of the data collected. Utilization of information collected from evaluations is a crucial indicator of the value of evaluation. "The basic rationale for evaluation is that it provides information for action... unless it gains serious hearing when program decisions are made, it fails in its major purpose" (Weiss, 1972, p. 318).
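As a concrete illustration of the quantitative side described above, the sketch below tabulates item-by-item mean scores on 7-point questionnaire items and compares pre- and post-course means, the kind of simple cross-check used alongside the qualitative data. It is only a minimal sketch: the item labels and ratings are hypothetical, not data from the Land Title School study.

    # Minimal sketch (hypothetical data): mean scores on 7-point questionnaire
    # items, with pre- and post-course means compared as a simple cross-check.

    def mean(ratings):
        """Average of a list of ratings on a 1-7 scale."""
        return sum(ratings) / len(ratings)

    # Hypothetical responses; each list holds one rating per participant.
    pre_course = {
        "relevance to job": [3, 4, 2, 5, 3, 4],
        "expected usefulness": [4, 5, 4, 3, 5, 4],
    }
    post_course = {
        "relevance to job": [6, 5, 6, 7, 5, 6],
        "expected usefulness": [5, 6, 6, 5, 7, 6],
    }

    for item in pre_course:
        before = mean(pre_course[item])
        after = mean(post_course[item])
        print(f"{item}: pre {before:.1f}, post {after:.1f}, shift {after - before:+.1f}")

A table of this kind shows only that ratings shifted; as the discussion above notes, the open-ended comments were needed to explain why.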
Efficiency

Illuminative evaluation studies are costly, because they involve investigators for seemingly inordinate durations at considerable financial expense to the program. This study was no different. Much time was spent in the developmental stages of this study. The investigator needed to initiate the sustaining relationships that made the evaluation possible and credible--establishing rapport, and coordinating the involvement and feedback process. In addition, considerable time was spent developing a specific evaluation plan because the illuminative methodology lacked procedural guidelines to facilitate the evaluation activities. This is a major weakness of the methodology and a reason for high developmental costs.

Although the development and design costs were high during the initial cycles of the methodology, they became lower in subsequent cycles for two reasons. One, the investigator did not have to spend as much time as in the initial developmental stage. Two, the investigator became more proficient in carrying out evaluation activities each time the program cycled through the illuminative stages. Thus, the illuminative evaluation methodology appears to be more efficient for on-going residential programs than for "one-shot" programs.

This study produced mainly qualitative data. Qualitative data are difficult to analyze and require more time for analysis than quantitative data. Qualitative data can also be confusing. But even under these circumstances, if careful judgment is exercised, the data retain their value for decision making. For example, occasionally a "balancing phenomenon" occurs in which comments contradict each other in almost equal numbers. Seventeen people will say that the program moved too slowly; eighteen will say that it was too rapid. What does this really tell an evaluator? Probably not that the program was either too slow or too fast, but that the design needs to provide more time for individualization. It might also tell the resource person that there is too little ongoing feedback during the session.

The main weakness of qualitative data is that the resultant descriptions are often long and involved. A decision-maker or program planner does not always have time to read a long report in preparation for a decision. Thus, the investigator has to be selective in the information presented. The selection process is a potential source of bias which can harm the validity of results.

The cost of the methodology must be compared with its benefits. The benefits of using qualitative data presented in Chapter V showed the information was useful to the program coordinator, Advisory Council members, instructors and participants. Encouraging the participants to give their opinions and feelings made them feel part of the planning process. To enhance this, they were given copies of the evaluation reports, so they could see both their contribution and the program in totality. The qualitative information was relevant and useful to the resource people, for it enabled them to improve their sessions by providing adequate information on which to act. The Advisory Council members and program coordinator found the data relevant, useful, and important for both decision-making and program planning. Thus, the benefits outweighed the costs of the evaluation methodology, so its efficiency was adequate for this program. Although the methodology is not highly efficient, the loss in efficiency is balanced by gains in the other two criteria.
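A small sketch may make the "balancing phenomenon" just described more concrete. The comment labels and counts below are hypothetical; the rule of thumb--roughly equal numbers of contradictory comments point toward individualization rather than a single change of pace--is the interpretation discussed above.

    # Hypothetical tally of contradictory pacing comments, illustrating the
    # "balancing phenomenon": near-equal opposing counts suggest allowing for
    # individual pacing rather than simply speeding up or slowing down.
    from collections import Counter

    comments = ["too slow"] * 17 + ["too fast"] * 18 + ["about right"] * 5
    tally = Counter(comments)
    print(dict(tally))

    slow, fast = tally["too slow"], tally["too fast"]
    if slow and fast and abs(slow - fast) <= 0.2 * (slow + fast):
        print("Opposing comments balance out: build in time for individualization.")
    elif slow > fast:
        print("Comments lean toward 'too slow': consider tightening the pace.")
    else:
        print("Comments lean toward 'too fast': consider slowing down.")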
In considering the importance of the criteria, utility is a far more important factor in evaluation than efficiency, for far too many evaluation studies gather dust. If an evaluation is not useful and utilized, it is inefficient regardless of its actual cost.

Based on the interpretation of the findings of this study, the illuminative evaluation methodology is judged suitable for evaluating residential adult education programs, for the evidence collected met the standards of technical adequacy, utility and efficiency.

An important question remains for investigators: under what conditions does an evaluation methodology based on the naturalistic paradigm provide the best guidance for evaluations? Neither the literature nor this study has shown that a methodology based on one paradigm—classical or naturalistic—is intrinsically better than one based on the other. The final choice between methodologies in "any inquiry or evaluation ought to be made on the basis of the best fit between assumptions...and the phenomenon being studied" (Guba & Lincoln, 1981, p. 56).

Implications and Recommendations

Implications

The findings of this study have the following implications for researchers and practitioners desiring to use the illuminative evaluation methodology as a means of determining the value of a program.

(1) The degree of fit between the assumptions of the illuminative evaluation methodology and the program's evaluation needs is an important consideration in choosing this methodology. Illuminative evaluation methodology is particularly suitable for evaluating adult education programs which have complex goals that are difficult to define precisely and thus defy quantitative measurement. Because the initial developmental costs are high, illuminative evaluation methodology is more efficient when used with on-going programs.

(2) The benefits of using this type of evaluation increase if decision-makers are involved, since the more decision-makers are involved with a project, the more they are apt to utilize the information.

(3) The participants should also be actively involved to maximize the utility of this type of evaluation. By involving them, the investigator can be reasonably sure that the findings will reflect the insights and judgments of the group.

(4) Investigators desiring to use this methodology need to develop good interview and observation skills, for these skills are critical to data collection.

(5) Investigators need to become aware of their own biases. They should try to be understanding and open to differing points of view, while at the same time avoiding collusion or over-involvement, which tend to create biases.

Recommendations

Based on the study completed and reported here, the following recommendations are presented for those desiring to do further research or those desiring to employ this methodology.

(1) Further work needs to be done to develop specific tasks, questions, activities and/or procedures which could guide implementation of each stage of the illuminative evaluation methodology. Because the illuminative evaluation methodology is weak on specifying evaluation activities, it is open to abuse, and this weakness also diminishes the possibility of generalizing results. Procedural activities such as those presented in Table 2 could be used and/or further refined.

(2) Educational programs which prepare individuals to become adult educators could be expanded to include the naturalistic approach to evaluation.
This approach could be assimilated into the current adult education curriculum as an alternative to the classical approach.

(3) Further studies should be done to build up the understanding of the illuminative evaluation methodology. If evaluation studies using illuminative evaluation methodology were more accessible and/or published more frequently, investigators would be able to determine the suitability of this methodology for other types of programs.

(4) Further evaluations of residential programs using illuminative evaluation methodology need to be done in order to contribute to the understanding of this methodology for residential program formats. Further studies will aid generalization of results, for there are obvious limitations to this study such as evaluator bias, limited generalizability, and small n's. In addition, the investigator played the dual role of program coordinator and evaluator.

(5) Further studies could be done to determine the suitability of illuminative evaluation methodology for evaluating other types of adult education programs. Can elements of this methodology be applied to other program formats? What modifications need to be made to the methodology?

The findings in this study challenge adult educators to become more creative and resourceful in their approach to evaluation. They also present a model of one direction this creativity may take.

REFERENCES

Apps, J. Problems in continuing education. New York: McGraw-Hill, 1979.

Bale, R.L. & Molitor, J.A. The Mountain-Plains Education and Economic Development Program: Success or failure?, 1978. (ERIC Document Reproduction Service No. ED 153 073).

Bass, B.M. & Vaughan, J.A. Training in industry: The management of learning. Belmont, CA: Wadsworth Publishing Co., 1966.

Beckhard, R. How to plan and conduct workshops and conferences. New York: Association Press, 1956.

Bennett, C. Up the hierarchy. Journal of Extension, 1975, 13 (March-April), 7-12.

Bennett, C.A. & Lumsdaine, A.A. Social program evaluation: Definitions and issues. In Bennett, C.A. & Lumsdaine, A.A. (eds.) Evaluation and experiment: Some critical issues in assessing social programs. New York: Academic Press, 1975.

Bernstein, I.N. & Freeman, H.E. Academic and entrepreneurial research. New York: Russell Sage Foundation, 1975.

Blackwell, B.L. & Bolman, W.M. The principles and problems of evaluation. Community Mental Health Journal, 1977, 13(2), 175-186.

Blaney, J.P. & McKie, D. Knowledge of conference objectives and effect on learning. Adult Education, 1969, 19(2), 98-105.

Bogdan, R. & Taylor, S.J. Introduction to qualitative research methods. New York: John Wiley & Sons, 1975.

Bunch, M.B. Summative evaluation of Mountain-Plains, Volume II, 1976. (ERIC Document Reproduction Service No. ED 150 398).

Bringle, R.R. Effects of human relations laboratory training on flexibility and attitudes toward supervision. (Doctoral dissertation, Oregon State University, 1967).

Campbell, D.T. Assessing the impact of planned social change. In Lyons, G.M. (ed.) Social research and public policies. Hanover, N.H.: Public Affairs Center, Dartmouth College, 1975.

Campbell, D.T. Degrees of freedom and the case study. Comparative Political Studies, 1975, (2), 178-193.

Campbell, D.T. & Stanley, J.C. Experimental and quasi-experimental designs. Chicago: Rand McNally, 1963.

Conrad, R.W. Summative evaluation of Mountain-Plains, Volume III, 1976. (ERIC Document Reproduction Service No. ED 150 399).
Cox, W.F., Jr. Development and implementation of a training program for educational research and developmental personnel, Final Report, 1974. (ERIC Document Reproduction Service No. ED 095 157).

Cronbach, L.J. Course improvement through evaluation. Teachers College Record, 1963, 64, 672-683.

Cronbach, L.J. (ed.) Toward reform of program evaluation. San Francisco: Jossey-Bass Publishers, 1980.

Davis, L. & McCallon, E. Planning, conducting, and evaluating workshops. Austin, Texas: Learning Concepts, 1974.

Deantonio, E. Evaluation of the community school relations workshop, 1971-1972, 1973. (ERIC Document Reproduction Service No. ED 089 147).

Densmore, M.L. An evaluative analysis of selected university conference programs conducted at Kellogg Center for Continuing Education, Michigan State University. (Doctoral dissertation, Michigan State University, 1965).

Denzin, N.K. The logic of naturalistic inquiry. Social Forces, 1971, (50), 166-182.

Devlin, L.E. The influence of directed instruction on learning in a conference setting. (Unpublished M.A. thesis, University of Chicago, 1966).

Dickinson, G. & Lamoureux, M.E. Evaluating educative temporary systems. Adult Education, 1975, 25(2), 81-89.

Edelbach, R.D. A study of the influence of residential and non-residential skill training programs on trainee self-esteem, occupational identification, and work motivation. (Doctoral dissertation, Rutgers University, The State University of New Jersey, 1973).

Edwards, W. & Guttentag, M. Experiments and evaluations: A re-examination. In Bennett, C.A. & Lumsdaine, A.A. (eds.) Evaluation and experiment: Some critical issues in assessing social programs. New York: Academic Press, 1975.

Eisner, E.W. The perceptive eye: Toward the reformation of educational evaluation. Stanford: Stanford Evaluation Consortium, December 1975.

Erickson, F. Some approaches to inquiry in school-community ethnography. Anthropology and Education, 1977, (8).

Fienberg, S.E. The collection and analysis of ethnographic data in educational research. Anthropology and Education, 1977, (8), 50-57.

Freeman, H.E. & Solomon, M.A. Evaluation studies review annual, volume 6. Beverly Hills: Sage Publications, 1981.

Garside, D. Short-term residential colleges: Their origins and value. Studies in Adult Education, 1969, 1(1), 2-30.

George, C. & Green, D. Summary of outcome evaluation report for preparing educational training consultants: Skills training (PETC-I), 1976. (ERIC Document Reproduction Service No. ED 134 627).

Grotelueschen, A.D. Program evaluation. In Knox, A.B. & Associates. Developing, administering and evaluating adult education. San Francisco: Jossey-Bass Publishers, 1980.

Guba, E.G. Toward a methodology of naturalistic inquiry in educational evaluation. CSE Monograph Series in Evaluation. Los Angeles: Center for the Study of Evaluation, 1978.

Guba, E.G. & Lincoln, Y.S. Effective evaluation. San Francisco: Jossey-Bass Publishers, 1981.

Guttentag, M. & Saar, S. Evaluation studies review annual, volume 2. Beverly Hills: Sage Publications, 1977.

Haller, E.J. Cost analysis for educational program evaluation. In Popham, W.J. Educational evaluation: Current applications. Berkeley: McCutchan Publishers, 1974.

Halverson, M.B. Facing the realities: Some conference planning principles. Adult Leadership, 1974, 2(2), 47-49.

Halverson, M.B. & Thiesse, J. Practical evaluation and the regional conference approach. Lifelong Learning: The Adult Years, 1979, (3), 4-6.

Hamblin, A.C. Evaluation and control of training. London: McGraw-Hill, 1974.
Hamilton, D. Making sense of curriculum evaluation. In Shulman, L. (ed.) Review of Research in Education, Vol. 5. Itasca, Ill.: F.E. Peacock, 1977.

Havelock, R.G. & Havelock, M.C. Preparing knowledge linking change agents in education: A materials and training development project. Final Report, 1971. (ERIC Document Reproduction Service No. ED 056 257).

Houle, C.O. Residential continuing education. Notes and Essays on Education for Adults #70. Publications in Continuing Education. New York: Syracuse University Publications, 1971.

House, E. The conscience of educational evaluation. Teachers College Record, 1972, 73(3), 405-414.

House, E. The logic of evaluative agreement. Los Angeles: Center for the Study of Evaluation, UCLA, 1977.

House, E. Evaluating with validity. Beverly Hills: Sage Publications, 1980.

Jenkins, R.L., III. Summative evaluation of the Mountain-Plains community development component: An affective evaluation report, 1976. (ERIC Document Reproduction Service No. ED 150 369).

Katz, D.S. & Morgan, R.L. A holistic strategy for the formative evaluation of educational programs. In Borich, G.D. (ed.) Evaluating educational programs and products. Englewood Cliffs, N.J.: Educational Technology Publications, 1974.

Kirkpatrick, D.L. Evaluation of training. In Craig, R.L. & Bittel, L.R. (eds.) Training and development handbook. New York: McGraw-Hill, 1967.

Kuhn, T. The structure of scientific revolutions. Chicago: University of Chicago Press, 1970.

Lacognata, A.A. A comparison of the effectiveness of adult residential and non-residential learning situations. Chicago: Center for the Study of Liberal Education for Adults, 1961.

Lippitt, R. Studies in experimentally created autocratic and democratic groups. University of Iowa Studies in Child Welfare, 1940, 16(3), 45-198.

Lutz, T.W. & Ramsey, M.A. The use of anthropological field methods in education. Educational Researcher, 1974, (3), 5-9.

Miles, W.R. Illuminative evaluation for formative decision-making. Evaluation Review, 1981, 5(4), 479-499.

Miller, H.L. Teaching and learning in adult education. New York: Macmillan Co., 1964.

Milozarek, G.J. Field test and outcome milestone report for preparing educational training consultants: Consulting (PETC-II), 1976. (ERIC Document Reproduction Service No. ED 126 157).

Newcomb, T. Personality and social change. New York: Holt, Rinehart, & Winston, 1943.

Parlett, M. & Hamilton, D. Evaluation as illumination: A new approach to the study of innovatory programs. In Glass, G.V. (ed.) Evaluation studies review annual, Volume 1. Beverly Hills, Ca.: Sage, 1976.

Parlett, M. & Hamilton, D. Evaluation as illumination: A new approach to the study of innovatory programmes. In Hamilton, D. et al. (eds.) Beyond the numbers game. London: Macmillan, 1977.

Parlett, M. & King, J.G. Concentrated study: A pedagogic innovation observed. London: Society for Research into Higher Education, 1971.

Pattison, R.M. (ed.) Counselling educationally disadvantaged adults. Indianapolis: Indiana State Department of Public Instruction, Division of Adult Education, 1968. (ERIC Document Reproduction Service No. ED 023 015).

Patton, M.Q. Utilization-focused evaluation. Beverly Hills: Sage, 1978.

Peterson, K.P. Influence of evaluative conditions and pre-conference contact on participants' evaluation of a conference. (Master's thesis, University of British Columbia, 1971).

Posavac, E.J. & Carey, R.G. Program evaluation: Methods and case studies. Englewood Cliffs: Prentice-Hall, 1980.
Rippey, R.M. (ed.) Studies in transactional evaluation. Berkeley, Ca.: McCutchan, 1973.

Rist, R.C. Ethnographic techniques and the study of an urban school. Urban Education, 1975, (10), 86-108.

Roberts, E.R. & Holmes, A. The professional development series for work experience education: Evaluative report, 1971. (ERIC Document Reproduction Service No. ED 124 756).

Schacht, R.H. Weekend learning in the U.S.A. Notes and Essays on Education for Adults #29. Chicago: Center for the Study of Liberal Education for Adults, 1960.

Scriven, M. The methodology of evaluation. In Stake, R.E. (ed.) Curriculum evaluation. AERA Monograph Series on Curriculum Evaluation (Vol. 1). Chicago: Rand McNally, 1967.

Scriven, M. Objectivity and subjectivity in educational research. In Thomas, L.G. (ed.) Philosophical redirection of educational research. 71st Yearbook, Part I, National Society for the Study of Education. Chicago: The University of Chicago Press, 1972.

Scruggs, J.A. Florida migratory child compensatory program state conference (September 1975): Evaluation report, 1976. (ERIC Document Reproduction Service No. ED 148 507).

Smallegan, M. A comparison of two training formats for persons with varying interpersonal needs. Adult Education, 1971, 21(3), 166-176.

Smith, M.L.; Gabriel, R.; Schott, J. & Podia, W.L. Evaluation of the effects of Outward Bound. In Glass, G.V. (ed.) Evaluation studies review annual, Volume 1. Beverly Hills: Sage, 1976.

Stake, R.E. The countenance of educational evaluation. Teachers College Record, 1967, 68, 523-540.

Stake, R.E. Evaluation design, instrumentation, data collection, and analysis of data. In Worthen, B.R. & Sanders, J.R. Educational evaluation: Theory and practice. Worthington, Ohio: Charles A. Jones Publishing Co., 1973.

Stake, R.E. Evaluating the arts in education: A responsive approach. Columbus, OH: Merrill, 1975.

Stake, R.E. The case study method in social inquiry. Educational Researcher, 1978, 7 (Feb.), 5-8.

Stake, R.E.; Brown, C.; Hoke, G.; Maxwell, G.; & Friedman, J. Evaluating a regional environmental learning system. Urbana: Center for Instructional Research and Curriculum Evaluation, 1979.

Steele, S.M. Program evaluation: A broad definition. Journal of Extension, 1970 (summer), 5-17.

Stewart, C.W. A study of the results of a program of continuing education for Protestant clergy. Bloomfield Hills, Michigan: Institute for Pastoral Studies, 1965. (ERIC Document Reproduction Service No. ED 021 190).

Stufflebeam, D.L. The use and abuse of evaluation in Title III. Theory into Practice, 1967, 6 (June), 126-133.

Stufflebeam, D.L. & Webster, W.J. An analysis of alternative approaches to evaluation. In Freeman, H.E. & Solomon, M.A. (eds.) Evaluation studies review annual, volume 6. Beverly Hills: Sage Publications, 1981.

Sutton, E.W. Analysis of research on selected aspects of adult education. (Doctoral dissertation, Florida State University, 1969).

Torrence, P.E. The Tuskegee experiment in adult training. Adult Leadership, 1966, 15(3), 83-84; 96.

Touzel, T.J. The stability of attitudes in a longitudinal study resulting from a summer conference on individualized instruction. (Doctoral dissertation, The University of Tennessee, 1975).

Valla, D.C. An evaluative study of three workshops for the aging in nine planning districts of Virginia. (Doctoral dissertation, Virginia Polytechnic Institute and State University, 1975).

Warr, P.B.; Bird, M.W. & Rackham, N. Evaluation of management training. London: Gower Press, 1970.

Webb, et al. Unobtrusive measures. Chicago: Rand McNally & Co., 1966.
Weiss, C.H. Evaluating educational and social action programs: A treefull of owls. In Weiss, C. (ed.) Evaluating action programs. Boston: Allyn & Bacon, 1972.

Wientge, K.M. & Lahr, J.K. The influence of social climate on adult achievement: The impact of a residential experience on learning and attitude change of adult students enrolled in an evening credit class. St. Louis: Washington University, 1966. (ERIC Document Reproduction Service No. ED 011 371).

Wohllenben, A. The pattern of anxiety in residential conferences. (Unpublished M.A. thesis, University of Chicago, 1965).

Wolf, R.L. The use of judicial evaluation methods in the formulation of educational policy. Educational Evaluation and Policy Analysis, 1979, 1(1), 19-28.

Worthen, B.R. A look at the mosaic of educational evaluation and accountability. Portland: Northwest Regional Educational Laboratory, 1974.

REFERENCE NOTES

Note 1. Rusnell, D. Decisions in the design of evaluation. Unpublished manuscript, 1978.

Note 2. Guba, E.G. Toward a methodology of naturalistic inquiry in educational evaluation. Unpublished manuscript, February 1, 1978.

Note 3. Campbell, D.T. Qualitative knowing in action research. Unpublished manuscript, 1979.

Note 4. Hasman, R.M. Land Title School Evaluation Report. Unpublished manuscript, 1980.

APPENDIX A
Expectations Questionnaire

LAND TITLE SCHOOL

Please give us your frank reactions and opinions; they will help us evaluate this course and improve future programs. All information is confidential and will be used only to improve future programs. We would be grateful if you would use an ID number of your choice on this form as it will help us to analyze the results. Please use the same ID number on all forms.

ID #

Please give your opinion of the training by circling the appropriate number in each of the opinion scales below. Each scale runs from 1 to 7 (1 = extremely, 2 = very, 3 = fairly, 4 = in between, 5 = fairly, 6 = very, 7 = extremely).

complicated   1 2 3 4 5 6 7   simple
unpractical   1 2 3 4 5 6 7   practical
accurate      1 2 3 4 5 6 7   inaccurate
dull          1 2 3 4 5 6 7   interesting
difficult     1 2 3 4 5 6 7   easy
helpful       1 2 3 4 5 6 7   unhelpful
fast          1 2 3 4 5 6 7   slow
important     1 2 3 4 5 6 7   unimportant

How useful do you think this training will be?
useless   1 2 3 4 5 6 7   useful

How enjoyable do you think this training will be?
didn't enjoy it very much   1 2 3 4 5 6 7   enjoyed it very much

LAND TITLE SCHOOL

Please give us your frank reactions and opinions; they will help us evaluate this course and improve future courses. All information is confidential and will be used only to improve future courses.

ID #

Based on your experience of the past 2 weeks, please give your opinion of the Land Title School by circling the appropriate number in each of the opinion scales below (1 = extremely, 2 = very, 3 = fairly, 4 = in between, 5 = fairly, 6 = very, 7 = extremely).

LAND TITLE SCHOOL   too short 1 2 3 4 5 6 7 too long
complicated   1 2 3 4 5 6 7   simple
impractical   1 2 3 4 5 6 7   practical
dull          1 2 3 4 5 6 7   interesting
difficult     1 2 3 4 5 6 7   easy
helpful       1 2 3 4 5 6 7   not helpful
fast          1 2 3 4 5 6 7   slow
important     1 2 3 4 5 6 7   unimportant

How useful was this training program?
useless   1 2 3 4 5 6 7   useful

How enjoyable was this training program?
didn't enjoy it very much   1 2 3 4 5 6 7   enjoyed it very much

APPENDIX B
Mini-Session Questionnaire

Name of Session:                ID #

Please rate this session on the following items by placing a circle around the appropriate number.

Example: I found this session to be:
1 2 3 4 5 6 7 (1 = extremely short, 4 = in between, 7 = extremely long)

1. Enjoyment of session
   1 2 3 4 5 6 7 (Didn't enjoy it very much / Enjoyed it very much)

2. Amount of new information picked up during session
   1 2 3 4 5 6 7 (Taught me little I didn't know / Taught me a lot)

3. Relevance of session to own job
   1 2 3 4 5 6 7 (Not very relevant / Very relevant)

4. Length of session
   1 2 3 4 5 6 7 (Too long / Too short)

5. Level of presentation
   1 2 3 4 5 6 7 (Complicated / Simple)

Comments & Suggestions:

APPENDIX C
Final Questionnaire

ID #

LAND TITLE SCHOOL
EVALUATION OF TOTAL PROGRAM

I have found this program:

chaotic                  1 2 3 4 5 6 7   well-ordered
unstimulating            1 2 3 4 5 6 7   stimulating
unimportant              1 2 3 4 5 6 7   important
uninteresting            1 2 3 4 5 6 7   interesting
I learned nothing        1 2 3 4 5 6 7   I learned a lot
not relevant to my job   1 2 3 4 5 6 7   relevant to my job

1. Which of your expectations of this program were fulfilled?

2. Which of your expectations of this program were unfulfilled?

3. What information or skill gained through this program is most valuable to you?

Do you feel this program will affect your work? Yes ___  No ___
How much effect? 1 2 3 4 5 6 7 (little effect / so-so / much effect)
Please explain your answer:

What were the major strengths of this program?

What were the major weaknesses of this program?

Did you receive enough information on the content of this program before coming? Yes ___ ; No ___ . If no, would it have been helpful to have this information? Yes ___ ; No ___ .

What suggestions do you have for future programs?
a. Number of sessions
b. Length of each session
c. Subjects to be covered
d. Changes you would make
e. Other

Based on your experiences of the past two weeks, would you come to the next level course? Yes ___ ; No ___ . Comment:

APPENDIX D
Follow-Up Questionnaire

LAND TITLE SCHOOL FOLLOW-UP

Please give us your frank reactions and opinions; they will help us evaluate this program and improve future ones. All information is confidential and will be used only to improve future programs. We would be grateful if you would use the same ID number chosen during the course.

ID #

Please give your opinion of the training by circling the appropriate number in each of the opinion scales below (1 = extremely, 2 = very, 3 = fairly, 4 = in between, 5 = fairly, 6 = very, 7 = extremely).

complicated   1 2 3 4 5 6 7   simple
unpractical   1 2 3 4 5 6 7   practical
accurate      1 2 3 4 5 6 7   inaccurate
dull          1 2 3 4 5 6 7   interesting
difficult     1 2 3 4 5 6 7   easy
helpful       1 2 3 4 5 6 7   unhelpful
fast          1 2 3 4 5 6 7   slow
important     1 2 3 4 5 6 7   unimportant

How useful was this training?
useless   1 2 3 4 5 6 7   useful

How enjoyable was this training?
didn't enjoy it very much   1 2 3 4 5 6 7   enjoyed it very much

Has this program affected your work? YES ___ ; NO ___
How much effect: 1 2 3 4 5 6 7 (little effect / so-so / much effect)
Please explain your answer:

What information or skill gained through this program is most valuable to you?

Has your outlook toward your job changed? YES ___ ; NO ___
Please explain:

Do you have any suggestions for future courses?

APPENDIX E
Revised Questionnaires (Mini-session, Final, Follow-up, Expectations)

JUSTICE INSTITUTE OF B.C.
LAND TITLE SCHOOL

Name of session:

Please rate this session on the following items by placing a circle around the appropriate number. Please be candid in expressing your feelings, whether they are positive or negative. Note the definitions below.

Example: I found this session to be:
1 2 3 4 5 6 7 (1 = extremely short, 4 = in between, 7 = extremely long)

Definitions:
INTEREST: holds your attention; captures your imagination; stimulates.
NEW INFORMATION: hadn't heard it before; heard it before but didn't understand.
RELEVANCE: applicable; pertinent; appropriate.

1. INTEREST of session to me:
   1 2 3 4 5 6 7 (no interest / much interest)

2. NEW INFORMATION gained during session:
   1 2 3 4 5 6 7 (gained little new information / gained a lot of new information)

3. In terms of my job, I found the information from today's session to be:
   a) not useful / useful / highly useful
   b) previously NOT known / both old and new / previously known

Comments:

FINAL EVALUATION
LAND TITLE SCHOOL

Please reflect on your experiences of the past week when answering the following items. Make your comments very specific. Your comments will help us tremendously when we plan the next course.

Definitions:
INTEREST: holds your attention; captures your imagination; stimulates.
RELEVANCE: applicable; pertinent; appropriate.

1. How relevant do you now consider this entire course to your position?
   1 2 3 4 5 6 7 (not very relevant / very relevant)

2. Personally, how interesting was this course?
   1 2 3 4 5 6 7 (not very interesting / very interesting)

3. Did the content of the course agree with your original expectations?
   (very little / moderately / very much)
   Please explain your answer:

4. What information gained through this course do you feel will be most valuable to you?

5. How have you judged value (in question 4)? (Please check one)
   Most practical use ___  Most remembered ___  Most revealing ___  Most interesting ___  Other (please explain) ___

6. What were the major strengths of this entire course? Be specific.

7. What were the major weaknesses of this entire course? Be specific.

8. Please give any additional comments and/or suggestions.

LAND TITLE SCHOOL FOLLOW-UP

1. What is your present position?

2. How long have you held this position?

3. What course did you attend?

4. Have you shared with your fellow workers the handouts and information presented at the course? Yes ___  No ___
   If yes, under what setting (i.e., staff meetings, over coffee, etc.)?

5. How often have you used the handouts from the course?

6. Did the information presented enable you to solve problems or meet situations on your job which previously you had not been able to do on your own? Yes ___  No ___  Explain:

7. Did you find the information presented in the course useful in your daily work? Yes ___  No ___

8. If you were to reorganize the course, what would you change, leave the same, etc.? Explain.

Expectations Warm-Up

Names are important to all of us. It feels comforting to be addressed by name in a strange environment. Having one person introduce him/herself is fine, but there are few people who will remember even half the names mentioned. Using double-folded sheets of paper or old computer cards as desk-top name cards is more useful than the stick-on type of name tag. Have each person print (in bold letters) the name they want to be called on both sides of the card. Have each person put on the inside, "Only for you to see," the completion of these sentences:

- What I'd really like to do right now is...
- I hope this course won't be...
- What I would like to learn in this course includes...

While this information is confidential at this stage, you may ask volunteers later on in the session to share it with the class. It is a technique to help members focus on their expectations, their present feelings and their hopes. It might also convey the notion that you care about these feelings and are aware of their presence in the room.
