
A programmatic approach to post-occupancy evaluation. Cormier, Donald A. 1979.


A PROGRAMMATIC APPROACH TO POST-OCCUPANCY EVALUATION

by

DONALD A. CORMIER

A Thesis Submitted in Partial Fulfillment of the Requirements for the Degree of
MASTER OF ARCHITECTURE
IN THE FACULTY OF GRADUATE STUDIES
(School of Architecture)

APPROVED:

The University of British Columbia
Vancouver, British Columbia
October, 1979

(c) Donald A. Cormier, 1979

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the Head of my Department or by his representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of ________
The University of British Columbia
2075 Wesbrook Place
Vancouver, Canada V6T 1W5

ABSTRACT

There is a rapidly growing recognition of the need, by architects and others concerned with the quality of our building efforts, for a systematic post-occupancy evaluation process. The regular use of such a process is considered essential if we are to learn effectively from our building experiences. Only by gaining objective data on the results of building design, construction and use, and using this data in subsequent designs or to build an objective knowledge base for architecture, can the design and building process be complete. Closing the feedback loop by doing systematic evaluation is an essential step.

The building industry has very little experience with systematic building evaluation. Most evaluation which has been done by architects has been highly personal and subjective. This study reviews concepts and experiences from program evaluation as a guide for the development of systematic building evaluation. Program evaluation is a field of applied sociology which has been concentrating, over the last 20 years, on the development and use of systematic evaluation procedures to assess social programs.

There are similarities between social programs and building projects which suggest that evaluation ideas could be transferred. Both are undertaken in response to human goals or purposes, both are planned to have some effect on people and their activities, and both have specific outcomes and impacts. In this study, selected ideas from program evaluation have been combined with some lessons learned from past efforts at systematic building evaluation to form a programmatic post-occupancy evaluation process.

The study concludes that a programmatic post-occupancy evaluation process as proposed in this study could provide the basis for a regular building evaluation system. Program evaluation is a valuable source of experience which can and should be used to guide the development of systematic building evaluation. Given the recognized need for objective information about buildings and their use from outside and within the building professions, it is inevitable that a systematic evaluation process like the one presented in this study will soon come into use. Architects must take a positive, active role in this development or they will become followers rather than leaders in the establishment of an objective knowledge base for their profession.
CONTENTS

ABSTRACT
INTRODUCTION - STATEMENT OF THE PROBLEM
CHAPTER 1 - A REVIEW OF PROGRAM EVALUATION
  1.1 - Introduction and Background
  1.2 - Evaluative Research - The Theoretical Basis of Program Evaluation
  1.3 - The Definition and Scope of Program Evaluation
  1.4 - Basic Steps for Doing Program Evaluation
  1.5 - General Problems and Pitfalls
  1.6 - Summary and Implications
CHAPTER 2 - A REVIEW OF PAST EFFORTS AT BUILDING EVALUATION
  2.1 - Architectural Criticism - An Informal Evaluation Process
  2.2 - Early Attempts at Systematic Evaluation
  2.3 - The Performance Concept as a Basis for Evaluation
  2.4 - Building Evaluation: A Behavioural Science Perspective
  2.5 - Post-Occupancy Evaluation - Towards a Consolidated Perspective
  2.6 - Relating POE to the Building Delivery Process
  2.7 - A Summary of Building Evaluation Approaches
CHAPTER 3 - BUILDING PROGRAMMING: THE BASIS FOR SYSTEMATIC BUILDING EVALUATION
  3.1 - Building Programming
  3.2 - Stating Project Goals
  3.3 - Programming Problems
  3.4 - Summary and Implications
CHAPTER 4 - SYNTHESIS: A PROGRAMMATIC APPROACH TO POST-OCCUPANCY EVALUATION
  4.1 - A Conceptual Model
  4.2 - A Basic Evaluation Process
  4.3 - General Problems
CHAPTER 5 - CONCLUSION - OPPORTUNITY AND CHALLENGE

LIST OF FIGURES

Fig. 1.1 The interrelated functions of management
Fig. 1.2 Schematic representation of a program
Fig. 1.3 A systems/process evaluation typology
Fig. 1.4 Analysis of program evaluation steps
Fig. 1.5 Classical experimental model in program evaluation
Fig. 2.1 Conceptual model of the system of building and people
Fig. 2.2 Design and evaluation: two aspects of the same process
Fig. 2.3 The Design Cycle
Fig. 2.4 The Building Delivery Process
Fig. 2.5 Model 1: A non-collaborative cross-sectional study
Fig. 2.6 Model 2: A collaborative cross-sectional study
Fig. 2.7 Model 3: A collaborative cross-sectional and longitudinal study
Fig. 4.1 Conceptual framework for programmatic POE
Fig. 4.2 Data gathering domain

INTRODUCTION
STATEMENT OF THE PROBLEM

There is no systematic post-occupancy evaluation process regularly used in the building industry. Therefore the design, construction and operation of buildings continue largely without the benefit of knowledge from past building experiences. This raises many questions about the value of our buildings and our approach to their design. Are our building practices getting better or worse, and by how much? Whose interests are buildings serving, and to what extent? What direction should future design and construction development take? Questions like these can only be answered by evaluating the outcomes of our current building practices.

Michael Brill likens the building industry to the running of a candy store: "It has an almost flat learning curve because it has no way to evaluate its performance and few mechanisms to incorporate and diffuse new experience".(1) Robert Bechtel asserts that, "The requirement of evaluation is the most devastating criticism of current design practice since the assumption behind evaluation is that without adequate knowledge of what one has done in the past there is a serious question as to whether one knows what he is doing in the present."(2) Bechtel also says that "the largest amount of design that goes on today is a half process - a process without sufficient information to perform optimally and almost entirely without evaluation."(3)

1. Michael Brill, "Evaluating Buildings on a Performance Basis," Designing for Human Behavior: Architecture and the Behavioral Sciences, ed. Jon Lang, Charles Burnette, Walter Moleski and David Vachon (Community Development Series; Stroudsburg, Pennsylvania: Dowden, Hutchinson and Ross, Inc., 1974), p. 316.
2. Robert B. Bechtel, "Social Goals Through Design: A Half Process Made Whole", Paper delivered at American Institute of Planners Conference (Boston, Massachusetts, October, 1972), p. 2.
3. Ibid.

In presenting their view on a new approach to architectural research, Hillier and Leaman identify the development of the "monitoring function" as a key to establishing a mechanism for building architectural theory and answering basic questions about architecture. "We must develop adequate techniques for monitoring buildings on a sufficient scale and this means techniques for understanding the human use of buildings as well as their physical performance."(4) "The cycle from theory through integration in design, to building and monitoring is an essential part of the research strategy."(5) To them the important task is to link the cycle with theoretical research as it develops.

In the conclusion of a demonstration evaluation study, the Building Performance Research Unit team made the following observation: "the picture of the design process which emerged is quite different from the usually accepted one. The building and its environment, rather than being a product of one person or a single point in time, are produced by a wide range of influences, with politicians at one end and pupils (users) at the other. In order to improve the quality and value of buildings, changes in society at many levels are necessary. The architect is in the best position to initiate these changes (this may be questionable) but his impact will be small unless he can make the total consequences of the process clear to society. Continuous public and objective appraisal of his own and his profession's products gives him the most powerful weapon for doing this".(6)

4. Bill Hillier and Adrian Leaman, "A New Approach to Architectural Research", Royal Institute of British Architects Journal (December, 1972), p. 520.
5. Ibid., p. 519.
6. Thomas A. Markus, "Building Appraisal: St. Michael's Academy, Kilwinning," Architectural Journal (January, 1970), p. 48.

Clearly the need for systematic evaluation has been widely recognized, but the question of how to go about it remains largely unanswered. According to Kevin Green of the AIA Research Corporation, "Many disparate notions make up the current state of environmental evaluation," and about the only facet of the subject which is generally agreed upon is the "center core of enquiry."(7) There are two major actors involved in the development of evaluation - the architectural and applied social science communities - and they have differing views on what constitutes evaluation. Green says that what is needed is an evaluation system which "satisfies architects concerned for the integrity of their process and social scientists concerned for the veracity of their analysis."(8)

What seems to be needed in architecture is a process equivalent to that which has been a focus of concern of social program administrators and legislators for at least a decade: namely, how do we know the millions of dollars invested in a program are producing the results wanted when it was authorized, designed and executed?
This question entails what is known in applied social science and organizational management as "program evaluation." It might also be termed "post implementation evaluation" or . . . in architecture . . . post-occupancy evaluation.

7. Research and Design, ed. Kevin W. Green (Washington, D.C.: American Institute of Architects Research Corp., July, 1978), Vol. I, No. 3, p. 1.
8. Ibid.

Post-occupancy evaluation (POE) is a new subject area in architecture which brings together ideas from various attempts at systematic evaluation. POE is concerned with the evaluation of buildings and their use sometime after they are completed.

A major criticism of POEs done to date is that they do not provide information which is of direct relevance or use to designers and others involved in the building delivery process. There is a so-called "applicability gap". This was the theme of a recent EDRA conference.(9) Many of the evaluations now being done will continue to be of little value to designers because they are "unhinged" from the design process. "The evaluation of solutions without reference to the design process which generated them is seen to be a 'dead end'(10) . . . Understanding people's responses to spatial qualities and configurations without regard to or knowing the goals of the system and the activities to be carried out does not increase our capacity to design."(11)

In this study, a programmatic approach to POE will be developed and proposed as a means of establishing a regular evaluation process which takes the intentions of those involved in building design and use more directly into account. Since there is virtually no experience with this type of evaluation within the building industry, the study will draw upon the experiences of "program evaluation" for guidance.(12) Program evaluation is a field of applied sociology concerned with the development and use of evaluative research techniques to evaluate the outcomes of social programs using their stated goals as the basis for evaluation. By combining ideas from program evaluation with lessons from some past building evaluation attempts, a programmatic approach to POE will be developed and assessed.

9. In 1976 the Environmental Design Research Association held its 7th Annual Conference in Vancouver, British Columbia. The theme was "Beyond the Applicability Gap", which emphasizes the issue of putting behavior information and other types of new research information into the practice of design. Many basic problems were raised, discussed and reported in the Proceedings of EDRA 7.
10. Brill, p. 317.
11. Ibid.
12. The possibilities of drawing on program evaluation in applied social science were made evident from two sources: Dr. Richard Seaton, University of British Columbia, and David E. Campbell's article "Evaluation of the Built Environment: Lessons from Program Evaluation", in The Behavioral Basis of Design Book 1: Selected Papers, EDRA 7, ed. Peter Suedfeld and James A. Russell (Stroudsburg, Pennsylvania: Dowden, Hutchinson and Ross, Inc., 1976).

This paper argues that a programmatic approach to post-occupancy evaluation . . . that is, post-occupancy evaluation done with specific reference to project goals stated in a building program . . . could provide the basis for a regular, flexible and systematic building evaluation process.
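[Editorial illustration, not part of the original thesis.] To make the proposed mechanism concrete before the review begins, the following minimal sketch shows the core loop of a programmatic POE as this study describes it: goals stated in a building program are paired with measurement criteria, post-occupancy observations are compared against the stated targets, and the discrepancies are reported back as feedback. All goal statements, criterion names and numbers below are invented for illustration.

    # Illustrative sketch of the programmatic POE loop described above.
    # All goals, criteria, and measurements are hypothetical examples.

    def evaluate_against_program(goals, measurements):
        """Compare measured outcomes with the targets stated in the
        building program and report the discrepancy for each goal."""
        report = []
        for goal in goals:
            observed = measurements[goal["criterion"]]
            report.append({
                "goal": goal["statement"],
                "criterion": goal["criterion"],
                "target": goal["target"],
                "observed": observed,
                "discrepancy": observed - goal["target"],
            })
        return report

    # A building program reduced to explicit, measurable goals.
    program_goals = [
        {"statement": "Workstations receive adequate daylight",
         "criterion": "mean_daylight_lux", "target": 300.0},
        {"statement": "Occupants are satisfied with thermal comfort",
         "criterion": "comfort_rating_1_to_7", "target": 5.0},
    ]

    # Post-occupancy measurements gathered after move-in.
    observed = {"mean_daylight_lux": 270.0, "comfort_rating_1_to_7": 5.4}

    for line in evaluate_against_program(program_goals, observed):
        print(line)

The point of the sketch is simply that evaluation becomes routine once intentions are recorded in a measurable form; the chapters that follow examine how program evaluation has worked out the details of such a process.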
CHAPTER 1
A REVIEW OF PROGRAM EVALUATION

1.1 Introduction and Background

Program evaluation is a distinct area of social science research applied to the evaluation of social programs (health, education and welfare programs). The scientific method (wherein reserved expectations or intentions are to be subject to empirical test) is used in program evaluation to assess the outcomes of programs with respect to their intended goals. In this chapter a number of program evaluation concepts and issues are discussed which have relevance to the development of systematic building evaluation. They include: evaluative research - the theoretical basis of program evaluation; definitions of the scope of program evaluation; "action" program evaluation; steps in the program evaluation process; and some general problems and pitfalls.

The growth and development of program evaluation over the last 20 years has been closely associated with the adoption of more rational decision making processes in management, particularly by governments. It has been stimulated by a growing public demand for more accountability and responsiveness in the provision of expensive social programs. There has been a demand for evidence that programs are successful in ways that are meaningful to those who are supposed to be served. The history of program evaluation can be traced back a century, but its most dramatic growth commenced with the introduction of the Planning Programming and Budgeting System (PPBS) in the U.S. Department of Defense in 1961 and with its widespread adoption in other U.S. government departments in 1965.(1) The Canadian Government formally adopted the use of PPBS in 1969.(2)

1. Carol H. Weiss, Evaluation Research: Methods for Assessing Program Effectiveness, ed. Herbert Costner and Neil Smelser (Prentice-Hall Methods of Social Science Series; Englewood Cliffs, New Jersey: Prentice-Hall, Inc., 1972), p. 89.

The PPBS is a cyclic decision making process which begins with the establishment of objectives. Plans, programs, and budgets are then prepared and implemented to achieve these objectives, and the process is repeated periodically. The significance of the adoption of PPBS to program evaluation development was that the use of such a rational decision making process created a demand for objective evaluation information. Evaluation is used to determine how well objectives are being met, how effectively programs are running and how efficiently budgets are being used. The information resulting from evaluation provides a report on past performance for accountability purposes and for making modifications to existing programs or planning new ones. A distinction should be made regarding the evaluation of different levels of organizational activity. Missions are considered general activity groupings and their achievement is assessed against general objectives, while programs and projects are evaluated against more immediate objectives.

The application of program evaluation concepts to an increasingly broad range of program areas is becoming evident. The Canadian Government has recently issued a directive to all its departments instructing that program evaluation must be applied to all parts of their operations.(3) The same pressures from public demand for accountability and objective information demonstrating the satisfaction of user objectives are becoming evident in the construction industry.
Public pressure has been responsible for stopping many large construction projects in the recent past.(4) The concern and involvement by the public and users correspond to the growing demand on the part of those affected by building projects for objective information on the impact and performance of new buildings.

2. A.W. Johnson, "P.P.B. in Canada", Public Administration Review (January - February, 1973), p. 23.
3. Government of Canada, "Evaluation of Programs by Departments and Agencies." A Draft Report of Guidelines Prepared for Treasury Board Policy 77-47 on Program Evaluation in the Federal Public Service (Ottawa: Government of Canada, 1978).
4. Examples of projects blocked or delayed by public concern are the Vancouver airport runway expansion, Pickering airport, the Toronto downtown development freeze, and the Spadina expressway.

1.2 Evaluative Research - The Theoretical Basis of Program Evaluation

Edward Suchman, one of the early evaluative research theorists, describes evaluative research as follows: "We distinguish between evaluation as the general social process of making judgements of worth, regardless of the basis for such judgements, and evaluative research as referring to the use of scientific method for collecting data concerning the degree to which some specified activity achieves some desired effect."(5)

Evaluative research is distinct from other types of research but shares some methods with them and in fact incorporates other types of research within its scope. The following set of definitions by Albert Cherns(6), modified by this author, describes the basic distinctions:

1. Basic research concerns itself with resolving, illuminating or exemplifying theoretical problems identified as disciplinary needs.

2. Applied research deals with problems generated in the application field of a discipline and is not aimed at directly solving practical problems.

3. Operational research addresses itself to on-going problems within an organizational framework but does not employ or include experimental action. The strategies and methods which distinguish operational research are:
"(a) Observation of the 'mission' of the organization.
(b) Identification of its goals.
(c) Establishment of criteria of goal attainment.
(d) Devising measures for assessing performance against these criteria.
(e) Carrying out these measurements and comparing them with the goals.
(f) Completing the feedback loop by reporting on the discrepancy between goal and achievement".(7)

4. Evaluative research, though it often includes operational research as defined above, differs in that it has the added dimension of planned change which is introduced and observed as part of its research strategy. Planned change refers to the change that occurs as a result of a program. A certain situation exists prior to the implementation of a program. The program is introduced to effect some change, and evaluative research methods are used to determine the extent to which the planned change has been realized.

5. Edward Suchman, Evaluative Research (New York: Russell Sage Foundation, 1967), pp. 7-8.
6. Albert Cherns, "Social Research and Its Diffusion", Readings in Evaluation Research, ed. Francis G. Caro (New York: Russell Sage Foundation, 1971), pp. 64-65.
7. Ibid.

Evaluative research is concerned primarily with problems of developing scientific methodology for use in evaluation studies. According to Edward Suchman, "evaluation connotes some judgement concerning the effects of planned social change. The target or object of the evaluation is usually some program or activity which is deliberately intended to produce some desired result. The evaluation itself attempts to determine the degree of success or failure of the action in attaining this result. Thus, evaluation represents a measurement of effectiveness in reaching some predetermined goal."

"The key elements in this definition of evaluation are: (1) an objective or goal which is considered desirable or has some positive value; (2) a planned program of deliberate intervention which one hypothesizes is capable of achieving the desired goal; and (3) a method for determining the degree to which the desired objective is attained as a result of the planned program. All three must be present before evaluation can take place."(8)

8. Edward A. Suchman, "Action for What? A Critique of Evaluation Research", Evaluating Action Programs: Readings in Social Action and Education, ed. Carol H. Weiss (Boston: Allyn and Bacon, Inc., 1972), pp. 53-54.

Evaluative research is concerned specifically with the development of study techniques, while program evaluation is concerned with the application of evaluative research methods in the evaluation of actual programs. Program evaluation is an operational activity and forms part of the management decision making process. Figure 1.1 shows the conceptual relationship between program evaluation as part of the management process and evaluative research as an activity apart from the management process but related to it through program evaluation.(9)

[Fig. 1.1 The interrelated functions of management.]

The distinction between evaluative research and program evaluation helps sort out methodology questions from application questions and makes them easier to study and understand.

9. Jack L. Franklin and Jean H. Thrasher, An Introduction to Program Evaluation (New York: John Wiley and Sons, 1976), p. 174.

1.3 The Definition and Scope of Program Evaluation

Program evaluation pertains to the application of evaluative research techniques to the evaluation of a program. All programs consist basically of three parts: inputs, a process of implementation, and results or outputs.(10) Figure 1.2 shows a schematic representation of a program.

[Fig. 1.2 Schematic representation of a program: input flows into a processor which produces output, with a feedback loop from output back to input, all set within an environment.]

Programs are created to achieve certain desirable goals and objectives. Examples of programs are Head Start (an educational program for lower socioeconomic children), the American Cancer Association's anti-smoking campaign, and the death penalty (to discourage crime).(11) Each of these has specific objectives for changing human behaviour. The program evaluator's task is to determine how well the program is working and to feed that information back to decision makers. Programs do not take place in a vacuum. There are sometimes significant environmental influences which are not a direct part of the programs but which can affect the outcomes of a program quite drastically. Program evaluators must be aware of these outside factors and their potential effect on program outcomes.

10. Ibid., p. 142.
11. David E. Campbell, "Evaluation of the Built Environment: Lessons from Program Evaluation", The Behavioral Basis of Design Book 1: Selected Papers, EDRA 7, ed. Peter Suedfeld and James A. Russell (Stroudsburg, Pennsylvania: Dowden, Hutchinson and Ross, Inc., 1976), pp. 241-245.

A number of definitions of program evaluation were found in the literature.
These definitions varied mainly with respect to the scope of activities included as part of program evaluation. One end of the range is represented by James Ciarlo's definition, which limits the focus of evaluation to only the outcomes of a program.(12) A more popular definition, exemplified by Suchman, includes both outcomes and inputs.(13) This definition excludes program activities, however, which Suchman says are the domain of administrators who should use process evaluation techniques (operations research) to evaluate the program activities. A more inclusive definition was reported at the Southern Regional Conference on Mental Health. In their proceedings the purpose of program evaluation was described as follows: "to determine the degree to which a program is meeting its objectives, the problems it is encountering and the side effects it is creating".(14) To do this, program evaluation would have to include the monitoring of funds, personnel, client intake, problems, quality of services and outcomes, but would presumably exclude organizational analysis and needs assessment. Finally, at the other end of the scale, Binner advocates a systems approach to program evaluation which, if taken literally, would leave little outside the legitimate concern of program evaluation.(15)

Program evaluation theorists seem to agree that evaluators must maintain a primary focus on outcomes. Beyond this there should be a recognition that to really understand outcomes often requires looking at issues which may not appear to be specifically related to them. The experience of program evaluation suggests that it is advisable at present to keep a fairly open mind about what is included in a definition of program evaluation. Therefore various elements of process or context should be considered part of program evaluation if, and to the extent that, they are necessary for the assessment of program outcomes.

12. Franklin and Thrasher, p. 21.
13. Ibid.
14. Ibid., p. 22.
15. Ibid., pp. 22-23.

An Evaluation Typology

The following evaluation typology was developed by Thrasher and Franklin to help distinguish between several types of evaluation which can be done before, after or during programs.(16) Figure 1.3 illustrates the typology as it relates to a general program model.

[Fig. 1.3 A systems/process evaluation typology: against an input-program-output-effects model, the typology locates needs assessment (evaluation of program relevance), effort evaluation, process evaluation, program efficiency, program effectiveness, outcome evaluation, continuous monitoring, and impact evaluation.]

16. Ibid., p. 164.

Needs Assessment refers to the process used to gauge the state of the target population relative to the services offered. It is important to planning as well as evaluation. To be used for evaluative purposes, needs assessment must be done at more than one point in time. The comparison of results can provide a measure of effectiveness.

Effort Evaluation is concerned with determining how much is expended during a program in terms of resources: funds, time, manpower. Effort evaluation is concerned only with the inputs into the program itself. As Suchman puts it: "Evaluations in this category have as their criterion of success the quantity and quality of activity that takes place. This represents an assessment of input or energy regardless of output. It is intended to answer the questions 'What did you do?' and 'How well did you do it?'"(17)

Process Evaluation looks at what goes on within a program.
There is considerable debate about whether process evaluation is a legitimate concern of program evaluation. It generally includes program monitoring, client tracking, cost accounting, compliance, indicators of adequacy and general goal directedness. Whether or not it is a legitimate part of program evaluation, the activities included in process evaluation are certainly evaluation related, and according to Joseph S. Wholey, no program evaluation system can be complete without process information.(18)

17. Suchman, Evaluative Research, p. 61.
18. Joseph S. Wholey et al., Federal Evaluation Policy (Washington, D.C.: Urban Institute, 1970), p. 27.

Outcome Evaluation addresses the question, 'How have clients changed as a result of receiving services?' Everyone agrees that the assessment of program outcomes is the properly designated area for program evaluation. It most often consists of a follow-up survey where clients who have received a service are contacted to determine whether the problem situation the program was intended to address has been substantially altered.

Impact Evaluation looks at the broader effects of a program. In addition to responding to specific client or individual needs, a program usually has some broader social impacts on the community at large, and these are the concern of impact evaluation.

Program evaluation is clearly concerned with outcome evaluation. But as was discussed in the previous section, the scope of what is included in program evaluation, beyond outcome evaluation, is open to interpretation. The typology shown here illustrates the many evaluation types available for use at different times during a program. By recognizing and understanding these differences, managers, practitioners and evaluators are better able to appreciate where and how specific evaluation types can be most effectively applied to program study.

Consideration should be given to the development of a similar evaluation typology related to the building process. It could serve as an aid for distinguishing various kinds of building-related evaluation and provide a framework for relating new techniques and methods as they evolve.

"Action" Program Evaluation

The word "action" has been used by program evaluators to distinguish program evaluations which take place in an active field setting.(19) It emphasizes that there are special difficulties associated with evaluating a program which is operational, because circumstances are changing as the evaluation goes on. Recognition of this situation has led to the development and use of special study methods which selectively compromise the rigor of scientific investigation in the interest of practicality.

19. Weiss, p. 7.

These approaches are discussed in more detail in the following sections of this chapter. Consideration of the problems particular to "action" program evaluation is of special interest in this study because post-occupancy building evaluation takes place in a similar action setting.

1.4 Basic Steps for Doing Program Evaluation

The program evaluation process has been described by many authors as a set of general sequential steps. These descriptions show considerable agreement about many of the steps. A number of them are recorded in figure 1.4 and compared to each other. Each step was examined in detail, and those which were comparable were identified and linked horizontally on the chart. This analysis process led to a set of six general steps shown at the far right of the chart.
[Fig. 1.4 Analysis of program evaluation steps. The chart compares the step sequences proposed by several authors and links comparable steps horizontally to derive six general steps for program evaluation. In summary:

Hyman and Wright - "The basic method of evaluation research has five major aspects": conception and measurement of the objectives of the action program and of unanticipated relevant outcomes; formulation of research design and of criteria for proof of the effectiveness of a program; the research procedures themselves, including provision for estimating and reducing errors in measurement; problems of index construction and proper evaluation of effectiveness; procedures for understanding the findings on effectiveness or ineffectiveness.(21)

Deming - "The four requirements for an effective system of evaluation": a meaningful operational measure of success or of failure; satisfactory design of experiments, tests, surveys, or examination of data already recorded; a method for presentation and interpretation of the results of the experiments, tests, surveys or other investigations; some official or some group of people authorized to take action.(22)

Weiss - "In traditional formulation it [evaluation research] consists of five basic stages": finding out the goals of the program; translating the goals into measurable indicators of goal achievement; collecting data on the indicators for those who have been exposed to the program; collecting similar data on an equivalent group that has not been exposed to the program (control group); comparing the data on program participants and controls in terms of goal criteria.(23)

Riecken - "A model for evaluation studies": determining program objectives; describing operations; measuring effects; establishing baseline; controlling extraneous factors; detecting unanticipated consequences.(24)

Gifford - "An evaluation process is most effective when practitioners are supportive and interested in outcome": practitioners have a problem; set up objectives and measures of performance; gather data and create model; discussion of model; refine model; agreement; perform evaluation and provide results.(25)

Hawkridge - "In many evaluations of educational programs it is possible to identify seven phases": setting objectives for evaluation; selecting objectives to be measured; choosing instruments and procedures; selecting samples; establishing measurement and observation schedule; choosing analysis techniques; drawing conclusions and recommendations.(26)

Thompson - a formulation widely accepted as a starting point for discussion, based on that of the American Public Health Association, including at least: formulation of objective; identification of the proper criteria to be used in measuring success; determination and explanation of the degree of success; recommendations for future programme activity.(27)

General steps for program evaluation: 1. Definition of the problem. 2. Identify and record program goals. 3. Select or develop criteria for measuring program goals. 4. Design study plan. 5. Collect and analyze data. 6. Present findings.]

21. Herbert H. Hyman and Charles R. Wright, "Evaluating Social Action Programs", Readings in Evaluation Research, ed. Francis G. Caro (New York: Russell Sage Foundation, 1971), p. 186.
22. W. Edward Deming, "The Logic of Evaluation", Handbook of Evaluation Research, ed. Elmer L. Struening and Marcia Guttentag (Beverly Hills: Sage Publications Inc., 1975), I, p. 56.
23. Carol H. Weiss, "Evaluating Educational and Social Action Programs: A Tree Full of Owls", Evaluating Action Programs, ed. Carol H. Weiss (Boston: Allyn and Bacon Inc., 1972), p. 6.
24. Henry W. Riecken, "Memorandum on Program Evaluation", Evaluating Action Programs, ed. Carol H. Weiss (Boston: Allyn and Bacon Inc., 1972), p. 87.
25. Franklin and Thrasher, p. 98. The authors credit Bernard Gifford with this list of evaluation steps.
26. David G. Hawkridge, "Designs for Evaluation Studies", Evaluative Research: Strategies and Methods (Pittsburgh: American Institutes for Research, 1970), p. 28.
27. Mark S. Thompson, Evaluation for Decision in Social Programmes (Westmead, England: Saxon House, D.C. Heath Ltd., 1975).

These steps are common to most of the approaches presented and could be used to describe a general program evaluation process. Each of these six steps is discussed in more detail in the next sections.

STEP 1. Definition of the problem

Program evaluators have found that before undertaking any evaluation study a number of questions should be asked to determine whether or not the evaluation is likely to be worthwhile, and if so, to guide its design. First, has the user of the evaluation results been clearly identified? This may not be the same group that is requesting the evaluation study, and conflicts of interest may be encountered. The users of the evaluation results must be involved and consulted from the outset of the evaluation if the results are to be accepted later. Second, has the purpose for requesting an evaluation study been ascertained? The main purpose of an evaluation study should be to generate information about the outcome of a program which can contribute to decision making. There are several potential misuses of evaluation studies which have been identified by Edward Suchman. He calls these five ritualistic forms of evaluative misuse, or pseudo-evaluation, and cautions that evaluators must be wary of them:(20)

"1. Eyewash - an attempt to justify a weak or bad program by deliberately selecting for evaluation only those aspects that 'look good' on the surface. Appearance replaces reality.
2. White-wash - an attempt to cover up program failure or errors by avoiding any objective appraisal. Vindication replaces verification.
3. Submarine - an attempt to 'torpedo' or destroy a program regardless of its effectiveness. Politics replaces science.
4. Posture - an attempt to use evaluation as a 'gesture' of objectivity or professionalism. Ritual replaces research.
5. Postponement - an attempt to delay needed action by pretending to seek the 'facts'. Research replaces service."(28)

20. Suchman, "Action for What?", p. 81.

If any of these are discerned as the purpose behind a request for evaluation, then Suchman suggests that it will probably be a great waste of time and effort to proceed any further, since the question of how well a program has performed is not of real interest to those requesting the evaluation. If the evaluator is concerned with doing an objective evaluation, conflict is likely to result and to lead to confusion, frustration and wasted effort. In addition to being a problem in doing an evaluation, these pseudo-evaluation purposes can be obstacles to the effective use of evaluation results.
Formative vs. summative evaluation

Program evaluators have found it useful to distinguish between formative and summative evaluation.(29) Formative evaluation refers to evaluation which is done during the development of a program. It is concerned with obtaining and using information in an ongoing program situation. The evaluation of a prototype building would be an example of the type of situation in which formative evaluation would be used for a building evaluation. In this case, evaluation studies would be done at intervals during the project and the resulting information used during each successive stage. Characteristically, formative evaluations are done in relatively short time periods and are concerned with very specific and limited objectives. This often necessitates the use of "quick and dirty" study approaches which are feasible within restricted time frames.

28. Ibid.
29. Michael Scriven, "The Methodology of Evaluation," Perspectives of Curriculum Evaluation, ed. Ralph W. Tyler et al. (Chicago: Rand McNally, 1967), pp. 39-83.

Summative evaluation is done after a program is completed. It provides information for judging the worth or merit of the completed program. In the case of a building project, summative evaluation would be done after the building is completed and occupied. Time is usually available, allowing the use of more rigorous study approaches than those used in formative evaluation. The resulting evaluation information is more reliable than that gained from formative evaluations. Such evaluations are usually more comprehensive and thorough than formative evaluations.

This distinction has proven useful in program evaluation because it helps evaluators identify the kind of evaluation study required in a particular situation. It also assists them in making selections from a wide range of possible evaluation strategies and methodologies. Many program evaluations are actually a composite summative-formative type, but distinguishing the parts helps in organizing the study design.

STEP 2. Identify and record the program goals

To do program evaluation requires the existence of explicit goals, and these must be developed carefully if they are to be useful for evaluation purposes.(30) Goals and objectives must be clear and specific as to what is to be achieved. They must be acceptable to those responsible for both program implementation and program evaluation. They should relate logically to higher or more general goals. It is essential for evaluation purposes that the goals are realistic - attainable and measurable - that is, that they permit measurement of achievement. Goals should be formulated in such a way that they do not prescribe only one approach to program design but provide a general set of requirements which encourage the development of alternative solutions. Finally, objectives must be expressed, communicated to and understood by all concerned. This is a fundamental first step in the development of any program, and its thorough execution is essential if meaningful evaluation is to be achieved.

30. Refer to Weiss, Evaluation Research, pp. 26-31 for a discussion on the formulation of program goals.

Program evaluators have found that it is advantageous for evaluation purposes to have the evaluator participate in the goal setting process.(31) This provides an opportunity for the evaluator to gain a better understanding of the reasons behind the selection of particular goals and to appreciate special nuances which are often not conveyed in written documents. There is a side benefit from this participation: since many evaluators have had previous experience in evaluating the outcomes of program goals, they can offer valuable assistance in goal formulation. The prime participants in this activity, however, are the members of the program staff.

Manifest and latent goals

Program evaluators have become sensitive to the existence and importance of latent goals in addition to those which are clearly stated, or manifest. These latent or unstated goals can sometimes determine success or failure no matter what else happens. An organization's own survival would be an example of a goal which may not be stated but may override the importance of all other goals.(32) These should be considered and stated where possible, since they can be of major significance to a program's development and operation. The detection and consideration of latent goals is an important reason for having program evaluators participate in program goal setting.

31. George H. Johnson, "The Purpose of Evaluation and the Role of the Evaluator", Evaluative Research: Strategies and Methods (Pittsburgh: American Institutes for Research, 1970), p. 18.
32. Lee Gurel, "The Human Side of Evaluating Human Services Programs," Handbook of Evaluation Research, ed. Marcia Guttentag and Elmer L. Struening (Beverly Hills, California: Sage Publications, Inc., 1975), p. 13. Gurel points out that students of organization know that organizations have two superordinate sets of goals: one has to do with stability and survival and the other has to do with growth and change.
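[Editorial illustration, not part of the original thesis.] The requirements just described - goals that are explicit, measurable, agreed upon, and recorded alongside any latent goals that can be surfaced - lend themselves to a simple structured record. The sketch below is illustrative only; the field names and example goals are assumptions, not anything prescribed by the program evaluation literature cited here.

    # Illustrative sketch: recording program goals in a form that
    # supports later evaluation. Fields and values are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class ProgramGoal:
        statement: str    # what is to be achieved, stated clearly
        indicator: str    # the measurable indicator of achievement
        target: float     # the indicator level that counts as success
        manifest: bool    # True if stated in the program, False if latent

    goals = [
        ProgramGoal("Reading scores of participating children improve",
                    indicator="mean_reading_score_gain", target=10.0,
                    manifest=True),
        ProgramGoal("The agency's funding base is preserved",
                    indicator="next_year_budget_ratio", target=1.0,
                    manifest=False),  # a latent, organizational-survival goal
    ]

    # A goal is usable for evaluation only if its achievement is measurable.
    for g in goals:
        assert g.indicator and g.target is not None, g.statement

Recording latent goals explicitly, where they can be detected at all, is exactly the benefit claimed above for having the evaluator participate in goal setting.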
There is a side benefit from this participation. Since many evaluators have had previous experience in evaluating the outcomes of program goals they can offer valuable assistance in goal formulation. The prime partici-pants in this activity however, are the members of the program staff. Manifest and latent goals. Program evaluators have become sensitive to the existence and importance of latent goals in addition to those which are clearly stated, or manifest. These latent or unstated goals can sometimes determine success or failure no matter what else happens. An organization's own survival would be an example of a goal which may not be stated but may override the impor-tance of all other goals.(32) These should be considered and stated where possible since they can be of major significance to a program's develop-ment and operation. The detection and consideration of latent goals is an important reason for having program evaluators participate in program goal setting. 31. George H. Johnson, "The Purpose of Evaluation and the Role of the Evaluator", Evaluative Research: Strategies and Methods (Pittsburgh: America Institutes for Research, 1970), p. 18. 32. Lee Gurel, "The Human Side of Evaluating Human Services Programs," Handbook of Evaluative Research, ed. Marcia Guttentag and Elmer L. Struening (Beverly Hills, California: Sage Publications, Inc., 1975), p. 13. Gurel points out that students of organization know that organizations have two super-ordinant sets of goals one has to do with stability and survival and the other has to do with growth and change. Classification of goals. Edward Suchman has stated the problem of goal classification as follows. "In principle one may visualize an unlimited universe of possible objectives (goals) and subobjectives (sub-goals) corresponding to the various levels that make up a total program. These need to be arranged according to some organizational hierarchy".(33) Suchman says that a common way of classifying objectives is according to three general levels of organizational responsibility: immediate, intermediate and ultimate.(34) Immediate ob-jectives correspond to the activities of the field staff level where concern is for direct delivery of service, and success or failure is measured against immediate criteria such as - effort expended and quantity and quality of services delivery. Performance measurement systems are often used to evaluate operational performance at this level. Intermediate goals are the concern of the supervisory level where program direction is the focus and evaluation is based on the accomplishments or results of field staff. The third level corresponds to the concerns of central staff - responsible for general strategic planning and the development of overall goals or objec-tives. These categories correspond to the three basic levels of decision making generally recognized in most organizations and provide a useful means of distinguishing among groups of organizational goals. Selection of goals for evaluation Once program goals have been organized and listed a selection must be made of those which are to be used for evaluation. This selection is necessary because most programs have many more goals than could be practically evaluated. The most important goals should be selected which reflect the concerns of the decision makers who will use the evaluation results. In making this selection it is important that the evaluator 33. Suchman, "Action for What?", p. 68. In this reference, objectives are synonomous with goals. 
The issue of definitions is discussed in Chapter 4. 34. Ibid. and program staff agree on the chosen goals. This will greatly increase the chances of having the evaluation results accepted and used by the pro-gram staff. Care must be taken to select goals which are most likely to lead to information which is usable and practical. (35) STEP 3. Select or develop criteria for measuring program goals. In order to do evaluation, suitable measurement criteria must be deter-mined for each of the program goals selected for evaluation. Finding or developing appropriate criteria which capture the essence of the subject under investigation is a problem common to all research activities. Pro-gram evaluators have learned that they must ensure that the criteria they use for evaluating goals have a broad concensus among practitioners. This will greatly increase the likelihood of the acceptance and use of the study results. The first step is to search for existing measures. If suitable existing measures can be found and used, they provide important advantages. First, there is previous experience with the use of the measures and, secondly, there will be comparative data available against which to check evaluation results. There are many types of measures which have been used in program evaluation. They can deal with attitudes, values, know-ledge, behaviour, budgetary allocations, agency service patterns, producti-vity, and many other items. They can relate to people being served, agencies offering service, the neighbourhood or the community, or the public at large. (36) When there are no existing measures the evaluator must develop new ones. In this case there are some general guidelines which evaluators can follow. It is essential that the evaluator, "Stick to the relevant core of the subject under study . . . measures that are off-center from the main issue, even when reputable and time-honoured, are likely to be of little use at all. . . . Before embarking on the development of new measures, 35. Carol H. Weiss, Evaluation Research, pp. 30-31. Weiss discusses four major considerations for choosing among goals. 36. Ibid, p. 39. the investigators should have an acquaintance with considerations of validity and reliability . . . careful conceptualization and definitions are called for, and questions have to be pre-tested and revised (often several times around) until it is clear that they are bringing in the desired infor-mation"^ 3 7) The selection of suitable measures which are acceptable to program staff is of fundamental importance in the process of program evaluation. STEP 4. Design of the study The design of a program evaluation study involves the development of a plan for data gathering and analysis. This includes selection of a research model to guide the study, identification of the group or groups to be studied, selection of specific data gathering and analysis techniques, and establishment of scope and time frames for the study. There are two basic models used for most program evaluation studies; the goal attainment model and the systems model. (38) Goal-attainment model It is generally accepted by most program evaluators that clarifying goals is one of the most important and difficult aspects of evaluation. The goal attainment model is based on the premise that if the program goals can be well defined, then the best methods for assessing the program will be cor-rectly selected. The specification of goals is therefore of paramount im-portance to the whole evaluation study. 
This model focuses on goal set-ting as the main activity for directing an evaluation study. It puts little 37. Ibid, p. 37. 38. Herbert C. Schulberg and Frank Baker, "Program Evaluation Models and the Implementation of Research Findings", Readings in Evaluation  Research, ed. Francis G. Caro (New York: Russell Sage Foundation, 1971), pp. 56-62. emphasis on the program's structure or other contextual factors.(39) The main strength of this model is that it has many of the characteristics of classical research; as many evaluation theorists point out, approximation of the classical experimental model is very desirable in conducting program evaluation studies. It provides a rigorous study approach which can help make results more reliable. This model has a number of major weak-nesses, however. Because it focuses the evaluator's attention so strongly on goal clarification and research mechanics, it tends to downplay other important considerations. For example, insufficient concern is often given to the dissemination of the evaluation results to decision makers. This model is also concerned mainly with ultimate goals while clients are interested in immediate or intermediate goals. Systems model The systems model begins by establishing a working model of the pro-gram. It recognizes the interrelatedness of goals and aims at providing a method for focusing on those that are most relevant in a particular situa-tion. In addition to the achievement of goals and subgoals, the systems model includes consideration of other issues such as the effective coordina-tion of organizational subunits; the acquisition and maintenance of neces-sary resources; and the adaption of an organization to the environment and to its own internal demands.(*u) The main drawback of this model is that it is more elaborate, demanding and expensive to follow than the simpler goal-attainment model. On the other hand, it has the capability of showing feedback mechanisms and indi-cating the factors which help or hinder the effective communications of evaluation information. The selection of an evaluation model will generally depend on the purpose of the study and the use to which the results are to be put. 39. For discussion of four variations of the goal attainment model, see Franklin and Thrasher, Op. Cit., p. 79. 40. Schulberg and Baker, pp. 56-62. 25 Experimental designs The classical experimental model is considered by most program evaluation theorists to be the ideal study model to follow because it provides the most reliable results. Figure 1.5 illustrates this experimental model as it might be used in evaluation. (^D Measurements Measurements Before After Exper imenta 1 — A [ « - — ^ E x p o s u r e to - ... fy-Group Program Di f ference ' Random , - indicates Assignment effect of Program Control m B3 f i i B ^ i Group i . 4 r Fig. 1.5 Classical experimental model in program evaluation 41. Franklin and Thrasher, pp. 49-54. There is a further discussion of the use of the experimental model by Edward Suchman, Evaluating  Action Programs, pp. 64-65. To set up a classical experiment, experimental and control groups are established by random selection. Both groups are then tested before and after the program has been implemented. If the difference between and A2 in the above diagram, is reliably greater than the difference between Bj and B 2 , the program has demonstrated a measurable effect. 
The main feature of the classical experimental model which makes it so desirable is that it provides for random control of all the variables except the one under study. This means that any reliable changes observed are likely a result of the program. By using this model in program evaluation, the extent of the changes brought about by a program could be reliably determined. Unfortunately, the control needed to apply this classical experimental approach has sometimes proven difficult to arrange in action settings where program evaluations actually take place. As noted earlier, in an action setting program circumstances are changing as the evaluation is proceeding; this makes control of variables tenuous. Accordingly, true experimental control of variables in an action setting occurs only rarely as a happy accident. The use of the classical experimental model is particularly difficult in formative evaluations, because the programs under study are on-going and there is usually only a short time-frame in which evaluation results must be provided. It takes a considerable amount of time to set up and do a rigorous experimental study and time is usually not available in formative program evaluation studies. Quasi-experimental designs Since the classical experimental model is not practical in action settings program evaluators have had to develop and adopt less rigorous experi-mental models. Quasi-experimental designs are loosely modeled after the classical experi-mental model.(^2) Jhe major difference is that several of the outside variables are left uncontrolled. Study results are consequently less precise and more open to question than those from the classical model. Evalua-tors have found that it is not essential to guard against every possible source of error in doing an evaluation study. The aim should be to control those sources of error most likely to appear in a given situation. The quasi-experimental models have one main advantage over the classical ex-perimental design; they are often feasible when classical experimentation is not. Program evaluators have turned to the less rigorous quasi-experi-mental designs with the recognition and understanding of their inherent weaknesses and limitations compared to the classical model. There are several quasi-experimental approaches which have gained accep-tance among program evaluators; time series design, nonequivalent control group design and a combination of these two.(^3) The time series design calls for taking measurements of groups or individu-als at a number of different points in time, both before and after expo-sure to a program. A comparison of the results from the series of mea-surements indicates change trends. If enough measures are made both before and after program exposure, general trends can be quite accurately observed and the effects of the program determined with confidence. This approach does not rule out the possibility that something other than pro-gram exposure could have caused the observed difference. Program evalu-ators have found that by observing other events that the sample group is exposed to during the program, they can usually determine, with enough confidence for evaluation purposes, the extent to which results can be attributed to program effects. 42. Franklin and Thrasher, p. 55. 43. Ibid, pp. 55-61. Another quasi-experimental design is the non-equivalent control group design. It consists of a pretest and posttest with a control group but without random assignment. 
This design closely resembles the classical experimental design, with both an experimental and a control group established. The basic difference is that the subjects are not randomly assigned to the experimental and control groups. In actual program situations evaluators usually do not have control over the assignment of programs to one group or another. This is the responsibility of program administrators, so evaluators cannot make random assignments. Consequently, they must try to select a control group from among other groups not subject to the program which have similar characteristics to the experimental group. If this selection is done carefully, evaluators have found that this design can have reasonably high internal validity but low external validity.

Deniston and Rosenstock did a study of the validity of quasi-experimental designs by comparing two before-after designs without control groups and two nonequivalent control group designs with the findings from a classical experimental design.(44) They found that before-after designs without control groups overestimated program effectiveness and the nonequivalent control group designs underestimated program effectiveness.

Finally, there is a combination of the time series and the nonequivalent control group designs which provides a much stronger approach than either of its component designs. By doing the time series measurements on both the experimental and control groups, differences between them before and after the program will be pointed out, so the effects of program exposure can be more easily interpreted.

44. O.L. Deniston and I.M. Rosenstock, Health Services Reports (February, 1973), pp. 153-164.

Non-experimental designs

In some evaluation situations even quasi-experimental study designs have been found too elaborate for prevailing circumstances. In these cases non-experimental designs have been used by program evaluators. These designs are less reliable than quasi-experimental designs because of their inability to control outside effects. The cause of observed change could be attributed to many things other than the program, and non-experimental methods cannot give a conclusive answer to this question. Non-experimental designs usually take one of three forms: 1. after-only study of program participants, 2. before-and-after study of a single program, or 3. after-only study of participants and non-random controls.(45) The procedures appropriate to these forms are obvious from their titles.

45. Franklin and Thrasher, pp. 66-70.

Non-experimental designs have advantages in certain situations. They can provide faster results, for less cost, than quasi-experimental designs. They are suitable for some formative evaluation studies when results are needed very quickly. They can also provide a cursory look at the effectiveness of a program, providing information upon which to decide whether a more involved quasi-experimental study is warranted. Finally, they do have a decided advantage over informal evaluations because they employ a systematic study approach in which clearly stated program goals and measurement criteria are used for evaluation.

The issues involved in the use of experimental and non-experimental designs in program evaluation are likely to be of limited interest to building evaluators at present, since systematic building evaluation is not yet established. All the study designs discussed above . . . experimental, quasi-experimental, and non-experimental . . . are concerned with obtaining information from people to identify and measure the effects a program has had on a target population. For certain aspects of building evaluation, in particular the study of building users and user satisfaction, the systematic study of building occupants before and after they move into a new facility may be useful to obtain information about how a building affects occupants. This, in turn, could be compared to how a building was intended to affect people, and some measure of design effectiveness could be made. Assuming that buildings affect people in measurable ways, some of these designs may be of direct relevance to building evaluators.
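[Editorial illustration, not part of the original thesis.] To illustrate the combined time series / nonequivalent control group design described above, the sketch below applies a simple trend comparison to two measurement series. It is illustrative only: the series are invented, the groups are assumed to be similar but not randomly assigned, and a real study would use more observations and proper statistical tests.

    # Illustrative sketch of the combined time series / nonequivalent
    # control group design. All measurements are hypothetical.

    def trend_shift(series, split):
        """Difference between the mean level after and before the program,
        where `split` is the index at which the program was introduced."""
        before, after = series[:split], series[split:]
        return sum(after) / len(after) - sum(before) / len(before)

    # Quarterly measurements; the program begins after the fourth point.
    experimental = [50, 51, 49, 52, 58, 60, 59, 61]  # exposed to program
    control      = [48, 50, 49, 51, 50, 52, 51, 52]  # similar, unexposed

    exp_shift = trend_shift(experimental, split=4)
    ctl_shift = trend_shift(control, split=4)

    # The control series shows the background trend; the excess shift in
    # the experimental series is read as the effect of program exposure.
    print(f"experimental shift: {exp_shift:.1f}")
    print(f"control shift:      {ctl_shift:.1f}")
    print(f"estimated program effect: {exp_shift - ctl_shift:.1f}")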
All the study designs discussed above . . . experimental, quasi-experimental, and non-experimental . . . are concerned with obtaining information from people to identify and measure the effects a program has had on a target population. For certain aspects of building evaluation, in particular the study of building users and user satisfaction, the systematic study of building occupants before and after they move into a new facility may be useful to obtain information about how a building affects its occupants. This, in turn, could be compared to how the building was intended to affect people, and some measure of design effectiveness could be made. Assuming that buildings affect people in measurable ways, some of these designs may be of direct relevance to building evaluators.

45. Franklin and Thrasher, pp. 66-70.

STEP 5. Data Collection and Analysis

Data for use in program evaluation can come from many sources and be collected by the whole range of research techniques. According to Carol Weiss, the ingenuity and imagination of the evaluator are the only limits. She lists fifteen common sources of data for program evaluation.(46) Not all of these sources are appropriate to architecture, and those that are can be grouped into four main categories, as shown below:

1. Verbal responses of participants and users (this would include information resulting from interviews and questionnaires).

2. Observation (physical evidence, actions and behaviours viewed by trained observers can yield important information on building performance and use).

3. Written records (such things as institutional records, government statistics, diaries, financial reports and other project files can provide information for evaluation purposes).

4. Ratings (these include scalar responses, not necessarily verbal, of judgements by peers, staff or experts to examine or rate questions or simulations).

There are advantages and shortcomings with each of these sources, and the evaluator must be aware of them to make the most effective use of each technique. Both data collection and analysis must be done accurately and consistently regardless of what methods are used. This is very important if evaluation results are to be dependable and comparable across different studies. The specific techniques used for data gathering and analysis depend entirely on the issues under study, and they must be carefully selected or developed accordingly.

46. Weiss, Evaluation Research, p. 53.
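In a present-day evaluation toolkit, the four categories just listed might be organized as simple records, so that material gathered by different techniques can be pooled and filtered consistently. The sketch below is purely illustrative; the fields and example entries are invented.

    # The four data-source categories as simple records. Category names
    # follow the list above; everything else is assumed for illustration.
    from dataclasses import dataclass

    CATEGORIES = ("verbal response", "observation", "written record", "rating")

    @dataclass
    class EvaluationDatum:
        category: str      # one of CATEGORIES
        source: str        # e.g. "occupant interview", "maintenance log"
        issue: str         # the program or building issue it bears on
        value: object      # answer text, count, dollar figure, scale score

        def __post_init__(self):
            if self.category not in CATEGORIES:
                raise ValueError(f"unknown category: {self.category}")

    data = [
        EvaluationDatum("verbal response", "occupant questionnaire",
                        "thermal comfort", "too warm in afternoons"),
        EvaluationDatum("observation", "trained observer",
                        "corridor use", 37),          # persons per hour
        EvaluationDatum("written record", "maintenance log",
                        "operating cost", 1240.00),   # dollars per month
        EvaluationDatum("rating", "expert panel",
                        "daylighting", 4),            # 1-to-5 scale
    ]

    # Pooled data can then be filtered by category or issue for analysis.
    ratings = [d for d in data if d.category == "rating"]
    print(len(ratings), "rating items collected")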
STEP 6. Presentation of Findings

The communication and implementation of program evaluation results is one of the weaker aspects of program evaluation development to date. It is a topic far less developed than the methodological issues. Many of those who have been doing program evaluations have come from academic backgrounds; their interests are in doing the evaluations rather than in how the results are applied. There has been a reluctance and uncertainty on the part of evaluators about getting heavily involved with program practitioners in the organizational situations where evaluation results are put into practice. The recognition that comes from publishing in professional journals seems to be more rewarding and meaningful to many evaluators than how their work is used by the decision makers.(47) The limited involvement of evaluators with the transfer and use of study findings has greatly reduced the impact of many evaluation studies.

The decision-making environment in which evaluation findings are to be used is political in nature, and the information from evaluation is only one of many inputs that decision makers use.(48) If they are to have maximum impact, it is critical that evaluation results be presented in terms which are immediately comprehensible and useful to decision makers;(49) otherwise the results of the evaluation will go unused.

Some evaluators measure the success of a study by how many of their findings are implemented.(50) To ensure their success in these terms they may have to become very heavily and directly involved in program activities. The extent to which evaluators should become involved in implementing evaluation findings is a question of judgement.

47. Ibid, p. 111.

48. Ibid, p. 113.

49. Franklin and Thrasher, p. 111.

50. Ibid, p. 115.

1.5 General Problems and Pitfalls

A number of common problems have been identified in doing program evaluations and applying the results. Evaluators should be aware of these problems in order to prepare strategies for dealing with them in future evaluations. Some of the difficulties regularly encountered by program evaluators are discussed below.

1. The program is more important than its evaluation. Evaluation research deals with programs and people in real-life situations. When conflicts occur between the running of a program and the evaluation, the program should take precedence. This often results in the evaluator having to change his approach or compromise his standards in mid-stream, adversely affecting the quality of the evaluation results accordingly.

2. The failure to elicit a clear specification of goals at the outset of an evaluation study leads to unsatisfactory conclusions. Yet the goals of programs are seldom simple or clear. If stated at all, they are usually in vague generalities such as "improve the urban environment". As Carol Weiss points out, program staff find it hard to articulate or agree on goals in terms specific enough to evaluate.(51) Yet evaluators must use some accepted form of goals if their work is to proceed. And if there is no agreement on the goals, there is the danger that decision makers can easily dismiss the results as "not what we were trying to do".

3. There is often opposition to evaluation by program staff, and it is not clear that any administrators of projects or programs want to have their activities evaluated. Often they are satisfied with informal approaches to evaluation. They usually believe in the program and its work and see no need for evaluation: if the results are positive, they knew it all along, and if negative, the results might threaten the program and their jobs. When the evaluation study must rely on data from the program staff, their indifference or opposition can be defeating.

51. Weiss, Evaluation Research, p. 28.

4. Programs are usually very complex, and this can lead to difficulty in attributing the cause of measured effects to the correct factors. The evaluator must try to build in appropriate measures of program factors to identify the truly productive program components. Even if this is done, programs can change drastically while they are being studied, and this makes identification of causes very difficult. The evaluator must try to determine whether results are attributable to some original program feature or to the midstream change.

5. Program effects take time to develop and be measured, and this requires a certain stability in the program's form. This may present a conflict with the needs of program staff, who want quick feedback to help them improve the program as it goes along. In such cases formative evaluation techniques would be most appropriate.
6. There is considerable evidence that evaluation results have not exerted much influence on program decisions. Decision makers always have a number of other factors, such as political and organizational issues, to consider in their decision making; the results of evaluation studies are only one consideration.(52)

7. Evaluation studies are often tagged onto a program in the later phases of its development. This makes evaluation very difficult, since goals are usually not well defined, unrecorded changes have likely occurred, and no central study group has been established. It is highly desirable that evaluation studies be ongoing with the program, commencing with the planning phase when the goals are set and following through the design of activities to the collection and interpretation of outcome data.

52. Jack Zusman and Raymond Bissonette, "The Case Against Evaluation," International Journal of Mental Health (Summer, 1973), pp. 111-125.

1.6 Summary and Implications

Program evaluation is concerned with measuring the effects of a program with respect to its intended goals by the use of scientific methods. The evaluator's task is to try to measure how well (magnitude, patterns, directions, trends) a program is working or has worked. This involves identifying the goals of a program and selecting those to be evaluated, selecting or developing measurement criteria for the goals, designing a study plan, collecting data according to the plan to find out if the goals are being attained and why, and communicating the results to decision makers. The application of scientific methods has proven to be very difficult in an "action" or field situation. Program evaluators have developed approaches to their studies which can minimize the problems that tend to discredit the validity of their study results.

There are striking parallels between a building, designed in response to goals and intended to have certain effects on people and their environment, and a program, also developed to meet goals and have intended effects on people.(53) Both must be evaluated in an "action" setting where variables cannot be fully controlled. This suggests that some of the methods and techniques developed, and the pitfalls discovered, by program evaluators should be of great interest to those concerned with the development of systematic post-occupancy evaluation.

The experiences of program evaluation point to the need for at least three basic ingredients in a systematic evaluation process related to a program delivery process:

1) There must be a clear statement of goals or intentions, with criteria.

2) Systematic methodologies must be used in conducting evaluations.

3) The evaluation process must be closely associated with the program it is evaluating at certain key points: goal definition, monitoring of significant changes during the program, and study of program outcomes.

53. Campbell, p. 1.

Perhaps the most fundamental lesson which can be learned from program evaluation experience is the value of distinguishing clearly between subjective personal evaluation and systematic objective evaluation. That is not to say one is better or worse; they both offer unique advantages. Subjective personal evaluation has been predominant in the building profession. Program evaluation points to many of the advantages of a systematic evaluation process, as well as to some of the fundamental issues to be considered in establishing such a process.
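These three ingredients can be given a concrete, present-day form. The following minimal sketch in Python records them for a single study; the goal, criterion and measured value are invented for illustration and are not drawn from the thesis.

    # The three ingredients above as a record for one evaluation study.
    from dataclasses import dataclass, field
    from typing import Callable, List

    @dataclass
    class Goal:
        statement: str                    # ingredient 1: a clear intention
        criterion: str                    # ...with an explicit criterion
        measure: Callable[[], float]      # systematic data collection

    @dataclass
    class EvaluationStudy:
        goals: List[Goal]
        methodology: str                  # ingredient 2: a systematic design
        checkpoints: List[str] = field(default_factory=lambda: [
            "goal definition",
            "monitoring of significant changes",
            "study of outcomes",
        ])                                # ingredient 3: key contact points

    study = EvaluationStudy(
        goals=[Goal("reduce peak corridor congestion",
                    "no more than 30 persons per hour at peak",
                    lambda: 27.0)],       # stand-in for real data collection
        methodology="nonequivalent control group design",
    )
    for g in study.goals:
        print(g.statement, "- criterion met:", g.measure() <= 30.0)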
CHAPTER 2
A REVIEW OF PAST EFFORTS AT BUILDING EVALUATION

Although there is no regularly used building evaluation process, there have been several attempts at doing building evaluation. Some of these have made important developmental contributions or served to point out significant issues and problems. This chapter looks at a number of evaluation studies and discusses their implications for the development of a systematic building evaluation process.

2.1 Architectural Criticism - an Informal Evaluation Process

Architectural criticism is the traditional means of design evaluation within the architectural profession. Valuable contributions have been made to the development of architectural thought by gifted critics; John Ruskin, Le Corbusier and Lewis Mumford are notable examples. These contributions have been unique, dependent on the qualities of the individuals' perceptions.

Unfortunately, as it is generally practiced, architectural criticism is at best an informal approach to evaluation. Much that is written by critics in architectural journals is based solely on personal impressions.(1) Architectural criticism has been mainly concerned with the aesthetics of buildings. Furthermore, "What are passed as aesthetic criticisms of buildings are more often than not exercises in untutored subjectivity."(2) Because evaluation criteria are almost never stated by critics, the only conclusion one can draw is that most of the resulting information, while perhaps interesting, is unreliable and of little practical use to future decision making.

1. Alan Temko, "Evaluation: Louis Kahn's Salk Institute After a Dozen Years," American Institute of Architects Journal (March, 1977), pp. 42-48. This article is part of a series on post-occupancy evaluation published in the A.I.A. Journal. In the last part of the article the author goes into a rambling discussion of the appearance of the building at sunset, conjuring up images of Greek mythology. This discourse is personal romanticism with no relevance to either the building or its users. The article reflects much of what is produced under the title of architectural criticism.

2. Jayant J. Maharj, "The Nature of Architectural Criticism" (unpublished Master's thesis, School of Architecture, Nova Scotia Technical College, 1976), p. 132.

According to Bechtel, the only accepted criteria for success within the design profession are financial and reputational. Another noted weakness is that "Consumers of buildings have been largely ignored as audiences of architectural criticism."(3) The result is that the profession does not appear to be very concerned with the values and beliefs of those outside the profession, including its clients. The profession runs the danger of losing the confidence of the public and of being considered irrelevant to the public's needs. A number of authors have expressed concern over this situation and have called for more systematic and responsive approaches to criticism and evaluation.(4) Such changes are of fundamental importance if a useful and dependable knowledge base is to be developed from the practice of architectural criticism. Unfortunately, at present, architectural criticism offers little guidance for the development of systematic evaluation.

3. Ibid, p. 134.

4. James Marston Fitch, "Architectural Criticism: Trapped In Its Own Metaphysics," Journal of Architectural Education (April, 1976), pp. 2-3, and P. Collins, Architectural Judgement (Montreal, 1971), both emphasize the need for an objective and systematic approach to architectural criticism.

2.2 Early Attempts at Systematic Evaluation

One of the first studies which attempted to use a systematic approach in the assessment of a building environment was made by the Pilkington Research Unit.
The study tried to "provide a global picture of the environment in modern office buildings and in so doing brought a new approach to the evaluation of building performance."(5) The study focused on the users' satisfaction with the quality of the building environment.

Apart from applying new methods for studying the environment and demonstrating the effectiveness of interdisciplinary teams in this kind of work, the study's main contribution was in pointing out the major problem in trying to do comprehensive building evaluations: there were no criteria for an overall assessment of the environment.

"A judgement of the success or failure of the total environment within a building necessarily takes account of all the contributors: owners, users, design team, constructors, maintenance staff - and of all the contributory factors: economic, physical, social and psychological. At present there is no way of attaching a meaningful weighting to any of these and appraisals of the total environment can only be made on an individual basis by comparison with other buildings - there is no criterion."(6)

Progress has been made in developing criteria for various aspects of building performance, particularly in the economic and physical areas, but no established set of criteria has yet been identified for evaluating the overall building environment. Until further developments are made, a holistic approach to building evaluation will not be possible.

5. Office Design: A Study of Environment, ed. Peter Manning (Liverpool University, 1965).

6. Ibid.

A Conceptual Model Relating Evaluation and Design

The work of another research group in Britain, the Building Performance Research Unit (BPRU), produced one of the most comprehensive building evaluation efforts made to date. Several concepts and methods useful to the development of building evaluation resulted. One of the most important ideas came from the unit's early work, which was devoted largely to developing a conceptual model of the system of building and people. The model produced from this work is shown in Figure 2.1.(7)

7. Thomas A. Markus et al., Building Performance: Building Performance Research Unit (New York: John Wiley and Sons, 1972), p. 4.

[Fig. 2.1 Conceptual model of the system of building and people: five linked systems - the building system (construction, services, contents; cost of provision), the environmental system (spatial, physical; cost of maintenance), the activity system (identification, control, communication, informal activity, workflow; cost of activity), the objectives system (production, adaptability, morale, stability; value of achieving objectives) and the resources system, which relates the costs and values.]

This model has five main parts: a) the objectives system, b) the activity system, c) the environmental system, d) the building system and e) the resources system. "These five parts, with their sub-systems and components, make a complex system which is of course open to the influence of politics and economics, culture, climate, the city plan and site, the social and business context. It is within these that the building universe exists."(8)
The BPRU model is built on the recognition that all organizations have objectives, and that to achieve these objectives organizations must undertake certain activities. Activities can best be performed in appropriate environments. In most cases a building is used to modify various external conditions and provide a controlled internal environment for performing specific activities. Three parts of the BPRU model (the building, environmental and activity systems) have associated costs which can be added up and equated to the value of meeting the objectives. The resources system provides a way of relating the costs and values of the various sub-systems of the model.

8. Ibid.

One very important feature of the model is that it relates design decision-making and evaluation as aspects of the same systems model. By following and linking the parts of the model in one direction, from objectives to building system, we are following the sequence of a design process; going in the other direction, building to objectives, we are in an evaluation mode. The model shows that organizational objectives can be related to the building system. If these objectives are actually used to generate a design, then the model shows how they can be traced back through its sub-systems to form the basis for evaluation. Figure 2.2 illustrates the two modes represented in this model: design and evaluation.

[Fig. 2.2 Design and evaluation: two aspects of the same process. The design mode runs from objectives through activities and environmental characteristics to a hardware solution; the evaluation mode runs in the reverse direction.]

Within the model there is a fundamental recognition that buildings are for people and that building performance can and must be evaluated with respect to their goals and objectives. "The main reason for including the objectives system in our model is that it provides an important context for the other four systems. Unless it is accepted right at the start of research into design that the bricks and mortar of a building exist to facilitate some specifiable goals, then it is impossible to proceed further."(9) There is a striking similarity here between the BPRU's ideas on the importance of goals and the basic premise behind program evaluation: that the achievement of goals is the purpose of programs and that these goals must be the basis for evaluation.

9. Ibid, p. 5.

The four sub-systems of the objectives system (production, adaptability, morale and stability) interact with each other. They and their interactions account for most of the overall objectives of most organizations. "In many organizations these [sub-systems] require a building or a specific type of environment if the organization is to move towards achieving them, so it is valid to think of this system giving rise to the need for a further system of the building. On the other hand, the reason for the building is that it generates an environment required for the activities, needed by the organization to achieve its objectives. In other words, the objectives give rise to the activities which it is necessary to implement in order to achieve those objectives. Thus the activity system may be regarded as a set of goals, the achievement of which leads to the reaching of objectives. Objectives are therefore the beginning and the end of the whole system; its vital centre."(10)

10. Ibid, p. 6.
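The two modes of Figure 2.2 can be made concrete in a short sketch: reading the same chain of systems forward gives the design sequence, and reading it backward gives the evaluation sequence. The chain itself is the BPRU's; this present-day Python rendering is only an illustrative assumption.

    # The chain of systems in Figure 2.2, traversed in both directions.
    SYSTEMS = ["objectives", "activities",
               "environmental characteristics", "hardware solution"]

    def design_mode():
        # Forward traversal: the sequence of a design process.
        return " -> ".join(SYSTEMS)

    def evaluation_mode():
        # Reverse traversal: tracing a built solution back to the objectives
        # that generated it, the basis for goal-directed evaluation.
        return " -> ".join(reversed(SYSTEMS))

    print("design mode:    ", design_mode())
    print("evaluation mode:", evaluation_mode())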
Lessons from the BPRU's Evaluation Study

In 1969 the BPRU conducted an evaluation of a completed school building, St. Michael's Academy in Kilwinning. The study's purpose was to show how the techniques developed at that time could be applied in an actual evaluation study without special expertise, and that useful results could be obtained despite the general lack of knowledge about evaluation. The BPRU's conceptual model(11) was used to structure the evaluation. A number of existing evaluation measures were tested and some new ones developed for the study. All the measures used in the study were divided into three categories: familiar measures, developments of familiar measures and entirely new measures. They are listed below.(12)

New measures
- Compactness
- Teacher satisfaction
- Circulation rules
- Teaching space costs

Developments of existing measures
- Daylighting
- Heating
- Cost of educational objectives

Existing measures
- Capital costs analysis
- Space allocation analysis
- Artificial lighting
- Distance travelled

11. Supra, p. 39.

12. Thomas A. Markus, "Building Appraisal: St. Michael's Academy, Kilwinning," Architectural Journal (January, 1970), pp. 46-47.

In the evaluation study report it was pointed out that the usefulness of measures like these depends on comparison with norms. But norms are not well developed, because few evaluations have been done.

The BPRU team drew attention to a number of important lessons learnt from their study. These demonstrate the range of issues about which useful information was obtained from the BPRU's building evaluation study, and point to where regular building evaluations could make significant contributions to building practice and use.

Costs

The evaluation team's cursory look into costs strongly emphasized two key problem areas: "the short sightedness of a policy which considers only initial costs and which lays down cost limits in these terms" and "the relatively low utilization of the school building - short days, no weekends, long holidays."(13)

It is interesting to note that there have been many recent attempts at applying life-cycle costing techniques, which take into account operating and maintenance costs, during building design decision-making. Also, those concerned with energy conservation are beginning to look at building utilization factors as a significant aspect of energy consumption. Both of these efforts have been plagued by the general lack of reliable information about life-cycle costs and building utilization, information which the BPRU demonstrated could be obtained through regular evaluation studies.

13. Ibid, p. 48.

Programming

The study team found that the most important and critical decisions for the achievement of a good design were those concerned with choosing from the "infinite possible range of spaces which will best meet present and future activity patterns",(14) rather than those concerned with arranging the spaces presented in the client's brief as the schedule of accommodation. The team felt that architects were best qualified to carry this out, but that the architect "must equip himself with sharper and quicker analytical tools". Recognition of the importance of this early decision-making activity has led to the establishment and development of a new specialized service area called programming. Programming is discussed in more detail in a later chapter of this paper.

14. Ibid.
Building Management

It was found that there was a tremendous lack of even "elementary management techniques" and "adequate records, budgetary controls or estimation procedures" among those who own and run buildings.(15) This lack of data on performance puts the building industry in a very weak position with respect to running its affairs efficiently and effectively. This is especially true when compared to many other industries, such as the automotive and aerospace industries, where evaluation and performance data are systematically gathered and used in decision making.

15. Ibid.

Developing Evaluation

The team considered that their study was relatively easy to do where it used techniques that had been well worked out. They pointed out that other techniques can best be developed and perfected through use. Once evaluations are routinely done, when in the team's words "the job becomes dull and repetitive (then) a continuous flow of useful results will become available with little effort."

Establishing Evaluation Norms

Only by doing evaluations, and comparing and collating results, can reliable evaluation norms be established. According to the BPRU team, "Comparison of performance with theoretical optima or legal requirements is useful and essential. But the greatest need is for increased understanding of how a particular aspect of a particular building fits into the world at large - in other words we need to establish norms. Not only will this involve use of appraisal tools often and for a long time, but it means that existing information we publish and read (eg. A3 cost analyses) must be collated and analysed so that norms and trends can be found."(16)

Educating Building Users

The study found that the very act of evaluation, in which the building users were involved, made the users more conscious of their building environment and of many aspects of it they would otherwise not have considered. This raises the possibility that they will be better able to use the building for their purposes, and especially to adapt it to their changing activities.

These lessons learned from the BPRU's evaluation demonstrate the kinds of issues about which information can be obtained from an evaluation study, and point out some of the potential users of evaluation results: planners, programmers, architects, building managers, evaluators and building users. Most importantly, it was concluded that only through doing evaluations can better techniques and meaningful norms be developed.

16. Ibid.

2.3 The Performance Concept as a Basis for Evaluation

The use of the performance concept in building projects offers a means of structuring systematic evaluation into the building industry. John Eberhard, formerly of the National Bureau of Standards in the United States, has described the performance concept as follows:

"The performance concept is an organized procedure or framework within which it is possible to state the desired attributes of a material, component or system in order to fulfill the requirements of the intended user without regard to the specific means to be employed in achieving the results. This is true of any product or system produced for use by humans, from shelter . . ."(17)

To date, the performance concept has been used primarily to stimulate innovation and reduce costs in the construction industry, by presenting building requirements to contractors in a manner that allows them more scope for response than they have had with traditional prescriptive specifications.
Performance specifications have been used in procuring various building systems for schools (SCSD, SEF) and offices (PBS).(18)

Any performance statement or specification is made up of three basic parts, as the sketch below illustrates:

1) a requirement - derived from some characteristic of the users that the physical environment can affect; these characteristics could be physiological, psychological or sociological;

2) criteria - attributes or characteristics that are to be used in evaluating whether the requirements are being met; and

3) a test - to evaluate the performance of solutions in meeting the requirements using the stated criteria.

17. John Eberhard, The Performance Concept: A Study of Its Application to Housing (Washington, D.C.: U.S. Department of Commerce, 1969), p. 3.

18. SCSD (School Construction Systems Development) in California and SEF (Study of Educational Facilities) in Toronto were two projects which used performance specifications to procure building systems for school buildings. PBS (Public Building Service) also developed a performance specification for office building procurement; see The PBS Building Systems Program and Performance Specification for Office Buildings, "The Peach Book" (3rd ed.; Washington: The Office of Construction Management, Public Building Services, General Services Administration, November, 1975).
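The three-part structure translates naturally into an executable form. The following minimal sketch (Python; the daylighting requirement and its 2% threshold are invented examples, not drawn from Eberhard) shows how a performance statement can carry its own test:

    # A performance statement as defined above: a requirement, the criteria
    # used to judge it, and a test that applies the criteria to a solution.
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class PerformanceStatement:
        requirement: str              # user-derived need
        criteria: str                 # attributes used in judging
        test: Callable[[dict], bool]  # the evaluation, built in

    daylight = PerformanceStatement(
        requirement="adequate daylight at work surfaces (physiological)",
        criteria="daylight factor of at least 2% at desk height",
        test=lambda solution: solution.get("daylight_factor", 0.0) >= 2.0,
    )

    # A proposed solution, with a measured or simulated attribute, can be
    # checked without regard to the specific means used to achieve it.
    proposed = {"daylight_factor": 2.4}
    print("requirement met:", daylight.test(proposed))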
The performance concept has the means of evaluation built into it. By extensively introducing the performance concept into various levels of the construction industry, a systematic evaluation process could be established. There are, however, three major problems which Michael Brill says "presently inhibit the development of a broadly used system of evaluation on a performance basis".(19)

1. The way we experience and the way we study the environment are not compatible, and we have no means of reconciling them. Study of the environment and specification of its characteristics are always done parametrically rather than holistically; that is, we examine specific parts or aspects of the environment. There is an assumption that after studying various parameters we can resynthesize them into a whole, but Brill points out that "efforts at resynthesis are notable failures".(20) This presents a fundamental problem because human "perceptions and behaviors are responses to a holistic (rather than parametric) environment". This same problem was identified by the Pilkington study team when they stated that no criterion could be found to assess the (holistic) environment.(21)

19. Michael Brill, "Evaluating Buildings on a Performance Basis," Designing for Human Behavior: Architecture and the Behavioral Sciences, ed. Jon Lang, Charles Burnette, Walter Moleski and David Vachon (Community Development Series; Stroudsburg, Pennsylvania: Dowden, Hutchinson and Ross, Inc., 1974), pp. 316-319.

20. Ibid, p. 319.

21. Supra, p. 38.

2. The measures we have for environmental features, and particularly for human reactions to them, are not well developed. According to Brill, we can now measure the physiological aspects of satisfaction fairly well, but measurement of the psychological and sociological aspects is very poorly developed in terms that are meaningful to building designers. Consequently it is very difficult to state user requirements in performance terms and to evaluate the building environment with respect to them. This difficulty has been pointed out by several authors as a major deterrent to broader application of the performance concept.(22)

3. Present design methodologies need to be revamped. There is a strong question as to whether the design process is able to accommodate much new information. The use of the performance concept will call for the development and consideration of new, more precise information. Unless the design process can make use of this information, the performance concept cannot be further developed. According to Brill, much of the development work in design methodology currently going on appears to be, in part, a recognition of the need for this type of new design process.(23)

22. A. Cronberg and A. Saeterdal, "The Potential of the Performance Concept - Some Questions," Industrialization Forum, Vol. IV (No. 5, 1973), pp. 23-26, and William M. Pena and John W. Focke, "Performance Requirements of Buildings and the Whole Problem," Performance Concepts in Buildings; Proceedings of the Joint RILEM-ASTM-CIB Symposium, 1972, National Bureau of Standards Special Publication 361, Vol. I (March, 1972), pp. 43-55.

23. Brill, p. 319.

2.4 Building Evaluation: A Behavioural Science Perspective

During the last 10 to 15 years there has been an increasing number of studies concerned with trying to better understand man-environment relationships, and many of these studies have been evaluative in nature. These efforts have served to focus more attention on the social purpose of buildings. The main emphasis of many of these studies has been to evaluate buildings from the viewpoint of the occupant users and to measure their satisfaction with various attributes of the building environment. The majority of these studies have been concerned mainly with determining goodness of fit . . . that is, finding out the extent to which an existing building provides an environment which suits the activity patterns of the users and satisfies them.

The results of these studies have been interesting, but they have not, as yet, provided a great deal of data which has been used directly by decision makers in the building process. One of the main reasons for this situation is that most of these evaluation efforts have been 'unhinged' from the building process. They have been concerned with looking at isolated occupancy situations to gain information pertaining to basic theoretical questions about man's relationship to the physical environment; they have not been addressed to questions of specific concern to designers or other decision makers in the building process. There is also a lack of psychological and sociological measures which can provide meaningful information for design decision makers.(24) This situation might be changed if the concerns of behavioural scientists could be more directly integrated into the design and construction process. The related issues of interest to the decision makers could then be more easily identified, to form the focal point for evaluation studies.

24. Supra, p. 47.

One interesting approach to the problem has been proposed by Robert Sommer.(25) He believes that behavioural science ideas and concerns could be made more relevant and useful to designers if psychologists and sociologists were included in project design teams. By being involved directly in the design decision-making process, they could help identify and record goals pertaining to behavioural issues and study how these are incorporated into design. They could then follow up with evaluation studies. According to Sommer, until such a process is followed, much of the development in the man-environment field is likely to be slow and largely irrelevant to decision makers in the building process.
John Zeisel is a sociologist who has been heavily involved with applying sociological concepts to the study of environment and to the evaluation of building design. Zeisel has been particularly concerned with relating evaluation to the design process and has worked with architects along the lines suggested by Sommer. The model shown in Figure 2.3 illustrates Zeisel's view of how evaluation can be linked to the design process.(26)

25. Robert Sommer, "Looking Back at Personal Space," Designing for Human Behaviour: Architecture and the Behavioural Sciences, ed. Jon Lang, Charles Burnette, Walter Moleski and David Vachon (Community Development Series; Stroudsburg, Pennsylvania: Dowden, Hutchinson and Ross, Inc., 1974), p. 208.

26. John Zeisel, Sociology and Architectural Design: 6, Social Science Frontiers: Occasional Publications Reviewing New Fields for Social Science Development (New York: Russell Sage Foundation, 1975), p. 20.

[Fig. 2.3 The Design Cycle]

Zeisel describes the five steps of the "current project" cycle as follows:

"1. Programming (Analysis) - Identifying design objectives, constraints and criteria.
2. Design (Synthesis) - Making design decisions which satisfy criteria.
3. Construction (Realization) - Building the project and modifying plans under changing constraints.
4. Use (Reality Testing) - Moving in and adapting the environment.
5. Diagnostic Evaluation (Review)"(27)

Pre-design programming and post-diagnostic evaluation are identified as two distinct stages of the design cycle in which evaluative studies can be advantageously undertaken. Evaluation studies can provide information during programming as part of the analysis process, shown as step one in Zeisel's model. Zeisel cites the following examples of programming research: Alexander (1969); Birley, et al. (1970); Deasy & Lasswell (1966); Ertel (1974); Howell & Dinkel (1973); Morris (1961); Zeisel (1973 and 1974).(28) Diagnostic evaluations are made as part of a review activity after a building is in use. Examples of diagnostic evaluation research include: Cooper (1965 and 1970); Griffin (1973); Saile (1971 and 1972); Van der Ryn & Silverstein (1967); Zeisel and Griffin (1974); Zeisel and Rhodeside (1974).(29) The purpose here is to obtain information on performance, particularly with respect to objectives.

27. Ibid.

28. Ibid.

29. Ibid.

In Zeisel's model the experimental and cyclic nature of the design process is emphasized. The programming, design, construction and use of a building provide the subject for testing, and diagnostic evaluation can be considered the means of gathering data and completing the experimental situation. The notion of equating a building project with an experiment has been advanced by several authors.(30) Feedback from "post-diagnostic evaluation" can contribute to the development of general design knowledge and also directly to pre-design programming in new projects. Ostrander and Connell point out that "A single architectural experiment (or one building) when evaluated may not tell the architect very much, but a series of experiments (involving a number of buildings) constitutes a learning system that offers both useful feedback and a sense of what direction to take next."(31)
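This learning-system idea can be sketched as a loop over successive projects, with each diagnostic evaluation feeding the programming of the next. The step names follow Zeisel's cycle; the code and its feedback store are illustrative assumptions, not anyone's published method.

    # Zeisel's cycle run over a series of projects: evaluation findings from
    # one building feed the programming (step 1) of the next.
    STEPS = ["programming", "design", "construction", "use",
             "diagnostic evaluation"]

    knowledge = []                       # findings carried across projects

    def run_project(name, prior_findings):
        # Programming draws on evaluations of earlier projects.
        print(f"{name}: programming with {len(prior_findings)} prior findings")
        for step in STEPS[1:-1]:
            print(f"{name}: {step}")
        finding = f"finding from {name}"  # stand-in for a real study result
        print(f"{name}: diagnostic evaluation -> {finding}")
        return finding

    for project in ["building A", "building B", "building C"]:
        knowledge.append(run_project(project, list(knowledge)))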
Some of the evaluation techniques drawn from the behavioural sciences are gaining growing recognition and acceptance among architects. The American Institute of Architects (AIA) has prepared a taped training program which describes a number of evaluation methods which, the AIA says, architects can and should be using to do their own building evaluation studies.(32) The use of observation and behavioural mapping techniques, activity logs, social mapping and semantic rating scales is specifically recommended. Each of these offers a way of systematically obtaining or analyzing data about the use of buildings and building spaces. Semantic rating scales can be used to find out how people feel about certain aspects of the environment; observation and behavioural mapping can help an architect look at what people do in the designed environment; an activity log can be used to view a person's behaviour over time to compare actual with intended use of spaces; and social mapping helps to explore and identify relationships between people in designed environments.

30. Brill, Designing for Human Behavior, pp. 316-319; Bill Hillier and Adrian Leaman, "A New Approach to Architectural Research," Royal Institute of British Architects Journal (December, 1972), p. 521; Raymond Studer, "The Organization of Spatial Stimuli," Environment and Social Sciences: Perspectives and Applications, ed. J.L. Wohlwill and D.H. Carson (American Psychological Association, Inc., 1972), pp. 279-292.

31. Edward R. Ostrander and Bettye Rose Connell, "Maximizing Cost Benefits of Post-Construction Evaluation," The Behavioural Basis of Design, Book 1: Selected Papers EDRA 7, ed. Peter Suedfeld and James A. Russell (Stroudsburg, Pennsylvania: Dowden, Hutchinson and Ross, Inc., 1976), p. 241.

32. Henry Sanoff, et al., "Post Completion Building Evaluation," American Institute of Architects Continuing Education Cassette Tape and Supplementary Written Material (Washington, D.C.: Tech. Tapes, n.d.).

In the bibliography of the AIA training program, 35 references are cited with notations about which technique each reference illustrates or provides further information about.(33) The program recognizes that architects have not been trained in systematic methods of building evaluation in the past, and it is offered in the belief that it is time architects began to conduct their own surveys of how people use environments.

33. Henry Sanoff et al., "Building Evaluation," Build International, Vol. VI (No. 3, May-June 1973), pp. 261-297.

2.5 Post-Occupancy Evaluation: Towards a Consolidated Perspective

Post-occupancy evaluation (POE) is beginning to emerge as a recognized area of study and a focal point for building evaluation ideas. Both architects and social scientists have been major participants in the development of POE. It has been under discussion for less than 20 years, and there is a great deal of debate about its scope, content and purpose.(34) POE occurs some time after a building has been occupied and regular use patterns have been established; two years after completion has been suggested as an optimum period.(35) It is generally concerned with obtaining information on the use and performance of a building and making use of that information in future building projects. According to Bechtel, "The flow of information from completed projects to those still on the drawing board is the essence of post-occupancy evaluation . . ."(36)

34. "Commentary," Research and Design, ed. Kevin W. Green (Washington, D.C.: American Institute of Architects Research Corp., July, 1978), Vol. I, No. 3, p. 1.

35. "P.O.E. - The State of the Art," Research and Design, ed. Kevin W. Green (Washington, D.C.: American Institute of Architects Research Corp., July, 1978), Vol. I, No. 3, p. 7.

36. Ibid.
Post-occupancy evaluation is comparable to program evaluation in a number of ways: both are concerned with outcome evaluation and with the use of systematic study methods in doing evaluation.

There is still much controversy between architects and social scientists about what is most important in POE. For social scientists like Bechtel it is essential that POE be accurate, and to ensure this, POE "must rest on scientifically gathered information". To many architects, Bechtel places an unnecessarily heavy emphasis on the statistical techniques of behavioural science. Charles Masterson asserts that the present POE models fail because they place technique before ideas;(37) they employ the methods of classical science, which rule out the human agent. Masterson points out, "Evaluation is itself a design process, in which assumptions and conjectures are translated into an examinable format."(38)

Bechtel and Srivastava have recently presented an eleven-step POE process as part of a research project report to the Department of Housing and Urban Development. To many, this process is said to represent the state of the design evaluation art today.(39) According to Bechtel and Srivastava, the POE process should commence with a literature search, as step 1, to find earlier evaluations of similar projects; Bechtel says there are close to 1500 previous evaluation studies to draw on for guidance. In step 2, the POE team talks to various building occupants (management, maintenance and day-to-day users) to get a picture of the total environment. Next, during step 3, a tour of the building is made with the original project architect, building maintenance staff and user occupants, to gain an understanding of original design parameters and intentions and of the way the building actually functions. The following five steps (4, 5, 6, 7 and 8) are concerned with setting up and conducting a valid statistical survey to gather data from the environment's population or a statistical sample, and with analyzing and writing up the results. In step 9 the research is reviewed by the POE team and the POE client, and in step 10 the findings are applied, to fine-tune the evaluated project or to develop a client's new project. During the final step, 11, the evaluation research is put into 'an archive for POE information' where it can be used in future projects or as a basis for the development of knowledge.

37. Charles Masterson, "Evaluating Design," Research and Design, ed. Kevin W. Green (Washington, D.C.: American Institute of Architects Research Corp., July, 1978), Vol. I, No. 3, p. 7.

38. Ibid.

39. Green, p. 7.

This process has one major weakness: the goals or intentions of the project must be reconstructed from discussion with the architect and client. This raises the question of reliability "introduced by retrieving decision information from memory and old files. There is little that can be done to ensure that retrieved information is a realistic representation of the actual decision-making process."(40) This approach also raises the problem which Zeisel calls rationalization by decision makers.(41) These weaknesses are inherent in this approach to POE.

40. Ostrander and Connell, p. 243.

41. Zeisel, p. 40.
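For reference, the eleven steps can be laid out as an ordered checklist, as in the sketch below. The wording paraphrases the description above; the itemization of the survey steps 4 through 8, which the text summarizes collectively, is an assumption, and the flag on step 3 marks the retrospective goal reconstruction just criticized.

    # Bechtel and Srivastava's eleven steps as an ordered checklist.
    POE_STEPS = [
        (1, "literature search for earlier evaluations of similar projects"),
        (2, "talk to occupants: management, maintenance, day-to-day users"),
        (3, "building tour with architect, maintenance staff and occupants"),
        (4, "design the statistical survey"),
        (5, "select the population or a statistical sample"),
        (6, "conduct the survey"),
        (7, "analyze the data"),
        (8, "write up the results"),
        (9, "review by the POE team and the POE client"),
        (10, "apply findings: fine-tune the project or inform a new one"),
        (11, "archive the research for use in future projects"),
    ]

    # Goals and intentions are recovered from recollection, not records,
    # at step 3: the reliability weakness noted above.
    RETROSPECTIVE = {3}

    for number, step in POE_STEPS:
        note = "  [reliability risk: retrospective]" \
            if number in RETROSPECTIVE else ""
        print(f"step {number:2d}: {step}{note}")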
2.6 Relating POE to the Building Delivery Process

The usefulness of building evaluation information to decision makers in the building delivery process depends on a close link between the evaluation process and the design/delivery process.

Building delivery process refers to the set of activities which must be performed in any building project to go from the inception of the project to the occupancy of the completed building. How the steps of the process are divided, the names given to the steps, the emphasis on particular parts, and the methods used to do the many activities can vary considerably.(42) But there is a set of activities which must be completed in every project. Figure 2.4 illustrates a simplified version of the building delivery process.

[Fig. 2.4 The Building Delivery Process: planning, programming, design, construction, occupancy.]

This model is a linear process with five fundamental stages. Planning and programming are the first stages and are concerned with defining the building problem. Design is the development of a solution to the problem defined in the program; it includes the preparation of working drawings and specifications. During construction the building is erected. Finally, when completed, the building is occupied by the users and the delivery process is complete.

Three models have been proposed by Ostrander and Connell which describe different ways in which post-occupancy building evaluation studies can relate to the building delivery process.(43) Each of these models can be considered to represent a point along a continuum from minimum to maximum interaction between the evaluation process and the building delivery process. Interaction is defined here as the opportunity for direct contact between those doing evaluation and those doing design during the building delivery process. The contact between evaluator and designer or client is referred to as collaboration. Data gathering for evaluation involves obtaining information about the building and its use after it is occupied; this is called a cross-sectional study. When an evaluator participates in programming and monitors design and construction activities during the building delivery process, Ostrander and Connell refer to this as a longitudinal study.

42. Examples include: Supra, p. 49; Henry Sanoff, Methods of Architectural Programming (Stroudsburg, Pennsylvania: Dowden, Hutchinson and Ross, Inc., 1977), p. 3; Public Works Canada, Project Delivery System, Users Manual (September, 1976), pp. 12-13.

43. Ostrander and Connell, pp. 241-245.

Model 1 is non-collaborative, employing only a cross-sectional study. The evaluator uses criteria which are set up without reference to the building's programming or design, and they do not necessarily focus on concerns that were influential during the building delivery process. The evaluator usually decides on the main focus of the evaluation study. This model uses only a cross-sectional study for collecting data after construction; it does not include any study of the events that preceded the occupancy of the building. This model represents the majority of the building evaluation studies that have been done to date. The main weakness of this approach is that the results are likely to be of minimum relevance or use to decision makers, since the focus of the study and the criteria used are unrelated to the decision-making process.
[Fig. 2.5 Model 1 - A non-collaborative cross-sectional study: evaluation begins only at the occupancy stage of the delivery process.]

Model 2 is collaborative, employing a cross-sectional study. This model uses two types of data collection: one is concerned with getting information about events which occurred during the delivery process, and the other involves obtaining user reactions to the building after occupancy. Initially, discussions are held with the architect and client to identify the major issues, goals and constraints that influenced the design decision-making. Then a cross-sectional study is done to determine how the building is working. Finally, the results of the cross-sectional study are assessed with respect to the information obtained about the delivery process. The POE process developed by Bechtel would be an example of this type of approach.(44)

This model has the decided advantage of collaboration between evaluator and decision maker. In practice, however, goals are seldom explicitly recorded during the building process and are given to the evaluator as recollections or interpretations. These can easily be influenced by knowledge of the completed building, so their validity as a basis for objective evaluation is questionable.

[Fig. 2.6 Model 2 - A collaborative cross-sectional study: retrospective discussion of the delivery process is combined with evaluation at occupancy.]

44. Supra, p. 54.

Model 3 is also collaborative, and employs both a cross-sectional and a longitudinal study. The added feature of the longitudinal study means that the evaluator becomes a participant/observer in the actual design and decision-making process and collects data about these events. In this approach a close working relationship must be established between the architect, client and evaluator when the decision to build has been made. All the constraints that emerge during the design process, construction and occupancy are documented for use during the evaluation. Criteria and interpretations can thus be more directly related by the evaluator to decision-making influences. The approach to evaluation embodied in this model has many similarities to program evaluation . . . the involvement of the evaluator early in the process, the emphasis on the importance of clearly establishing project goals, and the use of project goals as the basis for evaluation.

[Fig. 2.7 Model 3 - A collaborative cross-sectional and longitudinal study: evaluation parallels the delivery process from planning through occupancy.]

Ostrander and Connell point out that this third model is an "ideal" model, and there is much less experience within the building industry to draw on for its use and development than for the first two. It requires a much heavier time commitment on the part of the evaluator. An evaluation study following Model 3 would have to be extended over a period of years to obtain results, since it parallels the entire building delivery process and is not completed until well after occupancy has commenced. There are, however, some important potential benefits during the process which can help justify this long time commitment. The fact that evaluation is being considered at the outset of a project can provide an incentive to project staff to clearly identify and record important goals and issues, making them more explicit for use within the decision-making process. In addition to their use for post-occupancy evaluation, project goals can also be used for the evaluation of alternative designs during the delivery process. Model 3 provides for the closest integration between evaluation and the building process, increasing the likelihood that evaluation results will be relevant and pertinent to the decision makers.
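The three models differ chiefly in when, relative to the delivery stages of Figure 2.4, the evaluator is collecting data. The rough tabulation below is an illustrative sketch, not taken from Ostrander and Connell.

    # Stage coverage of the three models against the delivery process.
    STAGES = ["planning", "programming", "design", "construction",
              "occupancy"]

    MODELS = {
        "Model 1 (non-collaborative, cross-sectional)": ["occupancy"],
        # Model 2 collects data only at occupancy, though retrospective
        # discussion of the earlier stages informs the study.
        "Model 2 (collaborative, cross-sectional)": ["occupancy"],
        "Model 3 (collaborative, cross-sectional + longitudinal)": STAGES,
    }

    print(" " * 58, " ".join(s[:4] for s in STAGES))
    for model, covered in MODELS.items():
        row = ["x" if s in covered else "." for s in STAGES]
        print(f"{model:58s} {'    '.join(row)}")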
2.7 A Summary of Building Evaluation Approaches

In this chapter a number of evaluation types have been discussed which can be distinguished from each other and from other common evaluation activities.

Architectural criticism is concerned mainly with issues of building esthetics, and its practice is characterized by informality and subjectivity. It has generally ignored the occupant user's values. The field of architectural criticism offers little guidance for the development of a systematic building evaluation process.

The efforts of the Pilkington and Building Performance Research Units served to identify some key issues and useful methods. Both were one-of-a-kind studies, and though the BPRU study recommended the establishment of routine evaluations, neither study established such a process.

The performance concept could potentially provide a basis for regular evaluation in the building process, but there are a number of fundamental obstacles to its widespread application in the building industry. Until new developments occur to overcome these obstacles, particularly in our understanding of the relationships between human behaviour and environmental features, the performance concept will remain a promising possibility.

Pre-design evaluation, as identified by Zeisel, occurs at the beginning of a project and is used for obtaining information about users, environments or buildings from situations which are similar to the one being planned for.(45) Zeisel clearly differentiates this from diagnostic evaluation, which is a form of POE. Though pre-design evaluations deal with the same subject as POEs, their purpose is to obtain information for use in the programming and design of specific building projects. Pre-design evaluations are primarily concerned with data gathering and not with evaluating a project.

45. Supra, p. 51.

Shakedown evaluation occurs immediately after construction of a building and often extends into occupancy. The activity associated with this type of evaluation is commonly referred to as the preparation of deficiency lists. It is limited to evaluating the building against the requirements recorded on working drawings and specifications; its purpose is to find technical deficiencies in construction. Once the deficiencies have been corrected to meet the requirements stated in the technical documents, the process is complete.

Process evaluation is concerned with assessing the effectiveness and efficiency of the project delivery system. There has been little of this type of evaluation in the building industry. Internal process questions about how well resources such as time, money and manpower have been managed and used would be the subject of this type of evaluation. There are two aspects to this type of evaluation which have been identified in program evaluation and which should be considered. One is that process evaluation is essentially an internal matter and only of concern to project management. The other is that the quality of the delivery process has a very direct and appreciable effect on the quality of the building product.
Consequently, process evaluation should be an integral part of building evaluation and as such must be closely related to post-occupancy evaluation.

Post-occupancy evaluation is concerned with the study of a building and its use some time after occupancy, to determine how well the building is satisfying the needs of the occupants.

Impact evaluations look at the effects of projects on the community at large. They assess the economic, social and physical impacts, usually of large building projects such as airports, shopping centres and office buildings, on the community in which they are to be constructed. They can be used to evaluate the likely consequences of planned projects or to assess actual outcomes. One of the main distinctions between impact and post-occupancy evaluation is one of scale: post-occupancy evaluation focuses primarily on the occupant-users of a building project, while impact evaluation focuses more on the collective outside users within the surrounding community. This distinction is by no means clear-cut, and there can be, and often is, considerable overlap. Impact evaluations can look at the effects of building projects with or without direct reference to project goals, though the use of project intentions as the basis of evaluation is the more common approach.

In building evaluation, as in program evaluation, overlaps between the various types of evaluation are evident, and the use of combinations of evaluation types may be very advantageous in some evaluation studies. Though some distinctions can be made, the field is very new, and it would be prudent to remain flexible about the definition of evaluation types related to buildings. For the purposes of this study, post-occupancy evaluation is the main focus of interest.

CHAPTER 3
BUILDING PROGRAMMING - THE BASIS FOR SYSTEMATIC BUILDING EVALUATION

Any type of systematic building evaluation process which intends to use building programs as the basis for evaluation must, to a large extent, be dependent on the form, content and quality of building programs. Therefore, an awareness and appreciation of current building programming practice, its problems and its future directions is important for assessing the viability of a programmatic approach to building evaluation. This chapter briefly reviews the field of building programming, discussing issues which are of particular interest and concern to the development of a programmatic building evaluation process.

3.1 Building Programming

The purpose of building programming is to define a client's building problem in terms that will effectively guide those involved in seeking a design solution. It is primarily a process of problem analysis done by, or in conjunction with, a client organization. The activity of programming is just beginning to be recognized as an independent function in the building process. According to Gerald Davis, "programming emerged as a distinct professional role in the decade after 1965."(1) Consequently, definitions of its scope and content still vary considerably. Sanoff says that "there does not appear to be any consensus among researchers in the field about the availability or desirability of the best type of programs."(2) Variations depend on the nature of the problem being addressed and the beliefs of individual programmers.
Michael Glover (Champaign, Illinois: Industrialization Forum Team and the University of Illinois, 1976), p. 16.
2. Henry Sanoff, Methods of Architectural Programming (Stroudsburg, Pennsylvania: Dowden, Hutchinson and Ross, Inc., 1977), p. ix.

Some programmers see programming as the first step of the design process, which involves problem identification, information collection and information organization. Henry Sanoff refers to this process as "Architectural Programming" and defines a program as follows:

"A program is a communicable statement of intent. It is a prescription for a desired set of events influenced by local constraints, and it states a set of desired conditions and the methods for achieving those conditions. The program is also a formal communication between designer and client in order to determine that the client's needs and values are clearly stated and understood. It provides a method for decision making and a rationale for future decisions. It encourages greater client participation, as well as user feedback. The program also serves as a log, a memory, and a set of conditions that are amenable to postconstruction evaluation."(3)

Pena is another programmer who shares Sanoff's view of architectural programming as one of the first stages of the total design process, but he states emphatically that a clear distinction must be maintained between programming and design.(4) In this approach, the problem statement or program is prepared as the last step of programming, and it acts as an interface between programming and design; but these two activities must be kept separate, because doing design before the problem is completely defined only leads to solutions which are partial and premature. Programming is the prelude to good design. Five steps are defined by Pena for the programming process: (1) Establish Goals, (2) Collect Facts, (3) Uncover Concepts, (4) Determine Needs and (5) State the Problem,(5) and only after the problem statement is made should design be commenced. In order to describe the whole building problem during programming, Pena uses four considerations: function, form, economy and time. Each of these is applied to each of the five steps. By combining the four considerations and the five steps into a matrix format, a programming framework is created which Pena says ensures a comprehensive approach to programming. This framework also serves as a checklist or information index for programming.

3. Ibid., p. 4.
4. William Pena, William Caudill and John Focke, Problem Seeking: An Architectural Programming Primer (Houston, Texas: Cahners Books International, Inc., 1977), pp. 20-21.
5. Ibid., p. 24.

A more inclusive definition of programming is presented in this description by Gerald Davis:

"Programming a building is the process of determining what is needed by its users and by others who are affected by it (such as owners, managers and the public). Programming includes evaluating how the building satisfies these needs."(6)

The addition of evaluation as a part of programming greatly expands the scope of this activity. Davis goes on to distinguish between a functional and an architectural program:

"A functional program documents both the decisions about what the building should provide and the basic data needed for design. An architectural program is prepared by the architect to confirm what the building will be.
The programming role includes obtaining information about the building users and their needs, synthesizing the issues that require management decisions, recommending what should be built, and evaluating the environments that result."(7)

The inclusion of building evaluation as part of the programming role is unique to Davis' view of programming. It strongly emphasizes the potential for a close link between programming and evaluation.(8)

Francis Duffy and John Worthington, who work as programmers in England, describe programming and the programmer's work as follows:

"The brief (note: the word brief is used in England and in parts of Canada in place of program; they refer to the same thing) is sometimes thought of as a schedule of rooms. This is a dangerous over-simplification for two reasons. Firstly, the brief ought to be a continuous process of explaining to an architect what the user, who is always changing, requires. The second reason is that the process of translation should also work the other way. The architect ought to be explaining to the user what design options are available to him. What has happened in the past and what seems to have become unwittingly enshrined in the RIBA Plan of Work and even in the Conditions of Engagement is that architects have assumed that user requirements ought to be fixed, and users have been tacitly encouraged to assume that buildings are rigid and hard to change. In fact both organizations and buildings can change and be designed and re-designed throughout all stages of their life.

6. Davis, p. 15.
7. Ibid.
8. Ibid., p. 23.

User requirements are rarely precisely related to one building. Activities either spill over into several pieces of accommodation or the user requires only one part of a building. Again this contradicts the architect's way of looking at his client through the medium of the particular building which he has to design. More importantly, user requirements change and develop over time. From the client's point of view the fundamental design problem is how to fit his changing organization into a changing stock of space over a long period of time. In this sense, briefing (programming) information is independent of any particular building, old or new. It should be in such a form that it can be used to test any building or buildings or project in the light of a predicted pattern of change.

Briefing is controversial rather than factual. It is about how interests which conflict and compete for scarce resources can be reconciled. So, although a considerable amount of time has to be spent collecting facts about, for example, how many members of staff there are of various kinds, or how frequent are meetings and visitors, the really important part of briefing is persuading the user to make decisions about these facts. Does he really want an organization of such a size in 1984? Should he strive to express through his building the unity and strength of his organization? To what extent should staff be encouraged to feel themselves as autonomous self-regulating groups? What social provision should be made for staff? How permeable should the organization be to the outside world? These are hard questions to answer and can only be satisfactorily answered after a process of debate in which many kinds of people take part. Only after they have been answered can the spatial consequences be finally decided.
Much of the skill of the brief writer lies in ensuring that decisions which are directly related to design and the use of space are made and understood by client and architect."(9)

There is a strong emphasis, in this description, on the importance of considering the definition of the building problem independently of any solution. Some basic questions about the client's plans and intentions must be answered before the building problem can be clearly stated.

3.2 Stating Project Goals

Common to all these descriptions of programming is the idea that programming is a process for determining and explicitly stating a client's building problem. The exact form this statement takes can vary, but it must include some basic statement of the client's/user's goals.

9. Francis Duffy and John Worthington, "Organizational Design", Journal of Architectural Research, Vol. VI (No. 1, March, 1977), p. 4.

During its development work the Building Performance Research Unit concluded that "Today, almost always, the organization is the starting point"; therefore organizational objectives are an appropriate place to begin building performance analysis.(10) When looking at any building project, it is clear, as Duffy and Worthington pointed out above, that consideration must be given to objectives from a number of viewpoints. Conflicts between authorities in organizations and employees, users and the public, etc., "are the very stuff of politics at macro and micro-scales in all organizations, from the United Nations to the school management committee. The designer and evaluator must understand these conflicts, and always be clear that objectives and priorities between them depend on which individuals or groups are selected as generators for the design."(11)

The BPRU found four objectives which are common to most organizations and which have significant implications for building facilities.(12) They are production, adaptability, morale and stability.

(a) Production is a basic objective of most organizations. Organizations are concerned with "changing some resource from one level to another" or "creating a product", and the process for doing this can be a strong determinant of building form. This is most evident in industry, where the form of a factory building is often of great importance to production and a building is shaped around machinery and assembly lines. The implications of production for buildings are often less obvious in non-commercial buildings such as schools and houses, but they are nevertheless present, and should be an important consideration for design.

(b) Adaptability is a second important organizational objective. Survival for any organization is dependent on its ability to change itself in response to changes in the environment. It is likely that some of the most crucial limitations on adaptation are set by physical structure. There are two aspects to adaptability which are of particular concern to buildings: replacement and innovation. These have strong implications for the concepts of flexibility and adaptability in the building system.

10. Thomas A. Markus et al., Building Performance: Building Performance Research Unit (New York: John Wiley and Sons, 1972), p. 5.
11. Ibid.
12. Ibid., pp. 5-6.

(c) Morale refers to the objective many organizations have of wanting to "keep their members happy". The BPRU found that this is a very important objective of most organizations. Even many commercial organizations pursue this goal for its own sake without any ulterior motive of increased productivity.
The quality of an environment and the way it is conceived and organized is likely to affect achievement of this objective.

(d) Stability is the fourth general objective of most organizations. "The turmoil and constant variation which the above three objectives either create or deal with, inevitably give rise to difficulties within the organization in terms of its stability or the degree to which it exists as a single entity over time and space. As a consequence a further organizational objective will be to maintain the organization in a stable state so that, although production is being maintained or increased, adaptation is taking place and morale is being encouraged, the organization continues to exist in a recognizable form."(13) The physical environment occupied by an organization can contribute to achieving all four of the above objectives.

Pena emphasizes that an important part of programming is establishing goals and ensuring that they are useful and relevant to the architectural design problem. "Project goals indicate what the client wants to achieve and why."(14) Goals must be practical and there must be a concept for implementing a goal. Pena cautions programmers to beware of lip-service goals that have no integrity or practicability. He also points out that "Goals...must be tested for pertinence to a design problem and not to a social or some other related problem" because "trying to mix problems and solutions of different kinds causes never-ending confusion ... a social problem calls for a social solution."(15)

13. Ibid., p. 6.
14. Pena, Caudill and Focke, p. 58.
15. Ibid., p. 59.

3.3 Programming Problems

The possibility of using the building program as the basis for post-occupancy evaluation has already been pointed out by some programmers.(16) There are, however, a number of problems associated with programming which could hinder this development.

16. Supra, pp. 64-65.

First, few good building programs are prepared. Building programming of the type described here is not yet very widespread. The traditional "schedule of accommodation" approach, consisting of a list of room names and area requirements with possibly a brief qualifying remark about location or some characteristic, is still the prevalent form of communication between a client and designer. Other programs which have been prepared are like metropolitan telephone books, which only serve to overwhelm or confuse the designer rather than to clarify the problem. As yet the preparation of good building programs is the exception rather than the rule.

Second, even when a good program has been prepared, designers do not always use it to guide their work. They appear either to gloss over or to ignore it and design what they believe the client wants, or often believe they have followed it but have actually misinterpreted it. There are some genuine communication problems here. Part of the difficulty seems to be associated with language, that is, the lack of a common vocabulary between programmers and designers. There is also the problem of translating program concepts into design concepts, which has been referred to as the "creative leap" problem.(17) How and when the transition from words to form is made is not clear. Some study of this process is now being undertaken. It is essentially study of the creative design process and may lead to a better understanding of how to more effectively communicate programming information.
In the meantime, one approach for dealing with this problem has been to involve the architect in the programming as an observer/participant with the programmer directing the team, and then to have them switch roles during design.(18) This provides for overlap and continuity between programming and design.

If the building program is to be used as a basis for post-occupancy evaluation it is important that the same program also be used as the basis for design. Otherwise, evaluation of the building with respect to program objectives will have little meaning. Therefore, the resolution of the programming difficulties discussed above would greatly increase the usefulness of the building program as a basis for post-occupancy evaluation.

3.4 Summary and Implications

There is a clear trend in the development of building programming practice towards a more systematic analysis, and explicit presentation, of building problems and user requirements in building programs. This includes the statement of goals and objectives.

Many large client organizations (Public Works Canada, Canadian Penitentiary Services, provincial health departments and universities) are using specialized programming services to produce programs for guiding design consultants, and providing a basis for assessing the results of their work.

17. Davis, p. 17.
18. Dr. Richard Seaton described the use of this process on a building project at the University of British Columbia.

This trend increases the likelihood of having more building programs in the future which can provide a basis for post-occupancy evaluation.

There is opportunity, at this time, to influence the development of programming in such a way that it takes the needs of the post-occupancy evaluation process directly into account during building programming. This requires a clear statement of project objectives and evaluation criteria. The program evaluation field provides the model for the inclusion of evaluation requirements during program development. The development of a programmatic approach to post-occupancy evaluation should be closely associated with the development of programming.

The information resulting from evaluation studies can be of benefit to programmers by providing them with an assessment of the results of their contribution to the building process. This in turn provides a basis for guiding further refinements to programming practice. There is, therefore, a mutual dependency between programming and post-occupancy evaluation which can be used to the advantage of both.

CHAPTER 4

SYNTHESIS - A PROGRAMMATIC APPROACH TO POST-OCCUPANCY EVALUATION

The previous three chapters reviewed ideas from program evaluation, past attempts at building evaluation and the emerging fields of POE and building programming. This chapter will examine how many of these ideas can be combined to form a programmatic approach to post-occupancy building evaluation. At the end of the first chapter, three key ingredients were identified from program evaluation experience which are needed to establish any systematic evaluation process: a framework for relating evaluation to the program it is to study, systematic study methods, and a clear definition of goals.

4.1 A Conceptual Model for Programmatic Evaluation

Current POE approaches, such as Bechtel's, are using systematic data collection and analysis techniques.
Their main weakness was found to be their reliance on the reconstruction of project goals and intentions rather than a reliable record of actual goals and evaluation criteria.(1) A programmatic approach to POE - that is, POE with reference to project goals stated in a building program, and with the consideration of evaluation requirements when establishing goals - could greatly strengthen current POE development.

As noted in the last chapter, the idea of using the goals and intentions recorded in the building program for evaluation has been suggested by several authors. What is lacking is an evaluation process which brings the consideration of evaluation directly and explicitly into the programming activity, so that goals and criteria are established which will meet the needs of post-occupancy evaluation.

1. Supra, p. 55.

A building evaluation process closely resembling that used in program evaluation could satisfy this requirement. Masterson has pointed out that evaluation should be considered a kind of design process in itself.(2) An evaluation problem is identified, a study approach is derived, the study is conducted and the results are reported and implemented. The six step process, identified in Chapter 1, for doing program evaluation follows the sequence of a design process and could be used as a general guide to programmatic POE. The six steps are listed below.

1. Problem definition
2. Goal definition and recording
3. Determination of criteria
4. Study plan development
5. Data collection and analysis
6. Reporting and implementing results

Ostrander and Connell's third model was found to provide the strongest link between evaluation and the building delivery process.(3) Through this conceptual model the six steps of the program evaluation process can also be related to the building delivery process as shown on figure 4.1 below.

2. Supra, p. 54.
3. Supra, p. 59.

[Fig. 4.1 - Conceptual framework for programmatic POE: the delivery process (planning, programming, design, construction, occupancy) shown in parallel with the evaluation process (1. define problem; 2. identify and record goals; 3. establish evaluation criteria; 4. prepare study plan; 5. collect and analyze data; 6. present results), with arrows marking the interactions between the two.]

Step 1, definition of the evaluation problem, would occur early in both the evaluation and building process. It would be done as part of the longitudinal study of the evaluation, prior to the setting of goals in the building delivery process. Step 2, identification and recording of goals, and step 3, determination of criteria, should be jointly undertaken by members of the project and evaluation team. These two steps would occur during programming in the delivery process, when goals and objectives are being set. Step 4, development of a study plan, is primarily the responsibility of the evaluators and would be done some time after goals and criteria are established.

There are two aspects to step 5. One aspect is concerned with monitoring the design and construction process to determine if changes are made which could strongly affect or help explain the outcomes of the building project. Data should be gathered on these decisions, the way in which they were made and the reasons for them. Of particular interest are the decisions which have special significance to the achievement of goals - for example, major changes in design or use of materials due to unexpected technical, production or delivery problems. The other aspect of step 5 is the cross sectional study, which involves study of the building and its users after occupancy to obtain performance data. Depending on the nature of the issues being addressed by the evaluation, there may be two or more cross sectional studies conducted at different time intervals to provide comparative data. Step 6, present results, includes the reporting of findings to those requesting the study, and may include recommendations for changes in the situation under study, for new standards, or for approaches to similar projects in the future.

The arrows between the building process and the evaluation process on figure 4.1 represent the areas where specific interactions between activities in the evaluation and building delivery process would take place. In program evaluation, and again in the discussion of Ostrander and Connell's models, it was pointed out that a strong link between evaluation and goal setting is essential for a programmatic approach to evaluation. The first arrow, between building programming and evaluation, identifies this link and conveys the idea that this should be a two-way communication process between evaluators and those in the delivery process. The two arrows from design and construction to the evaluation process point out the activity of monitoring changes during the delivery process. These changes could drastically affect results and therefore are of interest to the evaluation. Finally, the shaded areas at the end of the process which cut through occupancy represent the cross sectional studies of the completed building and its occupants. This is the area to which most of the POE studies to date have been limited. As noted above, one or more cross sectional studies could be done depending on the nature of the evaluation problem.
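The phase linkages just described can also be stated in compact schematic form. The short sketch below is offered only as an illustration - the notation (Python) and all names in it are assumptions of this presentation, not part of any evaluation system cited in this study - but it records the same correspondence between evaluation steps and delivery phases shown in figure 4.1:

    # Illustrative sketch only: hypothetical names, not from any cited source.
    # Records which delivery phase(s) each evaluation step is tied to.
    EVALUATION_STEPS = {
        1: ("Define problem", ["planning"]),
        2: ("Identify and record goals", ["programming"]),
        3: ("Establish evaluation criteria", ["programming"]),
        4: ("Prepare study plan", ["design"]),
        5: ("Collect and analyze data", ["design", "construction", "occupancy"]),
        6: ("Present results", ["occupancy"]),
    }

    def steps_active_in(phase):
        """Return the evaluation steps linked to a given delivery phase."""
        return [name for name, phases in EVALUATION_STEPS.values()
                if phase in phases]

    # steps_active_in("programming") returns the two steps undertaken
    # jointly with the project team:
    # ['Identify and record goals', 'Establish evaluation criteria']

A record of this kind makes explicit which evaluation activities must be planned for in each phase of the delivery process.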
4.2 A Basic Evaluation Process

Ostrander and Connell have pointed out that there is no previous experience within the construction industry with this kind of building evaluation process.(4) There have been attempts at certain aspects of evaluation, and lessons learned which could contribute to the development of programmatic POE. The six step program evaluation process can serve as a framework for identifying the basic activities needed for a programmatic approach to evaluation, and for identifying where existing experience can be incorporated and where new development is needed.

Step 1. Defining the Evaluation Problem

This first step, as it is proposed here, is a completely new activity related to the building process. It looks at the building process from an evaluative viewpoint and introduces the consideration of evaluation as a design process in itself. This includes identification of the evaluation client, the client's purpose and how the results of the evaluation are to be used.

The Purpose of Evaluation

Determining the real purpose behind any evaluation is of fundamental importance to understanding what is expected of the evaluation and in deciding whether or not an evaluation should be undertaken. In the case of POE there is agreement that the purpose should be to obtain information about the usefulness of buildings, and to use that information in future decision making about building design and use.(5)

4. Supra, p. 60.
5. Michael Brill, "Evaluating Buildings on a Performance Basis," Designing for Human Behavior: Architecture and the Behavioral Sciences, ed.
Jon Lang, Charles Burnette, Walter Moleski and David Vachon (Community Development Series; Stroudsburg, Pennsylvania: Dowden, Hutchinson and Ross, Inc., 1974), p. 316; Edward R. Ostrander and Bettye Rose Connell, "Maximizing Cost Benefits of Post-Construction Evaluation", The Behavioral Basis of Design, Book 1: Selected Papers E.D.R.A. 7, ed. Peter Suedfeld and James A. Russell (Stroudsburg, Pennsylvania: Dowden, Hutchinson and Ross, Inc., 1976), p. 241; and Supra, p. 52.

There must be a genuine desire for objective information on the part of those sponsoring evaluation, and evaluators should get a clear understanding of where and how results will be used. It is during this early stage of a study that an evaluator should ensure that the evaluation is not being requested for one of the pseudo-purposes pointed out by Suchman in program evaluation - eyewash, whitewash, submarine, posture, or postponement.(6) If any of these pseudo-purposes becomes evident, then there is serious question as to whether or not the evaluation is worth undertaking. This would be true for both program and building evaluation.

Client Viewpoints

There are many different participants associated with any building project. They include owner, financier, authority, planner, programmer, designer, constructor, operator, project manager, contractors, tenants, occupants, and the public. Each of these has an interest in the building project, and these interests are often in conflict. The resolution of these conflicts is a major aspect of decision making throughout the building delivery process. An evaluator must be aware of these different viewpoints, the interests they represent and their role and importance in a building project. The focus and content of the issues to be considered during an evaluation will depend on who is requesting the evaluation, and whose viewpoint and interests are taken into account. According to Bechtel, the client and/or architect sponsoring the POE should select the issues to be evaluated.(7)

6. Supra, p. 18.
7. "P.O.E. - The State of the Art", Research and Design, ed. Kevin W. Green (Washington, D.C.: American Institute of Architects Research Corp., July, 1978), Vol. I, No. 3, p. 7.

Use of the Evaluation Results

Another important consideration when defining an evaluation problem is identification of the end use of evaluation results. Two opposing philosophies which underlie the uses of evaluation information can be readily identified: accountability and learning.(8) A strong argument has been made for emphasizing the learning aspects of evaluation, particularly when initiating evaluation studies. Accountability presents a potential threat for practitioners who are responsible for a building project, and their cooperation is very important in doing an evaluation. This has been pointed out by program evaluators.(9) If the learning aspects of evaluation and the potential benefit of the resulting information to practitioners can be emphasized, then the likelihood of obtaining their cooperation is increased. This strategy has been strongly recommended by Ostrander and Connell "if post-construction evaluation research is to be welcomed by practitioners."(10) By clarifying the planned uses of evaluation results, the likelihood of designing an effective evaluation study is greatly increased.

Step 2. Identify and Record Goals

The first direct link between the building delivery and evaluation process should occur during this second step.
The formulation and recording of goals and objectives is a prerequisite to doing programmatic POE as it is proposed here. There is a great deal of confusion as to what is meant by the terms goal and objective. Pena defines a goal as "The end toward which effort is directed. It suggests something attained only by prolonged effort."(11) He distinguishes between project goals, concerned with product, and operational goals, concerned with process. Goals are established in close conjunction with the client. Pena points out that a number of terms are often used synonymously for goals: objectives, aims, missions, purposes, reasons, philosophies, aspirations and policies. Objectives are "a more detailed delineation of a particular goal," implying something tangible and immediately attainable.(12) "Goals tend to be general; objectives tend to be specific. Objectives are more time bound and quantitative and therefore a better measure for evaluating the degree of achievement than generalized goals."(13) Pena offers the following example of the distinction between goal and objective:

"Goal: To serve as many students from the state of Texas as possible.
Objective: To increase enrollment by the amount of 1000 students per year."(14)

The Building Performance Research Unit used the terms goal and objective to mean the exact opposite of Pena. According to the BPRU, "Objectives can be seen as long term aims or basic philosophical, mystical or religious desires; such things as staying alive, enjoying life, finding self or truth can be thought of as objectives ... in order to achieve these objectives it is necessary to be successful in doing many less long term things. In other words, many goals must be reached in order to achieve an objective."(15)

8. Ostrander and Connell, p. 241.
9. Supra, p. 22.
10. Ostrander and Connell, p. 241.
11. William Pena, William Caudill and John Focke, Problem Seeking: An Architectural Programming Primer (Houston, Texas: Cahners Books International Inc., 1977), p. 95.
12. Ibid., p. 96.
13. Ibid.
14. Ibid.
15. Thomas A. Markus et al., Building Performance: Building Performance Research Unit (New York: John Wiley and Sons, 1972), pp. 1-2.

The meaning of terminology in this area is certainly confused. There is agreement that a distinction can be made between more and less precise statements of intentions. Until some general agreement can be achieved on a single meaning for these terms, they should be clearly defined whenever they are used. Program evaluators seem to share the same confusion, and some use these terms synonymously.(16) There is clearly a more extensive use of the term "goals" as the basis of evaluation (the goal-attainment model), and emphasis has been placed on the statement of goals in precise measurable terms.(17) In this study Pena's definitions have been followed.

16. Edward Suchman, "Action for What? A Critique of Evaluation Research", Evaluating Action Programs, ed. Carol H. Weiss (Boston: Allyn and Bacon Inc., 1972), pp. 64-65.
17. Carol H. Weiss, Evaluation Research: Methods for Assessing Program Effectiveness, ed. Herbert Costner and Neil Smelser (Prentice-Hall Methods of Social Science Series; Englewood Cliffs, New Jersey: Prentice-Hall Inc., 1972), pp. 26-27.
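Pena's distinction can be given a concrete, checkable form. The sketch below is illustrative only; the record layout and field names are assumptions of this presentation and are not drawn from Pena. It encodes his Texas example, with the objective stated in the measurable, time-bound terms that evaluation requires:

    # Illustrative sketch: layout and field names are assumed, not Pena's.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Objective:
        statement: str   # specific and immediately attainable
        measure: str     # what is to be counted
        target: float    # the level to be reached
        period: str      # the time frame that bounds it

    @dataclass
    class Goal:
        statement: str   # the general end toward which effort is directed
        objectives: List[Objective] = field(default_factory=list)

    goal = Goal(
        statement="Serve as many students from the state of Texas as possible.",
        objectives=[Objective(
            statement="Increase enrollment by 1000 students per year.",
            measure="enrollment increase",
            target=1000,
            period="per year")])

Stated this way, each objective carries its own measure and target, which is exactly what makes it "a better measure for evaluating the degree of achievement" than the generalized goal above it.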
Therefore the programming methods used to state, organize and classify goals during programming will effect evaluation. Consideration of evaluation requirements must be made during programming. Some of the basic problems of programming discussed in the last chapter are therefore of importance and concern to evaluation. Evaluators must be aware of them. 16. Edward Suchman, "Action for What? A Critique of Evaluation Research", Evaluating Action Programs, ed. Carol H. Weiss (Boston: Allyn and Bacon Inc., 1972), pp. 64-65. 17. Carol H. Weiss, Evaluation Research: Methods for Assessing Program  Effectiveness, ed. Herbert Costner and Neil Smelsor (Prentice-Hall Methods of Social Science Series; Englewood Cliffs, New Jersey: Prentice-Hall Inc., 1972), pp. 26-27. Programming is in its early stages of development and there are many dif-ferent approaches used which put more or less emphasis on goal setting. There is much variation on the subject, scope and content of goals and objectives. Pena has distinguished between four kinds of project goals: motherhood, lip-service, inspirational and practical which have varying degress of usefulness in the design process and also to e v a l u a t i o n . A n awareness of these types of goals and their implications is important to both programming and evaluation. Since program goals have such strong implications to evaluation, it can be argued that specific consideration of evaluation should be made during pro-gramming and, to ensure this, evaluators should participate directly in the programming activity. The involvement of evaluators in goal setting is strongly favoured by program evaluators. Evaluators can make valuable contributions to programming as a result of their experience with the results and implications of goals learned from past evaluations. A prob-lem program evaluators have found, is that when conflicts occur during goal setting between meeting the needs of a program or project and evalu-ation, that the project needs take precedence. No one argues with this priority, but it can weaken evaluation and care must be taken to try to meet the needs of both activities. Characteristics of Goals Program evaluators have found that to be useful for evaluation purposes goals must have certain characteristics and these same characteristics apply to goals which are to be used in building evaluation. To be useful for evaluation goals must be: . clear and specific as to what is to be achieved. . acceptable (to those responsible for implementation). . realistic and attainable. . relate logically to higher objectives or goals. 18. Pena, Caudill and Focke, p. 98. . measurable - ie. permit measurement of achievement. . designed to permit the development of alternatives ie. objectives should not define the methods. . expressed, communicated to, and understood by aU_ concerned. Pena cautions strongly against confusing incompatible goals and solu-tions/^) A social problem cannot be solved by a building project. Care must be taken to ensure that appropriate goals are being identified and used for building evaluation. There are both latent and manifest goals in the building process. Since the lack of clearly stated goals will be perhaps one of the greatest diffi-culties in establishing programmatic evaluation, the recognition of latent goals and consideration of their significance and how to state them will be an important step in developing an effective evaluation process. 
The Classification of Goals Sorting out and classifying goals is the major concern of building program-ming and as pointed out in the last chapter there are different views on the scope and content of what this includes. It varies according to the beliefs of different programmers and the nature of the problem being addressed/20) Those involved in social programs and program evaluators have had to deal with a similar situation. Suchman suggested a classification system for organizational goals and objectives based on three distinguishable levels of responsibility and decision making within organizational structures; field, supervisory and central/21) These apply to most organizations and provide a convenient and useful way of classifying goals and objectives for both program development and program evaluation purposes. 19. Supra, p. 69. 20. Supra, pp. 66-67. 21. Supra, p. 21. A classification system has been developed by Francis Duffy for distin-guishing between different types of building scale. He distinguishes be-tween three levels of building environment scale called shell, scenery and set. (22) T n j s classification system has been developed mainly for use in office building environments but seems to be relevant to other building types as well. Each of the three categories distinguishes a different part of the physical environment. The shell category includes the most perman-ent building elements which usually have a minimum life span of 40 years. Scenery refers to the next category of environmental detail. Examples of scenery elements are, fixed interior walls and service lines to group areas. The items in this category are more flexible than shell elements and usually less costly to alter. They are changed when major occupant reorganizations occur, on average every 7 years. Finally the set category is made up of the most flexible environmental elements such as furniture and free standing partitions which can be moved easily at rela-tively little cost. These are almost constantly being altered in office buildings to adjust the environment to the ever changing organizational structure. Duffy's classification system offers not only a convenient way of looking at a building facility, it also distinguishes between parts of the building envi-ronment which have different, identifiable implications to the way an organization fits into a building facility. Decisions about shell have the most costly, long term implications to an organization. Decisions about items in the set category are the most easily changed. Shell, scenery, and set correspond to the three levels of organizational goal classification identified by Suchman; central (shell), supervisory (scenery), and field (set). These concepts could provide a way of sorting out organizational objectives into appropriate levels of consideration and in turn of relating these decision making levels to appropriate aspects of the building environ-ment. This approach could be useful for programming and design as well as evaluation. 22. Francis Duffy, "Office Building Technical Study 1: The Place and the Process", The Architects Journal, Vol. II (May, 1973), p. 1063-1067. 
The Relationship of Organizational Goals and Buildings

The BPRU's model can provide a valuable guide for understanding the relationship between project goals and the building system.(23) The consideration of this relationship can help evaluators determine which are the important goals and how they influence the development of the project and the completed building through the activity, environment and building subsystems, and, conversely, how well the building has realized the project goals.

Step 3. Criteria for Measuring Selected Goals

If goals are to serve as the basis for evaluation, then along with their formulation and recording must go the establishment of related measurement criteria. The problem is to find or develop measures which capture the essence of the issue being evaluated. Ideally these criteria must enable an evaluator to reduce, by some operational procedure, the aspect of the building or its use under study so that it can be analyzed by systematic methods and techniques.

The BPRU distinguished three categories for the criteria they used in their evaluation: existing, modified existing and newly developed. There is a strong advantage to using existing criteria where possible. If criteria have been used before, there will be data from other studies which can be used for comparison. The BPRU emphasized that evaluation criteria can only be improved through repeated use and modification. Existing criteria should therefore be used whenever possible.

There are some existing criteria available for building evaluation. Besides those used by the BPRU, criteria have been developed for other evaluation studies, particularly those related to man-environment studies. Bechtel estimates that around 1500 evaluation studies have been done since 1973.(24) Many of these evaluation studies have followed Ostrander and Connell's models 1 and 2, where criteria for evaluation were established after the project was completed.(25) These may not apply directly to the measurement of project goals. Even if they are not directly applicable to programmatic evaluation, they may be usable in a modified form. A very interesting developmental study would be the cataloging of the criteria used in these studies, with an analysis to determine their applicability to measuring the achievement of project goals.

Where existing criteria are not available, modifications must be made to existing ones or new ones must be developed. There are some general considerations which have been suggested by program evaluators to guide criteria development:

- nature of the project and of the goals to be measured
- use and type of indicators required by practitioners or managers in the delivery process
- significance of the goal to project success
- susceptibility to management control
- kinds of decisions the users of evaluation results are often required to make
- feasibility of data collection using particular criteria
- existing internal and external information available
- gestation period (time lag)
- frequency of data collection
- external factors influencing goal achievement.

These should all influence the development of new criteria.

23. Supra, p. 39.
24. Research and Design, p. 32.
25. Supra, pp. 57-58.
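One way to keep these considerations visible during criteria development is to record them alongside each criterion as it is established. The sketch below is illustrative only - every field name and value in it is a hypothetical assumption of this presentation - but it shows how a single criterion might carry the considerations listed above through the rest of the study:

    # Illustrative sketch: a record for one measurement criterion.
    # All field names and values are hypothetical.
    criterion = {
        "goal": "Increase enrollment by 1000 students per year",
        "indicator": "registered students per academic year",
        "origin": "existing",            # existing | modified existing | newly developed
        "data_source": "registrar's records",
        "collection_frequency": "annual",
        "gestation_period_months": 12,   # time lag before achievement is measurable
        "management_control": "partial", # susceptibility to management control
        "external_factors": ["demographic trends", "funding levels"],
    }

A catalogue of such records, accumulated across studies, would also serve the developmental work suggested above of analyzing existing criteria for their applicability to measuring project goals.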
As pointed out in Chapter 2, no single criterion has been found to measure a building's overall success. This point was strongly stated in the conclusions of the Pilkington study,(26) and Brill identified it as a fundamental problem in establishing building evaluation on a performance concept basis.(27) With the present state of the art, evaluation criteria can only be related to parts of buildings and their use. Bechtel says it is up to the client and/or architect on the POE team to identify the elements of greatest importance and to limit the evaluation accordingly. Development of criteria would be left up to the evaluator, but, as pointed out by program evaluators, it is very advisable to get the concurrence of the client/architect before proceeding with any measurement criteria.

26. Supra, p. 38.
27. Supra, p. 46.

Step 4. The Design of an Evaluation Study Plan

The design of any evaluation study plan will depend on the purpose of the evaluation, the issues under study and the kinds of criteria being used. Consideration of these will help in structuring the best approach by pointing to what kind of data gathering will be required and to how and where it should be done.

Using a Model to Guide the Study Plan

Program evaluators have found that in preparing an evaluation study plan it is often useful to make a model of the program to be studied. Such models can facilitate an evaluation study by helping evaluators identify and focus on the important variables of the program. The model can also be used to identify key linkages between goals and outcomes, and to point to those aspects of a project which have significantly contributed to these outcomes. Building projects are similar to programs in that they are a complex set of activities aimed at achieving a purpose. Models can be constructed to represent various aspects of the project process, or the resulting building.

There are two basic models used in program evaluation which were discussed earlier: the goal attainment model and the systems model.(28) They have both been found useful in guiding evaluation studies. Choosing between them depends on the nature of the evaluation study being planned.

28. Supra, pp. 23-24.
By following these models a certain degree of rigor has been maintained in evaluation studies. The reliability of results from these evaluations are considered quite acceptable for program 29. Supra, p. 39. 30. Supra, p. 26. evaluation purposes though they are open to some challenge.(31) Non-ex-perimental study approaches, such as comparative studies with no controls are also used, but the results are far less attributable to specific aspects of a program. If the information generated by evaluation is to be more reliable than that gained from subjective informal evaluation, then syste-matic procedures must be used as much as possible in the study design. The applicability of these various study designs to building evaluation must be considered. Common to all the program evaluation designs examined is a systematic study approach where stated goals are used as the basis of evaluation - clearly, this is applicable to building evaluation. The classical experimental design is not applicable to building or building use evaluation for the same reasons it is not useful for program evaluation. Circumstances associated with the action setting in which evaluation takes place do not permit adequate control of variables. Quasi-experimental designs may be applicable to the study of building user satisfaction. Where changes to the behaviour of building occupants is an objective of a project, a time series or other quasi-experimental design may be useful. Great care would have to be taken to ensure that anticipated changes were relatable to building features. Also, it must be noted, that such a study would be limited to a specific and rather narrow aspect of building evaluation. Non-experimental designs have already been used for some evaluation studies concerned with user satisfaction. Until more evaluations are undertaken it is difficult to further assess the extent of the applicability of these study designs to post-occupancy evaluation. Scheduling The scheduling of data collection is another important question to be re-solved as part of preparing the study plan. Sufficient time must be allow-ed to pass after occupancy of a building for the regular use patterns to become established and for the building to be "broken-in". The amount of time this will take depends, to some extent on, what is being evaluated. 31. Supra, p. 27. Three to six months is considered the minimum time needed for users to accommodate themselves to a building and to develop regular patterns of use. Two years has been identified as the optimum time to allow for use patterns to develop. (32) jf a building's energy consumption characteristics are being evaluated then perhaps more than two years may have to pass before the necessary cyclical weather conditions have been experienced. Data gathering can also be done at several different points over an exten-ded period of time (a time series study). Three to four years may be needed to gather comparative information on goals such as flexibility or adaptability. Finally, the evaluation of operations and maintenance features may need to wait 5 years or more before useful data can be obtained. Summative vs Formative Building Evaluation The distinction between formative and summative evaluation is applicable to building evaluation. The thorough evaluation of a completed building project would be a summative evaluation. Buildings and their use do not have a termination point in the same sense that programs do. 
Once constructed, buildings are relatively permanent physical structures which support different occupancies over a long period of time, often 25 or more years. Buildings themselves continue to exist over this lifespan while occupancies may change. What does have a clear end point is the building delivery process, which ends with the commissioning and occupancy of a building. A summative evaluation would be done at the conclusion of this delivery process. The cross sectional study described in Ostrander and Connell's models could be considered summative in nature.(33) The BPRU's study of the school at Kilwinning would be an example of a summative building evaluation.(34)

32. Supra, p. 53.
33. Supra, p. 57.
34. Supra, p. 41.

In the case of the BPRU evaluation, a study was done where the accuracy and reliability of results was of particular importance. The time was taken to be as thorough and comprehensive as available techniques would allow. The scope of the evaluation was broad, and it concentrated on looking at many aspects of a single building project. This is what a summative type evaluation should normally do.

Formative evaluations are concerned with obtaining information for use in on-going development during the course of a phased building program. A pilot project where a particular building type is being developed, such as a prototype nursing station or a general purpose office building, would be a situation in which formative evaluation would be appropriate. Such studies would be characterized by short time frames, and the number and complexity of issues under study would have to be limited accordingly.

Step 5. Data Collection and Analysis

There are two aspects to the data gathering activity. One is obtaining data on important decisions or changes made during the delivery process. The second is gathering data about the building and its occupants after the building is completed.

A new "Design Log Method" has been proposed by Mayer Spivack for keeping track of design information during the delivery process. It offers a means of carefully monitoring the building design process. "The Design Log is a log that follows the whole process of design from first meeting right on through."(35) As each part of the design project comes up for discussion, the architect jots down its particular design requirement. He then notes the "treatment" that satisfies the particular requirements of the space. This is done in the company of the client, who then knows exactly how and why the design is taking shape. A program can be prepared in parallel with the job. (A schematic sketch of how such a log might be organized is given at the end of this step.)

35. Mayer Spivack, "The Design Log: A New Informational Tool", American Institute of Architects Journal (October, 1978), pp. 76-78.

"Then", Spivack says, "when the program is finished and we get into actual design on the drawing board, every design decision that has any significance at all has a reason. There's always a reason for a design decision, and the architect simply notes what that reason is by writing it down in the Design Log. This allows us to take those decisions through to final design, knowing all the time why we're doing these things. It allows us then to go to post-occupancy evaluation, never forgetting from the moment we began what we were trying to do in the program.
Which means that, for the first time, post-occupancy evaluation is organically linked to program."(36)

This design log method is still in its early stages of development, but it has already proven useful in several projects and it could be a key to the establishment of programmatic POE.(37)

36. Ibid.
37. Ibid.

Data collection after building occupancy is the cross sectional part of the evaluation and involves the systematic gathering of data through observation, measurement or compilation. Data on various aspects of building performance and use can be obtained from a number of sources. Some of the more common ones are listed below.

1. Existing records and statistics: building and O&M records, drawings, study reports, building program, plans, cost records, schedules, including other evaluation and research studies
2. Competent project personnel (programmers, designers, contractors)
3. Clients and users
4. Citizens at large
5. Knowledgeable individuals outside the project
6. Building operations
7. The building system itself (see Figure 4.2 below)

The Data Gathering Domain

The BPRU model can be used to help define a domain where most of the data gathering about a building will take place. This domain is outlined on Figure 4.2 below. It includes the building system, environment system and activity system. Within the area of concern of these subsystems most of the data needed about the building itself will be found.

[Fig. 4.2 - Data Gathering Domain: the building, environment and activity systems, with the resources system (cost of provision) on one side and the objectives system on the other; the data gathering domain is outlined around the first three.]

Data Gathering and Analysis Techniques

There are a growing number of data collection techniques available for use in post-occupancy evaluations. POE study techniques used by social scientists, such as Bechtel, have been limited to those techniques broadly accepted as standard measuring devices by the scientific community. There are many other techniques available from the research fields (basic, applied, operational and evaluative) which may be adapted for use in building evaluation.

If objective and reliable results are to come from an evaluation study, then it is important that systematic methods be used for collecting and analyzing data. As was pointed out earlier, the traditional architectural approach to evaluation lacks objectivity. There has generally been little scientific research experience among architects, so the skills for doing systematic evaluations are not readily available from within the architectural profession. There is much controversy in the emerging POE field about the extent to which rigorous scientific methods are appropriate or necessary. Bechtel's approach to POE emphasizes the use of scientific methodology in data gathering to ensure accuracy. Many architects consider that highly statistical approaches like Bechtel's amount to overkill, and they are fully satisfied with evaluation data gathered in more casual ways.

The BPRU study demonstrated the use of several techniques for looking at different aspects of a building: cost, use, physical properties, management and function. Clearly, there are many techniques now available for data gathering and analysis in these areas. They need to be further developed and new ones introduced. This can only be done by doing building evaluations and critically reviewing the study methods used.
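As noted earlier in this step, Spivack's Design Log keeps requirement, treatment and reason together for every significant design decision. A minimal sketch of how such a log might be organized is given below; it is illustrative only - the entry layout and all names are assumptions of this presentation, since Spivack's log is a working document rather than a formal data structure:

    # Illustrative sketch of a design log in the spirit of Spivack's method.
    # The entry layout and all names are assumed, not Spivack's own.
    from dataclasses import dataclass, field
    from typing import List, Tuple

    @dataclass
    class LogEntry:
        project_part: str   # the part of the design under discussion
        requirement: str    # the design requirement jotted down
        treatment: str      # the design response that satisfies it
        reason: str         # why the decision was made

    @dataclass
    class DesignLog:
        project: str
        entries: List[LogEntry] = field(default_factory=list)

        def record(self, part: str, requirement: str,
                   treatment: str, reason: str) -> None:
            self.entries.append(LogEntry(part, requirement, treatment, reason))

        def rationale_for(self, part: str) -> List[Tuple[str, str]]:
            """Recover treatments and reasons for one part, e.g. at POE time."""
            return [(e.treatment, e.reason)
                    for e in self.entries if e.project_part == part]

Kept in this form from the first meeting onward, such a log supplies the record of decisions and reasons that step 5 requires, and links each of them back to the program.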
Step 6. Reporting and Implementing Evaluation Results

In communicating evaluation findings to decision makers, building evaluators will have to deal with several general problems which have been identified by program evaluators and which are common to all fields of evaluation. There is a basic difference in orientation and interest between practitioners and evaluators. Practitioners are interested primarily in the usefulness of results for their particular purpose, while the evaluators' interests often center around the correctness of the results and the elegance of the presentation. The values of practitioners, not the scientific canons of objectivity and truth, are likely to govern the acceptance and use of recommendations provided by evaluators, regardless of the size, scope or sophistication of the study. Evaluators must therefore be keenly aware of the values and concerns of practitioners and try to address them as directly as possible in reporting the study findings. Reports should be made to draw out the important findings. Care should be taken to avoid research jargon, or to explain it where it must be used. If an evaluation is addressed to practitioners, the evaluation study is essentially fulfilling a service function and the evaluator should take the action necessary to ensure that the client is getting good service. The study results should be in an understandable and useful form.

There is another problem, which could be termed "information overload", associated with the building design process, which will be a significant difficulty in trying to communicate evaluation results to architects.(38) It deals generally with the willingness and ability of those involved in the design process, and of the process itself, to make use of evaluation information. Even under the most ideal circumstances results may not be accepted by designers. As Davis puts it:

"One problem may lie in the existence of an old assumption: that sound information is awaited by eager users. That's rarely true unless the relevance of knowledge to solving one's problem is crystal clear. On the contrary, knowledge, particularly knowledge on effectiveness, can be a dread simply because it nearly always demands action - change in one's behaviour. The only alternative options are to ignore it, belittle it, or sabotage its production."(39)

38. Supra, p. 47.
39. Howard R. Davis, "Four Ways to Goal Attainment: An Overview", Evaluation, Special Monograph No. 1 (1973), pp. 23-28.
Major decisions about building projects and their use are often political rather than technical; evaluation results that do not mesh with the political strategies of decision makers are not likely to have much impact on decisions. Even at best, evaluation results are only one of the many inputs in the decision-making process. The implication of this fact cannot be ignored if the role of evaluation is to be understood and results are to be presented as useful and viable input into the building process. Particularly during the early stages of evaluation process development, those doing evaluations should be actively involved in implementation.

Evaluation information can be used to influence decisions about buildings in three basic ways: to help decide on the modification of an existing environment; as input to new projects on the drawing board (according to Bechtel this should be the essence of POE)(41); and as a contribution to a general data base about buildings and their use. Zeisel highlighted these last two uses of POE information on his diagram of the design cycle.(42)

Decision-makers other than those commissioning an evaluation may also be interested in findings. Other evaluators and researchers will be interested in study techniques and approaches. Dissemination of evaluation results to these general audiences is essential for the cumulative development of evaluation knowledge. The effective transfer of post-occupancy evaluation information to a general audience should include: "selection, translation, organization, recruitment, monitoring, summarizing and revising, followed by publication and distribution."(43) Testing the usefulness, effectiveness and practicality of information resulting from post-occupancy evaluation is a research effort in itself. There is a recognized need for developing an "information transfer strategy" for communicating POE results.(44) Serious consideration must be given to this activity. Any particular evaluation study could be used in several ways, but it must first respond to the initiator's purposes and then to the other levels of interest.

41. Supra, p. 53.
42. Supra, p. 50.
43. Sandra C. Howell, "Post Occupancy Evaluation Transfer Strategy," Industrialization Forum, Vol. VIII, No. 1 (1977), p. 29.
44. Ibid., pp. 29-35.

4.3 General Problems

Who Pays and Who Benefits?

In order to establish programmatic building evaluation of the type proposed in this study, a new evaluation process must be put into place which parallels the building delivery process and provides for the kinds of links discussed in the previous sections of this chapter. Since there is so little building evaluation experience, and since the evaluation process being proposed is substantially new, much of the initial implementation of such a process must be considered a research and development activity. The research effort demanded by such a process would extend over a long period (a few years for even a medium-sized project). It would be more expensive than the shorter evaluation studies now done, which do not include direct reference to project goals.

Some of those who could benefit from programmatic evaluation are: researchers who would do the evaluation work.
They would collect information and experience for use in other projects; designers, particularly those who specialize in a particular building type (such as schools, offices, hospitals, housing), who would be able to apply evaluation results in future designs; users of facilities, who could expect to get a better environment as a result of the application of evaluation information; and the institutions and organizations that construct and maintain buildings on a continuous basis (such as government agencies and hotel chains), which could potentially benefit most from a systematic evaluation process.(45)

These are the main beneficiaries of evaluation information, but according to Ostrander and Connell the question of who supports post-occupancy evaluations has not been satisfactorily resolved.(46) Researchers, while being in a position to benefit from evaluation studies, do not control funds for doing evaluations. They may be in a position to solicit money from foundations or institutions, but they usually do not have direct control over allocating these funds. Designers could do some limited funding of evaluation if it were related to specific project requirements. The view has been expressed by some architects that clients should fund evaluation because they will benefit from the information gained about their building. Architectural practices do not generate revenues from within the current structure of their design services to fund post-occupancy evaluations.

45. Robert Shibley, "Toward More Leverage with Post Construction Evaluation Results", Post-Construction Evaluation: A Hard Look at a Seductive Idea, Workshop Report EDRA 7, Book 2 (1976), p. 130.
46. Ostrander and Connell, p. 244.

The development of professionally sponsored research institutes, which could act as a repository for evaluation information and as a co-ordinating agency for research and evaluation funding, has been suggested and offers an interesting possibility.(47) However, given the reported apathetic attitude of many members of the RAIC towards any type of research,(48) the likelihood of such professionally initiated development in Canada does not seem promising at present.

Organizations that build and maintain buildings on a continuous basis have been singled out as having the most to gain from systematic post-occupancy building evaluation.(49) Such organizations are also likely to have the funds to pay for evaluation studies of the type proposed here. It is in this environment that programmatic post-occupancy evaluation must initially develop. If support for evaluation development is to be obtained from such organizations, careful consideration must be given to the evaluation of issues which are of direct concern to the organization. Shibley suggests that to be most effective, evaluation should be integrated into an organization's normal way of doing business to the largest extent possible.(50) An attempt to do this is currently underway in Public Works Canada.(51)

47. Robert B. Bechtel, "Social Goals Through Design: A Half Process Made Whole," Paper delivered at the American Institute of Planners Conference, Boston, Massachusetts, October 9, 1972, p. 16.
48. Charles H. Cullum, "Architectural Research and Apathy," The Canadian Architect (October, 1975).
49. Shibley, p. 130.
50. Ibid.
51. A task force is presently working on the implementation of a Post Occupancy Evaluation process related to all government building projects.
An Inter-disciplinary Team Approach

Another outstanding question which must be addressed is: who should do evaluation? It has been suggested by some that architects should do post-occupancy evaluations as an extension of architectural services.(52) They are directly involved with the delivery process, have first-hand acquaintance with the problems and issues of a project, and could make direct use of the results in their future designs. There are two major drawbacks to this suggestion. First, there is a widespread lack of knowledge among most architects of the research methods which are needed to do systematic evaluation studies. Though the AIA has initiated its professional training program, the necessary skills are not generally developed within the profession. Secondly, the objectivity of the results of a post-occupancy evaluation study conducted by a project's design architect would be open to question on the grounds of partiality.

There is a close relationship between programming and evaluation in the approach proposed in this study. Gerald Davis has suggested that programmers should also do building evaluation.(53) This offers the same advantage as having the architect do evaluation: the evaluation would be done by someone who is very familiar with the project objectives and who could bring this knowledge to the evaluation process. Program evaluators have pointed out, however, that a problem of impartiality is raised when the evaluator is part of the original study team. Any team member could be suspected of protecting vested interests, or be subject to internal group pressure which could influence their interpretation of evaluation results. If impartiality is to be preserved, then evaluation should be done by a disinterested party from outside the project team.

52. Herbert McLaughlin, "Evaluation Studies: A Follow-up Architectural Service", Architectural Record (August 1974), p. 65.
53. Gerald Davis, "Programming and the Project Use, 1:3, Post Evaluation of the Building," Alternative Processes: Building Procurement, Design and Construction, ed. Michael Glover (Champaign, Illinois: Industrialization Forum Team and the University of Illinois, 1976), p. 23.

Behavioural scientists are also prime candidates for becoming post-occupancy evaluators, since they have already been involved in many evaluation efforts. Their training has usually familiarized them with the appropriate research techniques, and many social scientists are well acquainted with evaluative research and program evaluation methods. These advantages are somewhat offset, however, by a lack of knowledge or experience with building design and construction issues. The main criticism of most behavioural-science-based building evaluation studies done to date is their lack of direct relevance to design decision makers, which is due, in large part, to the evaluator not being well acquainted with design issues.

A collaborative team approach between architects and social scientists seems to offer the most advantageous approach to programmatic POE. The combination has already proven successful in a number of evaluation studies.(54) The usual problems of interdisciplinary co-operation (understanding new viewpoints, questioning one's own values) must be overcome. This team approach is certainly the most promising for doing programmatic post-occupancy evaluations.
There is no formally defined or generally recognized role of "evaluator" in the building industry at present; the question of who should do evaluation remains open. Many architects consider that they have always performed this function as a normal part of their activities. Others have called for evaluation to be an extension of architectural services. Programmers and social scientists have been suggested as the most suitable candidates for evaluation. If building evaluation is to develop as part of architecture and the building industry, then a new role of "building evaluator", comparable to program evaluator, must be clearly identified and established, regardless of who performs the function.

54. Supra, p. 51.

CHAPTER 5

CONCLUSION - OPPORTUNITY AND CHALLENGE

Clearly, there is a strong and growing demand for objective information on the performance and use of buildings, coming both from within and outside the architectural profession. Such information is an essential ingredient for improving the quality of decision making about buildings. It is also needed if we are to establish and develop an objective knowledge base for architectural practice. Systematic building evaluation provides a way of obtaining this information. It would enable architects and others involved in the building process to objectively demonstrate to themselves, their clients, building occupants and the public the effectiveness of their services, the resulting quality of building environments and the directions future development should take.

Programmatic post-occupancy evaluation (programmatic POE) is the most effective means of establishing a regular systematic evaluation process.(1) It provides a basis for POE which reflects the goals of a project, the same goals which have been used to guide design and construction. With this approach there is a consistency throughout the project, and evaluation is based on what the design was supposed to achieve. The basis for evaluation is therefore fair and clear from the outset of the project. By referring to project goals, the evaluation will be closely related to the issues of concern to those involved in the design and use of buildings and will be of specific interest and relevance to practitioners. This is essential, because without their support and cooperation programmatic POE will be very difficult to do. They are the ones who put evaluative information to use.

A similar approach to evaluation has already been successfully established and demonstrated in program evaluation. Many lessons learned by program evaluators can be used to guide the development of a programmatic POE process. Some of these have been pointed out and applied in this study. Further reference to program evaluation experience is essential.

1. Edward R. Ostrander and Bettye Rose Connell, "Maximizing Cost Benefits of Post Construction Evaluation", The Behavioural Basis of Design Book 1: Selected Papers EDRA 7, ed. Peter Suedfeld and James A. Russell (Stroudsburg, Pennsylvania: Dowden, Hutchinson and Ross, Inc., 1976), p. 244.

The evaluation process proposed here consists of a set of relatively simple steps. The purpose of the evaluation must first be established; then a clear statement of the goals is made, and meaningful evaluation criteria are stated for those goals to be evaluated. These should be agreed to by client, practitioner and evaluator.
The building design must be developed in response to the goals, and the project should be monitored by the evaluator to detect and record events or decisions which might significantly alter goals or outcomes. Spivack's design log has been suggested as a means of recording this process.(2) Next, an evaluation study plan must be made, incorporating appropriate data gathering and analysis techniques. The study is then conducted and the findings reported (the full sequence is summarized in a sketch below).

Such an approach to building evaluation has not been used to date. Evaluation studies which have referred to goals have had to reconstruct them, because goals have seldom been pre-recorded. To do programmatic POE requires a good building program with clearly defined goals. It also requires a long-term commitment on the part of the evaluators to monitor the building process: several years even for a modest-sized project.

2. Supra, p. 89.

Programmatic POE is closely dependent on the quality of programming, which is itself in its early stages of development. Many of the problems of programming will be of direct relevance and concern to evaluation. Several programmers have pointed out the potential of building programs as the basis for POE. There is a great opportunity to incorporate the specific consideration of evaluation directly into the development of programming, and evaluation should be developed in concert with programming.

Large organizations with ongoing building programs have potentially the most to gain from a systematic evaluation process. Information from an evaluation can be used directly in several on-going projects, and there is a continuous operational situation into which evaluation information can be channelled. Such an organization is also likely to have the resources needed to develop a programmatic approach to POE over a long time period. It is in this environment that the implementation of programmatic POE is most likely to succeed. Pains must be taken to integrate the evaluation process into the regular procedures and documentation of the organization if it is to be useful to, and consequently supported by, the organizational client.

All the attempts at systematic building evaluation reviewed for this study have been primarily research-type studies exploring basic questions of technique and new methodology. There has been no regular evaluation process established. Yet, as the BPRU pointed out, only through a regular evaluation process can norms be tested and enough information generated to draw meaningful conclusions about trends and practices. Programmatic POE could provide this kind of regular evaluation process.

The use of interdisciplinary teams for doing evaluation has many advantages. The success of this approach has been demonstrated repeatedly by the Pilkington Research Unit, the BPRU and other recent evaluation studies. The specific make-up of an evaluation team should vary according to the subject of a particular evaluation. A team of architect and social scientist (not necessarily in that order) has proven a very successful combination in recent evaluations, particularly when building use was of prime concern. This combination could be considered a basic team mix. Other specialists could be added as circumstances required; for example, financial or real estate analysts, environmentalists, psychologists, planners, engineers, urban designers, building scientists and other technical experts. The team approach is the most advantageous.
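To make the shape of the proposed process easier to grasp, the step sequence described earlier in this chapter can be set out as an ordered checklist. The following Python sketch is hypothetical: the step wording paraphrases the text, and the next_step helper is an invented convenience, not part of any established POE procedure.

    # Hypothetical sketch: the proposed programmatic POE sequence as an
    # ordered checklist. Step wording paraphrases this chapter's text.
    POE_STEPS = [
        "establish the purpose of the evaluation",
        "state project goals clearly (agreed by client, practitioner and evaluator)",
        "derive meaningful evaluation criteria from the goals to be evaluated",
        "develop the design in response to the goals; monitor and log changes",
        "prepare an evaluation study plan (data gathering and analysis methods)",
        "conduct the post-occupancy study",
        "report the findings and assist with their implementation",
    ]

    def next_step(completed: int) -> str:
        # Return the next step, given how many steps have been completed.
        if completed >= len(POE_STEPS):
            return "cycle complete: feed results into the next building program"
        return POE_STEPS[completed]

The last return value reflects the closing of the feedback loop argued for throughout this study: a finished evaluation becomes input to the next program rather than an end in itself.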
The activity of systematic POE is new to the building industry and differs from the other activities related to the building delivery process. Though it may share certain techniques with programming and pre-design analysis, post-occupancy evaluation is a special activity responding to a unique purpose. It should be recognized as such, and the role of "evaluator" should be established as a new, independent role related to the building process. It could be comparable to the role of program evaluator in social science and would provide a focal point for the development of skills, techniques and responsibility for POE.

Architects have played a relatively minor role in the development of building evaluation to date. Many have been very critical of the systematic methods used by social scientists in POE, considering these methods to be too elaborate. A fear has also been expressed that the architectural profession will be attacked unfairly by these outside evaluators. Architects are not trained in the use of the statistical methods employed by social scientists, and this lack of familiarity makes such methods seem threatening. A number of techniques have been refined and simplified without losing their effectiveness, and architects can learn to use them easily. The AIA offers a training program on evaluation techniques through its continuing education service.(3) Architects can learn to evaluate their buildings. Schools of architecture should begin offering specific courses on evaluation techniques and encourage interested students to develop their abilities to do systematic evaluation studies.

The demand for evaluation is evident; it will not go away. There are sufficient guidelines and techniques available now for developing and implementing a programmatic POE process. Such a process could provide the basis for developing an architectural research function in close conjunction with the building process; it could be the first step towards building an objective knowledge base for architecture. But to realize these potentials architects must get involved with evaluation in a direct and positive way. If they ignore this opportunity, others will certainly pick it up.

3. Supra, p. 52.

BIBLIOGRAPHY

Albuquerque/Bernalillo County Planning Department. Subsidized Housing in Albuquerque: Design Evaluation, Analysis and Recommendations. A Report Prepared by Design and Planning Assistance Center: Dennis Hanson, Min Kantrowitz, Richard Nordhaus and Robert Strell. Albuquerque: 1978.

Bechtel, Robert B. "Social Goals Through Design: A Half Process Made Whole". Paper delivered at the American Institute of Planners Conference, Boston, Massachusetts, October 9, 1972.

Berkeley, Ellen Perry (ed.). "Architecture Criticism and Evaluation", Journal of Architectural Education, XXIX, No. 4 (April 1976), the entire issue.

Bobrow, Philip D. "Experimental Changes to the Architectural Process", Industrialization Forum, Vol. V, No. 5 (1974), pp. 9-20.

Campbell, David E. "Evaluation of Built Environment: Lessons from Program Evaluation". In The Behavioural Basis of Design Book 1: Selected Papers EDRA 7. Peter Suedfeld and James A. Russell (eds.). Stroudsburg, Pennsylvania: Dowden, Hutchinson and Ross, Inc., 1976, pp. 241-245.

Campbell, Robert D. "Evaluation of Man-Environment Systems", Man-Environment Systems, Vol. VII, No. 4 (July 1977), pp. 194-202.

Canter, David. "On Appraising Building Appraisals", The Architect's Journal (December 1966), pp. 1547-1550.
Canter, David. "Priorities in Building Evaluation: Some Methodological Considerations", Journal of Architectural Research, Vol. VI, No. 1 (March 1977), pp. 38-40.

Canty, Donald and Andrea O. Dean. "Evaluation: A Small Office Building Asserts Itself, but with Respect", American Institute of Architects Journal (September 1976), pp. 23-26.

Caro, Francis G. (ed.). Readings in Evaluation Research. New York: Russell Sage Foundation, 1971.

Collins, P. Architectural Judgement. Montreal, 1971.

Cronberg, A. and A. Saeterdal. "The Potential of the Performance Concept - Some Questions", Industrialization Forum, Vol. IV, No. 5 (1973), pp. 23-26.

Cullum, Charles H. "Architectural Research and Apathy", The Canadian Architect (October 1975), pp. 9, 64, 65.

Davis, Howard R. "Four Ways to Goal Attainment: An Overview". Evaluation, Special Monograph No. 1, 1973.

Dean, Andrea O. "Evaluation: Working Toward an Approach That Will Yield Lessons for Future Design", American Institute of Architects Journal (August 1976), pp. 26-28.

Deniston, O.L. and I.M. Rosenstock. Health Services Reports, LXXXVIII (February 1973).

Duffy, Francis. "Office Building: Technical Study 1: The Place and the Process", The Architects Journal, Vol. II (May 1973).

Duffy, Francis, Colin Cave and John Worthington. Planning Office Space. London: The Architectural Press Ltd., 1976.

Duffy, Francis and John Worthington. "Organizational Design", Journal of Architectural Research, VI, No. 1 (March 1977), pp. 4-9.

Eberhard, John. The Performance Concept: A Study of its Application to Housing. Washington, D.C.: U.S. Department of Commerce, 1969, Chapter 1:3.

Evaluative Research: Strategies and Methods. Pittsburgh: American Institute for Research, 1970.

Evans, Benjamin H. and Herbert C. Wheeler, Jr. Emerging Techniques: 2, Architectural Programming. Washington, D.C.: The American Institute of Architects, 1969.

Fitch, James Marston. "Architectural Criticism: Trapped in its own Metaphysics", Journal of Architectural Education (April 1976).

Franklin, Jack L. and Jean H. Thrasher. An Introduction to Program Evaluation. New York: John Wiley and Sons, 1976.

Glover, Michael (ed.). Alternative Processes: Building Procurement, Design and Construction. Champaign, Illinois: Industrialization Forum Team and the University of Illinois, 1976.

Government of Canada. "Evaluation of Programs by Departments and Agencies". Draft Report of Guidelines Prepared for Treasury Board Policy 77-47 on Program Evaluation in the Federal Public Service, Ottawa, 1978.

Green, Kevin W. (ed.). Research and Design. Washington, D.C.: American Institute of Architects Research Corp., Vol. I, No. 3 (July 1978).

Hall, Edward and Mildred Hall. The Fourth Dimension in Architecture: The Impact of Building on Man's Behavior. Santa Fe, New Mexico: The Sunstone Press, 1975.

Hillier, Bill and Adrian Leaman. "Architecture as a discipline", Journal of Architectural Research, Vol. V, No. 1 (March 1976), pp. 28-32.

Hillier, Bill and Adrian Leaman. "A New Approach to Architectural Research", Royal Institute of British Architects Journal (December 1972), pp. 517-521.

Howell, Sandra C. "Post Construction Evaluation Transfer Strategy", Industrialization Forum, Vol. VIII, No. 1 (1977), pp. 29-35.

Johnson, A.W. "P.P.B. in Canada", Public Administration Review (January - February 1973).

Lang, Jon, et al. (eds.). Designing for Human Behavior. Stroudsburg, Pennsylvania: Dowden, Hutchinson and Ross, Inc., 1974.

McLaughlin, Herbert.
"Evaluation Studies: A follow-up architectural service", Architectural Record, (August 1974), pp. 65-68. Maharaj, Jayant J. "The Nature of Architectural Criticism" Unpublished Master's dissertation, Faculty of Graduate Studies, Nova Scotia Technical College, 1976. Manning, Peter., (ed.). Office Design: A Study of Environment. Liver-pool: Liverpool University, 1965. Markus, Thomas A. "The why and the how of research in "real" buildings", Journal of Architectural Research, Vol. Ill, No. 2 (May 1974), pp. 19-23. Markus, T.A., et al. Building Performance. New York and Toronto: John Wiley and Sons, 1972. Markus, Thomas A., et al. "Building Appraisal: St. Michael's Academy Kilwinning," The Architects Journal, January 7, 1970. pp. 9-50. Moleski, Walter, et al. "Environmental Programming and Evaluation: A New Category of Man-Environment Studies", Man-Environment  Systems, Vol. VII, No. 1, (January, 1977), pp. 35-36. Ostrander, Edward R. and Bettye Rose Connell. "Maximizing Cost Benefits of Post-Construction Evaluation". In Peter Suedeld and James A. Russell (eds.), The Behavioral Basis of Design Book 1:  Selected Papers EDRA 7. Stroudsburg, Pennsylvania: Dowden, Hutchinson and Ross, Inc., 1976. pp. 241-245. Ostrander, Edward, et al. "Post Construction Evaluation: A Hard Look at a Seductive Idea". A workshop report in Peter Suedfeld, et al. (eds.), The Behavioral Basis of Design Book 2: Session Summaries  and papers/EDRA7. Stroudsburg, Pennsylvania:. Dowden, Hutchin-son and Ross, Inc., 1977. pp. 126-130. Parsons, David J. "Building Performance: Concept and Practice", Industri- alization Forum, Vol. Ill, No. 3 (1972), pp. 23-32. Pena, William, et al. Problem Seeking, An Architectural Programming Primer. Boston: Cahners Books Internation Inc., 1977. Pena, William M. and John W. Focke. "Performance Requirements of Build-ings and the Whole Problem", Performance Concept in Buildings;  Proceedings of the Joint RILEM - ASTM - C1B Symposium, 1972, National Bureau of Standards Special Publication 361, Vol. 1 (March 1972), pp. 43-55. Piano & Rogers: A Statement: Centre Georges Pompidon, A.D. Profiles: 2, Architectural Design Vol. II (1977), pp. 87-151. Public Works Canada. Project Brief System Users Manual. Ottawa, 1976. Rabinovitz, H.Z., et al. Buildings in Use Study. Vol. 1 Field Tests  Manual. Vol. 2 Technical Factors. Vol. 3 Functional Factors. Prepared at the School of Architecture, University of Wisconsin, Milwaukee, 1975. Sanoff, Henry. Methods of Architectural Programming. Stroudsburg, Pennsylvania: Dowden, Hutchinson and Ross Inc., 1977. Sanoff, Henry, et al. "Building Evaluation", Build International, Vol. 6, No. 3 (May - June 1973), pp. 261-297. Sanoff, Henry, et al. "Post Completion Building Evaluation". A Cassette Tape and Supplementary Written Material distributed by Library of Architectural Cassettes, produced by Tech Tapes, Washington: n.d. Schodek, Daniel L. "Evaluating the Performance of Buildings'" Industrializa- tion Forum, Vol. IV, No. 5 (1973), pp 11-18. Shibley, Robert. "Toward More Leverage with Post Construction Evaluat-ion Results," Post-Construction Evaluation: A Hard Look at a  Seductive Idea, V/orkshop Report EDRA 7, Book 2., 1976. Struening, Elmer L. and Marcia Guttentag (eds.) Handbook of Evaluation  Research. Beverly Hills: Sage Publications Inc., 1975. Suchman, Edward, Evaluation Research. New York: Russell Sage Founda-tion, 1967. Temko, Allan. "Evaluation: Louis Kahn's Salk Intitute After a Dozen Years", American Institute of Architects' Journal, (March 1977), pp. 42-48. 
Thompson, Mark S. Evaluation for Decision in Social Programmes. Westmead, England: Saxon House, D.C. Heath Ltd., 1975.

Tyler, Ralph W., et al. Perspectives of Curriculum Evaluation. Chicago: Rand McNally and Co., 1967.

U.S. Department of Commerce. Performance Concept in Buildings, Vol. 1: Invited Papers. Washington: 1972.

Wade, John. "An Architecture of Purpose", American Institute of Architects Journal (October 1967), pp. 71-76.

Weiss, Carol H. Evaluation Research. Englewood Cliffs, N.J.: Prentice-Hall Inc., 1972.

Weiss, Carol H. (ed.). Evaluating Action Programs. Boston: Allyn and Bacon Inc., 1972.

Wholey, Joseph S., et al. Federal Evaluation Policy. Washington, D.C.: The Urban Institute, 1970.

Wohlwill, J.F. and D.H. Carson (eds.). Environment and Social Sciences: Perspectives and Applications. American Psychological Association Inc., 1972.

Zeisel, John. Sociology and Architectural Design. New York: Russell Sage Foundation, 1975.

Zusman, Jack and Raymond Bissonnette. "The Case Against Evaluation", International Journal of Mental Health, II (Summer 1973).
