The Relationship among Process Use, Findings Use, and Stakeholder Involvement in Evaluation

by

Arwa Alkhalaf

B.Sc., King Abdulaziz University, 2007

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF ARTS

in

The Faculty of Graduate Studies (Measurement, Evaluation, and Research Methodology)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

October 2012

© Arwa Alkhalaf, 2012

Abstract

The evaluation use literature agrees that there is a set of consequences that result from involvement in an evaluation, namely use of process and findings. Recently, research has focused more on the relationship between use of findings and use of process. The relationship between process and findings use has been examined in relation to involvement in the evaluation process. Empirical research shows that involvement plays a large role in enhancing use of findings and process, and that process use affects the use of evaluation findings. The purpose of this study is to examine the relationship among process use, use of findings, and stakeholder involvement in an evaluation, and the effects on use of factors identified in the literature. The study considers use in the context of the evaluation of the "Working on Walls" (WOW) project. Through a Delphi technique with three rounds, perceptions of the ongoing evaluation of the project were elicited from WOW participants. This research investigated process use and findings use individually as well as the relationship between them, and it provides an example of using the Delphi method in evaluation research. The Delphi technique had many limitations in this context, the most important being the attrition rate, biased results, and misinformed participants. The Delphi survey results provided an accurate but incomplete view of the factors that were present in the evaluation; however, they failed to correctly identify uses of the evaluation. Although this study had some limitations, it provides an example that emphasizes the importance of the personal factor for process and findings use of an evaluation. It also emphasizes the influence of decision-makers' attitudes towards evaluation on stakeholders' perceptions of an evaluation. Lastly, the survey created for this study is unique and can be useful for future research on evaluation use.
Preface

The University of British Columbia Behavioural Research Ethics Board approved this study as a minimal risk study. The certificate of approval number is H11-02894.
Table of Contents

Abstract
Preface
Table of Contents
List of Tables
Acknowledgement
Dedication
1 Introduction
2 Literature Review
   What is Evaluation Use?
   Use Terminology and Frameworks
   Types of Evaluation Use
   The Relationship Between Process and Findings Use
   Summary
3 Research Methodology and Methods
   Case Study Methodology
   The Delphi Technique
4 Research Results and Findings
   Results of the Delphi Study
   The Relationship among Findings Use, Process Use, Stakeholder Involvement, and Factors Affecting Findings Use
   Summary
5 Discussion
   Limitations of The Delphi Technique
   Implications for Evaluation Practice and Research
   Summary and Future Research
References
Appendices
   Appendix A: Survey Items and Their Links to the Literature
   Appendix B: Complete Results of the Delphi Rounds
   Appendix C: Consent Form
List of Tables

Table 1: Gender and Role of WOW Participants
Table 2: Evaluation Questions, Data Sources, and Collection Methods for the WOW Project
Table 3: Description of WOW Project Formative Reports
Table 4: Descriptive statistics from Delphi I, II, III
Table 5: Items that reached Agree/Disagree consensus from Delphi III
Acknowledgement

This project would not have been possible without the essential and kind support of many individuals. I would like to express many thanks and appreciation to my supervisor, Sandra Mathison, for her great support, guidance, and assistance throughout this research. Her knowledge and expertise have changed the way I think of and about research in general and evaluation research and practice in particular. She has always been supportive and kind to me. My sincere thanks go to the Working on Walls project. The research would not have been completed without their participation in data collection. The project has always been supportive of my learning about and experience with evaluation. Lastly, words alone cannot express the thanks I owe to my parents. Their prayers have guided me throughout this process. Their encouragement and love brightened many dark nights. Their words of wisdom inspired me to overcome many obstacles during my studies abroad.
Dedication

To my small family, Mohammed and Faris
Chapter 1

Introduction

Evaluators have been discussing, theorizing, and researching evaluation use for over forty years. Research focuses on definitions of use (Patton, 1988; Henry & Mark, 2003; Weiss, 1988), appropriate terminology (Alkin, 1982; Kirkhart, 2000), and typologies. Evaluation use is defined as the way the evaluation findings or process impacts the thoughts and actions of the evaluation's stakeholders (Alkin, 2005). Consequences of the evaluation process result from learning from the evaluation and take the form of changes in individuals, program culture, or program processes (Patton, 1997), whereas the impact of evaluation findings results from how findings are applied, integrated, and used.

The literature on evaluation use continues to grow and has expanded to include different influences and the factors that may affect it. In addition, there is a unified understanding among theorists that use of evaluation findings is not the only impact an evaluation may have (for example, Alkin & Taut, 2003; Patton, 2008). There is general agreement that there is a set of consequences that result from involvement in the evaluation process. Recently, research has focused more on the relationships between use of findings and process use (for example, Amo, 2009; Lopez, 2010). The relationship between process and findings use initially appeared in the literature indirectly through the study of collaborative forms of evaluation, such as participatory evaluation. For example, Greene (1988) was one of the first to address this issue when she studied the effects of a participatory evaluation on use of evaluation findings. However, further research is necessary to explore the nature of possible links between process use, findings use, and involvement in the evaluation process.
Research has shown that involvement positively affects use of evaluation findings and is important for experiencing process use (Patton, 2008). However, these studies have not examined the effects of involvement as a mediator variable on both process and findings use in one study, or the degree of involvement needed in order to experience process use and, therefore, use of evaluation findings. Research has also shown the link between involvement and conceptual use (Greene, 1988), and between involvement and instrumental use (Amo, 2009). However, the minimum or maximum degree of involvement needed to experience findings use has not been explored.

Process use occurs as a result of learning from the evaluation with or without involvement; however, research has shown that it occurs primarily as a result or by-product of involvement in an evaluation (for example, Harner & Preskill, 2007; Patton, 2008). Similar to the research on findings use and involvement, the degree or types of involvement have not been identified in the literature. In addition, different types of process use have not been linked to specific instances of findings use or factors that affect findings use (for example, Lopez, 2010), which leads me to believe that there is an overlap between factors that affect both findings and process use (Johnson, Greenseid, Toal, King, Lawrenz & Volkov, 2009) and, furthermore, that there is an empirical link between these types of uses. To put it clearly, process use and findings use are closely interlinked (Alkin & Taut, 2003); one must lead to the other.

The relationship between process and findings use has been examined in relation to involvement in the evaluation process. Empirical research shows that involvement plays a large role in enhancing use of findings and process (Amo, 2009; Lopez, 2010). Research has also shown that process use strengthened the use of evaluation findings (Lopez, 2010). The purpose of this
study is to examine the relationship among process use, use of findings, and level of stakeholder involvement in an evaluation. The study considered use in the context of the evaluation of the "Working on Walls" (WOW) project.

WOW aims to increase the employability of post-doctoral fellows and graduate students outside of academia through involvement in multiple laboratory experiences, travel, publications, professional and technical skills workshops, organizing symposia, grant writing, and mentorship. The evaluation is primarily formative; that is, the project employed the evaluation to help stakeholders improve the WOW project as it is ongoing. Since the project directors expect evaluation to be a tool that will help them develop and improve the project, this implies that the findings are intended to be used by stakeholders. At the same time, stakeholders have been involved in the evaluation process in different ways and to different degrees. For example, project directors specified the data collection method they would like to use within the evaluation whereas students did not participate in this decision. This provides for diverse perceptions of the experienced evaluation process. Because process use, findings use, and stakeholder involvement are expected in the WOW evaluation project, the case is suitable for this study.

The current study examined the relationship among process use, findings use, stakeholder involvement in the evaluation, and factors affecting findings use. The following research questions were addressed:

• To what degree do stakeholders in the WOW project need to be involved in the evaluation in order to experience process use?

• What impact does the involvement in the evaluation process have on instrumental, conceptual, and symbolic use in the WOW project?
• To what extent do characteristics of the evaluation implementation, decision-making setting, and stakeholder involvement identified in the literature facilitate use of the evaluation in the WOW project?

• What is the causal relationship of involvement in the evaluation process, process use, and use of evaluation findings in the WOW project?

• Through examining how the evaluation of the WOW project is used and what factors affect it, what can the evaluators of the project learn from their evaluations?

To explore the research questions, a structured survey built on a comprehensive review of the literature was employed in a Delphi technique. This method was used to reach consensus on stakeholders' perceptions of uses of the ongoing evaluation of WOW. Although this study had some limitations, it provides insight on the effect of factors related to the decision-making context on process and findings use of an evaluation. It also emphasizes the influence of decision-makers' attitudes towards evaluation on stakeholders' perceptions of the evaluation.
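As a rough illustration of how Delphi-style survey responses can be aggregated round by round, the sketch below summarizes Likert ratings per item and flags items that reach consensus. It is a minimal sketch only: the 70% agreement threshold, the agree/disagree cut-offs, and the example items are illustrative assumptions, not the criteria or items used in the WOW study.

```python
# Minimal sketch of one Delphi-round aggregation step (illustrative only).
# Responses are assumed to be stored as {item: [Likert ratings, 1-5]}.
from statistics import mean, median

def summarize_round(responses, agreement_threshold=0.70):
    """Return descriptive statistics and a consensus flag for each survey item."""
    summary = {}
    for item, ratings in responses.items():
        n = len(ratings)
        agree = sum(1 for r in ratings if r >= 4) / n      # 4 = agree, 5 = strongly agree
        disagree = sum(1 for r in ratings if r <= 2) / n   # 1-2 = disagree
        summary[item] = {
            "mean": round(mean(ratings), 2),
            "median": median(ratings),
            "consensus": ("agree" if agree >= agreement_threshold
                          else "disagree" if disagree >= agreement_threshold
                          else None),  # unresolved items return to the next round
        }
    return summary

# Hypothetical example round: one item reaches agree consensus, one does not.
round_three = {
    "Findings were reported in time to be used": [4, 5, 4, 4, 5],
    "I helped choose the data collection methods": [2, 4, 3, 5, 1],
}
print(summarize_round(round_three))
```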
Chapter 2

Literature Review

Use of evaluations has been the focus of many studies since the 1960s. During this period, evaluation expanded as an industry and as a stand-alone profession (Madaus & Stufflebeam, 2000). Large numbers of publications tackled evaluation of programs related to social issues. Patton (1986) reported that 1,700 citations of evaluation studies were identified in the Congressional Sourcebook on Federal Program Evaluations in 1976, covering the period 1973-1975. Although a large number of evaluations were produced, evaluators were primarily concerned with the extent to which their studies had an impact on decisions. The issue of use concerned not only the use of evaluations but also the use of knowledge produced through social science research in general.

Even though the United States government provided substantial funding for evaluations (Patton, 1986), in the early 1970s evaluations were frequently characterized as unsuccessful because of the lack of use by decision-makers (Weiss, 1972). This aroused the interest of many evaluators who researched remedies for and reasons behind such a phenomenon. For example, Weiss (1972) proposed hypotheses to explain the reasons behind non-use or under-use of evaluation findings. Weiss called for a comparative study of factors that, if proven to have an effect on utilization, have clear implications for evaluation practice.

Henry (2000) criticized the focus on judging the success of an evaluation by its application. He states that social betterment should be the ultimate goal of an evaluation and argues that use is only one of the means by which an evaluation achieves social betterment. He believes that overemphasizing the importance of use limits the role of evaluation in informing
policies and contributing to the democratic process. It also may lead to falling into the persuasion use trap; that is, evaluating to provide persuasive evidence rather than credible evidence. Whereas Henry believes that use should not be the only goal of evaluation, Weiss and Patton advocate for ensuring use.

A famous debate between Carol Weiss and Michael Patton questioned whether evaluators should be held accountable for evaluation use (Patton, 1988; Weiss, 1988a; Weiss, 1988b). Weiss contended that, even when evaluators follow advice that leads to higher use of evaluations, their influence is characterized by "indifferent success" (Weiss, 1988b, p. 15) because of competing information and politics in the decision-making contexts. Therefore, an evaluator should strive to act as an educator, enlightening program managers and decision-makers about a program. On the other hand, Patton advocated for an active role for evaluators in promoting and ensuring use. He believed an evaluator's role included helping decision-makers identify their evaluation needs and the types of information needed to address such needs, and producing useful findings that would be used by intended users.

In general, research on the utilization of evaluation findings and processes does not suggest that evaluation has a direct impact on decisions, but rather that evaluation has a subtle effect, which might not be seen by many observers. Results and findings affect users in informal and gradual ways (King, 1982; Weiss, 1988a). For example, Patton, Grimes, Guthrie, Brennan, French and Blythe (1975) found that evaluators and decision-makers of twenty evaluations of United States federal mental health programs described indirect use of results to reduce uncertainties in decision-making processes. Therefore, it is no longer appropriate to assume direct use of evaluation findings is the most important outcome for program managers' decisions.
These high and unreasonable expectations are what discouraged many evaluators and led them to believe that evaluations were not effective.

What is Evaluation Use?

In the 1970s and early 1980s evaluation use was defined as direct implementation of evaluation findings for decision making. Evaluation use was described as an immediate, concrete, and observable impact of evaluation findings on decision-making and program activities (Weiss, 1998). Some researchers still believe this is the case. For example, Grinsburg and Rhett (2003) defined a useful evaluation as one that adds to the timely body of evidence that increases the likelihood of making policy decisions and helps improve program implementation. However, the understanding of evaluation use has progressed from an instrumental view to a more complex multi-dimensional concept.

Patton et al. (1975) offered a different conceptualization of evaluation use, suggesting that the use of evaluation findings is not dependent on direct use but rather is a distinguishable action taken to make changes or modifications that can be linked to the evaluation process, products, or findings. This definition summarizes the evolution of the concept of evaluation utilization, which accounts for the use of evaluation findings and processes when either or both can influence stakeholders' actions and thoughts (Amo, 2009). Therefore, evaluation use has come to be defined as the link between knowledge and action; in other words, if an evaluation is unused, the knowledge has not been translated into action. Action can take many forms; for example, direct implementation of recommendations or application of knowledge learned from the evaluation.

The perspective on use has changed from a one-dimensional view to a multidimensional concept with multiple definitions, where each definition depends on which dimension of use one
is addressing. This multidimensional concept was first proposed by Kirkhart (2000) with the introduction of evaluation influence. Evaluation influence takes into consideration intended and unintended effects of the findings and the process of an evaluation over time (Kirkhart, 2000). Because the construct of use has multiple attributes, the shift in terminology from use to influence provides for a wide array of perspectives (Caracelli, 2000; Henry & Mark, 2003).

Generally, there is widespread agreement that useful evaluations affect stakeholders, whether through the application of results to actions or through the learning that occurs from an evaluation's influence. At the same time, there is a lack of consensus on a single definition of evaluation use (Cummings, 2002). Theorists' conflicts arise when questioning whether the effect of evaluation stems from intentionality (Kirkhart, 2000), involvement (Greene, 1988), or accrues over time (Kirkhart, 2000; Weiss, 1979). The following definition, however, provides some common ground: "Evaluation use, or evaluation utilization, occurs when evaluation information in the form of findings, or evaluation practice, has influence on the actions or thoughts of stakeholders." (Alkin, 2005, p. 143).

Use Terminology and Frameworks

While the notion of use is important, disagreement has emerged over the appropriateness of the definition and terminology. Some researchers believe that the term 'use' implies direct use of evaluation findings whereas utilization connotes "a dynamic process that occurs overtime" (Patton, 2008, p. 107). Others have deemed the term 'use' more reflective of the reality of evaluation implementation than 'utilization', because utilization holds linguistic connotations related to direct and instrumental use only (King, 1982; Weiss, 1980). Alkin (1982) uses both terms synonymously; however, he defines utility as a potential substitute for use: evaluations
have utility "if they are performed and presented in a manner that makes them potentially amenable for use" (p. 153).

More recently, Kirkhart (2000) suggested substituting the terms 'use' and 'utilization' with 'influence'. This was intended to be an improvement over previous terms because it is a broader definition inclusive of all traditional use typologies in addition to "multidimensional, incremental, unintentional, and noninstrumental" uses (Kirkhart, 2000, p. 7). Therefore, in Kirkhart's terms, evaluation influence is the essence of the study of evaluation effects. Three dimensions make up the framework of evaluation influence. First, sources of influence highlight the difference between process and results as key areas from which influence emerges. Second, intention differentiates between intentional and unintentional influences of an evaluation. Third, time denotes immediate, end-of-cycle, or long-term influences of an evaluation. These three dimensions create an integrated theory that facilitates the understanding and measurement of evaluation influence.

Henry and Mark (2003) joined Kirkhart in advocating for moving evaluation research from "use" to "influence" and created a framework that identified three levels of influence (individual, interpersonal, and collective) and listed potential consequences for each level. Because the influence of a single evaluation can transpire through numerous consequences, there are multiple possible pathways of influence (Henry & Mark, 2003). This influence model differs from Kirkhart's model because it does not limit the influence and consequences of an evaluation to a limited number of dimensions; rather, it opens the notion of evaluation influence further to incorporate the different contexts and personalities of people involved in the evaluation.

The discussion of different terminologies that present evaluation consequences more
accurately is constructive in showcasing different evaluation use theories. By adopting a traditional definition of evaluation use and recent frameworks of influence, different types of use can be illuminated through different people and intentions (Mark & Henry, 2004). These frameworks can provide an opportunity to describe highly specific nuances of use within an evaluation. However, the literature on evaluation influence is still in its infancy, and more research is needed to create a stronger definition, identify factors that affect influence, and elucidate how influence relates to the traditional view of evaluation use. A handful of studies have examined the idea of influence; however, most are theoretical and lack an applied example.

Although the notion of influence is appealing and is the contemporary view of evaluation use (see for example, Lawrenz, King & Ooms, 2011), this study focuses on the intentional and immediate use of the two sources (findings and process). Consequently, examining the three dimensions incorporated in Kirkhart's influence framework is outside the scope of this study because the evaluation case examined does not include all sub-dimensions of this model, namely the end-of-cycle and long-term sub-dimensions.

Types of Evaluation Use

Many conceptual and empirical studies support the existence of different types of evaluation use (Alkin & Taut, 2003; Johnson et al., 2009; Patton, 2008). Building on a review of evaluation use studies and social science in general, Leviton and Hughes (1981) listed three types of evaluation use: instrumental, conceptual, and symbolic or persuasive.

Instrumental use refers to the direct and concrete application of evaluation findings to decision-making and problem solving (Rich, 1977). Instrumental use is common in four situations: 1) when the implications of findings are non-controversial and do not conflict with interests within an organization; 2) if the changes are small and do not upset a program's existing
repertoire; 3) if the program is stable and the changes would not affect managerial, budget, or public support; 4) if the program is in crisis and nobody knows what to do (Weiss, 1998).

Conceptual use refers to changing one's thinking about a program as a consequence of an evaluation without putting this information to any specific, documentable use (Rich, 1977). Conceptual use involves learning new skills that might not be immediately applicable to the program but might be of use in the future; for example, learning how to collect certain types of data, or how to communicate with clients. Although Leviton and Hughes (1981) note that conceptual use can lead to future instrumental use, the relationship between instrumental and conceptual use is blurry in terms of one leading to the other.

Symbolic or persuasive use refers to "drawing on evaluation evidence in attempts to convince others to support a political position or to defend such a position from attack" (Leviton & Hughes, 1981, p. 528) or to support a decision that has already been made. A fourth kind of use is legitimative use, which is related to symbolic use and is the use of evaluations to justify previously made decisions (Alkin & Taut, 2003).

Evaluation use typically falls into two main categories: uses that result from the evaluation process and uses that result from evaluation findings (Alkin & Taut, 2003; Patton, 2008). Findings use refers to how the evaluation findings are applied, integrated, and used; process use refers to the manner in which the conduct of an evaluation impacts individuals or organizations involved in the evaluation. Process use incorporates learning as a major component of its effectiveness, whereas findings use may or may not involve learning (Alkin & Taut, 2003). In other words, learning from findings use requires knowledge accumulation and acquisition; on the other hand, process use requires behavior and skill acquisition or modification. Process use is not a substitute for findings use, but should enhance it (Patton, 2008).
In a review of literature on evaluation use similar to the one conducted by Cousins and Leithwood (1986), Johnson et al. (2009) examined the empirical research from 1986 to 2005. From a total of 41 analyzed studies, they found that most (38 out of 41) examined findings use whereas only three examined process use. However, since 2005 numerous studies that examine process use empirically have emerged (for example, Amo, 2009; Fredrick, 2008; Lopez, 2010; Remport, 2008).

Use of Evaluation Findings

According to the literature on use of evaluation findings there are three main types: instrumental, conceptual, and symbolic. Instrumental use (and findings use in general) has been found to be the most applied and researched type of use (Johnson et al., 2009). Evaluation findings have been used with effects that are gradual and subtle (Alkin & Daillak, 1979; Patton et al., 1975) since they are viewed as one piece of information leading to a decision but not the only one (Alkin, 1975; Patton et al., 1975; Weiss, 1972). Also, evaluation findings are predominantly used when they reflect moderate alterations in a program, staff, or costs. The nature of the evaluation findings, whether they are desirable or not, has also been found to affect utility (Bober & Bartlett, 2004; Weiss, 1972). Therefore, most of the time evaluators should not expect their evaluations to be totally accepted and implemented.

In order for any evaluation to have the potential to be used, it has to possess the following components. First of all, the evaluation should fit the needs of intended users (Alkin, 1975; Patton, 2008; Weiss, 1972). Second, lines of communication between program managers, staff, and participants with evaluators must be open, clear, fluid, and two-way (Balls & Anderson, 1977; Greene, 1988; Polivka & Steg, 1978). Third, the evaluation must be sensitive to the context of the program, organization, and community (Alkin, 1975; Braskamp, Brandenburg & Ory, 1987;
Greene, 1988; Preskill, 1991). By understanding an organization or program's context and culture, evaluators may be more prepared to conduct better-used evaluations.

Factors affecting use of evaluation findings. The study of evaluation use has benefitted greatly from a number of reviews and syntheses of the literature that not only helped to develop and shape types of use, but also created a strong foundational understanding of how evaluation use can be enhanced. The purpose of these reviews was to identify factors or conditions that potentially cause use.

In 1981, Leviton and Hughes reviewed the literature on use of evaluation and social science research, from which they identified a number of variables and characteristics of the evaluation process that have been argued to affect use. The authors clustered variables affecting utilization into five categories: relevance of the evaluation information and timeliness of results to fit the needs of stakeholders; communication and dissemination of the evaluation; information processing that is suitable for evaluation users; credibility of evaluation implementation and evaluator; and user involvement and advocacy in terms of commitment to evaluations and advocacy of programs and policies.

Cousins and Leithwood (1986) conducted a meta-analysis of 65 studies on evaluation use published between 1971 and 1985. These studies primarily included retrospective, longitudinal, and simulation designs, with surveys as the most common methodology. Sample sizes, as well as authors' theoretical foundations around evaluation use, varied. The meta-analysis resulted in the identification of twelve factors that influence the use of evaluation findings. These were grouped into two higher order factors, namely characteristics of evaluation implementation and characteristics of the decision or policy setting. Six factors address characteristics of evaluation implementation: evaluation quality, credibility, relevance, communication, nature of findings, and timeliness. The other six
factors address characteristics of the decision or policy setting: information needs of users, decision characteristics, political climate, competing information, personal characteristics of stakeholders, and user commitment and receptiveness to evaluation information. Through an analysis to identify the most dominant factors, the authors found six factors and instances where evaluation use is most evident: 1) when the evaluation design and methodology are appropriate; 2) when the decisions to be made are significant to users and applicable; 3) when evaluation findings are consistent with user expectations; 4) when stakeholders are involved in and committed to the evaluation process; 5) when the findings are relevant to challenges stakeholders encounter; and 6) when other information agrees with the findings.

More recently, Johnson et al. (2009) picked up where Cousins and Leithwood left off, reviewing 41 studies from the period 1986 to 2005. The authors followed Cousins and Leithwood's framework and, like their predecessors, identified characteristics of the decision and policy setting and of evaluation implementation as higher order factors affecting evaluation use. Although Cousins and Leithwood's characteristic of credibility considered the evaluator's title or reputation, it did not address the influential nature of the evaluator's personal competence, leadership, and "who the evaluator is" (Johnson et al., 2009, p. 382) as a factor affecting use. Therefore, the authors added the evaluator competence characteristic to the evaluation implementation category. Similar to Leviton and Hughes, the authors found that stakeholder involvement was prevalent in the literature; therefore, they added a new higher order category, stakeholder involvement, to accommodate the recent literature that places considerable emphasis on participatory approaches to evaluation. This new category includes involvement with commitment or receptiveness to evaluation, communication quality, credibility, findings, relevance, personal characteristics, decision characteristics, information, and direct stakeholder
involvement. In their review, Johnson and colleagues found that more than half of the studies addressed involvement of stakeholders and that, in relation to use, involvement of stakeholders was associated with the characteristics of other categories.

Over the past 40 years, research on evaluation use has identified the primary factors affecting use of evaluation findings as 1) characteristics of the decision context, 2) characteristics of evaluation implementation, and 3) characteristics of stakeholder involvement. The next sections review empirical studies that examine these characteristics.

Characteristics of decision-making context. Decision-making characteristics are those related to the situations and circumstances of organizations and decision-makers, such as organizational politics, availability of resources, information needs, and personal characteristics (Cousins & Leithwood, 1986). Although the number of studies examining the characteristics of the decision-making context is smaller than the number examining the characteristics of evaluation implementation (Johnson et al., 2009), research has found that the decision-making context strongly affects use of evaluation.

Commitment and receptiveness to an evaluation and information needs of evaluation audiences. In a recent study, Bober and Bartlett (2004) sought to identify factors affecting use in corporate universities. The authors conducted case studies examining the evaluations of four corporate university training programs. They found that the degree to which factors affected use at each location differed with the needs of the organization. However, a predominant characteristic that affected use of evaluation findings was the commitment and receptiveness of an organization. Cousins (1996) studied the effects of researcher involvement on evaluation utilization in three cases where the evaluator was a full partner, silent partner, and general advisor, and found the involvement of the evaluator was not as important as administrative
support and commitment.

Decision characteristics, political climate, and competing information. Malen, Murphy and Geary (1988) studied the role of evaluation information in legislative decision-making, where most concerns were related to the political context, reputation, and other information. The authors found that an evaluation report was considered a threat to pervasive ideologies, political alignments, reform commitments, and education appropriations. In addition, the evaluation report did not have a singular effect on decision making but rather was one piece of information to be considered when making decisions. The evaluation on its own had a subtle effect on decisions made. As well, the authors found that risks were calculated before evaluation information was used, even when the information was congruent with beliefs and experience with the program, substantiating the effects that the political climate, decision characteristics, and competing information have on the decision-making context.

In an effort to examine factors influencing decision-making, Newman, Brown and Rivers (1987) created a simulation study using evaluation vignettes that contained a description of a school program and an evaluation-based decision. The authors found that when the program was of high importance, board members wanted more time, information, and contact with a consultant or expert in the field. On the other hand, when the program was of low importance, board members were more willing to use their own experience. In terms of involvement in decisions, board members indicated the need to be involved in decisions depending on the nature of the content, context, and importance of a program. This study shows that decision characteristics, such as importance of the decision and program, are affected by contextual variables that influence the decision-making process.

Weiss, Murphy-Graham and Birkeland (2005) demonstrated the effects that political
climate and competing information have on use of evaluation findings. The authors point to the example of the Drug Abuse Resistance Education (D.A.R.E.) program, which is administered in schools across the United States and aims at preventing youngsters from using drugs. The authors explored this particular case because evaluation findings were consistently being neglected. Even though dozens of evaluations showed that the D.A.R.E. program was ineffective in keeping young people away from drugs, approximately 70% to 80% of school districts continued to use it. However, this changed when the grantor required that school districts use a drug prevention program that evaluations found to be effective or promising, whether or not the grantee believed that the program improved the lot of their students.

Personal factor. Patton et al. (1975) were the first to coin the term "personal factor". The personal factor is the presence of individuals who genuinely care about the evaluation process and its findings, and the absence of the personal factor may hinder utilization of findings (Patton, 2008). Stakeholders who actively seek information from the evaluation in order to learn, make decisions, and reduce uncertainties are essential for use of evaluation findings. For example, Boyer and Langbein (1991) found that the presence of an advocate for an evaluation positively affects the use of findings in the U.S. Congress. In a statewide farmer survey about conservation tillage, Rockwell, Dickey and Jasa (1990) capitalized on the personal factor, which had been identified as important through a case study. They found that the personal factor accounted for other factors influencing use.

The six characteristics related to the decision-making context (commitment and receptiveness to an evaluation, information needs of evaluation audiences, decision characteristics, political climate, competing information, and the personal factor) appear frequently in the literature, providing verification for Cousins and Leithwood's (1986) framework.
Characteristics of evaluation implementation. Evaluation characteristics also affect utilization. Bober and Bartlett (2004), in their examination of evaluations of corporate university training programs, found that factors related to evaluation implementation have more influence on evaluation utilization than factors related to the decision or policy setting.

Methodological quality. Technical and methodological credibility are important for utilization (Alkin, 1975; Bober & Bartlett, 2004; Boyer & Langbein, 1991; Rockwell et al., 1990; Tomlinson, Bland, Moon & Callahan, 1994) and include the quality of evaluation design and methodology, data collection, and data analysis. Although methodological quality appears as an important factor in the literature, empirical studies on this matter vary in their results. Bledsoe and Graham (2005) found that evaluators tend to employ methods from different evaluation approaches when designing an evaluation. Based on this idea, the authors examined an evaluation they conducted using methods from different evaluation approaches (i.e., empowerment, theory-driven, consumer-based, inclusive, and use-focused evaluation) to address the question "What is the likelihood that the use of multiple evaluation approaches will increase use?" They found that using multiple approaches in a single evaluation created an evaluation that addressed different needs for different stakeholders, which led to informed recommendations that were found to be more useful and were used. On the other hand, Patton et al. (1975) and Alkin et al. (1979) found that methodological rigor was unrelated to utilization of findings.

Timeliness. Final reports that are comprehensible and submitted in a timely manner maximize use (Alkin, 1975; Alkin & Daillak, 1979; Boyer & Langbein, 1991; Patton et al., 1975; Rockwell et al., 1990; Weiss, 1972). For example, timeliness was found to be the most significant factor affecting the utilization of evaluation in corporate universities' training programs (Bober
& Bartlett, 2004). Boyer and Langbein (1991) explored relationships between the use of evaluation research in health policy and factors cited in the literature as influences on use. Multiple regression results indicate that proper timing and clarity of a report have a significant positive effect on the amount of reported use by members of Congress and their staff. Having an evaluation report coincide with an organizational need for the findings can play a strong role in utilization.

Communication quality. Research indicates that using presentation methods that program managers and staff find appropriate (Greene, 1988; Weiss, 1972), or disseminating evaluation results in more than one way, affects the use of findings (Bober & Bartlett, 2004). The communication methodology has been found to be a very important factor in increasing evaluation utilization (Newman, Brown & Braskamp, 1980). Marsh and Glassick (1988) found that verbal communication of detailed recommendations enhances the likelihood of their being used. When recommendations were discussed with project managers and required instrumental changes, they were more likely to be used than those that were discussed later in the project. Marsh and Glassick's article (1988) shows the importance of verbal interaction between evaluators and stakeholders. Presenting information in ways that accommodate different learning styles ensures that all program stakeholders understand the findings, hence increasing the probability of using them.

Grasso (2003) stated that tailoring the evaluation findings to fit the needs of evaluation audiences, and focusing on findings that users have some influence over, increases use. He also stated that reports should include clear supporting, but not overly technical, evidence of findings and an explanation of how the data were collected and analyzed. This information gives the audience the ability to judge the credibility of the data, findings, and recommendations, which in
turn leads to increased use of evaluation findings. When reporting findings, Grasso (2003) suggests incorporating qualitative descriptions that can help the audience relate to findings, rather than depending on academic descriptions. This agrees with Marra's (2003) findings, which indicate that evidence-based recommendations are more likely to be used. Similar to Grasso, Boyer and Langbein (1991) and Newman, Brown and Braskamp (1980) showed that clarity of a report has a positive effect on use of findings.

Relevance. In his study of managerial style and its implications for utilization of evaluation findings, Cox (1977) found that relevancy of findings is crucial to promoting use. He stated that reports should fit those questions that managers want answered. Similarly, Boyer and Langbein (1991), in their study of the legislative context, found that the type of report influenced its relevance for staff members. The authors showed that staff members were more inclined to use evaluation reports produced by the General Accounting Office (GAO) over those produced by non-GAO sources, because GAO reports were found to be more relevant than the others.

Evaluator competence. Evaluator credibility, competence, and experience have also been found to affect the utility of evaluation findings (Alkin, 1975; Boyer & Langbein, 1991; Newman, Brown & Braskamp, 1980; Tomlinson et al., 1994). Evaluators must possess skills to communicate findings clearly and efficiently (Alkin & Daillak, 1979). To enhance utilization, the evaluator must take an active role. In other words, issuing reports and hoping that they are sufficient to motivate use is a passive and uncooperative role; rather, an evaluator must first ensure the understandability of the report and then conduct a formal verbal presentation that is followed by a discussion (Brown & Braskamp, 1980).

Shea (1991) stated that the evaluator's level of understanding of the program, communication ability, and organizational position in relation to the program being evaluated are
characteristics that may affect use of findings. Similarly, Brown and Braskamp (1980) contended that many evaluator characteristics, such as gender, title, level of training, and organizational position, may play important roles in determining the credibility of findings, and therefore use. Unfortunately, the researcher has not found recent empirical investigations of the effects of evaluator characteristics on use of findings. However, a number of simulation studies have suggested that characteristics (i.e., title, gender, and the use of jargon) may have an influence on the use of evaluation findings. Braskamp, Brown, and Newman (1982) reviewed a number of simulation studies that examined the effect of evaluator characteristics on potential use of evaluation findings. In these studies, characteristics of the message source/sender, receiver, and content were simulated in evaluation reports to reflect potential real-life situations. The authors stated that the title of the message source affected its potential use; "researcher" was rated higher than both "evaluator" and "content expert". In addition, the gender of the message source was found to affect the potential use of findings if the receiver had no prior knowledge of the subject matter of the evaluation report. In this case male authors were preferred over female authors. Lastly, jargon in the evaluation report had an effect on how the evaluator was perceived.

Nature of findings. The nature of findings, as identified by Cousins and Leithwood (1986), encompasses whether findings are positive or negative, their consistency with evaluation audience expectations, and their value for decision making. Bober and Bartlett (2004) found that the nature of findings was a factor that affected use of evaluation findings, although it was ranked as the least influential. Similarly, Malen et al. (1988) found that, although findings were congruent with decision-makers' beliefs and experiences, other decision and policy setting factors took precedence over the nature of findings. All in all, research has shown that the nature of findings has a subtle effect on the utilization of evaluation findings.
The literature on the importance of evaluation implementation is vast and shows that many of the factors identified in Cousins and Leithwood's (1986) framework, like the characteristics of decision and policy settings, still apply. However, the effect of the nature of findings factor on use of evaluation findings is not empirically supported. For the purpose of this study, the researcher will examine all factors pertaining to characteristics of evaluation implementation with the exception of nature of findings.

Characteristics of stakeholder involvement. Stakeholder involvement is different from what Patton et al. (1975) called the "personal factor". To understand the influence of stakeholder involvement on use, it is necessary to identify who the stakeholders are in any given evaluation. Evaluation stakeholders are people who have a vested interest in evaluation findings. Greene (2005) clustered stakeholders into four groups: "(a) people who have decision authority over the program, including other policy makers, funders, and advisory boards; (b) people who have direct responsibility for the program, including program developers, administrators in the organization implementing the program, program managers, and direct service staff; (c) people who are the intended beneficiaries of the program, their families, and their communities; and (d) people disadvantaged by the program, as in lost funding opportunities." (pp. 387-397). Weiss (1998) includes the general public, with direct or indirect interest in program effectiveness, in the list of possible stakeholders. Grasso (2003) and Patton (2008) suggest that there are multiple stakeholders in an evaluation and that prioritizing intended users is critical to increasing use of findings. A survey of Evaluation Use Topical Interest Group members of the American Evaluation Association found that 74% of respondents believed stakeholder involvement increases the use of findings (Preskill & Caracelli, 1997).

Communication is key in achieving a truly effective participatory evaluation (Greene,
1988). The communication between evaluator and stakeholders has to be open-ended and two-way, such that identifying recommendations and program strengths and weaknesses becomes an activity both parties determine together. Evaluators take the role of educators; they facilitate understanding of data and leave interpretations to stakeholders. For example, Marsh and Glassick (1988) found that recommendations were more likely to be used when stakeholders were involved in refining the content and the selection of findings to formulate recommendations. This facilitated understanding of the findings and allowed for enhanced use.

Johnson et al. (2009) refer to a type of stakeholder involvement that leads to identification of needs, which they called involvement with information needs. Rockwell et al. (1990) involved four extension staff who were developing educational programs on energy conservation. The extension staff cooperated with the evaluator in creating the study questions as well as the tillage survey items. The authors found that staff involvement in the creation of the evaluation process resulted in a sense of ownership of the produced information. This helped in sharing the information and the major conclusions of the study with other extension personnel and provided the focus for the written reports. Brett, Hill-Mead, and Wu (2000) presented lessons learned from addressing users' needs, thus creating an evaluation system that encourages use. The authors stated that the importance of evaluation can be emphasized through a demonstration to stakeholders of evaluation strategies that can help address critical issues. The authors also suggested including different levels of staff in formulating answerable evaluation questions.

Using a utilization-oriented participatory evaluation case study methodology to investigate the link between participation and evaluation use, Greene (1987) found that all types of findings use were reported, namely instrumental, conceptual, and symbolic use. The constant discussion among members helped enlighten group members about different perspectives of the program.
Turnbull (1999) created a model that examines the causal relationships in a participatory evaluation. Through intervening mechanisms and structural equation modeling, he found that the proposed model was a plausible explanation of how participation can be expected to increase the use of evaluation information. A key factor in effective involvement of stakeholders lies in their perceptions about the nature of involvement: "participatory evaluation is likely to result in increased use if participants perceive that (a) their work place goals are participative; (b) they are able to participate to a desired degree; (c) they perceive that they have influence in the decision-making process; (d) they believe that the participatory process was efficacious in that it achieved its intended outcomes." (Turnbull, 1999, p. 140). These studies clearly demonstrate the positive influence that involving stakeholders in the evaluation process has on use.

Evaluation Process Use

Patton (2008) defined process use as any indication of individual changes in thinking, attitudes, and behavior, and program or organizational changes in procedures or culture, among those involved in the evaluation as a result of the learning that occurs from the evaluation process. Process use focuses on how groups of people collaborate to make meaning as they conduct an evaluation (Shulha & Wilson, 2003). Learning from the evaluation process occurs by encouraging dialogue and questioning assumptions, values, and beliefs. This results in individuals who have a better understanding of the evaluand, the organization, themselves, each other, and evaluation practice (Preskill, Zuckerman & Mathews, 2003). Learning from the process of the evaluation includes being involved in any or all parts of the evaluation process; for example, being involved in the evaluation's negotiation and contract development, determining the focus
of the study, designing and implementing data collection methods and instruments, analyzing and interpreting data, communicating and reporting evaluation findings, and being informed of the evaluation's results.

In an exploratory study, Harner and Preskill (2007) asked evaluators, "What does process use look like?" Most respondents were experienced evaluators who sought participatory, user-focused, social-justice democratic, or evaluation capacity building approaches. Thirty-nine percent of respondents indicated that process use occurs with stakeholder involvement in the evaluation process. A larger percentage (57%) said that process use happens when stakeholders are involved in defining the most important questions or in designing and implementing data collection methods. However, this understanding does not reflect the learning that occurs during the evaluation process, which is key in process use. Others (34%) said that process use is an outcome that leads stakeholders to change perspectives or attitudes about the program and make improvements. This study resulted in unclear and incomplete perspectives on process use, and led the researchers to conclude that evaluators do not have a clear understanding of process use.

The methodology of an evaluation, the context, and the people involved shape and form process use, if it takes place at all (Alkin & Taut, 2003). Although some research indicates that process use is a type of incidental learning and a by-product of involvement in an evaluation (Harner & Preskill, 2007), evaluators can guide stakeholders towards experiencing process use (Patton, 2008). If an evaluator aims at enhancing process use, it is important that he or she opens discussion with the stakeholders who will be the most involved. These intended users are key in determining the content of the evaluation and will be those most exposed to the evaluation process. The degree of involvement in the evaluation process has a direct effect on how the evaluation impacts a program, stakeholders, and participants (Greene, 1988; Carden & Earl,
2007). In addition, involvement of intended users in an evaluation process impacts the capacity of a program to incorporate evaluation for learning (King, 2007; Harner & Preskill, 2007).

Though the two are similar in appearance, it is important to distinguish between process use and evaluation capacity building. Evaluation capacity building is building, sustaining, and strengthening program evaluation practices in an organization's routine through a group of activities (King, 2007). Although process use and evaluation capacity building can both be intentional or unintentional (Patton, 2007) and involve infusing evaluative thinking in individuals, the primary difference between the two is that evaluation capacity building contains clearly defined goals whereas process use does not (Preskill & Boyle, 2008). Evaluation capacity building is process use only when an evaluation capacity building activity is part of an evaluation experience (Patton, 2007).

Preskill, Zuckerman, and Mathews (2003) developed five categories of variables that appear to affect process use by examining how and what stakeholders learned from their participation in the evaluation. The authors utilized a qualitative case study and found the following five factors affecting process use: 1) facilitation of evaluation processes, 2) management support, 3) advisory group characteristics, 4) frequency, methods, and quality of communications, and 5) organization characteristics. Because this was a case study, the findings derived should not be generalized. However, this study is one of a limited number of studies addressing factors affecting process use (Lopez, 2010). Kamm (2004) found that the factors identified by Preskill et al. (2003) are influential in an evaluation using an Evaluative Inquiry for Learning in Organizations approach. She found a high level of process use occurring for stakeholders, which she credited to the presence of the factors mentioned above.
    The Relationship Between Process and Findings Use Literature on evaluation utilization has emphasized use of evaluation findings, and, as the area evolves, research has expanded to include use of the evaluation process. Although further research is necessary to explore the nature of possible links between process use and findings use (Alkin & Taut, 2003), there is a growing body of theoretical and empirical knowledge that supports the link between findings and process use. The relationship between process and findings use appears in the literature indirectly through the study of collaborative forms of evaluation methods, such as participatory evaluations. For example, a seminal article by Greene (1988), which studied the effects of a participatory evaluation on use of evaluation findings, was one of the first to address this issue. Cousins and Earl (1992) defined participatory evaluation as “applied social research that involves a partnership between trained evaluation personnel and practice-based decision-makers, organization members with program responsibility” (Cousins & Earl, 1992, p. 399). Participatory evaluation connotes that both evaluators and stakeholders are directly involved in the production of evaluation information (Cousins & Earl, 1992). Stakeholder participation in the evaluations conducted by Greene (1988) was defined as shared decision making. In this evaluation, stakeholders were actively engaged with primary responsibility for determining the content of the evaluations, whereas evaluator responsibilities were directing and guiding the evaluation process, and maintaining technical quality. This participatory approach resulted in increased instrumental, conceptual, and persuasive use of evaluation findings. Although process use was not explicitly defined and studied at the time, Greene identified uses from participation in the evaluation process: 1) learning more about the program and the organization through developing a deeper understanding of how the program works, a broader view of its key issues, and more  	
    insight into the perspectives of decision-makers; 2) learning about and developing favorable attitudes towards evaluation through learning how to think critically about the program and evaluation activities; and 3) stakeholders who participated in the evaluation process developed a greater sense of acceptance and ownership of the results. Recently, Lopez (2010) studied the relationship between process and findings use in personnel evaluation. She concluded that involvement in the evaluation process led to process use, which played an important role in the overall effect of the evaluation on stakeholders. She also found that process use was further strengthened through the use of evaluation findings. Similarly, Amo (2009) studied the relationship between process and findings use in a government context at a macro (government) and micro (organization) level. She found that, at both levels, participation in the evaluation process was an important predictor for evaluation use, although in the macro level participation and process use were found to be important predictors of findings use where participation took precedence over process use. These studies show the importance of stakeholder involvement in use of evaluation process and findings. Although the level and intensity of stakeholder involvement has been mentioned in the literature (for example, Greene, 1988), there still is much room for further research that empirically tests relationships among process use, use of findings, and stakeholder involvement. Summary After more than 40 years of research on evaluation utilization the field has established an agreed upon definition of use (Henry & Mark, 2003); however there still is some conflict about what terminology is appropriate. It is also agreed that there are two categories of use (i.e. process use and use of findings) and types of use within findings use (i.e. instrumental, conceptual, symbolic, and legitimative) (Alkin & Taut, 2003). The categories of evaluation use, as well as  	
    the types of findings use, are well established; however, there still exist mild confusion and blurred lines between process use and conceptual use of findings (Amo, 2009; Leviton & Hughes, 1981; Shulha & Cousins, 1997). Generally, research in evaluation use has been drawing conclusions about methods and factors to increase use of findings; however, many of these studies are reflective, based on case studies or lessons learned rather than empirical research. Literature indicates that evaluators should engage stakeholders in the evaluation process to increase the possibility of experiencing process use and use of findings. Although research on use is one of the areas most often researched in the field of evaluation, the quality of research on use is poor in comparison to research conducted in other fields (Lopez, 2010). There is much room to explore the consequences of involvement in the evaluation process, which may enable or enhance the use of findings.  	
Chapter 3

Research Methodology and Methods

The literature shows that the relationship between stakeholder involvement and evaluation use in general is yet to be empirically studied and thoroughly examined. The present study is meant to examine the extent of use of evaluation findings and process as mediated by stakeholder involvement in the WOW project. Through the examination of one case, the evaluation of the WOW project, a description of the effects of stakeholder involvement in the evaluation process on use of findings and process is provided by examining the interactions among the following variables:
•  process use and stakeholder involvement,
•  factors affecting use of evaluation findings, and
•  process use, findings use, and involvement.

The Delphi technique was used to collect evaluation stakeholders' perspectives on evaluation use, involvement, and factors affecting use of findings. This method is suitable for reaching consensus among a group of people, but it has also been used to obtain opinions on a social phenomenon (Linston & Turoff, 2002), to determine the quality of a program (Hirner, 2008), or to examine stakeholder involvement in developing an evaluation (Smalley, 2000), to name a few applications. A survey based on relevant factors in studies of evaluation use was developed to collect stakeholders' perceptions. This chapter is divided into two main parts: a section outlining the research methodology and an explanation of the case, and a second section describing how the Delphi technique was used for this study.
    Case Study Methodology Case studies are a choice of what is to be studied, or what can be called a unit of analysis, that may be examined in whatever way a researcher sees fit. The epistemological question that is accompanied by the case study is: What can be learned from a single case? (Stake, 1995). A case seeks both common and mundane instances such that data reflect a case’s activities and functions, historical background, physical setting, economic and legal issues, and persons. A researcher’s interest in a case may be intrinsic or instrumental. An intrinsic case study is concerned with obtaining a deeper understanding of a case, such as a particular program, person, or organization, and it does not aim to reach a generic or general understanding of a construct. On the other hand, instrumental case studies examine a case because this case will provide an understanding, an insight, or generalization of a phenomenon or construct. The case itself may not be of primary interest; however, it plays a supportive role and facilitates understanding of something else. In instrumental case studies issues are dominant, whereas in intrinsic studies the case is dominant. For the purpose of this research, I utilized the evaluation of the WOW project to answer the research questions. The evaluation of this particular program facilitates conducting this research, but the project is not of particular interest; therefore, I used the WOW project as an instrumental case to understand evaluation use. The Case: “Working on Walls” (WOW) Project The WOW project. I am currently involved in an evaluation of the WOW project, which is in its third year. This project has been funded for six years by the NSERC-CREATE project at the University of British Columbia, Faculty of Science, Botany Department. WOW aims to increase graduate students’ and post-doctoral fellows’ (PDF) employability outside academia.  	
The project provides opportunities that are believed to have a direct effect on expanded employability, such as engaging in multiple laboratory experiences, travel, publications, professional and technical skills workshops, organizing symposia, grant writing, mentorship, and being part of a connected community of scholars and graduate students focused on plant cell biology. Currently, the project participants are 19 graduate students and post-docs (3 master's students, 12 PhD students, and 4 PDFs, referred to as WOWees) and 8 principal investigators (PIs), including the project directors. Within the last year a group of new graduate students joined the program, some since September 2011 (2 master's students, 3 PhD students, and 1 PDF) and others in January 2012 (3 PhD students). One PDF left in February 2012. All of these members were included in this study, and Table 1 summarizes participants' gender and roles.

Table 1
Gender and Role of WOW Participants

Role                            Male   Female   Total
Principal Investigator (PI)       7      1        8
Post-Doc Fellow (PDF)             3      1        4
Graduate Student                  5     10       15
Project Manager                   0      1        1
Total                            15     13       28
Evaluation of WOW. The WOW evaluation was conceptualized as a practical participatory evaluation intended to be utilization oriented and problem solving, and to provide formative feedback to the project (Cousins & Whitmore, 1998). The conditions for using practical participatory evaluation are: a formative, improvement-oriented context; reasonable consensus on issues; organizational commitment to evaluation; and sufficient resources to conduct the evaluation. The WOW context met all of these conditions. Like all practical participatory evaluations, trained evaluators worked in partnership with non-evaluator stakeholders to conduct the WOW evaluation. The main evaluation question was, "To what extent does the mentoring model, networking system, and scientific and professional skill development contribute to enhancing the employability in a wide range of career paths?" To plan the evaluation, a logic model was developed collaboratively with program directors. As a consequence, the primary evaluation question was divided into four parts: mentoring; networking; scientific skill development; and professional skill development. A fifth question was added regarding the appeal of the program for graduates and post-docs who are looking for training. The evaluation utilized a variety of data collection methods to answer each part of the evaluation question. (See Tables 2 and 3.) Data collection strategies were discussed with the principal investigators first, but all WOWees had at least some input about what strategies might work within the WOW context. The evaluation team triangulated these data with reviews of WOW documents and observations of project meetings. As is appropriate for a practical participatory evaluation, the evaluation was primarily formative. As shown in Table 3, in the first year our reporting consisted of summarizing the data we had acquired thus far and providing formative feedback to the program director. Our first year
evaluating the WOW project was very informative for the evaluation team. The team learned about the context of the program and the nature of the stakeholders, in addition to piloting our data collection methods. For the second year of the evaluation, the evaluation team reported evaluation findings in two forms. The first form was very well accepted; however, the second did not receive much attention. At the time of the data collection for this study, the evaluation of the project was ongoing and my involvement in the evaluation had not changed. Data collection for the evaluation and for this research were occurring at the same time.
    Table 2 Evaluation Questions, Data Sources, and Collection Methods for the WOW Project Evaluation Question  Data Source  Data Collection Method  1- To what extent does the mentoring model contribute to enhancing trainees’ employability in a wide range of career paths and career success? How are WOW members mentoring each other?  Mentoring activities The Mentoring Log: Date, who met, done by WOW and kind of mentoring (job-related members or skill development-related) using Google Docs (Once every academic term or three times per calendar year) SNA (Social network analysis): Using Lime Survey and UCINET (Two times per calendar year)  What are faculty & trainees’ perceptions of the effectiveness of the mentoring model?  Faculty & trainees’ perceptions of the effectiveness of comentoring  Trainee Focus Group and PI Individual Interviews: Describe what the mentoring looks like, and do they think it is valuable or effective? (At the end of academic year and post-employment)  Are trainees seeking and obtaining diverse careers?  Types & number of jobs trainees have applied for and the success rate  Annual Report: List of trainees’ job application and their success (At the end of academic year)  	
   	
   	
   	
    	
    Evaluation Question 2- To what extent does the networking system contribute to enhancing trainees’ employability in a wide range of career paths and career success?  Data Source  Data Collection Method  To what extent are WOW members networking with each other?  Relationships among all team members  SNA: Describe and examine the relationships (Two times per calendar year)  What are the faculty and trainees’ perceptions of the effectiveness of the networking system?  Faculty and trainees’ perceptions of the effectiveness of networking  Trainee Focus Group and PI Individual Interviews: Describe rotations and international training, and do they think it is valuable/effective? (At the end of academic year and post-employment)  Whether the postdocs are working as subproject managers  Annual Report: Trainees’ scientific skill development activities (At the end of academic year)  3- To what extent does the scientific skill development contribute to enhancing students’ employability and a wide range of career paths? To what extent are the trainees involved in the scientific skill development activities?  Whether graduate students’ are involved in fullyintegrated research activities and their progress in obtaining publications and scholarships What are the faculty & trainees’ perceptions of the effectiveness of the scientific skill development activities?  	
    Faculty & trainees’ perceptions of the effectiveness of scientific skill development activities  Trainee Focus Group and PI Individual Interviews: Do they think the scientific skill development activities are valuable and effective? (At the end of academic year and post-employment)  36	
    Evaluation Question 4- To what extent does the professional skill development contribute to enhancing students’ employability and a wide range of career paths?  Data Source  Data Collection Method  To what extent are trainees involved in other professional skill development activities?  The types and number of trainees’ professional skill development activities  Annual Report: Date, name, and description of professional skill development activities (At the end of academic year)  What are the faculty & trainees’ opinions on the effectiveness of the professional skill development activities?  Faculty & trainees’ opinions on the effectiveness of the professional skill development activities  Trainee Focus Group and PI Individual Interviews: Do they think the professional skill development activities are valuable and any effective? (At the end of academic year and post-employment)  The number of student applications and reasons for joining WOW  Student Applications and statements of intent (After admission; kept track of by project manager)  5- To what extent is WOW program attractive to those who are seeking graduate / post-doc training? Is the number of student applications to WOW program increasing every year, and if yes, why?  	
    Table 3 Description of WOW Project Formative Reports Format  Audience  Focus  Products  After the first year (September 2010) PowerPoint presentation  All the WOW members during a monthly meeting  Summary of findings about a course that was instructed by a group of WOW PIs for trainees  The presentation resulted in a discussion between PI’s and trainees about project improvement ideas.  Word All the WOW document members sent that primarily through email utilized graphs.  Summary of data that had been collected for the first year in the project (Mentoring log, Interviews, Annual Report)  The graphic display of data was not well received by the project directors. A follow up meeting with the program director clarified misunderstandings.  PowerPoint presentation  The social networks that have been formed during the year and our analysis of these findings.  Their feedback concluded that the report was interesting and reflected the networks they are observing.  First presented to the project directors then to the whole group during a monthly meeting  After the second year (September 2011) PowerPoint presentation  	
    First presented to the project directors then to the whole group during a monthly meeting  The social networks that have been formed during the year and our analysis of these findings.  Their feedback concluded that the report was interesting.  38	
    Format PowerPoint presentation  Audience Sent to project directors by email, but the evaluation team did not have the chance to present verbally  Focus Issues that were evident in the data (Observations, Mentoring log, SNA, Annual Report, and Interviews) collected throughout the length of the project.  Products Evaluation team did not have the chance to discuss the findings with the project directors or WOW members even after multiple requests.  Stakeholder attitudes towards evaluation. The project PIs chose to include an evaluation component, although the funder did not require a formal evaluation. It is reasonable to infer that the PIs are receptive and open to evaluation and see some potential value for the project. Working relationships with the project have been collaborative and open, often characterized by discussions about the appropriateness of methods and the meaning of results. For example, the PIs wanted to use a control group for both the social network analysis and yearend WOWee summary of activities. Discussions between the evaluation team and the PIs have addressed these issues in ways that seem satisfactory to the project and the integrity of the evaluation process. Another example is the negotiation over the formatting of the first year interim report, which one of the PIs and the project manager did not find informative. During the process of this evaluation, especially towards the end of the first year there seemed to be more understanding of the evaluation and acceptance of our efforts. For example, throughout the second year the response rates to the evaluation requests of data collection were usually submitted in a timely manner. As the third year has started, PIs and WOWees who have spent some time in the project seem very much at ease with the evaluators. They are comfortable talking with the evaluators about the evaluation; what they most like and dislike about our efforts, how they think we can improve their experience with the evaluation, asking about  	
    unclear issues in the evaluation, and suggesting ideas to examine and explore as part of the evaluation that seems interesting to them. However, this openness does not extend to newer WOWees, who are still unfamiliar with who we are, what we do, and how to approach us. Stakeholder involvement. Participants in this research are those who are directly involved in the evaluation and/or affected by the evaluation findings; that is, graduate students, PDFs, PIs, project directors, and the project manager. In the past two years there have been different levels of involvement in the evaluation, as observed by the evaluators. The project directors were involved in the evaluation process from the beginning; seeking advice on how to do evaluation and generating ideas about kinds of data the project might find useful, for example social network analysis. Graduate students, PDFs, and PIs have been primarily involved in the data collection procedures and discussions of evaluation findings. To date, three presentations of evaluation findings have been conducted, two in a monthly meeting where all WOW participants attended and one for program directors. In evaluation presentations where the whole group was present, stakeholders were interested in the results, and these results have stimulated discussions related to improving the project. Similarly when presentations were conducted for program directors, they shared the findings with the whole group. Evaluators’ Role. The evaluators of the project were two graduate students mentored by a senior evaluator from the Faculty of Education. The graduate students took the position of research assistants but were responsible for conducting the evaluation. I joined the evaluation team, which included one graduate student and senior evaluator at the time, halfway through the first year of the WOW project and evaluation. I have been involved in the evaluation of the project for over two years at the time this research was conducted.  	
Evaluators' responsibilities consisted of collecting data, educating stakeholders about evaluation in general and the evaluation of the project in particular, attending monthly meetings, and interpreting and reporting evaluation information. Most data collection was conducted online, except for observations of WOW monthly meetings, individual and focus group interviews, and meetings between the evaluators and project directors. Evaluators kept observation journals of meetings they attended, documentation of emails, and other material sent through email such as meeting agendas. My participation with the project was based on learning about evaluation through practice. I was also considered a trainee in the WOW project in terms of receiving the same benefits as other graduate students.

The Delphi Technique

The Delphi technique evolved from the Delphi Project conducted by Dalkey and Helmer (1963) when working for the RAND Corporation in their efforts to apply expert opinion to issues related to the U.S. Department of Defense. The primary purpose of the Delphi technique was to obtain consensus of opinion from a group of experts through a series of questionnaires controlled with feedback (Dalkey & Helmer, 1963). The structure of the technique is intended to allow access to the positive attributes of interacting groups, such as obtaining knowledge from a variety of sources, while preventing the negative aspects that are attributable to social, personal, and political conflicts (Rowe & Wright, 1999). This technique allows input from a larger number of participants than could be included in a group meeting, and from members who are geographically dispersed. The Delphi technique can be characterized "as a method for structuring a group communication process so that the process is effective in allowing a group of individuals, as a whole, to deal with a complex problem" (Linston & Turoff, 2002, p. 3), where structured
communication involves providing feedback on others' contributions, assessment of the group judgment, opportunity to revise one's views, and anonymity of responses. Similarly, Rowe and Wright (1999) characterize the classical Delphi technique by four key features: 1) anonymity allows participants to express their opinions without pressure from others in the group, so that contributions are judged on merit rather than on who provided the feedback; 2) iteration allows participants to revise their opinions in light of the progress of the group's opinions; 3) controlled feedback informs participants of others' opinions; and 4) statistical aggregation of group responses allows for quantitative analysis of the data.

Although the Delphi technique was originally utilized as a forecasting tool, it has a wide variety of applications (Gupta & Clarke, 1996; Linston & Turoff, 2002), such as assessing possible budget allocations, exploring planning options, curriculum development, evaluating policy issues, developing causal relationships in complex economic or social phenomena, studying perceived human motivations, and studying priorities of personal values. It is not, however, the nature of the context that determines the appropriateness of the Delphi technique; rather, it is the particular circumstances that identify the need for a group communication process (Linston & Turoff, 2002). Usually, one or more of the following properties may lead to employing Delphi:

• "The problem does not lend itself to precise analytical techniques but can benefit from subjective judgments on a collective basis;
• The individuals needed to contribute to the examination of a broad or complex problem have no history of adequate communication and may represent diverse backgrounds with respect to experience or expertise;
• More individuals are needed than can effectively interact in a face-to-face exchange;
• Time and cost make frequent group meetings infeasible;
• The efficiency of face-to-face meetings can be increased by a supplemental group communication process;
• Disagreements among individuals are so severe or politically unpalatable that the communication process must be refereed and/or anonymity assured;
• The heterogeneity of the participants must be preserved to assure validity of the results, i.e., avoidance of domination by quantity or by strength of personality" (Linston & Turoff, 2002, p. 4).

The Delphi technique continues to be used as a method for forecasting and supporting decision-making (Landata, 2006). In educational research, the Delphi technique has been used for three primary purposes: identifying educational goals and objectives, developing curriculum and campus planning, and creating criteria for evaluation (Eggers & Jones, 1998). Although the Delphi technique has been utilized to extract stakeholder perceptions and opinions about curriculum and educational programs (Hirner, 2008; Smalley, 2000), this method is rare in program evaluation research (for example, Briedenhann & Butts, 2006; Garavalia & Gredler, 2004).

The Delphi Process

The Delphi technique involves sending a questionnaire to a respondent group, summarizing the results and, based upon the results, developing a new questionnaire for the same respondent group. The second questionnaire is sent to the respondent group, who are usually given at least one opportunity to reconsider their original answers based on examination of the group response (Linston & Turoff, 2002). The results from the second questionnaire help in the formulation of the third questionnaire. This iterative process is repeated until consensus is reached or until the number of returns for each round decreases. A typical Delphi study begins with a set of open-ended questions to collect information from a panel of experts before moving to consensus building, whereas a modified Delphi study moves directly to consensus building if a set of possible solutions already exists (Hirner, 2008). Researchers in the field often point out that there is no typical Delphi; rather, it can be modified to fit the research circumstances and needs (Linston & Turoff, 2002; Skulmoski, Hartmen & Karhn, 2007).
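The cycle just described can be made concrete with a short sketch. The responses and the drop-out pattern below are invented purely for illustration and do not come from this study; the stopping conditions simply mirror the two mentioned above (the group converges, or returns begin to fall off).

    # Minimal sketch of the iterative Delphi cycle, using invented 5-point
    # responses (1 = strongly agree ... 5 = strongly disagree) to one item.
    from statistics import median

    # Each inner list is one round's returned responses.
    returned_rounds = [
        [1, 2, 4, 2, 3, 5, 2, 3, 2, 4],  # round 1: wide spread of opinion
        [2, 2, 3, 2, 3, 4, 2, 3, 2],     # round 2: one non-return; feedback pulls opinions together
        [2, 2, 3, 2, 2, 3, 2, 2],        # round 3
    ]

    previous_returns = None
    for number, responses in enumerate(returned_rounds, start=1):
        feedback = median(responses)  # summary circulated with the next questionnaire
        spread = max(responses) - min(responses)
        print(f"Round {number}: {len(responses)} returns, median={feedback}, range={spread}")

        # Stopping conditions described above: the group has converged, or
        # returns are falling off from round to round.
        if spread <= 1:
            print("Responses have stabilized; stop iterating.")
            break
        if previous_returns is not None and len(responses) < previous_returns:
            print("Fewer returns than the previous round; consider stopping.")
        previous_returns = len(responses)

In this study a fixed number of three rounds was used instead of an automatic stopping rule, as described later in this chapter.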
Deciding who is included in a Delphi panel is crucial to the design of the study. A Delphi panel may mix stakeholders (who are directly affected by the results of a study), experts (who are specialized or experienced in the area of study), and facilitators, as well as individuals who can provide alternative views (Scheels, 1975). Because Delphi is a tool that aids understanding or decision-making, it will only be effective if those who will ultimately act upon the results of the Delphi are actively involved (Clayton, 1997; Hasson, Keeney & McKenna, 2000). Since the panel is selected to apply its knowledge to a certain context, the selection of the sample of 'experts' often involves either purposive sampling or criterion sampling (Hasson et al., 2000). Group size varies, but 15-30 people for a homogeneous group and 5-10 people for a heterogeneous group are typical (Clayton, 1997). Because participants are questioned about the same topic repeatedly using a slightly modified questionnaire each time, the Delphi requires continued commitment and motivation from participants. Therefore, it is important that those who have agreed to participate have the time to commit to the process and maintain involvement until the process is completed (Hasson et al., 2000) in order to avoid attrition. The number of rounds of data collection depends on the time available, the research questions, and consideration of sample fatigue (Hasson et al., 2000). The classic Delphi technique had four rounds; however, there is evidence that either two or three rounds are preferred (Gupta & Clarke, 1996). Consideration must also be given to the level of consensus desired; thresholds of 51%, 70%, and 80% have been reported, although the stability of the responses across rounds is a more reliable indicator of consensus (Hasson et al., 2000).

Data from the first round of the Delphi are often analyzed using content analysis techniques (Skulmoski et al., 2007). Subsequent rounds are analyzed to identify convergence and changes in respondents' opinions through the use of descriptive and inferential statistics (i.e.,
central tendencies and levels of dispersion). This enables participants to see where their responses stand in relation to those of the group.

Advantages and disadvantages. A primary reason for the popularity of the Delphi technique is its strength as a planning, forecasting, and decision-making tool. It relies on a structured and indirect approach to quickly and efficiently elicit responses while simultaneously promoting learning among panel members (Gupta & Clarke, 1996). The Delphi method captures a wide range of interrelated variables and multidimensional features of complex problems. It also documents the opinions of the respondents while avoiding the drawbacks of face-to-face interaction, such as group conflict and individual dominance. Delphi also has some limitations, such as the limited value of feedback and consensus, and the instability of responses among rounds (Gupta & Clarke, 1996). Another disadvantage is the lack of criteria for distinguishing an expert from others, and the lack of sufficient evidence that the judgment of an expert is more valid than that of a non-expert (Gupta & Clarke, 1996).

The Use of the Delphi Technique in This Study

Based on Linston and Turoff (2002), the Delphi technique is appropriate for this study for several reasons. First, in the WOW project: 1) participants occupy different roles with varying degrees of experience and expertise; and 2) the anonymity of the participants must be preserved to assure the validity of the results. In the context of WOW, participants have different levels of authority in a group that includes graduate students, PDFs, professors, and a project coordinator. Conducting face-to-face meetings may lead to domination of the group's opinion by a few; therefore, obtaining individual responses anonymously is a good approach to eliciting stakeholder opinions about the evaluation (Goodman, 1987).
Second, unlike traditional survey approaches where larger samples are preferred, sample size in the Delphi method varies depending on the research question (Linston & Turoff, 2002). In a review of literature studying the effectiveness of the Delphi technique, Rowe and Wright (1999) reported the use of sample sizes ranging from 3 to 98 experts. For the current study, the sample size is 21, which is not large enough for a traditional survey. Lastly, because of the researcher's current and ongoing position as a junior evaluator of the WOW project, it is best to maintain distance from participants in order to preserve data integrity and objectivity.

Survey. The purpose of the survey was to gather information about stakeholders' perceptions related to evaluation practices within the WOW project in order to better understand how the evaluation is used. The survey was based on a comprehensive literature review and similar questionnaires on evaluation use (Amo, 2009; Weeks, 1979). I constructed all items included in the survey based on the literature on evaluation use, where each item or group of items reflected the findings of a certain study. Each item went through multiple versions and revisions until the final product was approved and a complete instrument was created. Respondents were asked to rate their level of agreement on 73 items reflecting four variables identified in the literature: findings use; process use; stakeholder involvement; and factors affecting findings use. To maintain anonymity, respondents were asked not to identify themselves in any way; however, they were asked to indicate to which role group they belong (i.e. PI, PDF, Graduate Student, Other). Appendix B shows the survey items and the corresponding literature upon which each item builds. As a maximum of five categories is adequate for most surveys, a five-point Likert-scale rating was used with the following responses (Strongly Agree, Agree, Neither Agree/Disagree,
    Disagree, and Strongly Disagree) (Preston & Coleman, 2000). Wakita, Ueshima, and Noguchi (2011) examined whether the number of options in the Likert scales influences the psychological distances between categories. They found that psychological distances differ with 7-point Likert scale but were not evident in 4- and 5- point Likert-scales. Delphi I. Unlike traditional Delphi, where the first round usually begins with gathering input from the participants and then formulating a survey for consensus building, the first round in this study attempted to achieve consensus. The initial survey was designed with an online survey tool, Survey Monkey, consisted of 73 Likert-scale items, and was sent electronically through email. Twenty-nine potential respondents received an email that included a brief description of the study and research questions and invited them to participate. Potential respondents were asked to click on the website URL to reach the survey. Completion of the survey was taken as implied consent. Delphi II. The second survey continued the process of consensus building using statistical feedback for each item. In this round, each WOW member received an updated survey showing each stakeholder’s group responses and the whole group response in the previous round, and statistical information. WOW participants were asked to review each item, consider the group response, and then re-rate the items, taking the information into account to consider reasons for remaining outside the consensus. This round gives participants an opportunity to make further clarifications of both the information and their judgments of the relative importance of the items. The following instructions accompanied the Delphi II survey: The purpose of the second round survey is for you to read the summary values from the previous survey and perhaps change (or not) your perception or idea about the uses of the evaluation based on the responses of others. What I am trying to do here is to see if there exists a certain amount of consensus on certain ideas about the uses of the evaluation in the WOW  	
project thus far. Reaching consensus is important for two reasons: first, it informs me about the uses that have been observed by the whole group and each sub-group (i.e. PIs, Trainees) and therefore shows me where more work, from our side, needs to be done. Second, it is important to achieve reliability, where larger agreement on each item denotes more reliable findings. In this survey each item includes statistical information about the responses from the past round, such that each item will include the median, inter-quartile range, mode, and percentage of agreement on the mode for the whole group and for each individual sub-group. Together, these tell you what most people answered and the extent of the differences among the total responses. All in all, for the previous round I received 20 responses, five of which included missing data and therefore were not included in this analysis. The statistics provided come from the sample size of 15: five PIs, and 10 PDFs and Graduate Students.

How to interpret the results:
1. First, a brief explanation of the statistics:
• Median is the 50th percentile, or the point below which fifty percent of the responses fall.
• Inter-quartile Range (IQR) is the distance between the 25th and 75th percentiles, measuring variability.
• Mode is the score that occurs most frequently.
• Percentage of agreement on the mode is the percentage of responses that agree with the mode.
2. With a median value ranging from 1 to 5:
• A median value of 2 or smaller with an IQR of 1.5 or smaller shows agreement.
• A median value of 4 or higher with an IQR of 1.5 or smaller shows disagreement.
• A median value of 3 and an IQR of 1.5 or smaller shows "Neither Agree/Disagree."
3. With a mode value ranging from 1 to 5:
• A mode value of 2 or smaller with a percentage over 50% shows agreement.
• A mode value of 4 or higher with a percentage over 50% shows disagreement.
• A mode value of 3 with a percentage over 50% shows "Neither Agree/Disagree."
4. When values differ from the above, consensus has not been reached.

Delphi III. After the completed second round surveys were received, items were analyzed, responses were compiled, a new statistical summary was prepared, and again the information was shared with participants. The same 73-item survey was sent for a last round of consensus building. Three rounds of data collection were judged to be enough, taking sample attrition and fatigue into consideration. Items that reached consensus of agreement and disagreement are included in the analysis. The analysis starts with understanding how evaluation use is perceived among WOW project
    members. Then statistical analysis that included descriptive statistics and logistic regression was performed to understand the relationships between and among the resulting agreed/disagreed items.  	
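Chapter 4 reports the outcome of that analysis. As a rough, self-contained sketch of the mechanics only, consensus items can be dichotomized into agree/not-agree indicators and the association between two items summarized in a two-by-two table; with a single binary predictor, the slope of a binary logistic regression equals the logarithm of that table's odds ratio. The data below are invented, not the study's responses, and the sketch also shows why an empty cell, which is easy to obtain with only a handful of respondents, makes the estimate unusable.

    # Invented example data: each entry is one respondent, coded 1 if they
    # agreed (a response of 1 or 2 on the 5-point scale) and 0 otherwise.
    import math

    involvement = [1, 1, 1, 0, 1, 1, 0, 1, 1]    # e.g., "I have received evaluation results"
    findings_use = [1, 1, 0, 0, 1, 1, 0, 1, 1]   # e.g., "The evaluation is useful to persuade funders"

    def crosstab(x, y):
        # Cross-tabulate two binary items into a 2x2 table of counts.
        table = {(1, 1): 0, (1, 0): 0, (0, 1): 0, (0, 0): 0}
        for xi, yi in zip(x, y):
            table[(xi, yi)] += 1
        return table

    table = crosstab(involvement, findings_use)
    a, b = table[(1, 1)], table[(1, 0)]
    c, d = table[(0, 1)], table[(0, 0)]
    print("2x2 table (involved, used):", table)

    if min(a, b, c, d) == 0:
        # An empty cell: the odds ratio, and therefore the logistic regression
        # slope (its logarithm), is not estimable with so few cases.
        print("Empty cell: estimate is not interpretable with this sample.")
    else:
        odds_ratio = (a * d) / (b * c)
        print("Odds ratio:", odds_ratio, "log odds ratio:", math.log(odds_ratio))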
Chapter 4

Research Results and Findings

This chapter describes how the Delphi method was performed and discusses the results in relation to the WOW project case. The Delphi survey elicited perceptions on evaluation use from the WOW project stakeholders. They reached consensus of agreement on one use of findings (use of the evaluation findings to persuade funders), two ways in which they were involved in the evaluation (data collection and receiving results), and nine factors affecting the use of evaluation findings (support of and commitment to the evaluation, discussion of results, interest in results, approachable and friendly evaluators, general involvement, importance of other information in addition to evaluation results, and use of other information in decision-making). Research participants also reached consensus of disagreement on two types of process use (learned or improved skills and increased commitment to the project). The identified items were treated as separate variables to understand how specific instances of use and involvement relate to one another. Logistic regression analysis was used to answer the following questions:
•  What impact does involvement in the evaluation process have on instrumental, conceptual, and symbolic use in the WOW project?
•  What is the causal relationship of involvement in the evaluation process, process use, and use of evaluation findings in the WOW project?
However, because of the inaccurate results of the Delphi, no substantive conclusion could be made regarding the relationships among process use, findings use, and stakeholder involvement, and therefore these analyses were not included in this study. In addition, the Delphi rounds did not identify any instances of process use. Although instances of involvement
were identified, no link could be made in this context between degree of involvement and process use to answer the following question:
•  To what degree do stakeholders in the WOW project need to be involved in the evaluation in order to experience process use?
Although some of the research questions remain unanswered, this study reveals the effects of the decision-making setting in an evaluation. The study data do answer the following questions: What are the effects of false statements made by individuals in an organizational role of a project on the evaluation of WOW? How do these statements affect perceived use of and experience with the evaluation of WOW? And how does variation in involvement in the evaluation process affect collective perceptions of process use among participants of the WOW project?

Results of the Delphi Study

Statistical feedback administered in the second and third rounds included the median, interquartile range (IQR), mode, and percentage of agreement on the mode. Consensus was determined using the median and IQR. The WOW group identified types of uses, involvement, and factors affecting use by eliminating 59 items from the survey. The response rate decreased by 12% from the first round to the second round and by 30% from the second round to the third round, indicating a level of fatigue and attrition.

Descriptive Statistical Analysis of Delphi Surveys I, II, and III

The median, interquartile range, mode, and percentage of agreement on the mode were calculated for the three survey rounds. The median indicates the point below which 50% of the responses fell; the IQR illustrates the distance between the 25th and 75th percentiles of the responses; and the mode shows the response that occurred most often, together with the percentage of responses that agree with the mode. The mode and percentage were added to provide information on what most respondents chose
and together showed whether consensus was achieved for each item. The median and IQR are appropriate substitutes for the mean and standard deviation when the data are ordinal (Glass & Hopkins, 1996). A median of 2 or less denoted that at least 50% of the responses indicated agreement or strong agreement with an item, and the IQR was used to show that the variation between the upper and lower quartiles was small. The Delphi results showed consensus of agreement when the median of the responses was 2 or less with an IQR of 1.5 or less. Similarly, consensus of disagreement was identified with a median value of 4 or more and an IQR of 1.5 or less. This method shows large-scale agreement with minimal variation (Fish & Busby, 1996).

Delphi I. Fifteen of the 29 potential respondents replied to the survey: 5 principal investigators (PIs) and 10 trainees (including PDFs and graduate students). Table 4 provides an example of descriptive statistical results from the survey and illustrates how the consensus changes through Delphi rounds. This table contains items representing findings use, stakeholder involvement, process use, and factors affecting findings use that were chosen because of their variation in consensus, whereas the rest of the items reached a neutral consensus or none at all. (See Appendix for the complete summary of statistics.) After the first round of data collection, three participants chose to drop out of the study and therefore were not included in successive rounds.
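As a concrete illustration of this decision rule, the sketch below applies it to a few invented sets of five-point responses; the item labels and values are made up for illustration and are not the study's data. It reports the same summaries used in Table 4 (median, IQR, mode, and percentage of agreement on the mode) and then classifies each item.

    # Minimal sketch of the consensus rule stated above, applied to invented
    # 5-point responses (1 = strongly agree ... 5 = strongly disagree).
    from statistics import median, mode

    def quartiles(values):
        # Simple split-halves approximation of the first and third quartiles.
        s = sorted(values)
        n = len(s)
        return median(s[: n // 2]), median(s[(n + 1) // 2 :])

    def classify(responses):
        med = median(responses)
        q1, q3 = quartiles(responses)
        iqr = q3 - q1
        if med <= 2 and iqr <= 1.5:
            return "consensus of agreement"
        if med >= 4 and iqr <= 1.5:
            return "consensus of disagreement"
        if med == 3 and iqr <= 1.5:
            return "neither agree nor disagree"
        return "no consensus"

    items = {
        "participated in data collection": [2, 2, 1, 2, 2, 3, 2, 2, 2],
        "evaluation improved my skills":   [4, 4, 5, 4, 3, 4, 4, 5, 4],
        "project was enhanced after SNA":  [1, 3, 3, 2, 5, 3, 4, 3, 2],
    }

    for label, responses in items.items():
        m = mode(responses)
        pct = 100 * responses.count(m) / len(responses)
        q1, q3 = quartiles(responses)
        print(f"{label}: median={median(responses)}, IQR={q3 - q1}, "
              f"mode={m} ({pct:.0f}%), {classify(responses)}")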
    Table 4 Descriptive statistics from Delphi I, II, III Items Findings Use 1- I feel the WOW project was enhanced after the first SNA evaluation feedback. Whole Group Statistics: PIs: PDFs & Graduate Students: 2- The evaluation is useful to persuade others, such as funders. Whole Group Statistics: PIs: PDFs & Graduate Students: 3- The evaluation is useful to justify program existence or continuation. Whole Group Statistics: PIs: PDFs & Graduate Students: Factors Affecting Findings Use 4- The evaluators discussed evaluation results with project community members. Whole Group Statistics: PIs: PDFs & Graduate Students:  	
    Delphi I Med/IQR/Mod/%a Cons.b  Delphi II Med/IQR/Mod/% Cons.  Delphi III Med/IQR/Mod/% Cons.  3/0/3/60 2/2/2/40 3/0/3/80  Neutral  3/1/3/50 3/2/-/-c 3/1/3/57.1  Neutral  3/1/3/66.7 3/1/3/62.5  Neutral  2/1/2/60 2/2/2/60 2/1/2/60  Agree  2/2/2/58.3 2/2/-/2/2/2/71.4  Noned  2/1/2/55.6 2.5/1/-/-  Agree  2/1/2/68.7 2/1/2/80 2/1/2/60  Agree  2/1/2/66.7 3/2/2/40 2/0/2/85.7  Agree  3/1/3/66.7 3/1/3/75  Neutral  2/1/2/60 2/1/2/60 2.5/1/2/50  Agree  2/1/2/66.7 2/1/2/80 2/1/2/57.1  Agree  2/1/2/66.7 2/1/2/62.5  Agree  53	
    Items 5- Evaluators are approachable and friendly. Whole Group Statistics: PIs: PDFs & Graduate Students: 6- You are interested in the evaluation results. Whole Group Statistics: PIs: PDFs & Graduate Students: 7- Other information is considered when making decisions. Whole Group Statistics: PIs: PDFs & Graduate Students: Stakeholder Involvement 8- I have been involved in planning the WOW evaluation. Whole Group Statistics: PIs: PDFs & Graduate Students: 9- I have been involved in deciding what data should be collected for the WOW evaluation. Whole Group Statistics: PIs: PDFs & Graduate Students:  	
    Delphi I Med/IQR/Mod/%a Cons.b  Delphi II Med/IQR/Mod/% Cons.  Delphi III Med/IQR/Mod/% Cons. Agree  2/2/2/40 2/2/1/40 2/2/2/40  None  2/0/2/75 2/1/2/60 2/0/2/85.7  Agree  2/1/2/66.7 2/1/2/62.5  2/1/2/53.3 2/1/2/60 2/1/3/50  Agree  2/1/2/50 2/0/2/100 3/1/3/57.1  Agree  2/1/2/66.7 2/1/2/62.5  Agree  2/1/2/46.7 2/2/1/40 2/1/2/50  Agree  2/2/3/41.7 2/1/2/60 2/2/3/42.8  None  1.5/2/1/44.4 2/2/-/-  Agree  4/2/4/40 4/2/4/60 4/2/5/40  None  3/2/4/41.7 2/2/2/40 4/2/4/57.1  Neutral  4/2/4/44.4 4/2/4/50  None  4/2/4/40 4/2/4/60 4/2/4/40  None  3.5/2/4/50 3/2/3/40 4/1/4/71.4  None  4/0/4/100 4/0/4/100  Disagree  54	
    Items 10- I have participated in some data collection for the WOW evaluation. Whole Group Statistics: PIs: PDFs & Graduate Students: 11- I have received evaluation results about the WOW project. Whole Group Statistics: PIs: PDFs & Graduate Students: Process Use 12- Involvement in the evaluation process and activities helped me understand what evaluation is all about. Whole Group Statistics: PIs: PDFs & Graduate Students: 13- Involvement in the evaluation process and activities helped develop or improve some of my skills such as data collection techniques. Whole Group Statistics: PIs: PDFs & Graduate Students:  	
    Delphi I Med/IQR/Mod/%a Cons.b  Delphi II Med/IQR/Mod/% Cons.  Delphi III Med/IQR/Mod/% Cons. Agree  2/1/2/66.7 2/2/2/80 2/2/2/60  Agree  2/0/2/66.7 2/4/-/2/0/2/100  Agree  2/0/2/100 2/0/2/100  2/0/2/73.3 2/1/2/80 2/1/2/70  Agree  2/2/2/58.3 2/2/2/60 2/2/2/57.1  None  2/1/2/77.7 2/1/2/75  Agree  2/2/2/46.7 2/2/2/40 2.5/2/2/50  None  2.5/2/2/41.7 3/3/-/2/2/2/57.1  None  3/2/3/44.4 3/1/-/-  None  4/1/4/40 4/1/4/60 3.5/2/3/40  Disagree 4/2/4/50 4/1/4/60 4/1/4/42.8  None  4/1/4/55.6 4/1/4/50  Disagree  55	
    Items 14- My involvement in the evaluation process increased my commitment to the project. Whole Group Statistics: PIs: PDFs & Graduate Students: 15- The evaluation process caused me to question the underlying assumptions of the project. Whole Group Statistics: PIs: PDFs & Graduate Students:  Delphi I Med/IQR/Mod/%a Cons.b  Delphi II Med/IQR/Mod/% Cons.  Delphi III Med/IQR/Mod/% Cons.  3/2/2/26.7 2/2/2/60 3/2/2/30  None  4/2/4/50 4/1/4/80 3/2/-/-  None  4/1/4/55.6 3.5/1/-/-  Disagree  4/1/4/40 4/1/4/80 3/1/3/50  Disagree 3/2/3/41.7 4/3/-/3/1/3/71.4  None  3/2/3/55.6 3/1/3/62.5  None  a. Abbreviation for: Median/ Inter-quartile range/ Mode/ Percentage of agreement on the mode b. Abbreviation for Consensus c. Multi-modal d. No Consensus  	
    As previously mentioned, Delphi results indicate consensus of agreement when the median is 2 or less and IQR is 1.5 or less, and consensus of disagreement when the median is 4 or more and IQR is 1.5 or less. For example, items 2, 3, 4, 6, 10, and 11, as shown in Table 4, reached consensus of agreement in the first round; whereas, items 13 and 15 reached a consensus of disagreement. Items that result in a median of 3 and an IQR of 1 indicate that most respondents neither agree nor disagree. Because the survey is an exhaustive list of possible uses of an evaluation, some items may not be applicable to the WOW project. Therefore, when most respondents neither agree nor disagree with an item, this may indicate that an item is not applicable. Although these items and others achieved consensus, they were still included in Delphi II and Delphi III to see whether responses changed with statistical feedback. In total, respondents in this round reached consensus of agreement on 29 items, which included 4 uses of findings items, 2 level of involvement items, 2 uses of the evaluation process items, and 21 items on factors affecting use of findings. Also, respondents reached consensus of disagreement on 3 items: one on stakeholder involvement and 2 on process use. After the Delphi I survey was completed it was discussed at a regular WOW monthly meeting, which gave the participants a chance to ask questions about the survey. Delphi II. The original 73 items were sent to the 26 members of the WOW group for a second round of consensus building. In this round participants were provided information about the outcome of the first survey, including the median, IQR, mode, and percentage of agreement on the mode for the whole group and the PI and Trainee sub groups. The participants were asked to reflect on each item in light of the first round results and indicate their agreement on the same 5-point Likert-scale ranges from 1 (strongly agree) to 5 (strongly disagree). Twelve of the 26 potential respondents responded: 5 PIs and others, and 7 trainees.  	
    Results from Delphi II (as can be seen in Table 4), differ from those of Delphi I. For example, the previous round reached consensus of agreement on items 2 and 11, whereas respondents did not reach consensus on the same items in Delphi II. In this round the number of items for which respondents reached consensus of agreement decreased to 11 items: 1 item reflecting uses of findings, 1 item reflecting level of involvement, and 9 items reflecting factors affecting use of findings. The respondents did not reach consensus of disagreement on any item in this round. Delphi survey termination can be built on sound statistical decisions, as described in Kalaian and Kasim (2012). However this is not possible in this context for two reasons: 1) the identity of the respondents is unknown, and 2) the number of respondents from the first and second round is not equal, making the comparison between these rounds more difficult. Also some literature found that three rounds are adequate to terminate the Delphi survey (Dalkey, 1969; Linstone & Turoff, 2002; Yousuf, 2007). Therefore a third round of Delphi was administered. Delphi III. The original 73 items were sent to the 26 participants for a third round of consensus building. In this round participants were provided information about the outcome of the second survey, including the median, IQR, mode, and percentage of agreement on the mode for the whole group and each sub group (i.e. PIs and trainees). The participants were asked to reflect on each item and indicate their level of agreement on the same 5-point Likert-scale. Nine of the 26 potential respondents submitted the survey: 1 PI, and 8 trainees. Because a single response was received from the PI sub-group, results for this sub-group are not shown in Table 4. Similar to Delphi II, some items changed their consensus for this round. For example, items 2 and 7 changed from no consensus in round 2 to a consensus of agreement in round 3. The  	
    change or stability of consensus on items throughout the three rounds of data collection can be seen in Table 4. In round three respondents reached consensus of agreement on 12 items. Nine of these items are a subset from the items respondents agreed on in Delphi II, 1 item was agreed on in Delphi I, 2 were newly agreed upon items (1 uses of findings item, 2 level of involvement items), and 9 were factors affecting use of findings items. In addition, there were 3 more items that reached consensus of disagreement: 1 level of involvement item and 2 process use items. In total 15 items reached agree and disagree consensus in the Delphi III survey: 12 items were agreed upon and 3 items were disagreed upon. Even though a traditional Delphi does not take items that did not reach agreement into consideration, for the purpose of this study I think it is important to understand types of uses that were not experienced by members of the WOW group. It must be noted that 8 out of 9 respondents to the last Delphi round are trainees, which means these results will mainly reflect trainee opinion on the evaluation. Items that reached agree or disagree consensus. The 73 items that were included in the Delphi survey were a part of several rounds of consensus building resulting in 12 items that reached a consensus of agreement and 3 items that reached a consensus of disagreement. Table 5 lists the items that reached Agree/Disagree consensus.  	
Table 5
Items that reached Agree/Disagree consensus from Delphi III

Findings use
1. The evaluation is useful to persuade others, such as funders. (Agreement)

Stakeholder Involvement
1. I have been involved in deciding what data should be collected for the WOW evaluation. (Disagreement)
2. I have participated in some data collection for the WOW evaluation. (Agreement)
3. I have received evaluation results about the WOW project. (Agreement)

Process Use
1. Involvement in the evaluation process and activities helped develop or improve some of my skills, such as data collection techniques. (Disagreement)
2. My involvement in the evaluation process increased my commitment to the project. (Disagreement)

Factors affecting findings use
1. Communication: The evaluators discussed evaluation results with project community members. (Agreement)
2. Evaluator competence: Evaluators are approachable and friendly. (Agreement)
3. The personal factor: You are interested in the evaluation results. (Agreement)
4. The personal factor: You are supportive of the evaluation. (Agreement)
5. Commitment to the evaluation: The project is committed to the evaluation. (Agreement)
6. Competing information: The evaluation evidence is one source of information the project receives. (Agreement)
7. Competing information: Other information is as important as evaluation results. (Agreement)
8. Competing information: Other information is considered when making decisions. (Agreement)
9. Involvement: The evaluation involves members of the project throughout the evaluation process. (Agreement)
Findings use. Fifty-six percent of respondents agreed that the evaluation is useful to persuade others, such as funders. This is symbolic use of findings, which is using evaluation to justify decisions that have already been made, to use the evaluation as a persuasion tool, or to justify the existence and continuation of a program.

Stakeholder Involvement. All respondents disagreed that they have been involved in deciding what types of data should be collected for the WOW project. This is accurate, since trainees did not take part in this activity; deciding the type of data collected was the responsibility of the project managers in collaboration with the evaluators. All respondents also agreed that they have participated in data collection for the WOW project. This is evident since project participants are involved in different data collection procedures (i.e., interviews, social network survey, mentoring log, annual progress report) throughout the lifetime of the project. The respondents agreed that they have received evaluation results about the WOW project. For the past three years of the project, evaluators have presented the evaluation findings to the project managers annually. Social network analysis (SNA) has received much attention, and a PowerPoint presentation is the medium by which the SNA findings were shared in a WOW monthly meeting. Most of the project participants usually attended the WOW monthly meeting, which is why 78% of respondents agree that they have received evaluation results.

Process Use. The respondents disagreed that involvement in the evaluation process and activities helped develop or improve some skills, such as data collection techniques. Although respondents agreed that they have been involved in data collection for evaluation information and exposed to evaluation results, 55.6% agreed that this type of involvement did not generate a learning experience that yielded developing or improving new skills. The same percentage of respondents with the same level of involvement also disagreed that the process increased their
    61	
Factors affecting findings use. Sixty-seven percent of the respondents agreed that the evaluators discussed evaluation results with project members. This is a result of the annual presentations conducted by the evaluators at the WOW monthly meeting. The same percentage of respondents also agreed that they were interested in the evaluation results. Most respondents (66.7%) agreed that evaluators were approachable and friendly. The project evaluators were graduate students learning about and practicing evaluation, and were therefore considered trainees in the WOW project. This was not the case in the early stages of the project, when the evaluators were perceived as outsiders by other trainees and as employees by the project manager and project coordinator. However, the evaluators made an effort to change this perception; for example, one evaluator participated in social activities with WOW trainees. In addition, communication, inquiry, and critique were constantly encouraged by the evaluators. Most respondents were also committed to (56%) and supportive of (67%) the evaluation, and generally felt they were involved in the evaluation (56%). In the literature, these three factors have been shown to affect the degree of findings and process use. The third round shows that respondents believe there is other information, different from the evaluation (56%), which is just as important (56%) and is used to make decisions (67%).

The Relationship among Findings Use, Process Use, Stakeholder Involvement, and Factors Affecting Findings Use

In order to answer the research questions, an attempt was made to study the relationships among the items on which respondents reached consensus of agreement or disagreement in Delphi III (see Table 5) through binary logistic regression.
Relationships between and among these categories were found to be positive and significant, except for the relationship between findings use and process use, which, although significant, was found to be uninterpretable. This was the result of the nature of the data and the small sample size, which led to the creation of empty cells. This problem occurs in logistic regression when there are few cases and may produce very large estimates of parameters and standard errors (Tabachnik & Fidell, 2007), which was the case here; a brief illustrative sketch of this problem is given at the end of this chapter. The relationships that were examined are as follows:

• Between use of the evaluation findings to persuade funders and use of the evaluation process to increase commitment to the WOW project.

• Between use of the evaluation findings to persuade funders and use of the evaluation process to learn or improve skills.

• Between involvement in the form of receiving evaluation results and use of the evaluation to persuade funders.

• Between involvement in the form of receiving evaluation results and use of the evaluation to learn or improve new skills on one hand and, on the other hand, to increase commitment to the project.

• Respondents' perceptions of the factors that affect the use of evaluation findings, which revolve around communication of results, evaluator competence, commitment to the evaluation, competing information, stakeholder interest in and support of the evaluation, and stakeholder perception of general involvement in the evaluation process. These factors, although identified as important in this context, did not necessarily affect the persuasive use of evaluation information.
• Among use of the evaluation to persuade funders, to increase commitment, to learn or improve skills, involvement in the form of receiving results, and a group of factors that were found to exist in the WOW context (discussion of results, personal interest in the evaluation results, and perceived involvement in the evaluation).

Although the results were statistically significant, the meaningfulness of the values was questioned for an important reason. The type of findings use identified as the most observed type of use in the WOW context, namely use of the evaluation to persuade funders, is inconsistent with reality. This led me to believe that the relationships between persuasive use of the evaluation and other variables cannot have substantive meaning. Use of the evaluation as a persuasion tool is inaccurate because the evaluation was in its essence formative and meant to be used. Evaluation was introduced to the project not as a requirement of the funders but out of the project managers' personal interest in developing a successful project. The project managers requested a formative evaluation to ensure continuous feedback on the ongoing project from the evaluators. The evaluation was in fact used in many different ways; however, this was not captured by the survey. For example, the project managers requested a specific and complicated type of analysis to study the networks formed as a result of being part of the WOW project. This information has been used to motivate and encourage activities related to the WOW project's academic research focus and social activities outside the academic realm. Another issue is the lack of perceived process use. Although process use is a type of use that should be observed even with minimal involvement, in this case no type of process use was reported. In the context of the WOW project, there have been many occasions where the evaluators talked about evaluation practice and might have potentially educated stakeholders about the evaluation, both in the context of WOW and more generally.
For example, this might have occurred during interviews, WOW monthly meetings, and smaller meetings between evaluators and project managers. Nevertheless, this study was also unable to capture such instances of use. Process use of the evaluation should also be evident given that stakeholders, and PIs in particular, had an opportunity to learn about social network analysis. Over time, the PIs were able to understand the social network maps and parameters, and how they related to the project goals, without the evaluators' explanations. Although this evaluation was not participatory in the sense that stakeholders conducted the evaluation, it was participatory in the sense that all evaluation decisions were made with project directors. It was also participatory in its inclusion of stakeholders in data collection, discussion and distribution of results, and teaching of social network analysis. This degree of participation and involvement in the evaluation process should have sparked at least a small amount of process and findings use, especially given that stakeholders' general perception of involvement was positive. Additionally, the participants identified eight factors that have been shown to enhance findings use, even though their effects are not reflected in the results of this study. These factors are, in fact, relevant to the project and have been observed by the evaluators.

Summary

The data collection rounds revealed trainees' perceptions of and experiences with the evaluation of the WOW project. The Delphi survey resulted in 15 items on which participants reached agree or disagree consensus and that shaped how the evaluation was perceived and experienced by stakeholders of the WOW project. The findings use of the evaluation in this context is persuasive use, which is to persuade funders to continue funding the project. This type of findings use is not identified accurately, since it was the project managers' personal interest in improving the project as it progresses that initiated the evaluation of the project.
Because of this, the correlational relationships between and among the variables identified in the third round of the Delphi held no substantive meaning and were discarded. Similarly, no instances of process use were identified as a result of this study, even with the identification of types of involvement and the general perception that involvement in the evaluation process was positive. This result is inconsistent with reality, since process use has in fact been observed. Lastly, the availability of factors that have been found in the literature to positively affect the use of findings had no measurable effect in this study.
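To make the empty-cell and small-sample problem described in this chapter concrete, the following is a minimal Python sketch using fabricated illustrative data rather than the study's responses; the variable meanings, the nine cases, and the use of statsmodels are assumptions of the sketch, not a reproduction of the analysis reported above.

```python
# A minimal sketch of why a 2x2 relationship estimated from very few
# respondents gives unstable logistic-regression estimates. Data are fabricated.
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical binary agreement indicators for nine respondents:
# x = agreed they received evaluation results, y = agreed the evaluation
# is useful to persuade funders.
x = np.array([1, 1, 1, 1, 1, 1, 1, 0, 0])
y = np.array([1, 1, 1, 1, 1, 0, 0, 1, 0])

print(pd.crosstab(x, y))        # three of the four cells hold only one or two cases

result = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
print(result.params)            # the slope estimate
print(result.bse)               # its standard error is large relative to it
# If a cell were empty (e.g., no respondent with x = 1 and y = 0), the maximum
# likelihood estimate would not exist and statsmodels would flag
# (quasi-)complete separation; this is the situation Tabachnik and Fidell
# (2007) warn about for sparse data.
```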
Chapter 5
Discussion

This study measured the degree of use in an evaluation of an educational project aimed at increasing the employability of graduate students and post-doctoral fellows. Case study and Delphi survey approaches were used to investigate project members' perceptions of the types of uses observed, the types of involvement, and the factors that affect use of findings. The following sections discuss the results, the limitations of this study, and implications for evaluation practice.

Findings Use

The iterative rounds of data collection showed that one type of findings use was observed in this context, namely symbolic use. This is the use of evaluation findings as a persuasion tool, to support decisions that have already been made, and to justify program continuation. In the case of the WOW project, the type of findings use reported is the use of evaluation information to persuade funders to continue funding the project. The literature on evaluation utilization has reported the symbolic use of evaluations (for example, Barrios, 1986; Bober & Bartlett, 2004; Marra, 2003); however, in this case study the reported use is inaccurate. The WOW project has used the evaluation findings in many ways, and using findings as a persuasive tool is not one of them. The analyzed responses in this study primarily reflect trainees' perceptions, and they perceived this persuasive use of evaluation findings. This may be a result of the emphasis, during WOW monthly meetings, on the importance of the evaluation and the consequent impression that the evaluation was required by the funder. Although some trainees, who had been involved in the project for some time, knew that the evaluation was designed to help make the project better and that their feedback was the means to identify strengths or weaknesses of the project, the rhetoric among project members may have led to the creation of this persuasive use belief.
This could have been exacerbated by the fact that newer trainees knew even less about the evaluation. Looking back at the data, interestingly enough, the one PI who responded to this round agreed that the evaluation was a funder's requirement, while 50% of the trainees agreed and 50% neither agreed nor disagreed with this type of use. This suggests that the belief that the evaluation of the project was a funder's requirement was common among trainees, and perhaps among PIs too. Another reason why persuasive use was the only type of reported use may be that most decisions regarding the project, whether evaluation information was taken into consideration or not, were made without the inclusion of trainees. The formative nature of the evaluation and the at least annual presentation of evaluation findings did not seem to affect respondents' perception of use. This could be a result of a disconnection between trainees, PIs including project managers, and decisions made as a result of the evaluation information. To illustrate this point, I will give an example where the disconnection between trainees and decisions made by project managers is apparent. When the evaluation of the project started, trainees had no idea why the evaluation was being conducted and were reluctant to share information with evaluators. As a result, the evaluators created a document that explained what the evaluation was about, what it entailed, and what types of data were collected and how frequently, which helped ease discomfort and create an atmosphere of acceptance for the evaluation efforts. The decision to include an evaluation was never discussed with trainees. Even if this decision was not a trainee concern, they were not informed that it was part of the project until they met the evaluators. At the same time, future steps, issues that might be addressed, or decisions that had been made were seldom discussed in monthly meetings. This strengthens the conclusion that there was a considerable amount of disconnection between the purpose of the evaluation and any decisions that were made as a result of it.
As mentioned previously, most respondents to the last round of the Delphi survey were trainees who did not participate in decision-making. This separation, along with the rhetoric created, may have influenced this finding.

Process Use

Respondents did not reach consensus regarding the types of process use they experienced; 12 out of 15 items concerning process use did not reach consensus. The variation among project members in their involvement in the evaluation process, and in their intentions behind being involved, may explain the difficulty in reaching consensus. Although process use is a product of involvement in the evaluation that leads to a learning experience, this study did not show the link between the two constructs. At the same time, most participants agreed that they did not experience two types of process use, namely learning or improving skills and increased commitment. The first type of process use, learning or improving skills, seems inconsistent with what happened during the evaluation process. Participants in the project learned about social network analysis and about program evaluation. Whether they were able to link this type of learning to any use is a different consideration. Most of the respondents to the last round of data collection were trainees who may not have linked the evaluation process to any use in their area of study. Nevertheless, if asked about evaluation and social network analysis, I am sure they would have had a brief and simple, but correct, explanation of what they are. The second type of process use that reached consensus of disagreement is increased commitment to the project as a result of the evaluation process. This could have two meanings.
Either project participants are committed to the project regardless of the evaluation process, or the evaluation as it is being conducted would never increase commitment to the project. Both explanations are plausible, since project managers are committed to the evaluation process regardless of its findings, whereas other participants may have a mixture of both feelings. Although the project participants were involved in the evaluation in many ways, their involvement in this study could not be positively linked to process use.

Relationship between Process Use and Findings Use

The research on evaluation use has shown both the existence and the non-existence of a relationship between process and findings use. For example, Lopez (2010), in her study of the relationship between process use and findings use in personnel evaluations, found that process use played an important role in the overall impact of the evaluation; the same areas that affected stakeholders through process use were improved with the use of evaluation findings. Unlike the previous findings, Amo (2009) found that development of knowledge and skills is not sufficient to enhance the use of evaluation findings in the government context. In the context of this study, no conclusions can be reached regarding the relationship between findings and process use. However, this study highlights the importance of decision-making and policy-setting contexts that create an environment supportive of the evaluation. The study has shown the effects of expressions used by project managers to instill the importance of the evaluation among project participants. The use of phrases that create a false common belief may have a negative effect on process and findings use. In the WOW case, the verbal or non-verbal declaration that the evaluation was a funder's requirement may have weakened the one-to-one connection between the evaluation and each individual participant. In other words, a participant might think: since the evaluation is a funder's requirement, I will just go along with providing what evaluators need, but I do not need to know too much about the evaluation or its process; this thinking mentally blocks substantive learning from the evaluation process.
Factors Affecting the Use of Findings

WOW project members observed a number of factors from each of the higher-order characteristics identified in the model presented by Johnson et al. (2009), in which the authors traced the emergence of these factors in the literature during the period from 1986 to 2005. The current study's respondents identified two characteristics of the evaluation implementation that have an effect on use of the evaluation findings: 1) communication of findings through the discussion of evaluation results, and 2) evaluator competence (evaluators are perceived as friendly and approachable). They also identified characteristics related to the decision and policy context: 1) the personal factor, in the form of support for and interest in the evaluation process, 2) commitment to the evaluation process, and 3) the importance of competing information alongside evaluation information for decision-making. The last group of characteristics is related to stakeholder involvement in an evaluation. Participants in the study agreed that they participated in data collection and received evaluation results. They also perceived that, generally, they were involved in the evaluation process. In the context of WOW, the level of involvement, although it did not result in skill acquisition or increased commitment, had the effect of routinely including project members in the evaluation. The involvement in data collection gave project members the chance to inquire about the evaluation and to know what types of data were used to answer the evaluation questions. As participants mentioned many times, the data collection was minimal and not burdensome, and the routine online data collection and email exchanges with the evaluators created the opportunity for frequent communication.
Each factor identified by project participants can be linked to specific nuances of process and findings use in the literature (for example, Johnson et al., 2009). However, this study was unable to reach definitive conclusions because of the limitations of the study results. Nevertheless, the identified factors provide an accurate, if incomplete, picture of the evaluation, decision-making, and stakeholder involvement settings. The relationship between findings and process use is undetermined even with a significant degree of involvement and factors that enhance use of the evaluation process and findings. The findings use, identified mostly by trainees, does not reflect how the evaluation results were used. This is partly due to the effect of the decision-making settings in this case. Also, respondents to the study did not identify uses of the evaluation process.

Limitations of the Delphi Technique

The method employed for this study had many limitations. The following section discusses these limitations, their effects on the study, and how to avoid them in future research.

Attrition Rate

One significant problem that is often encountered in Delphi studies has to do with maintaining focus when rating surveys contain large numbers of items. These surveys can consume large blocks of time and therefore represent a common source of participant attrition (Custer, Scarcella, & Stewart, 1999). The expected dropout is a disadvantage associated with the Delphi technique and must be accounted for. Dropout rates of 30% (Wei & Hammons, 2004) and 22% (Buss, 2001) have been reported. In the context of this study, 29 WOW participants were identified as individuals who, with some variation, had experienced the evaluation of the project. Only 9 participants responded to the last Delphi round, which represents a 65% dropout rate. This is a very high percentage considering the sample size and the unbalanced group composition (i.e., PIs and trainees).
This dropout behaviour may have led to response bias, because the attrition rate is substantial. There are two solutions to this problem: 1) purposefully sampling a large number of participants to begin with, or 2) employing a modified version of the traditional Delphi technique called the Rotational Delphi. Studies that employed the Delphi technique reported and expected attrition (for example, Briedenhann & Butts, 2006; Wei & Hammons, 2004) but also accounted for it. Both studies sampled an expert panel that exceeded the research needs, to adjust for any bias that might result from attrition. For the study at hand, the sample represented all participants in the WOW project; therefore, it was impossible to account for a dropout rate by increasing the number of participants. Custer et al. (1999) offered a different solution: a method intended to eliminate fatigue, and therefore attrition, called the Rotational Delphi technique, in which subsets of survey items are rotated among sub-panels across rounds. After the first round, each sub-panel reviewed new items not included in its previous round, accompanied by statistical feedback from a different sub-panel; a small sketch of such a rotation scheme follows at the end of this section. If this method had been applied to the study at hand, it might have reduced attrition. This claim is supported by several encounters with the project coordinator, who approached me on a number of occasions to ask why participants were answering the same survey over and over again, which led me to conclude that a level of fatigue was occurring. An additional burden that may have added to the attrition rate in this study was the non-elimination of items throughout the rounds. This resulted in a 73-item survey that took a minimum of 45 minutes in each round if answered carefully. Therefore, I believe that an important source of attrition in this context was the relatively long blocks of time needed to answer the large number of items.
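The following is a minimal Python sketch, under assumptions of my own (three sub-panels, three blocks of the 73 items, and a simple modular rotation rule), of how such an item rotation could be organized; it illustrates the idea rather than reproducing Custer, Scarcella, and Stewart's (1999) procedure.

```python
# A minimal sketch of rotating blocks of survey items among sub-panels so that
# each sub-panel rates a different block in each round.
def rotate_assignments(item_blocks, n_subpanels, round_no):
    """Assign item block b to sub-panel (b + round_no) mod n_subpanels."""
    assignment = {}
    for b, items in enumerate(item_blocks):
        panel = (b + round_no) % n_subpanels
        assignment.setdefault(panel, []).extend(items)
    return assignment

# Split 73 hypothetical survey items into three blocks for three sub-panels.
items = [f"item_{i:02d}" for i in range(1, 74)]
blocks = [items[0:25], items[25:49], items[49:73]]

for round_no in range(3):                      # three Delphi rounds
    plan = rotate_assignments(blocks, 3, round_no)
    sizes = {panel: len(v) for panel, v in sorted(plan.items())}
    print(f"round {round_no + 1}: items per sub-panel {sizes}")
```

Under this rule each sub-panel rates roughly a third of the items in a round and does not see the same block twice across the three rounds, which is the fatigue-reducing property Custer et al. describe.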
Loss of Interest and Commitment

The Delphi technique by definition employs a number of survey rounds to converge on an agreed-upon opinion among a group of people. The repetitive nature of the Delphi can increase the attrition rate because of the time it consumes, but it may also result in loss of interest. I believe loss of interest occurred in this context when the panel was asked to rate exactly the same items over three rounds. For this study, I believe that two rounds of data collection could have been sufficient to maintain interest in the study while avoiding attrition. The Delphi is heavily dependent on the sample having the time and interest to commit to the process. It is also important that those who agreed to participate maintain involvement until the process is completed. The commitment of participants to complete the Delphi process is often related to their interest in the question being examined (Hasson et al., 2000). A way to maintain interest, and perhaps commitment, is to employ a classic Delphi, where the first round is used to gather opinions from a panel through an open-ended set of questions that generates ideas and helps identify issues to be addressed in successive rounds. This study, however, began the first round with an exhaustive list of items covering all possible uses, factors, and forms of involvement reported in the literature. In hindsight, perhaps the Delphi method would have produced more accurate results if the participants had initially had the chance to identify how they perceived the evaluation of the project and had then rated the items in following rounds. The classic Delphi could have been more interactive and engaging, and increasing interactive participation may have helped maintain interest in the study.
Small Sample

An obvious limitation of the study is the sample size. The high attrition rate (65%) in the last round resulted in a biased sample. The small sample did not provide a complete picture of how the evaluation was perceived in the context of the WOW project. In addition, using such a small sample for the statistical analysis resulted in inflated standard errors and created a need to perform bootstrapping. The statistical analysis was not included primarily because of the false results of the Delphi. However, it should be acknowledged that it was a challenge to work with small samples and perform a logistic regression analysis.

Informing the Sample

Participants in this study were informed well before the data collection started, through a verbal presentation in a monthly meeting, as advised by the Delphi literature, and were provided with written instructions through email. However, it seemed that the participants did not distinguish between data collection for this study and other data collection occurring simultaneously for the evaluation. I found that preparing participants is an important step which, if not carried out appropriately, could affect the response rates. In hindsight, I should have conducted more than one meeting to explain my research. The first should have occurred before the data collection, to discuss the research questions and how I planned to pursue answering them, and the second should have occurred close to the time that data collection began, to inform participants specifically about what they would be asked to do, how much time they would be expected to contribute, and what use would be made of the information they provided. If the sample has an understanding of the study's aims and the process, this helps to build a research relationship, which is important as the ongoing response in the second and third rounds is based on self-motivation.
For this study, I cannot say that participants had a true understanding of the aims, objectives, and method of the study.

A second issue associated with informing participants is related to difficulty in understanding the statistical feedback in the second and third rounds of the Delphi. The classic Delphi creates a coding system to track participants' responses from the first to the final round in order to return each individual's response, in addition to the group responses, in subsequent rounds. This facilitates comparison and provides an opportunity for participants to reflect on whether or not they should change their responses. For the purpose of this study, and to maintain anonymity, individual responses were not included in the feedback but were substituted with sub-group responses. This may have altered the purpose of including feedback in a Delphi study.
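To illustrate the kind of round-to-round feedback just described, here is a minimal Python sketch; the anonymous participant codes, the ratings, and the choice of the group median and range as the summary statistics are assumptions made for illustration, not the feedback format used in this study (which, as noted above, substituted sub-group summaries for individual responses).

```python
# A minimal sketch of building one participant's feedback line for the next
# Delphi round: the group summary for an item alongside that participant's
# own previous rating, keyed by an anonymous code.
from statistics import median

def feedback_line(item, responses, code):
    ratings = sorted(responses.values())
    return (f"{item}: group median {median(ratings)}, "
            f"range {ratings[0]}-{ratings[-1]}, your previous rating {responses[code]}")

# Hypothetical second-round ratings keyed by anonymous participant codes.
round_two = {"P01": 4, "P02": 4, "P03": 3, "P04": 5, "P05": 2,
             "P06": 4, "P07": 3, "P08": 4, "P09": 4}

print(feedback_line("The evaluation is useful to persuade others", round_two, "P05"))
```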
Implications for Evaluation Practice and Research

Characteristics of the Decision-Making Settings

This study showed the importance of characteristics of the decision-making settings by shedding light on the effects that individuals in organizational roles have on stakeholders' perceptions of evaluation use. Project managers have the power to change stakeholders' opinions of evaluation. They can affect stakeholders' perceptions of the evaluation positively with encouragement, support, and commitment to the evaluation through verbal and non-verbal declarations, but they can also affect stakeholders' perceptions negatively with inaccurate verbal or non-verbal statements about the evaluation.

The study is a clear example of the personal factor identified by Patton et al. (1975). The personal factor is the presence of individuals who care about the evaluation process and its findings (Patton, 2008). In the context of this study the personal factor was strongly present in the project directors and in participants of the project. Project directors influence other participants to take interest in the evaluation, but their statements intended to increase support for and interest in the evaluation may also shape participants' experiences with it. Based on this study, evaluators should be aware of how they are portrayed to all stakeholders. Even in instances when they are introduced in a positive way, evaluators need to personally remind stakeholders of their roles and goals from time to time. This can enhance stakeholders' understanding of evaluation and perhaps result in a better experience with it. The educating role of an evaluator entails ensuring that evaluations are perceived accurately, by taking a personal interest in representing their objectives within a specific program. Educating stakeholders can have a positive impact on both findings and process use of an evaluation. Conceptual use can be enhanced with increased knowledge about the project, and process use can increase when stakeholders are actively involved in discussions about evaluation with evaluators. This study strengthens my belief that one-to-group dynamics between evaluator and stakeholders are as important as one-to-one dynamics. Group discussions, among stakeholders and with an evaluator, can help correct mistaken feelings, opinions, and beliefs about the evaluation of a certain project and about evaluation in general. Stakeholders' correct knowledge about the evaluation of a specific program, and about evaluation in general, can affect perceptions of how a particular evaluation is used.

Delphi in Evaluation Research

This study employed a research method that has not previously been applied in research on evaluation. The Delphi technique is very useful in instances when a collective opinion must be elicited from a group of people who are geographically dispersed, which is the case in multi-site evaluations, or from people who take on different roles within a program, which is the case in most evaluations. The Delphi method considers the circumstances surrounding the group communication process.
In evaluation research there are many instances when group communication can be difficult, for example when there are logistical issues such as geographic dispersion, or when a program includes a large number of people. Face-to-face meetings can be difficult and may result in unheard opinions or conflict. The Delphi method eliminates problems associated with face-to-face group discussions. However, to employ this method in research on evaluation one must take extra care. There are disadvantages associated with using the Delphi technique in evaluation research. As shown in this study, participants in the WOW program did not distinguish between data collection taking place for this study and other data collection for the evaluation that was occurring at the same time. Even with notification through email and a face-to-face meeting, WOW participants were still unclear. Informing the participants in a study and preparing them well in advance is very important to ensure commitment to the study. Maintaining participants' commitment and interest is key to a successful Delphi. One way to maintain commitment and interest in a study is to investigate a research question, within a specific evaluation, that is appealing to stakeholders. Another way is to modify the Delphi to fit the group's characteristics. The flexibility of the Delphi method is a very attractive quality that, if used well, can assist in obtaining an accurate group opinion to answer many complex research questions on evaluation. This research is an example of modifying the Delphi to fit the research and stakeholder needs. Instead of employing a classical Delphi that starts with open-ended questions, a prepared survey was sent to lessen the time needed to complete data collection. The survey created for the purpose of this study contained an exhaustive list of possible uses of an evaluation. Many surveys measuring evaluation use have been published, but they are context-bound.
Other surveys are dated and do not reflect contemporary evaluation use theory. The survey constructed for this study, however, is relatively context free and reflects recent evaluation use theory. It can serve as a generic instrument that measures the uses of any evaluation, and it can be useful for practitioners who would like to see the extent to which their evaluations are used.

Summary and Future Research

This research is aligned with many past research studies that investigate process and findings use individually and the relationship between them. The Delphi technique that was used to elicit participants' perceptions of the evaluation of the WOW project had many limitations in this context. The most important were the attrition rate, biased results, and misinformed participants. The Delphi technique is very useful for conducting research that needs to collect opinions related to a certain phenomenon, especially when characteristics of the participant group include uneven levels of power. Given the objectives of this study, the Delphi technique was thought to be suitable for gathering optimal and accurate information about the evaluation. However, the results proved that this was not the case; the research questions were not answered. The Delphi survey results provided an accurate but incomplete view of the factors that were available in the evaluation; however, they failed to correctly identify uses of the evaluation. It is important to note that consensus does not mean that the correct answer, opinion, or judgment has been found. Rather, this method and its results should be used to facilitate group discussion and debate. The current study is an illustration of how the Delphi method can be used in research on evaluation when all participants involved in the case are included. When employing a Delphi study, extra care must be taken to ensure the suitability of the method for the participants. For example, one should inquire whether participants can allot enough time to respond to the Delphi rounds.
A schedule of expected survey rounds can be created in a face-to-face meeting where participants can provide input on the times they prefer. In addition, actions must be taken to maintain commitment and interest in the study through a preferred form of reporting. In retrospect, some of the limitations discussed above could have been avoided in order to increase commitment to the study. At the same time, it must be acknowledged that this method may not have been suitable for this group of people. Another method I could have employed would be a mixed-methods design that included interviews in conjunction with a survey. The method employed in this study raised many questions that could have been answered if follow-up interviews had been conducted. Similarly, Briedenhann and Butts (2006) found that the method was not suitable for obtaining a fuller understanding of significant and critical issues in the evaluation of a rural tourism project. They concluded that, in order to be fully useful, the Delphi technique must be supported with interviews. In addition to the effects of the methods employed, this study provides an example that shows the importance of the decision-making context for perceived use of the evaluation. This study shows that stakeholders mirror decision-makers' attitudes of support, commitment, and interest in the evaluation process and findings. The organizational role of decision-makers influences the stakeholder group positively by encouraging involvement in and learning from the evaluation. However, it also has a negative impact when inaccurate statements describing uses of the evaluation are made. Such statements have the power to alter stakeholders' opinions of, and experiences with, the evaluation. Decision-makers' attitudes and perceptions about the evaluation largely influence the perceived use of the evaluation findings and process.
Process use is also affected by the personal characteristics of stakeholders and their attitudes toward learning. While some are interested in learning new things, others are involved in the evaluation because project managers impose it on them. In addition, perception of process use is affected by the variation in stakeholders' involvement in the evaluation, their role in the project, and the reasons behind their being included in the evaluation. Although structured involvement in the evaluation positively affects use of the evaluation, it is not predictive of perceived use. Stakeholders' understanding of the evaluation, of how it is useful, and of the intensity of possible use in general can be a predictor of the perceived use of a particular evaluation. Future research that examines evaluation use in a certain context should study the relationship between perceptions of use in a particular evaluation and general knowledge about evaluation. Many questions that need answers arise from this study. What factors affect perceptions of use? Why is there a difference between actual and perceived use? How does perceived use affect findings and process use? What other effects do inaccurate verbal statements have on evaluation use? Does the personal factor have negative effects on the use of evaluations? How? These are questions that can be considered in future work.
    References Alkin, M. C.Evaluation use. In S. Mathison (Ed.), Encyclopedia of evaluation (1st ed., pp. 143). United States of America: Sage Publications. Alkin, M. C. (1975). Evaluation: Who needs it? Who Cares? Studies in Educational Evaluation, 1(3), 201. Alkin, M. C. (1982). Introduction: Parameters of evaluation Utilization/Use. Studies in Educational Evaluation, 8, 153-155. Alkin, M. C., Daillak, R., & White, P. (1979). Using evaluations: Does evaluation make a difference? Sage Publications. Alkin, M. C., & Daillik, R. H. (1979). A study of evaluation utilization. Educational Evaluation and Policy Analysis, 1, 41. doi: 10.3102/01623737001004041 Alkin, M. C., & Taut, S. (2003). Unbundling evaluation use. Studies in Educational Evaluation, 29, 1-12. Amo, C. (2009). Investigating the relationship between process use and use of evaluation findings in a government context. (Master's Thesis, University of Ottawa). (ProQuest Dissertations and Theses) Amo, C., & Cousins, J. B. (2007). Going through the process: An examination of the operationalization of process use in empirical research on evaluation. New Directions for Program Evaluations, (116), 5-26. Ball, S., & Anderson, S. B. (1977). Dissemination, communication, and utilization. Education and Urban Society, 9, 451. doi: 10.1177/001312457700900404 Bledsoe, K. L., & Graham, J. A. (2005). The use of multiple evaluation approaches in program evaluation. American Journal of Evaluation, 26, 302. doi: 10.1177/1098214005278749  	
Bober, C. F., & Bartlett, K. R. (2004). The utilization of training program evaluation in corporate universities. Human Resource Development Quarterly, 15(4), 363. Boyer, J. F., & Langbein, L. I. (1991). Factors influencing the use of health evaluation research in Congress. Evaluation Review, 15, 507. doi: 10.1177/0193841X9101500501 Braskamp, L. A., Brandenburg, D. C., & Ory, J. C. (1987). Lessons about clients' expectations. New Directions for Program Evaluations, 36, 63. Braskamp, L. A., Brown, R. D., & Newman, D. L. (1982). Studying evaluation utilization through simulations. Evaluation Review, 6, 114-126. doi: 10.1177/0193841X8200600108 Brett, B., Hill-Mead, L., & Wu, S. (2000). Perspectives on evaluation use and demand by users: The case of City Year. New Directions for Program Evaluations, 88, 71-83. Briedenhann, J., & Butts, S. (2006). Application of the delphi technique to rural tourism project evaluation. Current Issues in Tourism, 9(2), 171. Brown, R. D., & Braskamp, L. A. (1980). Summary: Common themes and a checklist. New Directions of Evaluations, 5, 91-97. Buss, A. R. (2001). A delphi study of educational telecollaborative projects: Identifying critical elements and obstacles. Journal of Educational Computing Research, 24(3), 235. Carden, F., & Earl, S. (2007). Infusing evaluative thinking as process use: The case of the International Development Research Centre (IDRC). New Directions for Program Evaluations, (112), 61-73. Clayton, M. J. (1997). Delphi: A technique to harness expert opinion for critical-decision making tasks in education. Educational Psychology, 17(4), 373. Cousins, J. B., & Leithwood, K. A. (1986). Current empirical research on evaluation utilization. Review of Educational Research, 56, 331-364.
    Cox, G. B. (1977). Managerial style : Implications for the utilization of program evaluation information. Evaluation Review, 1, 499. doi: 10.1177/0193841X7700100308 Cummings, R. (2002). Rethinking evaluation use. Paper Presented at the 2002 Australasian Evaluation Society International Conference, Wollongong Australia. Custer, R. L., Scarcella, J. A., & Stewart, B. R. (1999). The modified delphi technique - A rotational modification. Journal of Vocational and Technical Education, 15(2) Dalkey, N. & Helmer, O. (1963). An experimental application of the delphi method to the use experts. Management Science, 9(3), 458. Dalkey, N. C. (1969). The delphi method: An experimental study of group opinion. ( No. RM5888-PR). Santa Monica, CA, USA: The Rand Corporation. Eggers, R. M., & Jones, C. M. (1998). Practical consideration for conducting delphi studies: The oracle enters A new age. Educational Research Quarterly, 21(3), 53. Encyclopedia of evaluation (2005). In Mathison S. (Ed.), Sage Publications. Fish, L. S., & Busby, D. M. (2005). The delphi method. Research methods in family therapy (2nd ed., pp. 238). New York, NY, USA: Guilford Press. Ginsburg, A., & Rhett, N. (2003). Building a better body of evidence: New opportunities to strengthen evaluation utilization. American Journal of Evaluation, 24, 489-498. Goldstein, M. S., Marcus, A. C., & Rausch, N. P. (1978). The non-utilization of evaluation research. The Pacific Sociological Review, 21(1), 21-44. Grasso, P. G. (2003). What makes an evaluation useful? reflections from experience in large organizations. American Journal of Evaluation, 24, 507. doi: 10.1177/109821400302400408  	
Greene, J. (1987). Stakeholder participation in evaluation design: Is it worth the effort? Evaluation and Program Planning, 10, 379. Greene, J. (2005). Stakeholders. In S. Mathison (Ed.), Evaluation encyclopedia (pp. 387). Thousand Oaks, California: Sage Publications Inc. Greene, J. C. (1988b). Communication of results and utilization in participatory program evaluation. Evaluation and Program Planning, 11, 341-351. Gupta, U. G., & Clarke, R. E. (1996). Theory and applications of the delphi technique: A bibliography (1975-1994). Technological Forecasting and Social Change, 53, 185. Harnar, M. A., & Preskill, H. (2007). Evaluators' descriptions of process use: An exploratory study. New Directions for Program Evaluations, (116), 27-44. Hasson, F., Keeney, S., & McKenna, H. (2000). Research guidelines for the delphi survey technique. Journal of Advanced Nursing, 32(4) Henry, G. T., & Mark, M. M. (2003). Beyond use: Understanding evaluation's influence on attitudes and actions. American Journal of Evaluation, 24, 293. doi: 10.1177/109821400302400302 Hill, K. Q., & Fowles, J. (1975). The methodological worth of the delphi forecasting technique. Technological Forecasting and Social Change, 7, 179. Hirner, L. J. (2008). Quality indicators for evaluating distance education programs at community colleges.
    Johnson, K., Greenseid, L. O., Toal, S. A., King, J. A., Lawrenz, F., & Volkov, B. (2009). Research on evaluation use : A review of the empirical literature from 1986 to 2005. American Journal of Evaluation, 30, 377-410. doi: 10.1177/1098214009341660 Johnson, R. B. (1998). Toward a theoretical model of evaluation utilization. Evaluation and Program Planning, 21, 93-110. Kalaian, S. A., & Kasim, R. M. (2012). Terminating sequential delphi survey data collection. Practical Assessment, Research & Evaluation, 17(5) Kamm, B. L. (2004). Building organizational learning and evaluation capacity: A study of process use. (Doctoral Dissertation, The University of New Mexico). (UMI: 3154944) Keeney, S., Hasson, F., & McKenna, H. (2006). Consulting the oracle: Ten lessons from the delphi techniquein nursing research. Methodological Issues in Nursing Research, 53(2), 205. King, H. (2007). Developing evaluation capacity through process use. New Directions for Program Evaluations, 116, 45. King, J. A. (1982). Studying the local use of evaluation: A discussion of theoretical issues and an empirical study Studies in Educational Evaluation, 8, 175-183. King, J. A., & Pechman, E. M. (1984). Pinning a wave to the shore: Conceptualizing evaluation use in school systems. Educational Evaluation and Policy Analysis, 6(3), 241-251. Kirkhart, K. E. (2000). Reconceptualizing evaluation use: An integrated theory of influence. New Directions for Program Evaluations, 88, 5-23. Lawrenz, F., Huffman, D., & McGinnis, J. R. (2007). Multilevel evaluation process use in largescale multisite evaluation. New Directions for Program Evaluations, (116), 75-85. Leviton, L. C., & Hughes, E. F. X. (1981). Research on the utilization of evaluations: A review and synthesis. Evaluation Review, 5, 525. doi: 10.1177/0193841X8100500405  	
Linstone, H. A., & Turoff, M. (Eds.). (2002). The delphi method: Techniques and applications. Lopez, R. M. (2010). Understanding the impact of process use and use of evaluation findings on program participants. (Doctoral Dissertation, Claremont University). (UMI: 3445790) Madaus, G. F., & Stufflebeam, D. L. (2000). Program evaluation: A historical overview. In T. Kellaghan, G. F. Madaus & D. L. Stufflebeam (Eds.), Evaluation models: Viewpoints on educational and human services evaluation (2nd ed., pp. 14). Hingham, MA, USA: Kluwer Academic Publishers. Malen, B., Murphy, M. J., & Geary, S. (1988). The role of evaluation information in legislative decision making: A case study of a loose cannon on deck. Theory into Practice, 27(2), 111. Marra, M. (2003). Dynamics of evaluation use as organizational knowledge: The case of the World Bank (Doctoral Dissertation, The George Washington University). (UMI: 3085545) Marsh, D. D., & Glassick, J. M. (1988). Knowledge utilization in evaluation efforts: The role of recommendations. Science Communication, 9, 323. doi: 10.1177/107554708800900301 Newman, D. L., Brown, R. D., & Braskamp, L. A. (1980). Communication theory and the utilization of evaluation. New Directions of Evaluations, 5, 29-35. doi: 10.3102/01623737001004041 Newman, D. L., Brown, R. D., & Rivers, L. S. (1987). Factors influencing the decision-making process: An examination of the effect of contextual variables. Studies in Educational Evaluation, 13, 199. Patton, M. Q. (1986). Utilization-focused evaluation (2nd ed.). Sage Publications. Patton, M. Q. (1988). The evaluator's responsibility for utilization. Evaluation Practice, 9(1), 5.
    Patton, M. Q. (1998). Discovering process use. Evaluation, 4, 225-233. doi: 10.1177/13563899822208437 Patton, M. Q. (2000). Discovering process use. In D. L. Stufflebeam, G. F. Madaus & T. Kellaghan (Eds.), Evaluation models (pp. 425-438). Boston,: Kluwer Academic Publishers. Patton, M. Q. (2007). Process use as a usefulism. New Directions for Program Evaluations, (116), 99-112. Patton, M. Q., Grimes, P. S., Guthrie, K. M., Brennan, M. J., French, B. D., & Blythe, D. A. (1975). In search of impact: An analysis of the utilization of federal health evaluation research. (). Minneapolis, Minnesota: Minnesota Center of Social Research, University of Minnesota. Podems, D. (2007). Process use: A case narrative from southern africa. New Directions for Program Evaluations, (116), 87-97. Polivka, L., & Steg, E. (1978). Program evaluation and policy development : Bridging the gap . Evaluation Review, 2, 696. doi: 10.1177/0193841X7800200412 Preskill, H., & Boyle, S. (2008). A multidisciplinary model of evaluation capacity building. American Journal of Evaluation, 29, 443. doi: 10.1177/1098214008324182 Preskill, H., & Caracelli, V. (1997). Current and developing conceptions of use: Evaluation use TIG survey results. American Journal of Evaluation, 18, 209. doi: 10.1177/109821409701800122 Preskill, H., & Torres, R. (2000). The learning dimension of evaluation use. New Directions for Program Evaluations, (88), 25-37. Preskill, H., Zuckerman, B., & Matthews, B. (2003). An exploratory study of process use: Findings and implications for future research. American Journal of Evaluation, 24, 423-442.  	
Preston, C. C., & Coleman, A. M. (2000). Optimal number of response categories in rating scales: Reliability, validity, discriminating power, and respondent preferences. Acta Psychologica, (104), 1. Remport, T. (2008). Identifying factors associated with local use of large-scale evaluations: A case study (Doctoral Dissertation, University of Illinois). (UMI: 3347502) Rich, F. R. (1977). Uses of social science information by federal bureaucrats: Knowledge for action versus knowledge for understanding. In C. H. Weiss (Ed.), Using social research in public policy making (pp. 199). Lexington, Massachusetts: Lexington Books. Rockwell, S. K., Dickey, E. C., & Jasa, P. J. (1991). The personal factor in evaluation use: A case study of a steering committee's use of a conservation tillage survey. Evaluation and Program Planning, 13, 389. Rowe, G., & Wright, G. (1999). The delphi technique as a forecasting tool: Issues and analysis. International Journal of Forecasting, 15, 353. Russ-Eft, D., Atwood, R., & Egherman, T. (2002). Use and non-use of evaluation results: Case study of environmental influences in the private sector. American Journal of Evaluation, 23, 19-31. doi: 10.1177/109821400202300103 Sanders, J. R. (2002). Presidential address: On mainstreaming evaluation. American Journal of Evaluation, 23, 253-259. doi: 10.1177/109821400202300302 Shea, M. P. (1991). Program evaluation utilization in Canada and its relationship to evaluation process, evaluator and decision context variables. (Doctoral Dissertation, The University of Windsor). Shulha, L. M., & Cousins, J. B. (1997). Evaluation use: Theory, research and practice since 1986. American Journal of Evaluation, 18, 195-208.
Skulmoski, G. J., Hartman, F. T., & Krahn, J. (2007). The delphi method for graduate research. Journal of Information Technology Education, 6 Smalley, S. B. (2000). The emergence of stakeholder consensus: Examining issues in evaluating sustainable agriculture research and education (SARE). (Doctor of Philosophy, Michigan State University). Stufflebeam, D. L., & Madaus, G. F. (2000). Program evaluation: A historical overview. In D. L. Stufflebeam, G. F. Madaus & T. Kellaghan (Eds.), Evaluation models (pp. 4). Boston: Kluwer Academic Publishers. Tabachnik, B. G., & Fidell, L. S. (2007). Using multivariate statistics (Sixth ed.). New Jersey, United States: Pearson Education, Inc. Taut, S. (2007). Studying self-evaluation capacity building in a large international development organization. American Journal of Evaluation, 28, 45-59. doi: 10.1177/1098214006296430 Tomlinson, C., Bland, L., Moon, T., & Callahan, C. (1994). Case studies of evaluation utilization in gifted education. Evaluation Practice, 15(2), 153-168. Turnbull, B. (1999). The mediating effect of participation efficacy on evaluation use. Evaluation and Program Planning, 22, 131. Wakita, T., Ueshima, N., & Noguchi, H. (2011). Psychological distance between categories in the Likert scale: Comparing different numbers of options. Educational and Psychological Measurement, 1. doi: 10.1177/0013164411431162 Wei, W., & Hammons, J. O. (2001). Using the delphi process to reach consensus between Taiwanese teachers and professors about possible competencies for use in a simulated teaching performance test used to determine licensure. Early Child Development and Care, 167, 149.
Weiss, C. H. (1972). Utilization of evaluation: Toward comparative study. In C. H. Weiss (Ed.), Evaluating action programs: Readings in social action and education (pp. 318-326). Needham Heights, MA: Allyn and Bacon, Inc. Weiss, C. H. (1988a). Evaluation for decisions: Is anybody there? Does anybody care? American Journal of Evaluation, 9(1), 5. Weiss, C. H. (1988b). If program decisions hinged only on information: A response to Patton. American Journal of Evaluation, 9(3), 15. Weiss, C. H. (1998). Have we learned anything new about the use of evaluation? American Journal of Evaluation, 19, 21. doi: 10.1177/109821409801900103 Weiss, C. H., Murphy-Graham, E., & Birkeland, S. (2005). An alternate route to policy influence: How evaluations affect D.A.R.E. American Journal of Evaluation, 26, 12. doi: 10.1177/1098214004273337 Yousuf, M. I. (2007). Using experts' opinions through delphi technique. Practical Assessment, Research & Evaluation, 12(4)
    Appendix Appendix A: Survey Items and Their Links to the Literature  Instrumental Use  Patton et al (1975) Greene (1988) Shea (1991)  Conceptual Use  Types of use in literature  Greene (1988) Greene (1987) Shea (1991)  Symbolic Use  Category  Weiss et al. (2005)  Items Use of Evaluation Findings • I feel the WOW project was enhanced after the first year evaluation feedback. • I feel the WOW project was enhanced after the first SNA evaluation feedback. • WOWees discuss future steps based on evaluation data or reports. • Evaluation helps get new funding or maintain the original funding for WOW. • I do not think the evaluation information contributes much to the project. • Some decisions regarding the project are directly affected by the evaluation. • Evaluation helps make decisions concerning budget allocation. • The evaluation data or reports significantly affect the way you think about the project. • The evaluation data or reports help me learn more about the project. For example, the evaluation data or reports help me understand aspects of the project that were unclear. • The evaluation is useful to persuade others, such as funders. • The evaluation is useful to support decisions already made. • The evaluation is useful to justify program existence or continuation.  	
   	
   	
   	
   	
    	
Category: Stakeholder Involvement
Types of use in literature: Marsh and Glassick (1988); Rockwell et al. (1990); Brett, Hill-Mead and Wu (2000); Greene (1987); Turnbull (1999)
Items:
• I have been involved in planning the WOW evaluation.
• I have been involved in deciding what data should be collected for the WOW evaluation.
• I have participated in some data collection for the WOW evaluation.
• I have participated in data analysis for the WOW evaluation.
• I have received evaluation results about the WOW project.

Category: Use of the Evaluation Process (learning dimension; instrumentation effects; enhance shared understanding; increase the value of evaluation)
Types of use in literature: Amo (2009); Patton (2008); Russ-Eft et al. (2002); Lopez (2010)
Items:
• Involvement in the evaluation process taught me to think critically.
• Although the evaluation of the project is ongoing I feel I know a great deal about the WOW evaluation process.
• The knowledge I acquired from being exposed to the evaluation process can be used elsewhere.
• Involvement in the evaluation process and activities helped me understand what evaluation is all about.
• Involvement in the evaluation process and activities helped develop or improve some of my skills such as data collection techniques.
• Through involvement in the evaluation process I developed professional networks.
• My involvement in the evaluation process increased my commitment to the project.
• Because I know more about evaluation, I appreciate its power as a force of change.
• My involvement in the evaluation process increased ownership of what we do.
• Data collection procedures helped me understand more about evaluation.
• Data collection procedures helped me understand more about the project.
• The evaluation process created a shared understanding of the project among WOWees.
• The evaluation process caused me to question the underlying assumptions of the project.
• Because of the evaluation I feel that project will be capable to achieve its goals.
• Throughout my involvement in the evaluation process I learned to appreciate the value of evaluation.

Category: Factors Affecting Use of Evaluation Findings

Evaluation Implementation (nature of results; communication; timeliness; methodological sophistication; relevancy; credibility; evaluator competence)
Types of use in literature: Bober and Bartlett (2004); Malen et al. (1988); Marsh and Glassick (1988); Grasso (2003); Marra (2003); Boyer and Langbein (1991); Newman et al. (1980); Bledsoe and Graham (2005); Cox (1977); Johnson et al. (2009); Shea (1991); Brown and Braskamp (1980); Braskamp et al. (1982)
Items:
• The evaluation produced useful results.
• The evaluation of the project fosters improvement.
• The evaluation results were easy to understand and comprehend.
• The evaluators discussed evaluation results with project community members.
• The evaluators were open to answering questions about the evaluation results.
• The presentation of evaluation results facilitated understanding of them.
• I approach evaluators when I have a question.
• Communication between project members and evaluators is two-way and fluid.
• The evaluators facilitated the evaluation process by providing explanations of what the evaluation process entails.
• The evaluation results are distributed in a timely manner.
• The evaluation report is distributed before decisions are made.
• The evaluation design is clear and understandable.
• Data collection techniques are interesting to me.
• The evaluation methodology is sophisticated.
• The evaluation evidence is based on objective data.
• You perceive the evaluation as unbiased.
• The evaluation is catered to fit the needs of members of the community.
• The evaluation is designed to fit program goals and objectives.
• The evaluation results reflect project concerns.
• The evaluation design and evidence is trustworthy and credible.
• Evaluators are approachable and friendly.
• Evaluators seem to be knowledgeable about evaluation.
• Evaluators are capable of performing a sound and credible evaluation.
• Evaluators communicate effectively with project members.

Decision and Policy Settings (the personal factor; commitment to the evaluation; competing information; information needs; political climate)
Types of use in literature: Boyer and Langbein (1991); Patton (2008); Rockwell et al. (1990); Bober and Bartlett (2004); Cousins (1996); Malen et al. (1988); Newman et al. (1987); Weiss et al. (2005)
Items:
• You are interested in the evaluation results.
• You are supportive of the evaluation.
• You are interested in the evaluation process.
• Project managers encourage collaboration with the evaluation and evaluators.
• Project managers support members to pursue learning from engaging in an evaluation.
• Project managers strengthen beliefs of the value of evaluation among project members.
• You look forward to partake in the data collection procedures.
• You look forward to talk to evaluators about the project.
• WOWees discuss evaluation occasionally.
• The project is committed to the evaluation.
• The evaluation evidence is one source of information the project receives.
• Other information is as important as evaluation results.
• Other information is considered when making decisions.
• The types of decisions made are related to project implementation.
• Decisions that are based on evaluation results are made to ensure ongoing funding of the project.
• The type of information provided by the evaluation is directly related to goals and objectives of the project.
• The evaluation is informed by your needs.

Involvement in the Evaluation (stakeholder involvement)
Types of use in literature: Marsh and Glassick (1988); Rockwell et al. (1990); Brett, Hill-Mead and Wu (2000); Greene (1987); Turnbull (1999)
Items:
• The evaluation involves members of the project throughout the evaluation process.
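If the blueprint above needs to be applied programmatically, for example to group survey responses by construct during analysis, it can be expressed as a simple mapping from category to item stems. The sketch below is illustrative only; the dictionary name and the selection of items are hypothetical and show the structure rather than reproduce the full instrument.

# Illustrative only: a partial encoding of the survey blueprint above,
# mapping each construct to (a subset of) its item stems. The variable
# name and item selection are hypothetical, not part of the thesis materials.
SURVEY_BLUEPRINT = {
    "Stakeholder Involvement": [
        "I have been involved in planning the WOW evaluation.",
        "I have participated in some data collection for the WOW evaluation.",
    ],
    "Use of the Evaluation Process": [
        "Involvement in the evaluation process taught me to think critically.",
        "The evaluation process created a shared understanding of the project among WOWees.",
    ],
    "Factors Affecting Use of Evaluation Findings": [
        "The evaluation produced useful results.",
        "The evaluation results are distributed in a timely manner.",
    ],
}

# Example: report how many items each construct contributes in this subset.
for construct, items in SURVEY_BLUEPRINT.items():
    print(construct, len(items))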
Appendix B: Complete Results of the Delphi Rounds

For each item, statistics are reported as median/interquartile range/mode/percentage (Med/IQR/Mod/%) for the whole group, the PIs, and the PDFs & graduate students, in that order and separated by |, followed by that round's consensus classification in parentheses. In Delphi III, separate PI statistics are not reported; the two values shown are the whole group and the PDFs & graduate students. A dash (-) marks a statistic that was not reported.

Findings Use Items

1. I feel the WOW project was enhanced after the first year of evaluation feedback.
   Delphi I: 3/1/3/46.7 | 2/2/2/40 | 3/1/3/60 (Neutral); Delphi II: 2.5/1/3/50 | 2/1/2/60 | 3/1/3/57.1 (None); Delphi III: 3/1/3/55.6 | 3/1/3/50 (Neutral)
2. I feel the WOW project was enhanced after the first SNA evaluation feedback.
   Delphi I: 3/0/3/60 | 2/2/2/40 | 3/0/3/80 (Neutral); Delphi II: 3/1/3/50 | 3/2/-/- | 3/1/3/57.1 (Neutral); Delphi III: 3/1/3/66.7 | 3/1/3/62.5 (Neutral)
3. WOW project participants discuss future steps based on evaluation data or reports.
   Delphi I: 4/2/4/46.7 | 4/2/4/80 | 3/2/2/40 (None); Delphi II: 3.5/2/4/41.7 | 4/2/4/60 | 3/2/-/- (None); Delphi III: 3/1/3/77.8 | 3/0/3/87.5 (Neutral)
4. Evaluation helps get new funding or maintain the original funding for WOW.
   Delphi I: 3/2/3/46.7 | 2/2/1/40 | 3/2/3/50 (None); Delphi II: 3/1/3/50 | 2/1/2/60 | 3/1/3/57.1 (Neutral); Delphi III: 3/1/-/- | 3/1/3/50 (Neutral)
5. I do not think the evaluation information contributes much to the project.
   Delphi I: 3/1/4/40 | 4/2/4/40 | 3/1/3/40 (Neutral); Delphi II: 3.5/2/4/50 | 4/1/4/80 | 3/2/-/- (None); Delphi III: 3/2/3/44.4 | 3/1/3/50 (None)
6. Some decisions regarding the project are directly affected by the evaluation.
   Delphi I: 3/1/2/40 | 3/2/2/40 | 2.5/1/2/40 (Neutral); Delphi II: 2.5/2/2/58.3 | 3/3/2/60 | 2/1/2/57.1 (None); Delphi III: 3/0/3/100 | 3/0/3/100 (Neutral)
7. Evaluation helps make decisions concerning budget allocation.
   Delphi I: 3/1/3/40 | 4/1/4/60 | 3/2/3/40 (Neutral); Delphi II: 3/2/3/41.7 | 4/2/2/40 | 3/1/3/57.1 (None); Delphi III: 3/1/3/55.6 | 3/1/3/62.5 (Neutral)
8. The evaluation data or reports significantly affect the way you think about the project.
   Delphi I: 3/2/3/33.3 | 2/2/2/40 | 3/1/3/40 (None); Delphi II: 3/2/-/- | 3/3/2/40 | 3/2/3/42.8 (None); Delphi III: 3/1/3/55.6 | 3/1/3/62.5 (Neutral)
9. The evaluation data or reports help me learn more about the project. For example, the evaluation data or reports help me understand aspects of the project that were unclear.
   Delphi I: 2/1/2/46.7 | 2/1/2/60 | 3/2/2/40 (Agree); Delphi II: 3/1/-/- | 2/2/2/60 | 3/1/3/57.1 (Neutral); Delphi III: 3/1/3/66.7 | 3/0/3/75 (Neutral)
10. The evaluation is useful to persuade others, such as funders.
   Delphi I: 2/1/2/60 | 2/2/2/60 | 2/1/2/60 (Agree); Delphi II: 2/2/2/58.3 | 2/2/-/- | 2/2/2/71.4 (None); Delphi III: 2/1/2/55.6 | 2.5/1/-/- (Agree)
11. The evaluation is useful to support decisions already made.
   Delphi I: 2/1/2/53.3 | 2/1/2/80 | 3/1/3/50 (Agree); Delphi II: 3/1/3/58.3 | 3/1/3/80 | 3/1/-/- (Neutral); Delphi III: 3/1/3/88.9 | 3/0/3/87.5 (Neutral)
12. The evaluation is useful to justify program existence or continuation.
   Delphi I: 2/1/2/68.7 | 2/1/2/80 | 2/1/2/60 (Agree); Delphi II: 2/1/2/66.7 | 3/2/2/40 | 2/0/2/85.7 (Agree); Delphi III: 3/1/3/66.7 | 3/1/3/75 (Neutral)

Stakeholder Involvement Items

1. I have been involved in planning the WOW evaluation.
   Delphi I: 4/2/4/40 | 4/2/4/60 | 4/2/5/40 (None); Delphi II: 3/2/4/41.7 | 2/2/2/40 | 4/2/4/57.1 (Neutral); Delphi III: 4/2/4/44.4 | 4/2/4/50 (None)
2. I have been involved in deciding what data should be collected for the WOW evaluation.
   Delphi I: 4/2/4/40 | 4/2/4/60 | 4/2/4/40 (None); Delphi II: 3.5/2/4/50 | 3/2/3/40 | 4/1/4/71.4 (None); Delphi III: 4/1/4/55.6 | 4/1/4/50 (Disagree)
3. I have participated in some data collection for the WOW evaluation.
   Delphi I: 2/1/2/66.7 | 2/2/2/80 | 2/2/2/60 (Agree); Delphi II: 2/0/2/66.7 | 2/4/-/- | 2/0/2/100 (Agree); Delphi III: 2/0/2/100 | 2/0/2/100 (Agree)
4. I have participated in data analysis for the WOW evaluation.
   Delphi I: 4/1/4/53.3 | 4/1/4/80 | 4/2/4/40 (Disagree); Delphi II: 4/2/4/58.3 | 4/1/2/60 | 4/2/4/57.1 (None); Delphi III: 4/2/4/44.4 | 4/1/4/50 (None)
5. I have received evaluation results about the WOW project.
   Delphi I: 2/0/2/73.3 | 2/1/2/80 | 2/1/2/70 (Agree); Delphi II: 2/2/2/58.3 | 2/2/2/60 | 2/2/2/57.1 (None); Delphi III: 2/1/2/77.7 | 2/1/2/75 (Agree)

Process Use Items

1. Involvement in the evaluation process taught me to think critically.
   Delphi I: 3/1/3/40 | 3/2/3/40 | 3/1/3/40 (Neutral); Delphi II: 3/1/3/41.7 | 3/1/3/60 | 3/2/-/- (Neutral); Delphi III: 4/2/4/55.6 | 4/2/4/50 (None)
2. Although the evaluation of the project is ongoing I feel I know a great deal about the WOW evaluation process.
   Delphi I: 3/1/3/46.7 | 3/3/4/40 | 3/1/3/60 (Neutral); Delphi II: 3/2/3/33.3 | 4/2/4/40 | 3/2/3/42.8 (None); Delphi III: 4/2/3/55.6 | 3.5/2/4/50 (None)
3. The knowledge I acquired from being exposed to the evaluation process can be used elsewhere.
   Delphi I: 2/1/2/53.3 | 2/1/2/60 | 2.5/1/2/50 (Agree); Delphi II: 2.5/2/2/50 | 3/2/-/- | 2/2/2/57.1 (None); Delphi III: 3/2/-/- | 3/2/-/- (None)
4. Involvement in the evaluation process and activities helped me understand what evaluation is all about.
   Delphi I: 2/2/2/46.7 | 2/2/2/40 | 2.5/2/2/50 (None); Delphi II: 2.5/2/2/41.7 | 3/3/-/- | 2/2/2/57.1 (None); Delphi III: 3/2/3/44.4 | 3/1/-/- (None)
5. Involvement in the evaluation process and activities helped develop or improve some of my skills such as data collection techniques.
   Delphi I: 4/1/4/40 | 4/1/4/60 | 3.5/2/3/40 (Disagree); Delphi II: 4/2/4/50 | 4/1/4/60 | 4/1/4/42.8 (None); Delphi III: 4/1/4/55.6 | 4/1/4/50 (Disagree)
6. Through involvement in the evaluation process I developed professional networks.
   Delphi I: 3/1/4/33.3 | 4/2/4/60 | 3/1/3/40 (Neutral); Delphi II: 4/2/4/50 | 4/2/-/- | 4/1/4/57.1 (None); Delphi III: 4/2/4/44.4 | 4/1/4/50 (None)
7. My involvement in the evaluation process increased my commitment to the project.
   Delphi I: 3/2/2/26.7 | 2/2/2/60 | 3/2/2/30 (None); Delphi II: 4/2/4/50 | 4/1/4/80 | 3/2/-/- (None); Delphi III: 4/1/4/55.6 | 3.5/1/-/- (Disagree)
8. Because I know more about evaluation, I appreciate its power as a force of change.
   Delphi I: 3/2/3/40 | 3/2/2/40 | 3/1/3/40 (None); Delphi II: 3/2/-/- | 3/2/-/- | 3/1/3/42.8 (None); Delphi III: 3/2/3/44.4 | 3/2/3/50 (None)
9. My involvement in the evaluation process increased ownership of what we do.
   Delphi I: 3/1/3/53.3 | 3/1/3/80 | 3.5/1/4/40 (Neutral); Delphi II: 3/1/3/50 | 4/1/4/60 | 3/1/3/57.1 (Neutral); Delphi III: 3/1/3/55.6 | 3/1/3/50 (Neutral)
10. Data collection procedures helped me understand more about evaluation.
   Delphi I: 3/2/2/26.7 | 3/2/2/40 | 3/2/2/40 (None); Delphi II: 3/2/-/- | 4/2/4/60 | 3/2/2/42.8 (None); Delphi III: 3/2/-/- | 3/2/-/- (Agree)
11. Data collection procedures helped me understand more about the project.
   Delphi I: 3/2/2/33.3 | 3/1/3/60 | 3/2/2/40 (None); Delphi II: 3/1/3/50 | 3/1/3/60 | 3/2/3/42.8 (Neutral); Delphi III: 3/2/-/- | 3/2/3/37.5 (None)
12. The evaluation process created a shared understanding of the project among WOWees.
   Delphi I: 2.5/1/2/50 | 3/1/3/60 | 2/2/2/57.1 (Agree); Delphi II: 2.5/1/2/50 | 3/1/3/60 | 2/2/2/57.1 (Agree); Delphi III: 3/2/2/50 | 2.5/2/2/50 (Neutral)
13. The evaluation process caused me to question the underlying assumptions of the project.
   Delphi I: 4/1/4/40 | 4/1/4/80 | 3/1/3/50 (Disagree); Delphi II: 3/2/3/41.7 | 4/3/-/- | 3/1/3/71.4 (None); Delphi III: 3/2/3/55.6 | 3/1/3/62.5 (None)
14. Because of the evaluation I feel that project will be capable to achieve its goals.
   Delphi I: 3/2/3/33.3 | 2/2/2/40 | 3/1/4/40 (None); Delphi II: 3/1/-/- | 2/2/3/60 | 3/1/2/57.1 (Neutral); Delphi III: 3/3/2/33.3 | 3/3/2/37.5 (None)
15. Throughout my involvement in the evaluation process I learned to appreciate the value of evaluation.
   Delphi I: 3/1/2/40 | 2/1/2/60 | 3/2/3/40 (Neutral); Delphi II: 3/2/3/41.7 | 3/1/3/60 | 3/2/2/42.8 (None); Delphi III: 3/2/-/- | 3/2/2/37.5 (None)

Factors Affecting Findings Use Items

1. The evaluation produced useful results.
   Delphi I: 2/1/2/40 | 2/1/3/60 | 3/1/3/60 (Agree); Delphi II: 3/1/-/- | 2/1/2/80 | 3/1/3/57.1 (Neutral); Delphi III: 3/2/2/44.4 | 3/1/3/50 (None)
2. The evaluation of the project fosters improvement.
   Delphi I: 2/0/2/73.3 | 2/1/2/80 | 2/1/2/70 (Agree); Delphi II: 3/1/-/- | 3/1/3/60 | 2/1/2/57.1 (Neutral); Delphi III: 2/2/2/55.6 | 2/1/2/62.5 (None)
3. The evaluation results were easy to understand and comprehend.
   Delphi I: 3/2/3/30 | 2/1/2/80 | 2/1/2/70 (None); Delphi II: 3/2/3/33.3 | 2/3/2/40 | 3/1/3/57.1 (None); Delphi III: 3/2/-/- | 3/2/-/- (None)
4. The evaluators discussed evaluation results with project community members.
   Delphi I: 2/1/2/60 | 2/1/2/60 | 2.5/1/2/50 (Agree); Delphi II: 2/1/2/66.7 | 2/1/2/80 | 2/1/2/57.1 (Agree); Delphi III: 2/1/2/66.7 | 2/1/2/62.5 (None)
5. The evaluators were open to answering questions about the evaluation results.
   Delphi I: 2/1/2/53.3 | 2/2/1/40 | 2/1/2/60 (Agree); Delphi II: 2/1/2/75 | 2/1/2/80 | 2/3/2/71.4 (Agree); Delphi III: 2/2/2/66.7 | 2/2/2/62.5 (None)
6. The presentation of evaluation results facilitated understanding of them.
   Delphi I: 2/1/2/46.7 | 2/2/1/40 | 2.5/1/2/50 (Agree); Delphi II: 2/1/2/50 | 2/2/-/- | 2/3/2/57.1 (Agree); Delphi III: 2/2/2/55.6 | 2/2/2/62.5 (None)
7. I approach evaluators when I have a question.
   Delphi I: 3/2/3/33.3 | 2/2/2/40 | 3.5/2/4/50 (None); Delphi II: 4/2/4/50 | 4/2/4/40 | 4/2/4/42.8 (None); Delphi III: 3/3/2/44.4 | 2.5/2/2/50 (Disagree)
8. Communication between project members and evaluators is two-way and fluid.
   Delphi I: 3/1/3/53.3 | 2/1/2/60 | 3/1/3/50 (Neutral); Delphi II: 3/1/3/50 | 3/1/3/40 | 3/2/3/42.8 (Neutral); Delphi III: 3/2/2/44.4 | 3/2/-/- (None)
9. The evaluators facilitated the evaluation process by providing explanations of what the evaluation process entails.
   Delphi I: 2/1/2/40 | 2/1/2/60 | 3/1/3/40 (Agree); Delphi II: 3/1/-/- | 2/1/2/60 | 3/2/3/42.8 (Neutral); Delphi III: 3/2/-/- | 3/2/3/44.4 (None)
10. The evaluation design is clear and understandable.
   Delphi I: 3/1/2/33.3 | 2/2/2/40 | 3/2/2/30 (Neutral); Delphi II: 3/1/3/50 | 3/1/3/60 | 3/2/3/42.8 (Neutral); Delphi III: 3/2/-/- | 3/2/-/- (None)
11. Data collection techniques are interesting to me.
   Delphi I: 3/2/3/40 | 3/1/3/60 | 3/2/2/40 (None); Delphi II: 3.5/2/4/50 | 4/2/4/60 | 3/2/3/42.8 (Disagree); Delphi III: 3/2/3/44.4 | 3/1/3/50 (None)
12. The evaluation methodology is sophisticated.
   Delphi I: 3/1/3/53.3 | 3/1/3/80 | 3/1/3/40 (Neutral); Delphi II: 3/1/3/58.3 | 3/1/3/80 | 3/2/3/42.8 (Neutral); Delphi III: 3/2/3/55.6 | 3/2/3/50 (None)
13. The evaluation evidence is based on objective data.
   Delphi I: 3/2/3/40 | 3/1/3/80 | 3/2/2/50 (None); Delphi II: 3/1/3/41.7 | 3/1/3/60 | 3/2/-/- (Neutral); Delphi III: 3/1/3/55.6 | 3/2/3/50 (Neutral)
14. You perceive the evaluation as unbiased.
   Delphi I: 2/1/2/53.3 | 2/1/2/80 | 3/2/2/40 (Agree); Delphi II: 2/2/2/50 | 2/1/2/80 | 3/3/-/- (None); Delphi III: 2/2/2/44.4 | 2.5/2/2/37.5 (None)
15. The evaluation results are distributed in a timely manner.
   Delphi I: 3/1/2/33.3 | 2/2/2/60 | 3/1/3/50 (Neutral); Delphi II: 3/1/3/58.3 | 3/1/3/60 | 3/1/3/57.1 (Neutral); Delphi III: 3/1/-/- | 2.5/1/2/50 (Neutral)
16. The evaluation report is distributed before decisions are made.
   Delphi I: 3/0/3/60 | 3/2/2/40 | 3/0/3/80 (Neutral); Delphi II: 3/0/3/58.3 | 3/1/3/60 | 3/1/3/57.1 (Neutral); Delphi III: 3/1/3/77.8 | 3/1/3/75 (Neutral)
17. The evaluation is catered to fit the needs of members of the community.
   Delphi I: 3/1/3/60 | 2/1/2/60 | 3/0/3/70 (Neutral); Delphi II: 3/1/3/58.3 | 3/1/3/60 | 3/1/3/57.1 (Neutral); Delphi III: 3/0/3/77.8 | 3/0/3/87.5 (Neutral)
18. The evaluation is designed to fit program goals and objectives.
   Delphi I: 2/1/2/60 | 2/2/2/40 | 2/1/2/70 (Agree); Delphi II: 2.5/1/2/50 | 2/1/2/60 | 3/1/3/42.8 (Agree); Delphi III: 3/1/3/77.8 | 3/1/3/75 (Neutral)
19. The evaluation results reflect project concerns.
   Delphi I: 3/1/2/40 | 3/2/3/60 | 2.5/1/2/50 (Neutral); Delphi II: 3/1/2/58.3 | 3/1/3/60 | 3/1/3/57.1 (Neutral); Delphi III: 3/1/3/66.6 | 3/1/3/62.5 (Neutral)
20. The evaluation design and evidence is trustworthy and credible.
   Delphi I: 3/1/3/53.3 | 3/2/3/60 | 3/1/2/50 (Neutral); Delphi II: 3/1/3/50 | 3/1/3/60 | 3/2/3/42.8 (Neutral); Delphi III: 3/2/3/44.4 | 3/2/-/- (None)
21. Evaluators are approachable and friendly.
   Delphi I: 2/2/2/40 | 2/2/1/40 | 2/2/2/40 (None); Delphi II: 2/0/2/75 | 2/1/2/60 | 2/0/2/85.7 (Agree); Delphi III: 2/1/2/66.7 | 2/1/2/62.5 (Agree)
22. Evaluators seem to be knowledgeable about evaluation.
   Delphi I: 2/1/2/53.3 | 2/2/1/40 | 2/1/2/60 (Agree); Delphi II: 3/1/3/41.7 | 2/2/2/40 | 3/2/3/42.8 (Neutral); Delphi III: 2/2/2/44.4 | 2.5/2/2/37.5 (None)
23. Evaluators are capable of performing a sound and credible evaluation.
   Delphi I: 3/1/3/40 | 3/2/3/40 | 3/1/3/50 (Neutral); Delphi II: 3/1/3/50 | 2/2/2/40 | 3/1/3/57.1 (Neutral); Delphi III: 3/1/3/55.6 | 3/1/3/50 (Neutral)
24. Evaluators communicate effectively with project members.
   Delphi I: 3/1/3/40 | 2/2/1/40 | 3/1/3/50 (Neutral); Delphi II: 3/1/3/58.3 | 3/1/3/40 | 3/1/3/57.1 (Neutral); Delphi III: 3/1/3/55.6 | 3/1/3/50 (Neutral)
25. You are interested in the evaluation results.
   Delphi I: 2/1/2/53.3 | 2/1/2/60 | 2/1/3/50 (Agree); Delphi II: 2/1/2/50 | 2/0/2/100 | 3/1/3/57.1 (Agree); Delphi III: 2/1/2/66.7 | 2/1/2/62.5 (Agree)
26. You are supportive of the evaluation.
   Delphi I: 2/1/2/53.3 | 2/1/2/60 | 2/2/2/50 (Agree); Delphi II: 2/1/2/53.3 | 2/0/2/100 | 2/1/2/42.8 (Agree); Delphi III: 2/1/2/66.7 | 2/1/2/62.5 (Agree)
27. You are interested in the evaluation process.
   Delphi I: 2/1/2/46.7 | 2/1/2/80 | 3/1/3/40 (Agree); Delphi II: 3/1/3/50 | 2/1/2/80 | 3/0/3/71.4 (Neutral); Delphi III: 3/2/3/44.4 | 3/2/3/50 (Neutral)
28. Project managers encourage collaboration with the evaluation and evaluators.
   Delphi I: 2/1/2/40 | 2/2/1/40 | 2.5/1/2/40 (Agree); Delphi II: 3/1/3/50 | 2/1/2/60 | 3/1/3/57.1 (Neutral); Delphi III: 2/2/3/44.4 | 2.5/2/3/37.5 (None)
29. Project managers support members to pursue learning from engaging in an evaluation.
   Delphi I: 3/1/2/33.3 | 3/2/2/40 | 2.5/1/2/40 (Neutral); Delphi II: 3/1/3/66.7 | 3/1/3/80 | 3/1/3/57.1 (Neutral); Delphi III: 3/0/3/77.8 | 3/0/3/75 (Neutral)
30. Project managers strengthen beliefs of the value of evaluation among project members.
   Delphi I: 3/1/3/46.7 | 3/1/3/80 | 2.5/1/2/40 (Neutral); Delphi II: 3/1/3/66.7 | 3/1/3/80 | 3/1/4/57.1 (Neutral); Delphi III: 3/1/3/66.7 | 3/1/4/62.5 (Neutral)
31. You look forward to partake in the data collection procedures.
   Delphi I: 3/2/3/40 | 2/2/2/60 | 3/1/3/50 (None); Delphi II: 3.5/1/-/- | 4/2/4/40 | 3/2/-/- (None); Delphi III: 3/1/3/55.6 | 3/1/3/62.5 (Neutral)
32. You look forward to talk to evaluators about the project.
   Delphi I: 2/1/2/46.7 | 2/1/2/60 | 3/2/2/40 (Agree); Delphi II: 3/2/3/41.7 | 2/2/2/60 | 3/1/3/57.1 (Neutral); Delphi III: 3/1/3/55.6 | 3/1/3/62.5 (Neutral)
33. WOW project members discuss evaluation occasionally.
   Delphi I: 2/1/2/46.7 | 3/2/2/40 | 2/1/2/50 (Agree); Delphi II: 2.5/1/2/50 | 3/1/3/60 | 2/1/2/57.1 (Agree); Delphi III: 3/1/3/44.4 | 2.5/1/-/- (Neutral)
34. The project is committed to the evaluation.
   Delphi I: 2/2/1/33.3 | 2/2/1/40 | 2/2/3/40 (None); Delphi II: 2/2/2/41.7 | 2/2/-/- | 3/2/2/42.8 (None); Delphi III: 2/1/2/55.6 | 2/1/2/62.5 (Agree)
35. The evaluation evidence is one source of information the project receives.
   Delphi I: 2/1/2/53.3 | 2/1/2/60 | 2/1/2/50 (Agree); Delphi II: 2/1/2/66.7 | 2/1/2/60 | 2/1/2/71.4 (Agree); Delphi III: 2/1/2/55.6 | 2.5/1/2/37.5 (Agree)
36. Other information is as important as evaluation results.
   Delphi I: 2/1/2/66.7 | 2/1/2/60 | 2/0/2/70 (Agree); Delphi II: 2/1/2/50 | 2/2/-/- | 2/1/2/57.1 (Agree); Delphi III: 2/1/2/55.6 | 2/1/2/62.5 (Agree)
37. Other information is considered when making decisions.
   Delphi I: 2/1/2/46.7 | 2/2/1/40 | 2/1/2/50 (Agree); Delphi II: 2/2/3/41.7 | 2/1/2/60 | 2/2/3/42.8 (None); Delphi III: 1.5/2/1/44.4 | 2/2/-/- (Agree)
38. The types of decisions made are related to project implementation.
   Delphi I: 3/1/3/46.7 | 3/2/3/40 | 2.5/1/2/50 (Neutral); Delphi II: 3/1/3/58.3 | 3/2/3/60 | 3/1/3/57.1 (Neutral); Delphi III: 3/1/3/66.7 | 3/1/3/75 (Neutral)
39. Decisions that are based on evaluation results are made to ensure ongoing funding of the project.
   Delphi I: 2/1/2/53.3 | 2/1/2/60 | 2.5/1/2/50 (Agree); Delphi II: 3/1/3/58.3 | 3/2/-/- | 3/1/3/50 (Neutral); Delphi III: 3/1/3/55.6 | 3/1/3/62.5 (Neutral)
40. The type of information provided by the evaluation is directly related to goals and objectives of the project.
   Delphi I: 2/1/2/60 | 2/1/2/60 | 2/1/2/60 (Agree); Delphi II: 3/1/3/50 | 2/1/2/60 | 3/1/3/57.1 (Neutral); Delphi III: 3/1/3/66.7 | 3/1/3/62.5 (Neutral)
41. The evaluation is informed by your needs.
   Delphi I: 3/1/3/46.7 | 2/2/2/40 | 3/1/3/50 (Neutral); Delphi II: 3/2/3/50 | 3/2/2/40 | 3/1/3/57.1 (None); Delphi III: 3/0/3/77.8 | 3/0/3/75 (Neutral)
42. The evaluation is informed by your needs.
   Delphi I: 2/0/2/66.7 | 2/2/2/40 | 3/1/3/50 (Agree); Delphi II: 2/1/2/75 | 2/1/2/80 | 2/1/2/71.4 (Agree); Delphi III: 2/1/2/44.4 | 2.5/1/2/37.5 (Agree)
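The Med/IQR/Mod/% statistics and consensus classifications reported above can, in principle, be computed with a few lines of code. The sketch below is illustrative only: the function name, the 5-point response coding (1-2 = agree, 3 = neutral, 4-5 = disagree), the reading of the percentage column as the share of respondents in the most frequent response band, and the quartile convention are assumptions made for the example rather than a restatement of the procedure used in this study; only the 51% agreement threshold is taken from the description of the Delphi method in Appendix C.

# Illustrative sketch only: per-item Delphi statistics and a simple consensus rule.
# The scale coding, band definitions, and percentage interpretation are assumptions.
from statistics import median, multimode

def delphi_item_summary(ratings, threshold=0.51):
    ordered = sorted(ratings)
    n = len(ordered)
    med = median(ordered)
    # Quartiles via the median-of-halves convention (other conventions exist).
    half = n // 2
    iqr = median(ordered[half + n % 2:]) - median(ordered[:half])
    modes = multimode(ordered)
    mode = modes[0] if len(modes) == 1 else None  # shown as "-" when there is no single mode
    # Group ratings into agree / neutral / disagree bands and find the largest band.
    bands = {1: "Agree", 2: "Agree", 3: "Neutral", 4: "Disagree", 5: "Disagree"}
    counts = {"Agree": 0, "Neutral": 0, "Disagree": 0}
    for r in ordered:
        counts[bands[r]] += 1
    top_band, top_count = max(counts.items(), key=lambda kv: kv[1])
    pct = round(top_count / n * 100, 1)
    consensus = top_band if top_count / n >= threshold else "None"
    return med, iqr, mode, pct, consensus

# Hypothetical ratings from nine panellists on one item.
print(delphi_item_summary([2, 2, 2, 3, 3, 2, 2, 4, 3]))  # (2, 1, 2, 55.6, 'Agree')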
    Appendix C: Consent Form 	
Consent Form

The relationship among process use, findings use, and stakeholder involvement in evaluation

Who is conducting this study?

This study is conducted as a thesis for a Master's degree. The information the study yields will be useful for the evaluation team with regard to the evaluation of the "Working on Walls" (WOW) project. That is, the findings of this study will help evaluators understand the project further in terms of how the evaluation of the project has been used and what factors have an effect on the evaluation itself and its uses.

Principal Investigator:
Sandra Mathison
Professor
Faculty of Education
University of British Columbia
2125 Main Mall
Vancouver, BC
Canada V6T 1Z4
Email: mathison@mail.ubc.ca

Co-Investigator:
Arwa Alkhalaf
Graduate Student
Faculty of Education
Department of Educational and Counseling Psychology and Special Education
University of British Columbia
2125 Main Mall
Vancouver, BC
Canada V6T 1Z4
Email: arwaa@interchange.ubc.ca
Tel: 604-7674074

Why are we doing this study?

Evaluation use refers to the influence that evaluation information and conduct have on the actions or thoughts of stakeholders. For example, evaluation information may affect decision-making, and evaluation conduct can create shared understanding of the project. We are doing this study to learn more about evaluation uses in the context of your project. That is, we are interested in understanding the extent to which the evaluation information and process have been used. Also, members of the WOW community have been exposed to the evaluation at different levels. With that in mind, we aim to learn more about how this affects the use of the evaluation. You are being asked to take part in this study because you are an important part of the project. Since you are a direct recipient of our evaluation and have been exposed to the evaluation of the project, we consider you an important member of WOW and value your opinion about the evaluation of the project.

How is the study done?

If you agree to take part in this study, here is what is asked of you:

The method employed for this study is called the Delphi Technique. It is a tool to extract opinions from a group of people without having to meet face to face or gather in one room. It is purposeful for this study because it helps overcome the power dynamics that are part of the WOW community. The Delphi method is used to achieve at least 51% agreement among the group through multiple rounds of data collection.

The survey URL is sent to you through email, where you will be able to access the survey. The first round is a survey containing 73 items that are rated on a Likert scale. It will take 45 minutes at most to complete. After submission, the results of the first round will be analyzed. In successive rounds, each WOW member receives an updated survey URL through email showing each stakeholder group's responses and the whole group's responses in the previous round, the statistical information, and explanations or commentary developed from that round. Participants are asked to review each item, consider the group response, and then re-rate the items, taking the information into account and considering reasons for remaining outside the consensus. This round gives participants an opportunity to make further clarifications of both the information and their judgments of the relative importance of the items. In successive rounds the survey will also take 45 minutes at most to complete.

A period of two weeks is given to complete the first round of data collection. Once all responses are collected or the allotted time period has passed, the first round of data collection will be completed. In successive rounds, data collection will follow a week after the previous round. Similar to the first round, successive rounds have a period of two weeks to be completed.

In order to keep the results of the data collection anonymous, you will not be asked to identify yourself throughout the rounds of data collection.

How will the results of the study be disseminated?

The study will be reported in a graduate thesis and may also be published in a journal article. The results will also be orally presented to the evaluation team and the WOW project.

We do not think any part of this study will create a problem or be bad for you. The items you will be asked to rate are not related to you personally; however, they are closely related to how you perceive or have experienced the evaluation of the WOW project. Participating in this study will help us create a better evaluation for the WOW project. You will not be asked to identify yourself at all. Your data will never be linked to you and we will never know who said what.

Privacy and policy of SurveyMonkey:

SurveyMonkey is an online survey company with servers located in the USA and is subject to U.S. laws. For example, the US Patriot Act allows authorities access to the records of internet service providers. However, for the purpose of this research, the survey does not ask for any personal identifiers or any information that may be used to identify you. SurveyMonkey servers record the incoming IP address of the respondent's computer. If you choose to participate in the survey, you understand that your responses to the survey will be stored and accessed in the USA. The security and privacy policy for SurveyMonkey can be found at the following link: http://www.surveymonkey.net/mp/policy/privacy-policy/

Taking part in this study is entirely up to you. You have the right to refuse to participate in this study. If you decide to take part, you may choose to pull out of the study at any time without giving a reason and without any negative impact on you. With the submission of the survey, your consent will be assumed. We also recommend that you keep a copy of this consent form. If you have any questions or concerns about what we are asking of you, please contact Arwa Alkhalaf. The contact information is listed at the top of the first page of this form.