Interactive Visualization to Facilitate Group Deliberations in Decision Making Processes

by

Hamed Taheri

B.Sc., Isfahan University of Technology, 2007
M.Sc., Isfahan University of Technology, 2010

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE in THE FACULTY OF GRADUATE AND POSTDOCTORAL STUDIES (RESOURCE MANAGEMENT AND ENVIRONMENTAL STUDIES)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

August 2015

© Hamed Taheri, 2015

Abstract

Structured Decision Making (SDM) and Multi-Criteria Decision Analysis (MCDA) are increasingly used to facilitate municipal and environmental decision-making. Even though it is well documented that visualization techniques can facilitate analytical activities, few studies have probed into the use of information visualization (infovis) in SDM and MCDA processes. The aim of the present study is to analyze how infovis can support participants in a real-world MCDA/SDM process surrounding an urban infrastructure-planning problem, with a focus on meetings held to evaluate multiple alternatives over a set of criteria. I attended as a participant in a series of SDM workshops related to the renewal of a municipal sewage treatment facility. Participatory observation was conducted of the visualizations used to evaluate multiple alternatives over a number of criteria. Two interactive infovis features were identified as particularly beneficial for SDM-based processes: information on demand and exploration of preferences. We demonstrated an interactive computer-based tool developed by our research team, ValueCharts, in a number of meetings with potential users to get their feedback. Drawing on these results and the literature, ValueCharts was extended by our research team to a new version, Group ValueCharts, to support group deliberations during MCDA and SDM processes. An experiment was then conducted with seven participants on a stormwater management decision problem at UBC. Our results show that the interactive visualization features in ValueCharts and Group ValueCharts have the potential to increase the effectiveness of MCDA decision processes. They can facilitate comparing multiple alternatives and also probing into participants' preferences. Interactive visualization was acknowledged by participants for improving group interaction, exchange of information, identification of sticking points, and focusing discussions on what matters for the final decision. Feedback from the participants and our observations support our conclusion that the identified infovis features hold the potential to facilitate the decision process in SDM. Understanding MCDA concepts, however, seems to be essential to get the most out of such visualization tools. If participants do not grasp MCDA concepts, some of them may become suspicious of the final ranking or fail to identify some of the sticking points that really matter.

Preface

I joined the Informed Decisions: Exploring Alternatives for Sustainability (IDEAS) research team in September 2011. I got involved in a number of studies for an ongoing research project that aims at developing a platform for collaborative decision analysis to enable identification of sticking points when comparing alternatives by visualizing trade-offs, with a focus on renewal and retrofitting of municipal sewage and water infrastructure. The focus of my study was on how participants use visualization tools during real-world Structured Decision Making (SDM) processes.
I used the two visualization tools developed in our research team: ValueCharts (Bautista & Carenini, 2006; Carenini & Loyd, 2004) and Group ValueCharts (Bajracharya et al., 2014). In various circumstances in my study, I received feedback from our research team, mainly from Dr. Gunilla Öberg (my academic supervisor), Dr. Giuseppe Carenini (my academic supervisor), Dr. David Poole, and Dr. Brent Chamberlain. We published some of our results in a journal paper (Chamberlain, Carenini, Oberg, Poole, & Taheri, 2014). Sections 2.1, 2.2, 3.1, 4.0, and 5.0 of this thesis are extended versions of a paper of which I am the main author, together with Dr. Gunilla Öberg, Dr. Giuseppe Carenini, Dr. David Poole, and Dr. Brent Chamberlain. For demonstrating ValueCharts in our meetings, Dr. Chamberlain did some programming on ValueCharts to enable its integration with PDF files. He and I worked together to prepare presentations and demos, and to hold the meetings with people working in consulting firms. Dr. Öberg and I participated in a series of workshops as part of a real-world decision making process. Our observation results were discussed in our research team. In the Orchard Commons study, I worked with Ms. Sanjana Bajracharya on her MSc research on designing and running a case study. Ms. Bajracharya designed and developed Group ValueCharts with some contribution from Ms. Kai Di Chen. Group ValueCharts was used for this case study. I worked on the decision problem used in this case study with the help of Mr. Daniel Klein, Ms. Ghazal Ebrahimi, and Mr. Lucas Navilloz. Ms. Kai Di Chen and I also conducted heuristic evaluations of ValueCharts and Group ValueCharts. For these case studies, Dr. Öberg was the principal investigator, and the UBC Behavioral Research Ethics Board (BREB) certificate of approval number is H12-03317.

Table of Contents

Abstract
Preface
Table of Contents
List of Figures
Acknowledgments
Dedication
CHAPTER 1 Introduction
1.1. Multi-stakeholder decision processes in infrastructure planning projects
1.2. Structured Decision Making and Multi-Criteria Decision Analysis
1.3. Use of interactive visualization to support MCDA and SDM processes
1.4. A review on existing Decision Support Systems (DSSs) for infrastructure planning
1.5. Research questions
CHAPTER 2 Methods
2.1. Participatory observation at workshops guided by a structured decision making approach
2.2. Demonstrating ValueCharts and testing interactive infovis capabilities with potential users
2.3. Using interactive visualization to facilitate group decision analysis for a stormwater management problem
2.4. Heuristic evaluation of ValueCharts
CHAPTER 3 An overview of the case studies
3.1. SDM Workshops as part of a decision process for building a new wastewater treatment plant
3.2. Stormwater management at Orchard Commons
CHAPTER 4 Results and discussion
4.1. Visualizations used in the SDM workshops to facilitate group MCDA
4.1.1. Comparing multiple alternatives over a set of criteria
4.1.2. Analyzing participants' responses
4.2. Visualizations used in Group ValueCharts – Orchard Commons case
4.2.1. Comparing multiple alternatives over a set of criteria
4.2.2. Analyzing participants' responses
4.3. Presenting complementary information and the risk for information overload
4.4. Interaction with data
4.5. Understanding MCDA and trust in the results
4.6. Allowing group interaction
CHAPTER 5 Conclusions
Bibliography
Appendices
Appendix A: Multi-Attribute Utility Theory (MAUT)
Appendix B: Booklet prepared for the first meeting of Orchard Commons
Appendix C: Booklet prepared for the second meeting of Orchard Commons
Appendix D: Survey questions for Orchard Commons

List of Figures

Figure 2.1: Overview of ValueCharts illustrated by the comparably simple decision problem 'choice of hotel'
Figure 2.2: Initial four designs to visualize individual preferences in a group decision process
Figure 2.3: Meeting room with UBC staff on stormwater management at Orchard Commons
Figure 2.4: Left: Group ValueCharts: Average view; Right: Group ValueCharts: Detailed view for stormwater management at Orchard Commons
Figure 3.1: Part of the printed form filled by one of the authors (Taheri) for the swing weighting exercise in one of the workshops held during the planning process for the renewal of a Canadian sewage treatment facility
Figure 4.1: Colour-coded performance table shown in one slide from the presentation in the real-world case study that the facilitator used to explain that alternatives #2 and #6 are inexpensive and simple and have mediocre performance
Figure 4.2: Audience responses to one of the 7 questions asked in one of the workshops held during the planning process for the renewal of a Canadian sewage treatment facility
Figure 4.3: Graphs used to show results for a) swing weights given by all participants, b) direct weights given by all participants, and c) ranking results based on individual swing weights. These graphs were printed for each participant based on their results.
Figure 4.4: A task framework to design infovis tools to support decision analysis (Bautista & Carenini, 2006)
Figure 4.5: A summary of survey results from six participants in a study investigating the perceived usefulness of an interactive group decision visualization tool, named Group ValueCharts (Bajracharya et al., 2014)
Figure 4.6: The pop-up window to see utility functions of users for a criterion
Figure 4.7: Visualization in Group ValueCharts to show the interplay between weights and utility functions
Figure 4.8: Integrating ValueCharts with the reports to enable "details on demand". When a user right-clicks on an alternative or a criterion, they can go to the exact part of the report talking about it.

Acknowledgments

I would like to express my sincere gratitude to my academic supervisors, Dr. Gunilla Öberg and Dr. Giuseppe Carenini, for their support and guidance. This work is a result of their continuous encouragement and support to do what I was passionate about. I would also like to thank Dr. David Poole and Dr. Brent Chamberlain for their guidance and active involvement in my work. I sincerely thank Dr. Milind Kandlikar for being the examiner of my thesis. I would also like to thank my colleagues who helped me during my study: Ms. Bajracharya, Ms. Kai Di Chen, Mr. Daniel Klein, Ms. Ghazal Ebrahimi, Mr. Alireza Zarei, and Mr. Lucas Navilloz. Finally, I would like to thank the love of my life and my closest friend, Mrs. Neda Afghari, for her constant support and sincere encouragement to pursue my passion. I cannot imagine having been able to succeed without the endless support of our great families.

Dedication

To Neda and our lovely families, and those whose lives inspire us to think different and aim for making a meaningful change to our world.

CHAPTER 1 Introduction

1.1. Multi-stakeholder decision processes in infrastructure planning projects

The existing infrastructure of most cities in the industrialized world is aging.
The planning process for their retrofitting or replacement usually requires group decision analysis among multiple stakeholders, who should together assess the environmental, social, and economic aspects of a number of alternatives, make trade-offs, and reach consensus over one final solution. Every stakeholder represents a group of people or an organization that will be influenced by the decision, and therefore looks after their interests, which might be in conflict with the interests of other stakeholders. In reality, it is rare to find a solution that satisfies the expectations of all stakeholders. Most of the time, providing more benefits to some stakeholders poses more costs to others. These conflicting or sticking points (SPs) need to be identified and discussed in stakeholder meetings during the planning phase. If left unresolved, some SPs may bring about unexpected costs and also result in losing support from some stakeholders during the construction and operation phases. Decision makers often do not have the resources and tools to do an integrated analysis in order to identify the best solution. As a result, they generally lean towards more traditional solutions (Guest et al., 2009). In infrastructure planning, stakeholders usually have a limited number of meetings (a limited time resource) to receive all information about the project, discuss objectives, evaluate alternatives, and express their interests and concerns over possible outcomes of the project. Because of this time limitation, it is crucial to efficiently identify the main SPs among stakeholders in order to discuss and resolve them before making a decision. In these meetings, however, some critical SPs may never be captured or sufficiently discussed. Nunamaker and Deokar (2008) cite various reasons that can impede effective discussion in face-to-face meetings. For instance, "airtime fragmentation" gives each participant a small portion of time to express preferences, ask questions, and support or oppose other views. Also, some participants might "dominate" the discussion time in an unproductive manner. Some of the other factors they cite are "information overload", "coordination problem", "incomplete use of information", "concentration blocking", and "attention blocking".

1.2. Structured Decision Making and Multi-Criteria Decision Analysis

Structured Decision Making (SDM) and Multi-Criteria Decision Analysis (MCDA) are increasingly used to facilitate infrastructure planning as well as environmental decision-making (Gregory et al., 2012; Huang, Keisler, & Linkov, 2011; Kiker, Bridges, Varghese, Seager, & Linkov, 2005; Matthies, Giupponi, & Ostendorf, 2007; G. A. Mendoza & Martins, 2006; Sojda et al., 2012). There are decades of research on developing MCDA methods (G. A. Mendoza & Martins, 2006; Montis, Toro, Droste-Franke, Omann, & Stagl, 2004) and decision support systems (DSSs) (Dalal, Khodyakov, Srinivasan, Straus, & Adams, 2011; Kilgour & Eden, 2010; Nunamaker & Deokar, 2008; Shih, Wang, & Lee, 2004), both of which are intended to facilitate in-depth analysis of a decision problem over multiple criteria. There are a variety of MCDA methods, all of which aim to support an individual or a group in the "subjective" analysis of a multifaceted problem. An MCDA problem consists of multiple "competing" objectives and a finite number of "explicit" alternatives.
In addition, to enable subjective analysis, MCDA assesses preferences, which are elicited from an individual or a group, to "partially or completely" compute the desirability of alternatives (Fischer, 2003; Fülöp, 2001; Kiker et al., 2005; Korhonen, 2005). Belton and Stewart (2002) provide a thorough review of MCDA methods and classify them into three main categories: "Value measurement models", "Goal, aspiration or reference level models", and "Outranking models". There are also several review papers that discuss the underlying axioms and assumptions of these methods (Huang et al., 2011; G. A. Mendoza & Martins, 2006; Montis et al., 2004).

Structured Decision Making (SDM) is a widely-used methodology that is also praised for being realistic and straightforward for most environmental projects (Gregory et al., 2012). SDM facilitates collaboration among stakeholders, experts, and decision makers and uses a variety of decision making and group discussion methods. This methodology is based on MCDA and applied ecology, alongside contributions from cognitive psychology, group dynamics, and negotiation theory and practice. SDM consists of seven steps, as follows (Gregory et al., 2012):
1) Defining the context and scope of the decision
2) Identifying objectives and performance measures
3) Developing alternatives or various strategies
4) Estimating expected consequences of these alternatives and strategies
5) Evaluating all alternatives and choosing the most desirable one
6) Implementing and monitoring the selected alternative
The seventh step is "iteration". SDM emphasizes that decision making is an iterative process, and it is usually essential to iterate between the above steps in order to make an informed decision and also to learn in order to improve the final solution over time.

1.3. Use of interactive visualization to support MCDA and SDM processes

It is well documented that information visualization (infovis) techniques can facilitate analytical activities, provided that they are designed properly. Few studies have, however, been dedicated to the use of infovis in support of multi-criteria decision analysis processes (Bautista & Carenini, 2006; Conati, Carenini, Hoque, Steichen, & Toker, 2014; Gratzl, Lex, Gehlenborg, Pfister, & Streit, 2013; Yi, 2008). These studies also focus mainly on single-user situations. In infrastructure planning, however, decision making processes usually require group deliberations among people representing various stakeholders and/or experts in different areas of expertise. Group deliberations involve complex social processes that pose different requirements on infovis support than single-user situations do (Alallah, Nezhadasl, Irani, & Jin, 2007; Heer & Agrawala, 2008). Heer and Agrawala (2008) propose a list of design considerations for developing infovis tools to support collaborative analysis and sense-making processes. These considerations bring attention to the potential of designing infovis tools that can improve "analytic capabilities by promoting an effective division of labor among participants, facilitating mutual understanding, and reducing the costs associated with collaborative tasks". Alallah et al. (2007) emphasize that there is a need for infovis tools that visualize group dynamics during face-to-face meetings as part of a consultation process.
A number of studies have probed into visualization techniques to support the identification of SPs in consensus reaching processes (CRP). The majority of these are designed to track the level of agreement/disagreement among participants over time during CRP, and to help identify when sufficient agreement has been reached (Alonso & Herrera-Viedma, 2007; Palomares & Martínez, 2014). There are, however, no studies on developing infovis tools for MCDA processes that can, for example, help a group of individuals identify SPs at the criterion level and relate them to the overall rankings of the alternatives. It is also worth mentioning that MCDA and SDM represent two complementary perspectives, which may influence the design of infovis tools differently. As an illustration, MCDA methods mostly represent a normative perspective, which studies how MCDA should work, for example how to break down the problem into a 'comprehensive' list of 'measurable' criteria, elicit and use preferences, and compare and rank alternatives. By contrast, the second perspective is prescriptive and defines MCDA as a procedure to facilitate real decision making processes. The prescriptive perspective aims to support people in overcoming cognitive illusions and human limitations, which are usually discussed under the descriptive perspective (Bell, Raiffa, & Tversky, 1988; Riabacke, 2012). The prescriptive perspective can be found in methodologies such as "structured decision making (SDM)" (Gregory et al., 2012), "trans-disciplinary case studies" (R. Scholz & Tietje, 2001; R. W. Scholz, Lang, Wiek, Walter, & Stauffacher, 2006; R. W. Scholz, 2011), and the "procedure for linking visions with resource allocation scenarios and Multi-Criteria Analysis" (Trutnevyte, Stauffacher, & Scholz, 2011, 2012). The normative perspective emphasizes the logic, usefulness and accuracy of MCDA results, while the prescriptive one focuses on the context of use and its usability and effectiveness.

1.4. A review on existing Decision Support Systems (DSSs) for infrastructure planning

As part of my collaboration on another study (Chamberlain et al., 2014), I reviewed a number of existing DSSs for wastewater infrastructure planning and management (Adewumi, 2011; Dinesh & Dandy, 2003; Finney, Gearheart, Salverson, & Zhou, 2009; Hidalgo, Irusta, Martinez, Fatta, & Papadopoulos, 2007; Joksimovic, Kubik, Hlavinek, Savic, & Walters, 2006; Loetscher, 2002). It became clear that most existing software systems are weakly designed to engage stakeholders and facilitate group decision analysis. Also, there is a lack of attention to the merits of including stakeholder preferences in decision processes in order to find and develop a lasting solution. Hamouda et al. (2009) reviewed multiple DSSs in wastewater management, showing that current systems focus almost exclusively on the technical and economic aspects of wastewater systems (Hamouda, Anderson, & Huck, 2009).

1.5. Research questions

This thesis was part of a larger project that aims at developing a platform for collaborative decision analysis that enables identification of sticking points when comparing alternatives by visualizing trade-offs, with a focus on renewal and retrofitting of municipal sewage and water infrastructure. To our knowledge, this thesis is the first study aimed at probing into how participants actually use infovis tools in a real-world MCDA/SDM process and at identifying the potential of adding interaction features to improve group deliberations.
The focus of this study is on meetings held to evaluate multiple alternatives over a set of criteria. This study aims to answer two research questions:
i. How has infovis actually been used by participants for group deliberations on an MCDA problem in a real-world SDM process?
ii. Can adding interaction features to infovis tools improve group deliberations in evaluating multiple alternatives over a set of criteria in the domain of urban infrastructure planning?

CHAPTER 2 Methods

Sub-section 2.1 describes the method used to probe into how participants actually use infovis tools for group deliberations in a real-world SDM process (my first research question). In sub-sections 2.2 to 2.4, I explain the other methods that I used to answer the second research question on whether interaction features have the potential to improve group deliberations in evaluating multiple alternatives over a set of criteria in the domain of urban infrastructure planning.

2.1. Participatory observation at workshops guided by a structured decision making approach

During 2012-2013, Dr. Öberg and I participated in various public and closed workshops surrounding the renewal of a Canadian sewage treatment facility serving 180,000 inhabitants. We were invited by staff in charge of the planning process to become members in what was named a Community Resource Forum (CRF) after expressing interest in the planning process. I attended actively as a participant in these workshops while taking notes, drawing on ethnographic methods (Randall & Rouncefield, 2014). Our observations and note-taking were guided by the questions: What type of infovis is used? To what end? And what type of interactions occur among facilitators and participants in relation to the infovis? After each workshop, my research team and I discussed our observations with the aim of identifying opportunities for interactive visualization.

2.2. Demonstrating ValueCharts and testing interactive infovis capabilities with potential users

In parallel with the workshops, we organized seven meetings with municipal staff involved in sewage infrastructure planning as well as five meetings with staff in two consulting firms dealing with the design and development of urban infrastructure. Some of the participants in these meetings were also members of the CRF. The meetings were held in our partners' meeting rooms, and the number of participants varied from two to six. The meetings generally started with someone in the host organization welcoming and introducing us, whereupon we gave a short introduction of the project, followed by open-ended questions regarding decision making processes related to building sustainable infrastructure in general and specific questions related to the potential use of interactive visualization as a tool to facilitate structured decision making. In meetings with public staff, I was in charge of the presentations, note-taking, and asking questions. In other meetings with consulting firms and potential users, I sometimes presented all or part of the presentation, helped prepare materials and demos before the meeting, and took notes when needed. In some meetings, a visualization tool called ValueCharts was demonstrated to explore the potential benefits of interactive visualization (Bautista & Carenini, 2006). ValueCharts consists of interactive visualizations designed to enhance the quality and efficiency of multi-criteria decision analysis (MCDA) processes.
For example, it allows individual users to inspect the impact of their preferences on the overall ranking. They can tweak and update their preference model (consisting of weights and score/utility functions) and simultaneously see how the changes impact the overall ranking. Figure 2.1 shows the various parts of the main interface of ValueCharts. For more information on ValueCharts, including the source code and publications, please visit the webpage created by my research team: http://www.cs.ubc.ca/group/iui/VALUECHARTS/ The following YouTube video, created by our research team, gives an overview of the main features of ValueCharts, including its interactive features: https://www.youtube.com/watch?v=1OUF-mkYYNM

Drawing on results from previous studies on ValueCharts (Bautista & Carenini, 2006; Chamberlain et al., 2014; Yi, 2008), in combination with insights from the CRF workshops and our first round of meetings with potential users, we applied the tool to three actual completed projects where MCDA had been used in the appraisal process and the facilitators had used a colour-coded decision table (similar to Figure 4.1) as infovis to support the decision process. The results were presented to our partners for feedback. Each project has a report describing the decision problem, the objectives and evaluation criteria, and finally all the alternatives. I was in charge of reading those reports, building the ValueCharts files, and connecting the reports to the right elements in ValueCharts. Dr. Brent Chamberlain (a post-doc at the time) was in charge of implementing the new features in ValueCharts based on our discussions in both internal and external meetings.

Figure 2.1: Overview of ValueCharts illustrated by the comparably simple decision problem 'choice of hotel'

Notes were taken in each meeting, with a focus on comments and suggestions related to interactive visualization (when might it be useful, by whom, for what type of deliberations, etc.). The notes were discussed internally in our research team meetings, based on which the team discussed and prioritized new features to be added to ValueCharts.

2.3. Using interactive visualization to facilitate group decision analysis for a stormwater management problem

In this study, the goal was to extend ValueCharts to a new version named Group ValueCharts to support group deliberations for MCDA problems. I worked with a Master's student in the Department of Computer Science at UBC (Ms. Sanjana Bajracharya) on her research thesis titled "Interactive Visualization for Group Decision-Making" (Bajracharya, 2014). I led the development of the case study, and with Ms. Bajracharya met potential users to validate different prototypes, as explained later in this section. I was also involved in designing and conducting the experiment, designing the surveys, and synthesizing the results. An undergraduate Computer Science student, Kai Di Chan, was also involved as a researcher and helped in both software programming and conducting the experiments. There was occasional help from three other students at the Institute for Resources, Environment, and Sustainability (IRES). The methodology and the main part of the results are already discussed in Bajracharya et al. (2014).
Although it is thoroughly elaborated in our paper (Bajracharya et al., 2014), the following provides the readers of this thesis with a brief overview of the methodology:

A) Low-fidelity prototyping: To create Group ValueCharts, based on the results of a previous study and the literature, Ms. Bajracharya led the creation of four hand-drawn sketches of ways to visualize individual preferences in a group setting (Figure 2.2). These were presented to colleagues and staff involved in water and sewage infrastructure planning. Based on their feedback, the sketch design with bar charts was chosen as the means to visualize individual preferences for each criterion and each alternative, as it appeared to be the easiest of the four to understand intuitively. Sanjana and Kai Di did the programming and developed Group ValueCharts with the aim of making it possible for users to compare multiple alternatives over a list of criteria and to identify trade-offs between different alternatives as well as sticking points among users.

Figure 2.2: Initial four designs to visualize individual preferences in a group decision process

B) Initial meeting with UBC staff in leading positions at departments involved in infrastructure planning: Staff members were invited to participate in the study. We first gave a quick overview of the tool in a meeting with four participants and asked them to suggest features that they felt would be useful in a group discussion. Their feedback and suggestions were used in developing Group ValueCharts.

C) Developing alternatives for the identified problem: We also used these conversations to identify a decision problem that could be used to test the effectiveness of the tool. Eventually, we decided to consider storm-water management for a site called Orchard Commons at UBC that was in the planning stage. We developed four alternatives for managing the stormwater and identified six evaluation criteria through an iterative dialogue with participants in the group and other staff that we were advised to contact.

D) Decision analysis meetings using Group ValueCharts: We held two meetings, each two hours long. The participants were asked to sit around a U-shaped table with a projection screen at the front end of the meeting room. Each participant was provided with a laptop with ValueCharts installed (see Figure 2.3). At the beginning of both meetings, the participants were asked if they would allow us to audio-record the conversation and log their interaction with the tool. We assured them that the collected data would be anonymized before publication. All participants agreed verbally. The first meeting aimed to familiarize the participants with the tool, collect their opinions on its strengths and weaknesses, and receive concrete suggestions on improving the first version. Seven of the eight invited staff members participated.

Figure 2.3: Meeting room with UBC staff on stormwater management at Orchard Commons

The second meeting was to further investigate the participants' perceptions of the usefulness of the tool and to collect their feedback on changes made to the tool since the first meeting. Five of those who were in the first meeting also participated in the second. The participants were asked to fill out three surveys: one before using the tool, one after discussing the Average view of Group ValueCharts (Figure 2.4, left), and the third one after discussing the Detailed view of Group ValueCharts (Figure 2.4, right).
The surveys were designed to evaluate the process and outcome effectiveness of the tool. The survey questions were designed based on the framework developed by Schilling, Oeser, and Schaub (2007). In the first survey, participants were asked to answer a number of questions on their familiarity with MCDA methods and visualization tools. In the second and third surveys, participants were asked to evaluate the Average and Detailed views of Group ValueCharts, respectively, on improving the following aspects: participation in discussions, group interactions, exchange of information among participants, identification of agreements and disagreements, capability to support informed decision making, trust in the results (i.e. willingness to choose the highest-ranked alternative), and willingness to use.

Figure 2.4: Left: Group ValueCharts: Average view; Right: Group ValueCharts: Detailed view for stormwater management at Orchard Commons

2.4. Heuristic evaluation of ValueCharts

Ms. Kai Di Chan and I conducted a heuristic evaluation study (Stone, Jarrett, Woodroffe, & Minocha, 2005) on ValueCharts. This evaluation was aimed at helping uncover usability issues in ValueCharts and Group ValueCharts; therefore, most of the results were not directly helpful in answering my research questions. There were, however, occasions in which participants provided feedback that I will use in discussing the results from the previously mentioned case studies. Seven individuals, including myself, participated. When Kai Di and I were ready, Kai Di first facilitated the experiment with me as the first subject to ensure that the experiment could provide us with the intended results. Two participants played both the facilitator and the user roles while completing a number of tasks. A formal heuristic evaluation requires participants/inspectors with diverse backgrounds, including domain experts and HCI experts. Some participants were familiar with decision making processes and some had programming skills. Since none of the participants had notable experience in the field of HCI usability, the results may not be considered a formal heuristic evaluation. Participants were first trained on the ten heuristics that they needed to use to evaluate ValueCharts (Nielsen, 1994; Stone et al., 2005). Then, an observer walked each participant through the software. The first task was to construct a decision model. The second task was to express their preferences as weights and score functions and then analyze the ranking results. In this thesis, some of the results from the second task are discussed.

CHAPTER 3 An overview of the case studies

As briefly mentioned in the methods chapter, my study is based on the results of the following two case studies. The first case was a real-world SDM process in which I participated as an observer in a number of its workshops and also held a number of separate meetings with potential users with relevant experience. In the second case, I worked with Ms. Bajracharya and our research team to design and conduct an experiment in which our interactive visualization tool (Group ValueCharts) was used to facilitate group deliberations for evaluating a number of alternatives developed for an actual but simplified decision problem. In the following, these case studies are described.
3.1. SDM Workshops as part of a decision process for building a new wastewater treatment plant

The Community Resources Forum (CRF) consisted of 20-30 invited members from neighbourhood organizations, local activist/interest groups, local businesses, the water and sewage infrastructure industry, local politicians, a former member of parliament and a few academics. The role of the forum was to advise the staff in charge of the decision process, alongside various other groups that were also consulted, such as an expert group, a public advisory committee, and First Nations. A consulting firm that explicitly used structured decision making (Gregory et al., 2012) to guide decision processes moderated the workshops. Six workshops were held, aimed at informing and engaging the CRF on scope definition, development of evaluation criteria, screening of different design concepts, preference elicitation, and evaluation of alternatives. The first two workshops were mainly informative, with staff members in charge of the planning process informing the CRF of the process and what the plans were. In one of the workshops, we were informed that a list of objectives (evaluation criteria) had been defined and that the consultants had guided the development of nine design concepts (alternatives). The performance of the concepts in relation to the objectives and sub-objectives had been estimated through an iterative process involving several workshops with two groups called the "Engineering Team" and the "Architectural and Community Integration Team". Three distinctive 'build scenarios' had then been developed by comparing and evaluating the nine design concepts.

Participants took seats in a workshop room, which was equipped with a large projection screen and flip-charts. In some of the workshops the participants were given designated seats at round tables, forming smaller groups with a facilitator in each group. On other occasions there was free seating, and at yet other times there was a mix of both. The workshops usually started with an oral presentation by the moderators, followed by a discussion where participants shared their concerns, interests, and suggestions around the presented subject. A note-taker documented the conversations, and the workshops were audio-recorded. In most of the workshops the project team informed participants about different aspects of the project, followed by open-ended semi-structured discussions. Discussion time was arguably short (usually half an hour to one hour); therefore, questions and comments were briefly answered or just recorded with a promise to be answered later. After each workshop, the project team analyzed the information, answered the questions posed, and summarized them into a report, which was emailed to attendees a few weeks after each workshop.

In two of the workshops we attended, we were invited to compare and evaluate the nine design concepts and discuss trade-offs identified in three build scenarios, which they had developed based on their evaluations of the nine concepts. In one of these evaluation workshops, the facilitator asked participants to fill out a form with seven questions that were "designed to help find the best balance among competing trade-offs". The responses were given on a predefined discrete scale. The presenter guided the participants through the form by describing trade-offs related to each question with the help of a few PowerPoint slides. For example, one trade-off was whether or not to include a second site.
On the one hand, the facilitator explained, this would for example allow for more efficient energy and material recovery thanks to biogas production through combined treatment of solid and liquid waste. On the other hand, two sites would require transportation of the solid waste, which "shifts some impacts elsewhere". The question posed to the forum participants in relation to this issue was "which of the following best summarizes your expression of using a second site on the community and region", with five possible responses: Positive benefit, Potential positive benefit, Neutral, Potential negative effect, and Negative effect. The facilitators used PowerPoint to describe the pros and cons of a second site. When participants had filled out the forms, their responses were collected and entered into a computer while the participants were taking a break.

The explicit aim of another workshop was "to address the interests of those who wanted to understand better the rationale for the selection of design concepts and to express their personal views and values directly through weighting and rating exercises," and also to "show how these weights link to ongoing and future choices, brief you on the latest issues being considered, and will also be seeking your input on those issues directly." The facilitator explained that they were using a "multi-method approach to examine trade-offs", which consisted of two questionnaires: "direct ranking" and "swing weighting" (Gregory et al., 2012). We were first presented with a "swing weighting" questionnaire in which we were asked to start by individually weighting the sub-objectives within each main objective, based on their perceived relative importance (Figure 3.1). For instance, each individual was asked to express their preferences on the relative importance of the eight sub-objectives of the main objective "Provide Robust Secondary Waste Water Treatment". In the 'Rank' column, they first ranked the sub-objectives based on their relative importance. Each sub-objective was to be compared with every other sub-objective. After ranking all sub-objectives, in the 'Weight' column the participant was asked to give a score of 100 to the sub-objective with the highest rank (i.e. the one ranked 1st). Then, the second-ranked sub-objective was to be given a score of 100 or less based on its importance relative to the first. This process was to be continued until all sub-objectives within each main objective had been compared to each other. When this was done for the sub-objectives under each one of the main objectives, the same procedure was carried out for the main objectives in another similar table with 'Rank' and 'Weight' columns. Participants carried out the task while listening to the facilitator's explanation and simultaneously learning about the objectives through the printed brochure.

Figure 3.1: Part of the printed form filled by one of the authors (Taheri) for the swing weighting exercise in one of the workshops held during the planning process for the renewal of a Canadian sewage treatment facility.

Next, we worked on the "direct weighting" questionnaire, in which a presenter went through the concept designs using a PowerPoint presentation while participants were asked to intuitively evaluate them in the questionnaire. Similarly, each participant was asked to first rank the concepts, then give 100 to the one they intuitively felt was most desirable, and then continue and assign a number between zero and 100 to the rest. The oral presentation was around 20 minutes long.
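To make explicit how the weights collected in such exercises can feed into an overall ranking of the design concepts, the following is a minimal sketch, in Python, of the standard additive aggregation that tools such as ValueCharts build on; the criterion names and numbers are hypothetical, and the facilitators' actual aggregation procedure was not shown in the workshop. Each participant's raw 0-100 weights are normalized to sum to one, and each alternative's overall value is the weighted sum of its scores on the criteria.

# Minimal sketch of additive multi-criteria aggregation (hypothetical numbers).
# Raw swing weights (0-100) as they might be entered in the printed form.
raw_weights = {"robust_treatment": 100, "cost": 80, "sustainability": 40}

# Normalize the weights so that they sum to 1.
total = sum(raw_weights.values())
weights = {criterion: w / total for criterion, w in raw_weights.items()}

# Scores of two design concepts on each criterion, rescaled to the 0-1 range
# (in the workshop these would come from the 1-to-5 performance table).
scores = {
    "concept_1": {"robust_treatment": 0.9, "cost": 0.4, "sustainability": 0.6},
    "concept_7": {"robust_treatment": 0.7, "cost": 0.8, "sustainability": 0.5},
}

# Overall value of each alternative: the weighted sum of its criterion scores.
overall = {
    alt: sum(weights[c] * s for c, s in criterion_scores.items())
    for alt, criterion_scores in scores.items()
}
print(sorted(overall.items(), key=lambda item: item[1], reverse=True))

Re-running such an aggregation with different weights immediately changes the ranking, which is in essence the interaction that ValueCharts exposes through its interface (see sub-section 2.2).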
When the participants started to fill out the form, a participant voiced that they felt they were not able to intuitively compare and evaluate the alternatives. The reaction among the other participants (nodding, verbal support, etc.) indicated that several of them shared this view. The group asked if the facilitators could provide some kind of summary or overview of the alternatives that would make it easier to rank them. The facilitators responded by printing the colour-coded performance table that outlines how the design concepts performed in relation to the objectives (Figure 4.1). During these exercises, participants asked various questions to get clarification about things they could not find in the brochure. When the answer was to be found in the brochure, the project team used the projection screen to show the specific page where the information could be found. For instance, one participant asked for a clarification of the risk-related objectives to see if the emphasis was on risks threatening the government or the local community. The facilitator showed part of the brochure on the projection screen to provide information on the types of risks that had been considered.

3.2. Stormwater management at Orchard Commons

In our conversations with UBC staff, we identified a decision problem that could be used to test the effectiveness of Group ValueCharts. The decision problem was about storm-water management at a UBC site called Orchard Commons. This site is currently a car parking lot, but UBC is planning to construct new buildings there for the UBC Vantage College. The site is envisioned to become a mixed-use academic and student housing center. Our case study, however, was a simplified version of this decision problem. Participants were aware that the goal of this study was to explore ways to adapt an interactive visualization tool to support their group decision analysis, and also to help them identify to what extent differences among participants (i.e. sticking points) might impact the outcome. We considered four alternatives for managing storm water from a new building to be constructed at Orchard Commons. The four alternatives are: "Site unchanged" (as a baseline), "Conventional run-off management", "Best practice on-site run-off management", and "Direct run-off to 'Sustainability Street'". To compare these alternatives, we identified six evaluation criteria: "Reuse capacity", "Cliff erosion", "Innovation support", "Risk", "Disruption to stakeholders", and "Potential domino effects". Through a number of meetings with technical experts at UBC, we estimated the outcomes of these four alternatives over the six criteria.

CHAPTER 4 Results and discussion

4.1. Visualizations used in the SDM workshops to facilitate group MCDA

The first research question of this study is whether and how participants have used infovis tools for group deliberations in a real-world MCDA process. The focus of this study was on meetings in which participants evaluate and discuss alternatives over multiple criteria. For this purpose, I participated as an observer in a series of workshops held during an actual SDM process. The visualizations used in the workshops consisted of a variety of tables, graphs, and 3D drawings. The visualized information was shared with the participants through various types of printed material, PowerPoint presentations, posters hung on the walls, and flip-charts.
The content of the PowerPoint presentations and posters overlapped to a large extent with the information provided in the printed material. Repeating information across multiple sources (e.g. PowerPoint slides, printed materials and posters) may lead participants to read the same information several times, which may result in unnecessary information overload (see also sub-section 4.3). The visualizations on flip-charts were hand-drawn illustrations made by participants or the facilitators. As further discussed in sub-section 4.4, none of the tools used allowed the participants to interact directly with the data, and the visualizations were generally static. Since the focus of this study is on group deliberations over a number of alternatives and multiple criteria, the visualizations used to support the following two tasks are scrutinized:

4.1.1. Comparing multiple alternatives over a set of criteria

A colour-coded performance table was used to visualize how the different design concepts performed in relation to each other (Figure 4.1). In meetings with public staff and also consultants, it was mentioned several times that this colour-coded visualization (or similar ones) is very popular for comparing multiple alternatives. Dark red illustrated "worst" performance, dark green "best", while orange and yellow were intermediates. The performance of the design concepts (alternatives) was depicted over 5 objectives and 27 sub-objectives. The scale used to assess each objective is given in the column 'Units', with the majority measured on a scale of 1 to 5 and the meaning clarified in the brochure. The performance of each alternative on each sub-objective was specified in the individual cells. Neither the table nor the brochure, however, showed the score functions, i.e. how the real performances of the alternatives on each criterion had been mapped to the 1-to-5 scores given in the performance table. For example, it was not possible to question why alternative #7 performs 4.5 on criterion 1A while alternative #3 performs 1.0 (i.e. the former is 4.5 times better than the latter). When I asked about this later, I was given another report explaining how they came up with the scores. This report, however, was not given out in the workshop, and a reason given by one of the public staff was that most participants do not usually ask for this detailed information, and it is therefore not considered wise to provide such details for the sake of the minority. In addition, the static nature of (and the lack of interaction with) the brochures and PowerPoint slides did not allow such personalized access to detailed information when someone needed it (see sub-section 4.4). To illustrate how to interpret the table, the facilitators showed PowerPoint slides where cells in the table were highlighted with a black rectangle to explain the strengths and weaknesses of a design concept. For example, Figure 4.1 shows the slide used to demonstrate that design concepts #2 and #6 are inexpensive, simple from an operational point of view, but with mediocre performance over the rest of the objectives. Similarly, the facilitator used a few other slides of the performance table to illustrate some other differences, for example that design concepts #1 and #7 are "least expensive and functionally strong", #3 and #4 are "with bigger financial and operational risks, but better sustainability pay-off", and #8 is "a strong all-rounder" that "performs quite well across a wide range of measures".
This, however, was a one-way presentation, and participants just listened to the facilitator. The table was not used later by participants to systematically identify the key differences/trade-offs (to be discussed first); instead, we observed that attention was usually devoted to the views and questions that a few participants raised about the descriptions of some alternatives provided in the booklet or PowerPoint slides. The facilitator usually led discussions around those topics.

The colour-coding in the performance table made it possible to identify some trade-offs. For example, when analysing the outcome under objective #1, "Provide robust secondary treatment", it is easy to see that alternative #1 outperforms alternative #2. It is also easy to see that alternatives #1 and #7 perform best (several green cells), whereas alternatives #3 and #4 perform poorly (several red cells). It is, however, difficult to deduce whether or not #1 outperforms #7, because the colours in the corresponding sub-objectives vary from red to green, which makes it hard to aggregate and deduce their overall performances.

Figure 4.1: Colour-coded performance table shown in one slide from the presentation in the real-world case study that the facilitator used to explain that alternatives #2 and #6 are inexpensive and simple and have mediocre performance.

4.1.2. Analyzing participants' responses

Bar charts, stacked bars or scatter plots were used to visualize participants' responses to the trade-off questions as well as the swing-weighting and ranking exercises (Gregory et al., 2012). Bar graphs, such as the one shown in Figure 4.2, were used to probe into participant responses to trade-off questions, for example by asking participants to explain why they gave a certain response. The distribution of responses in the graphs helped identify the level of disagreement across participants for each question. The explanations provided by participants also clarified some of their concerns, interests, and suggestions related to making the trade-offs. For example, one question was whether to use a second site for integrated resource recovery from wastewater and solid organic waste that "reduces demand on existing infrastructure, and provides additional space for alternative development opportunities on site." The responses varied widely from "negative impact" to "potentially positive benefit". Proponents highlighted that "adding more organic inputs may make the project more cost effective" or that "the project has the potential to draw young entrepreneurs who may learn from, develop and research new approaches to urban waste", while opponents were concerned that "shipping materials off the site may mitigate benefits" or that "it may not create economic return." This example shows how a basic visualization (i.e. a bar chart) has the potential to help expose a wide distribution in the attitudes of participants over one trade-off, which also helped them probe into their underlying reasons and assumptions. Basic visualizations, however, are limited in supporting the assessment of the interplays that exist in trade-offs across multiple criteria and of individuals' differences over different MCDA concepts. This limitation was aimed to be addressed in the other study that I did with Ms. Bajracharya and our research team (see sub-section 4.2).

Figure 4.2: Audience responses to one of the 7 questions asked in one of the workshops held during the planning process for the renewal of a Canadian sewage treatment facility.
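As an aside, the kind of response distribution shown in Figure 4.2 can be summarized with very little machinery; the sketch below uses hypothetical responses, not the actual workshop data, to tally the answers to one trade-off question on the five-point scale used in the workshop and to compute a crude spread measure that could be used to flag the most contested questions.

# Illustrative sketch (hypothetical data): tally responses to one trade-off
# question on the five-point scale used in the workshop, and compute how far
# apart the participants are on that scale.
from collections import Counter

scale = ["Negative effect", "Potential negative effect", "Neutral",
         "Potential positive benefit", "Positive benefit"]
responses = ["Neutral", "Potential positive benefit", "Negative effect",
             "Positive benefit", "Potential negative effect", "Neutral"]

# Count how many participants chose each option.
tally = Counter(responses)
print({option: tally.get(option, 0) for option in scale})

# A crude disagreement indicator: the range of responses on the ordinal scale.
positions = [scale.index(r) for r in responses]
print("spread on the 5-point scale:", max(positions) - min(positions))  # 4 = full range

A facilitator could, for instance, sort the seven questions by such a spread measure and start the discussion with the most contested one.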
In addition, the limited time constrained the discussion of every trade-off; therefore, most comments were only noted with a promise to be answered later. Brief answers to these comments were then given in the minutes of the workshop, which were emailed to participants a couple of weeks afterwards. Time-wise, it was almost impossible to engage everyone in discussions over the trade-offs. Many participants usually stayed quiet during most discussions. The question that came up later in our research team's internal meetings was whether and how visualization techniques can help identify the key sticking points and also identify and first engage those participants whose concerns and opinions are most likely to be influential for the outcomes of the workshop. This question was also discussed with some potential users, which in turn influenced the design of Group ValueCharts.

In the workshop in which we expressed our preferences over multiple criteria and alternatives, after a lunch break the project team printed a personalized 2-page report for each participant with several graphs in which their weights (Figure 4.3-a) and direct ranking results (Figure 4.3-b) were marked with a distinctive colour while the other participants were marked with a white dot. The report also provided a graph of each person's overall evaluation based on their weights (Figure 4.3-c), which made it possible to see the alternative that each participant had given the highest rank based on their preferences (in Figure 4.3-c, this is alternative #7, "U. Garden", for Taheri as a participant).

Figure 4.3: Graphs used to show results for a) swing weights given by all participants, b) direct weights given by all participants, and c) ranking results based on individual swing weights. These graphs were printed for each participant based on their results.

The facilitator used the screen to show these graphs with all dots in the same colour. When the facilitator hovered the pointer over a certain dot in the graph, a pop-up appeared with the name of the participant who had given this score. The facilitator used these graphs to go through the objectives, sub-objectives and alternatives where disagreements were large, and asked for explanations from those who had given the minimum or maximum weights. For example, when the facilitator was reviewing the result for objective 1, he hovered over the dot in the graph (similar to Figure 4.3-a) with the highest score (~37%), and asked the identified participant to justify and explain their preferences. Each explanation was usually followed by a short discussion between participants and the project team, which helped identify potential areas of disagreement among participants, unpack some concerns and interests, and collect suggestions. The graphs made it possible to identify differences among perspectives and probe into the underlying reasons and assumptions. For instance, participants' weights for "promote sustainability policy objectives" varied from 0.0 to 25% (see Figure 4.3-a), which showed that some participants thought that "promoting sustainability" is not important at all even though others found this objective very important. Identifying such differences enabled the facilitator to engage participants with extreme views to share their reasons and assumptions. Some participants acknowledged that this workshop was the most useful and effective one in informing them and also in discussing alternatives and other aspects of the project.
A participant mentioned that the discussions in the previous workshops were not as structured as this one. This example from this SDM workshop highlights the potential of using visualization techniques to support multi-criteria decision analysis in a group setting. As discussed in the following sub-sections, however, there are limitations in these techniques and also opportunities to support such group deliberations even better. In the next sub-section, the interactive visualization techniques used in Group ValueCharts are discussed in terms of how they support the above-mentioned tasks (i.e. comparing alternatives and analyzing preferences). The following sub-sections will further discuss the visualizations used in this real-world decision process.

4.2. Visualizations used in Group ValueCharts – Orchard Commons case

The original ValueCharts was developed based on a task framework proposed by Bautista and Carenini (2006). Grounded in research findings from both decision analysis and information visualization, this task framework (see Figure 4.4) was developed for designing and evaluating infovis tools that aim to support an individual in all steps of a decision analysis process, namely construction, inspection, and sensitivity analysis (Bautista & Carenini, 2006; Carenini & Loyd, 2004).

Figure 4.4: A task framework to design infovis tools to support decision analysis (Bautista & Carenini, 2006)

Group ValueCharts (Bajracharya et al., 2014) is an extended version of ValueCharts that aims to support a group of individuals who analyze a decision problem together during a meeting. Group ValueCharts extends the inspection phase of the above task framework to support group deliberations. A number of new tasks are supported in Group ValueCharts based on the literature, our conversations with the UBC staff who participated in the Orchard Commons case, and our observations and meetings during the first case study. For instance, our observations and meetings from the first case study helped us recognize that identification of disagreements can play a key role in facilitating group deliberations during MCDA/SDM processes. A number of studies have probed into visualization techniques to support identification of disagreements in consensus reaching processes (CRPs). The majority of these are designed to track the level of agreement/disagreement among participants over time during the CRP, and to help identify when sufficient agreement has been reached (Alonso & Herrera-Viedma, 2007; Palomares & Martínez, 2014). Group ValueCharts, however, was the first study to use interactive visualizations to support MCDA processes by helping a group of individuals identify their disagreements at the criterion level (see Figure 2.2) and relate them to the disagreements in their overall rankings (see Figure 2.4: the stacked bar charts for all users on every alternative).

Several features were also added after its demonstration in our first meetings with UBC staff in the Orchard Commons case study. For example, a line was added in the Detailed view of Group ValueCharts that shows the average overall ranking for each alternative. This enables participants to probe into each individual's results while still seeing the average. Similar lines were also added for each criterion. In addition, a 'heat map' (which later changed to a 'progress bar') visualization was used to show the level of disagreement in each criterion based on weights, scores, and the multiplication of the two (called 'product' in the tool).
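The tool's actual disagreement metric is described in Bajracharya (2014); purely as an illustration, the sketch below uses a simple spread-based measure and hypothetical participants (both assumptions on my part) to show how a per-criterion disagreement level over weights, scores, and their product could be computed before being mapped onto such a heat map or progress bar.

```python
# Minimal sketch (not the tool's implementation) of a per-criterion
# disagreement level of the kind the heat map / progress bar conveys.
# Participants, numbers, and the spread-based measure are illustrative assumptions.
from statistics import pstdev

# Each participant's weight and score for two criteria (for one alternative)
weights = {"p1": {"cost": 0.5, "risk": 0.5},
           "p2": {"cost": 0.3, "risk": 0.7},
           "p3": {"cost": 0.4, "risk": 0.6}}
scores = {"p1": {"cost": 0.8, "risk": 0.2},
          "p2": {"cost": 0.6, "risk": 0.9},
          "p3": {"cost": 0.7, "risk": 0.5}}

def disagreement(values):
    """Spread of participants' values for one criterion (0 means full agreement)."""
    return pstdev(values)

for criterion in ["cost", "risk"]:
    w = [weights[p][criterion] for p in weights]
    s = [scores[p][criterion] for p in scores]
    product = [weights[p][criterion] * scores[p][criterion] for p in weights]
    print(f"{criterion}: weights {disagreement(w):.2f}, "
          f"scores {disagreement(s):.2f}, product {disagreement(product):.2f}")
```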
The design of Group ValueCharts is described in detail in our paper and in Ms. Bajracharya's MSc thesis (Bajracharya, 2014). In the following, the visualizations used in Group ValueCharts are discussed with respect to two aspects (comparing multiple alternatives and analyzing preferences), the same aspects used to discuss the visualizations in the real-world case study.

4.2.1. Comparing multiple alternatives over a set of criteria

In comparison to the first real-world case study, Group ValueCharts demonstrated that it can provide a faster overview of the overall ranking results of all individuals (the average and the distribution) and then support further assessment through several filtering and highlighting features. Group ValueCharts also makes it easier to probe into differences in the performance of multiple alternatives without having to switch to a completely detached visualization. In the Orchard Commons case study, for example, participants started using Group ValueCharts by comparing the overall rankings of alternatives. The stacked-bar visualization appeared to help participants promptly identify the alternatives with the highest and lowest rankings. For example, when the results were presented via the "Average" view, one participant quickly mentioned that "I liked (alternative) #2 worked out to be the best". Another participant highlighted a negligible difference between alternatives #2 and #3, which resulted in some discussion of their differences and trade-offs. During a discussion of the overall ranking results, some participants pointed to the performance of certain alternatives, which shifted the focus of their discussion onto the differences between those alternatives at the criterion level. It is worth mentioning that these discussions took place while the participants were looking at Group ValueCharts.

In contrast, in the first case study, several visualizations were used separately. The performance table (see Figure 4.1), for example, was used to facilitate comparing the performance of multiple alternatives, but as discussed in the previous sub-section, it was not successful in helping participants deduce the aggregated performances. The facilitator in the first case later showed another visualization (see Figure 4.3-c) for the overall ranking results, which likewise provided no way to relate the criterion-level performances shown in the performance table to the overall rankings.

The visualizations used in Group ValueCharts also showed potential to help participants validate their intuitive evaluations of some alternatives. For instance, a participant commented on the alternative in third rank that "we'd blown it (the third-ranked alternative) out the door already. So it's nice to see that it was the right decision". Conversely, some participants were surprised when they saw that the average results on some aspects differed markedly from their own. The combined use of interactive visualizations in Group ValueCharts therefore shows great potential for enabling participants to promptly compare alternatives on their overall performance and, if needed, probe into their differences at the criterion level.

4.2.2. Analyzing participants' responses

Gradually, the discussions shifted from the overall ranking towards the differences in participants' preferences. After some comments on the overall ranking in the Average view, a participant commented on a criterion: "I'm surprised that (the weight for) water conservation on average is lower".
This comment resulted in a discussion among all participants about, for example, different assumptions about its unit ($ cost or cubic metres of water) and its importance relative to other criteria. For instance, a participant mentioned that they considered its importance lower than that of some other criteria, perhaps because they are from a department with different priorities. This was one of several discussions that took place when a large discrepancy between participants was identified. Because of the limited information provided by the Average view, some participants asked to see the results at the participant level, which is provided in the "Detailed" view. In the Detailed view, people discussed their differences at both the overall-ranking and criterion levels. Reviewing their results in both the Average and Detailed views usually led to engaging discussions among participants and to probing into their reasons, underlying assumptions, and points of confusion, which were usually addressed by other participants or our research team. In the first real-world case study, the use of visualizations to analyze the preferences produced similarly promising results, and its participants expressed their appreciation at the end of the workshop.

The survey results (see Figure 4.5) from the Orchard Commons case also show that participants acknowledged that (the interactive visualizations used in) Group ValueCharts was able to facilitate identification of agreements and disagreements, make their meetings more participatory, structure group interaction, and improve exchange of information. The survey results show that participants in Orchard Commons found the Detailed view more successful in improving participation, group interaction, exchange of information, and identification of disagreements and agreements. Group ValueCharts was also praised in the survey for its potential to help them make informed decisions. A participant, for example, commented that "based on the group discussion, the tool really allows exchange of ideas and comments among group members, and helps reveal the disagreements and agreements".

Figure 4.5: A summary of survey results from six participants in a study investigating the perceived usefulness of an interactive group decision visualization tool, named Group ValueCharts (Bajracharya et al., 2014).

Most discussions, however, revolved around two parts of Group ValueCharts: 1) the overall ranking (the stacked-bar visualization), and 2) the pop-up windows representing all participants' utility functions (see Figure 4.6).

Figure 4.6: The pop-up window for viewing users' utility functions for a criterion

Figure 4.7: Visualization in Group ValueCharts to show the interplay between weights and utility functions

The visualizations representing the interplays between weights and scores (see Figure 4.7) were rarely used in discussions. As discussed in section 4.5, this might be explained by the difficulty some participants have in understanding MCDA concepts. Learning MCDA concepts usually requires time-consuming training, especially on the multiplication of weights and utilities and on the weighted sum of all score functions that computes the final rankings.

In the heuristic evaluation (see section 2.4), some participants asked for help to analyze the results. Several suggestions were given to facilitate analysis of the results and improve interaction with the tool.
One suggestion was to provide help buttons in different places with descriptive explanations and/or short training videos. Another suggestion was to use interactive tutorials in which users are asked to perform specific actions directly in the interface. In the heuristic evaluation study, some participants also shared their suspicions about the ranking results. To help users understand the interplays between different MCDA concepts, the facilitator's role was emphasized as critical: to properly explain how the results are computed from participants' preferences and to be available for any questions that may arise. Visualization techniques (such as highlighting, tooltips, labels, filtering, and pop-ups) were also discussed as having potential to visually help users see the interplays. Although training can be vital for deep analysis of the results, it may not be feasible for larger and more diverse groups of participants. If the aim is to support large groups, future studies should therefore focus more on visualization techniques that can visually help users understand the interplays among MCDA concepts. Also, it may make more sense to provide optional access to details on preference information in order to prevent information overload in the main interface.

4.3. Presenting complementary information and the risk of information overload

As active participants in the workshops, we (Professor Öberg and I) felt that it was challenging to absorb the information in the brochure while listening to the presentation and understanding the information in the slides; there is no doubt that we cherry-picked information and read more about some alternatives than others. Our sense is that the oral presentation influenced our responses more than the information provided in the brochure. We also felt that we did not have sufficient time to learn equally about the alternatives, and we found ourselves spending more time learning about some alternatives than others. We felt unable to draw on the new information, whether written or oral, when filling out the questionnaires or during the discussion, and believe that we carried out the ranking based mainly on our previous knowledge.

It is known that people who face complex decision problems involving multiple trade-offs often experience information overload, which may lead to an increased number of processing errors and decreased decision accuracy (Eppler & Mengis, 2004; Vogel & Coombes, 2010). This is known to happen when a large cognitive load is placed on 'working memory', which is shorthand for the cognitive processes responsible for applying and integrating new information with relevant old knowledge to solve a specific problem (Zhu & Chen, 2008). Speier (2006) argues that the complexity a person experiences in completing a task is proportional to the amount of information that they need to process. These studies suggest that people generally favour less effort over higher accuracy; when faced with information overload and complexity, they will therefore select and process only a subset of the available information. In addition to the above self-observation during the SDM workshops, there were clues suggesting that the other participants also faced information overload, as is commonly the case in this type of workshop (Eppler & Mengis, 2004; Vogel & Coombes, 2010).
For instance, it appeared that some of the questions participants asked arose because they could not find the answers in the brochure, or because they did not recall that the information they were asking for had already been given in the presentation, which indicates that they, like us, had read only certain parts of the brochure and listened to only part of the presentation. These questions also indicate that some participants likely recalled only some of the key points about objectives and alternatives when answering the questions or carrying out the weighting tasks, which according to previous studies may lead to more 'processing errors' and lower 'decision accuracy' (Speier, 2006; Vogel & Coombes, 2010). Furthermore, these participants were expected to answer trade-off questions or rank objectives based on their preferences while remembering what they heard during the presentation as well as what they read in the brochure. Having to review a large amount of information, they were also expected to complete these tasks in a short period of time, which might have intensified the experienced information overload, as this is known to happen when information is presented faster than a participant can process it (Nunamaker & Deokar, 2008; Vogel & Coombes, 2010).

It is commonly agreed that infovis is a powerful tool for helping participants understand and analyze underlying data (Amar, Eagan, & Stasko, 2005) and thus reduce the risk of information overload. Previous studies suggest that infovis has the potential to strengthen people's cognitive abilities by, for instance, expanding working memory and information storage, reducing search time, enhancing pattern recognition, facilitating "perceptual inference of relationships that are otherwise more difficult to induce", and providing "a manipulable medium that, unlike static diagrams, enables the exploration of a space of parameter values" (Thomas & Cook, 2005). When studying the impacts of information presentation formats on complex decision tasks, Speier (2006) argues that infovis can improve decision performance by controlling the complexity of the task, and can sometimes help achieve higher decision accuracy and shorter decision times.

One way to reduce the risk of information overload is to enable participants to access relevant details when they feel they need them. Drawing on our experiences from the workshops, we used ValueCharts to link various parts of the decision model with relevant information in a detailed report for one of the real-world cases. For example, as shown in Figure 4.8, if a user wondered how an objective was defined, they could access the information by right-clicking on the objective. This action opened a PDF of the project report in a new window at the location where the objective was described. This basic integration of ValueCharts and a PDF file helped demonstrate the potential of the "information on demand" feature, which was very much appreciated by our partners, as it made it possible to find relevant information in a much shorter time compared with having to search through the background material manually. It was mentioned that this feature can, for example, let participants first look at the "big picture" by reviewing the overall ranking and locating the main trade-offs; then, if they have a question or something is ambiguous, they are guided to the relevant details inside the report(s).
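The actual ValueCharts–PDF integration was implemented by Dr. Chamberlain and is not reproduced here; the sketch below, with a hypothetical report file and page index, only illustrates the idea of an "information on demand" link that opens the project report at the part describing the selected objective or alternative.

```python
# Minimal sketch (not the actual ValueCharts/PDF integration) of an
# "information on demand" link: selecting an objective opens the project
# report at the page describing it. The file name and page numbers below
# are hypothetical placeholders.
import webbrowser
from pathlib import Path

REPORT = Path("project_report.pdf").resolve()  # assumed to exist locally

# Hypothetical mapping from objectives/alternatives to pages in the report
PAGE_INDEX = {
    "Provide robust secondary treatment": 12,
    "Minimize life-cycle cost": 27,
    "Alternative #7": 54,
}

def show_details(item: str) -> None:
    """Open the report at the page describing the selected item.
    Most browser-based PDF viewers honour the '#page=N' fragment."""
    page = PAGE_INDEX.get(item)
    if page is None:
        print(f"No report section indexed for: {item}")
        return
    webbrowser.open(REPORT.as_uri() + f"#page={page}")

show_details("Provide robust secondary treatment")
```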
This feature, however, offered a different way of reviewing the details than what practitioners have been doing in most MCDA/SDM-based projects. They told us that they often prepare and share a large report, which requires readers to go through a great deal of detailed information before they can make sense of it. A typical report for an MCDA problem, for instance, starts with the problem definition, objectives, evaluation criteria/indicators, and alternatives, followed by the evaluation results based on inputs collected from stakeholders and experts. Based on our meetings with people working in a public organization, we also learnt that considerable summarization is usually needed when reporting to people at higher levels of management. In most projects, final decision makers usually expect to see a one- to two-page report. As a result, such a short report can only include the financial aspect of the decision and a very brief explanation of other aspects. A participant mentioned that this heavy summarization removes some critical information, such as the trade-offs made and the uncertainties that remain. Future work may therefore investigate whether and how this feature and similar ones can actually reduce the risk of information overload when sharing MCDA results among people involved in such real-world decision processes, and also reduce the risk of overlooking critical details in reporting to higher levels of management.

Figure 4.8: Integrating ValueCharts with the reports to enable "details on demand". When a user right-clicks on an alternative or a criterion, they are taken to the exact part of the report that describes it.

4.4. Interaction with data

The infovis used in the workshops with the Community Research Forum (CRF) was mainly designed to represent the data and did not allow participants to interact with it, for example by adding, updating, selecting or filtering the data, which according to Bautista and Carenini (2006) are essential tasks in the decision process. For instance, the static nature of the performance table (Figure 4.1) made it impossible for participants and facilitators to select and compare two non-adjacent alternatives. We did not have the opportunity to demonstrate ValueCharts in the CRF.

When we demonstrated ValueCharts in meetings with various potential users, they commonly expressed strong appreciation of the dynamic and interactive features of the tool. It was evident that users appreciated being able to play with the weights and seamlessly view how changing the weights impacted the final ranking of alternatives. A participant, for example, mentioned that this interactivity makes it possible to quickly assess the various prioritizations of multiple stakeholders, elicited as weights, and to simultaneously see how much the ranking results change from one prioritization to another. Interactive features in both ValueCharts and Group ValueCharts were also appreciated by the participants in the Orchard Commons case study.
For example, when participants were asked in the survey what they liked most about the tool, one participant answered: "The opportunity to compare options and adjust values and weight factors to reflect personal preferences." Another noted that "the ability to isolate participants (this is one of the filtering features in Group ValueCharts) and engage in discussion about assumptions was very powerful." In addition, previous studies emphasize that an effective visualization tool needs to facilitate a variety of interactions with data (Bautista & Carenini, 2006; Munzner, 2014; Yi, Kang, Stasko, & Jacko, 2007). It seems safe to claim that it is easier for people to relate different types of information if they are allowed to interact with the data. Interactive features may also help people deduce the interplays among different MCDA concepts and reduce the amount of training needed (see sub-sections 4.2.2 and 4.5). Drawing on the literature and our observations, we conclude that interactive visualization tools like ValueCharts and Group ValueCharts hold the potential of rendering deliberation processes less frustrating and more efficient.

4.5. Understanding MCDA and trust in the results

When we demonstrated ValueCharts' seamless ranking of alternatives in meetings with potential users, the initial reaction among the participants is perhaps best described as delighted appreciation ("aahh", "wow!", etc.), but generally there was at least one participant in each group who went from appreciation to bafflement to suspicion. This change was demonstrated by non-verbal actions such as leaning back in the chair, crossing the arms or frowning, accompanied by verbal expressions such as "But … how is the outcome calculated?". Responses such as "it is based on data the user puts into the model" seemed to increase rather than defuse the suspicion. Discussing this phenomenon with various participants ('suspicious' as well as 'non-suspicious' ones) led us to conclude that the initial positive reaction arose because the weighting process was intuitively understood and the seamless visualization of changes in weights was found very illustrative. The suspicion appears to be tied to the fact that many seem to find it challenging to grasp the relationship between scores and weights and how the interplay between the two impacts the overall ranking results, and, perhaps most important of all, to the fact that the scoring (how well a certain alternative performs in relation to a specific objective) is pre-set and hidden. Questions related to how scores are determined, how they relate to weights and how the two interact surfaced in every meeting, which led the participants to conclude that it was complicated. When probing further into these questions, several participants revealed that the ease of ValueCharts made them suspect that the seamless ranking of the alternatives was not reliable. For instance, in a meeting during a demonstration of ValueCharts, a participant mentioned that "it intuitively feels as if it ought to be hard to visualize something that is hard to understand."

In Group ValueCharts, although participants found the tool very useful for discussing their preferences, uncovering underlying assumptions, and assessing trade-offs between multiple alternatives, we also observed that they did not assess and discuss the interplays between their weights and utility functions and how the interplay between them impacts the overall ranking.
Assessing the interplays helps focus discussions on those disagreements that really matter. For example, it can help show whether a disagreement around the scores that participants gave to an alternative's outcome actually results in different rankings. If not, it may not be wise to spend the limited meeting time discussing such uninfluential disagreements. We got the impression that the negative reaction mainly surfaced among participants who were not familiar with multi-criteria decision analysis (MCDA) or tools that support structured decision-making (SDM). Therefore, a pertinent question for future development of ValueCharts is how to ensure users grasp what weights and score functions are and how the interrelationship between them impacts the final ranking.

4.6. Allowing group interaction

Salo and Hämäläinen (2010) argue that MCDA facilitates identification of differences within a group and learning about other views, which in turn helps participants better understand the decision problem. Bautista and Carenini's framework (2006) suggests that effective infovis needs to facilitate "sensitivity analysis" by enabling "comparison of results among different evaluations". In a group setting, the implication would be to make it possible for participants to probe into differences and similarities in their preferences. This was also clearly seen in the Community Research Forum (CRF) workshops, where the facilitators' combined use of bar charts and scatter plots (Figure 4.2 and Figure 4.3) made it possible to identify potential sticking points. This was, for example, done by using graphs to identify and probe into objectives that revealed large differences (high disagreement) in some weights. These discussions helped the facilitators reveal whether the disagreements were due to differences in assumptions, concerns, interests, or priorities. For example, one discussion revealed that the differences in the weights given to the sub-objective "nuisance associated with truck traffic" were due to some participants assuming that the trucks would travel at night, which would therefore not be a nuisance, whereas others assumed that the trucks would travel during the daytime, which they felt would be highly disturbing. Similarly, the participants also differed in their assumptions about how noisy the trucks were going to be. It was also mentioned that it was difficult to infer the difference in nuisance between 7 truck trips per week (the minimum needed in one of the alternatives) and 45 truck trips per week (the maximum needed). Another discussion showed that participants had weighted "risk of odour nuisances" differently because they relied on dissimilar assumptions about the 'frequency' and 'intensity' of the odour from the site.

The construction of the graphs was, however, onerous, and it was not possible to iteratively move between multiple graphs to get an overview and relate the results. It was, for example, not possible for the participants to see whether and how differences in their preferences (elicited as weights) would influence the overall rankings of the alternatives. Therefore, it was not possible to identify the sticking points that mattered for the outcome.
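To make concrete what it means for a sticking point to "matter for the outcome", the following minimal sketch (with hypothetical alternatives, scores, and weights, not data from the case studies) recomputes the additive ranking under two participants' weights and checks whether their disagreement changes which alternative comes out on top.

```python
# Minimal sketch (hypothetical numbers, not data from the case studies) of how
# one could check whether a disagreement over weights is a sticking point that
# matters: does recomputing the additive MAUT ranking with each participant's
# weights change which alternative is ranked first?
ALTERNATIVES = ["A1", "A2", "A3"]

# Pre-set scores s(o(a, c)) in [0, 1] for each alternative and criterion
scores = {
    "A1": {"cost": 0.9, "odour": 0.3, "truck traffic": 0.4},
    "A2": {"cost": 0.5, "odour": 0.8, "truck traffic": 0.7},
    "A3": {"cost": 0.2, "odour": 0.9, "truck traffic": 0.9},
}

# Two participants who disagree strongly on the weight of "truck traffic"
weights = {
    "participant 1": {"cost": 0.6, "odour": 0.3, "truck traffic": 0.1},
    "participant 2": {"cost": 0.4, "odour": 0.2, "truck traffic": 0.4},
}

def total_score(alt, w):
    """Additive multi-attribute utility: sum of weight * score over criteria."""
    return sum(w[c] * scores[alt][c] for c in w)

for person, w in weights.items():
    ranking = sorted(ALTERNATIVES, key=lambda a: total_score(a, w), reverse=True)
    print(person, "->", ranking)
# If the top-ranked alternative differs between the two rankings, the weight
# disagreement is a sticking point worth discussing; otherwise it is uninfluential.
```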
When we demonstrated the original version of ValueCharts in meetings with potential users during and after the first real-world case study, the dynamic and interactive features were acknowledged as useful because, for example, they make it possible for participants to personalize their exploration and analysis of the information based on their personal preference model. Reviewing group results together, however, was challenging and not effectively supported. While ValueCharts does allow each user to explore how changing the weights and scores (or value functions) impacts the ranking of alternatives, it does not allow users to collectively explore differences within a group. Nor does it allow users to probe into the roots of the differences (are they due to differences in weights, scores or value functions?). Our conversations suggested that an ability to visualize an aggregated group ranking would be helpful, as this would allow users to move between individual preferences and the group average or aggregated ranking. In the design of Group ValueCharts, the aim was therefore to help users assess the preferences of all participants together, identify disagreements, and analyze whether and to what extent these sticking points influence the final ranking of alternatives, which in turn we hypothesized would increase the effectiveness of the group deliberation process. To our knowledge, Group ValueCharts is the first infovis tool designed to facilitate group deliberations in MCDA/SDM processes. For example, Group ValueCharts makes it possible to visually assess all individuals' score functions (see Figure 4.6), their weights, scores and the multiplication of the two (see Figure 4.7), and the overall rankings, which together help deduce in what ways individual preferences differ within the group and where the largest disagreements lie.

The survey results from the Orchard Commons study show that participants strongly acknowledged that Group ValueCharts can improve their group interactions and exchange of information. Our observations also confirm that Group ValueCharts was successful in helping them identify their disagreements and engage everyone in discussing underlying reasons and assumptions. For instance, when participants were reviewing their preferences in Group ValueCharts for the criterion "risk", it became clear that one participant had given it a very high weight, another had given it a very small weight, and the rest had given it a medium weight. These two participants were engaged by others to explain their reasons. Interestingly, the first participant said that he had weighted "risk" very highly because he was thinking about it in a general sense, without considering its implications for the four available alternatives. The second participant explained that he had given it a very small weight in comparison to other participants because, based on his own operational experience, our estimates of "risk" for the four alternatives were negligible. At the end of the second meeting, another participant credited the potential role of Group ValueCharts (and interactive visualizations) in supporting group decision analysis in their meetings: "I think this tool will actually allow us to zero in on where we think there might be issues and that's real value here. ... We're drilling in on specific areas. So whether it's been set up slightly off or not, it doesn't matter.
The fact that you're able to have real discussion here about it is the valuable part of the tool, then make adjustments accordingly."

Conclusions

Drawing on the literature and our results, it appears that interactive and dynamic infovis tools, such as ValueCharts and Group ValueCharts, have the potential to increase the effectiveness of MCDA decision processes. First, properly designed infovis tools can support participants in comparing multiple alternatives over a set of criteria. For example, participants can quickly see the highest- or lowest-ranked alternatives and identify the main trade-offs among them. Such tools hold the potential to improve decision accuracy, as they allow participants to interact with the data and access details when they want to, which in turn can reduce the risk of information overload. Second, as observed in both case studies, using visualization techniques can facilitate effective identification of sticking points and engage participants with different preferential positions in explaining their underlying reasons and assumptions.

As acknowledged by participants in the Orchard Commons case, Group ValueCharts has the potential to improve group interaction and exchange of information in meetings. The Detailed view, which visualizes everyone's results, was credited with a greater improvement in group interaction and exchange of information than the Average view.

ValueCharts (the original version) is able to compute and visualize rankings based on one individual's preferences. In ValueCharts, the ability to see how changing the weights impacts the overall ranking was particularly appreciated among potential users. Based on discussions following demonstrations of ValueCharts to potential users, and drawing on the literature, Group ValueCharts was developed to allow facilitators and participants to probe into the underlying reasons and assumptions behind the main sticking points in a group setting. Group ValueCharts demonstrated an empowering role in group deliberations during MCDA/SDM processes, especially in identifying sticking points and determining whether they matter for the final ranking. For instance, it could potentially help reveal whether such differences are rooted in different weights or different scores.

Our conversations after showing ValueCharts suggest, however, that some users, albeit initially positive, grow suspicious of the final ranking in the tool if they do not fully grasp the difference between weights and scores and how the interplay between them impacts the final ranking. Similarly, when Group ValueCharts was used, no discussion took place around the visualization elements (see Figure 4.7) that represent the interplays between participants' weights and scores for each alternative. Future research may therefore focus on developing training sessions or modules that aim to efficiently familiarize participants with MCDA concepts, especially the interplays between weights and utility functions. New visualization techniques may also be evaluated to see whether they can visually aid users in understanding and analyzing the interplays. Furthermore, another research question to be studied is whether and how the seamless change of the rankings (as weights are changed) in ValueCharts influences trust in the tool among users who are not familiar with MCDA.

In our studies on ValueCharts and Group ValueCharts, we invited small groups of participants and worked on decision problems with small numbers of alternatives and criteria.
Also, our tools were studied in a group setting in which there was one projection screen and one meeting table, and every participant was provided with a laptop. In MCDA and SDM processes, however, such infovis tools might be used in different settings and group sizes, and for more complex decision problems. As can be seen in the first case study, SDM processes usually involve multiple groups of participants in different sizes through several interrelated meetings. In addition, their decision problems usually consist of a large number of alternatives and criteria. Therefore, future studies may also focus on validating and extending our findings to bigger groups involved in more complex decision problems and in other group settings. We conclude that infovis tools harbour great potential to support deliberations among a group of users familiar with the basic concepts of MCDA and SDM such as objectives, weights, and scores, but we believe considerable development is needed before the tool can be of wider use, for example in public consultations. 61 Bibliography Adewumi, J. R. (2011). A decision support system for assessing the feasibility of implementing wastewater reuse in South Africa. Faculty of Engineering and the Built Environment, University of the Witwatersrand. Alallah, F. S., Nezhadasl, M., Irani, P., & Jin, D. (2007). Visualizing the Decision-Making Process in a Face-to-Face Meeting. In 2007 11th International Conference Information Visualization (IV ’07) (pp. 168–176). IEEE. http://doi.org/10.1109/IV.2007.138 Alonso, S., & Herrera-Viedma, E. (2007). Using Visualization Tools to Guide Consensus in Group Decision Making. In F. Masulli, S. Mitra, & G. Pasi (Eds.), Applications of Fuzzy Sets Theory (Vol. 4578, pp. 77–85). Berlin, Heidelberg: Springer Berlin Heidelberg. http://doi.org/10.1007/978-3-540-73400-0 Amar, R., Eagan, J., & Stasko, J. (2005). Low-level components of analytic activity in information visualization. In IEEE Symposium on Information Visualization, 2005. INFOVIS 2005. (pp. 111–117). IEEE. http://doi.org/10.1109/INFVIS.2005.1532136 Bajracharya, S. (2014). Interactive Visualization for Group Decision-Making. The University Of British Columbia. Bajracharya, S., Chen, K. Di, Taheri, H., Carenini, G., Poole, D., & Öberg, G. (2014). Interactive Visualization for Group Decision-Making In Water Infrastructure Planning. In IWA Kathmandu Conference. Bautista, J., & Carenini, G. (2006). An integrated task-based framework for the design and evaluation of visualizations to support preferential choice. In Proceedings of the working conference on Advanced visual interfaces - AVI ’06 (p. 217). New York, New York, USA: ACM Press. http://doi.org/10.1145/1133265.1133308 Bell, D. E., Raiffa, H., & Tversky, A. (1988). Decision making: Descriptive, normative, and prescriptive interactions. Cambridge University Press. Belton, V., & Stewart, T. J. (2002). Multiple criteria decision analysis: an integrated approach. Springer. Carenini, G., & Loyd, J. (2004). ValueCharts. In Proceedings of the working conference on Advanced visual interfaces - AVI ’04 (pp. 150–157). New York, New York, USA: ACM Press. http://doi.org/10.1145/989863.989885 62 Chamberlain, B. C., Carenini, G., Oberg, G., Poole, D., & Taheri, H. (2014). A Decision Support System for the Design and Evaluation of Sustainable Wastewater Solutions. IEEE Transactions on Computers, 63(1), 129–141. http://doi.org/10.1109/TC.2013.140 Conati, C., Carenini, G., Hoque, E., Steichen, B., & Toker, D. (2014). 
Evaluating the Impact of User Characteristics and Different Layouts on an Interactive Visualization for Decision Making. Computer Graphics Forum, 33(3), 371–380. http://doi.org/10.1111/cgf.12393 Dalal, S., Khodyakov, D., Srinivasan, R., Straus, S., & Adams, J. (2011). ExpertLens: A system for eliciting opinions from a large pool of non-collocated experts with diverse knowledge. Technological Forecasting and Social Change, 78(8), 1426–1444. http://doi.org/10.1016/j.techfore.2011.03.021 Dinesh, N., & Dandy, G. (2003). A decision support system for municipal wastewater reclamation and reuse. Water Supply, 3(3), 1–8. Eppler, M. J., & Mengis, J. (2004). The Concept of Information Overload: A Review of Literature from Organization Science, Accounting, Marketing, MIS, and Related Disciplines. The Information Society, 20(5), 325–344. http://doi.org/10.1080/01972240490507974 Finney, B. A., Gearheart, R. A., Salverson, A., & Zhou, G. (2009). WAWTTAR: A planning tool for selecting wastewater treatment technologies. Water Environment & Technology, (10), 51–54. Fischer, D. (2003). Multi-criteria analysis of ranking preferences on residential traits. In 10th ERES conference. Fülöp, J. (2001). Introduction to Decision Making Methods, 1–15. Gratzl, S., Lex, A., Gehlenborg, N., Pfister, H., & Streit, M. (2013). LineUp: visual analysis of multi-attribute rankings. IEEE Transactions on Visualization and Computer Graphics, 19(12), 2277–86. http://doi.org/10.1109/TVCG.2013.173 Gregory, R., Failing, L., Harstone, M., Long, G., McDaniels, T., & Ohlson, D. (2012). Structured Decision Making: A Practical Guide to Environmental Management Choices. Wiley-Blackwell. Guest, J. S., Skerlos, S. J., Barnard, J. L., Beck, M. B., Daigger, G. T., Hilger, H., … Love, N. G. (2009). A New Planning and Design Paradigm to Achieve Sustainable Resource 63 Recovery from Wastewater 1. Environmental Science & Technology, 43(16), 6126–6130. http://doi.org/10.1021/es9010515 Hamouda, M., Anderson, W., & Huck, P. (2009). Decision support systems in water and wastewater treatment process selection and design: a review. Heer, J., & Agrawala, M. (2008). Design considerations for collaborative visual analytics. Information Visualization, 7(1), 49–62. http://doi.org/10.1057/palgrave.ivs.9500167 Hidalgo, D., Irusta, R., Martinez, L., Fatta, D., & Papadopoulos, A. (2007). Development of a multi-function software decision support tool for the promotion of the safe reuse of treated urban wastewater. Desalination, 215(1), 90–103. Huang, I. B., Keisler, J., & Linkov, I. (2011). Multi-criteria decision analysis in environmental sciences: Ten years of applications and trends. Science of The Total Environment, 409(19), 3578–3594. http://doi.org/10.1016/j.scitotenv.2011.06.022 Joksimovic, D., Kubik, J., Hlavinek, P., Savic, D., & Walters, G. (2006). Development of an integrated simulation model for treatment and distribution of reclaimed water. Desalination, 188(1), 9–20. Kiker, G. a, Bridges, T. S., Varghese, A., Seager, T. P., & Linkov, I. (2005). Application of Multicriteria Decision Analysis in Environmental Decision Making. Integrated Environmental Assessment and Management, 1(2), 95–108. http://doi.org/10.1897/IEAM_2004a-015.1 Kilgour, D., & Eden, C. (2010). Handbook of Group Decision and Negotiation. (D. M. Kilgour & C. Eden, Eds.)Zhurnal Eksperimental’noi i Teoreticheskoi Fiziki (Vol. 4). Dordrecht: Springer Netherlands. http://doi.org/10.1007/978-90-481-9097-3 Korhonen, P. (2005). Interactive Methods. 
In Multiple Criteria Decision Analysis: State of the Art Surveys SE - 16 (Vol. 78, pp. 641–661). Springer New York. http://doi.org/10.1007/0-387-23081-5_16 Loetscher, T. (2002). A decision support system for selecting sanitation systems in developing countries. SocioEconomic Planning Sciences, 36(4), 267–290. http://doi.org/10.1016/S0038-0121(02)00007-1 Matthies, M., Giupponi, C., & Ostendorf, B. (2007). Environmental decision support systems: Current issues, methods and tools. Environmental Modelling & Software, 22(2), 123–127. 64 Mendoza, G. A., & Martins, H. (2006). Multi-criteria decision analysis in natural resource management: A critical review of methods and new modelling paradigms. Forest Ecology and Management, 230(1-3), 1–22. http://doi.org/10.1016/j.foreco.2006.03.023 Mendoza, G. a., & Martins, H. (2006). Multi-criteria decision analysis in natural resource management: A critical review of methods and new modelling paradigms. Forest Ecology and Management, 230(1-3), 1–22. http://doi.org/10.1016/j.foreco.2006.03.023 Montis, A. De, Toro, P. De, Droste-franke, B., Omann, I., & Stagl, S. (2004). Assessing the quality of different MCDA methods. In Alternatives for environmental valuation (pp. 99–184). Routledge; New Ed edition. Munzner, T. (2014). Visualization Analysis and Design. A K Peters/CRC Press. Nielsen, J. (1994). Usability inspection methods. In Conference companion on Human factors in computing systems (pp. 413–414). Nunamaker, J. F., & Deokar, A. V. (2008). GDSS Parameters and Benefits. In Handbook on Decision Support Systems. Springer Berlin Heidelberg. http://doi.org/10.1007/978-3-540-48713-5_17 Palomares, I., & Martínez, L. (2014). Low-dimensional Visualization of Experts’ Preferences in Urgent Group Decision Making under Uncertainty1. Procedia Computer Science, 29, 2090–2101. http://doi.org/10.1016/j.procs.2014.05.193 Raiffa, H., & Keeney, R. (1976). Decisions with multiple objectives: Preferences and value tradeoffs. Decisions with Multiple Objectives: Preferences and Value Tradeoffs. Randall, D., & Rouncefield, M. (2014). Ethnography. In: Soegaard, Mads and Dam, Rikke Friis (eds.). In The Encyclopedia of Human-Computer Interaction (2nd ed.). The Interaction Design Foundation. Retrieved from https://www.interaction-design.org/encyclopedia/ethnography.html Riabacke, M. (2012). A Prescriptive Approach to Eliciting Decision Information. Russell, S. J., Norvig, P., & Davis, E. (2010). Artificial intelligence: a modern approach (Vol. 2). Prentice hall Englewood Cliffs. Salo, A., & Hämäläinen, R. (2010). Multicriteria Decision Analysis in Group Decision Processes. In D. M. Kilgour & C. Eden (Eds.), Handbook of Group Decision and 65 Negotiation SE - 16 (Vol. 4, pp. 269–283). Springer Netherlands. http://doi.org/10.1007/978-90-481-9097-3_16 Schilling, M. S., Oeser, N., & Schaub, C. (2007). How effective are decision analyses? Assessing decision process and group alignment effects. Decision Analysis, 4(4), 227–242. Scholz, R., & Tietje, O. (2001). Embedded case study methods: Integrating quantitative and qualitative knowledge. SAGE. Retrieved from http://books.google.com/books?hl=en&lr=&id=mcvpbO3PLcwC&oi=fnd&pg=PR5&dq=Embedded+case+study+methods:+Integrating+quantitative+and+qualitative+knowledge&ots=ToRROdJM9h&sig=Hvqb6DITRFYB_TvEQ_dsseZ6WFo Scholz, R. W. (2011). Transdisciplinarity for environmental literacy. In Environmental Literacy in Science and Society: From Knowledge to Decisions (pp. 373–404). Cambridge University Press. Scholz, R. W., Lang, D. 
J., Wiek, A., Walter, A. I., & Stauffacher, M. (2006). Transdisciplinary case studies as a means of sustainability learning: Historical framework and theory. International Journal of Sustainability in Higher Education, 7(3), 226–251. http://doi.org/10.1108/14676370610677829 Shih, H.-S., Wang, C.-H., & Lee, E. S. (2004). A multiattribute GDSS for aiding problem-solving. Mathematical and Computer Modelling, 39(11-12), 1397–1412. http://doi.org/10.1016/j.mcm.2004.06.014 Sojda, R. S., Chen, S. H., Sawah, S. El, J. HA, A. J. G., Lautenbach, S., & McIntosh, B. S. (2012). Identifying the decision to be supported: a review of papers from Environmental Modelling and Software. International Congress on Environmental Modelling and Software. Retrieved from http://www.iemss.org/iemss2012/proceedings/A1_0987_Sojda_et_al.pdf Speier, C. (2006). The influence of information presentation formats on complex task decision-making performance. International Journal of Human-Computer Studies, 64(11), 1115–1131. http://doi.org/10.1016/j.ijhcs.2006.06.007 Stone, D., Jarrett, C., Woodroffe, M., & Minocha, S. (2005). User interface design and evaluation. Morgan Kaufmann. Thomas, J., & Cook, K. (2005). Illuminating the path: The research and development agenda for visual analytics. IEEE-Press. Retrieved from 66 http://scholar.google.com/scholar?hl=en&btnG=Search&q=intitle:Illuminating+the+Path:+The+Research+and+Development+Agenda+for+Visual+Analytics#0 Trutnevyte, E., Stauffacher, M., & Scholz, R. W. (2011). Supporting energy initiatives in small communities by linking visions with energy scenarios and multi-criteria assessment. Energy Policy, 39(12), 7884–7895. http://doi.org/10.1016/j.enpol.2011.09.038 Trutnevyte, E., Stauffacher, M., & Scholz, R. W. (2012). Linking stakeholder visions with resource allocation scenarios and multi-criteria assessment. European Journal of Operational Research, 219(3), 762–772. http://doi.org/10.1016/j.ejor.2012.01.009 Vogel, D., & Coombes, J. (2010). The Effect Of Structure On Convergence Activities Using Group Support Systems. In D. M. Kilgour & C. Eden (Eds.), Handbook of Group Decision and Negotiation SE - 18 (Vol. 4, pp. 301–311). Springer Netherlands. http://doi.org/10.1007/978-90-481-9097-3_18 Yi, J. S. (2008). Visualized Decision Making: Development and Application of Information Visualization Techniques to Improve Decision Quality of Nursing Home Choice. Georgia Institute of Technology. Yi, J. S., Kang, Y. A., Stasko, J., & Jacko, J. (2007). Toward a deeper understanding of the role of interaction in information visualization. IEEE Transactions on Visualization and Computer Graphics, 13(6), 1224–31. http://doi.org/10.1109/TVCG.2007.70515 Zhu, B., & Chen, H. (2008). Information Visualization for Decision Support. In Handbook on Decision Support Systems (pp. 699–722). http://doi.org/10.1007/978-3-540-48716-6_32 67 Appendices Appendix A: Multi-Attribute Utility Theory (MAUT) ValueCharts has been developed based on Multi-Attribute Utility Theory (MAUT) (Raiffa & Keeney, 1976). MAUT is a popular approach in environmental and infrastructure planning projects (Huang et al., 2011). Introduction to the main concepts used in MAUT In the following, main concepts used in MAUT are briefly introduced. Alternatives are what the decision makers are comparing to choose between. Criteria are used to compare and evaluate the alternatives. These are sometimes called objectives or attributes. The criteria can be hierarchically organized. Let C be the set of all lowest-level criteria. 
A user is a person who has preferences and compares the alternatives in order to choose between them.

For each criterion, each alternative has an outcome. All outcomes of one criterion are measured in one unit (e.g. kg, days, $/year). The outcome is meant to be objective, or at least something whose truth can be argued about. The outcome does not depend on a user's preference; examples are the cost of a design (dollars) or the amount of energy a design saves (kWh/year). We use the notation o(a, c) to denote the outcome of alternative a for criterion c.

A score or utility represents a user's preference for a particular outcome. For each criterion and each user, there is a score (utility) function defined over all outcomes. We use the notation s_u(o) for user u's score of outcome o of the criterion under consideration. In score functions, the best outcome for the user has a score of 1 and the worst outcome has a score of 0; the scores of other outcomes are scaled between these:

s_u(o_worst) = 0 ≤ s_u(o_other) ≤ 1 = s_u(o_best)

A weight specifies the real-valued importance that a user gives to a criterion. w_u(c) is the weight of criterion c for user u. All weights given by a user u sum to 1, that is, ∑_{c∈C} w_u(c) = 1. These weights represent the relative importance of the criteria for that user.

The total score (often called the multi-attribute utility function) for a user u and alternative a, written S_u(a), is a measure of how desirable alternative a is for user u:

S_u(a) = ∑_{c∈C} w_u(c) · s_u(o(a, c))

In the hierarchical structuring of the criteria, higher levels receive aggregated values from lower levels using additive linear models, assuming that all criteria are additively independent. MAUT is based on an additive independence model (Raiffa & Keeney, 1976). The additive independence assumption implies that a user's preference over a criterion is preferentially independent of the remaining criteria. In addition, score functions should be developed based on rational preferences. Russell provides six rules/constraints that scores should obey in order to be rational (Russell, Norvig, & Davis, 2010). Two of the main constraints are introduced below:

Appendix B: Booklet prepared for the first meeting of Orchard Commons

Title: Using ValueCharts as an interactive decision support tool

The aim of this project is to make it possible for a group to visualize how individual preferences, assumptions and predictions impact the outcome when choosing between a given set of solutions. The intention is to identify 'sticking points', differences that actually affect the outcome. We wish to develop an interactive platform using feedback from UBC staff who are involved in the planning of water and sewage infrastructure. Using real and imagined scenarios, we will explore ways to adapt the interactive software to visualize a 'group function', to help identify to what extent differences among participants might impact the outcome. When multi-criteria decision analysis (MCDA) is used to guide decision processes, participants are commonly asked to rate and weight objectives (e.g. disturbance to neighbours vs. cost). How well each alternative performs in relation to each objective is normally pre-determined by experts (e.g. how much disturbance alternatives A, B and C will cause). Our observations suggest that the fact that the determinations by the experts are 'hidden' makes participants suspicious. The reasoning for the experts' assessment is contained within the 'utility functions'.
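To make the interplay between weights and utility functions concrete, the following is a minimal sketch of the total-score computation defined in Appendix A, using hypothetical scores and weights rather than the booklet's actual data; it is essentially the calculation that ValueCharts re-runs as a user adjusts the weighting of objectives.

```python
# Minimal sketch (hypothetical numbers) of the Appendix A total score
# S_u(a) = sum over criteria c of w_u(c) * s_u(o(a, c)),
# and of how changing the weights can change the ranking.
criteria = ["reuse potential", "risk", "cost"]

# Score functions s_u(o(a, c)) already applied to each alternative's outcomes
scores = {
    "Alternative 1": {"reuse potential": 0.0, "risk": 0.9, "cost": 1.0},
    "Alternative 3": {"reuse potential": 1.0, "risk": 0.4, "cost": 0.3},
}

def rank(weights):
    """Rank alternatives by their additive total score under the given weights."""
    totals = {a: sum(weights[c] * s[c] for c in criteria) for a, s in scores.items()}
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

print(rank({"reuse potential": 0.2, "risk": 0.3, "cost": 0.5}))  # cost-focused user
print(rank({"reuse potential": 0.6, "risk": 0.2, "cost": 0.2}))  # reuse-focused user
```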
We will use a simplified version of ValueCharts to allow users to explore:
1. How changing the weighting of objectives impacts the outcome.
2. How 'tweaking' the utility functions impacts the outcome.

Case study: Run-off management at the Orchard Commons

The Orchard Commons is a 13,637 m2 site located at the northeast corner of West Mall and Agronomy Road, behind the McMillan building.

Alternatives – overly simplified descriptions

Alternative 1: Site unchanged
The area is presently covered with partially cracked pavement. The impervious surface area is estimated to be 80%-85%. We assume that close to 100% of the storm water leaves the site as surface run-off. The water is directed to the existing storm water sewer system, which drains to a creek that runs along the Trail 7 access to Wreck Beach. A 200-year, 24-hour precipitation event will generate approximately 15 cubic meters of storm water.

Alternative 2: Vantage College. Conventional run-off management
It is proposed that Vantage College is built on Orchard Commons. We assume that 5430 m2 of the site will be covered by the building (39.6% of the site), that one third of the unbuilt area will be landscaped with 100% infiltration, and that this would give a 20% reduction in storm water flow. As in Alternative 1, all runoff from buildings and impervious surfaces will be directed to the existing storm water sewer system.

Alternative 3: Vantage College. Best practice on-site run-off management
The size of the building is assumed to be identical to Alternative 2, but the storm water is managed according to best practice principles, thus going beyond UBC's present requirements. We assume that this will result in a system with no runoff (aside from 200-year outlier storm events). This alternative will use detention (e.g. rainwater harvesting with water reuse for irrigation and toilet flushing) and retention (e.g. green roof, other vegetation, retention tanks). The overflow during rare heavy rainfall events will be directed to UBC's storm water sewer system.

Alternative 4: Direct run-off to Sustainability Street
Whether or not Vantage College is built, all storm water is directed to Sustainability Street, where it will be used for irrigation of horticulture, landscaping and educational demonstration projects. Sustainability Street is located at a higher elevation than Orchard Commons. This alternative requires construction of around 250 meters of piping and requires pumping, unless Vantage College is built and water from the roofs is led to Sustainability Street via an aqueduct. It is assumed that no water will be directed to the existing sewer system.

Objectives (Evaluation Criteria) – also overly simplified

We have deliberately chosen objectives and utility functions that we believe will help us develop the software.

Reuse Potential
The reuse potential is the ability to utilize precipitation for irrigation, toilet flushing, and potentially other purposes. This is directly related to UBC's sustainability goal of decreasing potable water use. The criterion is measured as the average volume (m3) of water collected per month that is available for reuse purposes. Possible outcomes vary from 0 to 100 m3 per month.

Cliff Erosion
Cliff erosion is one of UBC's explicit concerns for storm water management. Possible outcomes are: no erosion, medium erosion, and large erosion.

Innovative Solutions
The use and consideration of innovative solutions are explicitly among UBC's sustainability goals.
Possible outcomes are: not innovative, medium level of innovation, high level of innovation.

Risk
Risk is one of the criteria used by UBC Operations when prioritizing among projects. The assessment is determined by multiplying two sub-components (likelihood and consequence), each with values ranging from 1 to 5. The possible outcomes range from 0 to 25. The alternatives considered here range from 2.0 (lowest estimated risk) to 12.0 (highest estimated risk).

Disruption to stakeholders
Minimizing disruption is another criterion used by UBC Operations when prioritizing among projects and may include issues such as noise, smell, mobility disruption and utility disruption. In our case, there are three possible levels of disruption: not disruptive, a bit disruptive, and very disruptive.

Potential Domino Effects
The potential for negative domino effects is a third criterion used by UBC Operations when prioritizing among projects. This criterion is estimated based on the number of other projects that might need to be undertaken as a consequence of the chosen alternative. The possible outcomes are here set between 4 (lowest estimated) and 20 (highest estimated).

Appendix C: Booklet prepared for the second meeting of Orchard Commons

For the second meeting, we prepared a one-page document in which we described each alternative, along with an image showing how each alternative would look after construction.

Appendix D: Survey questions for Orchard Commons

First survey, given at the beginning of the meeting:
1. How often do you make collaborative decisions at work?
2. Are you familiar with decision analysis techniques (e.g. AHP, MAUT, and Outranking)?
3. How often do you use decision analysis techniques in collaborative decision making at work?
4. If you have used decision analysis techniques in collaborative decision making, please list them.
5. Are you familiar with visualization tools like bar charts, scatter plots, and heat maps?
6. How often do you use visualization tools to make collaborative decisions at work?
7. If you have used visualization tools in collaborative decision making, please list them.

Two similar surveys given to participants after working with the Average and Detailed views of Group ValueCharts:
1. I believe that (Average/Detailed) Group ValueCharts helps make our discussions more participatory. (1 = Strongly Disagree, 5 = Strongly Agree)
2. Please rate the tool's potential to improve group interaction. (1 = Worse group interaction, 5 = Better group interaction)
3. Please rate the tool's potential to improve information exchange among participants. (1 = Less exchange of information, 5 = More exchange of information)
4. The tool helps identify agreements and disagreements. (1 = Strongly Disagree, 5 = Strongly Agree)
5. The tool helps make informed decisions based on everyone's preferences. (1 = Strongly Disagree, 5 = Strongly Agree)
6. I would be happy if the alternative with the highest average score was chosen. (1 = Strongly Disagree, 5 = Strongly Agree)
7. I would like to use (Average/Detailed) Group ValueCharts for collaborative decision making at work in the future. (1 = Strongly Disagree, 5 = Strongly Agree)