UBC Theses and Dissertations

Interactive Visualization for Group Decision-Making, by Sanjana Bajracharya (2014)

Interactive Visualization for Group Decision-Making

by Sanjana Bajracharya
B.E., Institute of Engineering, Tribhuvan University, 2010

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF Master of Science in THE FACULTY OF GRADUATE AND POSTDOCTORAL STUDIES (Computer Science)

The University of British Columbia (Vancouver)
August 2014
© Sanjana Bajracharya, 2014

Abstract

In infrastructure planning, identifying 'the best solution' out of a given set of alternatives is a context-dependent, multi-dimensional, multi-stakeholder challenge in which competing criteria must be identified and trade-offs made. In a recent study, colleagues from the Institute of Resources, Environment and Sustainability (IRES) found that there is a need for a visualization tool that enables planners and decision makers to collectively explore the individual preferences of those involved in the decision. This thesis concerns designing and evaluating an interactive visualization tool that facilitates group decisions by making the problem analysis more participatory, transparent, and comprehensible. To do so, we extend the interactive visualization tool ValueCharts to create Group ValueCharts. We conducted studies with two different groups to evaluate the effectiveness of Group ValueCharts in group decision-making. The first group was university staff in leading positions in different departments, presently engaged in and responsible for water infrastructure planning. The second group was employees of an analytics company who are involved in buying scientific software licenses. Each group was instructed on how to use the tool in application to their current decision problem. The discussions were audio recorded and the participants were surveyed to evaluate usability. The results indicate that participants felt the tool improved group interaction and information exchange, and made the discussion more participatory.
Additionally, the participants strongly concurred that the tool reveals disagreements and agreements within the group. These results suggest that Group ValueCharts has the ability to enhance transparency and comprehensibility in group decision-making.

Preface

Dr. David Poole, my academic supervisor, introduced me to ongoing research in the Informed Decisions: Exploring Alternatives for Sustainability (IDEAS) research group: a need to visualize individual preferences in group decision-making. As the main contributor, I proposed the initial visualization ideas (Section 3.1) for the research problem in close consultation with Dr. Poole and Dr. Giuseppe Carenini. I was responsible for designing and building the tool, Group ValueCharts, conducting the user studies, and designing the surveys for evaluating the tool. In my work I used parts of the source code from previous work on ValueCharts, originally developed by Carenini and Lloyd [2].

Dr. Gunilla Öberg, my co-supervisor, put forward the stormwater management at Orchard Commons problem, which was a perfect fit to assess our tool. Mr. Hamed Taheri was responsible for gathering all the information required to well-define the Orchard Commons decision problem. Mr. Taheri, along with colleagues from IRES (Ms. Ghazal Ebrahimi and Mr. Daniel Klein) and in close consultation with Dr. Öberg, identified criteria and developed alternatives and their outcomes (Appendix A). Mr. Taheri wrote the first draft of the discussion on the effectiveness of the tool in terms of group participation (Section 5.1).

Ms. Kai Di Chen contributed to making the ValueCharts tool robust and added some new features to it, such as logging user interactions and a comment box. She helped implement some features in Group ValueCharts: the disagreement heat map (Figure 3.6) and the box plot for score function statistics (Figure 3.7). She also helped in conducting the user studies, transcribing the audio recordings of the discussions from the studies, and evaluating the tool.
She wrote the first draft of the discussion on the effectiveness of the tool in terms of transparency (Section 5.2).

Mr. Joel Ferstay and colleagues from AeroInfo Systems, a Boeing Company, generated criteria and alternatives for the buying software licenses decision problem (Appendix B).

Mr. Taheri and I carried out the user studies along with Dr. Öberg as the principal investigator. The UBC Behavioral Research Ethics Board (BREB) certificate of approval number is H12-03317.

Table of Contents

Abstract
Preface
Table of Contents
List of Tables
List of Figures
Acknowledgments
Dedication
1 Introduction
2 Background
  2.1 Structured decision-making
  2.2 Multi-attribute utility theory
  2.3 Simple multi-attribute rating technique exploiting ranks
  2.4 ValueCharts
  2.5 Participatory observation at workshops on environmental decision problems
  2.6 ValueCharts experiments
  2.7 Limitations of ValueCharts
3 Group ValueCharts
  3.1 Ideas
  3.2 Assumptions
  3.3 Method
    3.3.1 Design
    3.3.2 Multi-attribute trade-off method
  3.4 User studies
    3.4.1 Stormwater management at Orchard Commons
    3.4.2 Buying software licenses at a software analytics company
4 Results
  4.1 Results from stormwater management problem
  4.2 Results from buying software licenses problem
  4.3 Survey results
5 Discussion
  5.1 Level of participation
  5.2 Transparency
  5.3 Comprehensibility
6 Conclusion
  6.1 Summary
  6.2 Future work
Bibliography
A Stormwater Management at Orchard Commons
  A.1 Criteria
  A.2 Alternatives
B Buying Software Licenses at a Software Analytics Company
  B.1 Criteria
  B.2 Alternatives

List of Tables

Table 2.1: Worst and best values for all criteria for the Orchard Commons problem for an imaginary user
Table 2.2: Weights assigned by SMARTER for number of criteria = 6
Table 3.1: Steps in a generalized multi-attribute trade-off method [11]
Table 4.1: A summary of survey results from two studies investigating the perceived usefulness of Group ValueCharts. The survey is based on the framework developed by Schilling et al. [25]

List of Figures

Figure 2.1: ValueCharts for a stormwater management system
Figure 2.2: Example score functions, showing positive and negative sloping continuous and discrete functions
Figure 2.3: Audience responses to one of the questions asked in the form [28]
Figure 2.4: Color-coded performance table presented in the environmental decision-making workshops [28]
Figure 3.1: Initial four designs to visualize individual preferences in a group decision process (representation is for one criterion, one alternative, and three users)
Figure 3.2: A fictive decision problem, 'choosing a hotel', used to introduce participants to ValueCharts in the first round of the study
Figure 3.3: An aggregated view in Group ValueCharts showing individual preference models for a group of participants for the 'choice of hotel' example
Figure 3.4: Two different values represented in Group ValueCharts: weight (top) and weighted score (bottom) for the criterion 'rate'
Figure 3.5: The values for hotel 'Sheraton' for 'User 1' (top) and 'User 3' (bottom) in individual ValueCharts (left) and Group ValueCharts (right), respectively
Figure 3.6: Group ValueCharts: average view showing disagreement heat map with respect to score
Figure 3.7: Box plot for score function distribution for criterion 'negative/positive domino effects' showing extreme users
Figure 3.8: Group ValueCharts: detailed view
Figure A.1: Different options for criterion 'community engagement'

Acknowledgments

First of all, I thank my research supervisor, Dr. David Poole, for his continuous support and guidance. This work would not have been possible without his invaluable supervision and encouragement. I would also like to express my appreciation to my co-supervisors, Dr. Gunilla Öberg and Dr. Giuseppe Carenini, for their counsel and active involvement in my work.

I thank Dr. Jiaying Zhao for being the second examiner for my thesis. I am grateful to all the colleagues who were involved in my thesis, in particular Mr. Hamed Taheri and Ms. Kai Di Chen.

Finally, I am extremely grateful to my parents, my brother, and my sister, who always believed in me and were there for me.

Dedication

I dedicate my work to my parents for their endless love and support.

Chapter 1: Introduction

In group decision-making, also called collaborative decision-making, the decision makers have to jointly choose from a given set of alternatives. There are many ways for a group to make a decision. One way is to generate ideas and hold discussions within the group, with the final decision made by an authorized person. Another is to go with the decision that receives the most votes from the group. With these methods, however, the group members might feel that they did not have an equal opportunity to influence the decision. This can cause dissent within the group, and not everyone in the group will be equally dedicated to implementing the decision made. Thus, one may argue that the best approach to group decision-making is to discuss and negotiate until everyone understands and agrees with the decision.
Also, having everyone chip in their opinions reduces the risk of overlooking central risks and challenges and increases the chance of a more robust decision. Each group member has his or her own preferences. A consensus decision is one that everyone, despite their differences in preferences, agrees to accept, or that at least has everyone's consent. It integrates all opinions without ignoring any.

Decision makers involved in infrastructure planning struggle to identify the 'best solution' for their specific context, and they generally settle on the traditional solution because they do not have sufficient resources to carry out an integrated analysis. For example, in water and wastewater system decisions, traditional engineering factors like function, safety, and cost-benefit analysis dominate. Other important factors, like public engagement and social issues, are often ignored or given limited attention [12]. Structured Decision Making (SDM) and Multi-Criteria Decision Analysis (MCDA) are increasingly used to facilitate decision-making, particularly in the municipal and environmental domains [11, 15, 21]. The fundamental steps in SDM are:

1. Clarify the decision context by establishing the scope and bounds for the decision being addressed.
2. Identify the criteria that will be used to generate and evaluate the alternatives.
3. Generate alternatives that are going to be compared in the decision-making. Alternatives are different approaches to solving the decision problem.
4. Estimate the consequences of implementing the alternatives on the identified criteria.
5. Compare the alternatives and select the one that achieves an acceptable balance across the different criteria.
6.
Implement and monitor the alternatives.

It has been shown that using MCDA in group decision-making can facilitate communication, learning, and consensus building across multiple stakeholders [6, 24, 27].

A recent study suggests that there is a need for interactive and dynamic visualization techniques that support group decision-making by encouraging innovative brainstorming and facilitating the identification of sustainable solutions [28]. The study also suggests that there is a need for a visualization tool that makes it possible to collectively explore individual preferences. Such a tool could, for example, help the group focus discussion on the differences in opinions and preferences that matter for the final ranking of alternatives, and avoid getting stuck on differences that have no impact on the outcome.

We use an interactive visualization tool, ValueCharts [2], that utilizes multi-attribute utility theory (MAUT) [16] to support preferential choice. ValueCharts supports individual preference construction and visualization. However, it does not support visualization of group preferences. In this thesis, we extend ValueCharts to visualize the preferences of the whole group in a single visualization. We call the extended version Group ValueCharts. We apply this tool to two group decision problems: (1) stormwater infrastructure planning and (2) a scientific software licensing problem. We investigate the capability of Group ValueCharts to help a group hold fruitful discussions by identifying agreements and disagreements. We try to identify which features of Group ValueCharts help make decision problem analysis more participatory, transparent, and comprehensible.

In Chapter 2, we provide a brief background on the underlying theories: structured decision-making, multi-attribute utility theory, and SMARTER. It also gives an overview of ValueCharts and describes some experiments done with ValueCharts.
In Chapter 3, we introduce Group ValueCharts and describe its design and features in detail. We also describe the methods we used in the studies we conducted. In Chapter 4, we present the results of the two studies that we conducted to assess our tool, and we discuss them in Chapter 5. Finally, in Chapter 6, we conclude and discuss possible future work and research.

Chapter 2: Background

2.1 Structured decision-making

Structured Decision-Making (SDM) is an organized, inclusive, and transparent approach to understanding complex multi-criteria problems and to generating and evaluating alternatives [11]. Criteria are used to judge the alternatives; they are sometimes called objectives or attributes. Alternatives are what the decision makers are choosing between. SDM is based on the fact that making good decisions requires a clear understanding of what is important and what the consequences are if an alternative is implemented. It is designed for multi-criteria decision problems characterized by diverse stakeholders and difficult trade-offs. Thus, it emphasizes decision structuring for consistency, and transparency for controversies in the group. SDM follows a prescriptive approach to help individuals and groups make better decisions. There are six fundamental steps in SDM that can be applied to many decision problems. Below, we explain the steps in terms of a 'water infrastructure planning problem', one of the decision problems currently being made at UBC.

1. Define the decision context

There is a need for stormwater management at a site called Orchard Commons at the University of British Columbia (UBC), which is currently used as a car park. Orchard Commons is proposed to be the physical home for the
Orchard Commons is proposed to be the physical home for the4UBC Vantage College, which is anticipated to be developed as a mixed-useacademic/ student housing hub.2 Identify the criteria by which the alternatives will be judgedFrom the input from various stakeholders involved in the stormwater man-agement project and design brief, the criteria were divided into two fun-damental groups: sustainability and operations. Under sustainability, therewere criteria like water conservation, reduce energy consumption, commu-nity engagement. Under operations comes reduce risk, reduce disruption tostakeholders, negative/positive domino effects.3 Generate alternatives to compareAfter the criteria were identified, four alternatives were developed. The firstalternative was to direct all stormwater to the sewers as in older buildings oncampus. The second alternative was to meet but not exceed UBC require-ments for onsite stormwater management, the third one assumed that bestsustainability practices for onsite stormwater management were applied andthe fourth one assumed to retain or detain half of the stormwater onsite.4 Estimate consequences of each alternativeThe consequences, also called outcomes, of the alternatives on the crite-ria were estimated with input from experts. For example, for criteria ‘Re-duce energy consumption’, the energy needed in the operational phase wasestimated in consultation with operation engineers. Alternative 4 requiredthe most energy of 26500 kWh/year, alternative 3 required 12500 kWh/yearwhereas alternatives 1 and 2 did not need any energy as the water was trans-ported by gravity.5 Compare and select alternativesFrom the available information from step 4, the users chose a preferred alter-native that achieves an acceptable balance across multiple criteria. The usersperformed a trade-off analysis to do so. 
For example, alternative 2 outperformed the other alternatives in terms of the risk of downstream flooding, whereas alternative 4 caused the least pressure on the sewer system. The choice depends on personal preferences, and in this step of SDM users identify the key trade-offs to provide a plausible decision-making process. Often in this step, multi-attribute trade-off methods (Section 2.2) are used to quantify user preferences for clarity and transparency in decision-making.

6. Implement and monitor alternatives

SDM facilitates learning about both outcomes and preferences over time. SDM emphasizes using new information to review the decision and actions based on learning.

2.2 Multi-attribute utility theory

In multi-criteria decision analysis (MCDA), where multiple criteria are explicitly considered in decision-making problems, it is necessary to take each decision maker's preferences into account to compare different alternatives. Different individuals have different preferences, opinions, and judgments, which makes decision-making require subjective judgment. Decision makers try to reach an alternative by trading off each criterion against the others. There are many techniques for performing MCDA. Multi-attribute utility theory (MAUT) is one such technique; it provides a framework for the quantitative representation of the subjective preferences of a decision maker [16]. MAUT has been successfully used in several environmental decision-making problems [17, 21, 26]. Additionally, there has been work on developing visual analytical tools to help apply these methods in practical decision support [1–4, 10, 22].

We assume the following components of a multi-criteria decision problem:

Alternatives, as mentioned in Section 2.1, are what the decision makers are choosing between.
In the example of stormwater management mentioned earlier, the alternatives are the four different designs for what to do with the stormwater in a building to be constructed at Orchard Commons.

Criteria are used to judge the alternatives. These are sometimes called objectives or attributes. We allow the criteria to be hierarchically organized. For example, the higher-level criteria for stormwater management were sustainability and operations. Sustainability was further divided into low-level criteria like water conservation, energy use, risk, etc. Let C be the set of all lowest-level criteria.

User is a person whose preferences are elicited. When a tool is used in group decision-making, the users are also called participants, as the most pertinent information is that the users are participating in a group decision. It is the preferences of multiple users that we want to elicit and compare. These users may or may not be the actual decision makers.

Outcome of a criterion on an alternative is the measure of the value of the criterion on the alternative. It is often called the consequence of an alternative in some literature [11, 16]. The outcome is meant to be objective, or at least something about which we can argue whether it is true or not. The outcome does not depend on the user. Examples are the cost of a design (dollars) or the energy saving of an alternative (kWh/year). We use the notation o(a,c) to denote the outcome of alternative a for criterion c.

Score function specifies a real-valued measure, which we call the score, of the preference for an outcome of an alternative for a user. The score function has other names in the literature, such as utility function, value function, or preference function [13, 16]. We use the notation su(o(a,c)) for the score of criterion c on outcome o for user u. Scores are in the range [0,1]. For each criterion and user, the best outcome for the user has a score of 1 and the worst outcome has a score of 0.
Other outcomes are scaled between these.

Weight specifies a real-valued importance for a user and a criterion; wu(c) is the weight of criterion c for user u. The weights for a user u sum to 1; that is, Σc wu(c) = 1. The weights reflect the relative importance to a user of changing the outcome of an alternative from its worst to its best value.

Total score (often called multi-attribute utility or multi-attribute value) for a user u and alternative a, written Su(a), specifies a measure of how desirable alternative a is for user u. It is the sum of the products of weight and score; we call this product the weighted score. We assume an additive independence model [16] and define the total score by the following formula:

    Su(a) = Σ_{c ∈ C} wu(c) · su(o(a,c))

where wu(c) · su(o(a,c)) is the weighted score of alternative a for criterion c. The additive independence assumption implies that the scores do not depend on the outcomes for the other criteria.

In the hierarchical structuring of the criteria, higher levels receive aggregated values from lower levels using additive linear models, assuming that all criteria are additively independent. Initially, the users provide the scores for each lowest-level criterion by identifying the best and the worst outcomes and then scaling the other outcomes between these. In the Orchard Commons example, the total score decomposes hierarchically as

    Su(a) = Σ_{c ∈ Sustainability} wu(c) su(o(a,c)) + Σ_{c ∈ Operations} wu(c) su(o(a,c))

where

    Sustainability = set of sustainability criteria = {water conservation, reduce energy consumption, community engagement}
    Operations = set of operations criteria = {reduce risk, reduce disruption to stakeholders, negative/positive domino effects}

2.3 Simple multi-attribute rating technique exploiting ranks

The simple multi-attribute rating technique exploiting ranks (SMARTER) is a subjective weighting method used to initialize the criteria weights [7]. The users are asked to
The users are asked torank the criteria based on the value of the change from the worst to the best out-come of a criterion, with respect to that of the values for the remaining criteria. Forexample, they are asked to imagine the worst possible case of the decision problem(i.e. scoring worst on all criteria) and if given the opportunity to improve one crite-rion from its worst value to its best, which one would they choose. For the OrchardCommons problem, Table 2.1 gives the worst and best values for all criteria for an8imaginary user. This user ranks the criteria based on is it worth to have a systemfor which the need of sewer infrastructure investment can be considerably deferredversus does it make more sense to have a system with reduced disruption to thestakeholders by 3 units or having a less risky system by 6 units is more worthy andso on.Table 2.1: Worst and best values for all criteria for Orchard Commons prob-lem for an imaginary userWorst value Best valueNegative/positive domino effects Soon ConsiderablyDeferredReduce disruption to stakeholders 6.0 Units 3.0 UnitsReduce risk 8.0 in 25 Scale 2.0 in 25 ScaleWater conservation 0 stormwater col-lected for reuse7500 stormwatercollected for reuseReduce energy consumption 12500 kWh/year 0 kWh/yearCommunity engagement Hidden Interactive-GB in-fraThe weights are calculated such that the weight of a criterion ranked to the ith isdefined aswi =1nn∑k=i1kwhere n is the number of criteria, and ∑i∈C wi = 1 where, C is the set of all criteria.Table 2.2 gives the weights calculated from the above equation for n = 6.Table 2.2: Weights assigned by SMARTER for number of criteria = 6Rank Weights1 0.40832 0.24173 0.15834 0.10285 0.06116 0.02789SMARTER is a simple method that may only assign approximate weights tothe criteria but it produces less elicitation errors. 
Rather than finding justifiable judgments and then figuring out how to elicit them, users make the simplest possible judgments and then try to determine whether these will lead to substantially suboptimal choices. This is known as 'the strategy of heroic approximation' [7]. The information elicited by the SMARTER method is ordinal and purely based on ranking. Although ranking reduces the cognitive load on users, converting from ordinal preferences to cardinal weights might not closely reflect their actual preferences. In our case studies, the users were allowed to refine their weights for a more reliable preference model that reflects their preferences accurately [23].

2.4 ValueCharts

ValueCharts is an interactive visualization tool for comparing multiple alternatives and visualizing the trade-offs between them based on multi-attribute utility theory [1, 2]. We use the water infrastructure planning problem (i.e., the stormwater management at Orchard Commons problem used in Sections 2.1, 2.2, and 2.3) to describe ValueCharts and its features. Below is a brief introduction to ValueCharts, which shows the preferences of an imaginary user.

The decision criteria are arranged hierarchically and are represented in the bottom left quadrant of Figure 2.1. The criteria that determine the selection of a stormwater management system are decomposed at the first level of the hierarchy into operation and sustainability criteria. The height of each block indicates the relative weight assigned to the corresponding criterion; its percentage of importance (in decimal value) is also given. In Figure 2.1, the criterion Operation has 41% weight and Sustainability has 59%. Operation is further divided into three criteria, and the individual weights of the divided criteria (negative/positive domino effects: 12.4%, reduce disruption to stakeholders: 12.5%, and reduce risk: 16.1%) sum to 41%. The same goes for Sustainability. For each lowest-level criterion, the best and worst outcomes for that criterion are shown.
For example, for this user, the worst outcome for the criterion 'reduce energy consumption' is 26500 kWh/year and the best outcome is 0 kWh/year.

Figure 2.1: ValueCharts for a stormwater management system

To the right of each lowest-level criterion, the corresponding score function is displayed as a graph, with the range of criterion values on the x-axis and a worst-to-best score on the y-axis. This function expresses the preference for each value of that criterion as a number in the [0,1] interval, where 0 represents the worst outcome and 1 the best. Figure 2.2 shows different types of score functions for different criteria.

Figure 2.2: Example score functions, showing positive and negative sloping continuous and discrete functions

As shown in the bottom right quadrant of Figure 2.1, each column represents an alternative. Each alternative is labeled with a number, and the top left box gives the actual names of the alternatives. In the original design, the alternative names were printed instead of numbers; based on feedback received from participants during the pilot studies, we changed this, as the names are often long and do not fit in the frame.

Each colored cell specifies how the corresponding alternative fares with respect to each criterion. More precisely, the amount of filled color relative to the cell size depicts the alternative's weighted score ( wu(c) · su(o(a,c)) ) for the particular criterion for this user. So, for instance, alternative 1 has the least energy consumption (highest preference for the user), but it generates one of the highest levels of risk (lowest preference for the user). In the upper right quadrant all values are accumulated and presented as vertical stacked bars, displaying the aggregate score ( Su(a) ) of each alternative. In this case, alternative 3 is the best alternative, the one with the highest aggregate score.

ValueCharts has several interactive techniques, such as sensitivity analysis of criteria weights by moving the joint edges of cells. The user is able to change the weights and score functions. They can change the weights in two ways: by 'dragging' the boundaries between the lowest-level criteria, and by 'pumping' to increase or decrease a weight (keeping the other weights in proportion to each other). In pumping, the user repeatedly clicks on a criterion until it reaches the desired weight. The users can modify the score functions according to their preferences by dragging the points (continuous) or bars (discrete) in the graph.

2.5 Participatory observation at workshops on environmental decision problems

A research group from the Institute of Resources, Environment and Sustainability (IRES) at UBC observed and participated in public and closed workshops on the renewal of a Canadian sewage treatment facility [28]. During the workshops, the participants were asked to fill out a form with questions on the sewage treatment facility designs. The responses were collected on paper and were manually fed to a computer to generate bar charts (Figure 2.3) showing the level of disagreement across the participants.

Figure 2.3: Audience responses to one of the questions asked in the form [28]

Figure 2.4: Color-coded performance table presented in the environmental decision-making workshops [28]

The participants were also asked to elicit criteria weights and rank design concepts on paper. All participants' responses were aggregated by averaging over all participants in a color-coded performance table (Figure 2.4). As in ValueCharts, the leftmost column represents 5 criteria, which are further divided into 27 low-level criteria. The top row depicts 10 design concepts. Each cell specifies either the outcome or the score of an alternative with respect to a criterion. Most of the criteria outcomes were scored on a scale of 1 to 5. Worst and best scores were illustrated by color coding: dark red for the worst and dark green for the best, while orange and yellow were intermediates. Taheri et al.
[28] conclude that this type of illustration is not always helpful, because the color coding cannot be used to compare the significance of the trade-offs when comparing alternatives with respect to criteria. It is not self-evident from the table how one might use the numbers shown in it. The authors argue that because these methods were not interactive and did not facilitate analytical activities, they did not contribute effectively to informed decision-making [28].

2.6 ValueCharts experiments

In parallel, the IRES research group tested ValueCharts in semi-structured interviews with potential decision-makers for the renewal of sewage treatment facilities [28]. It was found that ValueCharts is well suited to analyzing multi-criteria decision problems because

• it is interactive,
• it has the potential to reduce information overload, and
• it helps the participants relate the different types of information that need to be processed.

This means that ValueCharts has the potential to make the decision process less frustrating and more efficient than the paper-based methods used in the workshops mentioned in Section 2.5. The interactivity of ValueCharts also allows the user to tweak and change various parameters while relating and analyzing the influence of those changes visually in real time.

2.7 Limitations of ValueCharts

ValueCharts can be used to construct and visualize individual user preferences. The study by Taheri et al. [28] suggests that it is not enough for users to visualize their individual preferences, because everybody's opinions should be taken into account in group decision-making. This gap might be filled if each user were able to see in what ways they differ from the rest of the group.
This can be done with the help of a single visualization that brings the preferences of all participants together. A visualization that can generate summary statistics such as average group scores, favorite alternatives, etc., would help identify disagreements and determine whether these disagreements need to be discussed further or can be set aside.

The observations at the workshops and the ValueCharts experiments conducted by Taheri et al. [28] suggest that an extended version of ValueCharts has the potential to support group decision-making. It could help answer questions such as the following, which are essential in group decision-making:

• On which criteria do most participants agree/disagree?
• What are the main reasons behind the disagreements?
  – weights, scores or weighted scores?
• Do these variances matter at all for the final ranking of alternatives? Can they be overlooked?

In a group decision-making context, there may be controversy and disagreement about the criteria (weights and/or scores). Although the participants may disagree on some criterion weights or scores, it might be the case that their favorite alternatives are the same, because the total scores for their favorite alternatives are the same or differ only slightly. The total score is a sum of products of weights and scores. The core hypothesis of this thesis is that if we are able to present this information to the participants, we can help them avoid endless and unnecessary discussions. Since ValueCharts has proved to be a useful tool for comparing alternatives over criteria individually, we set out to extend it so that the preferences of the users can be explored collectively.

Visualization techniques are needed for group decision-making that encourage innovative brainstorming and facilitate identification of alternatives.
Visualization techniques are also important to make the process transparent and to make it possible for the group to focus the discussions on differences in preferences rather than on direct evaluation of alternatives based on intuition.

In this thesis, we extend ValueCharts and develop a tool that allows visualization of group preferences and thereby facilitates discussion and group decision-making. We call the tool Group ValueCharts. We hypothesize that such a capability can help identify and probe differences in preferences, which is essential for building consensus (or at least consent) across participants.

Chapter 3
Group ValueCharts

3.1 Ideas

Based on the observations from the workshops and the ValueCharts experiments carried out by Taheri et al. [28], we started gathering ideas for an interactive group decision tool. The aim was to build a tool that helps groups make effective decisions, where the group members feel confident that their views were taken into account during the decision-making process. Drawing on the results of the workshop observations by the IRES research team, after several brainstorming sessions and a literature review on information visualization, we created four hand-drawn images of potential ways to visualize individual preferences in a group (Figure 3.1). They were based on basic information visualization techniques: bar charts, box plots, scatter plots and heat maps.

Idea 1: Bar charts The first idea was based on bar charts (Figure 3.1, top left). There are two bars: one representing the weight of the criterion (red box) and a second representing the weighted score of the alternative for that criterion (green box). Here, 'cost' is used as a criterion and 'tidal' is an alternative for some decision problem. There are 3 users in this example: David, Gunilla and Sanjana. All three users have different weights for this criterion. Even so, David and Sanjana still have the same weighted score for this alternative.
This means David gave a high score to this alternative's outcome, whereas Sanjana does not prefer the outcome and hence gave a lower score than David. This visualization might be helpful when few people (5-10) are involved in the decision-making. With more people, there is a risk of information clutter [9, 29].

Idea 2: Box plot This idea used two box plots (Figure 3.1, top right). The outer box plot shows the weight, i.e., how important cost is in relation to the other criteria; the weight is therefore the same for every alternative. The inner box plot shows the weighted score distribution among users (the larger the box, the larger the dispersion in opinions). A box plot summarizes a quantitative distribution with five standard statistics: the smallest value, the lower quartile, the median, the upper quartile and the largest value. In addition, the median shows how the criterion was weighted. The box plot uses the median rather than the mean because the distribution of opinions is not always normal; most of the time it is skewed, and the mean is affected by outliers while the median does not suffer from that problem. For example, if only two people disagreed, we would rather have a small box with long whiskers, which readily shows that only a few people disagree with the rest of the group. This type of visualization can be used when the number of users is large, as it takes less space [9].

Idea 3: Scatter plot The third idea uses a scatter plot (Figure 3.1, bottom left). The inclined line shows how the criterion weights vary. In the figure, David has given the lowest weight to the criterion 'cost' and Gunilla the highest; Sanjana lies somewhere in between. The green dots represent the weighted scores of the alternative for each user. As in the bar chart case, David and Sanjana have the same weighted score. In this visualization, the users are sorted according to the criterion weights they have assigned.
This visualization is helpful if the number of users is large. If clusters form in the scatter plot, they show what most people agree/disagree on [9]. For example, a big cluster towards the beginning of the inclined line means that many users give little importance to this criterion and that the outcome of this alternative is not preferred by many.

Figure 3.1: Initial four designs to visualize individual preferences in a group decision process (the representation is for one criterion, one alternative and three users)

Idea 4: Heat map This idea is inspired by heat maps1 (Figure 3.1, bottom right). Similar to the bar chart idea, the height of the bars represents the criterion weights. The color coding represents the score of the alternative for that criterion; for example, red means a low score and green a high score. If a bar is tall and green, this criterion is important for that user and the user prefers this alternative's outcome with respect to it. As with bar charts, this visualization cannot handle too many users.

These ideas were presented to our research group and to colleagues and staff involved in water and sewage infrastructure planning. Based on their feedback, we chose bar charts to visualize the values of each user, as this appeared to be the most intuitive of the four. Thus, we extend ValueCharts (which also uses bar charts) to develop the new tool, Group ValueCharts. Our aim was to make it possible for the participants in a group decision to compare multiple alternatives across multiple criteria and to identify the impact of their elicited values explicitly and transparently.

1http://en.wikipedia.org/wiki/Heat map

3.2 Assumptions

We had to make some assumptions while developing the tool. We assumed that the decision problem has been clarified, i.e., the stakeholders have been identified and the range of alternatives and criteria to be considered has been established.
Criteria to be used in the decision problem have been identified and alternatives have been developed. The outcomes of the alternatives on the criteria are estimated in consultation with experts, including scientists, engineers and local traditional knowledge holders. In evaluating the tool we developed, these steps were carried out by the IRES research team in consultation with the UBC infrastructure planning committee members for the stormwater management at Orchard Commons example.

The main goal of this thesis is to help the participants in a group decision focus on step 5 of SDM, which is to choose an alternative that achieves an acceptable balance across multiple criteria. This step involves value-based judgments about which reasonable people may disagree. We use MAUT to formally model values and express trade-offs. Thus, the goal is not just to help the decision makers reach a consensus, but to create a basis for communicating the rationale for a decision by making the trade-offs explicit and transparent, informed by a good understanding of the outcomes of the alternatives on the criteria and the consequences of implementing those alternatives.

3.3 Method

The methodology followed in developing Group ValueCharts is explained in this section.

3.3.1 Design

Group ValueCharts was developed iteratively: initial design ideas were evaluated by various users, and user feedback was incorporated to refine the designs. The first version of Group ValueCharts had an interface very similar to that of ValueCharts, with criteria and alternatives aligned the same way; the only difference was that it could represent not just a single user but multiple users in the same visualization. We tested the first prototype within our research group and fixed some glitches in both ValueCharts and Group ValueCharts. Then, we conducted two studies with the UBC infrastructure planning committee using the ongoing stormwater management project as an example.
We also conducted a study with a software analytics company to verify that the tool can be used not only for environmental management problems but in other domains as well. They had a multi-stakeholder problem of buying software licenses for their company.

3.3.2 Multi-attribute trade-off method

We followed a multi-attribute trade-off method in the studies we conducted [11]. Table 3.1 summarizes the steps of a generalized multi-attribute trade-off method. We skipped steps 2 and 4 of the method because of time constraints and because comparing the differences between direct ranking and ranking in an informed way (that is, ranking alternatives based on the outcomes of the criteria on them) was not the main focus of the study. Each study was two hours long; the participants were invited to a well-equipped room with laptops and a projector. The studies were audio recorded, and the participants' interactions with the tool were recorded with verbal consent.

Table 3.1: Steps in a generalized multi-attribute trade-off method [11]

1. The group reviews and confirms understanding of criteria, alternatives and outcomes.
2. Each participant directly ranks and scores the alternatives.
3. Each participant weights the criteria; scores and ranks are then calculated for each alternative for each participant.
4. Each participant reviews their individual results and examines inconsistencies across the two methods.
5. The group reviews aggregated results. Areas of agreement and difference among individuals are identified and discussed.
6. Each participant provides a final ranking or preferences, based on what they have learned.
7. The group clarifies key areas of agreement and disagreement and provides reasons.

In each study, the criteria and alternatives for the decision problem were briefly reviewed, assuming that the participants were familiar with the problem. The participants were then introduced to ValueCharts and guided step by step in how to use it so that they could construct their individual preference models.
The participants were first asked to determine their score function for each criterion. Once the score functions had been constructed for all criteria, they were asked to set weights for the criteria, which was done using the SMARTER method. Participants were then told that they could fine-tune the weights and score functions in ValueCharts. Once the participants had constructed their individual preference models, they were asked to save them. These models were then aggregated by the facilitator and projected in Group ValueCharts.

The facilitator explained the Group ValueCharts features. One of the participants was then asked to lead the discussion. The facilitator demonstrated different features as the discussion of disagreements unfolded in the group.

3.4 User studies

We carried out two user studies to validate the usefulness of Group ValueCharts.

3.4.1 Stormwater management at Orchard Commons

After evaluating our initial tool within our research group, we conducted two studies with the UBC infrastructure planning committee. We used the tool for an ongoing project on stormwater management at a site called Orchard Commons at UBC (described in previous sections). We conducted two rounds of study. The first round was to make the participants familiar with the tool, and the second round was to test the refined tool and collect user feedback on changes made to the tool since the first round. In the second round, we conducted surveys designed to evaluate the process effectiveness of the tool. The discussions were also audio recorded. Behavioral Research Ethics Board (BREB) approval for these studies can be seen in ??.

The surveys were based on the framework developed by Schilling et al. [25]. The framework is organized into metrics that assess process effectiveness, output effectiveness and outcome effectiveness. We focus on the process effectiveness assessment, which concerns the quality of the decision process.
Specifically, we evaluated the tool in terms of group participation, transparency and comprehensibility. The survey had scale-based questions and open-ended questions.

First round: Making the group familiar with the tool

In the first meeting, the participants were introduced to the tool using a hotel example, where the group had to choose a hotel out of a given set of hotels based on criteria such as 'rate', 'room' and 'location'. Room was further divided into 'internet-access' and 'size', whereas location was divided into 'skytrain-distance' and 'area'. The participants were guided through the different steps (as described in Section 3.3.2) to construct their individual preference models using ValueCharts. The hotel example in ValueCharts is shown in Figure 3.2.

Figure 3.2: A fictive decision problem 'choosing a hotel' used to introduce participants to ValueCharts in the first round of study

Each participant had their own preference model and hence their own favorite hotel. These models were then aggregated and projected in Group ValueCharts (Figure 3.3). The design was simple and based on idea number 1, the bar charts described in Section 3.1, where each user is identified by a color-coded legend.

Figure 3.3: An aggregated view in Group ValueCharts showing individual preference models for a group of participants for the 'choice of hotel' example

Criteria are aligned on the y-axis and alternatives on the x-axis, as in the individual ValueCharts. The red boxes (Figure 3.4, top) signify the weights given to each criterion by the participants. The red boxes are evidently the same over all the alternatives, since the importance of a criterion does not depend on the alternatives. The colored bars (Figure 3.4, bottom) represent the weighted score of that alternative for each user on the respective criterion.
The heights of these colored bars differ across alternatives and users, as a bar depends on how well the user scores the outcome of an alternative in terms of that criterion and how important the criterion is to that user. The individual colored bars per criterion are then stacked together as in ValueCharts, showing the total score at the top for each alternative for each user, with the best one highlighted in red.

Figure 3.4: Two different values represented in Group ValueCharts: weight (top) and weighted score (bottom) for criterion 'rate'

Figure 3.5: The values for hotel 'Sheraton' for 'User 1' (top) and 'User 3' (bottom) in individual ValueCharts (left) and Group ValueCharts (right) respectively

The bars in Group ValueCharts correspond to the same bars in the individual ValueCharts. This is shown in Figure 3.5, where user 1's and user 3's individual ValueCharts appear on the left and Group ValueCharts on the right. The blue highlighted box represents user 1's weighted scores for the hotel 'Sheraton', and the green highlighted box represents the same for user 3. The arrows show the mapping of these scores into Group ValueCharts.

There were some filtering features in Group ValueCharts. For example, it was possible to show the top choice for each participant and to show the average total score for each alternative. Hovering over a particular user in the user list on the left in Figure 3.3 would display only that user's preferences and de-emphasize the rest of the users.

The group was shown this Group ValueCharts, and the Director of Infrastructure Planning was asked to lead the group discussion.

Results from first round

As mentioned before, the highest score for each user was highlighted in red and represents that user's best alternative. In this case (Figure 3.3), for users 2, 3, 6 and 7, 'Shangri-la' was the best hotel, whereas for users 1, 4 and 5 'Radisson' was the best hotel. A discussion was started with Group ValueCharts projected on the screen.
It was clear that there were disagreements over the alternatives. Two alternatives were clearly the winners, and some alternatives ('Hyatt', 'Rosewood') could be taken out of the picture, as they had low values from all the participants.

As the group started discussing the decision problem, it became apparent that new features needed to be added to the tool. The group appreciated the available features, such as the ability to highlight the top choice for each participant and to view average total scores for all alternatives.

The group felt that there was a need

• to show average weights of the criteria,
• to be able to see average total scores over all users, which would look something like the performance table (Figure 2.4) used in the workshops attended by the IRES research team,
• to have visualizations showing the level and type of disagreement (the participants had questions such as: was the disagreement about the weights, the scores, or both?), and
• to be able to see how much the score functions differ among the participants.

Second study: using Group ValueCharts on a real decision problem

Taking into account the suggestions and feedback received in the first round, we added new features to the tool and conducted a second round of study with the same group on a real decision problem in stormwater management. The university had decided to construct a building complex in an area on campus called Orchard Commons, with run-off management being one of the decision problems facing the planners. The IRES research team worked on identifying criteria, generating alternatives and estimating the consequences of the alternatives in terms of the identified criteria. The IRES research team was led by Dr. Gunilla Öberg, while Mr. Hamed Taheri was in charge of communicating with the experts and knowledge holders. The criteria and alternatives were introduced in Section 2.1 while describing the SDM steps. Detailed descriptions can be found in Appendix A.
The same multi-attribute trade-off method (Section 3.3.2) using ValueCharts was used to construct individual preferences, which were then collected, and Group ValueCharts was generated.

Based on the feedback received in the first meeting, we added the suggested features to the tool before the second meeting:

Average View (Figure 3.6) shows a ValueChart based on the average weights and scores across all users. In this view, the minimum, maximum and average weights are displayed for each criterion. The best average alternative is highlighted in red, which in this case is alternative 3.

Disagreement heat map The participants in the first study complained that although they could see there were disagreements, it was not clear where the disagreements were coming from or how large they were. Disagreements could arise from differences in weights, scores or weighted scores. We implemented a 'heat map' feature to illustrate the degree of disagreement with regard to scores, weights or weighted scores. A disagreement color guide with five colors ranging from pale yellow to red indicates the levels, with pale yellow representing low disagreement and intense red high disagreement. For example, in Figure 3.6, the criteria with the most disagreement in terms of score were 'Reduce Risk', 'Water Conservation' and 'Community Engagement'.

Figure 3.6: Group ValueCharts: average view showing disagreement heat map with respect to score

The disagreement is calculated as the standard deviation across the users' input. It is defined independently for each criterion so that disagreement levels can be compared across criteria.
For each criterion, the standard deviation is given by

s_N = \sqrt{\frac{1}{N}\sum_{i=1}^{N}(x_i - \bar{x})^2}

where N is the number of users, x_i is the weight, score or weighted score (depending on the type of disagreement) for the ith user, and \bar{x} is the mean of the corresponding weights, scores or weighted scores.

For weights, the sample size is equal to the number of users, as each user assigns a single value per criterion, and a straightforward standard deviation is calculated. For scores, since each criterion has a range of values, a standard deviation is calculated for each value across all users; the mean of these standard deviations then gives an overall value for the criterion. The product, or weighted score, disagreement is calculated similarly: a standard deviation is calculated for each alternative across all users, and the mean is used to show the disagreement for each criterion.

Originally, the level of disagreement was measured on a relative scale. That is, for each type of disagreement, the highest standard deviation was taken as 'High Disagreement', the lowest as 'Low Disagreement', and the remaining intermediate values were scaled linearly. However, from discussions within the research team, we realized that this would not separate cases where all criteria had high disagreement from those where all criteria had low disagreement. Thus, we implemented an absolute scale against which the standard deviations are tallied.

Popoviciu's inequality2 on variances states:

\mathrm{Var} \le \frac{1}{4}(M - m)^2

where M is the maximum possible value and m is the minimum possible value. As all weights, scores, and weighted scores have a maximum value of 1 and a minimum of 0, maximum disagreement has a variance of at most 0.25 and a standard deviation of at most 0.5. However, in practice it is unlikely for all users to disagree completely on their values. If we define, arbitrarily, that a 50% or 0.5 disagreement is very high, then we let M = 0.5 and set the maximum standard deviation to 0.25.
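The disagreement computation can be sketched as follows. This is hypothetical Python, not the tool's code; the five color names are placeholders, and the caps (0.25 for weights and scores, 0.33 for weighted scores) follow the absolute scale described in the text.

```python
# Sketch of the per-criterion disagreement measure: a population standard
# deviation, capped against an absolute scale and binned into five levels.
import math

def std_dev(xs):
    """Population standard deviation s_N across all users' values."""
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

# Placeholder names for the five-step color guide (pale yellow .. red).
COLORS = ["pale yellow", "yellow", "orange", "dark orange", "red"]

def disagreement_color(values, kind="weight"):
    """Map one criterion's values across users to a disagreement color."""
    cap = 0.33 if kind == "weighted_score" else 0.25   # absolute scale caps
    s = min(std_dev(values), cap)                      # above the cap = highest level
    return COLORS[min(int(s / cap * 5), 4)]

print(disagreement_color([0.30, 0.31, 0.29]))   # near-consensus weights
print(disagreement_color([0.05, 0.50, 0.95]))   # strong disagreement
```

Because the scale is absolute rather than relative, a criterion on which everyone nearly agrees maps to the lowest level even when it happens to have the largest standard deviation in the chart.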
For disagreement in assigned weights and scores, the scale ranges linearly from 0 to 0.25, with anything above 0.25 falling into the highest disagreement category. As weighted scores are the product of the weights and scores, we allow a slightly larger range of variance, capping at a maximum standard deviation of 0.33.

2See http://en.wikipedia.org/wiki/Popoviciu%27s inequality on variances

Figure 3.7: Box plot for score function distribution for criterion 'negative/positive domino effects' showing extreme users

Score function statistics As suggested in the first study, we implemented a feature to show the distribution of score functions over all users for each criterion, using box plots. Figure 3.7 shows an example of the aggregated score functions for the criterion 'negative/positive domino effects'. The extreme users can be displayed in this box plot. It is also possible to see the preferences of a particular user for the criterion. For example, in this case for user 3 (the pink circle), the best outcome for this criterion is a system for which the need for sewer infrastructure investment can be considerably deferred, and the worst is the system that requires it sooner.

Detailed View (Figure 3.8) is similar to the view used in the first study, with more features. The users are uniquely identified by colors. In addition to the disagreement heat map and the box plots of score functions, the detailed view offers more features:

• Show top choices for each user
• Show average overall scores for all alternatives
• Show average scores for each criterion
• Show average weights for each criterion

Figure 3.8: Group ValueCharts: detailed view

There were two discussions: one after demonstrating the Average View and one after the Detailed View. The decision process was evaluated with surveys at the end of each discussion.
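The box plots used for the score function statistics (Figure 3.7), like the box-plot idea in Section 3.1, rest on the standard five-number summary. A minimal sketch (illustrative Python, not the tool's implementation):

```python
# Five-number summary: smallest value, lower quartile, median, upper
# quartile, largest value. Quartiles here use the common "exclusive"
# convention (the overall median is left out of both halves for odd n).

def median(xs):
    xs = sorted(xs)
    n = len(xs)
    mid = n // 2
    return xs[mid] if n % 2 else (xs[mid - 1] + xs[mid]) / 2.0

def five_number_summary(values):
    xs = sorted(values)
    n = len(xs)
    lower, upper = xs[:n // 2], xs[(n + 1) // 2:]
    return (xs[0], median(lower), median(xs), median(upper), xs[-1])

# Weighted scores from 7 users; two outliers do not drag the median.
scores = [0.42, 0.45, 0.46, 0.47, 0.48, 0.05, 0.95]
print(five_number_summary(scores))  # (0.05, 0.42, 0.46, 0.48, 0.95)
```

In this example, two outlying users produce long whiskers while the box itself stays small, which is exactly the pattern the box plots are meant to expose: most of the group agrees, and a few extreme users stand out.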
The discussions were also audio recorded for evaluation.

3.4.2 Buying software licenses at a software analytics company

We conducted another study with a group of employees of a software analytics company who were involved in buying software licenses. We wanted to test the tool in a domain other than environmental management that still had a multi-criteria decision problem structure. With this group we followed the same method as with the UBC group. Because of time constraints, we could conduct only one round of study with this group, using the updated version of the tool (the version used in the second round for the UBC group). We also conducted the same surveys with this group to evaluate our decision process.

The software licensing multi-criteria decision problem involved selecting the best solution among many with respect to the criteria used to evaluate how good a solution is. For example, one solution to the software problem, a particular licensing strategy, could improve the ease of access to licenses for users but cost more than other strategies; the two criteria in this example would be ease of access to the software (or not being denied access to it) and the cost of this particular licensing solution. The criteria and alternatives were developed by one of the employees of the company in consultation with fellow employees. There were 9 criteria:

1. Up-front license/solution cost
2. Raw aggregate license denial
3. Mission critical work
4. Cost and ease of implementation
5. Quantification of benefit
6. Effects culture change
7. Near term versus long term relief
8. Compliance and sustainability
9. Reversibility of solution

And 10 alternatives:

1. Education/Outreach: Minimize wasteful behavior through education/outreach; licenses should be checked out only when absolutely needed.
2. Create a high priority pond: Create a separate, small pool of licenses for high priority users.
3. Multiple checkout management: Limit the number of licenses that can be checked out during peak demand periods.
4.
Add more licenses now: Cope with current demand right now by maintaining a delicate balance.
5. Group licenses for critical users: These are not concurrent licenses; for example, licenses for the top-10 mission critical users.
6. Distributed/parallel toolbox for large batch jobs: Use concurrent licenses.
7. 90 day (intern) licenses: Solves the problem of many incoming interns during the summer period.
8. Total engineering headcount: Find out the engineer count and get a license for everybody.
9. Seasonally variable pool size: Create the license pool according to the season.
10. Move licenses in/out of closed areas: Move idle licenses from one area to another where there is need.

Detailed descriptions of the criteria and alternatives can be found in Appendix B. Most of the criteria had only qualitative values; it was difficult to quantify them. Thus, the range of values for these criteria was defined by levels, for example from level 1 to level 5; they did not have absolute units of measurement.

Chapter 4
Results

We evaluated the decision process and the tool with surveys for both groups and with audio recordings from the discussions of the UBC group.

4.1 Results from the stormwater management problem

In the second study with the UBC infrastructure planning group, there was more discussion concerning the selection of criteria and the scales and units used to measure the outcomes. More time than anticipated was needed for construction of the individual preference models.

The group appreciated that we had incorporated their feedback from the first study and made it possible to show a 'single ValueCharts of the group average' (Average View, Figure 3.6). Showing the Average View induced a discussion in which all the participants were engaged, albeit less active than in the previous meeting. I acted as the facilitator and demonstrated some of the new features, such as the 'heat maps' illustrating the degree of disagreement with regard to weights, scores, and the product of these two (Figure 3.6).
This feature was very much appreciated by the assigned chair. Another feature that received positive comments was the box plots displaying the score function distributions (Figure 3.7).

Demonstration of the aggregated individual ValueCharts (Detailed View, Figure 3.8) led to a more engaged discussion, although again not as intense as during the first meeting. It was evident that the visualization made the disagreements visible, and they became the focal points of the discussion, particularly with the Detailed View.

One of the participants commented that he was surprised by the small difference between alternatives 2 and 3, as he had expected alternative 2 to be the clear winner. He was also surprised that the disagreement in the group was small with respect to this result. His comments were supported by the rest of the group, and they explicitly expressed an interest in continued exploration of these two alternatives. The discussion of the Detailed View results, the features, and ways to improve the tool had to be interrupted due to lack of time.

4.2 Results from the buying software licenses problem

The decision criteria and alternatives for the buying software licenses problem were not well defined, as the employees were still investigating the measures to be used for the assessment of this decision problem. Thus, not all criteria were quantifiable, and the trade-offs had to be made based on assumptions. For this reason, the results from this study were not very realistic. However, as the participants were employees of an analytics company, we received very good feedback from an information visualization and analytics point of view.
There were some suggested features that they thought would help facilitate the decision-making process:

• a feature to rank the alternatives and/or users according to the score;

• a visualization to show the distribution of weights (we only had a visualization, box plots, for scores);

• an improved visualization that removes the extra white bars (signifying weight variation) and instead lets the user pick which metric to use: weight, score, or the weighted score.

The participants agreed that the Average View serves as a good starting point for discussions. Although the Detailed View provided more information, some participants felt it requires a fair amount of explanation. One of the participants commented:

It is open to the risk of people misunderstanding and sinking a lot of time into either debating how to read the chart or debating the topic at hand but doing so under false premises.

However, they felt that the tool facilitates social interaction and has the potential to be used as a strategic planning tool in professional decision-making situations. They showed interest in using the tool in the future for another, well-defined decision problem at their company.

4.3 Survey results

The survey results for both studies are summarized in Table 4.1. All participants from the UBC planning group indicated that they 'Agreed' or 'Strongly Agreed' with the statement 'I believe that Average/Detailed ValueCharts helps make our discussions more participatory.' The interest in using the tool as a support for group decision-making was rated higher after seeing and discussing the Detailed View in both studies.

In an open-ended question, the participants were asked to describe what they liked about the tool. The comments showed that the tool was well appreciated. For example, one participant wrote:

The different criteria are clearly shown and the utility functions are well-defined.
I like the fact that the user can easily manipulate different values and weights (participant #6, Orchard Commons)

The comments provided evidence that the tool facilitated the discussions, such as:

Based on the group discussion, the tool really allows exchange of ideas and comments among group members, and helps reveal the disagreements and agreements (participant #5, Orchard Commons)

Creates a good discussion and interesting perspectives (participant #6, Orchard Commons)

I really like the fact that based on the opinions of all the team members, the tool helps put the discussion on track and shortlist the alternatives that most of the team have strong feelings on. So it narrows and focuses the decision making process. (participant #5, Software Licenses)

Create concrete social constructs to anchor discussion (participant #6, Software Licenses)

Table 4.1: A summary of survey results from the two studies investigating the perceived usefulness of Group ValueCharts. Ratings are given as Average (Min-Max) for the Average View and the Detailed View in each study. The survey is based on the framework developed by Schilling et al. [25].

1. I believe that (Average/Detailed) Group ValueCharts helps make our discussions more participatory. (1 = Strongly Disagree, 5 = Strongly Agree)
   Orchard Commons: Average View 4.33 (4-5); Detailed View 4.33 (4-5)
   Software Licenses: Average View 4 (3-5); Detailed View 3.33 (2-4)
   Measured dimension: Participation

2. Please rate the tool's potential to improve group interaction. (1 = Worse group interaction, 5 = Better group interaction)
   Orchard Commons: Average View 4 (3-5); Detailed View 4.33 (4-5)
   Software Licenses: Average View 3.67 (1-5); Detailed View 3.83 (2-5)
   Measured dimension: Quality of Information Exchange

3. Please rate the tool's potential to improve information exchange among participants. (1 = Less exchange of information, 5 = More exchange of information)
   Orchard Commons: Average View 4 (3-5); Detailed View 4.5 (4-5)
   Software Licenses: Average View 3.5 (1-5); Detailed View 3.83 (3-5)
   Measured dimension: Quantity of Information Exchange

4. The tool helps identify agreements and disagreements. (1 = Strongly Disagree, 5 = Strongly Agree)
   Orchard Commons: Average View 4.17 (3-5); Detailed View 4.67 (4-5)
   Software Licenses: Average View 3.83 (2-5); Detailed View 4.5 (4-5)
   Measured dimension: Transparency & Comprehensibility

5. The tool helps make informed decisions based on everyone's preferences. (1 = Strongly Disagree, 5 = Strongly Agree)
   Orchard Commons: Average View 3.5 (3-4); Detailed View 4.33 (4-5)
   Software Licenses: Average View 3.5 (1-4); Detailed View 4.17 (2-5)
   Measured dimension: Transparency & Comprehensibility

6. I would be happy if the alternative with the highest average score was chosen. (1 = Strongly Disagree, 5 = Strongly Agree)
   Orchard Commons: Average View 2.83 (1-4); Detailed View 3.17 (2-4)
   Software Licenses: Average View 3.2 (2-4); Detailed View 3.5 (3-4)
   Measured dimension: Transparency

7. I would like to use (Average/Detailed) Group ValueChart for collaborative decision making at work in the future. (1 = Strongly Disagree, 5 = Strongly Agree)
   Orchard Commons: Average View 3.6 (3-4); Detailed View 4.17 (3-5)
   Software Licenses: Average View 4.33 (4-5); Detailed View 4.67 (4-5)

The participants were also asked what they disliked about the tool. For the Orchard Commons problem, most of these comments were related to the criteria, scales or score functions used to assess the alternatives, such as:

The value functions should be created to minimize confusion about 'directionality' of the criteria. e.g., for risk, the x-axis values should be consistent; if the criteria is 'reduce risk' then the scoring value should mean higher number = reduced risk. (participant #2)

It should be possible to make all the criteria point in the same direction i.e., higher number is better. (participant #2)

Some of the criteria scoring was not intuitive. (participant #4)

Cost or willingness to pay should be included as a criteria. (participant #4)

I need to see the dollars, the time lines, NPV and the potential of alternatives to be synergized or leveraged with other projects in their area. At the moment the tool doesn't capture this. (participant #5)

These comments were not specifically about the tool, as we assumed that the criteria and alternatives had been developed beforehand; this is a good sign, as it shows that the participants were discussing the topic and not the tool. However, the method, along with the tool we used for group decision-making, helped the participants scrutinize the assumptions underlying the criteria and alternatives.
In other words, they were making informed decisions.

For the Software Licenses problem, when asked what they disliked about the tool, most comments were suggestions on the visualization and interactivity aspects. For example:

A simple feature might be to allow the exporting of data in a common format so that people could use the data in their own visualizations or keep it for later use. (participant #1)

Interface. It is not intuitive. For the outcome of the ValueChart (Average Group ValueChart), I would like it to be higher level than now. I want summary statements that are simpler to understand: such as based on weight, Criteria 1 showed the most disagreement etc. I know this information can be found by the user even now. But I want this kind of most important information to be much more accessible and stated in concise fashion. (participant #3)

It would be interesting to make the methods used for calculating disagreement more transparent by listing them, and perhaps allowing you to toggle between robust and non-robust stats (mean versus median). Also, maybe color coding the spread for the utility functions right next to each criteria without having to double-click to reveal the box plots might be good so you can see all variances at the top overview level. (participant #4)

...some of the best aspects, such as displaying differences among participants, seemed like optional functions. It might be better to build an interface that guides people through several views of the data in order to get the full impact. (participant #6)

These comments show that although the tool could be improved visually, it still has the potential to be used as a tool to facilitate group decision-making.

Chapter 5

Discussion

As mentioned in Chapter 4, the surveys focused on the process effectiveness assessment, that is, on the quality of the decision process. Specifically, we were evaluating the tool in terms of group participation, transparency and comprehensibility.
In this section, we discuss the results in terms of these measures.

5.1 Level of participation

Participation in a decision-making process refers to the ability of participants to express opinions. Previous studies have shown that active participation in MCDA-based decision processes stimulates participants and promotes learning through interaction, thinking and representation of their preferences. This results in higher decision accuracy and preference certainty [20, 24, 30]. It also increases the participants' trust in the results. Dalal et al. [5] argue that "judgments made by interacting groups are more accurate than judgments made by the statistical aggregation of individual judgments". As mentioned in the results section, the participants from the Orchard Commons study were engaged in a lively discussion as soon as they were shown their aggregated results from the fictional learning trial (choosing a hotel). This is a strong indication that the tool supports or even induces active participation. The survey results support this conclusion, as all six participants in the Orchard Commons study checked 'Agree' or 'Strongly Agree' to the statement 'I believe that Average Group ValueCharts helps make our discussions more participatory.' A comment from one of the Software Licenses study participants also verifies this finding:

... Also forced participation and sharing of opinions of all involved, which might otherwise be lost. (participant #1, Software Licenses)

The survey results and comments strongly indicate that the tool facilitated information exchange and group interaction, which are both closely tied to the level of participation. The survey responses for both studies also suggest that the participants perceived the Detailed View as slightly more successful than the Average View in facilitating both the exchange of information and the group interaction (Table 4.1).
The fact that the participants were able to identify their disagreements and agreements across the group appears to be a reasonable explanation for this finding.

Furthermore, it appears that the tool reduced the ability of individuals to dominate the discussion [20]. For example, Group ValueCharts gave everyone an equal opportunity to express their values and assured every participant that their preferences were seen and heard along with the others'. Several visualizations were provided in the tool to facilitate analysis and deeper group discussions, such as box plots of score functions, average scores, top choices, and so on.

5.2 Transparency

Transparency in decision-making refers to free access to knowledge and information and the ability to communicate about the actions performed. Transparency is a central factor in group decision-making, as it conveys a sense of fairness and increases trust in the process [19]. Salo and Hämäläinen [24] argue that deploying MCDA in group decision-making can enhance transparency and legitimacy, as it leaves an 'audit trail' that enables tracking the process and fosters learning through enhanced understanding of others' perspectives and of the decision problem. Existing group decision-making visualization tools either display only summary statistics for the group [5] or compose a group view by aggregating opinions or providing averaged values [19, 22]. The preliminary results from the study by Taheri et al. [28] suggest that the use of average or aggregated data bears the risk of decreasing trust in the decision process.

From the survey responses, the open-ended questions and the discussions, it is evident that the Detailed View of our tool has the ability to increase the transparency of the group decision process (Table 4.1).
The Detailed View received higher scores for all transparency questions in the survey, and the responses indicate that it offers greater transparency than the Average View through its explicit presentation of the participants' preferences underlying the summary statistics. This is, for example, reflected in the survey question 'The tool helps make informed decisions based on everyone's preferences', which had the greatest number of participants increase their rating by one point between the views (5 out of 6 in the Orchard Commons study and 3 out of 6 in the Software Licenses study).

The Detailed View was also rated higher on '[helping] identify agreements and disagreements'. The ability to identify agreements and disagreements was used by the participant leading the discussion in the Orchard Commons study to encourage individuals to justify their differing values. For instance, after the chair confirmed that everyone agreed on alternatives 2 and 3 being the most preferred, he began making observations that focused attention on these two alternatives. This spurred a discussion on the differences between the two alternatives, particularly in water conservation. The fact that the individual preferences are visible for scrutiny by the group is likely to prevent participants from manipulating their preferences to bias certain alternatives.

When viewing the results as a whole, the take-home message seems to be that it was the combined use of the two views that was most appreciated by the users. No doubt, the ability to compare individual preferences in a single visualization interface made the decision process transparent, as it equally included all users in the evaluation of the alternatives. This conclusion is supported by comments made in the open questions of the survey:

It's clear, transparent, and easy to use.
(participant #5, Orchard Commons)

I like being able to quickly see which alternatives were rated the highest, or lowest and then scan for individual differences across users within a given alternative. (participant #1, Software Licenses)

The ability to see where the strongest disagreement lies. Also the ability to filter by a user and find out what she/he feel the strongest about. (participant #3, Software Licenses)

That it uncovers a lot of information regarding each individual's responses and opinions about the different alternatives. This is really helpful to drive discussions and allow for people to express why or why not they agree with something. (participant #5, Software Licenses)

5.3 Comprehensibility

According to Matheson and Matheson [18], comprehensibility is one of the major aspects that determine the quality of a decision process. It is defined as the ability to use meaningful and reliable information, to make clear value trade-offs, to use logically correct reasoning, and to pinpoint disagreements [18].
Our results suggest that Group ValueCharts increased the comprehensibility of the decision process, especially with regard to the ability to make clear value trade-offs and to pinpoint disagreements.

When it comes to the ability to make clear value trade-offs, the responses to the question 'What did you like most about the tool?' suggest that visualizing personal preferences helped the participants quantify their valuations and perform trade-offs analytically:

The opportunity to compare options and adjust values and weight factors to reflect personal preferences (participant #1, Orchard Commons)

It provides an opportunity to review/question decisions and preferences (participant #2, Orchard Commons)

This conclusion is further supported by the responses given to the statement 'The tool helps make informed decisions based on everyone's preferences', where all except one of the participants increased their rating for the Detailed View compared to the Average View (in the Orchard Commons study). Our findings also align with the results of Ewing and Baker [8], who developed an Excel-based decision tool to support decisions around investment in green energy technologies. They found that the participants "appreciated the idea that they can revisit their values as they have time to reflect and further discuss them."

When it comes to the ability to identify disagreements, some of the participants wrote:

Based on the group discussion, the tool really allows exchange of ideas and comments among group members, and helps reveal the disagreements and agreements. (participant #5, Orchard Commons)

I like that it helped to identify areas of disagreement, and that you could identify extreme users, so that they could have a pointed discussion about why they might feel so strongly.
(participant #4, Software Licenses)

The survey results provide evidence that this opinion was shared among the participants and show that they felt the Detailed View was more powerful than the Average View.

These results align with those of Hostmann et al. [14], who used MAUT as a framework for conflict resolution in river rehabilitation. Their study indicates that MAUT can be used as a framework for pinpointing sources of disagreement and interpersonal conflict between different stakeholder groups.

Although our study was not designed to probe the ability to use meaningful and reliable information, the two open questions in the survey suggest that the tool was helpful in identifying participants' assumptions as well as preset model assumptions. For example, when the participants were asked to write about the features that they liked most, some of them wrote:

It provides a useful mechanism to be clear on the project objectives, and to not overlook important issues for decision making (participant #2, Orchard Commons)

The ability to isolate participants and engage in discussion about assumptions was very powerful (participant #3, Orchard Commons)

These responses, in combination with the discussions surrounding the criteria and the performance measures, indicate that it might be worthwhile to explore whether the tool could be extended to facilitate the identification of questionable assumptions and unreliable information.

During discussion, a participant commented:

I think this tool will actually allow us to zero in on where we think there might be issues and that's real value here. ... We're drilling in on specific areas. So whether it's been set up slightly off or not, it doesn't matter. The fact that you're able to have real discussion here about it is the valuable part of the tool, then make adjustments accordingly.
(audio recording, Orchard Commons)

This suggests that the tool also has the potential to make the participants focus on information pertinent to the decision.

The tool can potentially be used to help a group reach consensus, but in the type of settings we are targeting, consensus is not the goal; rather, the goal is to reveal points of agreement and disagreement, as the final decision is normally not made by the group. In most cases, the person in charge brings in a number of experts and asks for their input on the decision. After listening to the pros and cons, this person makes the decision. The results show that our tool has the ability to pinpoint such disagreements.

Chapter 6

Conclusion

In this chapter, we summarize the research work and list some probable future work.

6.1 Summary

Our study results show that an interactive visualization tool that allows participants to construct individual preference models and displays the aggregated values in two complementary views (Average View and Detailed View) has the capacity to enhance the participation, transparency and comprehensibility of a decision process.

Our tool consists of a preference construction module, ValueCharts [2], which informs users of how changing values can influence their preferred alternative ranking. The second module of our tool allows individual preferences, constructed using ValueCharts, to be collectively visualized in a single interface (Group ValueCharts). There are two aggregated views in the second module: the Average View (showing only the average values of all participants) and the Detailed View (showing the individual values of each participant).

From the survey responses and the audio recordings of the discussions, it is evident that the tool has the capacity to show agreements and disagreements in weights and scores among the participants. This helps the group focus their discussion on the most significant areas of the decision problem. Based on the survey results, we conclude that the Detailed View was preferred over the Average View, but the latter also has a very important role in the decision analysis. In fact, the Average View was developed upon request from the participants in the first study, as we originally had only a Detailed View. The results suggest that the two views are complementary in the decision process; participants liked seeing the overall average preferences of the group, but it is also vital to identify individual preferences transparently. The survey responses to the statement 'I would be happy if the alternative with the highest average score was chosen.' for the Average and Detailed Views also support this. Two participants increased their ratings for the Detailed View on this statement; however, not all participants were satisfied with simply picking the alternative with the highest average score.

In conclusion, we anticipate that Group ValueCharts can effectively help decision makers explore alternative solutions. The tool can help focus attention on information significant to the decision problem as well as on important issues and points of contention, while encouraging active participation from all involved.

6.2 Future work

Although our results suggest that Group ValueCharts can be used as an effective group decision-making tool, there are several future research enhancements to investigate. First, Group ValueCharts can only support decisions involving a limited number of criteria, alternatives and users. Visualization techniques for larger sets of alternatives, criteria and users need to be considered in future work.

As mentioned in Section 3.2, we made some assumptions while developing the tool, as addressing them was out of the scope of this research.
The tool is useful for a decision problem for which the criteria and alternatives have already been identified and generated. More research needs to be done to develop a tool that also aids in identifying criteria and generating alternatives.

It would also be interesting to have a computer-based ontology of various technical and non-technical aspects of designs [3]. From this ontology, desired designs could then be generated by discovering their compatibility dynamically.

Some alternatives might not be mutually exclusive, that is, they may overlap and include elements of one another. Additionally, there might be a need to consider a combination of two or more alternatives as one combined solution to the problem, which the current version of the tool cannot capture. More research could be undertaken on feasible combinations of the alternatives and on treating them as separate alternatives.

Due to time constraints, we could not perform any formal evaluation of the tool in terms of Human Computer Interaction (HCI), such as a heuristic evaluation. More usability problems could be identified with formal evaluations. There is also a lot of room for cosmetic improvement from an information visualization point of view. From the studies conducted with the software analytics company, we got suggestions for reducing information clutter, using clearer fonts, colors, etc. Some information (e.g. the red boxes showing weights) was redundant. We anticipate that our research will be used as a stepping stone to develop a more scalable, adaptable and flexible group decision-making tool.

Bibliography

[1] J. Bautista and G. Carenini. An integrated task-based framework for the design and evaluation of visualizations to support preferential choice. In Proc. Advanced Visual Interfaces, pages 217–224, 2006. → pages 6, 10

[2] G. Carenini and J. Lloyd. ValueCharts: Analyzing linear models expressing preferences and evaluations. In The International Working Conference on Advanced Visual Interfaces (AVI), pages 150–157, 2004.
→ pages iii, 2, 10, 46

[3] B. Chamberlain, G. Carenini, G. Öberg, D. Poole, and H. Taheri. A decision support system for the design and evaluation of sustainable wastewater solutions. IEEE Transactions on Computers, Special Issue on Computational Sustainability, 63(1):129–141, 2014. → pages 47

[4] C. Conati, G. Carenini, E. Hoque, B. Steichen, and D. Toker. Evaluating the impact of user characteristics and different layouts on an interactive visualization for decision making. Computer Graphics Forum, 33(3):371–380, 2014. → pages 6

[5] S. Dalal, D. Khodyakov, R. Srinivasan, S. Straus, and J. Adams. ExpertLens: A system for eliciting opinions from a large pool of non-collocated experts with diverse knowledge. Technological Forecasting and Social Change, 78(8):1426–1444, 2011. → pages 40, 41

[6] L. C. Dias and J. N. Clímaco. Dealing with imprecise information in group multi-criteria decisions: a methodology and a GDSS architecture. European Journal of Operational Research, 160(2):291–307, 2005. → pages 2

[7] W. Edwards and F. Barron. SMARTS and SMARTER: improved simple methods for multiattribute utility measurement. Organizational Behavior and Human Decision Processes, 60:306–325, 1994. → pages 8, 10

[8] B. Ewing and E. Baker. Development of a green building decision support tool: a collaborative process. Decision Analysis, 6(3):172–185, 2009. → pages 43

[9] S. Few. Now You See It: Simple Visualization Techniques for Quantitative Analysis. Analytics Press, 1 edition, 2009. ISBN 0970601980. → pages 18

[10] S. French and D.-L. Xu. Comparison study of multi-attribute decision analytic software. Journal of Multi-Criteria Decision Analysis, 13:65–80, 2005. → pages 6

[11] R. Gregory, L. Failing, M. Harstone, G. Long, T. McDaniels, and D. Ohlson. Structured Decision Making: A Practical Guide to Environmental Management Choices. Wiley-Blackwell, 2012. ISBN 1444333429. → pages vii, 2, 4, 7, 21

[12] J. Guest, S. Skerlos, J. Barnard, M. Beck, G. Daigger, H. Hilger, S. Jackson, K. Karvazy, L. Kelly, L. Macpherson, J. Mihelcic, A. Pramanik, L. Raskin, M. Van Loosdrecht, D. Yeh, and N. Love. A new planning and design paradigm to achieve sustainable resource recovery from wastewater. Environmental Science & Technology, 43(16):6126–6130, 2009. → pages 2

[13] J. Hammond, R. Keeney, and H. Raiffa. Smart Choices: A Practical Guide to Making Better Decisions. Harvard Business School Press, 1999. → pages 7

[14] M. Hostmann, M. Borsuk, P. Reichert, and B. Truffer. Stakeholder values in decision support for river rehabilitation. Archiv für Hydrobiologie, 155:491–505, 2005. → pages 44

[15] I. B. Huang, J. Keisler, and I. Linkov. Multi-criteria decision analysis in environmental sciences: Ten years of applications and trends. Science of The Total Environment, 409(19):3578–3594, 2011. → pages 2

[16] R. Keeney and H. Raiffa. Decisions with Multiple Objectives: Preferences and Value Tradeoffs. New York: John Wiley & Sons, 1976. → pages 2, 6, 7, 8

[17] I. Linkov, F. Satterstrom, G. Kiker, T. Seager, T. Bridges, K. Gardner, S. Rogers, D. Belluck, and A. Meyer. Multicriteria decision analysis: A comprehensive decision approach for management of contaminated sediments. Risk Analysis, 26:61–78, 2006. → pages 6

[18] D. Matheson and J. Matheson. The Smart Organization: Creating Value Through Strategic R&D. Harvard Business School Press, Boston, 1998. → pages 43

[19] J. Mustajoki, R. Hämäläinen, and M. Marttunen. Participatory multicriteria decision analysis with Web-HIPRE: a case of lake regulation policy. Environmental Modelling & Software, 19(6):537–547, 2004. → pages 41

[20] J. F. Nunamaker and A. V. Deokar. GDSS parameters and benefits. In Handbook on Decision Support Systems, pages 391–414. Springer Berlin Heidelberg, 2008. → pages 40, 41

[21] P. Reichert, M. Borsuk, M. Hostmann, S. Schweizer, C. Spörri, K. Tockner, and B. Truffer. Concepts of decision support for river rehabilitation. Environmental Modelling & Software, 22:188–201, 2007. → pages 2, 6

[22] P. Reichert, N. Schuwirth, and S. Langhans. Constructing, evaluating, and visualizing value and utility functions for decision support. Environmental Modelling & Software, 46:283–291, 2013. → pages 6, 41

[23] M. Riabacke, M. Danielson, and L. Ekenberg. State-of-the-art prescriptive criteria weight elicitation. Advances in Decision Sciences, 46, 2012. → pages 10

[24] A. Salo and R. Hämäläinen. Multicriteria decision analysis in group decision processes. In D. M. Kilgour and C. Eden, editors, Handbook of Group Decision and Negotiation, volume 16, pages 269–283. Springer Netherlands, 2010. → pages 2, 40, 41

[25] M. S. Schilling, N. Oeser, and C. Schaub. How effective are decision analyses? Assessing decision process and group alignment effects. Decision Analysis, 4(4):227–242, 2007. → pages vii, 22, 37

[26] N. Schuwirth, P. Reichert, and J. Lienert. Methodological aspects of multi-criteria decision analysis for policy support: A case study on pharmaceutical removal from hospital wastewater. European Journal of Operational Research, 220(2):472–483, 2012. → pages 6

[27] H.-S. Shih, C.-H. Wang, and E. S. Lee. A multiattribute GDSS for aiding problem-solving. Mathematical and Computer Modelling, 39(11-12):1397–1412, 2004. → pages 2

[28] H. Taheri, G. Carenini, and G. Öberg. Designing interactive visualization to support sustainable infrastructure planning. 2014. (Manuscript submitted for publication). → pages viii, 2, 12, 13, 14, 15, 17, 41

[29] C. Ware. Information Visualization: Perception for Design. Morgan Kaufmann, 3 edition, 2012. ISBN 0123814642. → pages 18

[30] F. Woudenberg. An evaluation of Delphi. Technological Forecasting and Social Change, 40(2):131–150, 1991. → pages 40

Appendix A

Stormwater Management at Orchard Commons

There is a need for stormwater management at a site called Orchard Commons at the University of British Columbia (UBC), which is currently used as a car park.
Orchard Commons is proposed to be the physical home of the UBC Vantage College, which is anticipated to be developed as a mixed-use academic/student-housing hub. In consultation with the members of the infrastructure planning committee at UBC, two sets of criteria, operations and sustainability, were developed. For the study, we went through the 25 sustainability criteria that were given in the design brief for Orchard Commons. We discussed them with several people on campus, and we identified six criteria that were reasonably measurable and would work for the study.

Under operations come reduce risk, reduce disruption to stakeholders, and negative/positive domino effects. Under sustainability, there were criteria such as water conservation, reduce energy consumption, and community engagement. In the following sections, we describe the criteria and the alternative solutions for this decision problem in detail.

A.1 Criteria

1 Negative/positive domino effects

This criterion deals with positive and negative domino effects. We are aware that operations use 'domino effects' to denote something negative and 'synergies' to denote something positive. For us, these two criteria are a bit blurred, and we chose to combine them. As a measure, we chose the anticipated time span until the aging sewer infrastructure will need to be retrofitted.

(a) Unit: Need for sewer infrastructure investments

(b) Range of values: [Soon, Slightly deferred, Deferred, Considerably deferred]

2 Reduce disruption to stakeholders

The second criterion is to reduce disruption to occupants and users, which we assessed purely theoretically for the construction and operation phases.

(a) Unit: Additive scale of 1-3

(b) Range of values: [Minimal, Some, Considerable]

3 Reduce risk

Risk is estimated by assessing the likelihood of something happening and how bad that would be. Both are assessed on a scale of 1-5 and then multiplied by each other.
So the maximum value is 25 and the minimum is 1. After talking with the concerned people from operations, six potential risks were identified, such as on-site flooding, downstream flooding, and erosion. We then assessed the likelihood and the 'significance' of each of them.

(a) Unit: 1-25 scale

(b) Range of values: [2, 4, 8]

4 Water conservation

We first estimated the amount of stormwater that can theoretically be collected for reuse, and then estimated the reduction of demand for municipal water.

(a) Unit: Reduced demand for municipal water in comparison to the baseline (existing system), in cubic meters per year

(b) Range of values: [0, 2500, 7500]

5 Reduce energy consumption

This criterion focuses on reducing energy consumption during the operational phase, i.e., the energy needed for pumping, basic treatment, and transporting water.

(a) Unit: kWh/year

(b) Range of values: [0, 12500, 26500]

6 Community engagement

This criterion is a merger of two criteria called 'Engage the Surrounding Community' and 'Create Micro-Opportunities to Experience Nature'. The different options for this criterion are shown in Figure A.1.

Figure A.1: Different options for the criterion 'community engagement'

A.2 Alternatives

1 Alternative 1

Alternative 1 corresponds to the systems in older buildings at UBC, and we use it as a baseline. We assume that all stormwater is directed to the sewers.

Alternative 1
  Negative/positive domino effects: Soon
  Reduce disruption to stakeholders: 4
  Reduce risk: 8 on the 1-25 scale
  Water conservation: 0 m3/year
  Reduce energy consumption: 0 kWh/year
  Community engagement: Hidden

2 Alternative 2

Alternative 2 meets but does not exceed UBC's requirements for on-site runoff management as outlined in the UBC campus plan, technical guidelines and REAP.
In line with LEED, the flow rate is reduced by 50%, and we assume that detention strategies are in place to reduce the volume of runoff to a 2-year-event flow rate.

Alternative 2
  Criteria                            Values
  Negative/positive domino effects    Slightly deferred
  Reduce disruption to stakeholders   3
  Reduce risk                         2 on a scale of 25
  Water conservation                  2500 m3/year
  Reduce energy consumption           0 kWh/year
  Community engagement                Blue/Green features

3 Alternative 3
We assume that best sustainability practices for on-site stormwater management are applied. This alternative goes beyond UBC requirements: we assume that all runoff is retained or detained, which means that nothing is directed to the sewers, not even during 10-year rainfall events. We assume that rainwater is reused for irrigation, green infrastructure and enhancing the local environment.

Alternative 3
  Criteria                            Values
  Negative/positive domino effects    Deferred
  Reduce disruption to stakeholders   3
  Reduce risk                         2 on a scale of 25
  Water conservation                  7500 m3/year
  Reduce energy consumption           26500 kWh/year
  Community engagement                Blue/Green infrastructure

4 Alternative 4
We assume that half of the stormwater is retained or detained on-site. Excess stormwater is directed to Sustainability Street, where it is used for irrigation of horticulture, landscaping and educational demonstration projects. Best sustainability practices are applied from a community perspective.

Alternative 4
  Criteria                            Values
  Negative/positive domino effects    Considerably deferred
  Reduce disruption to stakeholders   6
  Reduce risk                         4 on a scale of 25
  Water conservation                  2500 m3/year
  Reduce energy consumption           12500 kWh/year
  Community engagement                Interactive Blue & Green infrastructure

A summary of the alternative designs for stormwater management, with the visual representation used in the studies, follows.

Stormwater management at Orchard Commons – Alternative solutions

Alternative 1: All stormwater is directed to the sewers.
This alternative corresponds to older buildings on campus and is used as a baseline.

Alternative 2 meets but does not exceed UBC's present requirements for on-site runoff management. To comply with the requirements, the flow rate must be reduced by 50 percent, and the runoff volume should not exceed the 2-year 24-hour design storm. The detention strategies include green infrastructure, such as small ponds and vegetation, to reduce the stormwater directed to the sewers as well as peak stormwater flows. The site also includes some ornamental water features, irrigated from detained stormwater, to maintain ecological integrity and health as well as to reduce potable water demand.

Alternative 3 applies best sustainability practices for on-site stormwater management. This alternative goes beyond UBC stormwater management requirements to maximize its sustainability goals. It is assumed that all runoff is detained or retained, up to 10-year events. Compared to Alternative 2, this alternative makes more use of green infrastructure such as green walls, roofs, trees, ponds and wetlands.

Alternative 4: Half of the stormwater is retained or detained on site. Excess stormwater is moved to Sustainability Street (Stores Road), where it is used for irrigation of horticulture, landscaping and educational demonstration projects. The amount of water directed to Sustainability Street is controlled by, for example, a detention tank placed under the Commons. Best sustainability practices are applied at a community scale. Compared to Alternative 3, this alternative uses more green infrastructure.

Appendix B
Buying Software Licenses at a Software Analytics Company

The software licensing problem involves selecting, from many possible solutions, the one that best satisfies the criteria used to evaluate how good a solution is.
For example, one solution to the licensing problem, a particular licensing strategy, could improve the ease of access to licenses for users but cost more than other strategies; the two criteria in this example would be ease of access to the software (i.e., not being denied access) and the cost of that particular licensing solution. In the following sections, the criteria and alternatives for the software licensing problem are described.

B.1 Criteria

1 Up-front license/solution cost
How much utility, or 'goodness', we want to associate with the cost of a solution. This criterion captures up-front costs: the initial architecture investment, the software investment, or whatever the up-front cost of the solution is.
(a) Unit: Dollars ($)
(b) Range of values: [$1,000,000, $750,000, $500,000, $250,000, $0]

2 Denial
How much utility, or 'goodness', we want to associate with a given measure of raw aggregate software denials avoided. If someone tries to access the software and there are not enough licenses, we call it a 'denial'. Ideally, we want to minimize the number of raw aggregate denials.
(a) Unit: Number of denials
(b) Range of values: [4000, 3000, 2000, 1000, 0]

3 Mission critical work
This criterion assigns utility to the volume of mission-critical work that a solution supports, for example through dynamic license movement for temporally mission-critical work, or by ensuring licenses for people with upcoming important milestones. Essentially, this criterion measures the degree to which a solution delivers a license when and where it is badly needed. We are looking for solutions that offer a high degree of license transfer or flexibility with licenses.
(a) Unit: Not defined
(b) Range of values: [Low Support, Mid-Low Support, Mid Support, Mid-High Support, High Support]

4 Cost/ease of implementation
This criterion assigns utility to how much cost and effort a solution will require to manage through its life cycle.
The criterion covers soft costs, such as the overhead in time and resources of managing the licenses/solution, and questions like 'How much will it cost to maintain this solution over the next few years?' It also captures any surrounding architectural or implementation changes required to put the solution in place, and possible unknowns or risks: essentially, the more things you change, the more risk you carry.
(a) Unit: Not defined
(b) Range of values: [Low Ease, Mid-Low Ease, Mid Ease, Mid-High Ease, High Ease]

5 Quantification of benefit
This criterion assigns a utility value to how easy it is to measure the benefit of a solution. For example, the benefit of education and outreach efforts might be difficult to gauge.
(a) Unit: Not defined
(b) Range of values: [Difficult, Medium, Easy]

6 Culture change
This criterion captures the degree to which a solution influences employees to be more courteous or conscientious with their license usage, using licenses only sparingly and deliberately when needed for a given task.
(a) Unit: Not defined
(b) Range of values: [Low, Medium, High]

7 Near term versus long term relief
This criterion captures the degree to which a particular solution only offers near-term relief versus a more steady long-term solution.
(a) Unit: Not defined
(b) Range of values: [Near Term, Mid-Term, Long-Term]

8 Compliance and sustainability
This criterion captures how likely employees are to adhere to a given solution. It also captures things like expected or probable success.
(a) Unit: Not defined
(b) Range of values: [Low, Medium, High]

9 Reversibility of solution
This criterion captures the degree to which the selection and implementation of a particular solution can be undone.
We may want to choose solutions that are easily reversible if we do not want to fully commit to a solution but want to test it out.
(a) Unit: Not defined
(b) Range of values: [Low, Medium, High]

B.2 Alternatives

1 Education/Outreach
The primary goal is to use education and outreach to minimize wasteful behaviour; licenses should only be checked out when absolutely needed, so that they are available for others the rest of the time. A secondary goal is to integrate road maps so that the license pool is optimally sized over time. Positives of this solution include that it creates a culture of conservation and integrates the planning process. A negative is the difficulty of measuring how successful the solution is at reducing denials.

Education/Outreach
  Criteria                             Values
  Up-front cost                        $2000
  Denial                               2500
  Mission critical work                Mid-Low Support
  Cost/ease of implementation          Mid-High
  Quantification of benefit            Difficult
  Culture change                       High
  Near term versus long term relief    Mid-Term
  Compliance and sustainability        Medium
  Reversibility of solution            High

2 Create a high priority pond
The concept is to create a separate, smaller pool (a 'pond') of licenses for high-priority users. This solution may require active management to change pond sizes as demand changes in real time. A positive is a good likelihood of protecting mission-critical users and programs. Negatives include a new management process, shifting denials elsewhere, and the need for active attention.

Create a high priority pond
  Criteria                             Values
  Up-front cost                        $5000
  Denial                               2000
  Mission critical work                High Support
  Cost/ease of implementation          Mid-Low
  Quantification of benefit            Easy
  Culture change                       Medium
  Near term versus long term relief    Long-Term
  Compliance and sustainability        Medium
  Reversibility of solution            Medium

3 Multiple checkout management
The idea of multiple checkout management is to limit the number of licenses that can be checked out during peak demand periods. This could be time-based.
A positive of this solution is that it smears license consumption over the 24-hour clock. A negative is that it may not be feasible everywhere.

Multiple checkout management
  Criteria                             Values
  Up-front cost                        $5000
  Denial                               2500
  Mission critical work                Mid-High Support
  Cost/ease of implementation          Low
  Quantification of benefit            Medium
  Culture change                       Medium
  Near term versus long term relief    Long-Term
  Compliance and sustainability        Medium
  Reversibility of solution            High

4 Add more licenses now
Add more licenses to cope with current demand right now; this is a delicate balance, as we must strive to minimize over-buying. A positive is near-term relief. Negatives: expensive, only a band-aid solution, not sustainable.

Add more licenses now
  Criteria                             Values
  Up-front cost                        $500,000
  Denial                               2000
  Mission critical work                Low Support
  Cost/ease of implementation          High
  Quantification of benefit            Easy
  Culture change                       Low
  Near term versus long term relief    Near-Term
  Compliance and sustainability        Low
  Reversibility of solution            Low

5 Group licenses for critical users
These are not concurrent licenses; they would be group licenses for, say, the 'top-10' mission-critical users. This would require some overhead to reallocate licenses if and when users leave the company. A positive is that it would eliminate denial risk for users with group licenses. Negatives include a new management process and inefficient utilization if the licenses are not used 24-7.

Group licenses for critical users
  Criteria                             Values
  Up-front cost                        $300,000
  Denial                               2000
  Mission critical work                High Support
  Cost/ease of implementation          Mid-Low
  Quantification of benefit            Medium
  Culture change                       Low
  Near term versus long term relief    Mid-Term
  Compliance and sustainability        Low
  Reversibility of solution            Low

6 Distributed/parallel toolbox for large batch jobs
This is for those who use many concurrent licenses, e.g. those with near-constant license checkout.
A positive: cheaper for large consumption. Negatives: requires access to a cluster, including maintenance costs, and training to set up cluster jobs.

Distributed/parallel toolbox for large batch jobs
  Criteria                             Values
  Up-front cost                        $300,000
  Denial                               2000
  Mission critical work                Low Support
  Cost/ease of implementation          Low
  Quantification of benefit            Medium
  Culture change                       Low
  Near term versus long term relief    Long-Term
  Compliance and sustainability        Medium
  Reversibility of solution            Medium

7 90-day (intern) licenses
The original intent for this alternative was to solve the problem of many incoming interns during the summer; however, this solution may have utility for emergent bursts of consumption in general, whenever and for whatever reason they happen. Positives of this approach include a nominal fit within the intern window and a lower cost (20% less) on a per-license basis. Negatives include: licenses cannot be transferred to different, mission-critical user groups; the licenses are not renewable; the clock cannot be stopped once the 90-day period starts (less flexibility); and the period may not cover the entire time when interns or a group are using licenses.

90-day (intern) licenses
  Criteria                             Values
  Up-front cost                        $200,000
  Denial                               2000
  Mission critical work                Low Support
  Cost/ease of implementation          Mid-High
  Quantification of benefit            Easy
  Culture change                       Low
  Near term versus long term relief    Mid-Term
  Compliance and sustainability        Low
  Reversibility of solution            High

8 Total engineering headcount
Find out how many engineers you have and get a license for everybody. Positives of this solution include that it eliminates denial risk, is applicable to closed areas, and can be reverted easily.
Negatives include that it is the most expensive option and is not sustainable: more licenses have to be bought whenever more employees are hired.

Total engineering headcount
  Criteria                             Values
  Up-front cost                        $1,000,000
  Denial                               0
  Mission critical work                Low Support
  Cost/ease of implementation          High
  Quantification of benefit            Easy
  Culture change                       Low
  Near term versus long term relief    Mid-Term
  Compliance and sustainability        Low
  Reversibility of solution            Low

9 Seasonally variable pool size
Peak demand varies with the season, which results in carrying many more licenses in the pool than necessary during the off-season. Positives of this solution include that the pool is optimally sized during the year and that the assets are retained (an alternative to 90-day licenses). A negative is low feasibility.

Seasonally variable pool size
  Criteria                             Values
  Up-front cost                        $250,000
  Denial                               500
  Mission critical work                Low Support
  Cost/ease of implementation          Low
  Quantification of benefit            Medium
  Culture change                       Low
  Near term versus long term relief    Long-Term
  Compliance and sustainability        Low
  Reversibility of solution            Medium

10 Move licenses in/out of closed areas
The overall software license makeup is 60/40 restricted area/open area. Node-locked licenses often have to be purchased for closed-area programs with short lifespans, resulting in very poor utilization. Moving idle licenses from one area to another may be mutually beneficial, provided there are no owner restrictions and a license transfer mechanism exists. A positive of this approach is that it may mitigate under-utilization of licenses. A negative is that determining the license transfer mechanism will require some overhead for clarification.

Move licenses in/out of closed areas
  Criteria                             Values
  Up-front cost                        $10,000
  Denial                               2000
  Mission critical work                Mid-Low Support
  Cost/ease of implementation          Mid-Low
  Quantification of benefit            Medium
  Culture change                       Low
  Near term versus long term relief    Near-Term
  Compliance and sustainability        Low
  Reversibility of solution            High
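The tables above give each alternative a value on every criterion. ValueCharts-style tools combine such values with stakeholder weights using an additive multi-attribute value model: each criterion value is mapped to a utility in [0, 1], and alternatives are ranked by each stakeholder's weighted sum of utilities. The sketch below illustrates that computation for three of the licensing alternatives; the weights and the utility mappings are hypothetical choices made for this example, not numbers from the study.

```python
# Sketch of an additive multi-attribute value model over three of the
# licensing alternatives above. The weights and the mapping of ordinal
# levels to [0, 1] utilities are illustrative assumptions only.

LEVELS = {"Low": 0.0, "Medium": 0.5, "High": 1.0}  # assumed linear mapping

def cost_utility(cost, worst=1_000_000, best=0):
    """Linear utility for up-front cost: $0 scores 1.0, $1,000,000 scores 0.0."""
    return (worst - cost) / (worst - best)

def denial_utility(denials, worst=4000, best=0):
    """Linear utility for denials: 0 scores 1.0, 4000 scores 0.0."""
    return (worst - denials) / (worst - best)

def score(alt, weights):
    """Weighted sum of per-criterion utilities (weights sum to 1)."""
    return sum(weights[c] * u for c, u in alt.items())

# Utilities for a subset of criteria, converted from the tables above.
alternatives = {
    "Add more licenses now": {
        "cost": cost_utility(500_000),
        "denial": denial_utility(2000),
        "culture": LEVELS["Low"],
    },
    "Total engineering headcount": {
        "cost": cost_utility(1_000_000),
        "denial": denial_utility(0),
        "culture": LEVELS["Low"],
    },
    "Education/Outreach": {
        "cost": cost_utility(2000),
        "denial": denial_utility(2500),
        "culture": LEVELS["High"],
    },
}

# One stakeholder's (hypothetical) weights over the three criteria.
weights = {"cost": 0.5, "denial": 0.3, "culture": 0.2}

ranking = sorted(alternatives,
                 key=lambda a: score(alternatives[a], weights),
                 reverse=True)
print(ranking)
```

Run with a different weight vector for each participant, the same computation yields one ranking per stakeholder; juxtaposing those rankings is the kind of comparison a group decision tool surfaces.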