UBC Faculty Research and Publications

Development and preliminary user testing of the DCIDA (Dynamic computer interactive decision application) for ‘nudging’ patients towards high quality decisions. Bansback, Nick; Li, Linda C; Lynd, Larry; Bryan, Stirling. Aug 1, 2014


Full Text
RESEARCH ARTICLE Open Access

Development and preliminary user testing of the DCIDA (Dynamic computer interactive decision application) for ‘nudging’ patients towards high quality decisions

Nick Bansback1,2,3,4*, Linda C Li4,5, Larry Lynd3,6 and Stirling Bryan1,2

Abstract

Background: Patient decision aids (PtDA) are developed to facilitate informed, value-based decisions about health. Research suggests that even when informed with the necessary evidence and information, cognitive errors can prevent patients from choosing the option that is most congruent with their own values. We sought to utilize principles of behavioural economics to develop a computer application that presents information from conventional decision aids in a way that reduces these errors, subsequently promoting higher quality decisions.

Method: The Dynamic Computer Interactive Decision Application (DCIDA) was developed to target four common errors that can impede quality decision making with PtDAs: unstable values, order effects, overweighting of rare events, and information overload. Healthy volunteers were recruited to an interview to use three PtDAs converted to the DCIDA on a computer equipped with an eye tracker. Participants first used a conventional PtDA, and then subsequently used the DCIDA version. User testing assessed whether respondents found the software both usable: evaluated using a) eye-tracking, b) the system usability scale, and c) user verbal responses from a ‘think aloud’ protocol; and useful: evaluated using a) eye-tracking, b) whether preferences for options were changed, and c) the decisional conflict scale.

Results: Of the 20 participants recruited to the study, 11 were male (55%), the mean age was 35, 18 had at least a high school education (90%), and 8 (40%) had a college or university degree. Eye-tracking results, alongside a mean system usability scale score of 73 (range 68–85), indicated a reasonable degree of usability for the DCIDA. The think aloud study suggested areas for further improvement. The DCIDA also appeared to be useful to participants, wherein subjects focused more on the features of the decision that were most important to them (21% increase in time spent focusing on the most important feature). Seven subjects (35%) changed their preferred option when using the DCIDA.

Conclusion: Preliminary results suggest that DCIDA has potential to improve the quality of patient decision-making. Next steps include larger studies to test individual components of DCIDA and feasibility testing with patients making real decisions.

Keywords: Patient decision aids, Medical decision-making

* Correspondence: Nick.Bansback@ubc.ca
1School of Population and Public Health, University of British Columbia, 2206 East Mall, Vancouver, BC V6T 1Z3, Canada
2Centre for Clinical Epidemiology and Evaluation, Vancouver Coastal Research Institute, 7th Floor, 828 West 10th Avenue, Research Pavilion, Vancouver, BC V5Z 1M9, Canada
Full list of author information is available at the end of the article

Bansback et al. BMC Medical Informatics and Decision Making 2014, 14:62
http://www.biomedcentral.com/1472-6947/14/62

© 2014 Bansback et al.; licensee BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.

Background

In recent years, numerous patient decision aids (PtDA) have been developed to facilitate informed, value-based decisions about treatment options [1].
They have been developed in response to many beneficial treatments or screening strategies which also have negative aspects such as side-effects or high costs. What is best for one patient may be different to another depending on how each values the attributes of each option [2]. Health professionals are often poor proxies of patients’ values [3,4], and often fail to appropriately ‘diagnose’ patient preferences [5]. Patients can also have unrealistic expectations of treatment benefits and harms [6].

PtDAs provide facts about the condition, options, and attributes such as outcomes and probabilities for each option; a value clarification task that helps patients evaluate which attributes matter most to them; and a guide in the steps of deliberation and communication required for the informed patient to reach their goal – concordance between what matters most to them and their chosen option [7]. An updated Cochrane systematic review found that, among 115 studies involving 34,444 participants, PtDAs increase patients’ knowledge about treatment options, and reduce their decisional conflict related to feeling uninformed and unclear about their personal values [1].

There have been tremendous advances in the way PtDAs are developed, from the way risks are presented [8], to the use of animated stories to better communicate information [9]. However, there has been comparatively little research on reducing decision errors in people using PtDAs [10]. While in theory, PtDAs should help patients identify the best treatment option for them, research shows that various cognitive biases may result in errors that prevent this in some situations [11-14]. For example, individuals are known to make different choices when their options are framed as gains or losses, preferring a surgical procedure with a 90% survival rate to one with a 10% mortality rate [15].
Studies have shown that individual treatment choices are unduly influenced by whether individuals learn first about potential harms or potential benefits [10], and individuals are intimidated and overwhelmed by options that include numerous rare side-effects, leading to irrational decisions [16].

The objective of this study was to develop and user test a computer application that enhances conventional PtDAs to improve the quality of decisions by helping patients overcome common decision errors.

Development of DCIDA

Theoretical motivation

Normative decision theory suggests that for patients to approach treatment or screening decisions rationally they need to weigh benefits and harms using deliberative “compensatory strategies” to make trade-offs [17]. That is, a patient can “compensate” for the negative feature of one option by considering a positive feature. A patient looking at cancer screening options may not want to have annual testing, but would not mind if such frequent testing were non-invasive. While this approach helps people identify the treatment option that matches their informed values, descriptive decision theory has identified numerous errors in people’s decisions caused by cognitive biases and simplifying heuristics [12-15]. An understanding that people have two systems for cognitive functioning has provided a framework for understanding these errors and providing effective strategies for improving decision making [18]. System 1 refers to people’s intuitive system, which is typically fast, automatic, effortless, implicit, and emotional. While often useful for simple decisions, it can lead to decision errors in more difficult decisions, such as ones requiring compensatory strategies.
System 2 refers to reasoning that is slower, conscious, effortful, explicit, and logical. Recent research suggests that when faced with decisions and information that are unfamiliar, complex, or overwhelming – all common traits targeted by most PtDAs – people can switch to System 1 functioning, which can lead to decision errors [12,13].

By reviewing PtDAs contained in the Ottawa repository [19], the project team identified few examples where PtDAs helped patients make trade-offs, and beyond simple value clarification exercises gave patients little help in choosing what was best for them. The team identified four issues common to nearly all PtDAs that were believed could impede quality decision-making, and for which interventions were feasible: unstable preferences, order effects, overweighting of rare events, and information overload.

i) Unstable values: Unlike most goods and services, where markets help form stable values and consumers understand the relative value of attributes through a process of trial and error and a notion of sacrifice, evidence indicates that patients often have unstable values when it comes to health care [20,21]. This follows from the fact that the potential benefits and harms of treatment may be unfamiliar to patients and impossible to evaluate without a great deal of information and reflection. The consequence is that people’s decisions may be inconsistent with their true underlying values.

ii) Order effects: For a rational decision maker, the way information is presented should not have an influence on the choice that is made. However, there is evidence that people tend to, for example, remember information presented to them first (primacy effect) or last (recency effect) [22-24]. The decision error in PtDAs caused by this heuristic is demonstrated in a study of women at high risk of
breast cancer who were considering tamoxifen, which found that patients who learned first about the risks of the drug thought more favourably of tamoxifen than patients who learned first about the benefits [10]. Nevertheless, harms and benefits must be presented in some order in a PtDA, and therefore the designer of the PtDA may inadvertently influence the patient to choose a given option.

iii) Overweighting of rare events: Most treatments have multiple complications. PtDAs are expected to inform patients about any treatment complication that is reasonably likely to occur. Although there are no absolute criteria regarding which complications must be included for meeting informed consent, most PtDAs show information on any moderately severe complication that occurs at least 1% of the time, and serious complications that occur even less often than that. Prospect theory describes how people systematically overweight small probabilities in terms of their impact on decisions. The consequence of this phenomenon is that patients using PtDAs can be scared away from treatments with multiple rare side-effects that rationally would have appeared the best treatment option for them [14,16].

iv) Information overload: When individuals receive conflicting, incomplete, uncertain, or excessive information, they experience ambiguity and can make contradictory decisions [25]. The role of information overload in causing ambiguity in investment decision-making has been well documented. When the complexity of decision-making increases, people tend to expend less effort to actually make their decision, seek others to make decisions for them, or select default options if available [26].

A number of promising strategies have been uncovered for overcoming specific decision errors. One approach is to encourage people to use System 2 thinking instead of System 1 by making the information and decision less overwhelming. This can be achieved by focussing attention on the most pertinent information, and by using analytic processes which reduce the number of ‘internal calculations’ that require cognitive effort [27-30]. In the area of PtDAs, the predominant approach has been to employ formal decision analysis techniques which quantify patients' values and integrate them with probabilistic information [31-33]. While there are many perceived advantages to this prescriptive approach, there are also criticisms. First, ‘optimal’ options derived from decision analysis rely on assumptions, theories and inaccuracies in inputs, which mean they may not actually prescribe the best course of action for each patient [34]. Second, current approaches to decision analysis typically present the ‘optimal’ option overtly as the best course of action, and consequently have been argued to be an extension of paternalism, compromising patient autonomy [35].

An alternative strategy is to leverage System 1 thinking by changing the decision environment to maximize the odds that people will make high quality decisions given known biases [26]. For example, it is known that most people have a bias towards inaction, in which case providing a default option has been found to be a powerful decision enhancer [36]. These strategies have increasingly been referred to as ‘nudges’, reflecting the unavoidable paternalistic role of the designer of the tool (in this case, PtDAs) in influencing users’ choices [37].
Nudges have been defined as “…any aspect of the choice architecture that alters people’s behavior in a predictable way without forbidding any options or significantly changing their economic incentives…” [37], and so seek to preserve patient autonomy and freedom of choice.

We sought to employ decision analysis more covertly [38] to improve decision-making by testing various ‘nudges’ which help people focus on the information and options that reflect their values, and simplify their trade-offs.

Features

We developed a dynamic computer interactive decision application (DCIDA - pronounced ‘decider’) to employ some of the strategies described above. The overarching aim of the DCIDA is to present information and the decision to each person in an individualized way in order to maximize their ability to make choices that reflect their own informed, stable values. Acknowledging that it is rare that any decision support system will induce optimal decisions, the goal is to improve the quality of decisions from what would be made with conventional PtDAs. Multi-criteria decision analysis (MCDA) using a weighted additive model motivates the application. For treatment decisions, this assumes the preferred option is based on the sum of the importance or weight (on a 0–100 scale) of each attribute, say a benefit or harm, multiplied by each option’s score (on a 0–100 scale) for that attribute. The treatment with the highest weighted score or expected value indicates the patient’s optimal option.

The application contains the same content and information as a PtDA, explaining the condition, providing information about options and their characteristics (benefits, side-effects, costs etc.) using probabilities and pictographs to describe baseline and incremental absolute risks where appropriate, a value clarification exercise, and a summary of information to help the patient deliberate on the decision along with an opportunity to select the preferred option.
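The weighted additive MCDA rule described above can be sketched in a few lines. This is a minimal illustration, not the study's actual software; the attribute names and numbers are hypothetical.

```python
def mcda_preferred_option(weights, scores):
    """Rank options by the weighted additive (MCDA) rule.

    weights: {attribute: importance weight, 0-100}
    scores:  {option: {attribute: score, 0-100}}
    Returns (preferred option, {option: total weighted score}).
    """
    totals = {
        option: sum(weights[attr] * attr_scores[attr] for attr in weights)
        for option, attr_scores in scores.items()
    }
    return max(totals, key=totals.get), totals

# Hypothetical example: two attributes, two treatment options.
weights = {"pain relief": 70, "side-effects": 30}
scores = {
    "acetaminophen": {"pain relief": 50, "side-effects": 90},
    "NSAID": {"pain relief": 80, "side-effects": 40},
}
best, totals = mcda_preferred_option(weights, scores)
# NSAID has the higher total (6800 vs 6200), so it becomes the default option.
```

The option with the highest total becomes the ‘optimal’ default presented to the patient, who remains free to choose otherwise.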
However, the way this content and information is structured and organized differs, and where possible is individualized to make it simpler for each person to choose what is best for them. Figure 1 compares the pathway a patient would take between a conventional and DCIDA version of a PtDA. For example, the first unique feature of the DCIDA is that in step 1 the value clarification task, which is usually near the end of a conventional PtDA, is moved to the beginning. The objective of the task was to a) provide an opportunity for individuals to reflect on the relative value of attributes and subsequently derive more stable values (see “i) Unstable values” above) and b) generate the weights for each attribute for use later in the tool. After preliminary testing, we decided to use an interactive form of constant sum exercise (also known as a “budget pie”) which requires users to allocate a certain number of points (often 100) to each attribute in accordance with the relative importance of each. Constant sum exercises have a long history of use and incorporate a number of properties desirable for encouraging compensatory decisions [39], but have been criticized for requiring higher levels of numeracy [40]. We developed a simpler version that requires users to move multiple sliders, all linked to an interactive pie chart (Figure 2). For individuals with low numeracy, pie charts can be an effective format for communicating the “gist” of health knowledge and treatment choices [41].

Figure 1 Different pathways for obtaining information and indicating preferences for DCIDA vs conventional PtDA.
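One simple way to implement the constant-sum constraint behind the linked sliders is to rescale the remaining sliders whenever one is moved, so the weights always sum to the fixed budget. This is a sketch under that assumption, not the study's implementation; the attribute names are illustrative.

```python
def move_slider(weights, attr, new_value, budget=100):
    """Set one attribute's weight and rescale the others so the
    total still equals the budget (constant-sum constraint)."""
    new_value = max(0, min(budget, new_value))
    others = [a for a in weights if a != attr]
    remaining = budget - new_value
    old_sum = sum(weights[a] for a in others)
    updated = dict(weights)
    updated[attr] = new_value
    for a in others:
        # Distribute the remaining budget in proportion to the old weights;
        # fall back to an even split if the other sliders were all zero.
        share = weights[a] / old_sum if old_sum else 1 / len(others)
        updated[a] = remaining * share
    return updated

# Hypothetical attributes: raising "benefit" to 70 rescales the rest to sum to 30.
w = move_slider({"benefit": 40, "side-effects": 40, "cost": 20}, "benefit", 70)
```

Rescaling proportionally (rather than deducting from one neighbour) keeps the relative importance of the untouched attributes intact, which suits the compensatory framing of the exercise.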
As participants navigated through the following steps of the DCIDA version, they could return at any time to the budget pie exercise to change the weights allocated to different attributes of the decision.

Figure 2 Example of constrained interactive pie chart.

After the values clarification task, Step 2 presented participants with detailed information (e.g. the actual risk) for each attribute. Where possible, the information was explained in a simple format, including pictographs where necessary to communicate risks. Importantly, this information was ordered in accordance with the rank of the individual’s weights – obtained from the value clarification exercise. For each attribute, the patient was asked to rate how important the attribute (an outcome such as a benefit or harm) was for each option. This was used to create the score for each attribute. In our preliminary testing we used a simple 5-point scale to derive numerical scores.

In step 3, the summary information for all consequences was displayed. In contrast to a conventional PtDA: 1) the consequences were ordered in accordance with the rank of the individual’s weights – obtained from the value clarification exercise. This aimed to exploit order effects by nudging individuals to focus on the information that would most significantly influence their decision (see “ii) Order effects” above); 2) rows were further sized in proportion to the weights of each consequence, with the most important consequences being presented in wide rows and less important consequences in narrow rows. This served to take attention away from rare events for the majority of people who rated these attributes as having low importance in the value clarification exercise (see “iii) Overweighting of rare events” above); 3) for each consequence, the colour for each option was based on the score, with a lighter shade of grey indicating a more preferred option.
Colouring aimed to simplify the information presented (akin to traffic light labelling for the nutritional content of food [42]), enabling individuals to process multiple pieces of information and distinguish between harms and benefits (see “iv) Information overload” above); 4) the sum of the weights and scores was used to determine which option would be preferred using MCDA (Figure 3). This indicated the ‘optimal’ choice for a given individual and became the default option for the participant, helping overcome information overload. On the summary page, users were able to select an option other than the default optimal choice; however, the presence of a selected default option has proven to help overcome ambiguity [36].

Figure 3 Example of conventional MCDA weights and scores, and total scores and DCIDA.

Development

Three PtDAs available to the developers were transformed into DCIDA versions. The goal was to ensure the DCIDA was usable for a broad range of PtDAs, each of which has a different set of individual characteristics. The first PtDA used information on newly developed over-the-counter medication choices for the treatment of knee osteoarthritis, where it is suspected patients underestimate the risks associated with high doses. The options were no treatment, acetaminophen (lower benefit, but fewer side-effects), and non-steroidal anti-inflammatory drugs (NSAIDs) (greater benefit, but greater side-effects). The second PtDA focussed on treatment options for patients with Obstructive Sleep Apnea (OSA). The primary treatment option is continuous positive airway pressure (CPAP), which is effective but inconvenient to use. Less invasive alternatives such as oral appliances or no treatment are also options. The third PtDA considers chemotherapy options for patients with late stage non-small cell lung cancer (NSCLC).
Treatments with higher efficacy are also associated with more frequent and severe side-effects. All three PtDAs were conceived using the Ottawa Framework [43] and were reviewed against International Patient Decision Aid Standards (IPDAS) guidelines [7]. DCIDA versions of each PtDA were made using exactly the same information. An example of the DCIDA version of the PtDAs is available at: http://dcida.cheos.ubc.ca/osa1.

Methods

Overview

We focussed our user testing on both usability (whether the user can do what they want to do without hindrance, hesitation, or questions) and usefulness (does it help the user make a better decision) [44]. Two common approaches to user testing were used – eye-tracking and a think aloud protocol – and these were supplemented with various validated questionnaires. Ethical approval was granted by the University of British Columbia Ethics Board.

Participants and procedures

A sample of healthy, English-speaking volunteers was recruited through online advertisements and posters. Participants were seated at a computer equipped with an eye-tracker and the interviewer explained the purpose of the study. After gaining consent, the participant went through the calibration procedure to initialize the eye-tracking system. The participant was then asked to choose which clinical scenario they wanted to imagine they were facing: Knee Osteoarthritis, OSA, or NSCLC, and proceed to complete an online version of a conventional PtDA followed by the DCIDA version created for their chosen scenario.

Figure 1 shows the flow for the conventional PtDA. After selecting their preferred option, participants were asked to indicate their uncertainty in their decision by completing the Decisional Conflict Scale (DCS). They then went through the DCIDA version (right side of Figure 1) and were again asked to choose their preferred option, and to rate their uncertainty as measured by the DCS. Finally, they were asked to complete the System Usability Scale (SUS) focussing on the DCIDA.
After completing the tool, an interview was used to discover users’ impressions about the tool and their experiences. Upon completion of the tasks, the user was compensated with a $25 gift certificate.

Eye-tracking

Eye tracking is a promising method for usability testing since it can evaluate individuals’ information processing while they deliberate on decisions [45]. It makes it possible to determine what type of information individuals look at and to what extent information is processed. The eye-tracker method is widely used in marketing research as well as research on cognitive processing and decision-making processes [46].

When individuals look at information while reading or searching, they continually make rapid eye movements called saccades. Between saccades, the eyes remain relatively still for approximately 200–500 ms [47]. Individuals do not obtain new information during a saccade because the eyes are moving too quickly. Rather, higher levels of information processing require deeper cognitive processes that can only occur while the eyes are fixated. Research on information processing is therefore mainly concerned with fixation durations [47]. Fixation durations are assessed by an infrared camera system built into the eye tracker. This camera measures the light reflex on the cornea of individuals sitting in front of the computer screen.

In this study, the eye-tracking data was used for two purposes. First, to assess user experience, we analyzed heatmaps of each page of the tool to ensure respondents were consistently looking at and reading the important aspects of the design, such as instructions. Heatmaps visually display the areas in which fixations on each page occur. Second, to assess usefulness, we compared fixations between the conventional display and the DCIDA display.
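The relative fixation duration measure used in the analysis that follows is the share of total viewing time spent fixating on each area of interest. A minimal sketch, with hypothetical area-of-interest labels and times:

```python
def relative_fixation_durations(aoi_durations):
    """Relative fixation duration per area of interest (AOI):
    time fixating on each AOI divided by the total time spent
    looking at the whole summary screen.

    aoi_durations: {AOI name: fixation duration in seconds}
    Returns {AOI name: fraction of total viewing time}.
    """
    total = sum(aoi_durations.values())
    return {aoi: t / total for aoi, t in aoi_durations.items()}

# Hypothetical fixation times (seconds) on a 40-second summary view.
rel = relative_fixation_durations({
    "most important attribute": 8.0,
    "2nd most important attribute": 6.0,
    "other areas": 26.0,
})
# rel["most important attribute"] == 0.2, i.e. 20% of viewing time
```

Normalizing by total viewing time makes the measure comparable across participants who spent different absolute amounts of time on the summary screen.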
To analyze the eye-tracker data, we subdivided areas of the summary screen into areas of interest based on each attribute. The time individuals spent looking at relevant information in relation to the total time needed to look at the whole summary information was used to indicate which attributes participants were spending time deliberating on [47]. It is expressed by the relative fixation duration, that is, the percentage of the time spent fixating on each attribute relative to the time spent looking at the whole summary information [47]. A Tobii T120 eye tracker embedded in a 17” display was used.

Decisional conflict scale (DCS)

After stating a preference for one treatment, participants were asked to evaluate their uncertainty in their decision based on a subscale of the DCS [48]. The DCS is a validated scale that assesses patients’ conflict and uncertainty in their decision. While the full scale comprises 16 items divided into 5 subscales – uncertainty, inadequate knowledge, values clarity, lack of support, and ineffective choice – we focussed simply on the uncertainty subscale, the component in which the DCIDA attempts to increase confidence in the decision. This subscale includes three items: how clear the patient is about the best choice, how sure they feel about that choice, and how easy the choice was to make. All items are reported on a 5-point Likert scale from strongly agree to strongly disagree. Lower scores are desirable as they indicate less conflict. An effect size of 0.06 to 0.3 has been reported to discriminate between decision-supporting interventions [49].

System Usability Scale (SUS)

After completing the tool, participants were asked to answer an adapted version of the SUS. This validated scale asks 10 questions about aspects of user friendliness, content integration, and support needed to use the tool, providing a score between 0 and 100 [50]. The SUS is a commonly used quantitative assessment of usability.
It is useful for rough comparison purposes, including assessing the effects of changes from one prototype iteration to the next, and for drawing preliminary conclusions about the overall usability of a system. To our knowledge there is no established SUS threshold for usability; however, previous studies have shown that a SUS score above 68 is above average for all studies that have used the scale, while a score above 74 would place the system in the top 30% [51]. We chose to adopt these thresholds for our study.

Think aloud study

We used a verbal protocol analysis, a form of ‘think aloud’ technique, to further investigate respondents’ choices [52]. Think aloud data can be obtained in two ways: concurrent, where respondents are asked to verbalize their thoughts as they complete a task, and retrospective, where respondents are asked to describe what they were thinking after the task was completed. Following experience from previous studies, we used a hybrid approach whereby respondents were asked to think aloud as they completed the tool; however, if they did not think aloud for a period of 10 seconds, the interviewer would ask them to reflect back on their choices [53]. This approach interferes less with respondents’ thought processes while still allowing an exploration of how respondents were making choices. Respondents were asked not to explain or plan what they were saying, but to act as if they were speaking to themselves. Following the survey, the interviewer asked respondents debriefing questions. In general, respondents were asked how they found the information and choices they were presented with and how they would improve the tool. The interviews were tape-recorded and later transcribed. Responses were coded by each step in the tool and whether they were related to user experience or usability. Two independent reviewers then coded the valence of each comment (e.g.
positive, negative or neutral) and differences were resolved by discussion.

Results

Participants

In total, 20 participants were recruited via posters and completed the study. Eleven participants were male (55%), 15 participants were white, and the mean age was 35 (range 19–59) (Table 1). All but two participants had at least a high school education and 8 (40%) had a college or university degree. None of the participants were suffering from a serious illness and only three participants were currently taking prescription medications. Twelve respondents chose to complete the OSA version and four each chose the NSCLC and Osteoarthritis versions.

Table 1 Participant characteristics

                                            Value
Age, mean (range)                           35.2 (19–59)
Sex, % male                                 55%
Race, n (%)
  White                                     15 (75%)
  Asian                                     5 (25%)
Education, n (%)
  At least high school                      18 (90%)
  At least college or university graduate   8 (40%)

User experience

Eye-tracking

In general, heat maps suggested participants were reading all the relevant information on each page. For the first 4 participants, it was noted that there were few fixations on the titles of the scales of the value clarification task (whether each attribute was more or less important). The titles were increased in size and bolded, and this led to increased fixations in subsequent participants.

SUS

The mean SUS score for DCIDA was 73 and ranged from 68 to 85. This suggests that all participants considered the tool better than average interfaces and that the tool has reasonable usability overall. The lowest scores related to a perception that the tool was unnecessarily complex. There was no difference between the different PtDA versions.

Think aloud analysis

In total, the think aloud analysis yielded 65 comments relating to user experience. Positive comments were generally around the interactive features of the tool and its ease of moving from step to step. The subjects of negative comments included the amount of words required to read, the wording of key instructions, and a lack of intuitiveness in how to interact with some features (such as the value clarification exercise). Overall, 16 out of 20 (80%) stated they had no major issues while using the tool. The 4 participants that suggested the tool was difficult to use were all in the oldest age quartile. Points of improvement included: provision of examples to show how to interact with key features (9 out of the 20 [45%]), clearer colours, speed of the software, and wording of certain questions (all less than 25% of participants).

Usefulness

Eye-tracking

Regardless of the type of summary, we observed an order effect whereby respondents spent more time observing the attributes at the top of the list (23% of time spent on the first attribute) versus the bottom of the list (13% of the time spent on the last attribute). This influenced the amount of time individuals fixated on attributes they felt were more important to them (Table 2). In the conventional summary, 18% of fixation duration was spent on the most important attribute, followed by 16% of duration on the second most important attribute. The DCIDA summary demonstrated an increase to 30% and 18% respectively. Similarly, in the conventional display, 12% of time was spent on the least important attribute, compared to only 5% of time using the DCIDA. Analyzing the subgroup of participants that changed their preferred option between the conventional and DCIDA summaries shows even greater differences in fixations (Table 2). The heatmaps in Figure 4 describe the influence of DCIDA on two individuals.

Table 2 Results of system usability scale, decisional conflict, and eye tracking

                                                 Conventional   DCIDA
System Usability Score, mean (range)             -              74 (68–85)
Decisional conflict uncertainty subscale (0 = good, 4 = bad)
  I am clear about the best choice for me        2.9            2.3
  I feel sure about what to choose               2.8            2.2
  This decision is easy for me to make           3.4            3.1
  Overall score                                  50.4           38.3
Mean fixation duration (secs)
  Most important attribute                       4.7            5.6
  2nd most important attribute                   4.2            3.3
  Least important attribute                      3.1            1.1
  Other attributes                               14.1           8.5
  Other areas of the screen                      16.2           12.8
  Total                                          42.3           31.3
Mean relative fixation
  Most important attribute                       18%            30%
  2nd most important attribute                   16%            18%
  Least important attribute                      12%            6%
Relative fixation – in 7 participants who changed preference
  Most important attribute                       17%            34%
  2nd most important attribute                   14%            19%
  Least important attribute                      10%            5%

Quantitative responses

For the 12 participants using the OSA tool, based on the conventional display, 5 participants chose the Oral Appliance while 6 chose CPAP. Two respondents were ‘very sure’ of their decision, while 4 respondents were ‘not very sure,’ with the rest being ‘moderately’ or ‘somewhat sure.’ When presented with the DCIDA version of the summary information, 4 of the 12 participants changed their preferred option. The cancer and osteoarthritis tools produced similar results, with 2 participants changing their decision for the cancer tool and 1 for the osteoarthritis tool. Overall, the decisional conflict uncertainty subscore was 50.4 for the conventional summary, reducing to 38.3 in the DCIDA version (Table 2).

Figure 4 Heatmaps of 2 exemplar respondents (where colour represents the proportion of time spent fixating in areas within the defined cell space).

Think aloud analysis

Of the 45 comments coded for usefulness, 28 (62%) were positive.
The predominant positive themes were that thetreatment information was easy to access and the DCIDAnts the proportion of time spent fixating in areas within theBansback et al. BMC Medical Informatics and Decision Making 2014, 14:62 Page 10 of 13http://www.biomedcentral.com/1472-6947/14/62summary information either confirmed or improved theirtreatment choice. Negative comments were from individ-uals who felt that they knew their decision and were frus-trated that they had to negotiate all the steps beforeindicating their preference option. Four of the participantssuggested that they felt ‘nudged’ to one of the choices, in apositive frame. When prompted for further clarity, it wasnot clear whether they felt this had encroached their au-tonomy to choose or not.DiscussionThis explorative study investigated the feasibility ofusing nudges based on MCDA to improve the quality oftreatment decisions. Software was developed to enableinformation from conventional PtDAs to be restructuredin how it is presented to the user through dynamic andinteractive interfaces. The software was tested for bothusability and usefulness in 20 participants. Both aspectsof user testing provided some positive results, and pro-vide important information for future development.There is limited research on the use of behavioural eco-nomic approaches to improve patients’ use of PtDA. Astudy by Ubel et al. found that order effects in decisionaids could be unbiased by providing the patient further in-formation (in graphical form) [10]. This is one approachfor encouraging System 2 thinking, yet a concern with tar-geting numerous biases through this general technique isthat providing more information can sometimes over-whelm people, causing them to revert back to System 1thinking [54]. 
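The MCDA-style restructuring underlying this approach — weighting each attribute by the user's stated importance, then reordering and scoring options accordingly — can be sketched as follows. This is a minimal illustration only, not the DCIDA implementation; the attribute names, weights, and ratings are hypothetical.

```python
# Minimal sketch of MCDA-style weighting: attributes are ordered by the
# user's importance weights, and each option receives a weighted-sum score.
# All names and numbers below are illustrative, not taken from DCIDA.

def rank_attributes(weights):
    """Order attribute names from most to least important to the user."""
    return sorted(weights, key=weights.get, reverse=True)

def weighted_score(ratings, weights):
    """Weighted-sum MCDA score for one option (ratings on a 0-1 scale)."""
    total = sum(weights.values())
    return sum(weights[a] * ratings[a] for a in weights) / total

# Hypothetical values-clarification output (higher = more important)
weights = {"effectiveness": 5, "side effects": 3, "convenience": 1}

# Hypothetical 0-1 ratings of two options on each attribute
options = {
    "CPAP":           {"effectiveness": 0.9, "side effects": 0.4, "convenience": 0.3},
    "Oral appliance": {"effectiveness": 0.6, "side effects": 0.7, "convenience": 0.8},
}

display_order = rank_attributes(weights)  # most important attribute shown first
scores = {name: weighted_score(r, weights) for name, r in options.items()}
# CPAP: (5*0.9 + 3*0.4 + 1*0.3) / 9 = 6.0/9 ≈ 0.67
```

A summary built this way places the attributes that matter most to a given user at the top of the display, which is the opposite of a fixed-order conventional summary where position is unrelated to individual importance.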
The DCIDA approach has instead sought to enable users to read less information, but to focus on the information most likely to influence their choice.

While there has been substantial attention to 'nudge' theory in health [55,56], to our knowledge this theory has not been tested in PtDAs. Default options have become the predominant 'nudge' used in health interventions to date [57], but these have typically selected a single default option for all users. For example, organ donation programs may 'nudge' patients to enrol by making organ donation the default option. It has been proposed that nudges could be used in PtDAs for conditions where the evidence clearly indicates that one treatment option is superior to the others [58]. This is controversial, as most PtDAs are developed for preference-sensitive decisions where two or more medically appropriate options exist, and they seek to promote rather than diminish patient autonomy. DCIDA has been designed as a bridge between non-prescriptive PtDAs and overtly prescriptive decision analysis tools.

The objective of this study was to examine whether there was a difference in response between the two versions of the PtDAs; if no difference was observed, we would reject the hypothesis that the DCIDA version had any impact. While we establish some preliminary demonstration of effectiveness, this study alone cannot ascertain whether the impact is real or useful, and it should be interpreted with caution for three primary reasons. First, participants considered the conventional summary before the DCIDA version; an ordering effect might therefore have occurred whereby participants became more informed as they spent more time viewing the information. On average, 42 seconds were spent viewing the conventional display versus 31 seconds on the DCIDA display. It is also difficult to disentangle the effects of each aspect of the DCIDA version that differed from the conventional summary, such as the values clarification exercise, the layout, or the colour hues.
While it would be unfeasible to investigate the impact of each design feature, we have subsequently evaluated the impact of ordering effects in a larger controlled study [59]. A second limitation relates to placing the values clarification exercise at the beginning of the decision aid. We deliberately presented this exercise up front to elicit more stable values from participants. However, recent studies have suggested that it can be problematic to engage in importance weighting too soon in the decision-making process [60,61]. A related third reason is that we do not know whether participants who changed their decision actually made an improved choice. This challenge of measuring the quality of patients' choices is a limitation in all research on PtDAs [62].

Further, we acknowledge some limitations to our use of standard measures. With regard to the System Usability Scale (SUS), this measure has been used frequently for usability evaluations of Internet-based interfaces, but it is not typically used for evaluations of Internet-based PtDAs. Validation of the scale was based on studies of interfaces up to 18 years old and, thus, there are contextual and design differences between DCIDA and the average tools used to validate the SUS. However, by triangulating our SUS scores with our think aloud and eye-tracking results, we determined that the initial prototype has acceptable usability, though we aim to improve it in future iterations of the prototype and to use and report a newer iteration of the SUS developed by Bangor et al. [63]. Additionally, we chose to use the Uncertainty subscale of the Decisional Conflict Scale and acknowledge that our analysis would have benefitted from also using the Values Clarity subscale [49].
Use of this additional measure would have contributed to our understanding of how DCIDA impacted participants' ability to arrive at stable values.

While the usability scores and improvements in decisional conflict are encouraging, they suggest there is still opportunity to further improve the tool. At the time of data collection, the DCIDA software was in alpha stage, and the results of this research have motivated us to move to a different platform for the beta version. The favourable usability results may also be due to the hypothetical nature of the task: participants did not have the diseases and were aware they were testing a tool. In addition, subjects were majority college-educated and had access to, and comfort using, computers. These findings may not be generalizable to a more heterogeneous population with lower education or computer proficiency. We are also cognisant that it will be important for the software to be compatible with Internet use on tablets, which will require separate testing.

Given these opportunities for future research, we plan to further explore the influence of the DCIDA in subsequent studies. In these, individuals will be randomized between a conventional PtDA and a DCIDA version, and we plan to determine whether DCIDA's unique design features lead to improvements in decision quality, including concordance between what matters most to individual patients and their chosen option. We propose to consider carefully how value concordance is measured. There are no standard criteria for studying values concordance, and a recent Cochrane review [1] shows that there is substantial heterogeneity among the measures that authors have used to date. We agree with the growing number of researchers calling for further study into the "active ingredients" of values clarification [64] and the creation of standard measures for analyzing values congruence [62,65]. Such research will assist us in identifying what proportion of people make values-congruent decisions when they use DCIDA in comparison to conventional tools. We also plan to ask questions about patients' attitudes to the role of nudges in making autonomous decisions [38]. Finally, we believe it will be crucial to include patients of varying health literacy and numeracy to examine the influence of the tool in different patient groups.

Conclusion
The DCIDA has been developed to enhance conventional PtDAs to assist patients in choosing the treatment that is most congruent with their informed values. This paper reports on the theoretical motivation for the DCIDA and then describes an experiment in which the tool is user tested. The results give some empirical support that the DCIDA is understandable to users and that it can help users focus on attributes that are of individual importance to them – to the extent that some participants changed their decisions. A number of valuable insights were gained for improving the next version of the DCIDA. In conclusion, we propose that the DCIDA is a promising approach to improving conventional PtDAs. Further development is required to improve its usability and usefulness; however, research testing its preliminary effectiveness on patient decision-making is justified.

Abbreviations
PtDA: Patient decision aid; DCIDA: Dynamic computer interactive decision application; NSCLC: Non-small cell lung cancer; IPDAS: International patient decision aid standards; SUS: System usability scale; DCS: Decisional conflict scale; OSA: Obstructive sleep apnea; CPAP: Continuous positive airway pressure; NSAIDs: Non-steroidal anti-inflammatory drugs.

Competing interests
Dr Bansback was funded by Pfizer Canada for postdoctoral research relating to the development of methods for improving decisions.
Funding was unrestricted and had no bearing on the treatments considered.

Authors' contributions
NB conceived the project, carried out the interviews, performed the statistical analysis, and drafted the manuscript. LCL participated in the study design and helped draft the manuscript. LL and SB helped conceive the project, interpret the statistical analysis, and draft the manuscript. All authors read and approved the final manuscript.

Acknowledgements
At the time of this work, Nick Bansback was funded through a postdoctoral fellowship sponsored by the Canadian Arthritis Network and Pfizer Canada. Funding for the study was provided by the Canadian Centre for Applied Research in Cancer Control (ARCC), which is funded by the Canadian Cancer Society Research Institute. We are grateful to the three reviewers who greatly improved the manuscript with helpful suggestions, and to Sarah Munro for her editorial assistance.

Author details
1School of Population and Public Health, University of British Columbia, 2206 East Mall, Vancouver, BC V6T 1Z3, Canada. 2Centre for Clinical Epidemiology and Evaluation, Vancouver Coastal Research Institute, 7th Floor, 828 West 10th Avenue, Research Pavilion, Vancouver, BC V5Z 1M9, Canada. 3Centre for Health Evaluation and Outcome Sciences, St Paul's Hospital, 588-1081 Burrard St, Vancouver, BC V6Z 1Y6, Canada. 4Arthritis Research Centre of Canada, 5591 No. 3 Road, Richmond, BC V6X 2C7, Canada. 5Department of Physiotherapy, University of British Columbia, 212 – 2177 Wesbrook Mall, Vancouver, BC V6T 1Z3, Canada. 6Faculty of Pharmaceutical Sciences, University of British Columbia, 2405 Wesbrook Mall, Vancouver, BC V6T 1Z3, Canada.

Received: 14 August 2013   Accepted: 25 July 2014   Published: 1 August 2014

References
1. Stacey D, Légaré F, Col NF, Bennett CL, Barry MJ, Eden KB, Holmes-Rovner M, Llewellyn-Thomas H, Lyddiatt A, Thomson R, Trevena L, Wu JH: Decision aids for people facing health treatment or screening decisions. Cochrane Database Syst Rev 2014, 1:CD001431.
2. Wennberg JE: Unwarranted variations in healthcare delivery: implications for academic medical centres. BMJ 2002, 325(7370):961–964.
3. Montgomery AA, Fahey T: How do patients' treatment preferences compare with those of clinicians? Qual Health Care 2001, 10(Suppl 1):i39–i43.
4. Lee CN, Hultman CS, Sepucha K: Do patients and providers agree about the most important facts and goals for breast reconstruction decisions? Ann Plast Surg 2010, 64(5):563–566.
5. Mulley AG, Trimble C, Elwyn G: Stop the silent misdiagnosis: patients' preferences matter. BMJ 2012, 345:e6572.
6. Charles C, Gafni A, Whelan T: Decision-making in the physician–patient encounter: revisiting the shared treatment decision-making model. Soc Sci Med 1999, 49(5):651–661.
7. Elwyn G, O'Connor A, Stacey D, Volk R, Edwards A, Coulter A, Thomson R, Barratt A, Barry M, Bernstein S, Butow P, Clarke A, Entwistle V, Feldman-Stewart D, Holmes-Rovner M, Llewellyn-Thomas H, Moumjid N, Mulley A, Ruland C, Sepucha K, Sykes A, Whelan T; International Patient Decision Aids Standards (IPDAS) Collaboration: Developing a quality criteria framework for patient decision aids: online international Delphi consensus process. BMJ 2006, 333(7565):417.
8. Zikmund-Fisher BJ, Fagerlin A, Ubel PA: Improving understanding of adjuvant therapy options by using simpler risk graphics. Cancer 2008, 113(12):3382–3390.
9. Li LC, Adam P, Townsend AF, Stacey D, Lacaille D, Cox S, McGowan J, Tugwell P, Sinclair G, Ho K, Backman CL: Improving healthcare consumer effectiveness: an Animated, Self-serve, Web-based Research Tool (ANSWER) for people with early rheumatoid arthritis. BMC Med Inform Decis Mak 2009, 9(1):40.
10. Ubel PA, Smith DM, Zikmund-Fisher BJ, Derry HA, McClure J, Stark A, Wiese C, Greene S, Jankovic A, Fagerlin A: Testing whether decision aids introduce cognitive biases: results of a randomized trial. Patient Educ Couns 2010, 80(2):158–163.
11. Tversky A, Kahneman D: Judgment under uncertainty: heuristics and biases. Science 1974, 185(4157):1124.
12. Chapman GB, Elstein AS: Cognitive processes and biases in medical decision making. In Decision Making in Health Care: Theory, Psychology and Applications. Edited by Chapman GB, Sonnenberg FS. Cambridge: Cambridge University Press; 2000:183–210.
13. Redelmeier DA, Rozin P, Kahneman D: Understanding patients' decisions: cognitive and emotional perspectives. JAMA 1993, 270(1):72–76.
14. Ubel PA: Is information always a good thing? Helping patients make "good" decisions. Med Care 2002, 40(9 Suppl):V39.
15. McNeil BJ, Pauker SG, Sox HC, Tversky A: On the elicitation of preferences for alternative therapies. N Engl J Med 1982, 306(21):1259–1262.
16. Amsterlaw J, Zikmund-Fisher BJ, Fagerlin A, Ubel PA: Can avoidance of complications lead to biased healthcare decisions? Judgment and Decision Making 2006, 1(1):64–75.
17. O'Connor AM, Tugwell P, Wells GA, Elmslie T, Jolly E, Hollingworth G, McPherson R, Drake E, Hopman W, Mackenzie T: Randomized trial of a portable, self-administered decision aid for postmenopausal women considering long-term preventive hormone therapy. Med Decis Making 1998, 18(3):295–303.
18. Stanovich KE, West RF: Individual differences in reasoning: implications for the rationality debate? Behav Brain Sci 2000, 23(5):645–665.
19. Decision Aid Library Inventory – Patient Decision Aids – Ottawa Hospital Research Institute [Internet]. [cited 2010 Nov 5]. Available from: http://decisionaid.ohri.ca/cochinvent.php.
20. Shiell A, Hawe P, Fletcher M: Reliability of health utility measures and a test of values clarification. Soc Sci Med 2003, 56(7):1531–1541.
21. Shiell A, Seymour J, Hawe P, Cameron S: Are preferences over health states complete? Health Econ 2000, 9(1):47–55.
22. Scott A, Vick S: Patients, doctors and contracts: an application of principal–agent theory to the doctor–patient relationship. Scott J Polit Econ 2003, 46(2):111–134.
23. Stewart JM, O'Shea E, Donaldson C, Shackley P: Do ordering effects matter in willingness-to-pay studies of health care? J Health Econ 2002, 21(4):585–599.
24. Kjær T, Bech M, Gyrd-Hansen D, Hart-Hansen K: Ordering effect and price sensitivity in discrete choice experiments: need we worry? Health Econ 2006, 15(11):1217–1228.
25. Epstein LG: A definition of uncertainty aversion. Rev Econ Stud 1999, 66(3):579–608.
26. Thaler RH, Sunstein CR: Libertarian paternalism. American Economic Review 2003, 93(2):175–179.
27. Payne JW, Bettman JR, Johnson EJ: The Adaptive Decision Maker. New York: Cambridge University Press; 1993.
28. Todd P, Benbasat I: Inducing compensatory information processing through decision aids that facilitate effort reduction: an experimental assessment. J Behav Decis Mak 2000, 13(1):91–106.
29. Todd PA, Benbasat I: The influence of decision aids on choice strategies under conditions of high cognitive load. IEEE Trans Syst Man Cybern 1994, 24(4):537–547.
30. Carrigan N, Gardner PH, Conner M, Maule J: The impact of structuring information in a patient decision aid. Psychol Health 2004, 19(4):457–477.
31. Dolan J: Multi-criteria clinical decision support: a primer on the use of multiple-criteria decision-making methods to promote evidence-based, patient-centered healthcare. Patient 2010, 3(4):229–248.
32. Dowie J, Kjer Kaltoft M, Salkeld G, Cunich M: Towards generic online multicriteria decision support in patient-centred health care. Health Expectations, in press. Available from: http://onlinelibrary.wiley.com/doi/10.1111/hex.12111/full.
33. Montgomery AA, Emmett CL, Fahey T, Jones C, Ricketts I, Patel RR, Peters TJ, Murphy DJ: Two decision aids for mode of delivery among women with previous caesarean section: randomised controlled trial. BMJ 2007, 334(7607):1305.
34. Russell LB, Schwartz A: Looking at patients' choices through the lens of expected utility: a critique and research agenda. Med Decis Making 2012, 32(4):527–531.
35. Elwyn G, Edwards A, Eccles M, Rovner D: Decision analysis in patient care. Lancet 2001, 358(9281):571–574.
36. Ritov I, Baron J: Status-quo and omission biases. J Risk Uncertain 1992, 5(1):49–61.
37. Thaler RH, Sunstein CR: Nudge: Improving Decisions About Health, Wealth and Happiness. New Haven, CT: Yale University Press; 2008.
38. Felsen G, Castelo N, Reiner PB: Decisional enhancement and autonomy: public attitudes towards overt and covert nudges. Judgment and Decision Making 2013, 8(3):202–213.
39. Ryan M, Shackley P: Assessing the benefits of health care: how far should we go? Qual Health Care 1995, 4(3):207.
40. Mullen P, Spurgeon P: Priority Setting and the Public. Abingdon: Radcliffe Publishing; 1999.
41. Hawley ST, Zikmund-Fisher B, Ubel P, Jancovic A, Lucas T, Fagerlin A: The impact of the format of graphical presentation on health-related knowledge and treatment choices. Patient Educ Couns 2008, 73(3):448–455.
42. Hieke S, Wilczynski P: Colour Me In – an empirical study on consumer responses to the traffic light signposting system in nutrition labelling. Public Health Nutr 2012, 15(5):773–782.
43. O'Connor AM, Drake ER, Fiset V, Graham ID, Laupacis A, Tugwell P: The Ottawa patient decision aids. Eff Clin Pract 1999, 2(4):163–170.
44. Rubin J, Chisnell D: Handbook of Usability Testing: How to Plan, Design, and Conduct Effective Tests. New York: John Wiley & Sons; 2011.
45. Duchowski A: Eye Tracking Methodology: Theory and Practice. London: Springer; 2007.
46. Horstmann N, Ahlgrimm A, Glöckner A: How distinct are intuition and deliberation? An eye-tracking analysis of instruction-induced decision modes. Judgment and Decision Making 2009, 4:335–354.
47. Goldberg JH, Kotval XP: Computer interface evaluation using eye movements: methods and constructs. Int J Ind Ergon 1999, 24(6):631–645.
48. O'Connor AM: Validation of a decisional conflict scale. Med Decis Making 1995, 15(1):25–30.
49. O'Connor AM: User Manual – Decisional Conflict Scale [Internet]. 2010. Available from: www.ohri.ca/decisionaid.
50. Brooke J: System Usability Scale (SUS): A Quick-and-Dirty Method of System Evaluation User Information. Reading, UK: Digital Equipment Co. Ltd; 1986.
51. Bangor A, Kortum PT, Miller JT: An empirical evaluation of the System Usability Scale. Int J Hum-Comput Int 2008, 24(6):574–594.
52. Ericsson KA, Simon HA: Verbal reports as data. Psychol Rev 1980, 87(3):215–251.
53. Miguel FS, Ryan M, Amaya-Amaya M: "Irrational" stated preferences: a quantitative and qualitative investigation. Health Econ 2005, 14(3):307–322.
54. Schwartz B, Ward A, Monterosso J, Lyubomirsky S, White K, Lehman DR: Maximizing versus satisficing: happiness is a matter of choice. J Pers Soc Psychol 2002, 83(5):1178.
55. Marteau TM, Ogilvie D, Roland M, Suhrcke M, Kelly MP: Judging nudging: can nudging improve population health? BMJ 2011, 342:d228.
56. Loewenstein G, Asch DA, Friedman JY, Melichar LA, Volpp KG: Can behavioural economics make us healthier? BMJ 2012, 344:e3482.
57. Johnson EJ, Goldstein D: Do defaults save lives? Science 2003, 302(5649):1338–1339.
58. Blumenthal-Barby JS, Cantor SB, Russell HV, Naik AD, Volk RJ: Decision aids: when "nudging" patients to make a particular choice is more ethical than balanced, nondirective content. Health Aff 2013, 32(2):303–310.
59. Bansback N, Li L, Lynd LD, Bryan S: Exploiting order effects to improve the quality of decisions. Patient Educ Couns, in press.
60. Pieterse AH, de Vries M, Kunneman M, Stiggelbout AM, Feldman-Stewart D: Theory-informed design of values clarification methods: a cognitive psychological perspective on patient health-related decision making. Soc Sci Med 2013, 77:156–163.
61. de Vries M, Fagerlin A, Witteman HO, Scherer LD: Combining deliberation and intuition in patient decision support. Patient Educ Couns 2013, 91(2):154–160.
62. Sepucha K, Ozanne EM: How to define and measure concordance between patients' preferences and medical treatments: a systematic review of approaches and recommendations for standardization. Patient Educ Couns 2010, 78(1):12–23.
63. Bangor A, Kortum P, Miller J: Determining what individual SUS scores mean: adding an adjective rating scale. Journal of Usability Studies 2009, 4(3):114–123.
64. Fagerlin A, Pignone M, Abhyankar P, Col N, Feldman-Stewart D, Gavaruzzi T, Kryworuchko J, Levin CA, Pieterse AH, Reyna V, Stiggelbout A, Scherer LD, Wills C, Witteman HO: Clarifying values: an updated review. BMC Med Inform Decis Mak 2013, 13(Suppl 2):S8.
65. Llewellyn-Thomas HA, Crump RT: Decision support for patients: values clarification and preference elicitation. Med Care Res Rev 2013, 70(1 Suppl):50S–79S.

doi:10.1186/1472-6947-14-62
Cite this article as: Bansback et al.: Development and preliminary user testing of the DCIDA (Dynamic computer interactive decision application) for 'nudging' patients towards high quality decisions. BMC Medical Informatics and Decision Making 2014, 14:62.

