UNDERSTANDING ONLINE CONSUMERS' UTILIZATION OF MULTIPLE ADVICE SOURCES

by

Hongki Kim

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in THE FACULTY OF GRADUATE AND POSTDOCTORAL STUDIES (Business Administration)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

June 2018

© Hongki Kim, 2018

The following individuals certify that they have read, and recommend to the Faculty of Graduate and Postdoctoral Studies for acceptance, the dissertation entitled: Understanding Online Consumers' Utilization of Multiple Advice Sources, submitted by Hongki Kim in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Business Administration.

Examining Committee:
Izak Benbasat, Business Administration (Co-supervisor)
Hasan Cavusoglu, Business Administration (Co-supervisor)
David Hardisty, Business Administration (Supervisory Committee Member)
Carson Woo, Business Administration (University Examiner)
Heather L. O'Brien, Archival and Information Studies (University Examiner)

ABSTRACT

As increasing numbers of online stores provide multiple advice sources and increasing numbers of shoppers access these sources on the Internet, shoppers develop decision-making strategies to manage a wide variety of information, some of it conflicting. By identifying these decision-making strategies, information system scholars have developed theoretical foundations for designing decision aids. However, few studies have investigated two important aspects: i) online shoppers' new decision-making strategies in using multiple advice sources that offer diverse opinions; and ii) new decision aids that support such decision-making strategies.

My research addresses this gap and consists of three laboratory-based studies. Using verbal protocol analysis, Study #1 identifies new consistency strategies that embed consistency as a key heuristic. It also shows that online shoppers use consistency strategies to identify products that deserve to be examined and to support their belief in the quality of those products. Study #2 proposes consistency distance identification tools (CDITs) that present objective consistency/inconsistency measures as graphical representations. It also finds that the impact of the CDITs on decision quality and effort is contingent on the fit among the trustworthiness shoppers ascribe to advice sources, their goals in building a low/high level of understanding of advice sources and products, and the functionalities of the CDITs in supporting shoppers' tasks and/or goals in the lab experiments. Study #3 proposes inconsistency reduction tools that clarify why advice sources are inconsistent by identifying differences in preferences between the online shopper and the advice sources, as well as by facilitating interactions with a recommendation agent (RA). My research reveals two major findings: i) inconsistency among advice sources increases not only online shoppers' attribution of that inconsistency to the RA, but also the perceived incompetence and deceptiveness of the RA; and ii) utilization of inconsistency reduction tools reduces these reactions to inconsistency among advice sources.

LAY SUMMARY

Today, we receive advice from multiple sources when we shop online. How online shoppers make use of information from different sources is not well known.
First, I examine how online shoppers use different sources while shopping online. I find that online shoppers consider whether different advice sources have similar assessments of a product when deciding whether to incorporate recommendations from these sources into their shopping decisions. Second, I propose shopping tools that show the agreement among different sources to help online shoppers select a product. Third, I propose shopping tools that show why different sources have similar or different opinions about a product.

Overall, this dissertation improves our understanding of how online shoppers use different advice sources and provides guidelines for online shopping stores on how to design shopping tools that help shoppers better utilize these sources.

PREFACE

The research described in this thesis has been conducted by the student in consultation with members of the supervisory committee. The student has had full responsibility for identifying and designing the research program, analyzing the research data, and preparing this manuscript.

The research was conducted in accordance with the ethics guidelines of the UBC Research Ethics Board. The board approved this research under certificate numbers H12-02101 (August 2012), H16-00974 (July 2016), and H17-02378 (October 2017).

TABLE OF CONTENTS

ABSTRACT.........................iii LAY SUMMARY.........................iv PREFACE.........................v TABLE OF CONTENTS.........................vi LIST OF TABLES.........................ix LIST OF FIGURES.........................xii LIST OF ABBREVIATIONS.........................xiv ACKNOWLEDGEMENTS.........................xv CHAPTER 1: INTRODUCTION.........................1 1.1 CONTEXT OVERVIEW.........................1 1.2 RESEARCH GAPS, CONTRIBUTIONS, AND OBJECTIVES.........................1       1.2.1 Research Gaps and Contributions.........................1       1.2.2 Research Objectives and Questions.........................3 1.3 OVERALL STRUCTURE OF THE DISSERTATION.........................5 CHAPTER 2: HOW ONLINE CONSUMERS UTILIZE RECOMMENDATIONS AND REVIEWS FROM MULTIPLE SOURCES (STUDY #1).........................6 2.1
INTRODUCTION.................................................................................................................................6 2.2 LITERATURE REVIEW AND THEORETICAL FRAMEWORK......................................................8       2.2.1 Consumer Decision-Making Strategies........................................................................................8       2.2.2 Cognitive Dissonance Theory.....................................................................................................11       2.2.3 Information Search Process Model.............................................................................................12       2.2.4 Building a Theoretical Framework.............................................................................................13 2.3 METHODOLOGY: PROCESS TRACING METHOD......................................................................14       2.3.1 Protocol Analysis: Concurrent Verbalization for Data Collection..........................................…14       2.3.2 Experimental Design..................................................................................................................15       2.3.3 Coding Scheme Development and Coding Procedures...............................................................23 2.4 IDENTIFYING CONSISTENCY STRATEGIES..............................................................................28       2.4.1 An Example of Identifying a Consistency Strategy.....................................................................29       2.4.2 Overall Summary of Consistency Strategies...............................................................................35       2.4.3 Recommendation Consistency Strategies...................................................................................37  vii       2.4.4 Review Consistency Strategies...................................................................................................49 2.5 THEORETICAL PERSPECTIVES FOR TRIANGULATION..........................................................55       2.5.1 Cognitive Dissonance Theory and the Information Search Process Model.................................57       2.5.2 Confirmation Bias and Bandwagon Effect..................................................................................58 2.6 DISCUSSION......................................................................................................................................59       2.6.1 Theoretical Implications.............................................................................................................59       2.6.2 Practical Implications.................................................................................................................60       2.6.3 Limitations and Future Research.................................................................................................61 CHAPTER 3: SUPPORTING ONLINE CONSUMERS BY IDENTIFYING CONSISTENCY DISTANCE AMONG ADVICE SOURCES (STUDY #2)........................................................................63 3.1 INTRODUCTION...............................................................................................................................63 3.2 THEORETICAL FRAMEWORK AND HYPOTHESIS DEVELOPMENT......................................64       3.2.1 Online Consumers’ Utilization of Multiple Advice Sources.......................................................64       3.2.2 Conceptualizing Consistency Distance and Consistency Distance 
Identification Tools.............65       3.2.3 Task-Technology Fit Theory......................................................................................................70       3.2.4 Task-Individual-Technology Fit in Utilizing Consistency Distance Identification Tools...........72 3.3 METHODOLOGY..............................................................................................................................80       3.3.1 Developing an Experimental Online Store..................................................................................80       3.3.2 Experimental Design..................................................................................................................86       3.3.3 Participants and Experimental Procedure…...............................................................................88       3.3.4 Measurement Items.....................................................................................................................91 3.4 DATA ANALYSIS AND RESULTS..................................................................................................96       3.4.1 Experiment 2-1: Interplay Between Source CDITs, Trustworthiness Variance, and Product Type....................................................................................................................................................96       3.4.2 Experiment 2-2: Interplay Between Product CDITs, Information Search Stages, and Product Type..................................................................................................................................................102       3.4.3 Overall Findings.......................................................................................................................108 3.5 DISCUSSION....................................................................................................................................109       3.5.1 Theoretical Implications...........................................................................................................110       3.5.2 Practical Implications...............................................................................................................110       3.5.3 Limitations and Future Research...............................................................................................111 CHAPTER 4: ONLINE CONSUMERS’ ATTRIBUTION OF INCONSISTENCY AMONG ADVICE SOURCES (STUDY #3)............................................................................................................................112 4.1 INTRODUCTION.............................................................................................................................112  viii 4.2 THEORETICAL FRAMEWORK AND HYPOTHESIS DEVELOPMENT....................................113       4.2.1 Attribution Theory....................................................................................................................113       4.2.2 Conceptualizing Inconsistency Reduction Tools......................................................................115       4.2.3 Theoretical Framework of Inconsistency Attribution...............................................................118 4.3 METHODOLOGY............................................................................................................................121       4.3.1 Developing an Experimental Online Store................................................................................121       4.3.2 Experimental 
Design................................................................................................................124       4.3.3 Participants and Experimental Procedure.................................................................................125       4.3.4 Measurement Items...................................................................................................................128 4.4 DATA ANALYSIS AND FINDINGS...............................................................................................130       4.4.1 Impact of Inconsistency Among Advice Sources (Round 1 and Round 2)..............................130       4.4.2 Impact of the Explanatory and Interactive IRTs (Round 2 and Round 3)..................................134       4.4.3 Overall Findings and Theoretical Insight..................................................................................138 4.5 DISCUSSION…………………………………………………….……...........................................141       4.5.1 Theoretical Implications...........................................................................................................141       4.5.2 Practical Implications...............................................................................................................141       4.5.3 Limitations and Future Research...............................................................................................142 CHAPTER 5: CONCLUSION.................................................................................................................143 5.1 A SUMMARY OF THE THESIS......................................................................................................143 5.2 CONTRIBUTIONS...........................................................................................................................145       5.2.1 Theoretical Contributions.........................................................................................................145       5.2.2 Practical Contributions.............................................................................................................147 5.3 LIMITATIONS AND SUGGESTIONS FOR FUTURE RESEARCH.............................................148 BIBLIOGRAPHY.....................................................................................................................................150 APPENDICES...........................................................................................................................................159 Appendix A: Literature Examples on Online Reviews and Recommendations...........................................159  ix LIST OF TABLES  Table 2.1 Manipulation of Recommendation Consistency............................................................................19 Table 2.2 Experimental Design.....................................................................................................................20 Table 2.3 Demographics of Participants........................................................................................................20 Table 2.4 Measurement Items........................................................................................................................21 Table 2.5 Descriptive Statistics and Composite Reliability of Constructs.....................................................22 Table 2.6 Composite Reliability, AVE, and Correlation Among Constructs.................................................23 Table 2.7 Coding 
Scheme..............................................................................................................................25 Table 2.8 Transcribing Verbalizations...........................................................................................................29 Table 2.9 Segmenting and Categorizing Verbalizations................................................................................30 Table 2.10 Supplementary Coding for Clarification......................................................................................31 Table 2.11 Integrating Individual Decision-Making Process for Generalization...........................................34 Table 2.12 The Number of Participants Examining Consistency...................................................................36 Table 2.13 Consistency Strategies across the Exploration and Elaboration Stages in Information Search Processes.......................................................................................................................................................36 Table 2.14 Description of Consistency Strategies.........................................................................................36 Table 2.15 The Number of Participants Utilizing Consistency Strategies.....................................................37 Table 2.16 Key Verbalizations: Seeking Strategy (Participant #12)..............................................................39 Table 2.17 Key Verbalizations: Seeking Strategy (Participant #51)..............................................................39 Table 2.18 Key Verbalizations: Anchoring Strategy (Participant #27)..........................................................42 Table 2.19 Key Verbalizations: Anchoring Strategy (Participant #54)..........................................................42 Table 2.20 Key Verbalizations: Deliberating Strategy (Participant #3).........................................................45 Table 2.21 Key Verbalizations: Deliberating Strategy (Participant #37).......................................................45 Table 2.22 Key Verbalizations: Adhering Strategy (Participant #34)............................................................48 Table 2.23 Key Verbalizations: Adhering Strategy (Participant #64)............................................................48  Table 2.24 Key Verbalizations: Confirming Strategy (Participant #49)........................................................51  Table 2.25 Key Verbalizations: Confirming Strategy (Participant #8)..........................................................51 Table 2.26 Key Verbalizations: Validating Strategy (Participant #39)..........................................................54 Table 2.27 Key Verbalizations: Validating Strategy (Participant #31)..........................................................54 Table 2.28 T A Summary of Consistency Strategies with Theoretical Triangulations...................................56  Table 3.1 Types of Consistency Distance Identification Tools..................................................................... 
66 Table 3.2 Consistency Distance Identification Tools and Associated Consistency Strategies.......................68 Table 3.3 Operationalizations of Consistency Distances...............................................................................68 Table 3.4 Consistency Distance Formulae.....................................................................................................69  x Table 3.5 Functionalities of Consistency Distance Identification Tools........................................................73 Table 3.6 Task Requirements or Goals Across Information Search Stages....................................................75 Table 3.7 Task-Individual-Technology Fit Between Trustworthiness Variance and Source CDITs in the Source Selection Stage..................................................................................................................................77 Table 3.8 Task-Individual-Technology Fit Between Information Search Stages and Product CDITs...........79 Table 3.9 Factorial Design of Experimental Conditions (Experiment 2-1)....................................................87 Table 3.10 Factorial Design of Experimental Conditions (Experiment 2-2)..................................................87 Table 3.11 Demographics of Participants (Experiment 2-1)..........................................................................89 Table 3.12 Demographics of Participants (Experiment 2-2)..........................................................................89 Table 3.13 Measurement Items: Pre-questionnaire........................................................................................91 Table 3.14 Measurement Items: Post-questionnaire......................................................................................93 Table 3.15 Descriptive Statistics and Composite Reliability of Constructs...................................................94 Table 3.16 Composite Reliability, AVE, and Correlation Among Constructs...............................................95 Table 3.17 Post-Group5ing of Trustworthiness Variance..............................................................................96 Table 3.18 MANOVA Summary Table.........................................................................................................97 Table 3.19 Univariate ANOVA Summary Table...........................................................................................97 Table 3.20 Means for Task-Individual-Technology Fit by Source CDIT and Trustworthiness Variance......99 Table 3.21 Means for Decision Quality by Source CDIT and Trustworthiness Variance............................100 Table 3.22 Means for Decision Effort by Source CDIT and Trustworthiness Variance...............................101 Table 3.23 MANOVA Summary Table.......................................................................................................103 Table 3.24 Univariate ANOVA Summary Table.........................................................................................103 Table 3.25 Means for Task-Individual-Technology Fit by the Product CDIT Utilized in the Exploration and Elaboration Stages.......................................................................................................................................105 Table 3.26 Means for Decision Quality by the Product CDIT Utilized in the Exploration and Elaboration 
Stages..........................................................................................................................................................106 Table 3.27 Means for Decision Quality by the Combination of Product CDITs Utilized in the Exploration and Elaboration Stages................................................................................................................................107 Table 3.28 A Summary of Hypothesis Testing............................................................................................108 Table 4.1 Inconsistency Formulae...............................................................................................................123 Table 4.2 Demographics of Participants......................................................................................................125 Table 4.3 Measurement Item.......................................................................................................................128 Table 4.4 Descriptive Statistics of Constructs.............................................................................................129 Table 4.5 Composite Reliability, AVE, and Correlation Among Constructs...............................................130 Table 4.6 ANOVA Summary Table for Perceived Causal Attribution of Inconsistency..............................131 Table 4.7 Post-Hoc Analysis for Perceived Causal Attribution of Inconsistency........................................132  Table 4.8 MANOVA Summary Table.........................................................................................................133  xi Table 4.9 Univariate ANOVA Summary Table...........................................................................................133 Table 4.10 MANOVA Summary Table.......................................................................................................134 Table 4.11 Univariate ANOVA Summary Table.........................................................................................135 Table 4.12 MANOVA Summary Table.......................................................................................................137 Table 4.13 Univariate ANOVA Summary Table.........................................................................................137 Table 4.14 A Summary of Hypothesis Testing............................................................................................139 Table 5.1 A Summary of the Thesis.............................................................................................................143              xii LIST OF FIGURES  Figure 2.1 Theoretical Framework (Study #1)...............................................................................................13 Figure 2.2 Recommendations in the Experimental Online Store...................................................................17 Figure 2.3 Reviews in the Experimental Online Store...................................................................................17 Figure 2.4 Preferences Elicitation Interface of Recommendation Agent.......................................................18 Figure 2.5 Diagrams of Verbal Protocol Coding Procedures.........................................................................24 Figure 2.6 Visualizing Recommendation Consistency Strategy: Deliberating..............................................35 Figure 2.7 Recommendation Consistency Strategy: Seeking........................................................................38 
Figure 2.8 Recommendation Consistency Strategy: Anchoring....................................................................41 Figure 2.9 Recommendation Consistency Strategy: Deliberating.................................................................44 Figure 2.10 Recommendation Consistency Strategy: Adhering....................................................................47 Figure 2.11 Review Consistency Strategy: Confirming.................................................................................50 Figure 2.12 Review Consistency Strategy: Validating..................................................................................53 Figure 3.1 Trustworthiness Variance in the Task-Technology Fit Theory.....................................................72 Figure 3.2 Three Stages in Utilizing Multiple Advice Sources......................................................................74 Figure 3.3 Theoretical Framework................................................................................................................75 Figure 3.4 Aggregated Source CDIT in the Source Selection Stage..............................................................82 Figure 3.5 Pairwise Source CDIT in the Source Selection Stage...................................................................82 Figure 3.6 Aggregated Product CDIT in the Exploration Stage.....................................................................83 Figure 3.7 Pairwise Product CDIT in the Exploration Stage..........................................................................84 Figure 3.8 Aggregated Product CDIT in the Elaboration Stage.....................................................................85 Figure 3.9 Pairwise Product CDIT in the Elaboration Stage..........................................................................85 Figure 3.10 Overview of Experiment 2-1 and Experiment 2-2......................................................................86 Figure 3.11 Overview of Experimental Procedures.......................................................................................90 Figure 3.12 Interaction Effect of Trustworthiness Variance and Source CDITs on Task-Individual-Technology Fit..............................................................................................................................................99 Figure 3.13 Interaction Effect of Trustworthiness Variance and Source CDITs on Decision Quality..........101 Figure 3.14 Interaction Effect of Trustworthiness Variance and Source CDITs on Decision Effort............102 Figure 3.15 Means for Task-Individual-Technology Fit by the Product CDIT Utilized in the Exploration and Elaboration Stages.......................................................................................................................................105 Figure 3.16 Means for Decision Quality by the Product CDIT Utilized in the Exploration and Elaboration Stages..........................................................................................................................................................106    xiii Figure 3.17 Means for Decision Effort by the Product CDIT Utilized in the Exploration and Elaboration Stages..........................................................................................................................................................107 Figure 4.1 Explanatory Inconsistency Reduction Tool................................................................................116 Figure 4.2 Interactive 
Inconsistency Reduction Tool..................................................................................117 Figure 4.3 Theoretical Framework of Inconsistency Attribution................................................................118 Figure 4.4 Implementing Inconsistency into Consistency Distance.............................................................123 Figure 4.5 Multi-Round Within-Between Subjects Design.........................................................................124 Figure 4.6 Online Shopping Store Interface in Round 1...............................................................................126 Figure 4.7 Presenting Inconsistency in Round 2..........................................................................................127 Figure 4.8 Explanatory Inconsistency Reduction Tool in Round 3..............................................................127 Figure 4.9 Interactive Inconsistency Reduction Tool in Round 3................................................................128 Figure 4.10 Online Consumers’ Perceived Causal Attribution of Inconsistency.........................................132 Figure 4.11 Online Consumers’ Reactions to Inconsistency Among Advice Sources.................................134 Figure 4.12 Utilizing IRTs to Alleviate Perceived Causal Attribution to an RA..........................................136 Figure 4.13 Changes of Perceived Causal Attribution After Utilizing IRTs................................................136 Figure 4.14 Utilizing IRTs to Alleviate Inconsistency Reactions................................................................138 Figure 4.15 Changes of Inconsistency Reactions After Utilizing IRTs........................................................138 Figure 4.16 Impact of IRTs on User-Centric and System-Centric Reactions...............................................140           xiv LIST OF ABBREVIATIONS  ANOVA Analysis of Variance AVE Average Variance Extracted CDIT Consistency Distance Identification Tool DSS Decision Support System EBA Elimination-by-Aspects Strategy EQW Equal Weight Strategy GRG Generalized Reduced Gradient IRT Inconsistency Reduction Tool IS Information System LCC Limited Cognitive Capacity LEX Lexicographic Strategy MANOVA Multivariate Analysis of Variance MOD Majority of the Confirming Dimensions Strategy OSN Online Social Network RA Recommendation Agent SAT Satisfying Strategy WADD Weighted Adding Strategy WOM Word-of-Mouth    xv   ACKNOWLEDGEMENTS  It is a true pleasure to acknowledge those who have been so important to me for the accomplishment of this work.   I foremost would like to thank and express my heartfelt gratitude to my advisors, Prof. Izak Benbasat and Prof. Hasan Cavusoglu. I am mostly grateful to them for challenging me to achieve my highest potential and helping me to live through those challenges by providing their fullest guidance and support. I am fairly confident that without those challenges, I would not be where I am today. I sincerely appreciate that both my advisors have always found the time to devote to our long research meetings, answer all my questions, and provide the most insightful and prompt feedback despite their busy schedules. I also would like to express my appreciation to my committee member, Prof. David Hardisty, who took time out of his busy schedule to be part of my thesis committee and provided his insights to improve this work. His support over the course of this work was very important to me.   
I am grateful to the Social Sciences and Humanities Research Council of Canada (SSHRC) for valuable funding to support this research and to the Sauder School of Business, SSHRC, and Affiliated Awards of UBC for scholarships they granted to support my doctoral education.   I feel very fortunate that I have completed my graduate education at one of the greatest institutions and would like to thank the entire faculty at the MIS division of UBC. Yair Wand, Carson Woo, Ron Cenfetelli, Ning Nan, and Gene Moo Lee: thank you for inspiring me with your research agenda, offering the amazing courses that taught me the basics of conducting good research, and supporting me during my education.   The discussions I had with the other graduate students in and out of the classroom were instrumental in developing my research and teaching portfolio and passion for academia. I particularly would like to thank to my peers and friends who are or have once been graduate students in the MIS division: Camille, Burcu, Usman, Arash, Daniel, Amin, Moksh, Atefeh, Pattharin, and Ruijing – I will always remember the times we shared together with joy.    xvi I would like to thank Ms. Elaine Cho, our graduate studies assistant, without whom I would not be able to navigate through the administrative issues of the PhD program. She is one the most reliable, detailed oriented and hardworking people I have ever met, and her assistance was phenomenal.   I am thankful to my professors at the Yonsei School of Business, Yonsei University – Prof. Kil Soo Suh, and Prof. Jai Yeol Son– who encouraged and supported me to take on this academic journey. I certainly would not have built my interest in the information system (IS) field without their guidance. I still so much miss the times and friends from those days. My special thanks to my old-time friends Eung-Kyo Suh and Sung-Won Lee for sharing the wonderful times.   I am blessed with a wonderful family. I would like to thank my father, Sung Min Kim, my mother, Sung Wal Song, and my dear brother Suk Ki Kim, for their eternal love and support that I have felt at every stage of my life. There is no doubt that I would not feel as happy and strong without their presence in my life. Most of all, I would like to thank to my spouse, Dawon Park, who held my hand throughout this tough journey and had to endure the insanity of a graduate student life with me. Thank you for making life much more pleasant with your beautiful heart, love, tenderness, and friendship. Thank you for making such a wonderful home for me and our kids, Jaepyo and Nahyun; teaching me your amazing skills in touching hearts and planning everything; sharing your wisdom and helping me develop my very own perspective about pursuing an academic career; and looking out for my health and sleep when they were not my priorities. You make my life meaningful!        1 CHAPTER 1: INTRODUCTION  1.1 CONTEXT OVERVIEW An increasing number of online stores simultaneously provide information from multiple advice sources. For instance, Amazon.com provides recommendations and/or reviews from recommendation agents (RAs) and consumers. Likewise, a third party electronic product review website, Cnet.com, offers recommendations and reviews from experts and consumers. An RA refers to “a software agent that elicits the interests or preferences of individual consumers for products, either explicitly or implicitly, and makes recommendations accordingly” (Xiao and Benbasat, 2007, p. 137). 
In contrast, experts' or consumers' recommendations and/or reviews do not rely on users' specific needs and preferences. As a consequence, online consumers face the challenge of deciding how to use such a wide-ranging and possibly conflicting set of information to improve their performance in selecting products.

1.2 RESEARCH GAPS, CONTRIBUTIONS, AND OBJECTIVES

1.2.1 Research Gaps and Contributions

By identifying decision-making strategies, information system scholars have developed theoretical foundations for designing decision aids that support online consumers (Todd and Benbasat, 1987). Therefore, identifying new strategies and implementing decision aids that support such strategies are prominent research topics in information systems, from both theoretical and practical perspectives.

While extant studies have investigated online consumers' utilization of recommendations or reviews from a single advice source, it is not clear how online consumers use multiple advice sources. Although a few studies (Xu et al., 2017) have found that a product commonly recommended by multiple advice sources is more likely to be selected over others, few studies have explored how consistency/inconsistency among advice sources is embedded in online consumers' decision-making.

In the past three decades, researchers have investigated consumers' decision-making strategies regarding the preferential choice problem in terms of product attributes that represent an alternative through inherently given values (Bettman et al., 1998; Payne et al., 1993). However, such classic decision-making strategies using inherently given product attributes do not fully explain online consumers' utilization of multiple advice sources when multiple, possibly conflicting external-evaluations are available. Because online reviews and ratings are externally generated opinions about alternative products, and because they determine the ranking of products in a given advice source's list of recommendations, they represent a wide variety of possibly conflicting opinions across different advice sources, regardless of the inherent product attributes.

To cope with the challenge of deciding how to use such wide-ranging and possibly conflicting sets of information to improve performance in selecting products, online consumers resort to new strategies beyond the classical decision-making strategies. By identifying these new decision-making strategies (i.e., consistency strategies), this dissertation develops theoretical foundations for designing decision aids that support these consistency strategies and for investigating their impact on decision-making performance. This study therefore looks beyond classical decision-making strategies (Bettman et al., 1998; Payne et al., 1993) and contributes a major update to the literature investigating online consumers' decision-making strategies.

Understanding online consumers' strategic utilization of multiple advice sources forms the basis of designing better decision aids. The decision-making strategies employed in utilizing multiple advice sources are user-driven (i.e., not system-supported); that is, they are conducted "manually" by the consumer. These user-driven approaches require more effort than system-supported approaches, pointing to a need for decision aids that guide consumers as to when such strategies can be utilized across information search stages (Wang and Benbasat, 2009).
In particular, as consumers' goals in building a low/high level of understanding of advice sources and products vary across information search stages (Kuhlthau, 1991), decision-aid tools should have diverse functionalities in order to support these diverse goals. In addition, since individual characteristics such as the trustworthiness of advice sources can trigger diverse consistency strategies in utilizing multiple advice sources, it is of paramount importance to shed light on two aspects: how to design decision aids that represent consistency/inconsistency among advice sources; and when and how to provide such tools contingent on consumers' individual characteristics (i.e., the trustworthiness of advice sources) and task goals across information search stages.

Accordingly, this dissertation proposes new decision aids (i.e., consistency distance identification tools; CDITs) that support online consumers' consistency strategies. It also investigates which combination of consistency distance identification tools, information search stages, and trustworthiness of advice sources is the most efficient and effective in improving decision-making performance.

Lastly, in utilizing consistency strategies, online consumers easily encounter and perceive inconsistency among advice sources. While 70% of online consumers accept an RA's top recommendations (Xu et al., 2017), consumers do validate an RA's recommendations by comparing them with advice from other sources. In addition, people are less reluctant to blame an information system than to blame other people (Kim and Hinds, 2006; Leahy, 2002). Therefore, when advice sources are inconsistent, online consumers can easily change their belief in an RA; they might even perceive the RA to be deceptive or incompetent. If online consumers believe an RA is incompetent or deceptive, they might not follow its recommendations and might even consider moving to other online shopping stores (Tan et al., 2016; Xiao and Benbasat, 2011). Therefore, the way in which consumers perceive and respond to inconsistency between an RA's advice and other sources' advice should be a key concern for online stores. Accordingly, it is important for online shopping stores to find ways to implement decision aids that diminish online consumers' perceived incompetence and/or deceptiveness of an RA. To the best of my knowledge, this is the first research to examine online consumers' attribution of inconsistency among advice sources. While a few studies (Xu et al., 2017; Kim and Benbasat, 2013) have investigated the positive aspects of utilizing multiple advice sources, the negative influences these sources may have on online consumers' perception of an RA and on decision-making performance have not been examined.

Accordingly, this dissertation proposes new decision-aid tools (i.e., inconsistency reduction tools) that clarify why advice sources are inconsistent, both by identifying the differences in preferences between online consumers and advice sources and by facilitating interactions with an RA. This dissertation also investigates how inconsistency reduction tools can serve to counter online consumers' negative reactions to an RA triggered by inconsistency between the RA and other advice sources.
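To make the idea of an objective consistency/inconsistency measure concrete, the sketch below computes a pairwise and an aggregated distance over the ratings that three advice sources assign to the same products. This is only an illustration under assumed inputs: the 1-5 rating scale, the example products, and the use of simple absolute differences are assumptions made for exposition, not the dissertation's operationalizations, which are defined later in Table 3.4 and Table 4.1.

```python
# Illustrative sketch only: the rating scale, data, and distance definitions
# here are assumptions for exposition, not the dissertation's formulae.
from itertools import combinations
from statistics import mean

# Hypothetical per-product evaluations, each normalized to a 1-5 scale.
evaluations = {
    "Camera A": {"RA": 4.8, "Experts": 4.5, "Consumers": 4.6},
    "Camera B": {"RA": 4.7, "Experts": 3.2, "Consumers": 4.4},
    "Camera C": {"RA": 3.9, "Experts": 4.0, "Consumers": 3.8},
}

def pairwise_distances(scores: dict[str, float]) -> dict[tuple[str, str], float]:
    """Consistency distance between every pair of advice sources for one product."""
    return {
        (a, b): abs(scores[a] - scores[b])
        for a, b in combinations(sorted(scores), 2)
    }

def aggregated_distance(scores: dict[str, float]) -> float:
    """Single overall (in)consistency score: the mean of the pairwise distances."""
    return mean(pairwise_distances(scores).values())

for product, scores in evaluations.items():
    print(product, round(aggregated_distance(scores), 2), pairwise_distances(scores))
# Camera B stands out: the RA and consumers agree while experts diverge,
# which is the kind of pattern a pairwise CDIT would surface graphically.
```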
1.2.2 Research Objectives and Questions

Given the presence of multiple advice sources on the Internet and consumers' need to process information from these sources, this dissertation aims to investigate two aspects: i) online consumers' new decision-making strategies in utilizing a wide variety of advice sources that may offer conflicting opinions, and ii) new decision aids that support such decision-making strategies.

The dissertation focuses on identifying online consumers' new decision-making strategies in utilizing multiple advice sources, i.e., how and when consumers utilize recommendation consistency and/or review consistency among multiple advice sources as part of their decision-making strategy. This study also investigates the implementation of consistency distance identification tools (CDITs) aimed at helping consumers in their utilization of multiple advice sources. In this process, it explores the definitions, conceptualizations, and means of measuring consistency distance. This study also investigates the impact of CDITs on decision-making performance across the information search stages in which consumers aim to extend their knowledge of products as a major part of their decision-making strategy (Kuhlthau, 1991) (i.e., when and how to provide the CDITs across information search stages to improve online consumers' decision-making performance). Finally, this study explores the implementation of inconsistency reduction tools (IRTs) that alleviate online consumers' negative reactions to an RA triggered by the utilization of consistency strategies (i.e., why online consumers attribute inconsistency among advice sources to an RA, and how to design IRTs that reduce such negative influences when consistency strategies are used).

This dissertation examines the following research questions:
1) (Study #1) How do consumers utilize recommendation consistency and/or review consistency from multiple sources as part of their decision-making strategy?
2) (Study #1) What are the key differences in the utilization of recommendation consistency and/or review consistency across information search stages?
3) (Study #2) Will CDITs allow consumers to better manage conflicting opinions by utilizing better consistency strategies that culminate in better product selection decisions?
4) (Study #2) What is the best combination of a CDIT and an information search stage for utilizing consistency and improving decision-making performance?
5) (Study #3) In utilizing multiple advice sources, when and how do consumers attribute inconsistency among advice sources to an RA?
6) (Study #3) Will IRTs alleviate online consumers' biased attribution to an RA?

To investigate these research questions, I conducted three laboratory experiments. Study #1 addresses the first two research questions. Given the current nascent state of knowledge about online consumers' utilization of multiple advice sources, it is more appropriate to conduct exploratory research that can shed light on online consumers' decision-making strategies. Using verbal protocol analysis, Study #1 explores the heuristics (i.e., recommendation consistency and review consistency) that online consumers rely on when they utilize multiple advice sources, and identifies consistency strategies.

Study #2 addresses research questions 3 and 4.
Four types of CDITs (i.e., Aggregated Source, Aggregated Product, Pairwise Source, and Pairwise Product) with diverse functionalities to support diverse goals across information search stages (i.e., source selection, exploration, and elaboration stages) are designed to investigate the impact of the CDITs on decision-making performance that would be contingent on the trustworthiness of advice sources, which in turn is expected to trigger the utilizations of diverse consistency strategies.   Study #3 addresses research questions 5 and 6. Two types of IRTs (i.e., Explanatory and Interactive) are designed to investigate the underlying mechanism (i.e., differences of product attribute preferences between  5 an individual and other advice sources) of inconsistency attribution and its impact on online consumers’ negative reactions to inconsistency triggered by the utilizations of consistency strategies.  1.3 OVERALL STRUCTURE OF THE DISSERTATION The remainder of this dissertation is structured as follows. Chapter 2 (Study #1) aims to explore online consumers’ decision-making strategies in utilizing multiple advice sources as an exploratory research. Utilizing concurrent verbal protocol analysis, Study #1 identifies four recommendation consistency strategies and two review consistency strategies. Chapter 3 (Study #2) aims to implement CDITs that support online consumers’ utilization of consistency strategies. On the basis of Task-Technology Fit Theory, Study #2 investigates the different impacts of CDITs on online consumers’ decision-making performance in utilizing multiple advice sources and proposes which type of CDIT best suits which information search stage. Chapter 4 (Study #3) aims to examine online consumers’ attribution of inconsistency among advice sources. Based on Attribution Theory, Study #3 develops a theoretical framework of online consumers’ attribution of inconsistency and proposes IRTs that clarify differences of product attribute preferences between a customer and other advice sources. Finally, Chapter 5 summarizes the results of the three studies and outlines the major contributions of this dissertation.    6 CHAPTER 2: HOW ONLINE CONSUMERS UTILIZE RECOMMENDATIONS AND REVIEWS FROM MULTIPLE SOURCES (STUDY #1)  2.1 INTRODUCTION Understanding online consumers’ decision-making strategies in selecting products — which refers to the mental processes involved in information acquisition, selection, judgment, and utilization for effective and efficient decisions in product selection (Bettman et al., 1998; Payne et al., 1993) — is one of the key areas of interest in the information system (IS) discipline. By identifying such strategies, IS scholars have provided theoretical foundations for developing decision aids that support online consumers (Todd and Benbasat, 1987). As more product-related information is increasingly available on the Internet via multiple and diverse advice sources, consumers need to develop new strategies to improve their decision-making. Therefore, identifying consumers’ use of such new strategies is a prominent research topic in IS both from theoretical and practical perspectives.   To support online consumers’ product selection decision-making, a number of online stores provide recommendations and reviews from multiple advice sources, such as a recommendation agent (RA), consumers, and experts (Baum and Spann, 2014; Chen and Xie, 2008; Dimoka et al., 2012; Kamis et al., 2008; Li et al., 2010; Wang and Doong, 2010; Xiao and Benbasat, 2007; Xiao and Benbasat, 2015). 
For instance, Amazon.com provides recommendations and reviews from both an RA and consumers. Likewise, a third party electronic product review website, Cnet.com, makes available recommendations and reviews from experts and consumers.   Although more information would appear desirable, the availability of diverse and divergent recommendations from multiple sources increases the number of products a consumer must examine to make a decision. Furthermore, these recommendations and reviews may present different opinions that could undermine consumers’ confidence in the quality of the products being assessed. Faced with multiple sources of these recommendations and reviews, consumers may make cognitively costly mistakes in how they choose products. Consequently, consumers need to develop and rely on various decision-making strategies to simplify their information processing so as to cope with this complexity (Bettman et al., 1998; Butler and Peppard, 1998; Payne et al., 1993; Lynch et al., 1988; Liu and Goodhue, 2012; Simon, 1990; Wilkie, 1994).    7 In the last three decades of investigating consumers’ decision-making strategies, two streams of research or perspectives have emerged. In the internal-attribute oriented perspective, researchers have investigated the preferential choice problem in terms of product attributes that represent an alternative through inherently given values (Bettman et al., 1998; Payne et al., 1993). Typically, a decision- maker is presented with product attributes for a given alternative. These attributes permit comparisons with other alternatives in, for example, making elimination decisions or assigning an overall value to various alternatives. However, as online reviews and ratings have increased in number and accessibility, online consumers have also been able to use these external-evaluations in their decision-making. In the external-evaluation oriented perspective, researchers have investigated online feedback mechanisms that support the reduction of uncertainties in online shopping. While the internal-attribute oriented perspective uses product attributes that are given inherent values representing an alternative, the external-evaluation oriented perspective uses externally generated online reviews and ratings which are values that represent others’ opinions about an alternative that can influence customers.   When there are multiple advice sources, external-evaluations of an alternative might be similar or different (e.g., similarities or differences in ranking position in the recommendation or in rating scores in the reviews). Classical decision-making strategies utilizing inherently given product attributes do not fully explain online consumers’ utilization of multiple advice sources when multiple or possibly conflicting external-evaluations are available. Thus, Study #1 (referred to throughout Chapter 2 as also ‘this study’ or ‘my study’) looks beyond the classical decision-making strategies (Bettman et al., 1998; Payne et al., 1993) to explore how online consumers’ reach decisions when they encounter multiple advice sources of product information. In terms of practical relevance, this study has the potential to deliver theoretical foundations for how to best provide information from multiple advice sources and how to design better decision aids to support the effective and efficient utilization of multiple advice sources by online consumers for the product selection decision-making process.   
Given the current nascent state of knowledge of online consumers’ utilization of recommendations and reviews from multiple advice sources simultaneously, it is appropriate to conduct an exploratory study that could shed light on online consumers’ decision-making strategies. Numerous studies have extended the knowledge of the impact of these recommendations and reviews on online consumers’ behavior (Benlian et al., 2012; Kamis et al., 2008; Kumar and Benbasat, 2006; Pavlou and Dimoka, 2006; Wang and Benbasat, 2009; Wang and Doong, 2010; Xiao and Benbasat, 2007; Xiao and Benbasat, 2015). However, most of these earlier studies investigated only a single source (e.g., either RAs, or experts, or consumers) and a single type of advice information (e.g., either reviews or recommendations) (see Appendix A). In contrast,  8 Study #1 investigates multiple sources and multiple types simultaneously. Although a few studies (Baum and Spann, 2014; Li et al., 2010; Xu et al., 2017) have examined the impact of the interplay between consumer reviews and RAs, the order of presentation effects of expert and consumer reviews and the consistency (or lack of it) of recommendations between multiple advice sources have not been thoroughly investigated. Several questions remain: i) when are recommendations and reviews used across the information search processes, ii) what are the impacts of inconsistencies among sources, and iii) when using multiple sources, how do consumers simplify the complexity of choosing among products?   Study #1 uses verbal protocol analysis (Ericsson and Simon, 1993) to pursue two main objectives: (1) to explore if, how, and when consumers use recommendation consistency and/or review consistency from multiple sources as part of their decision-making strategy; and (2) to identify and categorize recommendation and review consistency strategies. Verbal protocol analysis would be the most appropriate approach to find interesting and new knowledge.   2.2 LITERATURE REVIEW AND THEORETICAL FRAMEWORK 2.2.1 Consumer Decision-Making Strategies Over the decades, a vast body of behavioral decision theories has identified a variety of decision-making strategies. In response to the limited working memory and computational capabilities of rational individuals, scholars have theorized the cost-benefit trade-offs made in the course of selection of a strategy (Bettman et al., 1998; Butler and Peppard, 1998; Lynch et al., 1988; Liu and Goodhue, 2012; Payne et al., 1993; Simon 1990). Because more normative strategies are more accurate but require more cognitive effort, people in general try to reduce this cognitive effort by adopting less normative strategies that rely on cognitive heuristics. In addition, based on the assumption of constructive and adaptive decision makers (Bettman et al., 1998), researchers have assumed that the choice of decision-making strategies and the use of heuristics are contingent on the characteristics of tasks, such as the size of alternatives, across decision-making processes. In following these assumptions, investigators of behavioral decision theories have developed and pursued two research perspectives, internal-attribute and external-oriented perspectives.   2.2.1.1 Internal-Attribute Oriented Perspective  The internal-attribute oriented perspective investigates the preferential choice problem in terms of product attributes that represent an alternative through inherently given values (Bettman et al., 1998; Payne et al., 1993). 
Several decision-making strategies use this perspective.

One group of strategies uses more comprehensive processes that use all product attributes. For example, the weighted adding strategy (WADD) sums each product attribute's weighted value that represents its subjective importance. Although WADD is considered to be a more comprehensive strategy than the others delineated below, it requires more processing capacity. The equal weight strategy (EQW) is a more simplified approach that sums all product attribute values without considering their subjective importance. The majority of confirming dimensions strategy (MCD) is a process of iterative pair-wise comparisons. A consumer compares each attribute between two alternatives and retains the alternative with a majority of the better attribute values. This pair-wise comparison process continues until all alternatives are evaluated.

Another group of strategies employs a more heuristic process that focuses on one or a few product attributes. Using the lexicographic strategy (LEX), a consumer selects the product that has the highest value on the most important product attribute. The elimination-by-aspects strategy (EBA) eliminates alternatives below a cut-off value assigned for the most important product attribute. This process continues by iterating comparisons on the next most important product attributes until a single product remains. Whereas EBA eliminates alternatives by sequentially processing each product attribute, the satisficing strategy (SAT) sequentially processes each product in the order it appears in the list. That is, if a product fails to meet the predetermined cut-off value on any of its attributes, it is dropped from the list, and the next product is evaluated. Because each of these strategies has its own strengths and weaknesses, a consumer uses combinations of these strategies contingent on the characteristics of a task (Bettman et al., 1998); a brief computational sketch contrasting WADD and EBA is provided later in this section.

2.2.1.2 External-Evaluation Oriented Perspective

While the internal-attribute oriented perspective uses product attributes that are inherently given values representing an alternative, the external-evaluation oriented perspective uses online reviews and ratings that carry externally generated values representing others' opinions about an alternative, opinions that can in turn influence consumers. Online consumers' utilization of online product recommendations and reviews has received significant attention in IS research. For example, prior studies have found that the recommendations and reviews lessen buyers' uncertainties about products; consequently, they influence consumers' intentions to choose the recommended product as well as their decisions to use the online store (e.g., Bansal and Voyer, 2000; Benlian et al., 2012; Duhan et al., 1997; East et al., 2008; Hu et al., 2008; Kamis et al., 2008; Kumar and Benbasat, 2006; Park and Lee, 2009; Pavlou and Dimoka, 2006; Wang and Benbasat, 2009; Wang and Doong, 2010).

Although earlier studies have extended the knowledge of the utilization of recommendations and reviews, to date most have investigated a single source (e.g., either RAs, experts, or consumers) and a single type of advice (e.g., either reviews or recommendations), rather than multiple sources and multiple types (see Appendix A for a literature review). Only a few studies have investigated how consumers use reviews or recommendations from multiple sources (Baum and Spann, 2014; Li et al., 2010; Xu et al., 2017).
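To make the contrast between a comprehensive strategy and a heuristic strategy from Section 2.2.1.1 concrete, the following minimal Python sketch implements WADD and EBA. The alternatives, importance weights, and cut-off values are hypothetical and purely illustrative; the sketch is not part of the experimental system and only demonstrates the logic of the two strategies.

# Illustrative comparison of a comprehensive strategy (WADD) and a heuristic
# strategy (EBA). All alternatives, weights, and cut-offs are hypothetical.

alternatives = {
    "Laptop A": {"battery": 7, "memory": 8, "weight": 6},
    "Laptop B": {"battery": 9, "memory": 5, "weight": 8},
    "Laptop C": {"battery": 6, "memory": 9, "weight": 5},
}
weights = {"battery": 0.5, "memory": 0.3, "weight": 0.2}   # subjective importance
cutoffs = {"battery": 7, "memory": 6, "weight": 5}         # minimum acceptable values

def wadd(alternatives, weights):
    # Weighted adding: sum each attribute value weighted by its importance.
    scores = {name: sum(weights[a] * v for a, v in attrs.items())
              for name, attrs in alternatives.items()}
    return max(scores, key=scores.get), scores

def eba(alternatives, weights, cutoffs):
    # Elimination-by-aspects: process attributes from most to least important,
    # dropping alternatives that fall below the cut-off, until one remains.
    remaining = dict(alternatives)
    for attr in sorted(weights, key=weights.get, reverse=True):
        survivors = {n: a for n, a in remaining.items() if a[attr] >= cutoffs[attr]}
        if survivors:                 # simplification: keep at least one alternative
            remaining = survivors
        if len(remaining) == 1:
            break
    return next(iter(remaining))

print(wadd(alternatives, weights))           # WADD favours Laptop B
print(eba(alternatives, weights, cutoffs))   # EBA retains Laptop A

In this invented example the two strategies select different laptops, which illustrates why the choice of strategy matters for both decision quality and decision effort.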
Baum and Spann (2014) analyzed the interplay between online consumers' reviews and recommendations from a single source, i.e., RAs. They found that inconsistency between reviews and recommendations negatively influences consumers' purchasing decisions. However, their study did not examine exactly how the inconsistencies between the two types (e.g., reviews and recommendations) are interpreted and used as part of a decision-making strategy across decision-making processes. Xu et al. (2017) examined which types of recommendation sources – RAs, experts, and consumers – were more influential in consumers' product selection decision-making and, specifically, the impact of consensus among the sources on adopting the recommendation. Their results showed that a product commonly recommended by multiple advice sources (specifically, by RAs and experts) was more likely to be selected over others. However, they investigated only a single type of advice (e.g., recommendations), rather than multiple ones (e.g., both reviews and recommendations).

In addition, although consumers' decision-making consists of constructive and adaptive processes, to date most researchers have investigated the impact of decision-making strategies on decision performance without considering how exactly people use normative and/or heuristic approaches across these processes. For example, Li et al. (2010) studied the effectiveness of review sequencing in pre- and post-product screening stages; they found that placing the expert reviews before the consumer reviews led to higher decision performance. However, their study did not demonstrate how consumers' constructive and adaptive processes used expert and consumer reviews from normative and/or heuristic perspectives.

When there are multiple advice sources, external-evaluations of an alternative may differ among these advice sources (e.g., differences in ranking position in recommendations or rating scores in reviews). Thus, consumers are likely to face, and have to cope with, such conflicts in selecting products. In addition, due to the diverse and complementary characteristics (e.g., expertise, benevolence, preference-matching) of each advice source (Xu et al., 2017), consumers might want to use multiple advice sources in building a more comprehensive and reliable understanding of products and to use other sources in validating the recommendations and reviews available from any one source.

Because experts have high levels of product knowledge, their recommendations and reviews represent in-depth and comprehensive details of product performance. Consumers' recommendations and reviews can reflect their experience and satisfaction gained from product use. RAs provide recommendations that match each consumer's elicited product attribute preferences (Xiao and Benbasat, 2007). However, any recommendation or review source might provide deceptive recommendations or reviews benefiting certain online stores or manufacturers (Pfeiffer and Benbasat, 2012; Xiao and Benbasat, 2011; Xu et al., 2017). Hence, to validate the faithfulness and value of reviews or recommendations from a given source, it is advisable for consumers to use multiple sources instead of relying on just one (Xiao and Benbasat, 2015).
Thus, to the best of my knowledge, the studies to date have left two unanswered questions: (1) how and when do online consumers use diverse recommendations and/or reviews from multiple sources as part of their decision-making strategy; and (2) how can such utilization behaviors be categorized as decision-making strategies in the product selection process? Hence, to understand consumers' product selection decision-making, Study #1 must identify new strategies consumers use to manage diverse recommendations and reviews. To do so, this study applies an exploratory approach that uses verbal protocol analysis to collect data. My exploration via verbal protocol analysis of decision-making based on multiple advice sources could potentially lead to an update of classical consumer decision-making strategies.

2.2.2 Cognitive Dissonance Theory

Cognitive Dissonance Theory (Festinger, 1962) postulates that relevant but conflicting cognitions create an aversive motivational state. This state makes people either form a cognitive system of beliefs or change their least resistant beliefs to maintain a state of consonance (Gawronski, 2012; Harmon-Jones and Harmon-Jones, 2007). The cognitive consonance concept of this theory has been applied to three processes: 1) identifying products that deserve to be elaborated further; 2) preventing mis-assessments arising from presumably overlooked information; and 3) validating one's beliefs by pursuing cognitive consistency and avoiding cognitive inconsistency (Gawronski, 2012; Hoch and Ha, 1986; Koller and Salzberger, 2007; Lee et al., 2011; Nickerson, 1998; Pfeiffer and Benbasat, 2012; Quine and Ullian, 1978). First, consumers may apply cognitive consistency as a heuristic to identify the limited set of products that are worthy of consideration (Festinger, 1962; Gawronski, 2012; Harmon-Jones and Harmon-Jones, 2007), since according to the theory of limited cognitive capacity (Bettman et al., 1998; Chewning and Harrell, 1990; Lang, 2000), people are constrained in their ability to fully process all available information and assess performance, especially under conditions of high cognitive loads. Second, to prevent potential misappraisal of products, such as screening out high-quality alternatives, consumers may compare recommendations and reviews from multiple sources to pursue cognitive consistency (Gawronski, 2012; Quine and Ullian, 1978). That is, if a consumer is considering a product that is not in conformity, i.e., is in conflict, with other sources' recommendations and/or reviews, or a product that has not been examined or added into a consideration set but is highly rated by other sources, he or she is likely to deliberate on this product further to identify and minimize any mis-assessment. Third, to maintain the cognitive consistency of their beliefs after making a decision, consumers tend to seek and overvalue information confirming their choice while simultaneously avoiding and devaluing disconfirming information (i.e., confirmation bias) (Nickerson, 1998). That is, if consumers and other sources are consistent in their assessments of product quality, consumers would be more certain in their beliefs about and understanding of a product.
2.2.3 Information Search Process Model

Extant literature has revealed that the information search process is a major part of a decision-making strategy and has defined it as the consumer's constructive activity of finding meaning from product information in order to extend the state of knowledge on a particular product (Butler and Peppard, 1998; Johnson et al., 2004; Karimi et al., 2010; Klein, 1998; Kuhlthau, 1991; Li et al., 2010; Sproule and Archer, 2000). Hence, to explore and categorize new strategies in utilizing multiple advice sources, Study #1 applies the Information Search Process Model (Kuhlthau, 1991).

The Information Search Process Model (Kuhlthau, 1991) proposes six stages: initiation, selection, exploration, formulation, collection, and presentation. While the general sequence has been considered to be forward through the stages (Butler and Peppard, 1998), iterations and backward loops in the model exist between stages (Zellweger, 1997). The information search stages progress from the initiation of problem recognition to stages such as the selection of internal sources (e.g., memory) or external sources (e.g., recommendations, reviews), the exploration of the overall product category, the formulation of consideration sets, and the collection of details of each product. Since information asymmetry can be alleviated after purchasing, consumers can perceive the actual performance of products and express their satisfaction or dissatisfaction in the presentation stage by posting positive or negative feedback. The purchasing evaluation criteria developed during prior stages provide the basis for the next stage.

Extant literature has revealed that three of the stages – exploration, formulation, and elaboration (i.e., collection) – are inevitable in any context (Karimi et al., 2010). For instance, Li et al. (2010) proposed formulation as a key component in the information search process, defined as "the process of delineating attribute levels and filtering alternatives that fail to meet the criteria" (p. 3). In the exploration stage, consumers build an overall understanding of a product for deciding on further elaboration. In the formulation stage, consumers build a consideration set, namely, the set of products that the consumers find attractive and would like to keep in mind for further evaluation in the process of making a final decision (Roberts and Nedungadi, 1995). In the elaboration stage, consumers make an effort to build an in-depth understanding of a product in the consideration set for the product selection decision.

2.2.4 Building a Theoretical Framework

In these three phases of information search, consumers will have access to different sources and types of information. As recommendations and reviews represent different types of information that require the utilization of different extents of cognitive resources, information consistency across multiple advice sources is conceptualized as recommendation consistency and review consistency. Study #1 defines information consistency as the consumer's belief that there is agreement among multiple advice sources of recommendations and/or reviews concerning product quality. Moreover, as consumers can utilize recommendation or review consistencies in understanding products across the "exploration" and "elaboration" stages, each stage includes the utilization of both consistencies.
Therefore, this study will distinguish the utilization of consistency strategies between the “exploration” and the “elaboration” stages (see Figure 2.1).  Figure 2.1 Theoretical Framework (Study #1)    My theoretical framework provides a general overview of the information search process and allows us to explore and identify how recommendation consistency and review consistency are utilized across information search processes.   Thus, Study #1 postulates that:  Recommendation consistency and review consistency among multiple advice sources are embedded in consumers’ information search stages and utilized to: 1) identify a product that deserves to be examined,  14 2) minimize their mis-assessment of a product, and 3) support their belief and understanding of a product.  2.3 METHODOLOGY: PROCESS TRACING METHOD 2.3.1 Protocol Analysis: Concurrent Verbalization for Data Collection In their efforts to understand why and how changes are occurring in the decision-making process, IS researchers have examined changes in dependent variables. These examinations have used deductive and confirmatory approaches based on systematically varying the independent variables. The intervening process in these changes has been considered a “black box” and left unexplored. However, researchers need to open the black box and observe the process through an inductive and exploratory approach involving process tracing methods (Todd and Benbasat, 1987). Process tracing methods are considered in many disciplines an effective methodology to observe and access activities occurring between the onset of a stimulus and a response to it (Ericsson and Simon, 1993; Russo et al., 1989). As a whole, this approach could offer a more comprehensive means of evaluating and understanding the decision-making process to allow extraction of the appropriate information for design and evaluation of the IT artifact (Todd and Benbasat, 1987).   Among the varieties of process tracing methods, protocol analysis of the thought processes of a decision maker by using verbal cues has been considered as a method to access the stages of a decision maker’s information processing. Questions that might be answered include what information is examined, how the manipulations conducted on the input stimulus are processed, and what evaluations or assessments are made by the problem solver – all of which are major interests of IS studies (Ericsson and Simon, 1993; Todd and Benbasat, 1987). In addition, because most decision-aid systems are implemented and interact with users in the online shopping stores, video clips recording users’ activities on an experimental website give access to additional insight available from nonverbal cues. Therefore, protocol analysis recording of verbal and nonverbal cues during experimental tasks has been used in IS research (Burton-Jones and Meso, 2006; Ericsson and Simon, 1985; Kim et al., 2000).   Protocol analysis comprises retrospective and concurrent verbalization (Bouwman et al., 1987; Ericsson and Simon, 1993; Todd and Benbasat, 1987). The retrospective verbalization method acquires verbal cues from the long-term memory of problem solvers. This is done by asking them to “recall their processes” after a specific problem-solving task. A concurrent verbalization method gives simultaneous access to thought processes by asking problem solvers to “talk aloud” while performing the task. 
Given the different roles of short-term and long-term memory, retrospective verbalization could be distorted when problem solvers try to rationalize their behavior. It could even fail to represent concrete and detailed information that was not internalized in long-term memory but processed only in short-term memory. However, retrospective verbalization is a less obtrusive approach than concurrent verbalization, which could interfere with an ongoing problem-solving process. Given the strengths and weaknesses of each method, retrospective verbalization has been recommended for less sensitive and less complicated tasks. Concurrent verbalization is considered best for more sensitive and complicated tasks. In addition, to prevent intrusiveness that could alter a problem-solving process, Ericsson and Simon (1993) suggested an unobtrusive approach called "talk aloud." In "talk aloud", a problem solver is asked to speak only from the content of short-term memory. That is, during concurrent verbalization the researcher should not directly push problem solvers to explain why they are doing what they are doing, and the main contents of the task should not be pictorial representations that require a re-coding process to be understood (Ericsson and Simon, 1993; Todd and Benbasat, 1987).

Because examining diverse reviews and recommendations from multiple sources under potential conditions of information overload would require substantial time and cognitive resources, it would be critical to concurrently access the problem-solving process during the process of a purchasing decision. If the purchasing process is captured after finishing the task, the consumer could consciously or unconsciously re-create a distorted memory or rationalize both the purchasing process and the decision. Thus, to explore and capture the information and strategies used for product selection, this research uses a concurrent verbalization protocol analysis (i.e., "talk aloud") in a lab-experiment context instead of a retrospective verbalization protocol analysis.

2.3.2 Experimental Design

2.3.2.1 Design of the Online Shopping Store

An online store was specifically developed for the laboratory investigation, with two product categories – the laptop and digital camera – to investigate high and low product knowledge and ensure the generalizability of findings.1 To enhance mundane realism (i.e., shaping the similarity of experimental events to real experience, Singleton and Straits, 1999), my research selected the laptops and digital cameras sold on Amazon.com, a highly popular online store. Furthermore, real product recommendations and reviews from consumers and experts on Amazon.com, as well as from a well-known, third-party, professional electronic device review website (i.e., Cnet.com), were used. To provide realistic and valid reviews from both experts and consumers, I selected products having at least ten reviews from consumers on Amazon.com and at least one review from experts on Cnet.com. Consequently, I built two databases containing, respectively, 64 laptops and 64 digital cameras sold on Amazon.com, and added product attributes and reviews into these databases.

1 In a pretest, this study found a significant difference in product knowledge between the laptop and digital camera.
To control the amount of information contained in the reviews from experts and consumers, the average word count of reviews from each source was controlled to be around 250, which is the average word count of consumer reviews on Amazon.com.2

An RA that generates fit scores from the product preferences elicited from each participant was developed on the basis of WADD, which has been found to deliver better decision quality than other strategies (Bettman et al., 1998; Payne et al., 1988; Xu et al., 2017). To receive recommendations from the RA, participants first provided their preference values and an importance score for each attribute. Using this input, the RA generated a fit score3 for each product in the database and provided recommendations from highest to lowest fit score.

As this is an exploratory study, the presentation format of recommendations and reviews was adopted from Amazon. To allow participants to compare recommendations from different sources and to prevent the effects of different interface designs, recommendations from three sources were presented separately in the same format (see Figure 2.2). Participants could see the recommendations from the RA, experts, and consumers at the same time on the same display screen and freely choose their own sequence of viewing. To prevent any effects from the order in which recommendation sources were displayed, Study #1 randomized the placement of the three sources on the screen.

2 A one-way analysis of variance further revealed no significant differences in word counts between the review sources for each product category.

3 A laptop has eight attributes. Let $A_i$ represent the value of the $i$th attribute; its maximum value across all products is $\max A_i$ and its minimum value is $\min A_i$. A user selects his or her preference for the $i$th attribute ($P_i$) and the importance of the $i$th attribute ($w_i$). With this information, the FitScore of a laptop is: $\text{FitScore} = 5 - 5 \left( \sum_{i=1}^{8} \frac{|P_i - A_i|}{\max A_i - \min A_i}\, w_i \right) \Big/ \sum_{i=1}^{8} w_i$.

Figure 2.2 Recommendations in the Experimental Online Store

Each recommendation source provided its top five recommendations based on ratings and fit scores in the form of tables, including a product picture, product attributes, and hyperlinks to experts' and consumers' reviews (see Figure 2.3). When the participants clicked on the hyperlinks of reviews from each source, a pop-up screen containing a rating score and comments from experts or consumers appeared on the display. Participants could navigate freely between reviews by clicking hyperlinks. If participants wanted to see all products sold in the online store, they could choose the "all products" section. The products in the "all products" section were randomly sorted.

Figure 2.3 Reviews in the Experimental Online Store

The number of product attributes was based on the general rule of thumb of 7 plus or minus 2, offered by Miller (1956). The product attributes for laptops (e.g., price, hard drive, memory, processor, screen size, weight, battery, video card) and digital cameras (e.g., price, megapixel, memory, ISO, aperture, display size, weight, battery) were borrowed from online stores (i.e., Amazon.com, Cnet.com) (see Figure 2.4).
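For illustration, the fit-score computation in footnote 3 can be expressed as the short Python sketch below. The attribute values, preferences, and importance weights shown are hypothetical, and the sketch only approximates the logic of the experimental RA as reconstructed in the footnote; it is not the RA's actual code.

# Illustrative fit-score computation (hypothetical values). For each attribute i,
# the normalized distance |P_i - A_i| / (maxA_i - minA_i) is weighted by the
# importance w_i; the weighted mean distance is then mapped onto a 0-5 scale.

def fit_score(product, preferences, importances, attr_ranges):
    total_weight = sum(importances.values())
    weighted_distance = 0.0
    for attr, value in product.items():
        lo, hi = attr_ranges[attr]          # min and max over all products
        distance = abs(preferences[attr] - value) / (hi - lo)
        weighted_distance += importances[attr] * distance
    return 5 - 5 * weighted_distance / total_weight   # 5 = perfect match

# Hypothetical product with three (of eight) attributes, for brevity
product     = {"price": 1100, "memory": 8, "battery": 6}
preferences = {"price": 1000, "memory": 8, "battery": 8}
importances = {"price": 6,    "memory": 7, "battery": 5}
attr_ranges = {"price": (500, 2500), "memory": (2, 16), "battery": (3, 12)}

print(round(fit_score(product, preferences, importances, attr_ranges), 2))  # about 4.61

A perfect match on every attribute yields a fit score of 5, and the score decreases as the weighted, range-normalized distance between preferences and attribute values grows.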
Figure 2.4 Preferences Elicitation Interface of Recommendation Agent   2.3.2.2 Manipulation of Experimental Treatments To explore how online consumers use the similarities and/or differences of the ranking position in the recommendation and rating score in the review, Study #1 relied on two constructs: (i) recommendation consistency and review consistency. Recommendation consistency is operationalized as a binary variable representing whether the product is ranked among the top five recommendations by any two or more of the sources (see additional details below). Review consistency is operationalized as a continuous variable representing the differences in rating scores (out-of-five) in the reviews between experts and consumers;4 the smaller the difference, the higher the review consistency of the product.   To capture the potential impact on the decision-making process of diverse combinations of similarities and/or differences among the recommendation sources, this study generated four recommendation                                                4 82% of participants used rating scores in perceiving agreement between experts’ and consumers’ reviews.  19 consistency conditions. These conditions contained: (1) two common recommendations between the RA and experts, (2) two common recommendations between the RA and consumers, (3) two common recommendations between experts and consumers, and (4) no common recommendations among any two or three sources.5 Table 2.1 describes how the recommendation consistency conditions were implemented. Hence, a 4 (consistency) × 2 (products knowledge) factorial design with two between-subject factors was used to represent a variety of realistic contexts (see Table 2.2).6  Table 2.1 Manipulation of Recommendation Consistency Condition Manipulation Condition 1  - Recommendation Consistency between the RA and Experts The RA and experts had two commonly recommended products on either the second or third position in the ranking and fourth or fifth one. - When the RA’s five recommendations contained a product that was commonly recommended by consumers, this common product was swapped with the RA’s next recommendation (e.g., 6th). - When the RA’s five recommendations contained over two of the experts’ recommendations, the third one was swapped with the RA’s next recommendation (e.g., 6th). - When the RA contained 1 (or 0) of the experts’ recommendations, the second (or second and fourth) ranked experts’ recommendation was (or were) swapped with the RA’s second or third (or second or third and fourth or fifth) recommendation(s). Condition 2 - Recommendation Consistency between the RA and Consumers The RA and consumers had two commonly recommended products on either the second or third position in the ranking and fourth or fifth one. - When the RA’s five recommendations contained a product that was commonly recommended by experts, this common product was swapped with the RA’s next recommendation (e.g., 6th). - When the RA’s five recommendations contained over two of the consumers’ recommendations, the third one was swapped with the RA’s next recommendation (e.g., 6th). - When the RA contained 1 (or 0) of the consumers’ recommendations, the second (or second and fourth) ranked consumers’ recommendation was (or were) swapped with the RA’s second or third (or second or third and fourth or fifth) recommendation(s). 
Condition 3 - Recommendation Consistency between Experts and Consumers Experts and consumers had two commonly recommended products between the second position and fourth position. - When the RA’s five recommendations contained a product that was commonly recommended by either experts or consumers, this(these) common product(s) was(were) swapped with the RA’s next recommendation (e.g., 6th).                                                5  Because there are natural variances in the reviews from experts and consumers, review consistency was not manipulated. 6  Manipulations for product knowledge and recommendation consistency were successful. On average, the participants assigned into laptop and digital camera conditions had different levels of product knowledge: laptop conditions (m=4.88, SD=1.01) versus digital camera conditions (m=3.52, SD=1.11, t(62)=5.108, p<.001). Perceived recommendation consistency provided by recommendation consistency conditions (m=4.57, SD=1.48) significantly differ from non-recommendation consistency condition, i.e., when there are no common products in recommendations (m=2.81, SD=1.36, t(62)=4.196, p<.001).  20 Condition 4 - Non-Recommendation Consistency All three sources recommended five distinct products. - When an RA’s five recommendations contained a product that was commonly recommended by consumers and/or experts, this common product was swapped with the RA’s next recommendation (e.g., 6th) that was not equivalent with experts’ and consumers’ recommendations.  Table 2.2 Experimental Design  Product Knowledge High (Laptop) Low (Digital Camera) Recommendation Consistency RA and Experts Group 1 (8) Group 5 (8) RA and Consumers Group 2 (8) Group 6 (8) Experts and Consumers Group 3 (8) Group 7 (8) None Group 4 (8) Group 8 (8) Note: 64 participants were randomly assigned into the eight groups  2.3.2.3 Participants and Experimental Procedures To enhance the experimental realism and prevent the potential compounding effects of task involvement (Petty et al., 1983), for Study #1 I recruited 64 participants from a large public university in North America who were interested in purchasing a laptop or a digital camera within the next few months. This study randomly assigned eight participants to each of the eight conditions (see Table 2). Because protocol analysis provides rich data recorded from both verbal and nonverbal cues, it also requires extensive time and effort in data analysis. Even relatively small samples in each condition have been considered as large, expensive, and appropriate samples for protocol studies; my sample of 64 is comparatively very high (Bera et al., 2011; Burton-Jones and Meso, 2006; Ericsson and Simon, 1993; Kim et al., 2000; Todd and Benbasat, 1987).7 To motivate participants to fully engage in the task, every participant received a CAD20 honorarium. Participants’ demographics are summarized in Table 2.3.  Table 2.3 Demographics of Participants  Mean Standard Deviation Age 23.06 4.52 Gender Male 18 N/A Female 46 N/A Have purchased online? Yes 61 N/A No 3 N/A Purchases online during last year 10.72 15.11 Money spent online during last year CAD876.64 CAD1,383.65 Note: Sample size = 64. No missing data.                                                7 For instance, the sample size used by Bera et al. (2011) was 10, Burton-Jones and Meso (2006) was 57, and Kim et al. (2000) was 16.  21  The experimental procedures were as follows. 
First, prequestionnaires for perceived task involvement and product knowledge were administered to control for confounding effects. Next, participants were trained to “talk-aloud” – verbalizing every thought in their mind as if they were talking to themselves – using two standard training tasks (Ericsson and Simon, 1993). After participants fully understood how to “talk-aloud,” they were instructed on how to use the interfaces of the online store (e.g., eliciting personal preferences on product attributes and hyperlinks to read reviews from experts and consumers). Then, participants viewed a short video clip providing a “talk-aloud” example that used equivalent interfaces. To prevent a learning effect from the video clip, it did not contain verbalization of any information consistency and any decision-making strategies. After participants confirmed their understanding of “talk-aloud” and the online store interface, the main experimental task was administered. Participants were asked to select the best laptop or digital camera that interested them. All of the verbalizations and activities performed by the participants during the main task were recorded. After finishing the task, participants completed post-questionnaires measuring perceived recommendation consistency, perceived deception, and demographic information.    2.3.2.4 Measurement Items The measurement items are listed in Table 2.4, along with their sources. All measurement items have been validated by prior research work. The validity and reliability of measurement items were tested and found acceptable.   Table 2.4 Measurement Items  Construct Measurement Item Task Involvement (McQuarrie and Munson 1992) The product selection task that I have experienced in the website was (TI1) Irrelevant / Relevant to me. (TI2) Of no concern / Of concern to me. (TI3) Didn’t matter / Mattered to me. (TI4) Meant nothing to me / Meant a lot to me. (TI5) Unimportant / Important. Product Knowledge* (Eisingerich and Bell 2008, Sharma and Patterson 2000) (PK1) I possess good knowledge on laptops / digital cameras. (PK2) I can understand almost all the specifications (e.g., memory, hard drive / ISO, apertures) of laptops / digital cameras. (PK3) I am familiar with basic laptop / digital camera specifications (e.g., memory, CPU / ISO, megapixel).  22 Recommendation Consistency (Miranda and Bostrom 1993) (RC1) I realized that same product(s) were recommended in different sources’ top five recommendations. (RC2) I observed that different sources recommended the same product(s) in their top five recommendations. (RC3) I found that different sources tend to agree on what top five products should be recommended Perceived Deceptivensss (Grazioli and Jarvenpaa 2000) Overall, the Recommendation Agent is (PDe1) Genuine / Misleading (PDe2) Truthful / Deceptive (PDe3) Fair / Biased * Measurement items for these constructs were provided in accordance with the assigned condition (i.e., laptops and digital cameras).  To validate reliability, convergent validity, and discriminant validity of measurement items, confirmatory factor analysis was done using SmartPLS. Table 2.5 shows the descriptive statistics and composite reliability of the constructs. All composite reliabilities were greater than 0.7, the recommended cut-off (Barclay et al., 1995; Fornell and Bookstein, 1982). Thus, the reliability of the measurements seemed acceptable.   
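For readers unfamiliar with these indices, the following minimal Python sketch shows how the composite reliability reported in Table 2.5 and the average variance extracted (AVE) reported in Table 2.6 are conventionally computed from standardized item loadings. The loadings used here are hypothetical; the sketch illustrates only the standard formulas, not the SmartPLS procedure used in this study.

# Illustrative computation of composite reliability (CR) and average variance
# extracted (AVE) from standardized item loadings (hypothetical values).

def composite_reliability(loadings):
    # CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of item error variances)
    sum_l = sum(loadings)
    error = sum(1 - l ** 2 for l in loadings)
    return sum_l ** 2 / (sum_l ** 2 + error)

def average_variance_extracted(loadings):
    # AVE = mean of the squared loadings
    return sum(l ** 2 for l in loadings) / len(loadings)

loadings = [0.88, 0.91, 0.84]   # hypothetical three-item construct
print(round(composite_reliability(loadings), 3))        # about 0.909 (> 0.7)
print(round(average_variance_extracted(loadings), 3))   # about 0.769 (> 0.5)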
Table 2.5 Descriptive Statistics and Composite Reliability of Constructs Construct Mean Standard Deviation Composite Reliability Task Involvement (TI) 5.29 1.31 .968 Product Knowledge (PK) 4.20 1.38 .933 Recommendation Consistency (RC) 4.13 1.86 .770 Perceived Deceptiveness (PDe) 3.02 1.21 .826  Convergent validity is the extent of the relatedness of items that theoretically should be related. Convergent validity is assessed by individual item reliability, the composite reliability of the construct, and average variance extracted (AVE) (Barclay et al., 1995; Hu et al., 2004). Individual item reliability was assessed by examining the loadings of the measurement items on their corresponding construct; all the item loadings should be significant and exceed 0.7. All the composite reliability values exceeded 0.7, the recommended criterion (Barclay et al., 1995; Fornell and Bookstein, 1982), and AVE values exceeded 0.5, the generally accepted criterion (Hu et al., 2004) (see Table 2.6). Therefore, these results showed good convergent validity for the measurement items.  23  Table 2.6 Composite Reliability, AVE, and Correlation Among Constructs   CR AVE TI PK RC PDe TI .968 .858 .926    PK .933 .825 .509 .908   RC .770 .544 .067 -.177 .738  PDe .826 .630 .102 .109 .001 .794 Note: Composite Reliability = CR; Average Variance Extracted = AVE; Task Involvement = TI; Product Knowledge = PK; Recommendation Consistency = RC; Perceived Deceptiveness = PDe; Diagonal values are the square root of AVE  Discriminant validity is the degree of difference between a given construct and other constructs. Thus, the measurement items should be distinct from other constructs and load on their own construct. Discriminant validity was assessed by comparison of the square root of AVE and the correlations among constructs. To show good discriminant validity, all the square roots of the AVE should be greater than the off-diagonal elements in the corresponding rows and columns. This result indicates that the construct shares more variance with its measures than with others (Fornell and Bookstein, 1982). The diagonal values of Table 2.6, the square roots of AVE, exceed the correlations among constructs, demonstrating good discriminant validity for all of the constructs. Thus, all conditions for convergent and discriminant validity were satisfied.  2.3.3 Coding Scheme Development and Coding Procedures Verbal cues from concurrent verbalizations during the main task are the major source of data. Nonverbal cues from video clips that present participants’ activities in the experimental website are used as supplementary data for more comprehensive and complete tracing (Rist, 1989). To analyze the transcripts and video clips, Study #1 developed a coding scheme based on advice from Boyatzis (1998) and Ericsson and Simon (1993), and used an episode – small, self-contained phases of highly organized activity (Newell and Simon, 1972) – as a unit of verbal protocol in developing a coding scheme. Coding procedures are described in Figure 2.5.    24 Figure 2.5 Diagrams of Verbal Protocol Coding Procedures    25 This study focused on three major decision-making processes – information search, evaluation of alternatives, and selection decision – as the basic framework of guiding the initial coding scheme (Karimi et al., 2010). Chi (1997) suggested that a coding scheme can be developed based on the topic domain or the research questions being asked. 
Based on this suggestion, to capture and trace the roles of recommendation and review consistency across the exploration and elaboration stages in the information search process (see Figure 2.1), this study developed sub-stages of each process through categorizing concurrent verbalizations from two pretests with 20 participants in total.   Developing a coding scheme is an inductive and iterative process, involving multiple rounds with different data sets (Boyatzis, 1998) (see Figure 2.5). To improve objectivity and validate completeness and accuracy of the coding scheme, two coders – two graduate students who had investigated online consumer decision-making processes – classified the verbal protocols representing information search, evaluation of alternatives, and product selection. Then they classified the verbal protocols representing utilizations of recommendations, reviews, and consistency in each process. The coding scheme was updated when the two coders found an additional category that was relevant to the task but was not in the initial coding scheme. Another coder who has investigated online consumer behavior and decision-making for over four decades validated the completeness and accuracy of the updated coding scheme. These processes were iterated for five separate rounds. As no new categories arose after the first three coding rounds, this study concluded that theoretical saturation was reached (Strauss and Corbin, 1994) and confirmed my coding scheme after the fifth round, i.e., all three coders agreed that all the task-relevant verbal protocols could be classified into the coding scheme and there were no other relevant coding categories in understanding the product selection processes. The final coding scheme, with descriptions and examples, is listed in Table 2.7.  Table 2.7 Coding Scheme Category Description Example of Verbalization 1 Selecting a Recommendation Source - choosing a recommendation source to see its recommendations - Okay, I will start with the automated agent’s recommendation and for laptops.  - Okay, let's see the expert recommendation.  2 Eliciting Preferences - eliciting preferences (criteria) and importance on product attributes (including brand, color, type, etc.) as well as reasoning behind criteria selection - For weight, I definitely would want a lightweight camera just because if I'm going to use a camera instead of my phone. I want it to be light, small and easy to carry around. I would move the weight down to 300 grams. - So, this is a little laptop and I like to play video games a lot, so I like to have a higher memory card.  26 3 Examining Recommendations - examining product attributes of recommendations - understanding product attributes by analyzing efforts or feelings - CPU, 2.6 is good.  - The other thing I will go for look at for the battery. It’s almost around 7.0. I think that’s not a huge different here. So I go for the video card is almost the same. 4 Examining Reviews - examining comments and ratings of reviews - understanding comments and ratings by analyzing efforts or feelings  - “After a lot of research, I picked the Lumix GX-1 and was not disappointed. Light, capable, easy to use once you play with it for a while. The GX-1 really helped me spend more time on seeing the sights and less time complaining about 3+ lbs of camera hanging from my hip. I really have nothing but good things to say about this camera. I would recommend it wholeheartedly.” - I'm not quite sure what that would mean. I have no idea what this means either. 
5 Examining Recommendation Consistency - recognizing a common product between recommendations - Number five kind of reminds me of camera from number four. Looking back and forth, I noticed that they're the exact same model. - They recommended that one as well. Okay, it's automated, interesting. Do they have any customers’ that is really similar? Let's check what they say about this one. Is it exactly the same? It is, 1,000, 256 gigabytes, 384 MB, video, a video card, 2.6 gigahertz, 4 gigabytes memory, and 13.3 inches, 2 kilograms, and 6½ hours. Okay, so Expert just gave me the same one. Probably, yes. Okay. So there is not any difference on it.  - Let's see if there are any identical models. I'm just trying to see if anything really has coincided. 6 Creating a Consideration set - considering a product as one of alternatives for comparing and purchasing - adding a product in the cart, but not confirming the purchase - Yeah, it seems like the last one is probably the most worth of the ones that was on my first recommendations. - I'm going to keep number 2 in mind though.  - I probably go for… You can only select one digital camera, do you want to check out? No, Cancel.  7 Comparing Product Attributes - comparing one product’s attributes with other products’ attributes - The one on expert session did recommend me for the second choice so I will compare this to both the number 2 in Agent Recommendation and Expert Recommendation. Oh, the first thing I saw is the price. I think the Agent Recommendation price around 900. It’s a bit more cheaper compared to other one. I think maybe is not indicated good quality. - The price difference between two and four aren't that much. It's about $20 difference. I  27  To conduct the final coding for analyses, the 64 transcripts and video clips from the participants were provided to coders, different from those used in the earlier coding development process. To improve the objectivity and reliability of the coding, three coders were engaged in the coding process. Coders were graduate students with an interest in protocol analysis and product selection process.   The coding procedure was done during four stages: i) instruction, ii) main coding, iii) feedback, and iv) supplementary coding (see Figure 2.5). In the instruction stage, the general theoretical background of the product selection process and the coding scheme as a framework to understand sub-stages of the process are briefly explained. After coders fully understood the coding scheme, they were instructed on how to assign verbal protocols in a transcript into categories in the coding scheme by referencing a video clip of the transcript. Then, the coders were provided with sample-coding tasks to check their understanding. After two sample coding tasks, all coders were able to distinguish and assign verbal protocols to the categories of the coding scheme. To assure independent coding and prevent learning effect from transcripts and video clips, during instruction this study used transcripts and video clips from pretests. After the coding scheme instruction was completed, all the transcripts and video clips of the 64 participants were sent to each coder for the next stage: main coding.   should really look at what they offer and if that $20 difference matters. So for $20 more, I can get four megapixels more, the zoom will be the same; ISO and aperture, I don't know, so I'm going to skip that. Display size doesn't really matter to me and it's just a 0.3 inch difference. 
For the weight, however, for number four, it is more heavier and it has less hours. For $20 more I would get more megapixels, a lighter camera and more hours. 8 Comparing Written Reviews - comparing comments and ratings of two reviews  - So already the expert review gave a higher rating than number two. - Customer's ratings, oh, similar to experts.  - When I compare these two expert reviews... - I think expert doesn’t mentioned for the touch pad.  9 Choosing - deciding to choose a product and finishing the task - adding a product to the cart and confirming the product selection decision-making - Okay I'm going to settle for the HP Envy.  - You can always select, yes, I want to check out.   28 In the main coding stage, the coders were asked not to discuss any issues regarding the coding process or associated topics to assure independent coding. The main coding process took around three weeks. This study calculated the inter-coder reliability score proposed by Krippendorff (2004) to validate the objectivity, reproducibility, and reliability of the coding from three independent coders. As the calculated reliability score (0.83) exceeds the recommended cut-off value (0.70), this study concludes that reliability of the coding is assured (please see Step 2 in Figure 2.5).   In the feedback stage, the three coders met to discuss the completeness and accuracy of the coding scheme. They agreed that they could not find any verbal protocol that was relevant but did not fit to the coding scheme and they could not find a verbal protocol that represented multiple coding categories. Hence, Study #1 was able to assure validity and theoretical saturation of the coding scheme.   Lastly, in the supplementary coding stage, to improve traceability of participants’ review and recommendation processing of each product, an information source and a product name on each verbal protocol are marked. Thus, through such coding procedures, Study #1 was able to trace not only which product and source were selected during product selection process, but also when and how participants processed recommendations and reviews from a specific advice source.  2.4 IDENTIFYING CONSISTENCY STRATEGIES To explore how online consumers cope with similar and/or different external evaluations from multiple sources, Study #1 identified when and how participants examined recommendation and review consistencies. Examining recommendation consistency by a participant is inferred when a participant identified a common product in two recommendation sources.8 Examining review consistency is inferred when a participant identified agreement between experts’ and consumers’ reviews of the same product by sequentially examining them.  All the verbalizations from 64 participants are categorized in accordance with the coding schema. To analyze an individual’s decision-making process, I segmented verbalizations into sentences. Each sentence was categorized into a coding schema and organized chronologically. For simplicity, adjacent sentences categorized into the same coding category were grouped. This procedure identified not only whether                                                8 Although this study manipulated recommendation consistency between diverse sources (i.e., RAs and experts, RAs and consumers, and experts and consumers) and product knowledge (i.e., laptops versus digital cameras), there is no statistical difference in recommendation consistency utilization. Therefore, this study did not distinguish them in further analysis.  
29 individuals used recommendation and/or review consistencies but also when and how they used specific decision-making processes.   2.4.1 An Example of Identifying a Consistency Strategy The following are procedures to identify the consistency strategies using the coding schema. First, individual’s verbalizations are recorded and transcribed (see Table 2.8).    Table 2.8 Transcribing Verbalizations Transcription: Participant #37 Ok, so I’ll start with automated agent’s recommendations. Price range is important to me, so I’ll give that a 6, and I’ll pay usually up to 1300. Hard drive: I’m not sure what that means, so I’ll keep it at 1,000 and leave it at average. Video card:  Again, I’m not sure what a good video card is, so I’ll leave that at average and at 1500. Processor: I think the higher it is, the faster it is, so bring that up to 3.4 and give that a 6 for importance. For me, it’s important because I store a lot of stuff.  I’ll give that… 9 gigs should be enough, and importance, 7. Screen size: I like something not too big but not too small, so I’ll pick 15, and that’s a 7 for importance. Weight: I prefer something light, so I’ll pick grams, so 1 pound is 2.2 kilograms, so up to 1 poundish, so 2600 grams. That’s also very important to me, so I’ll give it a 7. Battery is also important. I prefer 8 hours, and I’ll give- that a 6. So, submit. So, they’re all…  The second one is pretty light, and then pricing also within my price range. So, Customer’s recommendation. The first one, battery is too low, so not that one; same with the 5th one. The middle three, 5, screen size is good. Price range is also similar, and hard drive, the 4th one is the biggest. So, the 4th one has the biggest hard drive. It’s also the lightest, and the 4th one looks good. Expert’s recommendation: So, the last one, they’re all pretty similar, but the 3rd one has the largest hard drive and then they’re all the same size. I guess number 3 for this one. It also appeared as number 4 in Customer recommendations. The review says it’s thin and light, which is good. Battery life could be better. It’s better than the MacBook Air. Another one…  So, another one that popped up is the Yoga 13, which is a Customer recommendation and an Expert. The review says it’s good.  It costs more than standard Ultrabooks with similar components, so not this one since this sounds overpriced. Ok, so, so far the S5-391-9880 looks good (among others). And something else. So, an automated agent’s recommendation: The screen is too… The first is too pricey. The second one, the screen is relatively too big. The third one is a bit heavy. And the fourth one, customer rating is only 3.6 out of 5, and the last one also. The X875q7390, excellent application and gaming performance. You’re paying for expensive extras. Yoga 13, very slim, very light, great screen, super fast. Easily the best laptop.  The Yoga one looks good, So probably I would get the Yoga 13 or the S5-391-9880. The difference is that one has higher battery life. The other one has higher gigabytes. So, memory, that one’s bigger. I guess I would pick, probably… The main difference is hard drive and memory. The bigger hard drive probably means it’s faster; The difference is that one has higher battery life. The other one has higher gigabytes. So, memory, that one’s bigger. I guess I would pick, probably… The main difference is hard drive and memory. 
The bigger hard drive probably means it’s faster, because price is relatively good, hard drive seems average and fast, not sure of a video card, CPU that sounds like the average speed, memory 4GB is probably enough, I believe, 13 window size is good, and weight compared to everything else seems good, and battery life is ok. So buy this one, yes.  Second, to categorize verbalizations into the coding schema and array them in time order, I segmented them into sentences. After adjacent verbalizations are categorized into same schema, they are grouped and numbered (see Table 2.9).      30 Table 2.9 Segmenting and Categorizing Verbalizations Line # Verbalization Coding* 1 Ok, so I’ll start with automated agent’s recommendations. 1 2 Price range is important to me, so I’ll give that a 6 and I’ll pay usually up to 1300. Hard drive: I’m not sure what that means, so I’ll keep it at 1,000 and leave it at average. Video card: Again, I’m not sure what a good video card is, so I’ll leave that at average and at 1500. Processor: I think the higher it is, the faster it is, so bring that up to 3.4 and give that a 6 for importance. For me, it’s important because I store a lot of stuff. I’ll give that…  9 gigs should be enough, and importance, 7. Screen size: I like something not too big, but not too small, so I’ll pick 15, and that’s a 7 for importance. Weight:  I prefer something light so I’ll pick grams, so 1 pound is 2.2 kilograms, so up to 1 poundish, so 2600 grams. That’s also very important to me, so I’ll give it a 7. Battery is also important. I prefer 8 hours, and I’ll give it that a 6. So, submit. 2 3 So, they’re all… The second one is pretty light, and then pricing also within my price range. 4 4 So, Customer’s recommendation: 1 5 The first one, battery is too low, so not that one; same with the 5th one. The middle three, 5, screen size is good.  Price range is also similar, and hard drive, the 4th one is the biggest. So, the 4th one has the biggest hard drive. It’s also the lightest and the 4th one looks good. 4 6 Expert’s recommendation: 1 7 So, the last one, they’re all pretty similar, but the 3rd one has the largest hard drive, and then they’re all the same size. 4 8 I guess number 3 for this one. It also appeared as number 4 in Customer recommendations. 6 9 The review says it’s thin and light, which is good. Battery life could be better. It’s better than the MacBook Air. 5 10 Another one… So, another one that popped up is the Yoga 13, which is a Customer recommendation and an Expert. 6 11 The review says it’s good. It costs more than standard Ultrabooks with similar components, so not this one because this sounds overpriced. 5 12 Ok, so, so far the S5-391-9880 looks good (among others). 7 13 And something else. So, an automated agent’s recommendation: 1 14 The screen is too… The first is too pricey. The second one, the screen is relatively too big. The third one is a bit heavy. And the fourth one, 4 15 customer rating is only 3.6 out of 5, 5 16 and the last one also. 4 17 The X875q7390, excellent application and gaming performance. You’re paying for expensive extras. Yoga 13, very slim, very light, great screen, super fast.  Easily the best laptop.  The Yoga one looks good, 5 18 So probably I would get the Yoga 13 or the S5-391-9880. 7  31 19 The difference is that one has higher battery life. The other one has higher gigabytes. So, memory, that one’s bigger. I guess I would pick, probably… The main difference is hard drive and memory. 
The bigger hard drive probably means it’s faster, 8 20 So I would probably pick the S5-391-9880 compared to everything else. 7 21 Because price is relatively good, hard drive seems average and fast, not sure of a video card, CPU that sounds like the average speed, memory 4GB is probably enough, I believe, 13 window size is good, and weight compared to everything else seems good, and battery life is ok. 4 22 So buy this one, yes. 10 * Coding number denotes following coding schema: 1 (selecting a source), 2 (eliciting preferences), 4 (examining recommendations), 5 (examining reviews), 6 (examining recommendation consistency), 7 (creating a consideration set), 8 (comparing product attributes), 10 (choosing)  Third, to validate transcribed verbalizations and clarify details such as which product and reviews were examined, I compared verbalizations with a video clip recording an individual’s decision-making process. Through this step, I added supplementary coding such as (i) recommendation source when selecting a source; (ii) ranking and source of examined product when examining a product; and (iii) review source when examining product reviews (see Table 2.10).   Table 2.10 Supplementary Coding for Clarification Line # Verbalization Coding* Product Ranking Rec. Source** Rev. Source*** 1 Ok, so I’ll start with Automated Agent’s Recommendations. 1  RA  2 Price range is important to me, so I’ll give that a 6, and I’ll pay usually up to 1300. Hard drive: I’m not sure what that means, so I’ll keep it at 1,000 and leave it at average. Video card: Again, I’m not sure what a good video card is, so I’ll leave that at average and at 1500. Processor: I think the higher it is, the faster it is, so bring that up to 3.4 and give that a 6 for importance. For me, it’s important because I store a lot of stuff. I’ll give that…  9 gigs should be enough, and importance, 7. Screen size: I like something not too big, but not too small, so I’ll pick 15 and that’s a 7 for importance. Weight:  I prefer something light, so I’ll pick grams, so 1 pound is 2.2 kilograms, so up to 1 poundish, so 2600 grams. That’s also very important to me, so I’ll give it a 7. Battery is also important. I prefer 8 hours and I’ll give it that a 6. So, submit. 2    3 So, they’re all… The second one is pretty light, and then pricing also within my price range. 4 2 RA  4 So, Customer’s recommendation: 1  CU   32 5 The first one, battery is too low, so not that one; Same with the 5th one. The middle three, 5, screen size is good. Price range is also similar, and hard drive, the 4th one is the biggest. So, the 4th one has the biggest hard drive. It’s also the lightest and the 4th one looks good. 4 1 CU  5 CU  3 CU  4 CU  6 Expert’s recommendation: 1  EX  7 So, the last one, they’re all pretty similar, but the 3rd one has the largest hard drive and then they’re all the same size. 4 5 EX  3 EX  8 I guess number 3 for this one. It also appeared as number 4 in Customer recommendations. 6 3 EX  4 CU  9 The review says it’s thin and light, which is good. Battery life could be better. It’s better than the MacBook Air. 5 4 CU CU 10 Another one… So, another one that popped up is the Yoga 13, which is a Customer recommendation and an Expert. 6 2 CU  2 EX  11 The review says it’s good. It costs more than standard Ultrabooks with similar components, so not this one since this sounds overpriced. 5 2 CU EX 12 Ok, so, so far the S5-391-9880 looks good (among others). 7 4 CU  13 And something else. 
So, an automated agent’s recommendation: 1  RA  14 The screen is too… The first is too pricey. The second one, the screen is relatively too big. The third one is a bit heavy. And the fourth one, 4 1 RA  2 RA  3 RA  4 RA  15 customer rating is only 3.6 out of 5, 5 4 RA CU 16 and the last one also. 4 5 RA  17 The X875q7390, excellent application and gaming performance. You’re paying for expensive extras.  5 1 CU EX  Yoga 13, very slim, very light, great screen, super fast. Easily the best laptop. The Yoga one looks good, 2 CU CU 18 So probably I would get the Yoga 13 or the S5-391-9880. 7 2 CU  4 CU  19 The difference is that one has higher battery life. The other one has higher gigabytes. So, memory, that one’s bigger. I guess I would pick, probably… The main difference is hard drive and memory. The bigger hard drive probably means it’s faster, 8 2 CU  4 CU  20 so I would probably pick the S5-391-9880 compared to everything else. 7 4 CU  21 Because price is relatively good, hard drive seems average and fast, not sure of a video card, CPU that sounds like the average speed, memory 4GB is probably enough, I believe, 13 window size is good, and weight compared to everything else seems good, and battery life is ok. 4 4 CU  22 So buy this one, yes. 10 4 CU
* Coding number denotes following coding schema: 1 (selecting a source), 2 (eliciting preferences), 4 (examining recommendations), 5 (examining reviews), 6 (examining recommendation consistency), 7 (creating a consideration set), 8 (comparing product attributes), 10 (choosing)
** Rec. Source denotes following sources of recommendations: RA (automated recommendation agents), EX (experts), CU (customers).
*** Rev. Source denotes following sources of reviews: EX (experts), CU (customers).

Fourth, to understand the role of recommendation and review consistency as part of decision-making strategies, I examined consumers’ processes before and after they examined consistencies. Specifically, I focused on the verbalizations that described perceptions of recommendation and review consistencies, behavioral and cognitive responses, and the reasoning behind those reactions. This procedure was applied to all participants’ verbalizations to represent their decision-making processes.

Fifth, to identify general decision-making strategies, I grouped participants who utilized recommendation and review consistencies in similar ways (see Table 2.11).

While most examples of participants using consistency strategies are long and complicated, I selected a simple case to clarify my analysis. Participant #37 perceived recommendation consistency between the fourth product recommended by Customers (CU4) and the third recommended by Experts (EX3), a product she had not added into her consideration set when she examined the Experts’ recommendations. As soon as the participant realized the consistency, she deliberated over the product with customer reviews and added it into her consideration set. That is, she changed her assessment of the product after perceiving the consistency. After further examination of other alternatives, she chose the common product (CU4) as her final decision. Participant #28 showed a similar decision-making process.
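To make this grouping step concrete, the snippet below is a hypothetical illustration (not the analysis script actually used in Study #1): it encodes a participant’s protocol as an ordered list of (schema code, product) steps, mirroring the sequences in Table 2.11, and checks whether consistency for a product was perceived before or after that product entered the consideration set, which is the distinction separating the exploration-stage from the elaboration-stage strategies.

```python
# Hypothetical sketch: classify how a participant used recommendation consistency.
# Schema codes follow Table 2.9: 6 = perceiving consistency, 7 = adding into a consideration set.

def classify_consistency_use(steps, product):
    """Return whether consistency for `product` was perceived before or after it was added."""
    consistency_at = next((i for i, (code, p) in enumerate(steps)
                           if code == 6 and p == product), None)
    added_at = next((i for i, (code, p) in enumerate(steps)
                     if code == 7 and p == product), None)
    if consistency_at is None:
        return "no consistency perceived for this product"
    if added_at is None or consistency_at < added_at:
        return "before consideration set (e.g., seeking/deliberating)"
    return "after consideration set (e.g., anchoring/adhering)"

# Simplified excerpt of Participant #37's sequence from Table 2.11
p37 = [(1, None), (2, None), (4, "RA2"), (1, None), (4, "EX3"),
       (6, "CU4"), (5, "CU4"), (7, "CU4"), (8, None), (10, "CU4")]

print(classify_consistency_use(p37, "CU4"))
# -> before consideration set (e.g., seeking/deliberating)
```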
Table 2.11 Integrating Individual Decision-Making Process for Generalization

Participant #37 (Schema # – Procedure – Product #): 1 source selection; 2 preference elicitation; 4 product examination; 1 source selection; 4 product examination (EX3); 1 source selection; 4 product examination; 6 perceiving consistency (CU4 & EX3); 5 deliberating product (CU4); 6 perceiving consistency; 5 deliberating product; 7 adding into a consideration set (CU4); 1 source selection; 4 product examination; 5 review examination; 4 product examination; 5 review examination; 7 adding into a consideration set; 8 comparing product attributes; 7 adding into a consideration set; 4 product examination; 10 choosing (CU4)

Participant #28 (Schema # – Procedure – Product #): 1 source selection; 2 preference elicitation; 4 product examination (RA2); 5 review examination; 1 source selection; 4 product examination; 7 adding into a consideration set; 1 source selection; 4 product examination; 6 perceiving consistency (EX2 & RA2); 5 deliberating product (EX2); 7 adding into a consideration set (EX2); 4 product examination; 1 source selection; 4 product examination; 1 source selection; 5 review examination; 8 comparing product attributes; 10 choosing

Through this procedure, Study #1 captured common decision-making processes across the utilization of multiple sources and reduced individuals’ complicated decision-making processes to a simplified representation, as shown in Figure 2.6.

Figure 2.6 Visualizing Recommendation Consistency Strategy: Deliberating

2.4.2 Overall Summary of Consistency Strategies
Data analysis shows that 81% of participants examined either recommendation or review consistencies: 39% examined recommendation consistency, 72% examined review consistency, and 30% examined both. The other 19% of participants utilized classical decision-making strategies, mainly heuristic strategies (e.g., EBA, SAT), without paying attention to consistency (see Table 2.12)9.

9 Because participants assigned to control groups were not able to utilize recommendation consistency strategies, they utilized review consistency strategies and classical decision-making strategies (e.g., EBA, SAT).

Table 2.12 The Number of Participants Examining Consistency (Review Consistency × Recommendation Consistency)
No review consistency: 12 (18.8%) without recommendation consistency; 6 (9.4%) with recommendation consistency; 18 (28.1%) overall
Review consistency: 27 (42.2%) without recommendation consistency; 19 (29.7%) with recommendation consistency; 46 (71.9%) overall
Overall: 39 (60.9%) without recommendation consistency; 25 (39.1%) with recommendation consistency; 64 (100%)

Based on the theoretical framework shown in Figure 2.1, this research explores how consumers utilized recommendation or review consistency as part of their decision-making strategy, and thereby identifies six consistency strategies – three for the exploration stage and three for the elaboration stage (see Tables 2.13, 2.14, and 2.15).
Table 2.13 Consistency Strategies across the Exploration and Elaboration Stages in Information Search Processes
Recommendation consistency, proactive approach: Seeking (exploration stage); Anchoring (elaboration stage)
Recommendation consistency, reactive approach: Deliberating (exploration stage); Adhering (elaboration stage)
Review consistency: Confirming (exploration stage); Validating (elaboration stage)

Table 2.14 Description of Consistency Strategies
Recommendation Consistency
Seeking Strategy: a consumer’s proactive attempt to find a common product and examine it before adding the product into a consideration set
Anchoring Strategy: a consumer’s proactive attempt to find a common product in recommendations from another source after having examined a product and having added it into a consideration set
Deliberating Strategy: a consumer’s reactive attempt to identify and assess overlooked or presumably misappraised information concerning a newly identified common product that was previously examined but not added into a consideration set
Adhering Strategy: a consumer’s reactive attempt to keep examining detailed or focused information about a common product already in a consideration set after identifying the product in recommendations from another source
Review Consistency
Confirming Strategy: a consumer’s attempt to ensure that a product has consistent reviews or review rating scores from multiple sources before it is added into a consideration set
Validating Strategy: a consumer’s attempt to find more detailed information about a product having consistent reviews or review rating scores from multiple sources after adding the product into a consideration set

Table 2.15 The Number of Participants Utilizing Consistency Strategies (Review Consistency Strategy × Recommendation Consistency Strategy; Seeking and Deliberating occur before adding into a consideration set, Anchoring and Adhering after)
None: 12 none, 2 seeking, 2 deliberating, 2 anchoring, 0 adhering; 18 overall
Confirming: 14 none, 3 seeking, 4 deliberating, 3 anchoring, 0 adhering; 24 overall
Validating: 4 none, 0 seeking, 0 deliberating, 3 anchoring, 1 adhering; 8 overall
Both: 9 none, 0 seeking, 2 deliberating, 1 anchoring, 2 adhering; 14 overall
Overall: 39 none, 5 seeking, 8 deliberating, 9 anchoring, 3 adhering; 64 total (13 before adding into a consideration set, 12 after)

The classification followed the earlier coding process; the coders were asked to assign each participant’s decision-making process to one of seven categories – the six consistency strategies and a non-consistency strategy. The main coding took around two weeks. As the calculated reliability score (0.79) exceeds the recommended cut-off value (0.70) (Krippendorff, 2004), I conclude that the reliability of the classification is assured (see Step 4 in Figure 2.5).

2.4.3 Recommendation Consistency Strategies
2.4.3.1 Seeking Strategy (Before Creating a Consideration Set)
In the process of making their product selection decisions, some participants proactively examined recommendation consistency before examining any alternatives. They started with the intent of “seeking” recommendation consistency to identify a product that deserved to be further examined. In the seeking strategy, one began by simultaneously focusing on all three recommendation sources and looking for any common products among the recommended ones. Upon finding such a common product, the features (attributes) of that product were scrutinized first. If they were found satisfactory, the product was added to a consideration set for further and more focused evaluation (see Figure 2.7).
38  Figure 2.7 Recommendation Consistency Strategy: Seeking    39 For example, Participant #12 used a seeking strategy. After disclosing her preferences, she first clicked on all three sources (i.e., RA, Experts, and Consumers). Without first examining any alternatives, she searched for any common products among three sources. As soon as she found two common products, she started examining their attributes. Because one of the two products fitted her needs, she added it into her consideration set. After using a seeking strategy, she examined all other alternatives from top to bottom by checking product attributes and reviews. However, she did not consider review consistency. After adding two more alternatives into her consideration set and comparing them to the common product identified in the beginning, she decided to choose the common product as her final choice. Her key verbalizations are described in Table 2.16. Another example with Participant #51 is depicted in Table 2.17.  Table 2.16 Key Verbalizations: Seeking Strategy (Participant #12) Line # Category Verbalization 1 Selecting a Source “Hm, here is the automated agent. Click it.” 3 Selecting a Source “Then, Experts.” 4 Selecting a Source “Okay, Customers too” 5 Examining Recommendation Consistency “Let's see if there are any identical models. I'm just trying to see if anything really has coincided.” “The second in the agent and fourth are same to experts. Interesting!” 6 Examining Recommendations “The first one is $1,400. The hard drive is a bit small, 128 gigs, compared to the other choices here anyways. Its battery life is 9 hours and it appears quite light. The big drawback to this one is the price, somewhat, and the hard drive size.” 9 Examining Recommendations “Let's take a look at the other option. The other one is 13.3 inches with 6-hour battery life. It is $1,200 so that's about $200 dollars less.” 11 Creating a Consideration Set “Then I guess I would like the second one.” 38 Examining Recommendations “The last one, $1,500 bucks. Too expensive, not consider it.” 41 Choosing “That's going to be option 2 from the automated agent.”   Table 2.17 Key Verbalizations: Seeking Strategy (Participant #51) Line # Category Verbalization 1 Selecting a Source “I'll start with customer's recommendations.” 2 Selecting a Source “I'm going to try with expert's recommendations.” 3 Selecting a Source “And check out the automated agents.”  40 5 Examining Recommendation Consistency “It appears that they're quite different from each other but my list is more similar to the customer recommendations.” 6 Examining Recommendations “Let me look. It's so small.” 8 Creating a Consideration Set “I'll pick the first one because both of the list have the top first, the same model.” 9 Choosing “I’ll just pick it. Okay.”  With exploratory investigation, the use of the seeking strategy is explained by relying on the theory of limited cognitive capacity (Bettman et al., 1998; Chewning and Harrell, 1990; Lang 2000). The major motivation is to reduce a wide variety of alternatives into a manageable size by using the heuristic of identifying products that have recommendation consistency. Participants initially invested their limited cognitive capacity into identifying products that are common to more than one source (i.e., consistency), since that act required less cognitive resources and the resulting products would presumably be of higher quality.   
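From a design perspective, the screening that seeking-strategy participants performed by eye is straightforward to support in software. The snippet below is a minimal, hypothetical sketch (the product lists are invented, reusing model names from the transcripts) of how a decision aid could surface products that appear in more than one source’s recommendation list:

```python
from collections import Counter

# Hypothetical sketch: find products recommended by more than one advice source,
# i.e., the recommendation consistency that seeking-strategy participants look for.

recommendations = {  # invented example lists
    "RA":        ["S5-391-9880", "Yoga 13", "M5", "Z935-P300", "X875q7390"],
    "Experts":   ["Aspire V5", "S5-391-9880", "XPS 13", "U300s", "T430"],
    "Customers": ["Yoga 13", "EliteBook", "S5-391-9880", "Envy 14", "Vaio T"],
}

counts = Counter(p for recs in recommendations.values() for p in set(recs))
common = sorted((p for p, n in counts.items() if n >= 2), key=lambda p: -counts[p])

print(common)  # ['S5-391-9880', 'Yoga 13'] -> candidates for further examination
```

A tool along these lines would automate only the identification step; whether the common products actually fit the shopper’s preferences still has to be examined, as the participants did.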
2.4.3.2 Anchoring Strategy (After Creating a Consideration Set) Another group of participants also proactively examined recommendation consistency but only after examining alternatives and after adding a few of them into a consideration set. In the anchoring strategy, participants proactively accessed recommendations from other sources to seek products that were the same as (i.e., consistent) with what they had already added into their consideration set based on examining alternatives from another source. If they found such a common product, they used recommendation consistency to support their belief about a product and became more confident in their previous choice(s). Otherwise, they continued examining other products (see Figure 2.8).     41 Figure 2.8 Recommendation Consistency Strategy: Anchoring    For example, Participant #27 used an anchoring strategy. After providing preferences, she first clicked on the RA. After examining the recommendations of the RA, she added two alternatives into her consideration set after first examining Experts’ and Consumers’ reviews. Then, she clicked on all other sources, Experts  42 and Consumers, and checked whether those other sources also recommended these two products. Because she was assigned to the condition in which there were two common products between the RA and Consumers, one of her choices was recommended by Consumers. After finding recommendation consistency, she decided on the common product as her final choice without further examination of other alternatives. Although this participant did not examine other alternatives exclusively recommended from other sources, some other participants using the anchoring strategy examined other alternatives after identifying recommendation consistency. Key verbalizations are described in Table 2.18. Another example using Participant #54 is depicted in Table 2.19.  Table 2.18 Key Verbalizations: Anchoring Strategy (Participant #27) Line # Category Verbalization 1 Selecting a Source “First, I want to go down with the automated agent's recommendations.” 9 Examining Recommendations “Let's see how weight is it. It's 420 grams, so it's heavier I guess.” 11 Examining Recommendations “The price is mediocre. It should be less than $400 or so. How much is $400, $450, $350, $400 here? Megapixels, I'll prefer 18, quite important. Optical zoom 25 should be fine. Weight should be fine as well. Battery, it's cool because I can always bring extra batteries with me. ISO, this one, aperture, display size.” 14 Creating a Consideration Set “Then I guess I would like the DMC-LX5. Yeah.” 21 Creating a Consideration Set “So I will keep considering for Sony and Lumix.” 22 Selecting a Source “Now, for customer recommendations.” 23 Selecting a Source “And experts.” 24 Examining Recommendation Consistency “Which one does it repeat? This is the one, Lumix.”  “Consumers have Lumix.” 25 Choosing “Well, this one is number 1 (in my mind). I want this one. Okay.”   Table 2.19 Key Verbalizations: Anchoring Strategy (Participant #54) Line # Category Verbalization 1 Selecting a Source “Okay. Now I've click on the expert recommendation first to see ...” 8 Examining Recommendations “For the number 2 one, it has ... Actually, the number 1 and number 2 one has lower hard drive but I don't think I need that much.” 9 Creating a Consideration Set “Probably number 1 is fine for me.”  43 10 Selecting a Source “I will see the automate agent's recommendation right now.” 14 Examining Recommendation Consistency “For sure the ... 
Isn't that the one that I choose earlier? Comes to the number 5 one, is it? Yes. It was the one.” 20 Creating a Consideration Set “Probably, I'll go for the NP900X3A-A03US, the 1500 one.” 21 Selecting a Source “So how to see what the customer says.” 22 Examining Recommendation Consistency “The customer said ... It doesn't comes up with the one I choose from the auto agent's recommendation.” 43 Choosing “I'll click on one laptop (NP900X3A-A03US), yes and I want to check out with this.”  The use of the anchoring strategy is explained by relying on Cognitive Dissonance Theory (Festinger, 1962; Gawronski, 2012; Nickerson, 1998). The major motivation was to support one’s confidence of product quality by finding and interpreting information in ways that confirm existing beliefs, or expectations. Hence, participants utilized the anchoring strategy to test their hypothesis that the quality of a product already included in their consideration set was satisfactory. They were seeking recommendation consistency to be more confident of the quality of the product. They were also using consistency as a heuristic instead of performing a full in- depth examination of all the attribute data of the alternatives in the consideration set, in line with the predictions of the theory of limited cognitive capacity (Bettman et al., 1998).       2.4.3.3 Deliberating Strategy (Before Adding into a Consideration Set) A deliberating strategy is a consumer’s reactive attempt to identify and assess overlooked or presumably misappraised information about a newly identified common product that was previously examined but not added into a consideration set. Those using a deliberating strategy started their information search process without an initial intention of identifying common products. However, when they realized they had identified the same product as one they had seen or examined earlier from another source (but not previously added to their consideration set), they now deliberated on this common product (i.e., re-examined product attributes or reviews). This deliberation was an effort to minimize their potential misappraisal of a product by exploring whether they had previously overlooked product information10 (see Figure 2.9).                                                     10 The proportion of participants who re-examined a product they had previously examined were more likely to apply a deliberating strategy (65.22%) than any other strategy (28.3%) (exact binomial one-tailed, p < .01).   44 Figure 2.9 Recommendation Consistency Strategy: Deliberating   For example, Participant #3 (Table 2.20) used a deliberating strategy. After her preferences were elicited, she first clicked on the RA. After examining recommendations, she added an alternative into her consideration set. As the second source, she chose Experts and examined the alternatives from the top down. Because she was assigned to the condition in which there were two common products between the RA and Experts, there were two common alternatives. However, none of them had been added into her consideration set. During her examination of alternatives from top to bottom, she realized there was a common product among the RA and Experts. As soon as she realized this recommendation consistency, she carefully deliberated on this alternative and added it into her consideration set. 
Although this participant  45 did not choose the common product as her final choice, a few participants using the deliberating strategy chose the common alternative after adding it into their consideration sets. Key verbalizations are described in Table 2.20. Another example is Participant #37 as depicted in Table 2.21.  Table 2.20 Key Verbalizations: Deliberating Strategy (Participant #3) Line # Category Verbalization 1 Selecting a Source “I'm going to see the automated agent recommendations first.” 4 Examining Recommendations “They're recommending DMC and, what are these brands? It's very light. It's relatively cheap compared to the other ones.” 9 Examining Recommendations “The Sony one, I don't really like the way it looks. This is looks like an old-school ʼ70s camera.” 13 Creating a Consideration Set “I'll go for the Lumix and since it's only priced $140.” 22 Selecting a Source “In that experts recommendations,” 23 Examining Recommendations “Out of all these … I'll look at the Cannon first since it's under $200. The battery life is averaged pretty low compared to the other ones similar to the Lumix, the megapixel about the same level, weight, it's pretty light.” 27 Examining Recommendation Consistency “Number five kind of reminds me of camera from number four. It is Sony. Looking back and forth, I noticed that they're the exact same model.” 28 Examining Recommendations “Out of the expert recommendation and automated recommendation, I'll review the Sony again. The battery, Sony is a little better. They weight is the same. The display size is the same and aperture, don't know. ISO, I don't know. Zoom is the same and megapixels, Sony is higher. Based on the battery and the megapixels, even though the Sony is $29 more and I've been using Sony products for a long time.” 29 Creating a Consideration Set “So I will keep considering for Sony and Lumix.” 42 Choosing “I'm going to go with the Lumix camera over the mix.”   Table 2.21 Key Verbalizations: Deliberating Strategy (Participant #37) Line # Category Verbalization 4 Selecting a Source “So, Customer’s recommendation” 5 Examining Recommendations “The first one, battery is too low, so not that one; same with the 5th one. The middle three, 5, screen size is good.  Price range is also similar, and hard drive, the 4th one is the biggest. So, the 4th one has the biggest hard drive. It’s also the lightest and the 4th one looks good.” 6 Selecting a Source “Expert’s recommendation”  46 7 Examining Recommendations “So, the last one, they’re all pretty similar, but the 3rd one has the largest hard drive, and then they’re all the same size.” 8 Examining Recommendation Consistency “I guess number 3 for this one. It also appeared as number 4 in Customer recommendations.” 9 Examining Recommendations “The review says it’s thin and light, which is good. Battery life could be better. It’s better than the MacBook Air.” 12 Creating a Consideration Set “Ok, so, so far the S5-391-9880 looks good (among others).” 20 Creating a Consideration Set “So I would probably pick the S5-391-9880 compared to everything else.” 22 Choosing “So buy this one, yes.”  The use of the deliberation strategy is explained by relying on Cognitive Dissonance Theory (Festinger, 1962; Gawronski, 2012). 
While participants were more focused on developing an overall understanding of recommended products in order to find attractive ones to add to their consideration set, they were made aware of a presumable misunderstanding in their cognitive system of belief by identifying recommendation consistency of a product (i.e., identified as common to more than one source) that had not been added into their consideration set after an earlier examination. Hence, to reduce uncertainty of product quality and minimize misunderstanding in their cognitive system of belief, participants now re-examined the common product after reactively identifying recommendation consistency.   2.4.3.4 Adhering Strategy (After Adding into a Consideration Set) In the reactive adhering strategy, when participants identified that another source had also recommended a product that, based on a previous source, they had already added into their consideration set, they stopped examination of the other products in the consideration set. They concentrated instead on this common product and engaged in searching for more in-depth information about it. In other words, although participants had intended to examine other products in their consideration sets in search of a better-quality selection, they interpreted identification of recommendation consistency of a given product as a cue to adhere to their choice of this product that was already in their consideration set (see Figure 2.10).      47 Figure 2.10 Recommendation Consistency Strategy: Adhering   For example, Participant #34 used the adhering strategy. After providing preferences, she began with experts’ recommendations. After examining them, she added three alternatives into her consideration set. Next, she chose RA and examined the alternatives there from top to bottom. In the middle of this examination, she realized that the one of the alternatives in her consideration set was also recommended by RA. Rather than continue to examine other alternatives in the consideration set, she engaged in an in-depth examination of the common product and confirmed her choice in adding it into the consideration set. After examining other sources, she chose the common product as her final choice. Although this participant chose the common product as her final decision, a few participants using an adhering strategy selected other products after making comparisons. Key verbalizations are described in Table 2.22. Another example of Participant #64 is depicted in Table 2.23.     48 Table 2.22 Key Verbalizations: Adhering Strategy (Participant #34) Line # Category Verbalization 1 Selecting a Source “First I will go for the Expert’s recommendation.” 3 Examining Recommendations “All the first four are all 4GB, but the last one, like 8GB is double storage, but I think when I look at the picture, it’s not that good. It’s so big, so heavy for me to carry on each school day, so I rule out the number 5 choice. The other thing I will go for look at for the battery. It’s almost around 7.0. I think that’s not a huge difference here. So I go for the video card is almost the same.” 9 Creating a Consideration Set “I will choose the number one maybe… and the middle one, and the fourth one.” 10 Selecting a Source “I will go to see the automated agent recommendation.” 16 Examining Recommendation Consistency “They recommended that one as well. Okay, it's automated, interesting. Do they have any customers’ that is really similar? Let's check what they say about this one. Is it exactly the same? 
It is, 1,000, 256 gigabytes, 384 MB, video, a video card, 2.6 gigahertz, 4 gigabytes memory, and 13.3 inches, 2 kilograms, and 6½ hours. Okay, so Expert just gave me the same one. Probably, yes. Okay. So there is not any difference on it. Hm, this is what I chose. I chose a nice one.” 23 Creating a Consideration Set “So after comparing this to session, the one I want to choose is the number 2 on the Expert recommendation.” 24 Selecting a Source “So maybe I will take a look at the Customers recommendation.” 25 Examining Recommendations “The customer says the number 1 is the X875q7390, the price is around 1500. The battery is only 1.5 hours. It’s so little.” 46 Choosing “After comparing all these five. Okay, choose this one. So I will go to choose the number 3 for the Expert recommendation, same one in the automated agent recommendation.”   Table 2.23 Key Verbalizations: Adhering Strategy (Participant #64) Line # Category Verbalization 1 Selecting a Source “I'm going to automated agents recommendations as it is the first one on my left hand side.” 12 Examining Recommendations “It is actually slightly cheaper than the first one recommended.” 13 Creating a Consideration Set “That's interesting. I'm kind of leaning towards that one a little bit more.” 24 Selecting a Source “Let's just see what the expert recommendations look like and how they line up.” 25 Examining Recommendation Consistency “That's interesting on the ranked. My second choice, selected by the automated agent, is the third overall from the expert recommendations. That is suggestive and it's actually a fairly good choice.”  49 27 Examining Recommendations “so I'm going to quickly look at the top one from the experts and see what the review there says, because I'm curious why that one has been chosen. Okay. It's not actually significantly higher than the one I had. I think mine was 4.15. This is 4.25.” 39 Choosing “I have made my decision and that would be the camera I would pick. So I'm going to put it in my cart. Yes. I want to check out. Okay.”  The use of the adhering strategy is explained by relying on Cognitive Dissonance Theory (Festinger, 1962; Gawronski, 2012; Quine and Ullian, 1978) and the Information Search Process Model (Kuhlthau, 1991); cognitive consistency was utilized to reassure oneself of the quality of a product already in the consideration set and interpret common product recommendation to confirm one’s belief of its quality.   2.4.4 Review Consistency Strategies 2.4.4.1 Confirming Strategy (Before Creating a Consideration Set) Because reviews provided more detailed information than recommendations, consumers used not only recommendations but also reviews to expand their understanding of products by consulting opinions from experts and consumers (Pavlou and Dimoka, 2006). Although some participants used recommendation consistency as part of their decision-making strategies, some of them used review consistency by comparing both experts’ and consumers’ reviews. For example, a group of participants added, or kept, products in their consideration sets that had high review consistency. That is, they tried to identify and make sure that a product had consistent reviews or review rating scores from multiple sources before adding it into their consideration set. After confirming product quality through review consistency, they began searching for in-depth information (see Figure 2.11).     50 Figure 2.11 Review Consistency Strategy: Confirming   For example, Participant #49 used a confirming strategy. 
After providing her preferences, she looked at the RA’s recommendations. When examining recommendations from top to bottom, she compared review rating scores from experts and consumers. By comparing rating scores, she looked for alternatives that had high ratings from both experts and consumers. Among the five recommendations from the RA, she chose an alternative that had the highest review consistency and examined its product attributes and review comments. She repeated this process with recommendations from other sources. Key verbalizations are described in Table 2.24. Another example with Participant #8 is depicted in Table 2.25.     51 Table 2.24 Key Verbalizations: Confirming Strategy (Participant #49) Line # Category Verbalization 1 Selecting a Source “So I'm going to try the automated agent's recommendations first. I'm going to click on the try me.” 3 Examining Recommendations “I see different camera options. The prices are all...the top few are really expensive.” 5 Comparing Reviews “When I compare these two expert reviews... Customer's ratings, oh, similar to experts, 4 out 5, 4.4, 4 and 3.2, 4.4 and 4.5, 3 and 4.5. Mediocre.” 6 Creating a Consideration Set “I think that the fourth would be the one that (I am going to consider)...” 7 Examining Recommendations “It’s really cheap. I wonder why that is. The lightest also has the most zoom, and whatever the ISO aperture is. Seems to be higher than the other thing, or on the higher end. The battery's a little lower but I think 220 hours is still pretty good.” “Maybe I should look at the reviews. Great pictures, very easy to use, small size makes it easy to fit in pocket. Do you recommend, oh, what did I click? Do you recommend the camera for point-and-shoot type applications? Extra...I don't need the extra stuff.  This seems pretty good. I am very satisfied, small size; could leave this clip. Like if the camera is used, I hear no noise from zoom mechanism and the movie audio generally plays...  Autofocus... Smaller size, more convenient, disappointed doesn't include separate battery charger. The charger is built into camera, power supplied by USB cable. Uses battery. This camera has the Panasonic charger and accessory charger plus plugs into AC wall receptacle that holds a battery being charged away from...” 8 Selecting a Source “I am going to look at Expert recommendation and click ‘show list’ and then see what the Expert recommends.” 35 Choosing “I'll click on this small Canon camera, good reviews, and good price. Okay.”   Table 2.25 Key Verbalizations: Confirming Strategy (Participant #8) Line # Category Verbalization 2 Selecting a Source “I will try automated agent’s recommendations clicking on try me.” 15 Examining Recommendations “This one would be… Optical zoom, 20 times.” 18 Examining Recommendations “I will begin clicking on the review by the experts for camera number two on the automated agent’s recommendations. The Panasonic Lumix has an excellent design, features including ultra wide angle, 20 times zoom runs, 3PS and slightly manual, manual shooting modes as well as fast shooting performances and improved film like photo quality from previous versions by using all of that is 20s, high performance features shows it’s near-pointless touch screen, can really cut into battery life. Also photos are noisy and self one-viewed at 100%. The bottom line, lens might be the main attraction but the camera is an all around excellent. Okay... 
Capture… all of the shooting and control options… improved low-light photo quality.”  52 19 Examining Recommendations “Okay. Right now, I’m clicking on the customer’s recommendation for camera number two, the Panasonic Lumix by the automated agent’s recommendations. This camera takes some very good pictures and video. Takes really good pictures, panoramic shots are awesome. I had to send it to Panasonic Service Center for repairs upon return - still not sure what the problem was. Just returned from a cance trip where I left the DSLR and video camera at home and used this as my primary photo and video camera. The best way I can say it is WOW. This camera takes some very good pictures and video. No. It's not goint to match that of high end full body equpment, but if you're... hm hm hm... telephoto that while it does loose some saturation and sharpness, is very functional. stabilization works well, nice color, does a lot of things in full auto well. boots up fast, focuses well, video quality is adequate to very good, low light level performance is reasonable, it's manual functions are decent, and it all fits in a shirt pocket and costs under $300. My only complaint? White balance indoors, but that's easy enough to get around with the flash or manual calibration.” 20 Comparing Reviews “And now I’ll compare them (expert and customer review on 2nd one). They are very close.” 21 Creating a Consideration Set “It’s a runner-up to my favorite camera for now which is camera number two from the automated agent’s recommendations.” 27 Selecting a Source “I will now move on to the expert recommendations.” 36 Selecting a Source “I will also be going on to read about the customers recommendations for the different cameras.” 39 Creating a Consideration Set “Okay, that Sony Cyber-shot has caught my attention and I will be comparing it with the Canon PowerShot Elph HS again with the expert’s review.” 49 Choosing “Since I already own a Canon PowerShot, I have decided to go with the Sony Cyber-shot and see what wonders it will give to my pictures. You can always select, yes, I want to check out.”  As Cognitive Dissonance Theory (Festinger, 1962; Gawronski, 2012) posits, by pursuing cognitive consistency and reducing cognitive inconsistency, consumers are motivated to reduce a wide variety of alternatives and identify products that deserve to be added to a consideration set for further focused information seeking.   2.4.4.2 Validating Strategy (After Creating a Consideration Set) Although a group of participants used review consistency to screen out alternatives, others used review consistency to validate their previous decision to add an alternative into their consideration set. Because review inconsistency of an alternative in the consideration set indicates a misunderstanding or poor evaluation of advice sources, participants were motivated to find products with high review consistency that validated the quality of the products in their consideration set and then assigned priority to further scrutiny of such products. Thus, a validating strategy refers to a consumer’s attempt to find consistent reviews or review rating scores from multiple sources after adding the product into a consideration set.  53 Without examining the review consistency of an alternative before adding it into a consideration set, they started using review consistency to validate a product’s quality after they had already added it to their consideration set (see Figure 2.12).  
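Both the confirming and the validating strategy ultimately rest on the same comparison: how far apart the expert and customer rating scores for a product are. The following is a minimal, hypothetical sketch of that comparison; the ratings and the 0.5-point tolerance are invented for illustration, loosely echoing the scores read out in Table 2.26.

```python
# Hypothetical sketch: flag products whose expert and customer ratings agree,
# the comparison underlying the confirming and validating strategies.

ratings = {  # invented (expert_rating, customer_rating) pairs on a 5-point scale
    "S5-391-9880": (4.5, 4.4),
    "Yoga 13":     (4.1, 3.0),
    "Z935-P300":   (4.0, 3.1),
}

THRESHOLD = 0.5  # assumed tolerance; any cut-off is a design choice

for product, (expert, customer) in ratings.items():
    gap = abs(expert - customer)
    label = "consistent" if gap <= THRESHOLD else "inconsistent"
    print(f"{product}: expert {expert}, customer {customer} -> {label} (gap {gap:.1f})")
```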
Figure 2.12 Review Consistency Strategy: Validating    54  For example, Participant #39 used a validating strategy. After eliciting preferences, she began with RA, then went to Experts and then to Consumers. When examining recommendations from top to bottom, she chose one alternative from RA and two alternatives from Experts. With these three alternatives, she then examined reviews and compared rating scores from Experts and Consumers. Finding no review consistency, she re-examined the alternatives and eliminated one of the three from her consideration set. Key verbalizations are described in Table 2.26. Another example is Participant #31 as depicted in Table 2.27.  Table 2.26 Key Verbalizations: Validating Strategy (Participant #39) Line # Category Verbalization 1 Selecting a Source “I'm going to try the automated agent's recommendation first.” 3 Examining Recommendations “Let me see, the price is $1,100 for sale. It is not the most expensive, which I think is okay.” 8 Creating a Consideration Set “By comparing, just by a big overview, I am going to go into the number 1 because that seems to set with my preferences.” 9 Selecting a Source “I'm going to go into Expert recommendations. Press ‘show list,’” 10 Examining Recommendations “Then it gives me number 1 M5 laptop. The screen is self pay for the picture. I don't really like this, but probably even though everything is okay. Weight seems okay, I guess. Screen size 14 inches, memory 4 GB, radio card 1,000 MB.” 15 Creating a Consideration Set “I think I'm going to go with this one. It is the Z935-P300.” 16 Selecting a Source “I now press ‘customer recommendations.’” 17 Examining Recommendations “I am going to look at the third one. Actually, the third one is the most expensive one, and it stays 8.4 hours, that was good.” 21 Creating a Consideration Set “I think I will prefer the one that is the number 3, the Yoga-13.” 22 Comparing Reviews “Let me check ratings. The first in the agent is 4.1 and 3 out of 5. The second in the Expert is 4.5 and 4. Hm, sounds good. Last customers, 4, and 3.1. Bad… I will not consider it.” 23 Examining Recommendations “The customer review is that… I was very happy with this one, very slim, very fast, good CPU, and enough RAM memory. Would recommend this product without Hesitation. I bought a netbook, and it was so slow… Blah blah. It says it weighs  2.5 pounds. Actually, it’s like my former traveler computer. Some reviews are bit earlier… Right now, I think that this one is okay.” 30 Choosing “Overall, I will like to buy the Toshiba one.”   Table 2.27 Key Verbalizations: Validating Strategy (Participant #31) Line # Category Verbalization 6 Selecting a Source “Okay, so maybe I’ll just check the automated agent recommendations.”  55 10 Examining Recommendations “Okay. I guess the first one doesn’t look so good.  So I’m not going to choose this one.” 15 Selecting a Source “But I’ll just see what the customers say about these computers.” 24 Examining Recommendations “Let’s see, it’s 4GB. I think that’s good enough. Yeah, I think that’s good enough … Batteries, screen size.” 33 Creating a Consideration Set “I think I’ll just … I’ll have to go with this.” 34 Examining Recommendations “Experts said, Acer Aspire S5, yeah, because it’s very thin and light. It’s even lighter than a MacBook.” 35 Examining Recommendations “Let’s check what the customer say about this computer. 
Yeah, see, the customer rating is not too low as well.” 36 Comparing Reviews “Comparing to Expert rating … Okay, it’s higher like the expert rating.” 39 Choosing “Yeah, I think I’ll just go with this one.  It seems like a good choice … Checkout.”   As reviews provide more detailed information than recommendations, consumers can expand their understanding of the product using opinions from experts and consumers in the elaboration stage in which consumers try to build an in-depth understanding of a product already included in a consideration set for the product selection decision. However, since review inconsistency of a product in the consideration set indicates mis-assessment of product quality in the exploration stage, as Cognitive Dissonance Theory (Festinger, 1962; Gawronski, 2012) and the Information Search Process Model (Kuhlthau, 1991) posit, participants were motivated to find a product having a cue (i.e., review consistency) supporting their decision to add a product into a consideration set and assign priority to the further scrutiny of such a product.  2.5 THEORETICAL PERSPECTIVES FOR TRIANGULATION Although Study #1 identified consistency strategies across information search stages, my explorations need to be interpreted and supported through a theoretical lens. Behavioral decision theories have shown that conflicting cognitions (i.e., cognitive dissonance) is a major determinant of avoiding and/or accepting potentially biased information. Moreover, the information search process is a major part of a decision-making strategy to extend the state of knowledge of a particular product (Butler and Peppard, 1998; Johnson et al., 2004; Karimi et al., 2010; Klein, 1998; Kuhlthau, 1991; Li et al., 2010; Sproule and Archer, 2000). Hence, to triangulate and validate my understanding of consistency strategies, this study uses Cognitive Dissonance Theory and the Information Search Process Model as its theoretical lenses. In addition, confirmation bias (Nickerson, 1998) and bandwagon effects (Bikhchandani et al., 1992) are used for further triangulations. A summary of recommendation and review consistency strategies with theoretical triangulations is presented in Table 2.28.   
Table 2.28 A Summary of Consistency Strategies with Theoretical Triangulations

Seeking Strategy. Heuristic cue: recommendation consistency. Activation: before adding a product into a consideration set. Utilization of consistency: a consumer’s proactive attempt to find a common product and examine it before adding the product into a consideration set. Cognitive motivation: to reduce numerous alternatives into a manageable number by relying on the consistency heuristic. Theoretical triangulation: Cognitive Dissonance Theory; Information Search Process Model; Bandwagon Effect.

Deliberating Strategy. Heuristic cue: recommendation consistency. Activation: before adding a product into a consideration set. Utilization of consistency: a consumer’s reactive attempt to identify and assess overlooked or presumably misappraised information concerning a newly identified common product that was previously examined but not added into a consideration set. Cognitive motivation: to reduce uncertainty of product quality and minimize misunderstandings in the cognitive belief system by relying on the consistency heuristic. Theoretical triangulation: Cognitive Dissonance Theory; Information Search Process Model; Bandwagon Effect.

Anchoring Strategy. Heuristic cue: recommendation consistency. Activation: after adding a product into a consideration set. Utilization of consistency: a consumer’s proactive attempt to find a common product in recommendations from another source after having examined a product and having added it into a consideration set. Cognitive motivation: to find evidence that reassures oneself of the quality of a product already in the consideration set, instead of performing a full in-depth examination of all the attributes. Theoretical triangulation: Cognitive Dissonance Theory; Information Search Process Model; Confirmation Bias.

Adhering Strategy. Heuristic cue: recommendation consistency. Activation: after adding a product into a consideration set. Utilization of consistency: a consumer’s reactive attempt to keep examining detailed or focused information about a common product already in a consideration set after identifying the product in recommendations from another source. Cognitive motivation: to reduce uncertainty of product quality and minimize misunderstandings in the cognitive belief system by relying on the consistency heuristic, and to find evidence that reassures oneself of the quality of a product already in the consideration set. Theoretical triangulation: Cognitive Dissonance Theory; Information Search Process Model; Confirmation Bias.

Confirming Strategy. Heuristic cue: review consistency. Activation: before adding a product into a consideration set. Utilization of consistency: a consumer’s attempt to make sure that a product has consistent reviews or review rating scores from multiple sources before it is added into a consideration set. Cognitive motivation: to reduce a variety of alternatives into a manageable size by using the heuristic of identifying products that have consistency. Theoretical triangulation: Cognitive Dissonance Theory; Information Search Process Model; Bandwagon Effect.

Validating Strategy. Heuristic cue: review consistency. Activation: after adding a product into a consideration set. Utilization of consistency: a consumer’s attempt to find more detailed information about a product having consistent reviews or review rating scores from multiple sources after adding the product into a consideration set. Cognitive motivation: to find evidence that reassures oneself of the quality of a product already in the consideration set, instead of performing a full in-depth examination of all the attributes. Theoretical triangulation: Cognitive Dissonance Theory; Information Search Process Model; Confirmation Bias.

2.5.1 Cognitive Dissonance Theory and the Information Search Process Model
In accordance with Cognitive Dissonance Theory and the Information Search Process Model, consistency strategies can be conceptually categorized by whether recommendation and review consistencies are used in the exploration or elaboration stage to reduce cognitive dissonance.
For example, in my research, one group of participants used consistency to build an overall understanding of a product for a decision on further elaboration before adding the product into their consideration set; another group used consistency as evidence that supports their in-depth understanding of a product after adding the product into their consideration set. That is, an individual’s utilization of consistencies is determined by the interplay of a required cognitive resource in the information search processes, one’s limited working memory, and computational capabilities – which is the key assumption in behavioral decision theories (Bettman et al., 1998; Chewning and Harrell, 1990; Lang, 2000).   For example, in the seeking strategy and confirming strategy (i.e., before a consideration set is formed), the major motivation is to reduce numerous alternatives to a manageable number by identifying products that have consistency. Participants initially invested their limited cognitive capacity into identifying products that are common to more than one source (i.e., consistency), because these products would presumably  58 have higher quality than others and using consistency requires fewer cognitive resources to find potentially good recommendations.   In the anchoring and validating strategies (i.e., after a consideration set is formed), the major motivation is to support one’s confidence in a product’s quality by finding and interpreting information in ways that confirm existing beliefs or expectations. Hence, participants used an anchoring strategy to test the hypothesis that the quality of a product already included in their consideration set is satisfactory. They sought consistency to build their confidence in the quality of the product(s) in their consideration set. They also used consistency as a heuristic instead of performing a full in-depth examination of all the data on the attributes of the alternatives in the consideration set; this is in line with the predictions of the theory of limited cognitive capacity (Bettman et al., 1998). That is, cognitive consistency is used to reassure oneself of the quality of a product already in the consideration set and to interpret common product recommendations to confirm one’s belief of the product’s quality.   In the deliberating strategy, although participants were more focused on developing an overall understanding of recommended products in order to find attractive ones to add to their consideration set, they were now made aware of a potential misappraisal by identifying recommendation consistency of a product (i.e., identified as common to more than one source) that had not been added into their consideration set after an earlier examination. Hence, to reduce uncertainty about product quality and minimize misunderstanding in their cognitive belief system, participants now re-examined the common product after reactively identifying recommendation consistency.   In the adhering strategy, participants were motivated to find and accept self-supporting evidence. To confirm their previous choice of adding an alternative into a consideration set, individuals tended to accept consistency as valid evidence that justifies the collection of an in-depth information about the alternative. However, inconsistency in the adhering strategy caused cognitive dissonance and made participants avoid alternatives for which no confirmatory evidence was found.   
2.5.2 Confirmation Bias and Bandwagon Effect
In addition to Cognitive Dissonance Theory and the Information Search Process Model, there are alternative theories that might explain consistency strategies in using multiple sources. For example, literature in various fields has shown that cognitive biases can distort consumers’ decision-making processes. Among various cognitive biases, confirmation bias — the tendency to search for, interpret, or recall information in a way that confirms one’s beliefs — provides a theoretical lens through which to understand anchoring, adhering, and validating strategies (Nickerson, 1998). Participants used these strategies after adding a product into a consideration set as a way to test their hypothesis that the quality of the product already included in their consideration set is satisfactory and/or to build their confidence in the quality of the product. However, confirmation bias cannot explain the other strategies: seeking, deliberating, and confirming. People who used these strategies had not yet formed the hypothesis that the quality of a product would be satisfactory, nor added the product into their consideration set. For example, those using the deliberating strategy changed their previous belief that the quality of a product was bad and therefore added it into their consideration set.

The bandwagon effect provides a theoretical lens through which to understand the seeking, deliberating, and confirming strategies that are used before adding a product into one’s consideration set. The bandwagon effect refers to a psychological phenomenon in which people do something primarily because other people are doing it, without regard for their own beliefs, which they may choose to ignore or override (Bikhchandani et al., 1992). People using seeking, deliberating, and confirming strategies relied on consistency to build an overall understanding of alternatives and added common products into their consideration sets, even if the common product was not the best match to their preferences. However, the bandwagon effect cannot explain consistency strategies used after adding a product into a consideration set. For example, participants using anchoring, adhering, and validating strategies added a common product that specifically matched their preferences and ignored another common product that did not match. That is, they did not ignore or override their beliefs simply because other people were doing something else.

2.6 DISCUSSION
2.6.1 Theoretical Implications
Study #1 has both theoretical and practical implications. From the theoretical perspective, it first identified the decision-making strategies used in an environment of multiple recommendation and review sources. This is important because almost all previous research has primarily focused on the impact of such information from a single source. To date, online consumers’ utilization of diverse recommendations and reviews from multiple sources has been largely ignored. Recent IS studies have recognized the need to examine the utilization and impact of multiple sources on product selection decision-making performance (Baum and Spann, 2014; Li et al., 2010; Xu et al., 2017). To the best of my knowledge, this is the first investigation to identify the use of both recommendation and review consistencies between multiple sources across the exploration and elaboration stages in the information search process.
Through concurrent verbal protocol analysis, I explored online consumers’ decision-making processes and identified four  60 recommendation consistency strategies (seeking, anchoring, deliberating, and adhering) and two review consistency strategies (confirming and validating) that more than 81% of participants used during the product selection process. The remaining 19% of participants used classical decision-making strategies, mainly heuristic strategies (e.g., EBA, SAT). Seeking and confirming strategies are used to identify a product that deserves further focused examination. People invest their limited cognitive capacity in identifying products that are common to more than one source (i.e., consistency) because presumably these products are of higher quality than others and require fewer cognitive resources to find potentially good recommendations. Consumers use a deliberating strategy to correct their misappraisal of a product. When they initially fail to add a product to their consideration set only to find it is recommended by other sources, they carefully re-examine its attributes and reviews and add it into their consideration set. Anchoring, adhering, and validating strategies are used to support people’s beliefs about and understanding of a product. After adding a product to their consideration set, people tend to find evidence to test their hypothesis that the quality of a product already in their consideration set is satisfactory and/or become more confident of the quality of the product. As people encounter consistency, they are likelier to choose this common product as their final decision. Thus, the results of Study #1 will help researchers better understand online consumers’ process of product selection and their utilization of recommendations and reviews from multiple sources.   Second, I examined the intervening processes — the so-far unexplored black box of decision-making processes. Although extant literature emphasizes the need to understand the decision-making process in extracting appropriate information for design and evaluation of decision-aid systems (Todd and Benbasat, 1987), few studies have examined the decision-making process in terms of the use of multiple sources or types (Lee et al., 2011; Xu et al., 2017). Through a rigorous coding procedure consisting of multiple rounds and multiple coders, my results provide guidelines for using concurrent verbalization as an exploratory approach for theory building. They also demonstrate the implications of a process-tracing approach by revealing the differential impact of recommendation and review consistency strategies across the exploration and elaboration stages in information search processes. Because the using of recommendation and review consistencies is an intrinsic process occurring in an unobservable decision-making process, concurrent verbalization is a valid method to trace and examine the black box that has been deductively assumed to be present.   2.6.2 Practical Implications From a practical perspective, I improve the understanding of online consumers’ processes of product selection decision-making, which in turn forms a basis for designing better decision aids. Accordingly, online stores should help consumers (by appropriate decisional support functionalities under the users’  61 control) identify recommendation and review consistency. 
They can accomplish this by highlighting recommendation consistency between multiple sources and/or providing the differences between review rating scores at appropriate stages during the information search process. Extant literature shows that positive reviews on products and sellers increase consumers’ intention to purchase the product from those sellers and their willingness to pay a premium price. However, the results of Study #1 show that even if one source provides a positive review, it might not be sufficient for consumers when they read inconsistent reviews from other sources or recommendation rankings that are not aligned with the reviews. Although this does not mean that recommendation and review consistency is more influential than its valence, online stores need to be encouraged to provide tools to identify recommendation and review consistency. Interestingly, one outcome of the availability of such consistency-identification tools might be to reduce the incidences of reactive strategies (deliberating and adhering) that correct previous misconceptions and in turn lead to a more efficient and effective product selection decision.   2.6.3 Limitations and Future Research Despite the implications of this study, the results of Study #1 have several limitations. First, the use, roles, and impact of recommendation and review consistency are examined without either a public or social context. This could be seen as a conservative test in the laboratory. Future research should consider how the recommendations and review consistency strategies are influential in two situations. One is an online shopping context with social interactions – such as online collaborative shopping or an online social shopping network (OSSN) – where recommendations and reviews from friends, family members, and acquaintances need to be considered). Another is with non-search goods and experiential goods (e.g., apparel, hotels, restaurants).   Second, the participants in this study are undergraduate and graduate students who may not precisely represent the overall population of online shoppers. However, because the participants have the potential to become heavy users (Kim et al., 2013) and 95% of the participants have had previous experience in online shopping, the use of students is not a significant threat to external validity (McKnight et al., 2002).   Third, using an exploratory approach, I discovered new consistency strategies from concurrent verbalizations and present cognitive processes by using consistency strategies. However, I did not investigate the impacts of consistency strategies on individual’s decision-making. I encourage future studies to investigate the impacts of consistency strategies on decision-making performance using behavioral decision theories.    62 Fourth, in exploring the utilization and impact of recommendation and review consistency strategies across the exploration and elaboration stages of the information search process, the designing and implementing of decisional aids supporting consistency strategies would be an associated research topic. Because previous research on the RA design proposed that the strategy restrictiveness of the RA would decrease consumers’ intention to use decision aids (Wang and Benbasat, 2009), future research should consider how to design decision aids supporting recommendation and review consistency strategies across the information search process. 
In addition, although my experimental website design allowed the users to compare recommendations from multiple sources, there could be a better design of decision aids in support of consistency strategies. For example, to support utilization of consistency strategies, online shopping stores can provide a graphical representation that shows the extent of consistency between sources. Future research should consider which kind of interface design is the best or most appropriate way to present recommendations and reviews from multiple sources and whether the utilization and impact of different consistency strategies are associated with information search processes. Lastly, my sample size of 64 may seem low for studies using a solely input-output (and not process tracing) type of data collection methods. However, my sample size is considerably larger compared with typical process tracing studies published in leading IS journals (Burton-Jones and Meso, 2006; Kim et al., 2000; Todd and Benbasat, 1987).      63 CHAPTER 3: SUPPORTING ONLINE CONSUMERS BY IDENTIFYING CONSISTENCY DISTANCE AMONG ADVICE SOURCES (STUDY #2)  3.1 INTRODUCTION As multiple advice sources concurrently become available in online stores, consumers face the challenge of deciding how to utilize such wide ranging, and possibly conflicting, sets of information in making their product selection decisions (Xu et al., 2017). To alleviate uncertainty by using these multiple advice sources while simultaneously trying to reduce the cognitive overload due to diverse information from these advice sources, consumers strategically utilize consistency among advice sources as a key heuristic cue to cope with these competing challenges (Kim and Benbasat, 2013; Xu et al., 2017). Study #1 (Chapter 2) investigated consumers’ use of consistency strategies when facing multiple advice sources and identified how they are utilized across information search stages.   However, these consistency strategies are user-driven, not system-supported; that is, consistency seeking was conducted “manually” by the consumer. Manual consistency-seeking behaviors are cognitively effortful, suggesting a need to design decision aids (i.e., consistency distance identification tools) to help consumers identify consistency across advice sources and to guide them when they need to consider consistency in making product selection decisions (Wang and Benbasat, 2009). The consistency distance identification tools (CDITs) refer to decision aids that calculate and present consistency distance as graphical representations of consistency among advice sources in order to support online consumers’ utilization of consistency strategies.   According to the findings of Study #1, online consumers compare the difference among rating scores of advice sources to identify consistency. Therefore, in designing the CDITs, consistency distance is operationalized as a continuous variable to capture granularity of consistency among advice sources. Consistency distance is conceptualized as the extent of objective disagreement, reflected by differing ratings, among multiple advice sources of recommendations and/or reviews representing product quality and/or fit. When consistency distance is larger, there is more objective disagreement among advice sources (i.e., inconsistency). When it is smaller, there is less objective disagreement among advice sources (i.e., consistency).  
To measure consistency distance, Study #2 (hereafter also referred to as ‘this study’ in this chapter) applied a Euclidean metric (Deza and Deza, 2009) that identifies a straight-line distance between points in  64 Euclidean space. By mapping multiple advice sources’ rating scores representing product quality and/or fit to Euclidean space, consistency distance is measured as a straight-line distance among advice sources. Overall, consistency distance represents a measure of the extent of objective disagreement among advice sources concerning product quality and/or fit.  In addition, in accordance with the Information Search Process Model (Kuhlthau, 1991) that postulates multiple information search stages for screening out alternatives (i.e., width of alternatives, Source vs. Product) and building an in-depth understanding of alternatives (i.e., depth of understanding, Aggregated vs. Pairwise), four types of CDITs (i.e., Aggregated Source, Pairwise Source, Aggregated Product, and Pairwise Product) representing diverse levels of consistency width and depth were proposed.   To investigate when and how to provide the CDITs, Study #2 applied the Task-Technology Fit Theory (Goodhue and Thompson, 1995), which postulates that information technology can improve users’ performance when the functionalities available to the user can support the user’s activities. According to Study #1, there are three major information search stages: 1) advice source selection; 2) product-list exploration to build an overall understanding; and 3) product elaboration to build an in-depth understanding. Each CDIT has a diverse level of functionality that helps online consumers not only to build a low/high level of understanding of advice sources and products across information search stages, but also helps them to utilize consistency strategies that are expected to be triggered by trustworthiness of their advice sources. Accordingly, it is expected that there would be a most effective combination of a “CDIT, an information search stage, and trustworthiness of advice sources” in improving decision-making performance.  Thus, to shed light on how to design and implement new decision aids that support online consumers’ utilization of consistency strategies across information search stages, Study #2 aims to conceptualize and measure consistency distance, as well as investigate when and how to provide CDITs across information search stages to improve consumers’ decision-making performance. Overall, this study helps decision support system (DSS) developers determine whether the CDITs help online consumers to manage conflicting opinions better by utilizing consistency strategy, which subsequently culminates in better product selection decisions. It also informs the DSS developers on which combination of a CDIT, an individual’s decision-making strategy triggered by the trustworthiness of advice sources, and/or an information search stage is the most efficient and effective in improving decision-making performance.  3.2 THEORETICAL FRAMEWORK AND HYPOTHESIS DEVELOPMENT 3.2.1 Online Consumers’ Utilization of Multiple Advice Sources  65 To support online consumers’ decision in selecting products, a number of online stores provide recommendations and reviews from multiple advice sources, such as a recommendation agent (RA), consumers, experts, and even online social networks (OSNs). 
For instance, Cnet.com makes available experts and consumers, and Amazon.com provides an RA, consumers, and OSNs.11 Because experts have high levels of product knowledge, their recommendations and reviews represent in-depth and comprehensive details about product performance. Consumers' recommendations and reviews can reflect the experience and satisfaction gained from using a product. RAs provide recommendations that match each consumer's elicited product attribute preferences (Xiao and Benbasat, 2007). Likewise, consumers' social networks share similar interests and norms (Han et al., 2015).

11 Amazon.com collaborated with Facebook.com to provide recommendations based on consumers' online social networks. By encouraging Amazon users to link to their Facebook accounts, Amazon.com could recommend products that are popular among their social networks that share similar interests.

According to the findings of Study #1, as online consumers face the challenge of deciding how to utilize such wide-ranging, and possibly conflicting, sets of information in making their product selection decisions (Xu et al., 2017), they develop new decision-making strategies (i.e., consistency strategies) utilizing consistency as a key heuristic. However, to implement new decision aids (i.e., CDITs) that support consistency strategies, this study (i.e., Study #2) had two aims: i) conceptualize consistency distance as the extent of objective disagreement among multiple advice sources of recommendations and/or reviews representing product quality and/or fit; and ii) operationalize consistency distance as a continuous variable to capture the granularity of consistency among advice sources.12

12 Xu et al. (2017) and Study #1 (Chapter 2) operationalized consistency as a binary variable representing whether a product is commonly recommended by multiple advice sources. However, such operationalizations were not able to capture the granularity of consistency needed to compare the consistencies of the ratings of products from different sources.

3.2.2 Conceptualizing Consistency Distance and Consistency Distance Identification Tools
3.2.2.1 Consistency Distance
There are several ways to measure distance. For example, Manhattan Distance is the simple sum of the horizontal and vertical components in a space, like a strictly horizontal and/or vertical path around blocks (Deza and Deza, 2009), whereas Euclidean Distance captures the diagonal, straight-line distance. Chebychev Distance is the maximum distance, representing only the biggest difference between advice sources rather than their overall differences (Deza and Deza, 2009). Consequently, it fails to capture differences among more than two advice sources. Minkowski Distance is a distance of infinite length (Deza and Deza, 2009), while rating scores from multiple sources are generally measured within a bounded range of scores, such as out-of-five rating scores. To develop a consistency distance measure, Study #2 applied Euclidean space and distance (Deza and Deza, 2009) to map advice sources' rating scores that represent an overall evaluation of product quality.

There are several benefits in applying Euclidean space and distance. For example, in Euclidean space, points of the space are specified with collections of numbers. There is essentially only one Euclidean space for each dimension. Euclidean space specifies each point uniquely in a plane by a pair of numerical coordinates, which are the signed distances measured in the same unit of length.
Therefore, Euclidean distance identifies the straight-line distance between points in Euclidean space. Study #2 mapped the rating scores from each source onto Euclidean space, calculated the straight-line distance between them, and consequently measured consistency distance as an objective and continuous variable.

3.2.2.2 Consistency Distance Identification Tools
Bettman et al. (1998) posited that due to limited cognitive capacity in examining alternatives, a decision maker reduces alternatives to a few, which will then be evaluated fully. The Information Search Process Model (Kuhlthau, 1991) postulates that consumers' decision-making process comprises multiple stages for setting priorities on the number of alternatives (i.e., width of alternatives) and building an in-depth understanding of alternatives (i.e., depth of understanding). Following such perspectives, this study proposed four types of CDITs representing combinations of consistency width (i.e., Source vs. Product) and consistency depth (i.e., Aggregated vs. Pairwise) (see Table 3.1).

Table 3.1 Types of Consistency Distance Identification Tools

Consistency Width                        Consistency Depth (Low: Aggregated vs. High: Pairwise)
(High: Source vs. Low: Product)          Aggregated                     Pairwise
Product                                  Aggregated Product CDIT        Pairwise Product CDIT
Source                                   Aggregated Source CDIT         Pairwise Source CDIT

1) Consistency Width (Source vs. Product): Consistency width refers to the extent of consistency distance in terms of an advice source or a product, which represents setting priorities on a list of alternatives from a particular advice source or on a particular product, regardless of its advice source. According to the findings of Study #1, in utilizing multiple advice sources, consumers generally select a source first, so as to examine the list of recommendations from that source. Therefore, online consumers would rely on consistency in selecting a source, just as they utilize consistency in selecting a product.13 That is, there are two levels of consistency width, namely source consistency and product consistency. Source consistency refers to an overall agreement of the ratings of a particular advice source with those of other advice sources; product consistency refers to the agreement of the rating of an advice source on a particular product with those of other advice sources on the product. Source CDITs are able to support online consumers' utilization of source consistency in selecting an advice source, while Product CDITs are able to support online consumers' utilization of product consistency in selecting a product.

13 As consistency is conceptualized and operationalized only for a product, not for a source, in Study #1, this study was not able to identify consistency strategies in source selection. In proposing Source CDITs that support online consumers' utilization of consistency in selecting an advice source, this study expected that online consumers would utilize consistency strategies (e.g., Seeking Strategy and Anchoring Strategy) to select an advice source in a similar manner, based on the theoretical perspectives in Study #1.

2) Consistency Depth (Aggregated vs. Pairwise): Consistency depth refers to the extent of consistency distance in terms of a number of advice sources, which represents depth of understanding through consistency between a number of advice sources. In utilizing multiple advice sources, consumers build their own understanding and/or expectation of each advice source's capability in reducing uncertainties and strategically choose sources in examining consistency (Xu et al., 2017).
According to the findings of Study #1, in selecting a product, consumers generally build an overall understanding of a list of alternatives before setting priorities on the various alternatives, while they generally build an in-depth understanding of a particular alternative after setting priorities. Therefore, online consumers rely on different levels of consistency depth in order to build a low/high level of understanding of alternatives or advice sources.14 That is, there are two levels of consistency depth, namely aggregated consistency and pairwise consistency. Aggregated consistency refers to the agreement of a source with all other sources; pairwise consistency refers to the agreement of a source with another particular source. Overall, Aggregated CDITs are able to support online consumers who want to build an overall (i.e., low-level) understanding of alternatives or advice sources, while Pairwise CDITs are able to support online consumers who want to build an in-depth (i.e., high-level) understanding of alternatives or advice sources.

14 Since consistency is conceptualized and operationalized as a common product between two advice sources, Study #1 was not able to investigate consistency depth. In proposing Aggregated CDITs that support online consumers' utilization of consistency in building an overall understanding of alternatives or advice sources, this study conceptualized consistency depth based on theoretical perspectives.

Thus, Product CDITs are associated with the consistency strategies identified in Study #1, while Source CDITs are related to online consumers' plausible applications of some consistency strategies (e.g., Anchoring Strategy and Seeking Strategy) in a similar manner for selecting advice sources (see Table 3.2). For example, in Study #1, some online consumers began by simultaneously focusing on all three advice sources and looking for any consistency among them. Upon finding such a common product, that product was scrutinized first (i.e., Seeking Strategy in Study #1). In a similar manner, some online consumers would begin by simultaneously focusing on all advice sources and looking for source consistency; upon finding the most consistent advice source, the list of recommendations from that advice source would be scrutinized first (i.e., the application of Seeking Strategy in selecting advice sources). In contrast, some online consumers began by focusing on a particular source and set priorities on products; upon screening out products, the consistency between this particular source and others on prioritized products was examined (i.e., Anchoring Strategy in Study #1). In a similar manner, some online consumers would begin by focusing on a particular source and looking for source consistency between that particular source and others (i.e., the application of Anchoring Strategy in selecting advice sources). In addition, as consistency strategies are classified according to whether consistency is used before/after setting priorities on various alternatives, the Aggregated Product CDIT and the Pairwise Product CDIT are associated with specific consistency strategies respectively (see Table 3.2).
Table 3.2 Consistency Distance Identification Tools and Associated Consistency Strategies

CDIT                      Associated Consistency Strategies
Aggregated Product CDIT   Seeking Strategy and Deliberating Strategy in Product Selection
Pairwise Product CDIT     Anchoring Strategy and Adhering Strategy in Product Selection
Aggregated Source CDIT    Application of Seeking Strategy in Source Selection
Pairwise Source CDIT      Application of Anchoring Strategy in Source Selection

The operationalizations of the consistency distances are presented in Table 3.3; the formulae used to calculate them are presented in Table 3.4.

Table 3.3 Operationalizations of Consistency Distances

Consistency Distance   Operationalization
Aggregated Product     Euclidean distance between the fit/rating score of an advice source on a particular product and those of all other advice sources on that product
Pairwise Product       Euclidean distance between the fit/rating score of an advice source on a particular product and that of another advice source on that product
Aggregated Source      Average of the Euclidean distances between the fit/rating scores of an advice source on all products and those of all other advice sources on all the products
Pairwise Source        Average of the Euclidean distances between the fit/rating scores of an advice source on all products and those of another advice source on all the products

Table 3.4 Consistency Distance Formulae

Product Consistency Distance (PCD)
• A product ($P_i$) has fit/rating scores from an RA, an Expert, Other Customers, and an OSN.15 Let $\mathrm{RA}_i$ represent the RA's fit score of $P_i$, $\mathrm{EX}_i$ the Expert's rating of $P_i$, $\mathrm{CU}_i$ Other Customers' rating of $P_i$, and $\mathrm{SN}_i$ the OSN's rating of $P_i$, all on a five-point scale.
• With this information, the formulae are:

Aggregated PCD of $P_i$ from the RA:
$\mathrm{PCD\_RA}_i = 100\left[1-\sqrt{\tfrac{1}{3}\left(\left(\tfrac{\mathrm{RA}_i-\mathrm{EX}_i}{5}\right)^{2}+\left(\tfrac{\mathrm{RA}_i-\mathrm{CU}_i}{5}\right)^{2}+\left(\tfrac{\mathrm{RA}_i-\mathrm{SN}_i}{5}\right)^{2}\right)}\right]$
The aggregated PCDs of $P_i$ from the Expert ($\mathrm{PCD\_EX}_i$), Other Customers ($\mathrm{PCD\_CU}_i$), and the OSN ($\mathrm{PCD\_OSN}_i$) are defined analogously, with the respective source's score taking the place of $\mathrm{RA}_i$.

Pairwise PCD of $P_i$ between the RA and the Expert:
$\mathrm{PCD\_RA\_EX}_i = 100\left(1-\tfrac{\left|\mathrm{RA}_i-\mathrm{EX}_i\right|}{5}\right)$
The pairwise PCDs for the remaining pairs are defined analogously: $\mathrm{PCD\_RA\_CU}_i$, $\mathrm{PCD\_RA\_OSN}_i$, $\mathrm{PCD\_EX\_CU}_i$, $\mathrm{PCD\_EX\_OSN}_i$, and $\mathrm{PCD\_CU\_OSN}_i$.

Source Consistency Distance (SCD)
• A product ($P_i$) has fit/rating scores from an RA, an Expert, Other Customers, and an OSN, denoted $\mathrm{RA}_i$, $\mathrm{EX}_i$, $\mathrm{CU}_i$, and $\mathrm{SN}_i$ as above.
• The number of alternatives is $n$.
• With this information, the formulae are:

Aggregated SCD of the RA:
$\mathrm{SCD\_RA} = \tfrac{1}{n}\sum_{i=1}^{n}\mathrm{PCD\_RA}_i$
The aggregated SCDs of the Expert ($\mathrm{SCD\_EX}$), Other Customers ($\mathrm{SCD\_CU}$), and the OSN ($\mathrm{SCD\_OSN}$) are defined analogously.

Pairwise SCD between the RA and the Expert:
$\mathrm{SCD\_RA\_EX} = \tfrac{1}{n}\sum_{i=1}^{n}\mathrm{PCD\_RA\_EX}_i$
The pairwise SCDs for the remaining pairs are defined analogously: $\mathrm{SCD\_RA\_CU}$, $\mathrm{SCD\_RA\_OSN}$, $\mathrm{SCD\_EX\_CU}$, $\mathrm{SCD\_EX\_OSN}$, and $\mathrm{SCD\_CU\_OSN}$.

15 An RA calculates the fit score based on an individual's preference elicitation.

3.2.3 Task-Technology Fit Theory
3.2.3.1 Interplay Between Technology and Task
The capability of information technology to support a user's task and consequently improve task performance has been a long-standing interest in information systems (IS) research (DeLone and McLean, 1992; DeLone and McLean, 2003; Goodhue and Thompson, 1995). In previous studies (Aljukhadar et al., 2014; Iyer et al., 2009; Jiang and Benbasat, 2007; Poddar et al., 2009; Vessey, 1991), Task-Technology Fit Theory has been applied to understand why a user's utilization of IS enhances performance in making decisions. The Task-Technology Fit Theory (Goodhue and Thompson, 1995) postulates that information technology (IT) can improve users' performance when the functionality available to the user can support the user's task requirements or goals by enabling users to achieve more efficient and effective execution of a task, reducing the cognitive cost of performing the task, or making the task easy to accomplish.

3.2.3.2 Interplay Between Technology and Individual
While previous literature relying on the Task-Technology Fit Theory has mainly focused on the interplay between technology and task requirements, Goodhue and Thompson (1995) also postulated the importance of individuals' characteristics. For example, Goodhue and Thompson (1995) mentioned that "a more accurate label for the construct would be task-individual-technology fit" (p. 218). Therefore, Liu et al. (2011) divided the concept of task-individual-technology fit into three sub-dimensions (task-technology fit, task-individual fit, and individual-technology fit) to explore interactions among these factors. Task-individual fit is defined as "the degree to which characteristics of individuals fit the needs of certain tasks" (Liu et al., 2011, p. 690); individual-technology fit is defined as "the degree to which characteristics of technologies fit the needs of individuals to solve problems" (Liu et al., 2011, p. 690).

Several studies relying on the Task-Technology Fit Theory (Marcolin et al., 2000; Munro et al., 1997; Parkes, 2012; Wang and Haggerty, 2011; Yoon, 2009) have investigated individuals' characteristics (such as technology expertise) that would increase individual-technology fit. However, most of them have mainly examined users' capability to use a technology rather than the users' beliefs or perceptions that trigger a specific decision-making strategy, which can be assisted or restricted by the functionalities of decision aids. For example, even though an individual is an expert on products and very good at using an RA, that individual's perceived task-individual-technology fit and decision-making performance might be poor if the RA restricts his/her choice of a decision-making strategy (Wang and Benbasat, 2009). In particular, even though individuals may use the same decision aids, their perceived restrictiveness of the decision aids varies substantially due to differences that exist among each individual's desired decision-making processes/strategies (Silver, 1988).
Overall, individuals’ belief or perception triggering a specific decision-making strategy that can be assisted or restricted by the functionalities of technology is another determinant of individual-technology fit.  Previous studies (Payne et al., 1988; Petty et al., 1983; Kim and Benbasat, 2013) have shown that individuals’ utilization of decision-making strategies is determined not only by decision environments (e.g., number of alternatives) and constraints (e.g., time pressure), but also by the individuals’ perceived product knowledge, task involvement, and even possibly, the trustworthiness of the advice sources. For example, Study #1 identifies the Anchoring Strategy and the Seeking Strategy. A group of participants was anchored to a specific advice source and looked for a consistency between the advice source and others (i.e., Anchoring Strategy), while another group of participants looked for consistency across all advice sources without having a specific preference for a certain advice source (i.e., Seeking Strategy). According to a supplementary verbal protocol analysis in Study #1, individuals’ trustworthiness of advice sources is a key determinant of these consistency strategies. When individuals have strong trustworthiness beliefs of only a  72 specific advice source, they tend to use the Anchoring Strategy. However, when individuals have a similar level of trustworthiness belief across advice sources, they tend to use the Seeking Strategy.  Thus, this study (Study #2) conceptualized the variance of trustworthiness among sources as another determinant of task-individual-technology fit (especially individual-technology fit) in utilizing multiple advice sources through CDITs, which improves online consumers’ perceived decision-making performance (see Figure 3.1). Trustworthiness variance refers to the extent of difference between individuals’ trustworthiness of advice sources used in the process of making product selection decisions.16 This study operationalized two levels of trustworthiness variance: polarized trustworthiness and converged trustworthiness. When individuals have a high level of trustworthiness of a specific source, their trustworthiness variance is high (i.e., polarized). However, if individuals have a similar level of trustworthiness across all advice sources, their trustworthiness variance is low (i.e., converged). That is, task-individual-technology fit and decision-making performance will increase when the functionalities of CDITs assist individual’s utilization of a consistency strategy that is triggered by trustworthiness variance. This study added trustworthiness variance representing individuals’ belief, which triggers the selection and utilization of consistency strategy (i.e., Anchoring Strategy or Seeking Strategy) in selecting advice sources.  Figure 3.1 Trustworthiness Variance in the Task-Technology Fit Theory    3.2.4 Task-Individual-Technology Fit in Utilizing Consistency Distance Identification Tools 3.2.4.1 Functionalities of Consistency Distance Identification Tools                                                16 While general trustworthiness of an advice source is an individual’s perception of a characteristic of a particular advice source, variance of trustworthiness among advice sources is an overall distribution of an individual’s perception of characteristics of all advice sources.  73 As the four types of CDITs differ in terms of consistency width and depth, the functionalities of these CDITs vary (see Table 3.5).   
Table 3.5 Functionalities of Consistency Identification Tools CDIT Functionality Aggregated Source CDIT Providing overall source consistency among all sources, based on their similarity of preferences and/or product evaluation criteria for all products recommended; e.g., what is the degree of overall agreement among the recommendation agent, experts, and consumers? Pairwise Source CDIT Enabling consumers to compare a specific source to another source in terms of its consistency of preferences and/or product evaluation criteria; e.g., how consistent is the recommendation agent with the experts, or how consistent are the experts with the consumers? Aggregated Product CDIT Enabling consumers to examine the overall agreements of all sources in terms of quality and/or fit of a specific alternative; e.g., what is the overall agreement among all sources for the quality of product A? Pairwise Product CDIT Enabling consumers to examine and compare the consistency between two specific sources in terms of quality and/or fit of a specific alternative; e.g., what is the overall agreement among experts and the recommendation agent regarding the quality of product A?   The Aggregated Source CDIT provides overall source consistency among all advice sources, based on the similarity of rating scores among all alternatives recommended. For example, if the Aggregated Source CDIT of a source is high, it means that the source has more similar preferences on product attributes to those of other sources in evaluating a product. Therefore, the Aggregated Source CDIT can support online consumers looking for consistency among all advice sources (i.e., Seeking Strategy).   The Pairwise Source CDIT enables consumers to compare two specific advice sources in terms of the similarity of rating scores for all the alternatives recommended. For example, consumers can find the most similar or different advice source for a specific source. Therefore, the Pairwise Source CDIT can support online consumers who are anchoring to a specific advice source and looking for a consistency between the advice source and others (i.e., Anchoring Strategy).   The Aggregated Product CDIT enables consumers to examine the overall agreements of all advice sources on a specific alternative’s rating score. Consequently, they can set priorities on alternatives that have greater consistency. Therefore, the Aggregated Product can support online consumers who are building a consideration set of alternatives that deserve to be further examined and set priorities on alternatives before building an in-depth knowledge of them (i.e., Deliberating Strategy).   74  The Pairwise Product CDIT helps consumers to examine and compare the consistency of a specific alternative’s rating score between two specific sources. For example, consumers can examine which source has the most similar or different rating scores of a specific alternative and elaborate more details of the alternative in building an in-depth understanding. Therefore, the Pairwise Product CDIT can support online consumers who are willing to elaborate more details of an alternative and validate their evaluation of an alternative through other advice sources’ rating scores.  3.2.4.2 Task Characteristics of Information Search Stages The Information Search Process Model (Kuhlthau, 1991) postulates that consumers utilize mainly two information search stages: exploration and elaboration. 
In accordance with this perspective, the consistency strategies identified in Study #1 start from source selection; the exploration and elaboration stages are sequentially preceded by the source selection stage. This study (Study #2) postulates three stages in utilizing multiple advice sources: a source selection stage and two information search stages (i.e., exploration and elaboration stage) (see Figure 3.2).  Figure 3.2 Three Stages in Utilizing Multiple Advice Sources    According to a supplementary verbal protocol analysis in Study #1, consumers use consistency strategies, such as the Anchoring Strategy or Seeking Strategies, based on their belief in advice sources. The exploration stage refers to a process in which consumers build an overall understanding of alternatives for deciding on further elaboration. The elaboration stage refers to a process in which consumers elaborate more details of alternatives and make an effort to build an in-depth understanding of alternatives for the product selection decision. Overall, information search stages have diverse task requirements or goals (see Table 3.6).     75 Table 3.6 Task Requirements or Goals Across Information Search Stages Information Search Stage Task Requirement or Goal Source Selection Set priorities on advice sources for further examination of alternatives  Exploring a Product List Build an overall understanding of alternatives for further elaboration Elaborating a Product Build an in-depth understanding of alternatives for the product selection decision.   3.2.4.3 Theoretical Framework and Hypotheses Because CDITs have their own functionalities in supporting the utilization of consistency strategy, information search stages have diverse task requirements or goals. An individual may have polarized or converged trustworthiness across advice sources, so Study #2 developed a theoretical framework which posits that the task-individual-technology fit between CDITs, information search stages, and trustworthiness variance improves decision-making performance (see Figure 3.3). 17   Figure 3.3 Theoretical Framework 18                                                 17 According to the Task-Technology Fit Theory, perceived task-individual-technology fit will increase decision-making performance. However, research objective of this study is to investigate the impact of CDITs on decision-making performance, impact of task-individual-technology fit on decision-making performance is not hypothesized. 18  Dashed boxes in Figure 3.3 represent operationalizations of key constructs (i.e., Information Search Stages, Trustworthiness Variance, and Consistency Distance Identification Tools) in experiments.  76   As previous studies (Kim and Benbasat, 2013; Xu et al., 2017) have proposed, consumers utilize consistency to improve the efficiency and effectiveness of limited cognitive capacity in examining products. Therefore, CDITs increase decision-making performance such as decision quality and/or effort when their functionality can support i) users’ utilizations of consistency strategies triggered by their trustworthiness variance, and ii) task requirements and/or goals of the information search stages. In addition, as Source CDITs are able to support online consumers’ utilization of source consistency distance in selecting an advice source, they fit the source selection stage better than Product CDITs do. 
In contrast, as Product CDITs are able to support online consumers’ utilization of product consistency distance in selecting a product, Product CDITs fit the exploration and elaboration stages better than Source CDITs do.  1) Source Selection Stage (Task-Individual-Technology Fit Between Trustworthiness Variance and Source CDITs): In the source selection stage, to set priorities on advice sources for a further examination of alternatives, online consumers may apply consistency strategies, such as Anchoring Strategy and Seeking Strategy. The task in the source selection stage is to set priorities on advice sources for a further examination of alternatives. The technology is the Aggregated Source CDIT and Pairwise Source CDIT, while the individual is the consumers’ trustworthiness variance that would trigger the utilization of Anchoring Strategy or Seeking Strategy. For example, some online consumers would begin by simultaneously focusing on all advice sources and looking for source consistency. Upon finding the most consistent advice source, consumers will first scrutinize a list of recommendations from that advice source (i.e., the application of Seeking Strategy in the source selection stage). In contrast, some online consumers would begin by focusing on a particular source and looking for source consistency between the particular source and others (i.e., the application of Anchoring Strategy in the source selection stage). Thus, task-individual-technology fit of Source CDITs would vary according to consumers’ trustworthiness variance that would trigger the utilization of Anchoring Strategy or Seeking Strategy (see Table 3.7).      77 Table 3.7 Task-Individual-Technology Fit Between Trustworthiness Variance and Source CDITs in the Source Selection Stage Task-Individual-Technology Fit Polarized Trustworthiness (Triggering Application of Anchoring Strategy in the Source Selection) Converged Trustworthiness (Triggering Application of Seeking Strategy in the Source Selection) Aggregated Source CDIT (Supporting Seeking Strategy) Low High Pairwise Source CDIT (Supporting Anchoring Strategy) High Low   For example, according to Study #1, when consumers have a belief in polarized trustworthiness variance, they tend to rely on Anchoring Strategy rather than Seeking Strategy. That is, when consumers have a strong preference for a specific advice source, they are inclined to utilize the consistency between the specific advice source and others rather than overall consistency across advice sources. Therefore, the Pairwise Source CDIT presenting source consistency distance between a specific advice source and others can support individuals’ application of Anchoring Strategy in selecting advice sources, and consequently will improve task-individual-technology fit decision-making performance. In addition, as a system-supported decision-making strategy is less effortful than a system-restricted decision-making strategy (Wang and Benbasat, 2009), individuals applying Anchoring Strategy are inclined to have less difficulty in making a decision through the Pairwise Source CDIT. Thus, it is hypothesized that:  H1: When utilizing the Pairwise Source CDIT in the source selection stage, people having polarized trustworthiness perceive higher task-individual-technology fit than those having converged trustworthiness.  H2: When utilizing the Pairwise Source CDIT in the source selection stage, people having polarized trustworthiness perceive higher decision quality than those having converged trustworthiness.  
H3: When utilizing the Pairwise Source CDIT in the source selection stage, people having polarized trustworthiness perceive lower decision effort than those having converged trustworthiness.

However, when consumers have a similar level of trustworthiness across advice sources, they are inclined to utilize the source consistency across all advice sources rather than the source consistency between a specific source and others. Therefore, the Aggregated Source CDIT, providing the overall source consistency distance among advice sources, can support individuals' application of the Seeking Strategy in selecting advice sources; consequently, this will improve task-individual-technology fit and decision-making performance. In addition, as a system-supported decision-making strategy is less effortful (Wang and Benbasat, 2009), individuals applying the Seeking Strategy are inclined to have less difficulty in making a decision through the Aggregated Source CDIT. Thus, it is hypothesized that:

H4: When utilizing the Aggregated Source CDIT in the source selection stage, people having converged trustworthiness perceive higher task-individual-technology fit than those having polarized trustworthiness.

H5: When utilizing the Aggregated Source CDIT in the source selection stage, people having converged trustworthiness perceive higher decision quality than those having polarized trustworthiness.

H6: When utilizing the Aggregated Source CDIT in the source selection stage, people having converged trustworthiness perceive lower decision effort than those having polarized trustworthiness.

2) Exploration Stage and Elaboration Stage (Task-Individual-Technology Fit Between Product CDITs and Information Search Stages): In the information search stages, because consumers need to select a quality product that fits their preferences, Product CDITs supporting consistency strategies provide more task-relevant functionality than Source CDITs do. In particular, online consumers' tasks in the exploration and elaboration stages differ in the extent of product understanding involved (i.e., an overall understanding of a product list versus an in-depth understanding of a product). The exploration stage refers to the process in which consumers build an overall understanding of alternatives for deciding on further elaboration; the elaboration stage refers to the process in which consumers elaborate more details of alternatives and make an effort to build an in-depth understanding of alternatives for the product selection decision. In summary: i) the task in the exploration stage is to build an overall understanding of a product list, while the task in the elaboration stage is to build an in-depth understanding of a product; ii) the technology is the Aggregated Product CDIT and the Pairwise Product CDIT; and iii) the individual is online consumers' utilization of consistency strategies. For example, as Aggregated Product CDITs can provide the overall consistency for products across all advice sources, online consumers utilizing Aggregated Product CDITs in the exploration stage can build an overall understanding of alternatives. As Pairwise Product CDITs can provide more detailed consistency between two specific advice sources, online consumers utilizing Pairwise Product CDITs in the elaboration stage can build an in-depth understanding of alternatives. Thus, the task-individual-technology fit of Product CDITs would vary according to the information search stages (see Table 3.8).
Table 3.8 Task-Individual-Technology Fit Between Information Search Stages and Product CDITs

Task-Individual-Technology Fit                               Exploration Stage (building an overall understanding)   Elaboration Stage (building an in-depth understanding)
Aggregated Product CDIT (Supporting Deliberating Strategy)   High                                                     Low
Pairwise Product CDIT (Supporting Adhering Strategy)         Low                                                      High

In the exploration stage, consumers build an overall understanding of alternatives for deciding whether an alternative deserves to be examined further in the elaboration stage. Because the Aggregated Product CDIT enables consumers to utilize the Deliberating Strategy or the Seeking Strategy by providing the overall agreement of advice sources on a specific alternative, utilizing the Aggregated Product CDIT during the exploration stage would yield higher task-individual-technology fit and decision-making performance compared to utilizing the Pairwise Product CDIT. In the elaboration stage, consumers elaborate more details of alternatives and make an effort to build an in-depth understanding of a product for the product selection decision. Because the Pairwise Product CDIT helps consumers utilize the Adhering Strategy or the Anchoring Strategy, which examine and compare the details of consistency between advice sources on a specific alternative, utilizing the Pairwise Product CDIT in the elaboration stage would yield higher task-individual-technology fit and decision-making performance compared to utilizing the Aggregated Product CDIT. Thus, it is hypothesized that:

H7: People utilizing the Aggregated Product CDIT in the exploration stage perceive higher task-individual-technology fit than those utilizing the Pairwise Product CDIT in the exploration stage.

H8: People utilizing the Pairwise Product CDIT in the elaboration stage perceive higher task-individual-technology fit than those utilizing the Aggregated Product CDIT in the elaboration stage.

H9: People utilizing the Aggregated Product CDIT in the exploration stage perceive higher decision quality than those utilizing the Pairwise Product CDIT in the exploration stage.

H10: People utilizing the Pairwise Product CDIT in the elaboration stage perceive higher decision quality than those utilizing the Aggregated Product CDIT in the elaboration stage.

H11: People utilizing the Aggregated Product CDIT in the exploration stage perceive lower decision effort than those utilizing the Pairwise Product CDIT in the exploration stage.

H12: People utilizing the Pairwise Product CDIT in the elaboration stage perceive lower decision effort than those utilizing the Aggregated Product CDIT in the elaboration stage.

3.3 METHODOLOGY
3.3.1 Developing an Experimental Online Store
3.3.1.1 Multiple Advice Sources
An experimental online store was developed as the platform for a laboratory investigation. For the generalization of the results, Study #2 used two different product types: laptops as a search good and hotels as an experience good.19 To enhance mundane realism (i.e., shaping the similarity of experimental events to real experience; Singleton and Straits, 1999), I selected laptops sold on Amazon.com and hotels listed on Hotels.com. In addition, this study used four advice sources (i.e., Other Customers, Experts, an RA, and OSNs).

In building the laptop dataset, rating scores and reviews from customers were adopted from Amazon.com and those from experts were adopted from Cnet.com.
In building a hotel dataset, rating scores and reviews from customers were adopted from Hotels.com and those from experts were adopted from Tripadvisor.com. To collect valid rating scores, this study screened laptops and hotels having at least ten customers’ reviews and at least one experts’ review. After screening, each dataset included 30 alternatives. Through rating scores in the laptop and hotel datasets, experts’ and customers’ recommendations were created.   Customers were general users of Amazon.com or Hotels.com. Amazon and Hotels encourage customers to share their opinions, both favorable and unfavorable. Customers share information on the laptop or hotel through written reviews and ratings on a five-star scale. Customers’ reviews and ratings are meant to give prospective customers genuine product feedback and are helpful for learning more about the products from customers’ perspectives.                                                 19 This study found significant difference in product knowledge between the laptop (m=5.29) and hotel (m=4.95) (p<.05) and significant difference in task involvement between the laptop (m=5.94) and hotel (m=4.18) (p<.001).   81 Experts are professional reviewers of Cnet.com or TripAdvisor.com. Cnet is an independent technological organization that compiles data for products. Cnet experts provide the information, tools, and advice that will help people decide what to buy and how to get the most out of information technology products and appliances. Tripadvisor is a travel website company that provides hotels booking as well as reviews of travel-related content. Tripadvisor experts have reviewed more than 100 hotels all around the world and received the Hotel Expertise Badge, which shows their unique knowledge. They give the information and advice that will help people decide where to stay around the world.  An RA is an independent automated recommendation tool that ranks products for users based on their preferences. When participants input their preference for each product attribute as well as its importance, the RA presents a list of products matching their needs for those attributes. For this study, an RA is developed on the basis of a weighted additive strategy that delivers better decision quality than others (Bettman et al., 1998; Payne et al., 1988; Xu et al., 2017).   To create trustworthy recommendations from OSN users who have similar preferences and interests with the participant, OSNs’ recommendations were generated on the basis of the equal weight strategy – a simplified approach that sums each participant’s preferences of product attribute without considering their subjective importance (Bettman et al., 1998). To make participants believe that the OSNs’ recommendations were created from OSN users having similar preferences and interests, this study asked participants to click Like or Dislike on a set of product relevant images before the experiment; and explained that their selection would be used to find OSN users having similar preferences for a product category.   3.3.1.2 Implementing Consistency Identification Tools and Information Search Stages CDITs and information search stages were implemented in the experimental online stores. 
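Before turning to the page-level implementation, the following is a minimal sketch of how the weighted additive (RA) and equal-weight (OSN) scores described above, and the product/source consistency distances defined in Tables 3.3 and 3.4, could be computed from a source-by-product rating matrix. It is an illustration only, not the code used in the experimental store; all rating values, attribute evaluations, importance weights, and function names are hypothetical.

```python
# Illustrative sketch (not the experimental store's actual code): scoring
# strategies and consistency distances per Tables 3.3 and 3.4. All inputs
# below are hypothetical.
import math
from itertools import combinations

SCALE = 5.0  # all fit/rating scores are expressed on a five-point scale

def weighted_additive(attribute_scores, importance_weights):
    """RA fit score: importance-weighted mean of attribute evaluations (0-5)."""
    return (sum(s * w for s, w in zip(attribute_scores, importance_weights))
            / sum(importance_weights))

def equal_weight(attribute_scores):
    """OSN score: unweighted mean of attribute evaluations (0-5)."""
    return weighted_additive(attribute_scores, [1.0] * len(attribute_scores))

# Hypothetical five-point scores for three products from the four advice sources.
SOURCES = ["RA", "EX", "CU", "OSN"]
ratings = {
    "RA":  [weighted_additive([4.5, 4.0, 5.0], [0.5, 0.3, 0.2]), 3.8, 4.1],
    "EX":  [4.5, 3.0, 4.4],
    "CU":  [4.0, 3.5, 4.2],
    "OSN": [equal_weight([4.5, 4.0, 5.0]), 2.9, 3.9],
}
n = len(ratings["RA"])  # number of alternatives

def pairwise_pcd(s1, s2, i):
    """Pairwise Product Consistency Distance: 100 * (1 - |r1 - r2| / 5)."""
    return 100.0 * (1.0 - abs(ratings[s1][i] - ratings[s2][i]) / SCALE)

def aggregated_pcd(s, i):
    """Aggregated PCD of product i from source s: 100 * (1 - root mean of the
    squared scale-normalized differences to the three other sources)."""
    others = [t for t in SOURCES if t != s]
    mean_sq = sum(((ratings[s][i] - ratings[t][i]) / SCALE) ** 2
                  for t in others) / len(others)
    return 100.0 * (1.0 - math.sqrt(mean_sq))

def aggregated_scd(s):
    """Aggregated Source Consistency Distance: mean aggregated PCD over all products."""
    return sum(aggregated_pcd(s, i) for i in range(n)) / n

def pairwise_scd(s1, s2):
    """Pairwise SCD: mean pairwise PCD over all products."""
    return sum(pairwise_pcd(s1, s2, i) for i in range(n)) / n

if __name__ == "__main__":
    for s in SOURCES:                       # values near 100 indicate close agreement
        print(f"SCD_{s}: {aggregated_scd(s):5.1f}")
    for s1, s2 in combinations(SOURCES, 2):
        print(f"SCD_{s1}_{s2}: {pairwise_scd(s1, s2):5.1f}")
```

In a design like the one described below, scores of this kind would be the quantities rendered graphically by the CDITs, with values near 100 indicating close agreement among the sources under the Table 3.4 definitions.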
To implement CDITs across the source selection, exploration, and elaboration stages, this study developed three webpages (i.e., source selection, recommendation list, and product details) that represent the three stages in utilizing multiple advice sources (i.e., source selection, exploration, and elaboration) respectively (see Figures 3.4 – 3.9). 20  21  In addition, to improve the comparability of the consistency distance, CDITs present the consistency distance as a graphical representation across the information search stages (see Figures 3.4 – 3.9). The graphical representation enables consumers to compare source or product consistency distances                                                20 The source selection page (i.e., the source selection stage) with Source CDITs is depicted in Figures 3.4 and 3.5; the recommendation list page (i.e., the exploration stage) with Product CDITs is depicted in Figures 3.6 and 3.7; and the product details page (i.e., the elaboration stage) with Product CDITs is depicted in Figure 3.8 and 3.9. 21 CDITs across Figures 3.4 and 3.9 are not associated with each other.   82 between advice sources with less cognitive effort (Benbasat and Dexter, 1986). Thus, CDITs can be utilized not only to identify, but also to compare consistency distances among sources and/or products.   1) Source CDITs Implemented in the Source Selection Stage: In the source selection page, participants could use the Aggregated or Pairwise Source CDITs and select any one of the advice sources (i.e., Experts, RAs, Other Customers, or OSNs). After utilizing Source CDITs, participants could select one of the advice sources by clicking a button on top of the source selection webpage (see Figures 3.4 and 3.5). To prevent any effects from the order in which the recommendation sources were displayed, this study randomized the placement of the advice sources.  Figure 3.4 Aggregated Source CDIT in the Source Selection Stage   Figure 3.5 Pairwise Source CDIT in the Source Selection Stage   83  2) Product CDITs Implemented in the Exploration Stage: After participants choose an advice source, the online shopping store presented a list of recommendations of the chosen advice source in the recommendation list page (see Figures 3.6 and 3.7). Product attributes with thumbnails of recommendations were presented in the tabular form with the Aggregated or Pairwise Product CDITs. The recommendations of the chosen advice source were sorted in terms of the source’s rating scores. Participants could explore ten recommendations at a time, and freely navigate all 30 alternatives from the highest rated to the lowest. As the exploration stage represents online consumers’ exploration of alternatives in order to build an overall understanding of a product category, the recommendation list page provided information on the product category, such as attributes and rankings of alternatives that represent the overall range of product attributes and preferences of the chosen source.  Figure 3.6 Aggregated Product CDIT in the Exploration Stage        84 Figure 3.7 Pairwise Product CDIT in the Exploration Stage   3) Product CDITs Implemented in the Elaboration Stage: When participants clicked on a recommended product from the chosen advice source, the website presented more details of the recommended product on the product details page, such as five more pictures, product attributes, and written reviews from experts and other customers with Aggregated or Pairwise Product CDITs (see Figures 3.8 and 3.9). 
As the elaboration stage represents online consumers’ elaboration of an in-depth product information, the product detail page provided more detailed information, such as product pictures and written reviews from experts and other customers. When participants wanted to explore other advice sources and/or alternatives, they could freely navigate between the source selection, recommendation list, and the product details pages.      85 Figure 3.8 Aggregated Product CDIT in the Elaboration Stage    Figure 3.9 Pairwise Product CDIT in the Elaboration Stage      86 3.3.2 Experimental Design To investigate when and how to provide the Source/Product CDITs for improving task-individual-technology fit and decision-making performance across the information search stages, Study #2 implemented two independent lab experiments (i.e., Experiment 2-1 and Experiment 2-2, see Figure 3.10). Three stages for utilizing multiple advice sources (i.e., source selection, exploration, and elaboration) were implemented for all participants.   In Experiment 2-1, participants were provided with Source CDITs in the source selection stage (see Figures 3.4 and 3.5). In Experiment 2-2, participants were provided with Product CDITs in the exploration and elaboration stages (see Figures 3.6 – 3.9).  Figure 3.10 Overview of Experiment 2-1 and Experiment 2-2    3.3.2.1 Experimental Design of Experiment 2-1 Experiment 2-1 uses a 2 x 2 factorial between-subject design to investigate the interaction effects between trustworthiness variance and Source CDITs on task-individual-technology fit and decision-making performance by manipulating Source CDITs and product types in the source selection stage. Each participant interacts with one randomly-assigned product type. The product type is manipulated to generalize the findings. Experiment 2-1 manipulates four conditions (see Table 3.9).  87  Table 3.9 Factorial Design of Experimental Conditions (Experiment 2-1)  Product Type (Search vs. Experience) Source CDIT (Aggregated vs. Pairwise) Condition 1 (30 participants) Laptop  Aggregated Source CDIT Condition 2 (30 participants) Laptop  Pairwise Source CDIT Condition 3 (30 participants) Hotel  Aggregated Source CDIT Condition 4 (30 participants) Hotel  Pairwise Source CDIT   3.3.2.2 Experimental Design of Experiment 2-2 Experiment 2-2 uses a 2 x 2 x 2 factorial between-subject design to investigate the interaction effects between Product CDITs (i.e., Aggregated and Pairwise CDITs) across information search stages (i.e., the exploration stage and the elaboration stage) and product types (i.e., Laptop and Hotel) on task-individual-technology fit and decision-making performance by manipulating the utilizations of Product CDITs across the exploration and elaboration stages. As each Product CDIT can be implemented in each information search stage, to investigate the best combination between two information search stages and two Product CDITs, the experiment 2-2 includes another two-level factor (i.e., information search stages). Each participant interacts with one randomly-assigned product type. Product types are manipulated to generalize the findings. Overall, Experiment 2-2 manipulates eight conditions (see Table 3.10).  Table 3.10 Factorial Design of Experimental Conditions (Experiment 2-2) Product CDIT (Aggregated vs. Pairwise) Product Type (Search vs. 
Experience) Information Search Stage Exploration Stage Elaboration Stage Condition 1 (30 participants) Laptop  Aggregated  Product CDIT Aggregated  Product CDIT Condition 2 (30 participants) Laptop  Aggregated  Product CDIT Pairwise  Product CDIT Condition 3 (30 participants) Laptop  Pairwise  Product CDIT Aggregated  Product CDIT  88 Condition 4 (30 participants) Laptop  Pairwise  Product CDIT Pairwise  Product CDIT Condition 5 (30 participants) Hotel  Aggregated  Product CDIT Aggregated  Product CDIT Condition 6 (30 participants) Hotel  Aggregated  Product CDIT Pairwise  Product CDIT Condition 7 (30 participants) Hotel  Pairwise  Product CDIT Aggregated  Product CDIT Condition 8 (30 participants) Hotel  Pairwise  Product CDIT Pairwise  Product CDIT   3.3.3 Participants and Experimental Procedure 3.3.3.1 Participants 1) Experiment 2-1: To enhance experimental realism and prevent the potential compounding effects of task involvement (Petty et al. 1983), this study recruited 120 voluntary participants from a large public university in North America, who were interested in purchasing a laptop and booking a hotel in Seattle within a few months.22  A statistical power analysis was performed for sample size estimation using G*Power 3.1. The effect size in this study was considered to be medium to large using Cohen's (1988) criteria. With an alpha=.05 and power=0.80, the projected sample size required for a medium-to-large effect size is approximately N=119 or 51 for 2 x 2 group comparison. Thus, my proposed sample size of 120 is adequate. After the Experiment 2-1 with 120 participants, there is no further data collection. Participants were randomly assigned to each condition (see Table 3.9). To motivate participants to fully engage in the task, every participant received CAD20 as an honorarium. Participants’ demographics are summarized in Table 3.11.                                                    22 To validate their interest in purchasing a laptop and booking a hotel, this study measured their perceived product knowledge and task involvement. Perceived product knowledge is statistically different from four points out of a seven-point Likert scale (m=5.00, p<.001); perceived task involvement is statistically different from four points out of a seven-point Likert scale (m=5.15, p<.001).   89 Table 3.11 Demographics of Participants (Experiment 2-1)  Mean Standard Deviation Age 24.29 3.48 Gender Male 49 N/A Female 71 N/A Have purchased online? Yes 120 N/A No 0 N/A Purchases online during last year 13.34 11.35 Money spent online during last year CAD734.92 CAD1,034.23 Note: Sample size = 120. No missing data.   2) Experiment 2-2: To enhance experimental realism and prevent the potential compounding effects of task involvement (Petty et al. 1983), this study recruited 240 voluntary participants from a large public university in North America, who were interested in purchasing a laptop and booking a hotel in Seattle within a few months. A statistical power analysis was performed for sample size estimation using G*Power 3.1. The effect size in this study was considered to be medium to large using Cohen's (1988) criteria. With an alpha=.05 and power=0.80, the projected sample size required for a medium-to-large effect size is approximately N=119 or 51 for 2 x 2 x2 group comparison. Thus, our proposed sample size of 240 is adequate. After the Experiment 2-2 with 240 participants, there is no further data collection. Participants were randomly assigned to each condition (see Table 3.10). 
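The a priori sample-size figures reported above were produced with G*Power 3.1. For readers who prefer a scriptable check, a rough counterpart can be sketched with the statsmodels power module, approximating each between-subject factorial design as a one-way ANOVA over its cells; the Cohen's f values and this approximation are assumptions for illustration, so the resulting numbers will be close to, but not identical with, the G*Power output for main effects and interactions.

```python
# Rough, scriptable counterpart to the reported G*Power sample-size analysis.
# Each factorial design is approximated as a one-way ANOVA over its cells;
# the effect sizes (Cohen's f) are illustrative assumptions.
import math
from statsmodels.stats.power import FTestAnovaPower

power_analysis = FTestAnovaPower()

designs = [("Experiment 2-1 (2 x 2)", 4), ("Experiment 2-2 (2 x 2 x 2)", 8)]
for label, k_groups in designs:
    for effect_label, f in [("medium", 0.25), ("large", 0.40)]:
        n_total = power_analysis.solve_power(effect_size=f, alpha=0.05,
                                             power=0.80, k_groups=k_groups)
        print(f"{label}, {effect_label} effect (f={f}): "
              f"total N >= {math.ceil(n_total)}")
```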
To motivate participants to fully engage in the task, every participant received CAD20 as an honorarium. Participants’ demographics are summarized in Table 3.12.   Table 3.12 Demographics of Participants (Experiment 2-2)  Mean Standard Deviation Age 23.31 3.92 Gender Male 108 N/A Female 132 N/A Have purchased online? Yes 240 N/A No 0 N/A Purchases online during last year 14.72 17.21 Money spent online during last year CAD 962.39 CAD 1,249.31 Note: Sample size = 240. No missing data.      90 3.3.3.2 Experimental Procedures The experimental procedures are as follows (see Figure 3.11). The procedures are similar for both experiments. First, pre-questionnaires for perceived task involvement, product knowledge, trustworthiness of four advice sources (i.e., Experts, Other Customers, RAs, OSNs), and demographics were administered (see Table 3.13). Second, participants were asked to click whether they liked or disliked product-relevant images, with a detailed explanation that the procedure was intended to find OSN users sharing similar preference and interests. Third, participants were instructed on how to use the interfaces of the online store (e.g., in eliciting personal preferences and the subjective importance on product attributes, explanations of CDITs, navigating information search stages). After participants confirmed their understanding of the online store interface, the main experimental task was administered. Participants were asked to select the best laptop or hotel that they were interested in. After finishing the task, the participants completed post-questionnaires measuring utilizing CDITs, task-individual-technology fit, decision quality, and decision effort (see Table 3.14).  Figure 3.11 Overview of Experimental Procedures     91  3.3.4 Measurement Items All measurement items used in Study #2 are listed in Tables 3.13 and 3.14, along with their sources.23 All measurement items have been validated by prior research work.   Table 3.13 Measurement Items: Pre-questionnaire Construct Measurement Item24 Task Involvement* (McQuarrie and Munson, 1992) Choosing a laptop / a hotel in Seattle is (TI1) Irrelevant / Relevant to me. (TI2) Of no concern / Of concern to me. (TI3) Didn’t matter / Mattered to me. (TI4) Meant nothing to me / Meant a lot to me. (TI5) Unimportant / Important. Product Knowledge* (Eisingerich and Bell, 2008; Sharma and Patterson, 2000) (PK1) I possess good knowledge on laptops / hotels (PK2) I can understand almost all the specifications (e.g., memory, hard drive) of laptops / specifications (e.g., amenities, comfort, location) of hotels.  (PK3) I am familiar with basic laptop specifications (e.g., memory, CPU) / hotel specifications (e.g., cleanness, service, food). Trustworthiness of Experts* (MacKnight et al., 2002) Laptop Condition: Experts are professional reviewers of Cnet.com, an independent technological organization that complies data for products. CNET tracks all the latest consumer technology breakthroughs and shows what's new, what matters and how technology can enrich life. Experts of CNET give the information, tools and advice that will help people decide what to buy and how to get the most out of the tech.   Hotel Conditions: Experts are professional reviewers of TripAdvisor.com, a travel website company providing hotels booking as well as reviews of travel-related content. Experts of TRIPADVISOR have reviewed more than 100 hotels all around the world and received the Hotel Expertise Badge that shows their unique knowledge. 
They give the information and advice that will help people decide where to stay and enjoy all around the world.  (TWE1) I believe that the experts would act in customers’ best interest. (TWE2) If customers required help, the experts would do their best to help customers. (TWE3) The experts are interested in customers’ well-being, not just their own. (TWE4) The experts are truthful in rating laptops. (TWE5) I would characterize the experts as honest. (TWE6) The experts would keep their commitments.  (TWE7) The experts are sincere and genuine. (TWE8) The experts are competent and effective in rating laptops. (TWE9) The experts perform their role of rating laptops very well.                                                23 All measures are disclosed in Tables 3.13 and 3.14. 24 Seven Likert-scale scored items used to assess the respondent’s agreement with items.  92 (TWE10) Overall, the experts are capable and proficient laptop recommendation source. (TWE11) In general, the experts are very knowledgeable about the laptops. Trustworthiness of Other Customers* (MacKnight et al., 2002) Laptop Condition: Customers are general users of Amazon.com, the world’s largest online retailer. AMAZON encourages customers to share their opinions, both favorable and unfavorable. Customers share information on the product through written reviews and ratings on a 5-star scale. Customers’ reviews and ratings are meant to give other customers genuine product feedback and would be helpful to learn more about the products from other customers’ perspectives.   Hotel Condition: Customers are general users of Hotels.com, the world’s largest online hotel booking website. HOTELS encourages customers to share their opinions, both favorable and unfavorable. Customers share information on the hotel through written reviews and ratings on a 5-star scale. Customers’ reviews and ratings are meant to give other customers genuine feedback and would be helpful to learn more about the hotels from other customers’ perspectives.  (TWC1) I believe that the other customers would act in customers’ best interest. (TWC2) If customers required help, the other customers would do their best to help customers. (TWC3) The other customers are interested in customers’ well-being, not just their own. (TWC4) The other customers are truthful in rating laptops. (TWC5) I would characterize the other customers as honest. (TWC6) The other customers would keep their commitments.  (TWC7) The other customers are sincere and genuine. (TWC8) The other customers are competent and effective in rating laptops. (TWC9) The other customers perform their role of rating laptops very well. (TWC10) Overall, the other customers are capable and proficient laptop recommendation source. (TWC11) In general, the other customers are very knowledgeable about the laptops. Trustworthiness of RAs (MacKnight et al., 2002) Recommendation agent (RA) is an independent automated recommendation tool that ranks products for you based on your preferences. When you elicit your preference for each product attribute as well as its importance, the RA presents a list of products matching your needs for those attributes.  (TWR1) I believe that the RAs would act in customers’ best interest. (TWR2) If customers required help, the RAs would do their best to help customers. (TWR3) The RAs are interested in customers’ well-being, not just their own. (TWR4) The RAs are truthful in rating laptops. (TWR5) I would characterize the RAs as honest. (TWR6) The RAs would keep their commitments.  
(TWR7) The RAs are sincere and genuine. (TWR8) The RAs are competent and effective in rating laptops. (TWR9) The RAs perform their role of rating laptops very well. (TWR10) Overall, the RAs are capable and proficient laptop recommendation source. (TWR11) In general, the RAs are very knowledgeable about the laptops.  93 Trustworthiness of OSNs (MacKnight et al., 2002) Online social networks (OSNs) are users who have similar interests and preferences of products, news, etc. in Facebook.com, the world’s largest online social networking service provider. FACEBOOK identifies users’ interests from information they’ve added to their Timeline, keywords associated with the Pages they like or apps they use, ads they’ve clicked on and other similar sources. By analyzing your Likes, FACEBOOK presents a list of products that liked by other users sharing your interests and preferences.  (TWO1) I believe that the OSNs would act in customers’ best interest. (TWO2) If customers required help, the OSNs would do their best to help customers. (TWO3) The OSNs are interested in customers’ well-being, not just their own. (TWO4) The OSNs are truthful in rating laptops. (TWO5) I would characterize the OSNs as honest. (TWO6) The OSNs would keep their commitments.  (TWO7) The OSNs are sincere and genuine. (TWO8) The OSNs are competent and effective in rating laptops. (TWO9) The OSNs perform their role of rating laptops very well. (TWO10) Overall, the OSNs are capable and proficient laptop recommendation source. (TWO11) In general, the OSNs are very knowledgeable about the laptops. * Measurement items for these constructs were provided in accordance with the assigned condition (i.e., laptops and hotels).  Table 3.14 Measurement Items: Post-questionnaire Construct Measurement Item25 Utilizing CDITs (Miranda and Bostrom, 1993; modified) (UC1) I can identify whether advice sources have mutual agreement to the quality of recommended product(s). (UC2) I can utilize the gap of rating scores between advice sources. Perceived Decision Quality (Tan et al., 2010) (DQ1) I believe I have made the best choice of the laptop at this website. (DQ2) I would make the same choice if I had to do it again. (DQ3) I believe I have selected the best laptop. Perceived Decision Effort (Perera, 2000; Wang and Benbasat, 2009) (DE1) The laptop selection task that I went through was complex. (DE2) The task of selecting a laptop using the website was complex. (DE3) Selecting a laptop using the website required effort. (DE4) The task of selecting a laptop using the website took time.                                                25 Seven Likert-scale scored items used to assess the respondent’s agreement with items.  94 Perceived Task- Individual-Technology Fit26 (Aiken et al., 2013; Jarupathirunet al., 2007; Lin and Huang, 2008) In helping me to choose the best laptop, the functionalities of “Consistency Distance Identification Tools” were  (TITF1) Very inadequate vs. Very adequate (TITF2) Very inappropriate vs. Very appropriate (TITF3) Not useful at all vs. Very useful (TITF4) Very incompatible with the task vs. Very compatible with the task (TITF5) Not helpful at all vs. Very helpful (TITF6) Not sufficient at all vs. Very sufficient (TITF7) Did not make the task easy at all vs. Made the task very easy (TITF8) In general, did not fit the task at all vs. Best fit the task * Measurement items for these constructs were provided in accordance with the assigned condition (i.e., laptops or hotels).   
To validate the reliability, convergent validity, and discriminant validity of the measurement items, Study #2 applied confirmatory factor analysis using SmartPLS. Table 3.15 shows the descriptive statistics and composite reliability of the constructs. All composite reliabilities are greater than 0.7, the recommended cut-off (Barclay et al. 1995; Fornell and Bookstein 1982). Thus, the reliability of the measurements is acceptable.

Table 3.15 Descriptive Statistics and Composite Reliability of Constructs (Mean; Standard Deviation; Composite Reliability)
Task Involvement (TI): 5.15; 1.62; .966
Product Knowledge (PK): 5.00; 1.24; .716
Trustworthiness of Experts (TWE): 4.91; 1.03; .896
Trustworthiness of Other Customers (TWC): 4.86; 1.15; .912
Trustworthiness of RAs (TWR): 4.97; 1.19; .946
Trustworthiness of OSNs (TWO): 3.99; 1.25; .948
Utilizing CDITs (UC): 5.21; 0.99; .835
Perceived Decision Quality (DQ): 5.09; 0.98; .897
Perceived Decision Effort (DE): 4.71; 1.38; .904
Perceived Task-Individual-Technology Fit (TITF): 4.50; 1.30; .937

26 The perceived task-individual-technology fit instrument was based on the instrument from Jarupathirun et al. (2007), which investigates interactions between self-efficacy, the visualization of decision-support tools, and geographic analysis tasks. Since this study investigates task-individual-technology interactions to understand how consistency distance identification tools support consistency distance, this study considered it to be a suitable measure of task-individual-technology fit, not just task-technology fit. This instrument has been validated and adapted in previous studies (Aiken et al., 2013; Jarupathirun et al., 2007; Lin and Huang, 2008).

Convergent validity is assessed by individual item reliability, the composite reliability of the construct, and the average variance extracted (AVE) (Barclay et al., 1995; Hu et al., 2004). Individual item reliability was assessed by examining the loadings of the measurement items on their corresponding construct; all the item loadings are significant and exceeded 0.7. All the composite reliability values exceeded 0.7, the recommended criterion (Barclay et al., 1995; Fornell and Bookstein, 1982), and the AVE values exceeded 0.5, the generally accepted criterion (Hu et al., 2004) (see Table 3.16). Therefore, these results show good convergent validity for the measurement items.

Table 3.16 Composite Reliability, AVE, and Correlation Among Constructs
(Columns: CR, AVE, then correlations with TI, PK, TWE, TWC, TWR, TWO, UC, DQ, DE, TITF)
TI: .966, .849, .921
PK: .716, .559, .408, .748
TWE: .896, .512, .110, .162, .715
TWC: .912, .556, -.046, -.033, .321, .746
TWR: .946, .614, .031, .015, .347, .136, .784
TWO: .948, .625, .014, .037, .296, .284, .232, .791
UC: .835, .717, .152, .186, .260, .194, .203, .091, .847
DQ: .897, .745, .026, .189, .214, -.003, .060, .125, .367, .863
DE: .904, .703, .170, .055, -.017, -.019, -.052, -.086, -.063, -.067, .838
TITF: .937, .651, .074, .039, .085, -.026, .177, .117, .104, .053, -.107, .807
Note: Composite Reliability = CR; Average Variance Extracted = AVE; Task Involvement = TI; Product Knowledge = PK; Trustworthiness of Experts = TWE; Trustworthiness of Other Customers = TWC; Trustworthiness of Recommendation Agents = TWR; Trustworthiness of OSNs = TWO; Utilizing CDITs = UC; Decision Quality = DQ; Decision Effort = DE; Task-Individual-Technology Fit = TITF; Diagonal values are the square root of AVE.
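As a brief illustration of how the indices reported in Tables 3.15 and 3.16 are derived, the sketch below computes composite reliability, AVE, and the square root of AVE (the diagonal of Table 3.16) from a construct's standardized item loadings; the loadings used here are hypothetical, not values from this study.

```python
# Illustrative only: composite reliability (CR), average variance extracted (AVE),
# and the Fornell-Larcker diagonal from a construct's standardized item loadings.
import math

def composite_reliability(loadings):
    # CR = (sum of loadings)^2 / [(sum of loadings)^2 + sum of error variances]
    num = sum(loadings) ** 2
    errors = sum(1 - l ** 2 for l in loadings)
    return num / (num + errors)

def ave(loadings):
    # AVE = mean of the squared loadings
    return sum(l ** 2 for l in loadings) / len(loadings)

loadings = [0.82, 0.85, 0.88]   # hypothetical loadings for a three-item construct
cr = composite_reliability(loadings)
v = ave(loadings)
print(round(cr, 3), round(v, 3), round(math.sqrt(v), 3))  # CR, AVE, sqrt(AVE)
```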
Discriminant validity is assessed by comparing the square roots of the AVE with the correlations among the constructs. To show good discriminant validity, all the square roots of the AVE should be greater than the off-diagonal elements in the corresponding rows and columns; this indicates that each construct shares more variance with its own measures than with the other constructs (Fornell and Bookstein, 1982). The diagonal values of Table 3.16, the square roots of the AVE, exceed the correlations among the constructs, demonstrating good discriminant validity for all of the constructs. Thus, all conditions for convergent and discriminant validity are satisfied.

3.4 DATA ANALYSIS AND RESULTS
3.4.1 Experiment 2-1: Interplay Between Source CDITs, Trustworthiness Variance, and Product Type
To investigate the interplay between Source CDITs and trustworthiness variance, this study categorized participants into polarized or converged trustworthiness groups through post-grouping analysis. To measure trustworthiness variance and identify the polarized trustworthiness group, in which participants build polarized trustworthiness toward a specific advice source, this study standardized each individual's trustworthiness of each advice source and took the highest standardized trustworthiness value among the advice sources.27 If an individual's perceived trustworthiness of an advice source is similar to that of the other advice sources, the highest standardized trustworthiness value will be close to zero; as perceived trustworthiness of one advice source becomes polarized, the highest standardized trustworthiness value increases.28 To categorize the 120 participants into polarized and converged groups, this study used the median of the maximum standardized trustworthiness value among advice sources (Mdn=1.014). After classifying participants into polarized and converged trustworthiness groups, this study regrouped them by assigned Source CDIT and product type (see Table 3.17). There is no statistical difference in product knowledge and task involvement across post-groups (p>.1).

27 An individual has perceived trustworthiness of advice sources S1, S2, ..., Sn. Let TW_Sn represent the individual's trustworthiness of Sn, M_S the mean of the individual's trustworthiness across advice sources, and Std_S the standard deviation of that trustworthiness. With this notation, trustworthiness variance (TW_V) is: TW_V = max((TW_S1 - M_S) / Std_S, (TW_S2 - M_S) / Std_S, ..., (TW_Sn - M_S) / Std_S).
28 The range of the standardized trustworthiness of advice sources is from 0 to 1.5.

Table 3.17 Post-Grouping of Trustworthiness Variance
Polarized Group: Aggregated Source CDIT, 34 participants (Laptop: 17, Hotel: 17); Pairwise Source CDIT, 26 participants (Laptop: 13, Hotel: 13)
Converged Group: Aggregated Source CDIT, 26 participants (Laptop: 13, Hotel: 13); Pairwise Source CDIT, 34 participants (Laptop: 17, Hotel: 17)

A three-way multivariate analysis of variance (MANOVA) was conducted to determine the effect of trustworthiness variance, Source CDIT, and Product Type on the three dependent variables of task-individual-technology fit, decision quality, and decision effort (see Table 3.18). The MANOVA results indicate that the interaction between Source CDIT and trustworthiness variance (Pillai's Trace=.161, F(3, 110)=7.042, p=.001) significantly affects the combined dependent variable of task-individual-technology fit, decision quality, and decision effort.

Table 3.18 MANOVA Summary Table (Multivariate Test)
Effect Value F Hypothesis df Error df Sig.
Intercept .989 3257.954 3 110 .001 Source CDIT .023 .864 3 110 .462 Trustworthiness Variance .003 .098 3 110 .961 Product Type .039 1.474 3 110 .225 Source CDIT * Trustworthiness Variance .161 7.042 3 110 .001 Source CDIT * Product Type .031 1.174 3 110 .323 Trustworthiness Variance * Product Type .004 .146 3 110 .932 Source CDIT * Trustworthiness Variance * Product Type .002 .063 3 110 .979   Univariate analysis of variance (ANOVA) was conducted as a follow-up test (see Table 3.19).29 The ANOVA results indicate that task-individual-technology fit differs significantly for the interaction between Source CDIT and trustworthiness variance (F(1, 112)=12.236, p=.001). Decision quality differs significantly for the interaction between Source CDIT and trustworthiness variance (F(1, 112)=7.688, p=.007). Decision effort does not differ significantly for the interaction between Source CDIT and trustworthiness variance (F(1,112)=.195, p=.659).   Table 3.19 Univariate ANOVA Summary Table Test of Between-Subject Effects Source Dependent Variable Type III Sum of Squares df Mean Square F Sig. Corrected Model Task-Individual-Technology Fit Decision Quality Decision Effort 18.585 7.143 5.944 7 7 7 2.655 1.020 .849 2.716 1.352 .535 .012 .233 .806                                                29 Prior to examining the univariate repeated measures ANOVA results, the alpha level was adjusted to a=.020 due to the risk of Type I error (Mertler and Reinhart, 2016).	 98 Intercept Task-Individual-Technology Fit Decision Quality Decision Effort 2517.193 3052.187 2424.094 1 1 1 2517.193 3052.187 2424.094 2575.229 4044.185 1526.953 .001 .001 .001 Source CDIT Task-Individual-Technology Fit Decision Quality Decision Effort 2.101 .232 .005 1 1 1 2.101 .232 .005 2.150 .308 .003 .145 .580 .957 Trustworthiness Variance Task-Individual-Technology Fit Decision Quality Decision Effort .010 .151 .084 1 1 1 .010 .151 .084 .101 .199 .053 .919 .656 .818 Product Type Task-Individual-Technology Fit Decision Quality Decision Effort 3.484 .636 .148 1 1 1 3.484 .636 .148 3.564 .843 .093 .062 .361 .760 Source CDIT * Trustworthiness Variance Task-Individual-Technology Fit Decision Quality Decision Effort 11.960 5.802 .310 1 1 1 11.960 5.802 .310 12.236 7.688 .195 .001 .007 .659 Source CDIT * Product Type Task-Individual-Technology Fit Decision Quality Decision Effort .294 .173 5.058 1 1 1 .294 .173 5.058 .301 .229 3.186 .585 .633 .077 Trustworthiness Variance * Product Type Task-Individual-Technology Fit Decision Quality Decision Effort .086 .278 .019 1 1 1 .086 .278 .019 .088 .368 .012 .767 .545 .914 Source CDIT * Trustworthiness Variance * Product Type Task-Individual-Technology Fit Decision Quality Decision Effort .126 .004 .139 1 1 1 .126 .004 .139 .129 .005 .087 .720 .943 .768 Error Task-Individual-Technology Fit Decision Quality Decision Effort 109.476 84.528 177.804 112 112 112 .977 .755 1.588   Total Task-Individual-Technology Fit Decision Quality Decision Effort 2650.269 3172.305 2663.500 120 120 120    Corrected Total Task-Individual-Technology Fit Decision Quality Decision Effort 128.061 91.671 183.748 119 119 119      3.4.1.1 Impact of Trustworthiness Variance and Source CDIT on Task-Individual-Technology Fit To investigate the interaction effect between Source CDIT and trustworthiness variance on task-individual-technology fit, this study examines group means for task-individual-technology fit by Source CDIT and trustworthiness variance (see Table 3.20). 
As shown in Figure 3.12, there is a significant statistical difference in task-individual-technology fit (p=.001) between the converged group (m=5.09) and the polarized group (m=4.43) using the Aggregated Source CDIT. That is, when a group of participants having converged trustworthiness variance uses the Aggregated Source CDIT, their perceived task-individual-technology fit is higher than that of those having polarized trustworthiness variance. In addition, there is a significant statistical difference in task-individual-technology fit (p=.001) between the converged group (m=4.17) and the polarized group (m=4.82) using the Pairwise Source CDIT. That is, when a group of participants having polarized trustworthiness variance uses the Pairwise Source CDIT, their perceived task-individual-technology fit is higher than that of those having converged trustworthiness variance. Since there is no statistical difference between product types (p>.1), the results are generalized across a search good and an experience good. Overall, this result shows that there are interaction effects between trustworthiness variance and Source CDITs in influencing perceived task-individual-technology fit. Thus, H1 and H4 are supported.

Table 3.20 Means for Task-Individual-Technology Fit by Source CDIT and Trustworthiness Variance
Aggregated Source CDIT: Converged 5.09, Polarized 4.43
Pairwise Source CDIT: Converged 4.17, Polarized 4.82

Figure 3.12 Interaction Effect of Trustworthiness Variance and Source CDIT on Task-Individual-Technology Fit
[Interaction plot: task-individual-technology fit for the Polarized and Converged groups under the Aggregated and Pairwise Source CDITs]

3.4.1.2 Impact of Trustworthiness Variance and Source CDITs on Decision Quality
To investigate the interaction effect between Source CDIT and trustworthiness variance on decision quality, this study examines group means for decision quality by Source CDIT and trustworthiness variance (see Table 3.21). As shown in Figure 3.13, there is a significant statistical difference in perceived decision quality (p=.007) between the converged group (m=5.40) and the polarized group (m=4.88) using the Aggregated Source CDIT. That is, when a group of participants having converged trustworthiness variance uses the Aggregated Source CDIT, their perceived decision quality is higher than that of those having polarized trustworthiness variance. In addition, there is a significant statistical difference in perceived decision quality (p=.007) between the converged group (m=4.87) and the polarized group (m=5.23) using the Pairwise Source CDIT. That is, when a group of participants having polarized trustworthiness variance uses the Pairwise Source CDIT, their decision quality is higher than that of those having converged trustworthiness variance. Since there is no statistical difference between product types (p>.1), the results are generalized across a search good and an experience good. Overall, this result shows that there are interaction effects between trustworthiness variance and Source CDITs in influencing perceived decision quality. Thus, H2 and H5 are supported.
Table 3.21 Means for Decision Quality by Source CDIT and Trustworthiness Variance
Aggregated Source CDIT: Converged 5.40, Polarized 4.88
Pairwise Source CDIT: Converged 4.87, Polarized 5.23

Figure 3.13 Interaction Effect of Trustworthiness Variance and Source CDIT on Decision Quality
[Interaction plot: decision quality for the Polarized and Converged groups under the Aggregated and Pairwise Source CDITs]

3.4.1.3 Impact of Trustworthiness Variance and Source CDITs on Decision Effort
To investigate the interaction effect between Source CDIT and trustworthiness variance on decision effort, this study examines group means for decision effort by Source CDIT and trustworthiness variance (see Table 3.22). As shown in Figure 3.14, there is no significant statistical difference in perceived decision effort (p>.1) between the converged group (m=4.51) and the polarized group (m=4.56) using the Aggregated Source CDIT. In addition, there is no significant statistical difference in perceived decision effort (p>.1) between the converged group (m=4.63) and the polarized group (m=4.45) using the Pairwise Source CDIT. Since there is no statistical difference between product types (p>.1), the results are generalized across a search good and an experience good. Overall, this result shows that perceived decision effort is not influenced by the interaction between trustworthiness variance and Source CDITs. Thus, H3 and H6 are not supported.

Table 3.22 Means for Decision Effort by Source CDIT and Trustworthiness Variance
Aggregated Source CDIT: Converged 4.51, Polarized 4.56
Pairwise Source CDIT: Converged 4.63, Polarized 4.45

Figure 3.14 Interaction Effect of Trustworthiness Variance and Source CDITs on Decision Effort
[Interaction plot: decision effort for the Polarized and Converged groups under the Aggregated and Pairwise Source CDITs]

3.4.2 Experiment 2-2: Interplay Between Product CDITs, Information Search Stages, and Product Type
A three-way MANOVA was conducted to determine the effect of information search stage, Product CDIT, and Product Type on the three dependent variables of task-individual-technology fit, decision quality, and decision effort (see Table 3.23).30 Box's Test is not significant, indicating that the assumption of homogeneity of variance-covariance matrices is satisfied, F(42, 89018)=1.087, p=.324, so the Wilks' Lambda test statistic is used in interpreting the MANOVA results. The MANOVA results indicate that the Product CDIT utilized in the exploration stage (Wilks' Lambda=.914, F(3, 230)=7.258, p=.001), the Product CDIT utilized in the elaboration stage (Wilks' Lambda=.920, F(3, 230)=6.684, p=.001), Product Type (Wilks' Lambda=.943, F(3, 230)=4.662, p=.001), the interaction effect between the Product CDIT utilized in the exploration stage and the Product CDIT utilized in the elaboration stage (Wilks' Lambda=.924, F(3, 230)=6.341, p=.001), and the three-way interaction among the Product CDIT utilized in the exploration stage, the Product CDIT utilized in the elaboration stage, and Product Type (Wilks' Lambda=.961, F(3, 230)=3.079, p=.028) significantly affect the combined dependent variable of task-individual-technology fit, decision quality, and decision effort.

30 There is no statistical difference in product knowledge and task involvement across the assigned conditions in Table 3.10 (p>.1).

Table 3.23 MANOVA Summary Table (Multivariate Test)
Effect Value F Hypothesis df Error df Sig.
Intercept .010 7392.608 3 230 .001 Product CDIT Utilized in the Exploration Stage .914 7.258 3 230 .001 Product CDIT Utilized in the Elaboration Stage .920 6.684 3 230 .001 Product Type .943 4.662 3 230 .003 Product CDIT Utilized in the Exploration Stage * Product Type .972 2.200 3 230 .089 Product CDIT Utilized in the Elaboration Stage * Product Type .982 1.409 3 230 .241   Univariate ANOVA was conducted as a follow-up test (see Table 3.24).31 The ANOVA results indicate that task-individual-technology fit significantly differs for the Product CDIT utilized in the exploration stage (F(1, 232)=11.614, p=.001) and the Product CDIT utilized in the elaboration stage (F(1, 232)=7.168, p=.008). Decision quality significantly differs for the Product CDIT utilized in the elaboration stage (F(1, 232)=13.277, p=.001), Product Type (F(1, 232)=6.397, p=.012), and the three-way interaction among the Product CDIT utilized in the exploration stage, the Product CDIT utilized in the elaboration stage, and Product type (F(1, 232)=7.168, p=.008). Decision effort significantly differs for the Product CDIT utilized in the exploration stage (F(1, 232)=10.659, p=.001).  Table 3.24 Univariate ANOVA Summary Table Test of Between-Subject Effects Source Dependent Variable Type III Sum of Squares df Mean Square F Sig. Corrected Model Task-Individual-Technology Fit Decision Quality Decision Effort 28.256 26.750 22.476 1 1 1 4.037 3.821 3.211 4.065 6.638 2.963 .001 .001 .005 Intercept Task-Individual-Technology Fit Decision Quality Decision Effort 4792.288 6265.780 5523.362 1 1 1 4792.288 6265.780 5523.362 4826.139 10883.911 5097.455 .001 .001 .001 Product CDIT in the Exploration Stage Task-Individual-Technology Fit Decision Quality Decision Effort 11.533 .627 11.550 1 1 1 11.533 .627 11.550 11.614 1.090 10.659 .001 .298 .001                                                31	Prior to examining the univariate repeated measures ANOVA results, the alpha level was adjusted to a=.020 due to the risk of Type I error (Mertler and Reinhart, 2016).	 104 Product CDIT in the Elaboration Stage Task-Individual-Technology Fit Decision Quality Decision Effort 7.117 7.643 .006 1 1 1 7.117 7.643 .006 7.168 13.277 .005 .008 .001 .943 Product Type Task-Individual-Technology Fit Decision Quality Decision Effort 1.652 3.683 5.296 1 1 1 1.652 3.683 5.296 1.663 6.397 4.887 .198 .012 .028 Product CDIT in the Exploration Stage *  Product Type Task-Individual-Technology Fit Decision Quality Decision Effort 2.004 .126 4.361 1 1 1 2.004 .126 4.361 2.018 .218 4.024 .157 .641 .046 Product CDIT in the Elaboration Stage *  Product Type Task-Individual-Technology Fit Decision Quality Decision Effort 3.914 .074 .349 1 1 1 3.914 .074 .349 3.942 .128 .322 .048 .721 .571 Error Task-Individual-Technology Fit Decision Quality Decision Effort 230.373 133.561 251.384 232 232 232 .993 .576 1.084   Total Task-Individual-Technology Fit Decision Quality Decision Effort 5050.917 6426.091 5797.223 240 240 240    Corrected Total Task-Individual-Technology Fit Decision Quality Decision Effort 258.629 160.311 273.861 239 239 239      3.4.2.1 Impact of Product CDITs Utilized in Information Search Stages on Task-Individual-Technology Fit This study investigates group means for task-individual-technology fit by Product CDITs (Aggregated CDIT vs. Pairwise CDIT) utilized in the information search stages (i.e., the exploration stage and the elaboration stage). 
As shown in Table 3.25 and Figure 3.15, there is a significant statistical difference in task-individual-technology fit (p=.001) between the Aggregated Product CDIT (m=4.69) and the Pairwise Product CDIT (m=4.25) utilized in the exploration stage. That is, when a group of participants uses the Aggregated Product CDIT in the exploration stage, their perceived task-individual-technology fit is higher than that of those using the Pairwise Product CDIT in the exploration stage. In addition, there is a significant statistical difference in task-individual-technology fit (p=.008) between the Pairwise Product CDIT (m=4.64) and the Aggregated Product CDIT (m=4.30) utilized in the elaboration stage. That is, when a group of participants uses the Pairwise Product CDIT in the elaboration stage, their perceived task-individual-technology fit is higher than that of those using the Aggregated Product CDIT in the elaboration stage. Since there is no statistical difference between product types (p>.1), the results are generalized across a search good and an experience good. Overall, these results show that there are main effects of Product CDITs on perceived task-individual-technology fit. Thus, H7 and H8 are supported.

Table 3.25 Means for Task-Individual-Technology Fit by the Product CDIT Utilized in the Exploration and Elaboration Stages
Aggregated Product CDIT: Exploration Stage 4.69, Elaboration Stage 4.30
Pairwise Product CDIT: Exploration Stage 4.25, Elaboration Stage 4.64

Figure 3.15 Means for Task-Individual-Technology Fit by the Product CDIT Utilized in the Exploration and Elaboration Stages
[Bar chart: task-individual-technology fit means for the Aggregated and Pairwise Product CDITs in the exploration and elaboration stages]

3.4.2.2 Impact of Product CDITs Utilized in Information Search Stages and Product Type on Decision Quality
This study investigates group means for decision quality by i) the Product CDIT (Aggregated CDIT vs. Pairwise CDIT) utilized in the elaboration stage; and ii) Product Type (Laptop vs. Hotel). As shown in Table 3.26 and Figure 3.16, there is a significant statistical difference in decision quality (p=.001) between the Aggregated Product CDIT (m=4.93) and the Pairwise Product CDIT (m=5.29) utilized in the elaboration stage. That is, when a group of participants uses the Pairwise Product CDIT in the elaboration stage, their perceived decision quality is higher than that of those using the Aggregated Product CDIT in the elaboration stage. However, there is no statistical difference between the Product CDITs utilized in the exploration stage (p=.298). That is, there is a main effect of the Product CDIT utilized in the elaboration stage on decision quality, while there is no main effect of the Product CDIT utilized in the exploration stage. Thus, H9 is not supported, but H10 is supported.

Table 3.26 Means for Decision Quality by the Product CDIT Utilized in the Exploration and Elaboration Stages
Aggregated Product CDIT: Exploration Stage 5.16, Elaboration Stage 4.93
Pairwise Product CDIT: Exploration Stage 5.05, Elaboration Stage 5.29

Figure 3.16 Means for Decision Quality by the Product CDIT Utilized in the Exploration and Elaboration Stages
[Bar chart: decision quality means for the Aggregated and Pairwise Product CDITs in the exploration and elaboration stages]

3.4.2.3 Impact of Product CDITs Utilized in the Exploration Stage on Decision Effort
This study investigates group means for decision effort by the Product CDIT (Aggregated CDIT vs. Pairwise CDIT) utilized in the exploration stage.
As shown in Table 3.27 and Figure 3.17, there is a significant statistical difference in decision effort (p=.001) between the Aggregated Product CDIT (m=4.58) and the Pairwise Product CDIT (m=5.02) utilized in the exploration stage. That is, when a group of participants uses the Pairwise Product CDIT in the exploration stage, their perceived decision effort is higher than that of those using the Aggregated Product CDIT in the exploration stage. However, there is no statistical difference between the Product CDITs utilized in the elaboration stage (p=.943). Since there is no statistical difference between product types (p>.1), the results are generalized across a search good and an experience good. Overall, these results show that there is a main effect of the Product CDIT utilized in the exploration stage on decision effort, while there is no main effect of the Product CDIT utilized in the elaboration stage. Thus, H11 is supported, but H12 is not supported.

Table 3.27 Means for Decision Effort by the Product CDIT Utilized in the Exploration and Elaboration Stages
Aggregated Product CDIT: Exploration Stage 4.58, Elaboration Stage 4.80
Pairwise Product CDIT: Exploration Stage 5.02, Elaboration Stage 4.79

Figure 3.17 Means for Decision Effort by the Product CDIT Utilized in the Exploration and Elaboration Stages
[Bar chart: decision effort means for the Aggregated and Pairwise Product CDITs in the exploration and elaboration stages]

3.4.3 Overall Findings
Through Experiments 2-1 and 2-2, this study revealed the interaction effects among trustworthiness variance, Source and/or Product CDITs, and information search stages on task-individual-technology fit and decision-making performance (i.e., decision quality and effort). Hypotheses 1, 2, 4, 5, 7, 8, 10, and 11 were supported, while Hypotheses 3, 6, 9, and 12 were not supported (see Table 3.28).

Table 3.28 Summary of Hypothesis Testing
H1. When utilizing the Pairwise Source CDIT in the source selection stage, people having polarized trustworthiness perceive higher task-individual-technology fit than those having converged trustworthiness. (Supported)
H2. When utilizing the Pairwise Source CDIT in the source selection stage, people having polarized trustworthiness perceive higher decision quality than those having converged trustworthiness. (Supported)
H3. When utilizing the Pairwise Source CDIT in the source selection stage, people having polarized trustworthiness perceive lower decision effort than those having converged trustworthiness. (Not Supported)
H4. When utilizing the Aggregated Source CDIT in the source selection stage, people having converged trustworthiness perceive higher task-individual-technology fit than those having polarized trustworthiness. (Supported)
H5. When utilizing the Aggregated Source CDIT in the source selection stage, people having converged trustworthiness perceive higher decision quality than those having polarized trustworthiness. (Supported)
H6. When utilizing the Aggregated Source CDIT in the source selection stage, people having converged trustworthiness perceive lower decision effort than those having polarized trustworthiness. (Not Supported)
H7. People utilizing the Aggregated Product CDIT in the exploration stage perceive higher task-individual-technology fit than those utilizing the Pairwise Product CDIT in the exploration stage.
(Supported)
H8. People utilizing the Pairwise Product CDIT in the elaboration stage perceive higher task-individual-technology fit than those utilizing the Aggregated Product CDIT in the elaboration stage. (Supported)
H9. People utilizing the Aggregated Product CDIT in the exploration stage perceive higher decision quality than those utilizing the Pairwise Product CDIT in the exploration stage. (Not Supported)
H10. People utilizing the Pairwise Product CDIT in the elaboration stage perceive higher decision quality than those utilizing the Aggregated Product CDIT in the elaboration stage. (Supported)
H11. People utilizing the Aggregated Product CDIT in the exploration stage perceive lower decision effort than those utilizing the Pairwise Product CDIT in the exploration stage. (Supported)
H12. People utilizing the Pairwise Product CDIT in the elaboration stage perceive lower decision effort than those utilizing the Aggregated Product CDIT in the elaboration stage. (Not Supported)

In Experiment 2-1, using post-grouping analysis, this study categorized participants into polarized and converged trustworthiness groups and showed that the Pairwise Source CDIT fits the polarized group, while the Aggregated Source CDIT fits the converged group, in increasing task-individual-technology fit and decision-making performance by supporting the Anchoring and Seeking Strategies. In Experiment 2-2, this study found that the Aggregated Product CDIT and the Pairwise Product CDIT fit the exploration and elaboration stages, respectively, in increasing task-individual-technology fit and decision-making performance. Since, overall, there is no statistical difference between product types (p>.1), the results were generalized across a search good and an experience good. In addition, as the average correlation between product quality, such as recommendation ranking or rating scores, and product consistency distance is not statistically significant (p>.1), product quality does not influence the impact of consistency on decision-making performance.32 This study examined the best combination among trustworthiness variance, information search stages, and CDITs on the basis of the Task-Technology Fit Theory.

3.5 DISCUSSION
In making product selection decisions, online consumers face the challenge of deciding how to utilize wide-ranging and possibly conflicting sets of information from multiple advice sources. While Study #1 investigated online consumers' use of consistency strategies when facing multiple advice sources and revealed how such consistency strategies are utilized across information search stages, there is a need to design and implement decision aids that help consumers identify consistency across advice sources and guide them when they need to consider consistency in making product selection decisions.

Study #2 investigated how to design decision aids that identify consistency across advice sources. First, this study conceptualized consistency distance as a continuous variable to better capture the granularity of consistency by applying a Euclidean metric. Second, this study proposed four types of CDITs representing diverse levels of consistency across information search stages, where consistency is the extent of objective agreement among advice sources' rating scores that testify to product quality and/or fit.
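To make the Euclidean operationalization of consistency distance concrete, the sketch below computes pairwise and aggregated distances for one product's rating scores from four advice sources. The scores and the centroid-based aggregation rule are illustrative assumptions rather than the exact normalization used in this study.

```python
# A minimal sketch of consistency distance under one plausible operationalization:
# each advice source's rating of a product is treated as a coordinate, and the
# pairwise and aggregated measures are Euclidean distances. Scores are hypothetical.
from itertools import combinations
import math

ratings = {"RA": 4.6, "Experts": 4.1, "Customers": 4.4, "OSN": 3.2}

def pairwise_distance(a, b):
    # Pairwise CDIT value for two sources on one product
    return abs(ratings[a] - ratings[b])

def aggregated_distance():
    # Aggregated CDIT value: Euclidean distance of all sources from their centroid
    centroid = sum(ratings.values()) / len(ratings)
    return math.sqrt(sum((r - centroid) ** 2 for r in ratings.values()))

print({pair: round(pairwise_distance(*pair), 2) for pair in combinations(ratings, 2)})
print(round(aggregated_distance(), 2))
```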
Third, to examine which combination of a CDIT and an information search stage is the most efficient and effective in utilizing consistency and improving decision-making performance, this study investigated when and how to provide the CDITs across information search stages on the basis of Task-Technology Fit Theory. Specifically, as CDITs have diverse functionalities in terms of consistency depth and width (as the Task-Technology Fit Theory postulates), CDITs could have a more positive impact on decision-making performance if its                                                32 The average correlations between recommendation rankings or ratings and product consistency distance measure is 0.036 (p>.1).  110 functionalities matched individuals’ trustworthiness variance and requirements or goals in an information search stage.   The results show that there are interaction effects between Source and/or Product CDITs, trustworthiness variance, and information search stages on perceived task-individual-technology fit and decision-making performance. Particularly, Aggregated Source CDIT fits the converged trustworthiness group, while Pairwise Source CDIT fits the polarized trustworthiness group in the source selection stage. In addition, Aggregated Product CDIT fits the exploration stage, while Pairwise Product CDIT fits the elaboration stage in achieving higher task-individual-technology fit and decision-making performance.  3.5.1 Theoretical Implications Study #2 has both theoretical and practical implications. From the theoretical perspective, consistency distance is conceptualized as a more objective and continuous variable to better capture the granularity of inconsistency among advice sources. By adopting a Euclidean metric, this study is able to both specify advice sources’ rating scores that represent the overall evaluation of product quality in Euclidean space as well as measure the consistency distance as an objective and continuous variable. In addition, this study conceptualizes trustworthiness variance representing individual characteristics that trigger the utilization of consistency strategies (i.e., Anchoring and Seeking Strategies). While previous research relying on the Task-Technology Fit Theory focused mainly on the interplay between technology and task in improving users’ performance, few studies have examined the impact of individual characteristics on task-individual-technology fit. Particularly, no study applied individual characteristics that trigger individual’s decision-making strategies with the support of decision aids. Study #2 attempted to fill this theoretical gap by proposing individual characteristics influencing individuals’ decision-making strategies. Specifically, this study conceptualized trustworthiness variance as a key determinant of task-individual-technology fit in utilizing multiple advice sources through CDITs.  3.5.2 Practical Implications From the practical perspective, Study #2 provides guidelines to the developers of DSS. It is important for practitioners to consider two questions. One is how the CDITs can help consumers to better manage conflicting opinions by utilizing better consistency strategy, which will subsequently culminate in better decisions. A second question is which combination of a CDIT and information search stage is the most efficient and effective in utilizing consistency and improving decision-making performance. 
My results reveal that the Pairwise Source CDIT fits online customers having a strong preference for a specific advice  111 source, while the Aggregated Source CDIT fits those who have a similar extent of trustworthiness across multiple advice sources. In addition, Aggregated Product CDIT needs to be provided when online customers build consideration sets, while Pairwise Product CDIT would be more useful after screening out the alternatives.  3.5.3 Limitations and Future Research Despite the implications of this study, the results have several limitations. First, as this study is a conservative test in the laboratory, a public or social context should be considered in future studies. Second, the participants in this study were undergraduate and graduate students who may not precisely represent the overall population of online shoppers. However, because the participants have the potential to become heavy users (Kim et al., 2013) and all the participants have had previous experience in online shopping, the use of students may not be a significant threat to external validity (McKnight et al., 2002). In future research, a complementary eye-tracking study would allow us to see whether visuospatial attention focuses on the CDITs across information search stages; consequently; this could strengthen my findings by addressing the task-individual-technology fit perspective.     112 CHAPTER 4: ONLINE CONSUMERS’ ATTRIBUTION OF  INCONSISTENCY AMONG ADVICE SOURCES (STUDY #3)  4.1 INTRODUCTION As more online stores provide recommendation agents (RAs) to support online consumers’ product selection decision-making, consumers face the challenge of deciding the extent to which they should rely on and evaluate such recommendations. To avoid the uncertainty of utilizing substandard advice, consumers strategically utilize multiple advice sources, such as those from experts and other consumers (Xu et al., 2017). Study #1 (Chapter 2) proposed consistency identification among advice sources as online consumers’ major heuristic across information search stages, and identified consistency strategies. Consistency refers to a consumer’s belief that there is agreement among multiple advice sources for recommendations concerning product quality. Study #2 (Chapter 3) proposed consistency distance identification tools (CDITs) that provide consistency distance as a representation of consistency among advice source in order to support online consumers’ utilization of consistency strategies, and investigated their impacts on decision-making performance on the basis of Task-Technology Fit Theory. Consistency distance refers to the extent of objective disagreement among multiple advice sources in their recommendations representing product quality and/or fit as rating scores.  However, little attention has been paid to the ways in which online consumers perceive and react to the conflicts and/or disagreements between advice sources (i.e., inconsistency), nor the reasons they attribute such inconsistency to. In utilizing multiple advice sources, 70% of online consumers accept the RA’s top recommendations (Xu et al., 2017). Accordingly, Study #3 expects consumers to validate the chosen advice source (i.e., an RA) by comparing it with other advice sources.33 In addition, people are less reluctant to blame an information system rather than other people (Dietvorst et al., 2015; Kim and Hinds, 2006; Leahy, 2002). 
Cognitive Dissonance Theory (Festinger, 1962) postulates that people form and/or change a belief that is least resistant to change in order to alleviate an aversive motivational state (i.e., dissonance) and maintain a state of consonance (Gawronski, 2012; Harmon-Jones and Harmon-Jones, 2007). Therefore, when there is inconsistency among advice sources, it is easy for online consumers to change their belief in an RA; and they will perceive that the RA is deceptive or incompetent, decline adherence to the recommendations, and even move to other online stores. According to the literature (Tan et al., 2016; Xiao and Benbasat, 2011), electronic service failures make online consumers either abandon transactions entirely                                                33 According to the results of Study #2, 42% of the participants accepted an RA’s top recommendations, 20% accepted those of Experts, 25% accepted those of Other Customers, and 13% accepted those of Online Social Networks.  113 or switch to other service providers. Therefore, in utilizing multiple advice sources, if consumers perceive and react to the inconsistency between an RA and other advice sources and consequently perceive the RA as incompetent or deceptive, it would be a key concern for any online store. Furthermore, it would be of paramount importance for that store to find ways to reduce and recover any biased attribution of inconsistency towards an RA and consequently to facilitate online consumers’ positive responses. To the best of my knowledge, this is the first study that examines online consumers’ attribution of inconsistency among advice sources. While a few studies (Xu et al., 2017; Kim and Benbasat, 2013) have investigated the positive aspects of utilizing multiple advice sources, its negative influences on online consumers’ perception of RAs’ competence and/or deceptiveness, and decision-making performance have not been examined.   Thus, Study #3 has two key objectives. The first is to understand online consumers’ attribution of inconsistency among advice sources and examine their reactions to it. Inconsistency refers to online consumers’ perceived disagreement among advice sources of recommendations concerning product quality. To investigate online consumers’ attribution of inconsistency, this study applies Attribution Theory (Campbell and Sedikides, 1999; Myers, 2015; Trope, 1986). Specifically, two types of attribution biases (i.e., Correspondence Bias and the Self-Service Bias) provide theoretical foundations for understanding when and how online consumers attribute inconsistency among advice sources to a specific advice source (e.g., an RA).   The second objective is to implement inconsistency reduction tools (IRTs) (i.e., Explanatory IRT and Interactive IRT) that alleviate consumers’ potentially biased attribution to a certain advice source. The Explanatory IRT clarifies the differences between individual’s preference elicitation and other sources’ preference elicitations, while Interactive IRT guides consumers to carefully consider more details of inconsistency among advice sources by facilitating individual’s trials of an RA to decrease inconsistency.   The results of Study #3 should inform online store providers on two aspects: how and why online consumers attribute inconsistency among advice sources to RAs’ incompetence or deceptiveness; and how and why IRTs are capable of alleviating online consumers’ attribution of inconsistency and recovering their perception of RAs’ competence.    
4.2 THEORETICAL FRAMEWORK AND HYPOTHESIS DEVELOPMENT 4.2.1 Attribution Theory  114 4.2.1.1 Correspondence Bias To understand online consumers’ attribution of inconsistency among advice sources, this study applies Attribution Theory (Heider, 1958; Jones and Davis, 1965; Kelley, 1972; Kelley and Michela, 1980; Trope, 1986). This theory has mainly classified the cause(s) of behaviour into two factors: personal dispositions (e.g., attitudes, motives, personality traits, abilities) and situational inducements (e.g., social norms, group pressure, task difficulty, the interplay between other players). Since these two factors mainly determine individuals’ attribution of behavior, and the situational attribution is subtracted from the dispositional attribution implied by the behavior, Kelley (1972) postulates the discounting principle that personal attribution is inversely related to the contribution of situational attribution.   Previous studies (Gilbert and Malone, 1995; Jones and Harris, 1967; Ross and Nisbett, 1991) in this research stream propose Correspondence Bias; i.e., people tend to attribute one’s behavior to his/her disposition that corresponds to the behavior, even while one’s behavior is actually under the control of the situation in which the behavior occurs. Correspondence Bias is caused mainly by an individual’s misinterpretation or underestimation of situational factors. That is, people who do not put proper weight on, or give attention, to situational factors would have a biased attribution to the disposition of the target person. Particularly, when the situational factors are invisible to individuals who draw inferences about the behavior, Correspondence Bias is stronger and more evident (Gilbert and Malone, 1995; Ross and Nisbett, 1991).  4.2.1.2 Self-Serving Bias When a behavior or event is directly related to or caused by the individuals themselves, their attribution is determined by the extent of positive or negative outcome of the behavior or event. This is Self-Serving Bias, which refers to the tendency of individuals to ascribe success to internal factors, such as their own efforts or capabilities, but ascribe failure to external factors, such as circumstances due to the need for maintaining and enhancing self-esteem (Campbell and Sedikides, 1999; Myers, 2015). For example, when people get positive comments, they tend to attribute these to their capabilities and personalities. On the other hand, when they get negative comments, they take more responsibility for their group's work or other members’ mistakes.   The underlying mechanism of Self-Serving Bias is the motivation to maintain self-esteem by protecting and enhancing individuals’ positive self-concept. When self-concept is threatened by negative feedback, individuals try to minimize and counter the threat. This self-esteem motivation is a key underlying assumption of several theoretical perspectives of the self (Brown and Dutton, 1995; Campbell and Sedikides, 1999; Dunning, 1993; Sedikides, 1993; Sedikides and Strube, 1997). While there are individuals with  115 negative overall self-concept, most of normal adults are assumed to have a positive self-concept and are motivated to maintain and enhance this positive self-concept (Edwards, 1957; Kendall et al., 1989; Schwartz, 1986). Thus, individuals facing negative feedback that would threaten their self-concept lead to Self-Serving Bias attribution in an attempt to escape and avoid such an uncomfortable state of mind.   
Literature investigating Correspondence Bias and Self-Serving Bias has found that similar attributions are made in diverse contexts, such as consumers’ decision-making, interpersonal relationships, and organizations.  4.2.2 Conceptualizing Inconsistency Reduction Tools 4.2.2.1 Explanatory Inconsistency Reduction Tool Previous studies (Gilbert and Malone, 1995; Ross and Nisbett, 1991) have proposed that Correspondence Bias is stronger and more evident when the situational factors are invisible or unnoticeable. That is, people who are unsuccessful in applying proper weight or attention to situational factors would have a biased attribution to the disposition of the target. In the IS discipline, the availability of explanations or justifications of underlying algorithm of decision aids has been investigated to design better decision aids to benefit online consumers and stores (Xiao and Benbasat, 2015).  If online stores do not provide explanations or justification for what could cause the inconsistency among advice sources, the consumers tend to accuse an RA. For example, because an RA’s recommendations rely on an individual’s product attribute preferences and importance, the inconsistency among an RA and other advice sources is caused mainly by the difference of product attribute preferences and/or importance between an individual and other advice sources. If decision aids are capable of inferring other advice sources’ product attributes preferences and/or importance based on their rating scores, online stores can provide explanations or justification for inconsistency among advice sources.34 Therefore, in accordance with Correspondence Bias, an Explanatory IRT that clarifies the differences of such preferences and/or importance between an individual consumer and other advice sources can make the consumer pay attention to the differences, and consequently alleviate biased attribution to RA’s incompetence or deceptiveness.                                                34 To find an advice source’s product attribute preferences and importance, this study identified product attribute preferences and importance that minimize the differences between the given advice source’s rating scores and an RA’s fit scores (see footnote 3) calculated on those product attribute preferences and importance. To find optimal solutions, this study used the	generalized	reduced	gradient	(GRG)	nonlinear	algorithm	that	is	considered	one	of	the	most	robust	nonlinear	algorithms	(Lasdon	et	al.,	1975;	Ortiz	et	al.,	2004). By considering an advice source’s product attribute preferences and/or importance, an online consumer can revise and decrease inconsistency among advice sources.  116 Thus, Explanatory IRT refers to a decision aid that provide the product attribute preferences of, and importance to, an individual, Experts, and other Consumers, which create inconsistency among them (see Figure 4.1).  Figure 4.1 Explanatory Inconsistency Reduction Tool    4.2.2.2 Interactive Inconsistency Reduction Tool As Correspondence Bias posits, the Explanatory IRT can make the situational factors more evident and transparent. However, clarifying the preference elicitations of advice sources would only be a necessary condition, not a sufficient condition. 
To validate a given explanation or justification, online consumers may want to interact with an RA; and subsequently, they would decrease inconsistency among advice sources (Kim and Benbasat, 2015) by revising their product attribute preferences and/or importance by considering those of other advice sources. Thus, this study proposes an Interactive IRT that facilitates more interactions with an RA by allowing individuals to revise and resubmit their preferences of product attributes. The Interactive IRT refers to decision aids that allow individuals to revise and resubmit their preferences of product attributes multiple times by referencing other advice sources’ product attribute preferences (see Figure 4.2).     117 Figure 4.2 Interactive Inconsistency Reduction Tool       118 Individuals can revise and resubmit their product attribute preferences and importance by considering not only their previous preferences and importance, but also those of other advice sources.35 When individuals click the “Submit Validation” button after eliciting their preferences and importance, the Interactive IRT provides the revised inconsistency in the last row of the table shown on the top of Figure 4.2.  4.2.3 Theoretical Framework of Inconsistency Attribution In accordance with the Attribution Theory, Study #3 develops a theoretical framework of inconsistency attribution (see Figure 4.3). My framework postulates that individuals tend to build potentially biased attribution to an RA (Stage 1). To alleviate such biased attribution, Explanatory and Interactive IRTs are capable of the following: providing explanations and justifications for such inconsistency; and facilitating interactions to validate such justifications and to decrease inconsistency among advice source by revising online consumers’ preference elicitations (Stage 2).  Figure 4.3 Theoretical Framework of Inconsistency Attribution    4.2.3.1 Online Consumers’ Attribution of Inconsistency Among Advice Sources When there is inconsistency among advice sources, as Correspondence Bias and Self-Serving Bias posit (Campbell and Sedikides, 1999; Gilbert and Malone ,1995; Jones and Harris, 1967; Myers, 2015; Ross and Nisbett, 1991), individuals tend to attribute inconsistency to an RA rather than themselves by overlooking situational factors such as differences of product attribute preferences and/or importance between                                                35 To distinguish revised product attribute preferences and importance from those previously provided, the previously provided preferences and importance are labeled as ‘previous preference’ and ‘previous importance’ (see Figures 4.2 and 4.3).  119 individuals and other advice sources.36 37 That is, in order to protect self-esteem, individuals tend to ignore and/or underestimate situational factors when negative feedback is directly related to or caused by themselves (Campbell and Sedikides, 1999; Gilbert and Malone, 1995; Jones and Harris, 1967; Myers, 2015; Ross and Nisbett, 1991).  An RA’s recommendations rely on the individual’s preference elicitations, and the inconsistency among an RA and other advice sources is caused mainly by the difference of product attribute preferences and/or importance between an individual and other advice sources. Therefore, an individual would not recognize those preference and importance gaps in order to protect self-esteem. 
Consequently, such an individual will react negatively not only to an RA in terms of its incompetence and deceptiveness, but also to decision-making performance (i.e., decision-making quality) in selecting their product. That is, even when an RA is competent and honest, online consumers’ attribution to the RA can be easily biased. Thus, it is hypothesized that,  H1: People attribute inconsistency among advice sources to an RA rather than themselves.  H2: Inconsistency among advice sources decreases perceived decision quality.  H3: Inconsistency among advice sources decreases perceived competence of an RA.  H4: Inconsistency among advice sources increases perceived deceptiveness of an RA.  4.2.3.2 Utilizing IRTs to Alleviate Biased Inconsistency Attribution To reduce online consumer’s attribution to an RA, this study proposes IRTs that identify why advice sources are inconsistent, and subsequently decreases inconsistency among advice sources by revising individuals’ product attribute preferences and/or importance.                                                  36 For example, people would attribute inconsistency among advice sources to an RA’s poor or deceptive algorithm or formula. 37 Cognitive Dissonance Theory (Festinger, 1962) postulates that people form and/or change a belief that is least resistant to change in order to alleviate an aversive motivational state (i.e., dissonance) and maintain a state of consonance (Gawronski, 2012; Harmon-Jones and Harmon-Jones, 2007). In addition, people are less reluctant to put the blame on an information system rather than other people (Dietvorst et al., 2015; Kim and Hinds, 2006; Leahy, 2002). Therefore, when there is inconsistency among advice sources, individuals tend to attribute inconsistency to an RA rather than other advice sources.  	 120 As Correspondence Bias posits, a potentially biased attribution to an RA can be alleviated by clarifying the situational factors that actually cause the inconsistency among advice sources (Gilbert and Malone, 1995; Jones and Harris, 1967; Ross and Nisbett, 1991). Therefore, in order to alleviate potentially biased attribution and negative reactions to an RA, the Explanatory IRT provides other sources’ preference elicitations that clarify the differences of preferences elicitations between an individual (RA) and other advice sources. For example, when considering a laptop, experts may prefer a medium resolution screen for cost-benefit considerations while an individual wants a very high-resolution screen for graphic-intensive tasks; or an individual may attach more importance to battery life for high mobility while the other customers may attach less importance due to immobility. Thus, the Explanatory IRT will make online consumers pay attention to the differences and draw inferences about why there are inconsistencies between advice sources and what they are. This subsequently will reduce consumers’ attribution of inconsistency and negative reactions to an RA. That is, the Explanatory IRT would alleviate the Correspondence Bias that causes the potentially biased attribution and negative reactions to an RA.  While the Explanatory IRT can clarify the differences of preferences elicitations between an individual (RA) and other advice sources, online consumers may want to validate how those differences create the inconsistency among advice sources and how to decrease this inconsistency by revising their preference elicitations. 
Particularly as they already have a negative perception of (i.e., attribution of inconsistency to an RA) and reactions to an RA, such validation would be a major functionality to change negative attribution and reactions to the RA. Moreover, to protect self-esteem, online consumers would not ascribe inconsistency among advice sources to themselves (i.e., their preference elicitations) without those validations (Campbell and Sedikides, 1999; Myers, 2015). Therefore, by showing that the gap will be smaller by revising individuals’ preferences, the Interactive IRT can make it clear that the inconsistency is caused by individual users’ preferences, and not by the incompetence or deceptiveness of an RA. This consequently would alleviate Correspondence Bias and Self-Serving Bias. That is, by utilizing the Interactive IRT, individuals would not only validate a given explanation and/or justification, but also decrease inconsistency among advice sources. Consequently, online consumers’ attribution of inconsistency and their negative reactions to an RA would be reduced.  Overall, the utilization of IRTs would decrease the impact of inconsistency among advice sources on consumers’ attribution to an RA, perceived decision quality, perceived competence of an RA, and perceived deceptiveness of an RA. Indeed, the impact of the Interactive IRT would be stronger than the Explanatory IRT to alleviate attribution to an RA and recover individuals’ negative reactions. Thus, it is hypothesized that,  121  H5: Utilizing IRTs decreases perceived attribution to an RA.  H6: The impact of the Interactive IRT on perceived attribution to an RA is stronger than the Explanatory IRT.  H7: Utilizing IRTs increases perceived decision quality.  H8: The impact of the Interactive IRT on perceived decision quality is stronger than the Explanatory IRT.  H9: Utilizing IRTs increases perceived competence of an RA.  H10: The impact of the Interactive IRT on perceived competence of an RA is stronger than the Explanatory IRT.  H11: Utilizing IRTs decrease perceived deceptiveness of an RA.  H12: The impact of the Interactive IRT on perceived deceptiveness of an RA is stronger than the Explanatory IRT.   4.3 METHODOLOGY 4.3.1 Developing an Experimental Online Store 4.3.1.1 Recommendation Agents and Multiple Advice Sources An experimental online store was developed as the platform for this laboratory-based Study #3. Because an RA in my research context uses content-based filtering, this study used laptops as a search good. To enhance mundane realism (i.e., shaping the similarity of experimental events to real experience, Singleton and Straits, 1999), this study selected laptops sold on Amazon.com. In addition, this study used two additional advice sources (i.e., customers and experts).  A laptop dataset including product attributes and rating scores from other customers and experts was constructed. In building a laptop dataset, Study #3 adopted rating scores from Amazon.com customers as well as ratings from experts at Cnet.com. To collect valid rating scores, this study screened laptops having  122 at least ten customers’ ratings from other customers, and at least one expert’s rating. After screening, the laptop dataset included 30 alternatives. Through rating scores in the laptop dataset, experts’ and other customers’ recommendations were created.   An RA is an independent automated recommendation tool that ranks products for users based on their preferences. 
When participants input their preference for each product attribute as well as its importance, the RA presents a list of products that matches their needs on those attributes. The RA is developed on the basis of the weighted additive strategy, which delivers better decision quality than other strategies (Bettman et al., 1998; Payne et al., 1988; Xu et al., 2017).

Other customers are general users of Amazon.com, which encourages customers to share their opinions, both favorable and unfavorable. Customers shared information on their laptops through ratings on a five-star scale. Customers' ratings are meant to give other customers genuine product feedback and, from other customers' perspectives, are helpful in providing information about the products. Experts are professional reviewers on Cnet.com, an independent technology organization that compiles data for technology products. Cnet experts provide the information, tools, and advice that help people decide what to buy and how to get the most out of the technology.

4.3.1.2 Implementing Inconsistency Among Advice Sources
To implement inconsistency as a more objective and continuous variable, this study applies the CDITs implemented in Study #2, which use Euclidean space and distance (Deza and Deza, 2009) to map advice sources' rating scores that represent the overall evaluation of product quality (see Figure 4.4).

Figure 4.4 Implementing Inconsistency into Consistency Distance

In Euclidean space, points are specified with collections of numbers; there is essentially only one Euclidean space of each dimension; and Euclidean space specifies each point in a plane uniquely by a pair of numerical coordinates, which are the signed distances measured in the same unit of length (Deza and Deza, 2009). Therefore, Euclidean distance identifies the straight-line distance between points in a Euclidean space.

Overall, Study #3 maps rating scores from each source into Euclidean space, calculates the straight-line distance between them, and measures inconsistency as an objective and continuous variable. Because one group of consumers utilized the aggregated consistency distance among all advice sources, while others utilized the pairwise consistency distance between an RA and another advice source, as shown in the results of Study #2, Study #3 provides two types of Product CDITs, namely the Aggregated Product CDIT and the Pairwise Product CDIT (see Table 4.1).

Table 4.1 Inconsistency Formulae
Consistency Distance: Description and Formulae
• A laptop (P_i) has fit/rating scores from an RA, Experts, and Other Customers. Let RA_i represent the RA's fit score for P_i, EX_i represent the Experts' rating of P_i, and GC_i represent the Other Customers' rating of P_i.
• With this information, the formulae are:
Aggregated Product CDIT of P_i = 100 × [ 1 − √( ½ × ( ((RA_i − EX_i)/5)² + ((RA_i − GC_i)/5)² ) ) ]
Pairwise Product CDIT of P_i between an RA and Experts = 100 × ( 1 − |RA_i − EX_i| / 5 )
Pairwise Product CDIT of P_i between an RA and Other Customers = 100 × ( 1 − |RA_i − GC_i| / 5 )
(A computational sketch of these formulae follows the design overview below.)

4.3.2 Experimental Design
To investigate how online consumers attribute inconsistency among advice sources and how the Explanatory IRT and Interactive IRT alleviate online consumers' biased attribution to an RA, Study #3 uses a multi-round, within/between-subject design, comprising three rounds of within-subject design and two levels (i.e., Explanatory IRT and Interactive IRT) of between-subject design (see Figure 4.5).
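The following minimal sketch illustrates the Table 4.1 formulae together with the revise-and-resubmit step behind the Interactive IRT. The rating scores and importance weights are hypothetical and serve only to show how the consistency distances respond when a consumer revises his or her preference elicitations.

import math

def ra_fit_score(attr_match, importance):
    # weighted additive strategy: importance-weighted average of attribute match
    # scores in [0, 1], rescaled to the 5-point rating range
    total = sum(importance)
    return 5.0 * sum(w * m for w, m in zip(importance, attr_match)) / total

def aggregated_product_cdit(ra, ex, gc, scale=5.0):
    # Aggregated Product CDIT of P_i (Table 4.1)
    gap_ex = (ra - ex) / scale
    gap_gc = (ra - gc) / scale
    return 100.0 * (1.0 - math.sqrt(0.5 * (gap_ex ** 2 + gap_gc ** 2)))

def pairwise_product_cdit(ra, other, scale=5.0):
    # Pairwise Product CDIT of P_i between the RA and one other advice source
    return 100.0 * (1.0 - abs(ra - other) / scale)

# Hypothetical scores for one laptop: expert rating, customer rating, and the
# consumer's attribute match and importance elicitations (not the study's data).
expert_rating, customer_rating = 3.5, 3.8
attr_match = [0.9, 0.5, 0.7]              # screen, battery, weight
importance = [0.6, 0.2, 0.2]

ra_score = ra_fit_score(attr_match, importance)
before = aggregated_product_cdit(ra_score, expert_rating, customer_rating)

# Interactive IRT step: the consumer revises and resubmits importance weights
# after seeing the other sources' inferred preferences, and the tool recomputes
# the consistency distance ("Submit Validation" in Figure 4.2).
revised_importance = [0.4, 0.4, 0.2]
after = aggregated_product_cdit(ra_fit_score(attr_match, revised_importance),
                                expert_rating, customer_rating)

Under these hypothetical numbers, the revised elicitation brings the RA's fit score closer to the other sources' ratings, so the consistency distance rises; this is the feedback the Interactive IRT reports in the last row of its table.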
Figure 4.5 Multi-Round Within-Between Subjects Design    To trace how online consumers perceive and react to inconsistency among advice sources after adhering to an RA’s recommendations, this study applies a within-subject design across three rounds. In Round 1, an RA provided the top ten recommendations based on the participant’s preferences, and participant was asked to select the best laptop from these recommendations. In Round 2, the experimental website presented Product CDITs between an RA and other advice sources, which represented how much other advice source  125 agreed/disagreed with the RA’s opinion on the chosen laptop (see Figure 4.4).38  In Round 3, participants were randomly assigned to two conditions that manipulated the types of IRTs, and were then asked to review the chosen laptop by utilizing the IRTs.  4.3.3 Participants and Experimental Procedure To enhance experimental realism and prevent the potential compounding effects of task involvement (Petty et al., 1983), this study recruited 80 voluntary participants from a large public university in North America who were interested in purchasing a laptop within a few months.39 The participants were randomly assigned to each condition in Round 3 (see Figure 4.5)40. A statistical power analysis was performed for sample size estimation using G*Power 3.1. The effect size in this study was considered to be medium to large using Cohen's (1988) criteria. With an alpha = .05 and power = 0.80, the projected sample size required for a medium-to-large effect size is approximately N = 14 or 30 for three rounds within group comparison. Thus, my proposed sample size of 80 is adequate. After the experiment with 80 participants, there was no further data collection. To motivate participants to fully engage in the tasks, every participant received $20 as an honorarium. Participants’ demographics are summarized in Table 4.2.   Table 4.2 Demographics of Participants   Mean Standard Deviation Age 21.92 6.85 Gender Male 27 N/A Female41 53 N/A Have purchased online? Yes 76 N/A No 4 N/A Purchases online during last year 13.96 18.40 Money spent online during last year $852.44 $1,353.60 Note: Sample size = 80. No missing data.                                                38 In pilots, this study did a survey to find a cut-off value representing inconsistency among advice sources. The survey item is the following: If two individuals disagree on the quality of a laptop, how much do you expect their opinions to differ/vary? The average cut-off value of inconsistency in the pilots is 72.34. In the main experiment, since the average of the Aggregated Product CDIT (m=68.76) presented in Round 2 is below of the cut-off value (m=72.34), this study concluded that the manipulation of inconsistency among advice sources was successful.  39 To validate their interest in purchasing a laptop, this study measured the participants’ perceived product knowledge and task involvement. Perceived product knowledge is statistically different from four points out of a seven-point Likert scale (m=4.60, p<.001); perceived task involvement is statistically different from four points out of a seven-point Likert scale (m=5.76, p<.001).  40 There is no statistical difference in product knowledge and task involvement across conditions (p > .1).  41 According to eMarketer, female Internet users are five million more than male Internet users in the United Stage (https://www.emarketer.com/Article/Gender-Online-Shopping/1004178). 
In particular, for certain shopping sites such as JCPenny and Federated Department Stores, the percentage of female visitors is much higher than the general Internet population. Therefore, 66.25% of female participants would not be a significant threat to external validity.  126  The experimental procedures are as follows. First, pre-questionnaires for perceived task involvement, product knowledge, and demographics were administered (see Table 4.3). Second, participants were instructed how to use the interfaces of the online store (e.g., eliciting personal preferences and subjective importance on product attributes). After participants confirmed their understanding of the online store interface, the main experimental task of Round 1 was administered. Participants were asked to elicit their preferences and importance on product attributes and to select the best laptop from an RA’s recommendations. They could freely navigate webpages displaying a list of recommendations and details of each alternative (see Figure 4.6). After completing the main task of Round 1, participants completed the post-questionnaires measuring perceived decision quality, perceived competence of an RA, and perceived deceptiveness of an RA (see Table 4.3).   Figure 4.6 Online Shopping Store Interface in Round 1   In Round 2, the experimental website presented inconsistency among an RA and other advice sources, which represented how much other advice sources disagreed with the RA’s opinion of the chosen laptop. Participants were then asked to review other sources’ opinions on the chosen laptop (see Figure 4.7). After completing the main task of Round 2, participants completed the post-questionnaires measuring perceived causal attribution of inconsistency, perceived competence of an RA, and perceived deceptiveness of an RA (see Table 4.3).     127 Figure 4.7 Presenting Inconsistency in Round 2   In Round 3, participants were instructed how to use the IRT assigned to their condition. After participants confirmed their understanding of the interface, participants were asked to review the chosen laptop by utilizing the assigned IRT (see Figures 4.8 and 4.9).  Figure 4.8 Explanatory Inconsistency Reduction Tool in Round 3      128 Figure 4.9 Interactive Inconsistency Reduction Tool in Round 3   After completing Round 3, participants completed the post-questionnaires measuring perceived causal attribution of inconsistency, perceived competence of an RA, perceived deceptiveness of an RA, and perceived decision quality (see Table 4.3).   4.3.4 Measurement Items All the measurement items used in the Study 3 are listed in Table 4.3, along with their sources.42 All measurement items have been validated by prior research work.   Table 4.3 Measurement Item Construct Measurement Item43 Task Involvement (McQuarrie and Munson, 1992) Choosing a laptop is (TI1) Irrelevant / Relevant to me. (TI2) Of no concern / Of concern to me. (TI3) Didn’t matter / Mattered to me. (TI4) Meant nothing to me / Meant a lot to me. (TI5) Unimportant / Important.                                                42 All measures are disclosed in Table 4.3. 43 Seven Likert-scale scored items used to assess the respondent’s agreement with items.  129 Product Knowledge (Eisingerich and Bell, 2008; Sharma and Patterson, 2000) (PK1) I possess good knowledge on laptops  (PK2) I can understand almost all the specifications (e.g., memory, hard drive) of laptops.  (PK3) I am familiar with basic laptop specifications (e.g., memory, CPU). 
Perceived Decision Quality (Tan et al., 2010) (DQ1) I believe I have made the best choice of the laptop at this website. (DQ2) I would make the same choice if I had to do it again. (DQ3) I believe I have selected the best laptop. Perceived Causal Attribution of Inconsistency (Kumagai et al., 2004) What factors might cause the disagreement on the chosen product among advice sources?  (PCAI1) Automated Recommendation Agent (PCAI2) Experts (PCAI3) Other Consumers  (PCAI4) Your Preferences Perceived Deceptiveness of an RA (Grazioli and Jarvenpaa, 2000) Overall, the Recommendation Agent is  (PDe1) Genuine / Misleading (PDe2) Truthful / Deceptive (PDe3) Fair / Biased Perceived Competence of an RA (Wang and Benbasat, 2012) (PCR1) This recommendation agent is like a real expert in assessing laptops. (PCR2) This recommendation agent has the expertise to understand my needs and preferences about laptops. (PCR3) This recommendation agent has the ability to understand my needs and preferences about laptops. (PCR4) This recommendation agent has good knowledge about laptops. (PCR5) This recommendation agent considers my needs and all important attributes of laptops.   To validate reliability, convergent validity, and discriminant validity of the measurement items, this study applied confirmatory factor analysis using SmartPLS.44 Table 4.4 shows the descriptive statistics and composite reliability of the constructs. All composite reliabilities are greater than 0.7, the recommended cutoff (Barclay et al., 1995; Fornell and Bookstein, 1982). Thus, the reliability of the measurements is acceptable.   Table 4.4 Descriptive Statistics and Composite Reliability of Constructs  Construct Mean Standard Deviation Composite Reliability Task Involvement (TI) 5.76 1.39 0.95 Product Knowledge (PK) 4.60 1.20 0.93                                                44 As decision quality, deceptiveness of an RA, competence of an RA, and causal attribution of inconsistency are repeated measures, participants’ responses in Round 1 were used for confirmatory factor analysis.  130 Perceived Decision Quality (DQ) 5.04 1.12 0.88 Perceived Deceptiveness of an RA (PDe) 2.88 1.25 0.90 Perceived Competence of an RA (PCR) 4.83 1.11 0.85   Convergent validity is assessed by individual item reliability, the composite reliability of the construct, and average variance extracted (AVE) (Barclay et al., 1995; Hu et al., 2004). Individual item reliability was assessed by examining the loadings of the measurement items on their corresponding construct; all the item loadings should be significant and exceed 0.7. All the composite reliability values exceeded 0.7, the recommended criterion (Barclay et al., 1995; Fornell and Bookstein, 1982), and AVE values exceeded 0.5, the generally accepted criterion (Hu et al., 2004) (see Table 4.5). Therefore, these results show good convergent validity for the measurement items.  Table 4.5 Composite Reliability, AVE, and Correlation Among Constructs   CR AVE TI PK DQ PDe PCR TI .945 .775 .880     PK .932 .819 .225 .905    DQ .879 .708 .155 .157 .841   PDe .904 .758 -.135 .003 -.178 .971  PCT .851 .536 .005 .055 .602 -.067 .732 Note: Composite Reliability = CR; Average Variance Extracted = ACE; Task Involvement = TI; Product Knowledge = PK; Decision Quality = DQ; Deceptiveness of an RA= PDe; Competence of an RA = PCT; Diagonal values are the square root of AVE  Discriminant validity is assessed by comparing the square roots of the AVE and the correlations among constructs. 
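As a complement to the reported statistics, the following minimal sketch shows how composite reliability and AVE can be computed from standardized item loadings, and how the square root of AVE is then compared against inter-construct correlations, as described next. The loadings and correlations below are hypothetical values, not the estimates obtained with SmartPLS in this study.

import math

def composite_reliability(loadings):
    # (sum of loadings)^2 / [(sum of loadings)^2 + sum of error variances],
    # with error variance taken as 1 - loading^2 for standardized loadings
    s = sum(loadings)
    error = sum(1.0 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + error)

def average_variance_extracted(loadings):
    # mean squared standardized loading
    return sum(l ** 2 for l in loadings) / len(loadings)

# Hypothetical standardized loadings for a three-item construct
loadings = [0.82, 0.88, 0.79]
cr = composite_reliability(loadings)           # should exceed the 0.7 cutoff
ave = average_variance_extracted(loadings)     # should exceed the 0.5 cutoff

# Fornell-Larcker check: the square root of AVE must exceed the construct's
# correlations with every other construct (hypothetical correlations here).
correlations_with_others = [0.23, 0.16, 0.31]
discriminant_ok = math.sqrt(ave) > max(abs(r) for r in correlations_with_others)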
To show good discriminant validity, all the square roots of the AVE should be greater than the off-diagonal elements in the corresponding rows and columns. This result indicates that the construct shares more variance with its measures than with others (Fornell and Bookstein, 1982). The diagonal values of Table 4.5, the square roots of AVE, exceed the correlations among constructs, demonstrating good discriminant validity for all of the constructs. Thus, all conditions for convergent and discriminant validity are satisfied.  4.4 DATA ANALYSIS AND FINDINGS 4.4.1 Impact of Inconsistency Among Advice Sources (Round 1 and Round 2) 4.4.1.1 Online Consumers’ Perceived Causal Attribution of Inconsistency Among Advice Sources  131 To investigate how online consumers attribute inconsistency among advice sources, this study measures the perceived causal attribution of inconsistency to three advice sources (RA, Experts and Other Customers) and the User. While an RA represents an individual’s preference elicitations, according to Correspondence Bias and Self-Serving Bias, an individual would overlook such a situational factor in order to protect self-esteem. Therefore, this study distinguishes between perceived causal attribution of inconsistency to an RA and the User himself/herself. Overall, by comparing participants’ perceived causal attributions of inconsistency presented in Round 2, this study investigates how online consumers attribute inconsistency among advice sources.45   To investigate the impact of inconsistency on online consumers’ attribution to advice sources, one-way analysis of variance (ANOVA) was conducted (a summary of the results is presented in Table 4.6). The main effect results reveal that perceived causal attribution is significantly different among advice sources, F(3, 316)=33.002, p=.001. A post-hoc test was conducted to determine which advice sources were significantly different in perceived causal attribution (see Table 4.7). The results reveal that perceived causal attribution to an RA is significantly higher than Experts and User (p<.05), while perceived causal attribution to User is significantly lower than all advice sources (p<001). In addition, perceived causal attribution is not significantly different between Experts and Other Customers (p>.1). Overall, people attribute inconsistency among advice sources to an RA rather than themselves (see Figure 4.10). Thus, H1 is supported.  Table 4.6 ANOVA Summary Table for Perceived Causal Attribution of Inconsistency Dependent Variable: Perceived Causal Attribution of Inconsistency   Sum of Squares df Mean Square F Sig. Between Groups       Advice Sources 151.517 3 50.506 33.002 .001 Within Groups 973.319 316 1.530   Total 1124.836 319                                                       45 To validate the level of inconsistency presented to participants, this study compares the average of presented inconsistency and the cur-off value found in the pilots. Since the average of inconsistency (m=68.76) presented in Round 2 is below of the cut-off value (m=72.34), this study concludes that the manipulation of inconsistency among advice sources was successful.  132 Table 4.7 Post-Hoc Analysis for Perceived Causal Attribution of Inconsistency Dependent Variable: Perceived Causal Attribution of Inconsistency 95% Confidence Interval (I) Con (J) Con Mean Difference (I-J) Std. Error Sig. 
Lower Bound Upper Bound
RA (m=5.01, SD=1.09): Experts .312, .138, .024, .040, .584; Other Customers .193, .138, .162, -.077, .465; User 1.262, .138, .000, .990, 1.534
Experts (m=4.70, SD=1.30): RA -.312, .138, .024, -.584, -.040; Other Customers -.118, .138, .391, -.390, .152; User .950, .138, .000, .678, 1.221
Other Customers (m=4.82, SD=1.31): RA -.193, .138, .162, -.465, .077; Experts .118, .138, .391, -.152, .390; User 1.068, .138, .000, .797, 1.340
User (m=3.75, SD=1.24): RA -1.262, .138, .000, -1.534, -.990; Experts -.950, .138, .000, -1.221, -.678; Other Customers -1.068, .138, .000, -1.340, -.797

Figure 4.10 Online Consumers' Perceived Causal Attribution of Inconsistency

4.4.1.2 Online Consumers' Reactions to Inconsistency Among Advice Sources
To examine online consumers' reactions to inconsistency among advice sources, this study compares perceived decision quality, perceived competence of an RA, and perceived deceptiveness of an RA before and after presenting inconsistency among advice sources (see Figure 4.7) (i.e., Round 1 and Round 2). A one-way repeated measures multivariate analysis of variance (MANOVA) was conducted to determine the effect of inconsistency on the three dependent variables of decision quality, competence of an RA, and deceptiveness of an RA (see Table 4.8). The MANOVA results indicate that presenting inconsistency (Wilk's Lambda=.483, F(3, 77)=27.452, p=.001) significantly affects the combined dependent variable of decision quality, competence of an RA, and deceptiveness of an RA.

Table 4.8 MANOVA Summary Table
Multivariate Test (columns: Effect, Value, F, Hypothesis df, Error df, Sig.)
Between Subjects: Intercept .020, 1286.699, 3, 77, .001
Within Subjects: Presenting Inconsistency .483, 27.452, 3, 77, .001

Repeated measures univariate ANOVA was conducted as a follow-up test (see Table 4.9).46 The ANOVA results indicate that decision quality differs significantly for presenting inconsistency (F(1, 79)=64.722, p=.001). That is, perceived decision quality before presenting inconsistency (m=5.14) decreases significantly after presenting inconsistency (m=4.45). The competence of an RA differs significantly for presenting inconsistency (F(1, 79)=22.928, p=.001). That is, the perceived competence of an RA before presenting inconsistency (m=4.96) decreases significantly after presenting inconsistency (m=4.61). Deceptiveness of an RA differs significantly for presenting inconsistency (F(1, 79)=12.273, p=.001). That is, perceived deceptiveness of an RA before presenting inconsistency (m=2.85) increases significantly after presenting inconsistency (m=3.15). Overall, inconsistency among advice sources decreases perceived decision quality and perceived competence of an RA but increases perceived deceptiveness of an RA (see Figure 4.11). Thus, H2, H3, and H4 are supported.

Table 4.9 Univariate ANOVA Summary Table
Test of Within-Subject Contrasts (columns: Type III Sum of Squares, df, Mean Square, F, Sig.)
Presenting Inconsistency: Decision Quality 19.189, 1, 19.189, 64.722, .001; Competence of an RA 4.692, 1, 4.692, 22.928, .001; Deceptiveness of an RA 3.502, 1, 3.502, 12.273, .001
Error: Decision Quality 23.423, 79, .296; Competence of an RA 16.168, 79, .205; Deceptiveness of an RA 22.540, 79, .285

46 Prior to examining the univariate repeated measures ANOVA results, the alpha level was adjusted to α = .020 due to the risk of Type I error (Mertler and Reinhart, 2016).
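For readers who want to reproduce this kind of within-subject follow-up, a minimal sketch of a repeated-measures ANOVA on one dependent variable is shown below, using statsmodels' AnovaRM on a long-format table. The responses are hypothetical and do not come from the experiment; with three dependent variables, the per-test alpha is tightened (to .020 in this study, per footnote 46) before interpreting each univariate result.

import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical long-format data: one row per participant per round
# (not the study's responses).
df = pd.DataFrame({
    "participant": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5],
    "round": ["R1", "R2"] * 5,
    "decision_quality": [5.3, 4.6, 5.0, 4.3, 5.4, 4.8, 4.9, 4.2, 5.2, 4.5],
})

# Within-subject (repeated measures) ANOVA for a single dependent variable;
# the same call would be repeated for competence and deceptiveness of the RA.
result = AnovaRM(data=df, depvar="decision_quality",
                 subject="participant", within=["round"]).fit()
print(result)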
Figure 4.11 Online Consumers' Reactions to Inconsistency Among Advice Sources

4.4.2 Impact of the Explanatory and Interactive IRTs (Round 2 and Round 3)
4.4.2.1 Impact of IRTs on Perceived Causal Attribution of Inconsistency Among Advice Sources
To investigate how IRTs change online consumers' attributions of inconsistency to advice sources, this study compares participants' perceived causal attributions of inconsistency before (i.e., Round 2) and after utilizing IRTs (i.e., Round 3).

To investigate the impact of IRTs on online consumers' attributions to advice sources, repeated measures MANOVA was conducted (see Table 4.10). The MANOVA results indicate that IRTs (Wilk's Lambda=.432, F(4, 75)=24.626, p=.001) significantly affect the combined dependent variable of perceived causal attributions to RA, Experts, Other Customers, and User; the Explanatory and Interactive IRTs (Wilk's Lambda=.878, F(4, 75)=2.616, p=.042) have a significantly different impact on the combined dependent variable of perceived causal attributions.

Table 4.10 MANOVA Summary Table
Multivariate Test (columns: Effect, Value, F, Hypothesis df, Error df, Sig.)
Between Subjects: Intercept .012, 1566.092, 4, 75, .001; Type of IRTs .987, .251, 4, 75, .908
Within Subjects: Utilizing IRTs .432, 24.626, 4, 75, .001; Utilizing IRTs * Type of IRTs .878, 2.616, 4, 75, .042

Repeated measures univariate ANOVA was conducted as a follow-up test (see Table 4.11).47 The ANOVA results indicate that perceived causal attribution to an RA differs significantly for utilizing IRTs (F(1, 78)=24.449, p=.001). That is, perceived causal attribution of inconsistency to an RA before utilizing IRTs (m=4.98) decreases significantly after utilizing IRTs (m=4.25). In particular, the decrease in perceived causal attribution to an RA from utilizing the Interactive IRT (m=1.08) is greater than that from utilizing the Explanatory IRT (m=0.38) (p=.019). Perceived causal attribution to the User also differs significantly for utilizing IRTs (F(1, 78)=57.332, p=.001). That is, perceived causal attribution to the User before utilizing IRTs (m=3.84) increases significantly after utilizing IRTs (m=4.96). However, there is no difference in perceived causal attributions to Experts and Other Customers before and after utilizing IRTs, and there is no difference between the Interactive IRT and the Explanatory IRT in affecting perceived causal attribution to Experts, Other Customers, and the User. Overall, utilizing IRTs decreases perceived causal attribution to an RA, and the Interactive IRT is more effective in alleviating the attribution of inconsistency to an RA (see Figures 4.12 and 4.13). Thus, H5 and H6 are supported.

Table 4.11 Univariate ANOVA Summary Table
Test of Within-Subject Contrasts (columns: Type III Sum of Squares, df, Mean Square, F, Sig.)
Utilizing IRTs: Attribution to an RA 21.025, 1, 21.025, 24.449, .001; Attribution to Experts 4.900, 1, 4.900, 4.627, .035; Attribution to Other Customers .006, 1, .006, .004, .948; Attribution to the User 50.625, 1, 50.625, 57.332, .001
Utilizing IRTs * Type of IRTs: Attribution to an RA 4.900, 1, 4.900, 5.698, .019; Attribution to Experts 2.500, 1, 2.500, 2.361, .128; Attribution to Other Customers 2.756, 1, 2.756, 1.858, .177; Attribution to the User 2.500, 1, 2.500, 2.831, .096
Error: Attribution to an RA 67.075, 78, .860; Attribution to Experts 82.600, 78, 1.059; Attribution to Other Customers 115.737, 78, 1.484; Attribution to the User 68.875, 78, .883

47 Prior to examining the univariate repeated measures ANOVA results, the alpha level was adjusted to α = .020 due to the risk of Type I error (Mertler and Reinhart, 2016).

Figure 4.12 Utilizing IRTs to Alleviate Perceived Causal Attribution to an RA

Figure 4.13 Changes of Perceived Causal Attribution After Utilizing IRTs

4.4.2.2 Impact of IRTs on Online Consumers' Reactions
To investigate how IRTs change online consumers' reactions to inconsistency among advice sources, this study compares participants' perceived decision quality, competence of an RA, and deceptiveness of an RA before (i.e., Round 2) and after utilizing IRTs (i.e., Round 3).

To investigate the impact of IRTs on online consumers' reactions in terms of perceived decision quality, competence of an RA, and deceptiveness of an RA, repeated measures MANOVA was conducted (see Table 4.12). The MANOVA results indicate that IRTs (Wilk's Lambda=.657, F(3, 76)=13.254, p=.001) significantly affect the combined dependent variable of perceived decision quality, competence of an RA, and deceptiveness of an RA; the Explanatory and Interactive IRTs (Wilk's Lambda=.819, F(3, 76)=5.609, p=.002) have a significantly different impact on this combined dependent variable.

Table 4.12 MANOVA Summary Table
Multivariate Test (columns: Effect, Value, F, Hypothesis df, Error df, Sig.)
Between Subjects: Intercept .014, 1751.476, 3, 76, .001; Type of IRTs .887, 3.226, 3, 76, .027
Within Subjects: Utilizing IRTs .657, 13.254, 3, 76, .001; Utilizing IRTs * Type of IRTs .819, 5.609, 3, 76, .002

Repeated measures univariate ANOVA was conducted as a follow-up test (see Table 4.13).48 The ANOVA results indicate that perceived decision quality differs significantly for utilizing IRTs (F(1, 78)=16.040, p=.001). That is, perceived decision quality before utilizing IRTs (m=4.45) increases significantly after utilizing IRTs (m=4.90). Perceived competence of an RA differs significantly for utilizing IRTs (F(1, 78)=19.043, p=.001). That is, the perceived competence of an RA before utilizing IRTs (m=4.61) increases significantly after utilizing IRTs (m=4.86). In particular, the increase in perceived competence of an RA from utilizing the Interactive IRT (m=0.40) is greater than that from utilizing the Explanatory IRT (m=0.10) (p=.011). Perceived deceptiveness of an RA differs significantly for utilizing IRTs (F(1, 78)=10.832, p=.001).
That is, perceived deceptiveness of an RA before utilizing IRTs (m=3.15) decreases significantly after utilizing IRTs (m=2.90). In particular, the decrease in perceived deceptiveness of an RA from utilizing the Interactive IRT (m=0.50) is greater than that from utilizing the Explanatory IRT (m=0.01) (p=.002). There is no difference between the Interactive IRT and the Explanatory IRT in affecting perceived decision quality. Overall, utilizing IRTs increases the perceived decision quality and competence of an RA, but decreases the perceived deceptiveness of an RA; and the Interactive IRT is more effective in alleviating online consumers' negative reactions to inconsistency among advice sources (see Figures 4.14 and 4.15). Thus, H7, H9, H10, H11, and H12 are supported. However, H8 is not supported.

Table 4.13 Univariate ANOVA Summary Table
Test of Within-Subject Contrasts (columns: Type III Sum of Squares, df, Mean Square, F, Sig.)
Utilizing IRTs: Decision Quality 7.957, 1, 7.957, 16.040, .001; Competence of an RA 2.500, 1, 2.500, 19.043, .001; Deceptiveness of an RA 2.503, 1, 2.503, 10.832, .001
Utilizing IRTs * Type of IRTs: Decision Quality 1.170, 1, 1.170, 2.358, .129; Competence of an RA .900, 1, .900, 6.855, .011; Deceptiveness of an RA 2.335, 1, 2.335, 10.108, .002
Error: Decision Quality 38.691, 78, .496; Competence of an RA 10.240, 78, .131; Deceptiveness of an RA 18.021, 78, .231

48 Prior to examining the univariate repeated measures ANOVA results, the alpha level was adjusted to α = .020 due to the risk of Type I error (Mertler and Reinhart, 2016).

Figure 4.14 Utilizing IRTs to Alleviate Inconsistency Reactions

Figure 4.15 Changes of Inconsistency Reactions After Utilizing IRTs

4.4.3 Overall Findings and Theoretical Insight
Through a multi-round, within-between subjects design, this study reveals how online consumers attribute inconsistency among advice sources and how the Explanatory IRT and Interactive IRT can alleviate potentially biased inconsistency attribution to an RA. Most of the hypotheses, except H8, are supported (see Table 4.14).

Table 4.14 A Summary of Hypothesis Testing (Hypotheses and Result)
H1 People attribute inconsistency among advice sources to an RA rather than themselves. Supported
H2 Inconsistency among advice sources decreases perceived decision quality. Supported
H3 Inconsistency among advice sources decreases perceived competence of an RA. Supported
H4 Inconsistency among advice sources increases perceived deceptiveness of an RA. Supported
H5 Utilizing IRTs decreases perceived attribution to an RA. Supported
H6 The impact of the Interactive IRT on perceived attribution to an RA is stronger than the Explanatory IRT. Supported
H7 Utilizing IRTs increases perceived decision quality. Supported
H8 The impact of the Interactive IRT on perceived decision quality is stronger than the Explanatory IRT. Not Supported
H9 Utilizing IRTs increases perceived competence of an RA. Supported
H10 The impact of the Interactive IRT on perceived competence of an RA is stronger than the Explanatory IRT. Supported
H11 Utilizing IRTs decreases perceived deceptiveness of an RA. Supported
H12 The impact of the Interactive IRT on perceived deceptiveness of an RA is stronger than the Explanatory IRT.
Supported  First, to investigate how online consumers perceive and attribute inconsistency among advice sources, this study compared perceived causal attribution of inconsistency after presenting inconsistency in Round 2. The result shows that people attribute inconsistency among advice sources to an RA rather than themselves. Second, to examine online consumers’ reactions to inconsistency among advice sources, this study compares perceived decision quality, perceived competence of an RA, and perceived deceptiveness of an RA before and after presenting inconsistency among advice sources. The result shows that, after perceiving inconsistency among advice sources, people tend to have negative reactions not only to an RA, but also to their decision in choosing a recommended alternative from an RA. Third, to investigate the impact of IRTs on online consumers’ attribution, this study compares changes of causal attribution of inconsistency to an RA, Experts, Other Customers, and the User themselves before and after utilizing IRTs. Our data analysis reveals that IRTs can alleviate individuals’ attribution of inconsistency among advice sources to an RA. By decreasing attribution to an RA and increasing attribution to the Users themselves, online consumers’ attribution of inconsistency is relatively converged across all advice sources, including the User. Lastly, to compare the moderating effect of the IRTs on the relationship between inconsistency among advice sources and individuals’ reactions, this study compares the changes of perceived decision quality, perceived  140 competence of an RA, and perceived deceptiveness of an RA before and after utilizing the IRTs. Overall, data analyses show that the Interactive IRT is more effective for alleviating not only online consumers’ negative reactions to their decision-making performance, but also their negative perceptions of an RA. In addition, as the average correlation between product quality, such as recommendation ranking or rating scores, and product consistency distance is not statistically significant (p>.1), product quality does not influence the impact of consistency on perceived decision quality.49  By comparing the impact of the Explanatory IRT and Interactive IRT on online consumers’ negative reactions, in addition, this study also proposes a theoretical insight on the User-Centric and System-Centric Reactions (see Figure 4.16). According to our theoretical perspectives on Attribution Theory, the Explanatory IRT would be ineffective in alleviating online consumers’ negative reactions to an RA, while it is capable of making people put proper weight or attention to situational factors. That is, the Explanatory IRT may not alleviate negative reactions to the competence and deceptiveness of an RA. However, the Explanatory IRT is only effective in alleviating negative reactions to their decision-making in choosing a recommend alternative from an RA.   Figure 4.16 Impact of IRTs on User-Centric and System-Centric Reactions                                                  49 The average correlations between recommendation rankings or ratings and product consistency distance measure is 0.028 (p>.1).  141 This result proposes that the impact of IRTs is contingent to the type of reactions, such as User-Centric and System-Centric reactions. A User-Centric Reaction refers to online consumers’ negative perception of themselves when they are engaged in an event in which inconsistency among advice sources is triggered. 
If online consumers’ negative reaction towards the inconsistency attribution is directly associated with themselves, such as their decision-making performance, the decision aids that provide reasonable situational factors would be sufficient to recover their negative reactions toward themselves. A System-Centric Reaction refers to online consumers’ negative perception of the information system when they are engaged in an event in which inconsistency among advice sources is triggered. If online consumers’ negative reaction is associated with the information system in which the RA is incompetent or deceptive, the decision aids providing explanations, justification, and functionalities to validate such explanations and justification would be capable of recovering their negative reactions to the system.  4.5 DISCUSSION 4.5.1 Theoretical Implications To theorize online consumers’ attribution of inconsistency among advice sources, Study #3 uses Correspondence Bias (Gilbert and Malone, 1995; Jones and Harris, 1967; Ross and Nisbett, 1991) and Self-Serving Bias (Campbell and Sedikides, 1999; Myers, 2015) as a theoretical perspective. Through the extent of negative feedback from others, online consumers tend to decontextualize dispositions or overlook situational inducements that actually make inconsistency among advice sources. This theoretical perspective guides this study to propose IRTs that help consumers see why advice sources are inconsistent, and subsequently, reduce consumers’ biased attribution to RA’s incompetence or deceptiveness.  Study #3 proposes a theoretical framework of inconsistency attribution, drawing on the integration of Correspondence Bias and Self-Serving Bias. In addition, this study examines how to alleviate consumers’ biased inconsistency attribution by not only providing underlying mechanisms of inconsistency among advice sources, but also facilitating the validation of the underlying mechanism by revising their product attribute preferences. Overall, this study reveals the ease with which online consumers can attribute inconsistency among advice sources to an RA rather than themselves, whereas an RA actually represents their personal preferences for product attributes.   4.5.2 Practical Implications From the practical perspective, Study #3 proposes IRTs and investigates their impact on recovering online consumers’ perception of RA’s incompetence and deceptiveness. It also shows the importance of decision  142 aids that identify the underlying mechanism describing why advice source are inconsistent. Therefore, by providing decision aids that facilitate interactions between online customers and an RA, online stores are able to guide to draw inferences in understanding the interplay among advice sources, and consequently support online consumers’ efficient and effective purchasing process, and recover consumers’ biased attribution to an RA. Overall, this study can provide useful guidelines for DSS developers.  4.5.3 Limitations and Future Research Despite the academic and practical implications of Study #3, there are limitations. First, as this study was a conservative test in a laboratory, a public or social context should be considered for future studies. Second, the participants in this study are undergraduate and graduate students who may not precisely represent the overall population of online shoppers, while participants have the potential to become heavy users (Kim et al., 2013), and around 95% of them have had previous experience in online shopping. 
For a future research, a complementary eye-tracking study would allow us to see whether visuospatial attention focuses on the difference of preference elicitations across advice sources. For example, if online consumers – who show a similar level of visuospatial attention to the preferences elicitations in utilizing the Explanatory IRT – retain biased negative reactions toward an RA, it would be a complement to my findings.     143 CHAPTER 5: CONCLUSION  5.1 A SUMMARY OF THE THESIS As more online stores simultaneously provide multiple advice sources and online consumers can find multiple advice sources on the Internet when assessing products, shoppers develop decision-making strategies to manage a wide variety and possibly conflicting sets of information about product fit and quality. By utilizing consistency/inconsistency among advice sources, consumers can conduct better searches for products and/or validate an advice source’s ratings of products.   While extant studies have investigated online consumers’ utilizations of recommendations from an advice source, it is not clear how these consumers utilize multiple advice sources. Few studies have investigated online consumers’ new decision-making strategies in utilizing multiple advice sources or new decision aids that support such decision-making strategies. To address these gaps, this thesis investigated online consumers’ utilization of multiple advice sources. It focused on three particular aspects: consistency strategies used by online consumers (Study #1, Chapter 2); consistency distance identification tools (CDITs) that support these consistency strategies (Study #2, Chapter 3); and inconsistency reduction tools (IRTs) that alleviate online consumers’ potentially biased attribution and reactions triggered by the utilizations of consistency strategies (Study #3, Chapter 4) (see Table 5.1).  Table 5.1 A Summary of the Thesis  Study #1 Study #2 Study #3 Research Type Exploratory Research Confirmatory Research Research Domain Online consumers can access multiple advice sources on the Internet. While online consumers use consistency strategies, there is no decision aids that support such strategies. While consistency strategy is useful, it would increase potentially biased attribution of inconsistency.  
Research Objectives Identifying online consumers new decision-making strategy (i.e., consistency strategies) in utilizing multiple advice sources Designing & Implementing CDITs that support product selection  Investigating online consumers’ attribution of inconsistency among advice sources   Designing & implementing IRTs that alleviate potentially biased inconsistency attributions  144 Theoretical Foundations Information Processing Model Correspondence Bias  Self-Serving Bias Cognitive Dissonance Theory  Limited Cognitive Capacity  Task-Technology Fit Contribution Extending classic decision-making strategy literatures by identifying new consistency strategies  Introducing trustworthiness variance as a factor of task-individual-technology fit  Examining impact of CDITs on decision-making performance across information search stages through task-individual-technology fit Examining online consumer’s biased attribution of inconsistency   Investigating the impact of IRTs on online consumers’ reactions to an recommendation agent and their decision-making performance   By identifying decision-making strategies, information system scholars have developed theoretical foundations for designing decision aids that support online consumers (Todd and Benbasat, 1987). Therefore, identifying new strategies and implementing decision aids that support such strategies are prominent research topics in information systems, both from theoretical and practical perspectives. Given the current nascent state of knowledge of online consumers’ utilization of multiple advice sources, Study #1 explored how online consumers process recommendations and reviews from multiple advice sources using concurrent verbal protocol analysis. It identified four recommendation strategies and two review consistency strategies. The results show that consumers utilize consistency as a heuristic in utilizing multiple advice sources.   Understanding online consumers’ strategic utilizations of multiple advice sources forms the basis for designing better decision aids. Decision-making strategies employed for utilizing multiple advice sources are conducted “manually” by the consumer, which requires decision aids that guide consumers concerning when such strategies can be utilized across information search stages (Wang and Benbasat, 2009). Thus, Study #2 and Study #3 proposed new decision aids (i.e., CDITs and IRTs) that respectively increase positive impacts on decision-making performance and decrease potentially biased attribution and negative reactions to an RA in utilizing consistency strategies.   Study #2 proposed CDITs that support online consumers’ consistency strategies and investigated which combination of CDITs, information search stages, and trustworthiness of advice sources is the most efficient and effective in improving decision-making performance. The results show that there are interaction effects between Source and/or Product CDITs, trustworthiness variance, and information search  145 stages on perceived task-individual-technology fit and decision-making performance. Particularly, in selecting an advice source, Aggregated Source CDIT fits online consumers having a similar level of trustworthiness across all advice sources, while Pairwise Source CDIT fits online consumers strongly trusting a particular advice source. 
In addition, in selecting a product, Aggregated Product CDIT is suitable for building an overall understanding of a product category, while Pairwise Product CDIT is suitable for building an in-depth understanding of a particular product.  Study #3 investigated when and how online consumers attribute inconsistency among advice sources to an RA, and proposed IRTs that alleviate consumers’ potentially biased attribution and negative reactions to an RA. Particularly, by comparing the impact of the two types of IRTs (i.e., Explanatory and Interactive IRTs) on online consumers’ negative reactions, this study revealed that providing explanations and justification would be ineffective in alleviating online consumers’ negative reactions. Furthermore, providing explanations and justification after facilitating more interaction with an RA makes online consumers apply proper weight or increased attention to such provided explanations.   The results of Study #2 and Study #3 show that, when providing multiple advice sources, online stores can enhance positive impact and reduce negative reaction by implementing new decision aids (i.e., CDITs and IRTs). Overall, this thesis improves understanding of online consumers’ utilizations of multiple advice sources and provides guidelines for practitioners.   5.2 CONTRIBUTIONS 5.2.1 Theoretical Contributions All three studies of this thesis have both theoretical and practical implications.   From the theoretical perspective, Study #1 identified the decision-making strategies used in an environment of multiple recommendation and review sources. This is important because almost all previous research focused primarily on a single advice source. To date, online consumers’ utilization of diverse recommendations and reviews from multiple advice sources have been largely ignored. Recent IS studies have recognized the need to examine the utilization and impact of multiple sources on product selection decision-making performance (Baum and Spann, 2014; Li et al., 2010; Xu et al., 2017). To the best of my knowledge, this is the first study to identify the use of both recommendation and review consistencies between multiple sources across information search stages. Through concurrent verbal protocol analysis, this study explored online consumers’ decision-making processes and identified four recommendation  146 consistency strategies (seeking, anchoring, deliberating, and adhering) and two review consistency strategies (confirming and validating) that more than 81% of participants used during the information search stages. Study #1 examined the intervening processes - the yet unexplored black box of decision-making processes. Although past research has emphasized the need to understand the decision-making process in extracting appropriate information for the design and evaluation of decision-aid tools (Todd and Benbasat, 1987), few studies have examined the decision-making process in terms of the use of multiple sources (Lee et al., 2011; Xu et al., 2017). Through a rigorous coding procedure, Study #1 provided guidelines for using concurrent verbalization as an exploratory approach for theory building.   Study #2 conceptualized the trustworthiness variance representing individual user’s characteristics that trigger the utilization of consistency strategies (i.e., Anchoring and Seeking Strategies). 
While previous studies relying on Task-Technology Fit Theory have focused mainly on the interplay between technology and task in improving users' performance, few have examined the impact of individuals' utilization of decision-making strategies on task-individual-technology fit. Study #2 attempted to fill this theoretical gap by proposing trustworthiness variance as a key determinant of task-individual-technology fit in utilizing multiple advice sources through CDITs. In addition, consistency distance was conceptualized as a more objective and continuous variable to better capture the granularity of inconsistency among advice sources. By adopting a Euclidean metric, Study #2 was able to accomplish two things: map advice sources' rating scores, which represent an overall evaluation of product quality, into Euclidean space; and measure consistency distance as an objective and continuous variable (a minimal computational sketch of these constructs is given at the end of this subsection).

Study #3 proposed a theoretical framework of inconsistency attribution drawing on the integration of Correspondence Bias and Self-Serving Bias. It also examined how to alleviate consumers' potentially biased inconsistency attribution not only by providing the underlying mechanisms of inconsistency among advice sources, but also by facilitating interaction with an RA. Overall, this study revealed two things. The first is how easily online consumers attribute inconsistency among advice sources to an RA rather than to themselves, even though the RA represents their personal preferences for product attributes. The second is that facilitating users' validation of explanations for inconsistency among advice sources, by letting them revise their personal preferences, makes users attend to the differences in product attribute preferences between themselves and the advice sources. This in turn not only alleviates biased attributions of incompetence and deceptiveness to the RA, but also lessens negative reactions to it.

Overall, this dissertation explores online consumers' new decision-making strategies for coping with a wide variety of possibly conflicting external evaluations from diverse advice sources. Since there are few theoretical foundations for utilizing multiple advice sources, Study #1 explored online consumers' information search process and identified consistency strategies. In particular, this is a major update of classical decision-making theories that relied mainly on an internal, attribute-oriented perspective. On the basis of the findings of Study #1, Study #2 proposed CDITs that directly support online consumers' use of consistency strategies. While Study #2 showed the benefits of utilizing consistency among advice sources, it also revealed the potential costs of utilizing inconsistency. Therefore, Study #3 investigated the underlying mechanism of online consumers' attribution of inconsistency and proposed IRTs that minimize the costs of utilizing inconsistency among advice sources. Overall, through three empirical studies, this dissertation built theoretical foundations for utilizing multiple advice sources and proposed new decision aids that maximize the benefits and minimize the costs of utilizing multiple advice sources.
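To make the consistency distance and trustworthiness variance constructs concrete, the sketch below shows one way they could be computed. It is a minimal illustration assuming hypothetical source names, a five-point rating scale, a seven-point trust scale, and the mean pairwise distance as the aggregated measure; it is not the exact operationalization used in the Study #2 experiments.

from itertools import combinations
from math import sqrt

# Hypothetical rating scores (1-5 scale) that three advice sources assign to
# the same four products; the source names and values are illustrative only.
ratings = {
    "expert":      [4.5, 3.0, 4.0, 2.5],
    "consumers":   [4.0, 3.5, 3.5, 2.0],
    "recommender": [2.5, 4.5, 3.0, 4.0],
}

def consistency_distance(a, b):
    # Euclidean distance between two sources' rating vectors; a small value
    # means the two sources evaluate the products consistently.
    return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Pairwise distances: the kind of value a Pairwise Source CDIT could display
# for one pair of sources at a time.
pairwise = {
    (s1, s2): consistency_distance(ratings[s1], ratings[s2])
    for s1, s2 in combinations(ratings, 2)
}

# Aggregated distance: one summary value across all sources, here taken as the
# mean of the pairwise distances (an assumption made for this sketch).
aggregated = sum(pairwise.values()) / len(pairwise)

# Trustworthiness variance: the spread of a shopper's trust ratings across the
# sources (hypothetical 1-7 scale); a near-zero variance corresponds to
# trusting all sources to a similar extent.
trust = {"expert": 6, "consumers": 5, "recommender": 6}
mean_trust = sum(trust.values()) / len(trust)
trust_variance = sum((t - mean_trust) ** 2 for t in trust.values()) / len(trust)

print(pairwise, aggregated, trust_variance)

Read this way, an Aggregated CDIT would graph the single summary value, whereas a Pairwise CDIT would graph the distance for a selected pair of sources; the same distance logic could presumably be applied to the scores a single product receives from different sources to obtain Product CDIT values.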
5.2.2 Practical Contributions

From the practical perspective, Study #1 improves the understanding of online consumers' product selection decision-making processes, which in turn forms a basis for designing better decision aids. Accordingly, online stores should help consumers identify recommendation and review consistency through appropriate decision support functionalities under the users' control. They can accomplish this by highlighting recommendation consistency between multiple sources and/or presenting the differences between review rating scores at appropriate stages of the information search process. Previous studies have shown that positive reviews of products and sellers increase consumers' intention to purchase the product from those sellers and their willingness to pay a premium price. However, Study #1 reveals that even if one source provides a positive review, it might not be sufficient for consumers when they read inconsistent reviews from other sources or recommendation rankings that are not aligned with the reviews. Thus, online stores should be encouraged to provide tools that identify recommendation and review consistency.

Study #2 provides guidelines for DSS developers. To implement decision aids that support consistency strategies, it is important to consider two aspects: how CDITs can help consumers better manage conflicting opinions by utilizing better consistency strategies, which culminates in better decisions; and which combination of a CDIT and an information search stage is the most efficient and effective in utilizing consistency and improving decision-making performance. Study #2 reveals that the Pairwise Source CDIT suits online customers having a strong preference for a specific advice source, while the Aggregated Source CDIT is better for those having a similar extent of trustworthiness across multiple advice sources. In addition, the Aggregated Product CDIT should be provided before screening out alternatives, while the Pairwise Product CDIT is more useful after choosing a set of alternatives that deserve further elaboration.

Study #3 proposed IRTs and investigated their impact on alleviating online consumers' perceptions of the RA's incompetence and deceptiveness. It showed the importance of decision aids that not only identify the underlying mechanisms describing why advice sources are inconsistent, but also facilitate interactions with an RA. Therefore, by providing decision aids that offer explanations and facilitate interactions with an RA, online stores can guide online consumers to draw inferences about the interplay among advice sources, and consequently support an efficient and effective purchasing process and mitigate potentially biased attribution to the RA.
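The explanation-plus-interaction mechanism described above can be illustrated with a minimal sketch. Everything in it is hypothetical: the attribute names, the preference weights, the function names, and the wording of the generated message are assumptions made for illustration, not the IRT implementation evaluated in Study #3.

# Hypothetical preference weights (summing to 1) that a shopper entered into
# the RA, and the weights implied by another advice source's evaluations.
user_weights   = {"price": 0.5, "battery life": 0.2, "camera": 0.3}
expert_weights = {"price": 0.1, "battery life": 0.3, "camera": 0.6}

def explain_inconsistency(user, source, source_name, top_n=2):
    # Explanatory-IRT-style message: name the attributes on which the shopper
    # and the advice source differ most, as a reason for inconsistent advice.
    gaps = sorted(user, key=lambda a: abs(user[a] - source[a]), reverse=True)
    parts = [
        f"you weight '{a}' at {user[a]:.1f} while {source_name} weights it at {source[a]:.1f}"
        for a in gaps[:top_n]
    ]
    return "The advice differs mainly because " + "; and ".join(parts) + "."

def revise_preferences(user, revisions):
    # Interactive-IRT-style step: let the shopper revise preferences in the RA
    # before the explanation is (re)presented.
    updated = dict(user)
    updated.update(revisions)
    return updated

print(explain_inconsistency(user_weights, expert_weights, "the expert source"))
revised = revise_preferences(user_weights, {"price": 0.3, "camera": 0.5})
print(explain_inconsistency(revised, expert_weights, "the expert source"))

The design point suggested by the Study #3 results lies in the ordering: prompting the preference revision first and only then presenting the explanation is what appears to make shoppers apply proper weight to it.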
5.3 LIMITATIONS AND SUGGESTIONS FOR FUTURE RESEARCH

Despite both the theoretical and practical contributions of this thesis, there are several limitations. First, I examined the use, roles, and impact of consistency in a laboratory setting through artificial buying tasks. Future research should consider a field experiment to examine more natural online shopping behaviors. Second, the participants in my research were undergraduate and graduate students, who may not precisely represent the overall population of online shoppers. However, because the participants have the potential to become heavy users (Kim et al., 2013) and most of them had previous experience in online shopping, the use of students is not a significant threat to external validity (McKnight et al., 2002). Third, my research examined consistency between advice sources; however, there may be other types of consistency, such as consistency within an advice source. For example, with multiple experts and multiple consumers offering advice, and therefore scope for varying opinions, future research could investigate consistency within an advice source. Online consumers may also be interested in the consistency of an advice source's recommendations across time, in order to examine whether the advice source's preferences are stable or varied. In addition, even though this study did not find evidence of online consumers utilizing consistency between a recommendation list (i.e., a ranking) and a review (i.e., a rating score), online consumers could consider consistency across different types of evaluations. Therefore, future research could consider diverse types of consistency that could enhance online consumers' shopping experience and decision-making performance. Fourth, while my research examined online consumers' utilization of consistency/inconsistency, that utilization is likely contingent on the number of advice sources: as more advice sources become accessible and are considered, the utilization of consistency/inconsistency and its impact on decision-making performance will increase. Therefore, future research could investigate the moderating effect of the number of advice sources on the utilization of consistency/inconsistency among advice sources. Fifth, although my research implemented decision aids such as CDITs and IRTs across information search stages and examined their impact on decision-making performance, it could not simultaneously and unobtrusively trace individual users' utilization of these decision aids. In future research, an eye-tracking study would be a useful complement, allowing researchers to simultaneously and unobtrusively trace whether visuospatial attention focuses on such decision aids (i.e., CDITs and IRTs); such research could test my theoretical perspectives and strengthen my findings. Lastly, while my research investigated the impact of consistency on online consumers' decision-making, the consistency and/or inconsistency mechanisms can be applied beyond product choice. For example, there are many domains, such as voting, dating, and government policy, in which diverse groups hold a wide variety of possibly conflicting opinions. Future research can apply the theories and findings of this dissertation to such domains. For example, online dating services can present consistency among diverse sets of users: users having similar characteristics and interests would give consistent ratings and reviews to a potential partner, and a user with similar characteristics and interests could rely on such consistency in choosing a better partner. When a potential partner has inconsistent ratings and reviews, online dating services can reduce other users' avoidance of a date with him or her by clarifying which types of users gave the low ratings and negative reviews. Overall, as consistency is a key driver of decision-making in utilizing the opinions of diverse sources, how to categorize sources will be another key interest in applying the consistency mechanism. Therefore, investigating better source categorizations for improving consistency could yield better ratings and reviews for users of online dating services.

BIBLIOGRAPHY

Aiken, M., Gu, L., and Wang, J. 2013. “Task Knowledge and Task-Technology Fit in a Virtual Team,” International Journal of Management (30:1), pp. 3-11.
Aljukhadar, M., Senecal, S., and Nantel, J. 2014. “Is More Always Better?
Investigating the Task-Technology Fit Theory in an Online User Context,” Information & Management (51:4), pp. 391-397. Ba, S., and Pavlou, P. A. 2002. “Evidence of the Effect of Trust Building Technology in Electronic Markets: Price Premiums and Buyer Behavior,” MIS Quarterly (26:3), pp. 243–268. Bansal, H. S., and Voyer, P. A. 2000. “Word-of-Mouth Processes within a Services Purchase Decision Context,” Journal of Service Research (3:2), pp. 166-177. Baum, D. and Spann, M. 2014 “The Interplay between Online Consumer Reviews and Recommender Systems: An Experimental Analysis,” International Journal of Electronic Commerce (19:1), pp. 129-162. Barclay, D., Thompson, R., and Higgins, C. 1995. “The Partial Least Squares (PLS) Approach to Causal Modeling: Personal Computer Adoption and Use as an Illustration,” Technology Studies (2:2), pp. 285-309. Benbasat, I. and Dexter, A.S. 1986. “An Investigation of the Effectiveness of Color and Graphical Information Presentation Under Varying Time Constraints,” MIS Quarterly (10:1), pp. 59-83. Benlian, A., Titah, R., and Hess, T. 2012. “Differential Effects of Provider Recommendations and Consumer Reviews in E-Commerce Transactions: An Experimental Study,” Journal of Management Information Systems (29:1), pp. 237–272. Bera, P., Burton-Jones, A., and Wand Y. 2011. “Guidelines for Designing Visual Ontologies to Support Knowledge Identification,” MIS Quarterly (35:4), pp. 883-908.  Bettman, J. R., Luce, M. F., and Payne, J. W. 1998. “Constructive Consumer Choice Processes,” Journal of Consumer Research (25:3), pp. 187–217. Bikhchandani, S., Hirshleifer, D. and Welch, I. 1992. “A Theory of Fads, Fashion, Custom, and Cultural Change as Informational Cascades,” Journal of Political Economy (100:5), pp. 992-1026. Bouwman, M.J., Frishkoff, P.A. and Frishkoff, P. 1987. “How do Financial Analysts Make Decisions? A Process Model of the Investment Screening Decision,” Accounting, Organizations and Society (12:1), pp.1-29. Boyatzis, R. E. 1998. Thematic Analysis and Code Development: Transforming Qualitative Information, Sage Publications: London and New Delhi Brewer.  151 Brown, J. D., and Dutton, K. A. 1995. “Truth and Consequences: The Costs and Benefits of Accurate Self-Knowledge,” Personality and Social Psychology Bulletin (21), pp.1288-1296.  Burton-Jones, A., and Meso, P. N. 2006. “Conceptualizing Systems for Understanding: An Empirical Test of Decomposition Principles in Object-Oriented Analysis,” Information Systems Research (17:1), pp. 38–60. Butler, P., and Peppard, J. 1998. “Consumer Purchasing on the Internet: Processes and Prospects,” European Management Journal (16:5), pp. 600–610. Campbell, W. K. and Sedikides, C. 1999. “Self-Threat Magnifies the Self-Serving Bias: A Meta-Analytic Integration,” Review of general Psychology (3:1), pp. 23-43. Chen, Y., and Xie, J. 2008. “Online Consumer Review: Word-of-Mouth as a New Element of Marketing Communication Mix,” Management Science (54:3), pp. 477–491. Chewning, E. G., Jr, and Harrell, A. M. 1990. “The Effect of Information Load on Decision Makers' Cue Utilization Levels and Decision Quality in a Financial Distress Decision Task,” Accounting, Organizations and Society (15:6), pp. 527–542. Chevalier, J. A., and Mayzlin, D. 2006. “The Effect of Word of Mouth on Sales: Online Book Reviews,” Journal of Marketing Research (43:3), pp. 345-354. Chi, M. T. H. 1997. “Quantifying Qualitative Analyses of Verbal Data: A Practical Guide,” The Journal of the Learning Sciences (6:3), pp. 271–315. Cohen, J. 1988. 
Statistical Power Analysis for the Behavioral Sciences, 2nd Edition. Hillsdale, N.J.: Lawrence Erlbaum. DeLone, W., and McLean, E. 1992. “Information Systems Success: The Quest for the Dependent Variable,” Information Systems Research (3:1), pp. 60–95.  DeLone, W., and McLean, E. 2003. “The DeLone and McLean Model of Information Systems Success: A Ten-Year Update,” Journal of Management Information Systems (19:4), pp. 9–30.  Deza, M.M. and Deza, E. 2009. Encyclopedia of Distances. Springer Berlin Heidelberg. Dietvorst, B.J., Simmons, J.P., and Massey, C. 2015. “Algorithm Aversion: People Erroneously Avoid Algorithms After Seeing Them Err,” Journal of Experimental Psychology: General (144:1), p. 114-126. Dimoka, A., Hong, Y., and Pavlou, P. 2012. “On Product Uncertainty in Online Markets: Theory and Evidence,” MIS Quarterly (36:2), pp. 395–426. Duhan, D. F., Johnson, S. D., Wilcox, J. B., and Harrell, G. D. 1997. “Influences on Consumer Use of Word-of-Mouth Recommendation Sources,” Journal of the Academy of Marketing Science (25:4), pp. 283–295.  152 Dunning, D. 1993. Words to Live by: The Self and Definitions of Social Concepts and Categories. In J. Suls (Ed.), Psychological Perspectives on the Self (Vol. 4, pp. 99-126). Hillsdale, NJ: Erlbaum.  East, R., Hammond, K., and Lomax, W. 2008. “Measuring the Impact of Positive and Negative Word of Mouth on Brand Purchase Probability,” International Journal of Research in Marketing (25:3), pp. 215-224. Edwards, A. L. 1957. The Social Desirability Variable in Personality Assessment and Research. New York: Dryden Press.  Eisingerich, A. B., and Bell, S. J. 2008. “Perceived Service Quality and Customer Trust: Does Enhancing Customers' Service Knowledge Matter?,” Journal of Service Research (10:3), pp. 256–268. Ericsson, K. A., and Simon, H. A. 1993. Protocol Analysis: Verbal Reports as Data, MIT Press. Cambridge, MA. Festinger, L. 1962. A Theory of Cognitive Dissonance, Stanford University Press. California, CA. Fornell, C., and Bookstein, F. L. 1982. “Two Structural Equation Models: LISREL and PLS Applied to Consumer Exit-Voice Theory,” Journal of Marketing Research (19:4), pp. 440–452. Gawronski, B. 2012. “Back to the Future of Dissonance Theory: Cognitive Consistency as a Core Motive,” Social Cognition (30:6), pp. 652-668. Gilbert, D. T., and Malone, P. S. 1995. “The Correspondence Bias,” Psychological Bulletin (117), pp. 21–38.  Goodhue, D.L. and Thompson, R.L. 1995. “Task-Technology Fit and Individual Performance,” MIS Quarterly (19:2), pp. 213-236. Grazioli, S., and Jarvenpaa, S. L. 2000. “Perils of Internet Fraud: An Empirical Investigation of Deception and Trust with Experienced Internet Consumers,” Systems, Man and Cybernetics, Part A: Systems and Humans, IEEE Transactions on (30:4), pp. 395–410. Han, X., Wang, L., Crespi, N., Park, S. and Cuevas, Á. 2015. “Alike People, Alike Interests? Inferring Interest Similarity in Online Social Networks,” Decision Support Systems (69), pp.92-106. Harmon-Jones, E., and Harmon-Jones, C. 2007. “Cognitive Dissonance Theory after 50 years of Development,” Zeitschrfit für Sozialpsychologie (38:1), pp. 7-16. Heider, F. 1958. The Psychology of Interpersonal Relations. New York: Wiley.  Hoch, S. J., and Ha, Y. W. 1986. “Consumer Learning: Advertising and the Ambiguity of Product Experience,” Journal of Consumer Research (13:2), pp. 221–233. Hu, N., Liu, L., and Zhang, J. J. 2008. “Do Online Reviews affect Product Sales? 
The Role of Reviewer Characteristics and Temporal Effects,” Information Technology and Management (9:3), pp. 201-214.  153 Hu, X., Lin, Z., Whinston, A. B., and Zhang, H. 2004. “Hope or Hype: On the Viability of Escrow Services as Trusted Third Parties in Online Auction Environments,” Information Systems Research (15:3), pp. 236–249. Iyer, A., Rosenberg, C. and Karnik, A. 2009. “What is the Right Model for Wireless Channel Interference?”, IEEE Transactions on Wireless Communications (8:5), pp. 2662-2671. Jarupathirun, S. and Zahedi, F.M. 2007. “Exploring the Influence of Perceptual Factors in the Success of Web-based Spatial DSS,” Decision Support System (43), pp. 933-951.   Jiang, Z. and Benbasat, I., 2007. “The Effects of Presentation Formats and Task Complexity on Online Consumers' Product Understanding,” MIS Quarterly (31:3), pp. 475-500. Jiménez, F. R., and Mendoza, N. A. 2013. “Too Popular to Ignore: The Influence of Online Reviews on Purchase Intentions of Search and Experience Products,” Journal of Interactive Marketing (27:3), pp. 226-235. Jones, E. E., and Davis, K. E. 1965. From Acts to Dispositions: The Attribution Process in Person Perception. In L. Berkowitz (Ed.), Advances in Experimental Social Psychology (Vol. 2, pp. 219–266). New York: Academic Press.  Jones, E. E., and Harris, V. A. 1967. “The Attribution of Attitudes,” Journal of Experimental Social Psychology (3), pp. 1-24.  Johnson, E. J., Moe, W. W., Fader, P. S., Bellman, S., and Lohse, G. L. 2004. “On the Depth and Dynamics of Online Search Behavior,” Management Science (50:3), pp. 299–308. Kamis, A., Koufaris, M., and Stern, T. 2008. “Using an Attribute-based Decision Support System for User-Customized Products Online: An Experimental Investigation,” MIS Quarterly (32:1), pp. 159–177. Karimi, S., Papamichail, K. N., and Holland, C. 2010. “A Model of Internet Shopper Behavior, A Cross Sector Analysis,” in Proceedings of the 31st International Conference on Information Systems, Saint Louis, Missouri, Dec 12-25, Paper 87. Kelley, H. H. 1972. Causal Schemata and the Attribution Process. In E. E. Jones, D. E. Kanouse, H. H. Kelley, R. E. Nisbett, S. Valins, & B. Weiner (Eds.), Attribution: Perceiving the Causes of Behavior (pp. 151-174). Morristown, NJ: General Learning Press.  Kelley, H. H. and Michela, J. L. 1980. “Attribution Theory and Research,” Annual Review of Psychology (31), pp. 457-501.  Kendall, P. C., Howard, B. L., and Hays, R. C. 1989. “Self-Referent Speech and Psychopathology: The Balance of Positive and Negative Thinking,” Cognitive Therapy and Research (13), pp. 583-598.  Kim, H. and Benbasat, I. 2013. “How e-Consumers Integrate Diverse Recommendations from Multiple Sources: Exploration and Confirmation-driven Approaches,” in Proceedings of International Conference on Information Systems (ICIS), Milan, Italy, December.  154 Kim, H. and Benbasat, I. 2015, “How Online Consumers Utilize Recommendations and Reviews from Multiple Sources: An Empirical Exploration Using Verbal Protocol Analysis,” working paper Kim, J., Hahn, J., and Hahn, H. 2000. “How Do We Understand a System with (So) Many Diagrams? Cognitive Integration Processes in Diagrammatic Reasoning,” Information Systems Research (11:3), pp. 284-303. Kim, H., Suh, K. S, and Lee, U. K. 2013. “Effects of Collaborative Online Shopping on Shopping Experience through Social and Relational Perspectives,” Information and Management (50:4), pp. 169-180. Kim, T. and Hinds, P. 2006. “Who Should I Blame? 
Effects of Autonomy and Transparency on Attributions in Human-Robot Interaction,” in Proceedings of 15th IEEE International Symposium, Robot and Human Interactive Communication, Roman, Sep, pp. 80-85. Klein, L. R. 1998. “Evaluating the Potential of Interactive Media Through a New Lens: Search Versus Experience Goods,” Journal of Business Research (41:3), pp. 195–203. Koller, M., and Salzberger, T. 2007. “Cognitive Dissonance as a Relevant Construct Throughout the Decision-Making and Consumption Process-An Empirical Investigation Related to a Package Tour,” Journal of Customer Behaviour (6:3), pp. 217-227. Krippendorff, K. 2004. Content Analysis: An Introduction to Its Methodology, SAGE. Kuhlthau, C. C. 1991. “Inside the Search Process: Information Seeking from the User’s Perspective,” Journal of American Society for Information Science (42:5), pp. 361-371. Kumagai, Y., Bliss, J.C., Daniels, S.E., and Carroll, M.S. 2004. “Research on Causal Attribution of Wildfire: An Exploratory Multiple-Methods Approach,” Society and Natural Resources (17:2), pp. 113-127. Kumar, N., and Benbasat, I. 2006. “Research Note: The Influence of Recommendations and Consumer Reviews on Evaluations of Websites,” Information Systems Research (17:4), pp. 425–439. Lasdon, L.S., Waren, A.D., Jain, A. and Ratner, M. 1978. “Design and Testing of a Generalized Reduced Gradient Code for Nonlinear Programming,” ACM Transactions on Mathematical Software (TOMS) (4:1), pp.34-50. Lang, A. 2000. “The Limited Capacity Model of Mediated Message Processing,” Journal of Communication (50:1), pp. 46-70. Leahy, R.L. 2002. “A Model of Emotional Schemas,” Cognitive and Behavioral Practice (9:3), pp. 177-190. Lee, C., Kim, J., and Chan-Olmsted, S. M. 2011. “Branded Product Information Search on the Web: The Role of Brand Trust and Credibility of Online Information Sources,” Journal of Marketing Communications (17:5), pp. 355–374.  155 Li, M.-X., Tan, C.-H., Wei, K.-K., and Wang, K.-L. 2010. “Where to Place Product Reviews? An Information Search Process Perspective,” in Proceedings of 31st International Conference on Information Systems, Saint Louis, Missouri, Dec 12-25, Paper 60. Lin, T.C. and Huang, C.C. 2008. “Understanding Knowledge Management System Usage Antecedents: An Integration of Social Cognitive Theory and Task Technology Fit,” Information & Management, (45:6), pp. 410-417. Liu, B. Q., and Goodhue, D. L. 2012. “Two Worlds of Trust for Potential E-Commerce Users: Humans as Cognitive Misers,” Information Systems Research (23:4), pp. 1246–1262. Liu, Y., Lee, Y. and Chen, A.N. 2011. “Evaluating the Effects of Task–Individual–Technology Fit in Multi-DSS Models Context: A Two-Phase View,” Decision Support Systems (51:3), pp. 688-700. Lynch, S. J. S., Marmorstein, H., and Weigold, M. F. 1988. “Choices from Sets Including Remembered Brands: Use of Recalled Attributes and Prior Overall Evaluations,” Journal of Consumer Research (15:2), pp. 169–184. Marcolin, B. L., Compeau, D. R., Munro, M. C., and Huff, S. L. 2000. “Assessing User Competence: Conceptualization and Measurement,” Information Systems Research (11:1), pp. 37–60.  McKnight, D. H., Choudhury, V., and Kacmar, C. 2002. “Developing and Validating Trust Measures for E-Commerce: An Integrative Typology,” Information Systems Research (13:3), pp. 334–359. McQuarrie, E. F., and Munson, J. M. 1992. “A revised Product Involvement Inventory: Improved Usability and Validity,” Advances in Consumer Research (19:1), pp. 108–115. Mertler, C.A. and Reinhart, R.V. 2016. 
Advanced and Multivariate Statistical Methods: Practical Application and Interpretation. Taylor & Francis. Miller, G. 1956. “The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information,” The Psychological Review (63), pp. 81–97. Miranda, S. M., and Bostrom, R. P. 1993. “The Impact of Group Support Systems on Group Conflict and Conflict Management,” Journal of Management Information Systems (10:3), pp. 63–95. Mudambi, S. M., and Schuff, D. 2010. “What Makes a Helpful Online Review? A Study of Customer Reviews on Amazon. com,” MIS Quarterly (34:1), pp. 185–200. Munro, M. C., Huff, S. L., Marcolin, B. L., and Compeau, D. R. 1997. “Understanding and Measuring User Competence,” Information and Management (33:1), pp. 45–57. Myers, D.G. 2015. Exploring Social Psychology, 7th Edition. New York: McGraw Hill Education. Newell, A., and Simon, H. A. 1972. Human Problem Solving, Prentice-Hall Englewood Cliffs, NJ. Nickerson, R. S. 1998. “Confirmation Bias: A Ubiquitous Phenomenon in Many Guises,” Review of General Psychology (2:2), pp. 175-220.  156 Ortiz Jr, F., Simpson, J.R., Pignatiello Jr, J.J., and Heredia-Langner, A. 2004. “A Genetic Algorithm Approach to Multiple-Response Optimization,” Journal of Quality Technology (36:4), p. 432-450. Park, D. H., and Lee, J. 2009. “eWOM Overload and Its Effect on Consumer Behavioral Intention Depending on Consumer Involvement,” Electronic Commerce Research and Applications (7:4), pp. 386-398. Park, D.-H., Lee, J., and Han, I. 2007. “The Effect of On-Line Consumer Reviews on Consumer Purchasing Intention: The Moderating Role of Involvement,” International Journal of Electronic Commerce (11:4), pp. 125–148. Parkes, A. 2013. “The Effect of Task–Individual–Technology Fit on User Attitude and Performance: An Experimental Investigation,” Decision Support Systems (54:2), pp.997-1009. Pavlou, P. A., and Dimoka, A. 2006. “The Nature and Role of Feedback Text Comments in Online Marketplaces: Implications for Trust Building, Price Premiums, and Seller Differentiation,” Information Systems Research (17:4), pp. 392–414. Payne, J. W., Bettman, J. R., and Johnson, E. J. 1988. “Adaptive Strategy Selection in Decision Making,” Journal of Experimental Psychology: Learning, Memory, and Cognition (14:3), pp. 534-552. Payne, J. W., Bettman, J. R., and Johnson, E. J. 1993. The Adaptive Decision Maker, Cambridge University Press. Perera, R. E. 2000. “Optimizing Human-Computer Interaction for the Electronic Commerce Environment,” Journal of Electronic Commerce Research (1:1), pp. 23-44. Petty, R. E., Cacioppo, J. T., and Schumann, D. 1983. “Central and Peripheral Routes to Advertising Effectiveness: The Moderating Role of Involvement,” Journal of Consumer Research (10:2), pp. 135–146. Pfeiffer, J., and Benbasat, I. 2012. “Social Influence in Recommendation Agents: Creating Synergies between Multiple Recommendation Sources for Online Purchase,” in proceedings of 20th European Conference on Information Systems, Barcelona, Spain, Paper 99. Poddar, A., Donthu, N. and Wei, Y. 2009. “Web Site Customer Orientations, Web Site Quality, and Purchase Intentions: The Role of Web Site Personality,” Journal of Business Research (62:4), pp. 441-450. Quine, W., and Ullian, J. S. 1978. The Web of Belief (Vol. 2), R. M. Ohmann (Ed.), New York: Random House. Rist, R. S. 1989. “Schema Creation in Programming,” Cognitive Science (13:3), pp. 389–414. Roberts, J., and Nedungadi, P. 1995. 
“Studying Consideration in the Consumer Decision Process: Progress and Challenges,” International Journal of Research in Marketing (12:1), pp. 3-7.  157 Ross, L., and Nisbett, R. E. 1991. The Person and the Situation: Perspectives of Social Psychology. New York: McGraw-Hill. Russo, J. E., Johnson, E. J., and Stephens, D. L. 1989. “The Validity of Verbal Protocols,” Memory & Cognition (16:6), pp. 759-769. Schwartz, R. M. 1986. “The Internal Dialogue: On the Asymmetry Between Positive and Negative Coping Thoughts,” Cognitive Therapy and Research (13), pp. 583-598.  Sedikides, C. 1993. “Assessment, Enhancement, and Verification Determinants of the Self-Evaluation Process,” Journal of Personality and Social Psychology (65), pp. 317-338.  Sedikides, C., and Strube, M. J. 1997. Self-Evaluation: To Thine Own Self Be Good, to Thine Own Self Be Sure, to Thine Own Self Be True, to Thine Own Self Be Better. In M. P. Zanna (Ed.), Advances in Experimental and Social Psychology (Vol. 29, pp. 209-269). New York: Academic Press.  Sen, S., and Lerman, D. 2007. “Why Are You Telling Me This? An Examination into Negative Consumer Reviews on the Web,” Journal of Interactive Marketing (21:4), pp. 76-96. Sharma, N., and Patterson, P. G. 2000. “Switching Costs, Alternative Attractiveness and Experience as Moderators of Relationship Commitment in Professional, Consumer Services,” International Journal of Service Industry Management (11:5), pp. 470–490. Silver, M.S. 1988. “User Perceptions of Decision Support System Restrictiveness: An Experiment,” Journal of Management Information Systems (5:1), pp.51-65. Simon, H. A. 1990. “Information Technologies and Organizations,” The Accounting Review (65:3), pp. 658–667. Singleton, R. A., and Straits, B. C. 1999. Approaches to Social Research, (3rd ed.) New York, NY: Oxford University Press, New York and Oxford. Sproule, S., and Archer, N. 2000. “A Buyer Behaviour Framework for the Development and Design of Software Agents in E-Commerce,” Internet Research (10:5), pp. 396–405. Strauss, A., and Corbin, J. 1994. “Grounded Theory Methodology,” Handbook of Qualitative Research. Tan, C.-H., Teo, H.-H., and Benbasat, I. 2010. “Assessing Screening and Evaluation Decision Support Systems: A Resource-Matching Approach,” Information Systems Research (21:2), pp. 305–326. Tan, C. W., Benbasat, I., and Cenfetelli, R. T. 2016. “An Exploratory Study of the Formation and Impact of Electronic Service Failures,” MIS Quarterly (40:1), pp. 1-29. Todd, P., and Benbasat, I. 1987. “Process Tracing Methods in Decision Support Systems Research: Exploring the Black Box,” MIS Quarterly (11:4), pp. 493–512. Trope, Y. 1986. “Identification and Inferential Processes in Dispositional Attribution,” Psychological Review (9:3), pp. 239-257.  158 Utz, S., Kerkhof, P., and van den Bos, J. 2012. “Electronic Commerce Research and Applications,” Electronic Commerce Research and Applications (11:1), pp. 49–58. Vessey, I., 1991. “Cognitive Fit: A Theory-Based Analysis of the Graphs Versus Tables Literature,” Decision Sciences (22:2), pp. 219-240. Wang, H.-C., and Doong, H.-S. 2010. “Argument Form and Spokesperson Type: The Recommendation Strategy of Virtual Salespersons,” International Journal of Information Management (30:6), pp. 493–501. Wang, W., and Benbasat, I. 2009. “Interactive Decision Aids for Consumer Decision Making in E-Commerce: The Influence of Perceived Strategy Restrictiveness,” MIS Quarterly (33:2), pp. 293–320. Wang, Y., and Haggerty, N. 2011. 
“Individual Virtual Competence and Its Influence on Work Outcomes,” Journal of Management Information Systems (27:4), pp. 299-334.
Wilkie, W. L. 1994. Consumer Behavior, New York: John Wiley and Sons.
Xiao, B., and Benbasat, I. 2007. “E-Commerce Product Recommendation Agents: Use, Characteristics, and Impact,” MIS Quarterly (31:1), pp. 137-209.
Xiao, B., and Benbasat, I. 2011. “Product-Related Deception in E-Commerce: A Theoretical Perspective,” MIS Quarterly (35:1), pp. 169-196.
Xiao, B., and Benbasat, I. 2015. “Designing Warning Messages for Detecting Biased Online Product Recommendations: An Empirical Investigation,” Information Systems Research (26:2), pp. 793-811.
Xu, D. J., Benbasat, I., and Cenfetelli, R. T. 2017. “A Two-Stage Model of Generating Product Advice: Proposing and Testing the Complementarity Principle,” Journal of Management Information Systems (34:3), pp. 826-862.
Yoon, C. Y. 2009. “The Effect Factors of End-User Task Performance in a Business Environment: Focusing on Computing Competency,” Computers in Human Behavior (25:6), pp. 1207-1212.
Zellweger, P. 1997. “Web-Based Sales: Defining the Cognitive Buyer,” Electronic Markets (7:3), pp. 10-16.

APPENDICES

Appendix A. Literature Examples on Online Reviews and Recommendations

Single advice source:
- Consumers (reviews), empirical studies:
  - Ba and Pavlou (2002): impacts of positive reviews on price premiums for sellers
  - Pavlou and Dimoka (2006): impacts of qualitative aspects of reviews on trust building and price premiums
  - Chevalier and Mayzlin (2006): impacts of negative eWOM on sales
  - Park et al. (2007): moderating impacts of involvement on the relationship between eWOM and purchasing intention
  - Sen and Lerman (2007): impacts of diverse valence of eWOM on consumer decision-making
  - Chen and Xie (2008): roles of product knowledge in the use of eWOM in purchasing decision-making
  - Hu et al. (2008): impacts of qualitative and quantitative aspects of reviews on sales
  - Park and Lee (2009): impacts of eWOM overload on consumer decision-making
  - Mudambi and Schuff (2010): impacts of valence and depth of a review, for search versus experience goods, on helpfulness of the review
  - Utz et al. (2012): impacts of online store reviews on consumer trust in online stores
  - Jiménez and Mendoza (2013): impacts of review credibility and review agreement, for search versus experience products, on purchasing intention
- Consumers (reviews and recommendations), empirical study:
  - Kumar and Benbasat (2006): impacts of recommendations and consumer reviews on consumers' evaluation of online stores
- RA (recommendations), non-empirical study:
  - Xiao and Benbasat (2007, 2015): use, characteristics, and impacts of an RA on consumer decision-making

Multiple advice sources:
- Consumers and experts (reviews), empirical study:
  - Li et al. (2010): impacts of consumers' and experts' review placements on consumer decision-making
- Consumers and RA (reviews and recommendations), empirical study:
  - Baum and Spann (2014): impact of inconsistency between online consumers' reviews and recommendations from RAs on consumer decision-making
- Consumers, experts, and RA (recommendations):
  - Pfeiffer and Benbasat (2012), non-empirical study: complementary impacts of recommendation sources on consumer decision-making
  - Xu et al. (2017), empirical study: consumers' adoption of recommendations and the impact of consensus between recommendation sources on decision-making
