Open Collections

UBC Theses and Dissertations

Improving the communication interfaces between consumers and online product recommendation agents Xu, Jingjun 2011


IMPROVING THE COMMUNICATION INTERFACES BETWEEN CONSUMERS AND ONLINE PRODUCT RECOMMENDATION AGENTS

by

JINGJUN XU

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF

DOCTOR OF PHILOSOPHY

in

THE FACULTY OF GRADUATE STUDIES
(Business Administration)

THE UNIVERSITY OF BRITISH COLUMBIA
(Vancouver)

August 2011

© Jingjun Xu, 2011

ABSTRACT

An online recommendation agent (RA) assists users by eliciting their product preferences and then recommending products that satisfy those preferences. While the importance of RAs has been emphasized by practitioners and scholars, precisely how an RA should be implemented, and how effective an RA is relative to other recommendation sources, are not well understood. Through three experimental studies, this dissertation evaluates and improves the input, process, and output interfaces of an RA to facilitate the communication between consumers and RAs, in order to reduce consumers' decision effort and enhance the quality of their purchasing decisions.

Regarding the input component of an RA, Study 1 finds that an RA that interactively demonstrates trade-offs among product attributes improves consumers' perceived enjoyment and perceived product diagnosticity. It also finds that a medium level of trade-off transparency should be revealed to the user, as it leads to the highest perceived enjoyment and product diagnosticity. Further, Study 1 augments the Effort–Accuracy Framework by proposing perceived enjoyment and perceived product diagnosticity as two antecedents of decision quality and decision effort.

With respect to the process component of an RA, Study 2 evaluates the efficacy of three types of user feedback (attribute-based feedback, alternative-based feedback, and integrated feedback) in an e-commerce setting and shows that all three outperform the absence of feedback in terms of perceived decision effort.
Additionally, Study 2 demonstrates that the recommendation source (RA, consumers, or experts) moderates the effects of the three types of user feedback on perceived decision quality.

Regarding the output component, Study 3 shows that users are more likely to select a product that is recommended in common by multiple sources, and that doing so results in higher perceived decision quality. Study 3 also reveals that users with high product knowledge or high task involvement are more likely to adhere to the RA's recommendations than to recommendations from experts or consumers. Further, users who rely on the RA's recommendations perceive a higher level of decision quality than those who rely on consumer or expert recommendations.

PREFACE

The research projects described in this dissertation received approval from the Behavioral Research Ethics Board of the University of British Columbia (certificate numbers: H09-01646 and H10-00545).

TABLE OF CONTENTS

ABSTRACT ..... ii
PREFACE ..... iii
LIST OF TABLES ..... vii
LIST OF FIGURES ..... viii
ACKNOWLEDGEMENTS ..... ix
CHAPTER 1: INTRODUCTION ..... 1
1.1 RESEARCH MOTIVATION AND OVERALL OBJECTIVES ..... 1
1.2 INPUT-PROCESS-OUTPUT FRAMEWORK ..... 2
1.2.1 Input ..... 2
1.2.2 Process ..... 5
1.2.3 Output ..... 7
1.3 RESEARCH QUESTIONS ..... 8
1.4 RESEARCH METHOD ..... 9
1.5 OUTLINE OF THE THESIS ..... 10
CHAPTER 2: THE EFFECTS OF TRADE-OFF TRANSPARENCY OF PRODUCT ATTRIBUTES ON CONSUMERS' PERCEIVED DECISION QUALITY AND PERCEIVED DECISION EFFORT TOWARDS RECOMMENDATION AGENTS ..... 11
2.1 INTRODUCTION ..... 11
2.2 THEORETICAL FOUNDATIONS ..... 14
2.2.1 Effort–Accuracy Framework ..... 14
2.2.2 Stimulus-Organism-Response Model ..... 15
2.2.3 Cognitive Load Theory and Task Complexity ..... 16
2.3 HYPOTHESIS DEVELOPMENT ..... 17
2.3.1 Independent and Dependent Variables ..... 17
2.3.2 Impact of the Trade-Off-Transparency Feature on Perceived Enjoyment and Product Diagnosticity ..... 19
2.3.3 The Levels of Trade-off Transparency ..... 20
2.3.4 Impacts of Enjoyment and Diagnosticity on Decision Quality and Decision Effort ..... 21
2.3.5 Impact of Decision Quality and Decision Effort on Intention ..... 22
2.4 METHODOLOGY ..... 23
2.4.1 Manipulation of Trade-Off-Transparency ..... 23
2.4.2 Manipulation of Shopping Task ..... 26
2.4.3 Subjects, Incentive, and Procedures ..... 27
2.4.4 Measurements of Dependent and Control Variables ..... 28
2.5 DATA ANALYSIS ..... 29
2.5.1 Sample ..... 29
2.5.2 Manipulation Check ..... 29
2.5.3 Effect of Trade-off Transparency Levels ..... 30
2.5.4 Test of the Research Model ..... 33
2.5.4.1 Measurement Model ..... 33
2.5.4.2 Structural Model ..... 35
2.5.5 Supplementary Analysis on the Effect of Product Diagnosticity on Decision Effort ..... 36
2.5.6 Supplementary Analysis on Preference Updates ..... 38
2.5.7 Discussion ..... 39
2.6 CONTRIBUTIONS, LIMITATIONS, FUTURE RESEARCH, AND CONCLUSIONS ..... 40
2.6.1 Theoretical Contributions ..... 40
2.6.2 Practical Contributions ..... 42
2.6.3 Limitations and Future Research ..... 44
2.6.4 Conclusions ..... 44
CHAPTER 3: THE EFFECTS OF FEEDBACK MECHANISMS ON PERCEIVED DECISION QUALITY AND PERCEIVED DECISION EFFORT ..... 46
3.1 INTRODUCTION ..... 46
3.2 LITERATURE REVIEW AND THEORETICAL FOUNDATIONS ..... 49
3.2.1 Feedback Mechanisms ..... 49
3.2.2 Theory of Human Information Processing and Effort–Accuracy Framework ..... 51
3.2.3 Theory of Preferential Choice and Decision Strategies ..... 51
3.3 HYPOTHESIS DEVELOPMENT ..... 53
3.3.1 Effect of Feedback Mechanisms on Perceived Decision Quality (In the Presence of Consumer and Expert Recommendations) ..... 54
3.3.2 Effect of Feedback Mechanisms on Perceived Decision Quality (In the Presence of an RA) ..... 56
3.3.3 Effect of Feedback Mechanisms on Perceived Decision Effort ..... 57
3.4 METHODOLOGY ..... 58
3.4.1 Task and Procedures ..... 59
3.4.2 Experimental Web Site Design ..... 60
3.4.2.1 Manipulation of the Recommendation Sources ..... 60
3.4.2.2 Manipulation of Feedback Mechanisms ..... 62
3.4.3 Construct Measurement ..... 65
3.4.4 Data Analysis ..... 66
3.4.4.1 Subject Background Information ..... 66
3.4.4.2 Construct Reliability and Validity ..... 66
3.4.4.3 Results on the Effect of Recommendation Sources and Feedback ..... 67
3.4.4.4 Supplementary Analysis on the Integrated Feedback ..... 71
3.4.5 Discussion ..... 71
3.5 CONTRIBUTIONS AND CONCLUSIONS ..... 73
3.5.1 Theoretical Contributions ..... 73
3.5.2 Practical Contributions ..... 75
3.5.3 Limitations and Future Research ..... 76
3.5.4 Conclusions ..... 76
CHAPTER 4: THE EFFECTS OF MULTIPLE RECOMMENDATION SOURCES ON USERS' CHOICE BEHAVIOR AND PERCEIVED DECISION QUALITY ..... 77
4.1 INTRODUCTION ..... 77
4.2 THEORETICAL FOUNDATIONS ..... 79
4.2.1 Theory of Preferential Choice ..... 80
4.2.2 Elaboration-Likelihood Model (ELM) ..... 80
4.2.3 Cognitive Dissonance Theory ..... 81
4.3 HYPOTHESES DEVELOPMENT ..... 82
4.3.1 Roles of Product Knowledge and Task Involvement ..... 82
4.3.2 The Effect of Recommendation Convergence from Multiple Sources ..... 84
4.4 METHODOLOGY ..... 85
4.4.1 Shopping Task and Incentives ..... 85
4.4.2 Experimental Procedures and Web Site Design ..... 86
4.4.3 Manipulation of the Output of Source Recommendations ..... 89
4.4.4 Construct Measurements ..... 90
4.5 DATA ANALYSIS ..... 91
4.5.1 Sample Characteristics ..... 91
4.5.2 Manipulation Check ..... 92
4.5.3 Adherence to the Recommendation Sources ..... 93
4.5.4 Testing of the Hypotheses ..... 93
4.5.5 Supplementary Analysis on Convergent Recommendations ..... 95
4.5.6 Supplementary Analysis on Adherence to Sources and Convergent Recommendations ..... 96
4.5.7 Discussion ..... 97
4.6 CONTRIBUTIONS ..... 98
4.6.1 Theoretical Contributions ..... 98
4.6.2 Practical Contributions ..... 100
4.6.3 Limitations and Future Research ..... 101
4.6.4 Conclusions ..... 101
CHAPTER 5: CONCLUSIONS ..... 103
5.1 SUMMARY OF THE THESIS ..... 103
5.2 CONTRIBUTIONS ..... 105
5.2.1 Theoretical Contributions ..... 105
5.2.2 Practical Contributions ..... 107
5.3 LIMITATIONS AND SUGGESTIONS FOR FUTURE RESEARCH ..... 109
5.4 CONCLUSIONS ..... 111
REFERENCES ..... 112

LIST OF TABLES

Table 2-1 Low Level of Trade-Off Transparency ..... 25
Table 2-2 Medium Level of Trade-Off Transparency ..... 25
Table 2-3 High Level of Trade-Off Transparency ..... 26
Table 2-4 Measurement Items of the Dependent Variables ..... 28
Table 2-5 Manipulation Check ..... 29
Table 2-6 ANOVA Summary Table for Perceived Enjoyment ..... 31
Table 2-7 ANOVA Summary Table for Product Diagnosticity ..... 31
Table 2-8 MANOVA Contrast Results ..... 31
Table 2-9 Loading and Cross Loading of Measures ..... 34
Table 2-10 Internal Consistency and Discriminant Validity of Constructs ..... 34
Table 2-11 Hypotheses Testing Results ..... 36
Table 3-1 Experimental Design ..... 58
Table 3-2 Experimental Procedure ..... 60
Table 3-3 Manipulation of Recommendation Sources ..... 61
Table 3-4 Different Types of RA Feedback ..... 63
Table 3-5 Measurement Items of the Dependent Variables ..... 65
Table 3-6 Loading and Cross Loading of Measures ..... 66
Table 3-7 ANOVA Summary Table for Perceived Decision Quality ..... 68
Table 3-8 Mean and Standard Deviation for Decision Quality ..... 68
Table 3-9 Contrast Results on Perceived Decision Quality ..... 68
Table 3-10 ANOVA Summary Table for Perceived Decision Effort ..... 69
Table 3-11 Mean and Standard Deviation ..... 70
Table 3-12 Contrast Results on Perceived Decision Effort ..... 70
Table 3-13 Hypotheses Testing Results ..... 70
Table 4-1 Manipulation of Recommendation Sources ..... 87
Table 4-2 Manipulation of RA's Output ..... 89
Table 4-3 Measurements ..... 91
Table 4-4 Statistics for Number of Sources Accessed ..... 92
Table 4-5 Perceived Convergence ..... 92
Table 4-6 Statistics for Recommendation Adherence ..... 93
Table 4-7 Hypotheses Testing Results ..... 95
Table 4-8 Percent of Selecting an Overlapped Product ..... 96
Table 4-9 ANOVA Summary for Perceived Decision Quality (RAs) ..... 96
Table 4-10 ANOVA Summary for Perceived Decision Quality (Consumers) ..... 97
Table 4-11 ANOVA Summary for Perceived Decision Quality (Experts) ..... 97

LIST OF FIGURES

Figure 2-1 Proposed Theoretical Model ..... 16
Figure 2-2 Interface Design ..... 18
Figure 2-3 Research Model ..... 23
Figure 2-4 Effect of Trade-off Transparency on Perceived Enjoyment ..... 32
Figure 2-5 Effect of Trade-off Transparency on Product Diagnosticity ..... 32
Figure 2-6 Results of Research Model ..... 35
Figure 2-7 Time in Preference Indication and Product Evaluation ..... 37
Figure 3-1 RA's Preference Elicitation ..... 62
Figure 3-2 Recommendations from Experts ..... 62
Figure 3-3 Attribute-based Feedback ..... 63
Figure 3-4 Alternative-based Feedback ..... 64
Figure 3-5 Integrated Feedback ..... 64
Figure 3-6 Results on Perceived Decision Quality ..... 69
Figure 4-1 Screen of the Initial Product Page ..... 87
Figure 4-2 Website with Three Recommendation Sources Accessed ..... 88
Figure 4-3 Website with RA's Recommendations Generated ..... 88

ACKNOWLEDGEMENTS

My dissertation would not have been possible without the help of so many people in so many ways. I am especially grateful to my research supervisors, Dr. Izak Benbasat and Dr. Ron Cenfetelli. I thank them for their unfailing guidance and support, both academic and personal, as I continued this journey at the University of British Columbia. Not only were they readily available to me, but they always read and responded to my work and questions more quickly than I could have hoped. Their comments were always extremely insightful, helpful, and relevant.
I will always be grateful to Izak and Ron, who professionally supervised my graduate work from the outset to the very end.

Many people on the faculty of the Sauder School of Business assisted and encouraged me in various ways during my course of studies. I would like to express my deep appreciation to my committee member, Prof. Charles Weinberg, who has generously given his time and expertise to better my work. I thank him for his contribution and good-natured support.

My graduate studies would not have been the same without the social and academic support provided by all my student-colleagues in the Sauder School of Business.

I would also like to thank the Natural Sciences and Engineering Research Council and the Social Sciences and Humanities Research Council for their financial support of my dissertation.

Finally, I come to the most personal source of gratitude. I heartily appreciate my wife, my parents, and my son for their unwavering love and support. They have been my enduring source of strength.

CHAPTER 1: INTRODUCTION

1.1 RESEARCH MOTIVATION AND OVERALL OBJECTIVES

Online Recommendation Agents (RAs) that assist consumers in choosing the "right" products are "among the most typical outcomes of the Internet, Web, and e-commerce revolution" (Ricci and Werthner 2007, p. 6), owing to two basic features of the online shopping environment. First, e-commerce Web sites offer an enormous number of products; a web retailer such as Amazon.com, for example, offers over 18 million items to its customers. Second, a large population of users searches for and compares products online (Ricci and Werthner 2007). To illustrate, the comparison-shopping engine NexTag.com ranks in the top ten in all search engine traffic, a ranking that includes general search engines such as Google (Nielsen 2010). RAs provide assistance by eliciting the purchasing needs of consumers and then making product recommendations that satisfy these preferences (Xiao and Benbasat 2007).
As e-business matures, the effectiveness enabled by RAs is recognized as a key success factor for organizations confronting growing competitive pressures (Ahn 2007; Palanivel and Sivakumar 2010). This dissertation investigates the design factors that enhance the utilization and effectiveness of RAs.

The right user interface has the potential to facilitate the Information Technology (IT) adoption process (Kamis et al. 2010; Seneler et al. 2009) and to increase customer loyalty (Berman 2002; Cyr et al. 2007; 2009), whereas an inappropriate user interface can result in frustrated consumers and lost sales. According to Andrew Coates, CEO of AgentArts, a leading personalization and recommendation technology company, "the way the storefront presents recommendations is crucial in generating benefits. The user interface layer is the critical difference as to how visible and accessible recommendations really are and, therefore, how easy it is for consumers to discover new content without having to do any work" (Leavitt 2006, p. 15). Gretzel and Fesenmaier (2007) emphasized the constructive nature of the human mind and the importance of the cues provided in the course of a user-technology interaction. They stressed the importance of conceptualizing user-technology relationships as quasi-social interactions, which means that interactions with RAs have to be understood as a form of discourse. While the importance of the RA's user interface has been emphasized by practitioners (e.g., Leavitt 2006, p. 15) and scholars (e.g., Gretzel and Fesenmaier 2007), the user interface used to implement an RA, and the influence of that interface on various outcome measures, are not well understood (Kamis et al. 2010; Hess et al. 2009). As such, research on RAs has shifted considerably towards a human-computer interaction or communication perspective (Ricci and Werthner 2007).
Aligned with this interface and communication emphasis, the objective of the dissertation is to evaluate and improve the interfaces (input, process, and output) that facilitate the communication between consumers and RAs, in order to enhance the quality of their purchasing decisions and to reduce their decision effort. The dissertation contains three studies, each of which aims to improve the input, process, or output interface of an RA, respectively. In the following sections, I first review the Input-Process-Output Framework and the relevant literature; I then state the specific research questions for each study and, finally, outline the thesis.

1.2 INPUT-PROCESS-OUTPUT FRAMEWORK

The Input-Process-Output Framework integrates the three studies described in this dissertation. According to Xiao and Benbasat (2007), the design of RAs consists of three major components: input (the stage at which users' preferences are elicited), process (the stage at which recommendations are generated by the RA), and output (the stage at which the RA presents recommendations to the users). In the following, I describe each component, review the relevant literature, and then explain how the three studies correspond to these three components and address a gap in the literature.

1.2.1 INPUT

Preference elicitation (input) is a central function of an RA; it allows the identification of products appropriate to a consumer's interests (Xiao and Benbasat 2007). Specifically, through the preference elicitation process, users can indicate the preferential values they place upon product attributes and the importance they attach to each attribute.

The manner in which a consumer's preferences are gathered during the input stage can significantly influence his or her online decision making. One way to classify preference elicitation methods is as needs-based or feature-based.
The needs-based method asks consumers to provide information about themselves and how they will use the product (e.g., “what do you want to do with your laptop computer?”), while the feature-based method asks them to specify what features they want in a product (e.g., “how many gigabytes of hard drive space do you want?”) (Grenci and Todd 2002; Stolze and Nart 2004; Komiak and Benbasat 2006; Felix et al. 2001). Users with low product knowledge should prefer the needs-based preference elicitation method, as they might not have the product knowledge to specify the detailed features (e.g., gigabytes) required by the feature-based method (Felix et al. 2001). However, for users with high product knowledge, the needs-based method may prevent them from specifying exact product attribute values, and will thus be considered less useful and more difficult to use than the feature-based method. In addition, the needs-based method has to translate a user’s needs into attribute specifications. Users with high product knowledge may consider the needs-based method less transparent, and thus less trustworthy, than the feature-based method that directly translates users’ specifications into recommendations (Xiao and Benbasat 2007).

Preference elicitation methods can also be divided based upon the control that a user can exert over the system. Under a system-controlled mode, the RA determines the types of user needs and preferences to be elicited and the order in which they are elicited. Users need to answer all of the questions posed by the RA, which are presented to them in a fixed order. In contrast, a user-guided RA allows users to control the types, as well as the order, of the needs and preferences that they wish to articulate (Nakatsu and Benbasat 2006; Wang 2005).
It has been found that the system-controlled method leads to a higher number of iterations and greater perceived decision effort than a user-guided method, because the system-controlled method does not allow a user to freely choose which attribute preference to articulate (Ariely 2000; Song et al. 2007), and it is challenging for a system-controlled method to identify a priori which attributes are relevant for a certain user’s current needs (Nakatsu and Benbasat 2006; Wang 2005).

One main stream of research on preference elicitation has focused on the amount of user input, which refers to the amount of preference information (e.g., desired product features, importance weighting) provided by the users prior to receiving recommendations (Xiao and Benbasat 2007). The variables involved include the number of product attributes that are included in the RA’s preference-elicitation interface, and whether or not the importance weighting of each attribute should be elicited. For example, Wan et al. (2010) found that users’ perceived decision quality becomes lower as the number of product attributes elicited increases from five to ten. Tan et al. (2010) evaluated three types of preference elicitation methods (preference indication of a single attribute for each screening attempt, preference indication of multiple attributes for each screening attempt, and the latter with importance weighting) as well as the moderating role of the number of product attributes elicited (low vs. high). They discovered that when the number of product attributes is low, the condition of multiple attributes without importance weighting is perceived to have the same decision quality as the condition of multiple attributes with importance weighting. However, the latter leads to higher perceived decision quality than the former when the number of product attributes is high.
These studies on the RA’s input have focused on the number of product attributes that are included in the RA’s preference-elicitation interface (Tan et al. 2010; Wan et al. 2010) and the inclusion or exclusion of the importance weighting (Wan et al. 2010), while investigations regarding the impact of the trade-off relationships among the product attributes remain scarce. Explicit consideration of the trade-offs among product attributes is helpful for more accurate decision making (e.g., Frisch and Clemen 1994; Delquie 2003). Without a reasonable understanding of attribute trade-offs, when expressing their needs and preferences to RAs, users may overestimate their real needs and end up being presented with unmatched or unfulfilled product choices by RAs. Consequently, users might have negative perceptions towards the RA and discontinue utilizing it (Wang and Benbasat 2007). Thus, a gap that needs to be addressed in the literature is how to design the RA input interface to make the product attribute trade-offs more explicit to consumers. Study 1 addresses this literature gap by focusing on an important dimension of RA design at the preference elicitation stage: namely, trade-off transparency of product attributes. The traditional design of RAs is such that they at most explain to users in general terms that certain attributes are related to each other and that users should not overestimate their needs when indicating their product attribute preferences to the RA (Wang and Benbasat 2007; Lee and Benbasat, forthcoming). RAs typically do not inform users of the precise relationships among the product attributes. In contrast, Study 1 proposes the trade-off-transparent RA, an RA that shows how the product attributes are related to each other and can interactively respond to the user’s attribute preference indication.
That is, when a user indicates a value for one product attribute (e.g., screen size of a laptop), the values of the related product attributes (e.g., weight) are adjusted accordingly to reflect the underlying relationships among the product attributes. Three levels of trade-off transparency (low, medium, and high) are created by manipulating the number of trade-off relationships revealed by the RA. At the low level of trade-off transparency, seven trade-off relationships between the seven non-price attributes and the price attribute are revealed by the RA. At the medium level, the RA reveals eight trade-off relationships in addition to those specified in the low trade-off transparency condition. At the high level, the RA reveals a further eight trade-off relationships beyond those specified in the medium trade-off transparency condition. It is expected that, overall, the trade-off transparency feature will lead to a higher level of perceived enjoyment and product diagnosticity than the absence of such a feature, and that the medium level of trade-off transparency will lead to the highest level of perceived enjoyment and product diagnosticity.

1.2.2 PROCESS

Process is concerned with how a user interacts with an RA. Prior research has recognized a two-stage decision process in which consumers first select a subset of available products to form a consideration set, and then choose a single product to purchase from that reduced set of products (Gensch 1987; Shocker et al. 1991; Payne 1982; Payne et al. 1988; Xiao and Benbasat 2007). The first stage is the initial screening of available alternatives; the second stage is recommendation refinement, based upon user feedback on the desirability of the initial recommendations or updated preferences on product attributes.
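The two-stage process just described can be made concrete with a small, purely illustrative sketch. The product data, attribute names, and similarity rule below are hypothetical assumptions for exposition only; they do not correspond to the experimental systems used in this dissertation.

```python
# Illustrative sketch of the two-stage consumer decision process: an initial
# screening pass forms a consideration set, and a second refinement pass
# re-ranks that set using feedback on a shortlisted product.

def screen(products, preferences):
    """Stage 1: keep only alternatives that satisfy every stated preference."""
    return [p for p in products
            if all(p.get(attr) == value for attr, value in preferences.items())]

def refine(consideration_set, liked_product):
    """Stage 2: rank the consideration set by similarity to a product the
    user marked as desirable (a simple attribute-overlap score)."""
    def similarity(p):
        return sum(1 for attr in liked_product if p.get(attr) == liked_product[attr])
    return sorted(consideration_set, key=similarity, reverse=True)

laptops = [
    {"brand": "A", "screen": 15, "weight": "heavy"},
    {"brand": "B", "screen": 13, "weight": "light"},
    {"brand": "C", "screen": 15, "weight": "light"},
]
shortlist = screen(laptops, {"screen": 15})   # stage 1: screening
ranked = refine(shortlist, laptops[2])        # stage 2: refinement
print([p["brand"] for p in ranked])           # → ['C', 'A']
```

The sketch shows why the two stages are distinct: screening is a hard filter on stated preferences, whereas refinement re-orders the surviving alternatives in response to feedback.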
A user feedback mechanism, such as the one just described, is an important component of a user’s decision making (West et al. 1999; McGinty and Smyth 2007). Most of the extant RAs facilitate the initial screening of available alternatives (i.e. the first stage of the decision process) and provide relevant product recommendations (Wang and Benbasat 2009; Kamis et al. 2008; 2010; Hess et al. 2009), but have largely overlooked feedback mechanisms (i.e. the second stage). For example, Kamis et al. (2008) compared two types of decision support systems (DSS): attribute-based and alternative-based (Kamis et al. 2008, 2010). An alternative-based DSS simply displays all of the possible product alternatives to the users, who can then choose the product alternatives that they prefer. No feedback mechanism was provided. On the other hand, an attribute-based DSS presents users with all of the product attributes that can be customized, along with all of the possible values for each attribute. Users then customize their product by selecting a value for each customizable attribute, and the attribute-based DSS then presents them with the customized product. Again, users do not have the opportunity to provide feedback about the presented products.

However, the importance of feedback mechanisms has been recognized by researchers. Feedback plays a critical role in many personalized recommender systems, and the capability to adapt precisely to the needs of target users is the key feature of an RA that distinguishes it from traditional information-retrieval systems (McGinty and Smyth 2007). Further, West et al. (1999, p. 298) assert that “It would be valuable for agents (RAs) to ask screening criteria, and collect feedback on the desirability of suggested offerings,” because it fulfills users’ need for control. Therefore, Study 2 improves the process interface of an RA by adding a user feedback mechanism.
Users can thus indicate the desirability of the initial product recommendations or update their preferences on product attributes in order to obtain better recommendations. Attribute-based feedback allows users to provide or update their preferences on one or more individual attributes of a product after examining the initial recommendations they receive, whether from consumers, experts, or an RA. Alternative-based feedback allows the user to indicate a preference for one product in the recommended set over another, following the initial recommendations they receive; the system then returns products similar to the one the user prefers. As a hybrid of attribute-based and alternative-based feedback, integrated feedback not only allows users to indicate a preference for one product alternative over another, it also optionally allows users to provide feedback on one or more product attributes of the preferred alternative (McGinty and Smyth, 2007).

Specifically, three types of RA feedback (attribute-based feedback, alternative-based feedback, and integrated feedback) are compared in terms of perceived decision effort and decision quality. In addition, we examine how the effects of the three feedback mechanisms are moderated by the preference elicitation process (presence or absence). Customers do not need to indicate their initial product preferences when using expert or consumer recommendations, but those recommendations are more general. In contrast, a user of an RA needs to provide input regarding his/her preferences on product attributes, which the RA uses as criteria when searching for products that match the user’s preferences (Xiao and Benbasat 2007). However, while this input process can lead to personalized product recommendations, online consumers might not have the motivation or ability to deliberate on product attributes accurately before they see the product alternatives (Murray et al. 2010).
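As a purely illustrative sketch (the data model and function names are hypothetical and are not those of the systems built for Study 2), the three feedback mechanisms can be contrasted as three different ways of updating the information used to re-generate recommendations:

```python
# Hypothetical sketch contrasting the three feedback mechanisms described
# above. Products and preferences are simple attribute dictionaries.

def attribute_feedback(preferences, updates):
    """Attribute-based feedback: the user revises one or more individual
    attribute preferences after seeing the initial recommendations."""
    revised = dict(preferences)
    revised.update(updates)
    return revised

def alternative_feedback(products, preferred):
    """Alternative-based feedback ('more like this'): rank the other
    products by a simple attribute-overlap similarity to the preferred one."""
    def similarity(p):
        return sum(1 for a in preferred if p.get(a) == preferred[a])
    return sorted((p for p in products if p != preferred),
                  key=similarity, reverse=True)

def integrated_feedback(products, preferred, updates):
    """Integrated feedback: start from the preferred alternative, override
    some of its attributes, then search for similar products."""
    target = dict(preferred)
    target.update(updates)
    candidates = [p for p in products if p is not preferred]
    return alternative_feedback(candidates, target)

tvs = [
    {"brand": "X", "size": 40, "panel": "LCD"},
    {"brand": "Y", "size": 46, "panel": "LCD"},
    {"brand": "Z", "size": 46, "panel": "plasma"},
]
# The user likes tvs[1] but would prefer a plasma panel:
result = integrated_feedback(tvs, tvs[1], {"panel": "plasma"})
print(result[0]["brand"])  # → Z
```

The design difference is visible in the function signatures: attribute-based feedback operates on the preference profile, alternative-based feedback operates on a chosen product, and integrated feedback combines both inputs.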
Given that trade-offs are involved in the usage of different recommendation sources, we examine how recommendation sources moderate the effects of the feedback mechanisms.

1.2.3 OUTPUT

Output is the stage at which the RA presents its recommendations to users (Xiao and Benbasat 2007, p. 146). Most of the previous research on the RA’s output has focused on the acceptance of the RA’s recommendation content and on how the number of recommendation alternatives influences decision processes and decision outcomes. For example, RA users have been shown to be much more likely to accept the recommended alternative from the RA (Senecal 2003; Wang 2005; Wang and Benbasat 2009). As the number of recommendation alternatives increased, users’ cognitive effort also increased (Basartan 2001), and consequently, they made poor product choices (Diehl 2003; Wan et al. 2010). Further, it has been shown that both perceived usefulness and perceived enjoyment follow an inverted U-shaped curve as the recommendation set size increases (Kamis et al. 2008).

However, previous studies have not evaluated the acceptance of the RA’s recommendation content when the RA’s recommendations are presented together with other competing recommendation sources, such as experts and consumers; thus, it is unclear whether the RA can still maintain the high degree of acceptance found previously, given the competing recommendations available. If not, which recommendation sources (RAs vs. consumers and experts) appeal more to which types of users, and under what circumstances? Study 3 of this dissertation presents the RA’s advice together with expert and consumer recommendations simultaneously on one website, and evaluates whether the RA is preferred over experts and consumers for users possessing a high or low degree of product knowledge and task involvement.
As one recommendation source may not fit all types of users, this research will provide implications for online merchants regarding the implementation of recommendation sources. Further, as no studies have examined the relative effect of one source over the others when these three sources (RAs, consumers, and experts) are presented to users together, the impact of convergent recommendations among the sources is largely unknown. Thus, Study 3 also manipulates different types of RA output recommendations so that they are convergent or divergent with the consumer and/or expert recommendations. The objective is to evaluate the impact of recommendation convergence or divergence across the three sources on users’ decision behaviour and perceived decision quality.

As summarized in Xiao and Benbasat’s (2007) comprehensive review of RAs, the key dependent variables in RA research are perceived decision quality and perceived decision effort. Accordingly, the first two studies include both of these perceptual variables. The third study mainly focuses on users’ objective choice behaviour (i.e. what source users rely on to make their final decision) as well as perceived decision quality.

1.3 RESEARCH QUESTIONS

This dissertation focuses on the following three broad research objectives and the corresponding research questions:

• Improving the preference elicitation interface between consumers and RAs (i.e. the manner in which an RA should provide trade-off transparency of product attributes);
1) Will the trade-off-transparency feature improve consumers’ perceived product diagnosticity and perceived enjoyment?
2) Which level of trade-off transparency will lead to optimal perceived product diagnosticity and perceived enjoyment?
3) How does trade-off transparency help users to achieve both better perceived decision quality and lower perceived decision effort simultaneously?
• Improving the process interface (i.e.
which type of feedback mechanism is best, and under what circumstances);
4) How do the recommendation sources (RAs vs. experts and consumers) moderate the effect of different types of user feedback (attribute-based feedback, alternative-based feedback, and integrated feedback) on perceived decision quality?
5) Are attribute-based feedback, alternative-based feedback, and integrated feedback better than the absence of any feedback in terms of perceived decision effort?
• Evaluating the impact of the RA’s output (convergence with consumer and expert recommendations) on users’ choice behaviour and perceived decision quality, and determining for which types of users and under what circumstances the RA is preferable to the recommendations from experts and consumers.
6) Will product knowledge and user involvement influence users’ adherence to the recommendations of one source (e.g., RA) over the others (e.g., experts and consumers)?
7) What is the impact of adherence to certain recommendation sources (RAs vs. experts and consumers) on perceived decision quality?
8) How does recommendation convergence among multiple sources influence consumers’ choice selection and perceived decision quality?

1.4 RESEARCH METHOD

To investigate the above research questions, three experiments were conducted. Experiment 1 addressed the first three research questions. A trade-off-transparent RA was designed and compared to the traditional RA. To examine which level of trade-off transparency should be revealed, three levels (low, medium, and high) of attribute trade-off relationships were manipulated. Specifically, the trade-off-transparent RA revealed 7, 15, and 23 unidirectional trade-off relationships among the product attributes in the low, medium, and high trade-off transparency treatments, respectively.

Experiment 2 addressed research questions 4 and 5.
Three types of user feedback were designed and incorporated into the recommendations from RAs, experts, and consumers to evaluate the differences among attribute-based feedback, alternative-based feedback, and the integrated feedback combining the two, as well as the moderating role of recommendation sources (RAs vs. consumers and experts).

Research questions 6-8 were addressed in Experiment 3. Websites were designed with three recommendation sources (i.e. RAs, experts, and consumers) presented to the subjects. Subjects might access a single recommendation source alone, or they might access all of the recommendation sources available in the assigned website. In addition, different types of RA output were manipulated to create recommendation convergence and divergence from consumer and/or expert recommendations in order to examine the impact of convergent recommendations on users’ choice behaviour and perceived decision quality.

The results of these three experiments will assist e-commerce merchants and RA designers in determining how to improve users’ understanding of product attribute trade-offs, how to design feedback mechanisms for an RA as well as for consumer and expert recommendations, and whether to implement three recommendation sources (RAs, consumers, and experts) simultaneously in a single website to enhance users’ perceived decision quality.

1.5 OUTLINE OF THE THESIS

The remainder of the thesis is structured as follows. Chapter 2 (Study 1) improves the input interface of RAs by proposing the trade-off-transparent RA and comparing it to the traditional RA in terms of perceived enjoyment and perceived product diagnosticity. Three levels of trade-off transparency were manipulated and their impact on perceived enjoyment and perceived product diagnosticity was evaluated. Chapter 3 (Study 2) improves the RA process through which users interact with an RA.
Three types of user feedback were designed and incorporated into the RA in order to evaluate the differences among attribute-based feedback, alternative-based feedback, and the integrated feedback combining the two, as well as the moderating role of the recommendation sources (RAs vs. consumers and experts). Chapter 4 (Study 3) examines which recommendation sources (RAs vs. consumers and experts) will be more effective for which types of users and under what circumstances, in terms of advice adherence and perceived decision quality. It also investigates the impact of recommendation convergence among different sources on users’ choices and perceived decision quality. Finally, Chapter 5 summarizes the results of the studies and outlines the major contributions of this dissertation.

CHAPTER 2 THE EFFECTS OF TRADE-OFF TRANSPARENCY OF PRODUCT ATTRIBUTES ON CONSUMERS’ PERCEIVED DECISION QUALITY AND PERCEIVED DECISION EFFORT TOWARDS RECOMMENDATION AGENTS

2.1 INTRODUCTION

The large variety and amount of products available on the Internet have given rise to the need for product recommendation agents (RAs) that assist consumers in choosing the “right” products (Ricci and Werthner 2007). RAs provide assistance by eliciting the purchasing needs of consumers and then making product recommendations that satisfy these preferences (Xiao and Benbasat 2007). As e-business matures, the effectiveness enabled by RAs is recognized as a key success factor for organizations confronted with growing competitive pressures (Ahn 2007; Palanivel and Sivakumar 2010). The right user interface could result in increased sales and customer loyalty (Berman 2002; Cyr et al. 2007; 2009), whereas an inappropriate one may result in frustrated consumers and lost sales.
According to Andrew Coates, CEO of AgentArts, a leading personalization and recommendation technology company, “The user interface layer is the critical difference as to how visible and accessible recommendations really are” (Leavitt 2006, p. 15). Gretzel and Fesenmaier (2007) emphasized the importance of the cues provided in the course of a user-technology interaction. While the importance of the RA’s user interface has been emphasized by practitioners (e.g., Leavitt 2006, p. 15) and scholars (e.g., Gretzel and Fesenmaier 2007), the user interface used to implement the RA and the influence of the interface on various outcome measures are still not well understood (Hess et al. 2009; Kamis et al. 2010).

A central function of RAs is to capture consumers’ product attribute preferences, which then allows for the identification of products appropriate for a consumer’s interests (Xiao and Benbasat 2007). Because of the conflicting values of product attributes (Goldstein et al. 2001), trade-offs are inherent in many purchase choices (Häubl and Murray 2003; Bettman et al. 1998). For example, a laptop computer’s faster processor comes with a higher price, and its larger screen size comes with a heavier weight. In consumer decision making, awareness of trade-offs is a double-edged sword. On one hand, consumers often avoid trade-offs because attributes might link to important self-goals, and trading them off (i.e. the realization that some attribute targets may not be fulfilled) could cause significant negative affect (Drolet and Luce 2004; Hedgcock and Rao 2009; Lee and Benbasat, forthcoming; Luce et al. 1999). On the other hand, explicit consideration of the trade-offs among product attributes is helpful for more accurate decision making (e.g., Frisch and Clemen 1994; Delquie 2003).
Without a reasonable understanding of attribute trade-offs, when expressing their needs and preferences to RAs, users may overestimate their real needs and end up being presented with unmatched or unfulfilled product choices by RAs. Consequently, users might have negative perceptions towards the RA and discontinue utilizing it (Wang and Benbasat 2007). Thus, a gap that needs to be filled in the literature is how to design the RA input interface to make the product attribute trade-offs explicit to consumers while at the same time making their RA usage experience enjoyable.

The first objective of Study 1 is to address this gap by improving the communication interface between an RA and its users during the preference elicitation stage (i.e. input stage) so that consumers will have better perceived enjoyment and perceived product diagnosticity. Specifically, we propose the trade-off-transparent RA, which interactively demonstrates the trade-offs among product attributes so that consumers can have an enjoyable use experience and a better understanding of these attribute trade-offs, allowing them, in turn, to provide better input to the RA. For example, a trade-off-transparent RA will reveal to users the trade-off relationship between the price and the screen size of an LCD HDTV. This trade-off transparency feature is a novel addition to RA design heretofore unconsidered in the RA research literature (e.g., Häubl and Trifts 2000; Hess et al. 2005-2006; Kamis et al. 2008, 2010; Komiak and Benbasat 2006; Tam and Ho 2005; Wang and Benbasat 2009). Harkening back to the trade-off dilemma mentioned above, in terms of trade-off avoidance and the benefit of trade-off awareness, we evaluate the effects of the trade-off-transparency feature in terms of perceived enjoyment and perceived product diagnosticity. Enjoyment is defined as the “intrinsic reward derived through the use of the technology or service studied” (Igbaria et al. 1996; Nysveen 2005).
It is an affective measure of a user’s perception of whether or not the interaction with a system is interesting and fun (Csikszentmihalyi 1977; Kamis et al. 2008; Koufaris 2002; Novak et al. 2000). Product diagnosticity is the extent to which a consumer believes that a system is helpful in terms of fully evaluating a product (Kempf and Smith 1998; Pavlou and Fygenson 2006; Jiang and Benbasat 2007a). It is a cognitive measure of the user’s experience with RAs.

It is well understood that user evaluations, behaviours, task performance, and decision outcomes change as the task complexity faced by users increases (Kamis et al. 2008; Jiang and Benbasat 2007a; Tan et al. 2010). With that in mind, we investigate the different levels of trade-off transparency as a form of task complexity. This can be illustrated with the selection of a laptop computer: when the level of trade-off transparency is low, users evaluate fewer trade-offs, such as those between price and a number of other attributes (e.g., the capacity of a hard drive). As the trade-off transparency increases, users become aware of additional trade-off relationships beyond those associated with price, such as the trade-off between weight and screen size. An individual may desire a larger screen size, but fulfilling that desire comes at the expense of increased weight. When the trade-off transparency increases further, a user becomes aware of the need to manage a larger number of trade-offs, and their evaluation of the transparency function might differ due to the increased effort required.

As a second objective, we draw upon cognitive load theory to predict that trade-off transparency should be maintained at a certain level to achieve optimal outcomes in terms of perceived product diagnosticity and perceived enjoyment, as too much trade-off transparency will exceed users’ cognitive limitations and its effect will be counterproductive.
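The interactive behaviour of the trade-off-transparent RA described above can be sketched, in purely illustrative form, as propagation along declared trade-off links. The attributes, link structure, and linear adjustment rates below are hypothetical assumptions rather than the rules implemented in the experimental RA.

```python
# A minimal sketch of the interactive trade-off transparency mechanism: when
# the user changes one attribute, the values of related attributes are
# adjusted through declared unidirectional trade-off links.

# Each link: (source attribute, target attribute, units of change in the
# target per unit increase in the source). Values are illustrative only.
TRADE_OFF_LINKS = [
    ("screen_size", "weight", 0.2),   # bigger screen -> heavier laptop
    ("screen_size", "price", 50.0),   # bigger screen -> higher price
    ("cpu_speed", "price", 120.0),    # faster CPU -> higher price
]

def apply_preference(profile, attribute, new_value):
    """Set one attribute and propagate its trade-offs to linked attributes."""
    delta = new_value - profile[attribute]
    profile = dict(profile)  # leave the caller's profile unchanged
    profile[attribute] = new_value
    for source, target, rate in TRADE_OFF_LINKS:
        if source == attribute:
            profile[target] = profile[target] + rate * delta
    return profile

laptop = {"screen_size": 13.0, "weight": 1.2, "cpu_speed": 2.0, "price": 900.0}
updated = apply_preference(laptop, "screen_size", 15.0)
print(updated["weight"], updated["price"])
```

Manipulating the number of entries in such a link table corresponds directly to the low, medium, and high trade-off transparency treatments: more links revealed means more attributes visibly reacting to each preference indication.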
The third objective of Study 1 is to augment the Effort-Accuracy Framework by proposing perceived enjoyment and perceived product diagnosticity as two antecedents of perceived decision quality and perceived decision effort, two central components of the Effort-Accuracy Framework. Prior research has overlooked the underlying mechanisms explaining why certain RA characteristics can lead to better perceived decision quality and lower perceived decision effort. Payne et al. (1993) prescribe that a consumer’s decision-making process is often a trade-off between the accuracy of the decision and the effort required to make the decision. Their research, among others, has supported such a trade-off. As a result, there is limited research in the RA literature that deals with how perceived decision quality can be improved without simultaneously increasing perceived decision effort. Grounded in the Stimulus-Organism-Response (S-O-R) Model, we develop a theoretical model that augments the Effort-Accuracy Framework by including the constructs of perceived enjoyment (an affective variable) and perceived product diagnosticity (a cognitive variable), which are used to explain the effects of trade-off transparency on both increased perceived decision quality and reduced perceived decision effort. To summarize, the three major research questions derived from the above discussion are:

1) Will the trade-off-transparency feature improve consumers’ perceived product diagnosticity and perceived enjoyment?
2) Which level of trade-off transparency will lead to optimal perceived product diagnosticity and perceived enjoyment?
3) How does trade-off transparency help to achieve both better perceived decision quality and lower perceived decision effort simultaneously?
The rest of Chapter 2 is structured as follows: Section 2.2 reviews the theoretical foundations; Section 2.3 develops the hypotheses; Sections 2.4 and 2.5 describe the experimental design and data analysis, respectively; and Section 2.6 concludes the chapter by discussing the theoretical and practical implications and limitations, as well as suggestions for future research.

2.2 THEORETICAL FOUNDATIONS

With the objectives of augmenting the Effort-Accuracy Framework and examining the effect of the trade-off transparency feature and its different levels, we review the Effort-Accuracy Framework (Payne et al. 1993), the S-O-R Model in environmental psychology (Mehrabian and Russell 1974), and Cognitive Load Theory (Sweller 1988).

2.2.1 EFFORT–ACCURACY FRAMEWORK

According to the theory of human information processing (Payne 1982; Payne et al. 1988), humans have a limited cognitive capacity to process information; thus, it is not feasible for them to evaluate all available alternatives in detail before making a choice. Therefore, individuals seek to attain a satisfactory, although not necessarily optimal, level of achievement (Simon 1955). The gist of the Effort–Accuracy Framework (Payne et al. 1993) is that although consumers have a number of available strategies for making choices, the strategy that is ultimately chosen depends on some compromise between the desire to make an accurate decision and the desire to minimize cognitive effort. Much behavioral research on online RAs has relied on the Effort–Accuracy Framework of cognition to investigate the beneficial impact of decision aids on reducing the cognitive effort expended by users and increasing their decision quality (accuracy) (Häubl and Trifts 2000; Hostler et al. 2005; Todd and Benbasat 1996). For example, Todd and Benbasat (1992, 1996) have demonstrated that RAs are mainly utilized by users to conserve effort, not necessarily to improve their decision quality. Schafer et al. (2002) and Fasolo et al.
(2005) found that features of RAs may lead to better decision quality, but also to higher decision effort. All of these studies suggest that it is challenging to obtain higher decision quality without also increasing decision effort. To address this challenge, we propose two decision process variables (perceived enjoyment and perceived product diagnosticity) to augment the Effort-Accuracy Framework and to explain how perceived enjoyment and product diagnosticity can lead to better decision quality and lower decision effort simultaneously. We next review the S-O-R Model to justify this augmentation.

2.2.2 STIMULUS-ORGANISM-RESPONSE MODEL

The S-O-R Model posits that the various stimuli within a shopping environment together affect a consumer’s cognitive and/or affective processes (organism), which in turn determine the consumer’s responses. Stimuli are cues external to the customers that rouse or incite them (Belk 1975). Stimuli may manifest themselves in different ways, such as a product display or the store environment (Jacoby 2002). In the context of online shopping, stimuli pertain to the design features of the sales websites with which consumers interact (Eroglu et al. 2003), such as the visual appeal (Parboteeah et al. 2009) and interactivity (Jiang et al. 2010) of a website. The organism refers to the intervening processes (e.g., cognitive and emotive systems) between the stimuli and the reaction of the consumer (Bagozzi 1986). Response refers to behavioral responses or internal responses, expressed as impressions and judgements of quality (Jacoby 2002, p. 55). The S-O-R Model has been extensively illustrated (Bagozzi 1986) and widely adopted in past research, with promising results, to model the impact of environmental stimuli on consumer responses in both offline and online shopping contexts (e.g., Baker et al. 1994; Eroglu et al. 2001; Eroglu et al. 2003; Fiore and Kim 2007; Sherman et al. 1997).
In the RA context, the various types of RA features can be deliberately employed by online merchants to create stimuli that activate consumers’ internal cognitive/affective processes, which, in turn, promote a favourable response toward an e-commerce website, for instance. In particular, several studies in Information Systems (IS) have drawn upon the S-O-R paradigm as a theoretical framework to examine how website features affect web consumers and their behaviour (Koufaris 2002; Parboteeah et al. 2009; Jiang et al. 2010). As such, the S-O-R Model serves as an appropriate overarching framework for our theoretical model (see Figure 2-1). Following the S-O-R Model, this study operationalizes “stimulus” as the trade-off-transparency feature of an online RA, “organism” as perceived product diagnosticity (cognitive system) and enjoyment (affective system), and “response” as perceived decision quality and perceived decision effort, two central components of the Effort-Accuracy Framework. In addition to its relevance for the present study, the use of the S-O-R framework has other advantages: (i) it provides a parsimonious and theoretically justified way of investigating the trade-off-transparency feature as an environmental stimulus; (ii) it enables the examination of the cognitive and affective reactions to the trade-off-transparency feature; and (iii) it provides a theoretical rationale for studying perceptions of decision quality and effort as a state of mind resulting from cognitive and affective changes of an organism, in contrast to past research that has studied effort and quality as a direct influence of RA features (e.g., Häubl and Trifts 2000).

Figure 2-1 Proposed Theoretical Model

2.2.3 COGNITIVE LOAD THEORY AND TASK COMPLEXITY

Cognitive Load Theory (Sweller 1988) is concerned with techniques for reducing working memory load in order to facilitate the changes in long-term memory associated with schema acquisition.
It states that the design of learning materials must, if they are to be effective, keep the cognitive load of learners at an appropriate level during the learning process. Wood (1986) proposed a comprehensive framework of task complexity. He specified perceived complexity as a linear combination of three dimensions that capture distinct elements of the information cues that make up a task stimulus: component complexity, coordinative complexity, and dynamic complexity. Perceived component complexity refers to users’ perceptions of the density and dissimilarity of information cues in the task stimulus. In a website context, dense cues are represented by long text, many images, and colors (Nadkarni and Gupta 2007). In the context of multi-alternative, multi-attribute problems, component complexity is the number of attributes for a certain product and/or the number of product alternatives, which represent different information cues (Tan et al. 2010; Kamis et al. 2008). Although component complexity has often been studied in RA research, coordinative complexity has been almost absent. Perceived coordinative complexity describes users’ perceptions of the range of, and interdependencies among, the different information cues in the task stimulus. In the context of a multi-alternative, multi-attribute problem, coordinative complexity is the interrelationship among the attributes. Perceived dynamic complexity refers to the ambiguity (the number of different possible interpretations of the same piece of information) and uncertainty (the clarity of action–outcome relationships) that individuals face in performing a task (Wood 1986). As we focus on coordinative complexity (i.e., the level of trade-off transparency), we keep component complexity and dynamic complexity constant across the experimental groups; this is further explained in the method section.
2.3 HYPOTHESIS DEVELOPMENT

2.3.1 INDEPENDENT AND DEPENDENT VARIABLES

In this study we examine the effectiveness of the trade-off-transparency feature of RAs. Horizontal scales, each with a “slider,” are used to represent the value of each attribute (with low values on the left side and high values on the right). A user can indicate the preferred level of an attribute by clicking and dragging the slider to a certain spot on the bar (see Figure 2-2). The feature unique to our trade-off-transparent RA is that the placement of the slider on a given level of an attribute leads to an immediate, real-time change, observable to the user, in one or more values of other related attributes. The number of attributes that shift automatically is a function of the degree of trade-off transparency that the RA is designed to have. Hence, users can directly learn of the trade-off relationships among attributes when using a trade-off-transparent RA. In contrast, the traditional RA (e.g., Kamis et al. 2008; 2010; Wang and Benbasat 2007; 2009) explains to users in general terms that certain attributes are related and that users should not overestimate their needs when indicating their product attribute preferences to the RA. The traditional RA does not automatically adjust the values of other attributes as the user selects a particular value for a given attribute.

Figure 2-2 Interface Design

In line with the S-O-R theoretical model, we propose that an online user’s interaction with the trade-off transparency feature leads to both cognitive and affective reactions. The two direct dependent variables are enjoyment and product diagnosticity. Perceived enjoyment is frequently studied in the IS literature to capture users’ affective feelings, and has been shown to be an important success factor in business-to-consumer e-commerce (Koufaris 2002; Kamis et al. 2008; Sun and Zhang 2008; van der Heijden 2004).
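To make the slider mechanism described above concrete, it can be sketched as a one-step propagation rule: moving one slider immediately shifts the sliders of related attributes. This is an illustrative sketch only; the attribute names, coupling weights, and linear update rule are assumptions for exposition, not the implementation used in the experimental system.

```python
# Hypothetical trade-off links: moving `source` shifts each `target` by
# weight * delta on a 0-100 slider scale. The names and weights below are
# illustrative assumptions, not the experimental system's actual values.
TRADE_OFFS = {
    "screen_size": [("weight", 0.8), ("price", 0.5)],
    "processor": [("price", 0.6), ("battery", -0.4)],
}

def move_slider(prefs, attribute, new_value):
    """Set `attribute` to `new_value` and propagate one step to related sliders."""
    delta = new_value - prefs[attribute]
    prefs[attribute] = new_value
    adjusted = []
    for target, weight in TRADE_OFFS.get(attribute, []):
        # Clamp each auto-adjusted slider to its 0-100 range.
        prefs[target] = max(0.0, min(100.0, prefs[target] + weight * delta))
        adjusted.append(target)
    return adjusted  # the related sliders the user saw move in real time

prefs = dict.fromkeys(
    ["screen_size", "weight", "price", "processor", "battery"], 50.0)
moved = move_slider(prefs, "screen_size", 80.0)  # weight and price shift too
```

The number of entries per attribute in a map like `TRADE_OFFS` is what the transparency manipulation varies: the more links revealed, the more sliders the user sees move in response to a single action.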
The importance of perceived product diagnosticity in the online shopping environment has also been demonstrated in multiple IS studies, through its influence on attitude toward the product (Jiang and Benbasat 2007b), attitude toward purchasing (e.g., Pavlou and Fygenson 2006), and intention to return to a website (Jiang and Benbasat 2007a). The theoretical model for the study is presented in Figure 2-1. The model indicates that the trade-off transparency feature stimulates both perceived enjoyment and product diagnosticity. Different levels of trade-off transparency have different impacts on enjoyment and product diagnosticity. Further, enjoyment and product diagnosticity positively influence perceived decision quality, and negatively influence perceived decision effort. Finally, higher perceived decision quality and lower perceived decision effort lead to consumers’ increased intentions to use the RA. The proposed model (Figure 2-1) is congruent with past applications of the S-O-R Model in that the basic framework (i.e., stimulus, organism, and response) is consistent with the environmental psychology literature. However, the model also diverges from past applications in that the cues used as the stimulus (i.e., the trade-off transparency of the RA), as well as the cognitive and affective reactions (i.e., perceived enjoyment and product diagnosticity), are grounded in the IS domain.

2.3.2 IMPACT OF THE TRADE-OFF-TRANSPARENCY FEATURE ON PERCEIVED ENJOYMENT AND PRODUCT DIAGNOSTICITY

Prior research on environmental stimuli in the e-commerce context found that an interface with stimulating cues has a positive influence on users’ affective feelings toward the content presented (Sproull et al. 1996; Parboteeah et al. 2009), and that users will form a positive affective feeling in relation to the interface. Animated images and icons have been found to be more meaningful and involving than simple text presentations (Morrison and Vogel 1998; Griffith et al. 2001).
As a stimulus, the trade-off transparency feature vividly shows how the product attributes are related to each other and can interactively respond to the user’s indication of attribute preferences. For example, if a user moves a slider to indicate a need for a “lighter” laptop computer, the previously chosen “large” value for screen size will automatically move to a “small” value. As users can exert their autonomy over the interactive and lively system, they are likely to experience a sense of fulfillment (Kettanurak et al. 2001). In addition, this interactive function is expected to draw more of a user’s attention, stimulate their sensory experience, and subsequently lead to positive affective responses (Jiang and Benbasat 2007b). Thus, the trade-off-transparency feature is expected to increase perceived enjoyment.

Hypothesis 1: The trade-off transparency feature positively influences perceived enjoyment.

In addition to the affective response, the S-O-R Model also posits that an environmental stimulus leads to changes in an individual’s cognitive systems, including the cognitive network, learning performance, etc. Compared to the traditional RA, which does not show exactly how a user’s attribute choices are related to, and constrained by, one another, the trade-off-transparent RA automatically adjusts the values of related product attributes when a certain value of an attribute is specified. For example, the trade-off-transparent RA demonstrates the trade-off relationships among product attributes, such as how price will be adjusted by changing the value of a non-price feature (e.g., hard drive). In summary, the trade-off-transparent RA shows the exact relationship among multiple pairs of attributes and provides a holistic view of the trade-off relationships. By understanding these trade-off relationships among product attributes, users will gain a better understanding of a product.
Thus, we posit that:

Hypothesis 2: The trade-off transparency feature positively influences product diagnosticity.

2.3.3 THE LEVELS OF TRADE-OFF TRANSPARENCY

In the previous section, we hypothesized the overall effects of the trade-off transparency feature and asserted that increased trade-off transparency leads to higher perceived enjoyment and higher product diagnosticity. However, there is likely a limit to the number of revealed trade-off relationships that can be communicated to the user before he or she becomes cognitively overwhelmed. According to Cognitive Load Theory (Sweller 1988), learning happens best under conditions that are aligned with human cognitive architecture. Research on working memory assumes that people have only limited working memory to process incoming information. Therefore, if one’s working memory is overloaded, the learning effect will deteriorate (Baddeley 1992). As the number of trade-off relationships revealed by the trade-off-transparent RA increases beyond a certain point, the RA will reach its limits in improving the user’s cognitive understanding and enjoyment. As a result, the user might leave behind an increasingly large number of unexamined trade-off relationships. Wood (1986) suggested that the relationship between task complexity and productivity is likely curvilinear. Increasing levels of complexity may initially lead to higher levels of challenge and have a positive effect on performance (e.g., Locke et al. 1981). However, past a certain level of complexity, the resulting demands on individuals may begin to exceed their capacities to respond, creating a condition of “overload” that leads to lowered performance (Wood 1986). Kamis et al. (2008) found that as the number of product alternatives increases (i.e., component complexity), perceived enjoyment and usefulness follow an inverted U-shaped curve.
Likewise, we expect that as the level of trade-off transparency increases, enjoyment and perceived product diagnosticity will also follow an inverted U-shaped curve. Thus, we propose:

Hypothesis 3: Perceived enjoyment with the trade-off-transparent RA will follow an inverted U-shaped curve as the level of trade-off transparency increases.

Hypothesis 4: Product diagnosticity with the trade-off-transparent RA will follow an inverted U-shaped curve as the level of trade-off transparency increases.

2.3.4 IMPACTS OF ENJOYMENT AND DIAGNOSTICITY ON DECISION QUALITY AND DECISION EFFORT

Perceived enjoyment can positively influence user attitudes and satisfaction with a system interface (e.g., Jiang and Benbasat 2007b; Griffith et al. 2001; Morrison and Vogel 1998). In the same vein, higher levels of enjoyment are believed to positively affect perceived decision quality. With greater enjoyment, users will more actively process the information provided (Andrews and Shimp 1990; Griffith et al. 2001), resulting in a greater likelihood of selecting a high-quality product alternative. Conversely, less enjoyment may hinder the processing of product information generated by the RA, consequently hampering perceptions of decision quality. Thus, we propose:

Hypothesis 5: Perceived enjoyment is positively related to perceived decision quality.

If an interface has features that engage and entertain users, we expect that the perceived decision effort associated with using the RA will be low. The rationale is that when users are in "a state of deep involvement with software," they are less able to register the passage of time while engaged in interaction (Agarwal and Karahanna 2000). In the RA’s case, when users find the RA interface interesting and appealing, they will be more involved in using the RA, and their perception of the time spent using the RA will be lower compared to those who find the interaction with the RA boring and dull.
Therefore, we propose:

Hypothesis 6: Perceived enjoyment is negatively related to perceived decision effort.

Decision quality is one of the primary objectives of a decision maker (Payne 1982). We posit that higher perceived product diagnosticity leads to higher perceived decision quality. A better understanding of the trade-off relationships among product attributes is important to prevent users from mis-specifying their product preferences, and to provide realistic input of attribute preferences to the RA. For example, users with better product diagnosticity are less likely to think that a laptop with a very large screen will be extremely light. If such an unrealistic combination of attribute values is desired, few, if any, matching products will be found, and users will consider the quality of the product recommendations to be low. On the other hand, if more realistic attribute values are input into the RA, the RA is more likely to recommend a better set of products that fit users’ needs; accordingly, perceived decision quality should be higher. Thus, we propose:

Hypothesis 7: Higher perceived product diagnosticity leads to higher perceived decision quality.

Higher product diagnosticity will lead users to provide more valid input to the RA, and subsequently to obtain a better set of recommended products. As such, they will be more likely to come across products that match their needs within the initial set of products recommended by the RA. In contrast, when unrealistic product preferences are provided by the user, the RA might not be able to recommend matching products. Subsequently, users need to spend more cognitive effort evaluating a longer list of product alternatives to find a desired product. Thus, we propose:

Hypothesis 8: Higher perceived product diagnosticity leads to lower perceived decision effort.
2.3.5 IMPACT OF DECISION QUALITY AND DECISION EFFORT ON INTENTION

Based on the Effort-Accuracy Framework, users are more likely to adopt an RA if the RA helps increase their decision quality and reduce the cognitive effort expended (Häubl and Trifts 2000; Hostler et al. 2005; Wang and Benbasat 2009). If decision quality is perceived to be low, users will probably discontinue utilizing the RA. Other things being equal, if using the RA requires additional effort, users would rather rely on themselves to make a decision. Therefore, we suggest:

Hypothesis 9: Perceived decision quality is positively related to intention to use RAs.

Hypothesis 10: Perceived decision effort is negatively related to intention to use RAs.

A model summarizing the hypotheses is presented in Figure 2-3.

Figure 2-3 Research Model

2.4. METHODOLOGY

A 4 (traditional RA serving as control; trade-off-transparent RA with low, medium, and high levels of transparency) X 2 (shopping for a friend vs. shopping for yourself) [1] between-subjects design was implemented to test the hypotheses.

[1] Previous research (Bettman et al. 1998) suggests that shopping for friends helps to minimize the effects of negative emotions when making attribute trade-offs, which would likely play an important confounding role if participants were asked to shop for themselves. However, the self-shopping condition enables us to compare the subject’s initial indication of product preferences with the subject’s final attribute preferences input to the RA. If there is more deviation in the trade-off-transparent RA conditions than in the control condition, it would indicate that a trade-off-transparent RA is effective in informing users about the trade-offs existing among product attributes. As users better understand the product attributes, they are able to update their initial preferences, which were indicated at the very beginning of the experiment. In short, the self-shopping condition provides more objective evidence to support the effectiveness of the trade-off-transparent RA in increasing users’ product diagnosticity, as discussed in Section 2.5.6.

2.4.1 MANIPULATION OF TRADE-OFF TRANSPARENCY

As the focus of the study was on the level of trade-off transparency (i.e., coordinative complexity), we kept the number of product attributes and alternatives (i.e., component complexity) constant across all the experimental groups to avoid a confounding effect between component complexity and coordinative complexity. Past research used 8, 54, and 150 product alternatives to represent low, medium, and high task complexity (Kamis et al. 2008). Thus, we chose 50 product alternatives to represent a moderate level of component complexity (Miller 1956; Jiang and Benbasat 2007a; Kamis et al. 2008), in order not to overwhelm users, since trade-off transparency (i.e., coordinative complexity) was manipulated at three levels. For each laptop, there were eight product attributes: price, hard drive, memory, processor, screen size, weight, battery, and video card. Miller (1956) offered a general rule of thumb that the span of immediate memory is about 7 +/- 2 items. Eight attributes falls within this range and represents a moderate level of component complexity. Fewer product attributes (e.g., 3 or 4) would limit the capability to manipulate trade-off transparency, while too many attributes (e.g., 10 or 12) could create a ceiling effect of task complexity, which would also diminish the effect of the trade-off transparency manipulation. Three levels of trade-off transparency (low, medium, and high) were created by manipulating the number of trade-off relationships revealed by the RA.
Specifically, the trade-off-transparent RA revealed 7, 15, and 23 [2] unidirectional trade-off relationships in the low, medium, and high trade-off transparency treatments, respectively. In the low trade-off transparency condition (Table 2-1), the 7 trade-off relationships [3] between the seven non-price attributes and the price attribute were revealed by the RA. This manipulation is based on the notion that the most common form of trade-off in most marketplace settings is that between price and product quality (Hedgcock and Rao 2009). Specifically, whenever a user indicates her product preference on any of the seven non-price attributes, the price automatically adjusts to reflect their underlying correlations [4], while the values of the other non-price attributes remain constant. Note that a change in price does not lead to changes in other product attributes, due to the very large number of possible combinations of non-price attributes. For RAs with a medium level of trade-off transparency (Table 2-2), the RA revealed eight trade-off relationships in addition to those specified in the low trade-off transparency condition. One example, as mentioned earlier, is the correlation between the screen size and weight of a laptop computer.

[2] Given 8 product attributes, 23 sensible unidirectional trade-off relationships are the maximum that we were able to identify. For example, no relationship is expected between the capacity of a hard drive and the video card quality, or between memory (RAM) and battery life.
[3] As an example, the increase in price with the increase of the capacity of the hard drive (or another product attribute) is counted as one trade-off relationship.
[4] The exact relationships between the product attributes were mainly based on the RA’s product database, which contains 50 product alternatives, all of which were available in the market at the time of the experiment.
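The relationship counts above (7, 15, and 23) can be made concrete by treating each revealed trade-off as a directed link between attributes. In the sketch below, the seven links into price follow the low-transparency design described in the text; the specific non-price pairs added at the medium and high levels are hypothetical placeholders chosen only to match the reported counts (the text confirms screen size and weight, hard drive and processor, hard drive and weight, and battery and weight among them, but not the full set or every direction).

```python
# Each revealed trade-off is a directed link: (source attribute, attribute
# whose slider auto-adjusts). Reported counts: low 7, medium 15, high 23.

NON_PRICE = ["hard_drive", "video_card", "memory", "processor",
             "screen_size", "weight", "battery"]

# Low transparency: each non-price attribute drives price (Table 2-1).
low = {(a, "price") for a in NON_PRICE}

# Medium: 8 additional directed links. Screen size <-> weight and
# hard drive -> processor are given in the text; the rest are placeholders.
medium = low | {
    ("screen_size", "weight"), ("weight", "screen_size"),
    ("hard_drive", "processor"), ("processor", "hard_drive"),
    ("video_card", "memory"), ("memory", "video_card"),
    ("processor", "battery"), ("battery", "processor"),
}

# High: 8 more. Battery <-> weight and hard drive -> weight are given in
# the text; the rest are placeholders.
high = medium | {
    ("battery", "weight"), ("weight", "battery"),
    ("hard_drive", "weight"), ("weight", "hard_drive"),
    ("processor", "screen_size"), ("screen_size", "processor"),
    ("memory", "processor"), ("processor", "memory"),
}

counts = (len(low), len(medium), len(high))  # (7, 15, 23)
```

Note that the placeholder pairs deliberately avoid the combinations that footnote [2] rules out (hard drive with video card, and memory with battery).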
Similar to the interpretation used with Table 2-1, Table 2-2 should be interpreted by looking at each column heading first, and then at the intersected rows. For example, when the value of the hard drive (column heading) changes, the values of the two other related attributes (price and processor) in the corresponding intersected rows change accordingly.

Table 2-1 Low Level of Trade-Off Transparency
Attributes    Price   Hard Drive   Video Card   Memory    Processor   Screen Size   Weight    Battery
Price                 Related      Related      Related   Related     Related       Related   Related
Hard Drive
Video Card
Memory
Processor
Screen size
Weight
Battery

Table 2-2 Medium Level of Trade-Off Transparency
Attributes Price Hard Drive Video Card Memory Processor Screen Size Weight Battery
Price  Related Related Related Related Related Related Related
Hard Drive
Video Card    Related
Memory   Related
Processor    Related       Related
Screen size          Related
Weight         Related  Related
Battery     Related
Note: Newly added trade-offs are bold

Table 2-3 High Level of Trade-Off Transparency
Attributes Price Hard Drive Video Card Memory Processor Screen Size Weight Battery
Price  Related Related Related Related Related Related Related
Hard Drive
Video Card    Related   Related
Memory   Related   Related
Processor    Related    Related   Related  Related Related
Screen size          Related
Weight    Related     Related Related  Related
Battery     Related  Related
Note: Newly added trade-offs are underlined

For the RA with a high level of trade-off transparency (Table 2-3), the RA revealed an additional eight trade-off relationships, beyond those specified in the medium trade-off transparency condition. One example is the relationship between the battery and weight of a laptop computer. As with the other tables, Table 2-3 should be interpreted by looking at each column heading first and then at the corresponding intersected rows.
For example, when the value of the hard drive attribute (column heading) changes, the values of the three other related attributes (price, processor, and weight) in the intersected rows change accordingly. In the control condition, when a user indicates a certain value for a product attribute, the values of the other attributes remain constant. However, subjects were told in written form that: “When you indicate your preferences, please bear in mind that the better the computer component (e.g., faster processor or larger hard disk), the more expensive it is; and, the larger the screen size, the heavier it is. Hence, be careful not to overestimate your need.”

2.4.2 MANIPULATION OF SHOPPING TASK

The subjects’ task was to choose a laptop computer for a friend or for themselves, depending on the group to which they were assigned. For those assigned to the condition of shopping for a friend, each subject was provided with his/her friend’s product requirements in written form, as follows:

“Martin likes to download and collect tons of classical videos onto his laptop computer. Martin often uses his computer to watch movies as well. He often uses the computer to run complicated statistical software, some of which may run for hours before producing the final output. Martin's eyesight is less than perfect, so he desires a large monitor screen. Martin travels a lot, and he plans to use his new computer when traveling. A lighter machine with sufficient battery will definitely make it easier for him. Martin prefers NOT to spend too much money on his new laptop computer.”

For those assigned to the condition of shopping for themselves, subjects were told to shop for a laptop of their own. In addition, they were asked to indicate their product preferences at the very beginning of the experiment. The instruction was as follows:

“Suppose you need to buy yourself a new laptop computer in the near future.
Please indicate how important each of the following computer attributes are to you, and what range of computer specifications you are planning for each attribute?”

Users then indicated a value range (e.g., $700-800) for each of the 8 attributes (e.g., price).

2.4.3 SUBJECTS, INCENTIVE, AND PROCEDURES

According to a power analysis for between-subjects designs, 160 subjects (thus 20 subjects per group) ensure statistical power of 0.8 for a medium effect size (f = .25) (Cohen 1988). A minimum $10 honorarium, plus an additional $25 for the 20 best performers, was provided to the participants. The criterion used in deciding the “best performers” was how logical and convincing their answers were to the questions asked. As was stated to the participants: “There are no right or wrong answers here, we are just interested in getting an honest and detailed description of your perception.” Previous research (e.g., Mao and Benbasat 2000) has indicated that this is important, as it serves to motivate subjects to view the experiment as a serious online experience session and to increase their involvement. Subjects were first required to fill in a questionnaire in order to record their demographic and control variables. Before subjects were randomly assigned to the experimental groups (control, and low, medium, and high levels of trade-off transparency), they were trained on how to use the slider to indicate their attribute preferences for a laptop computer. In the experimental websites, they could indicate their product attribute preferences to the RA by dragging the slider on each attribute bar (Figure 2-2). After subjects submitted their attribute preferences to the RA, the RA recommended a list of the computers that fit their needs. After that, they answered questions related to the dependent variables, such as perceived enjoyment and perceived decision quality.
2.4.4 MEASUREMENTS OF DEPENDENT AND CONTROL VARIABLES

For the survey instrument, we adapted established scales for enjoyment, product diagnosticity, perceived decision quality, perceived decision effort, and intention to use the RA from prior literature. All of the items of the survey and their sources are shown in Table 2-4.

Table 2-4 Measurement Items of the Dependent Variables (all items on 7-point scales)

Perceived Enjoyment (McQuarrie and Munson 1992; Griffith et al. 2001)
Using the recommendation agent to select a laptop was:
Unexciting … Exciting
Dull … Neat
Not Fun … Fun
Unappealing … Appealing
Boring … Interesting

Perceived Product Diagnosticity (Jiang and Benbasat 2007a, 2007b)
This recommendation agent was helpful for me to evaluate the laptop.
This recommendation agent was helpful for me to understand the performance of the laptop.
This recommendation agent was helpful in familiarizing me with the laptop.

Perceived Decision Quality (Widing and Talarzyk 1993)
Laptops that suited my preferences were suggested by the recommendation agent.
Laptops that best matched my needs were provided by the recommendation agent.
I would choose from the same set of alternatives provided by the recommendation agent on my future purchase occasion.

Perceived Decision Effort (Pereira et al. 2000; Wang and Benbasat 2009)
The laptop selection task that I went through was too complex.
The task of selecting the laptop computer using the agent was too complex.
Selecting the laptop computer using the agent required too much effort.
The task of selecting the laptop computer using the agent took too much time.

Intention to Use RA
Assuming I have access to the recommendation agent, I intend to use it next time I consider buying a laptop computer.
Assuming I have access to the agent, I predict I would use it next time I plan to purchase a laptop computer.
Assuming I have access to the agent, I plan to use it next time I consider buying a laptop computer. (Venkatesh et al. 2003; Wang and Benbasat 2009)

2.5. DATA ANALYSIS

2.5.1 SAMPLE

The sample for this study consists of 160 subjects recruited at a public university. Among them, 116 were females and 44 were males. Thirteen were non-students, 16 were graduate students, and the rest were undergraduates. The average age was 22.7. There was no significant difference in gender (Pearson chi-square value = 0.25, p = 0.96) or age (F = 1.55, p = 0.20) distribution across the treatment conditions. On average, the subjects had been using the Internet for 10.5 years, spending 31.3 hours on the Internet each week. In general, they were familiar with online shopping (5.21/7). The average reported knowledge level of the product used in the task, laptop computers, was 4.9/7. No significant differences were found across the treatment conditions regarding these four factors. These results indicate that the random assignment of subjects to the different experimental conditions was successful.

2.5.2 MANIPULATION CHECK

As a manipulation check, both the objective number of trade-offs demonstrated to users and users’ awareness of trade-offs were measured. We measured the objective total number of trade-offs demonstrated to each user by summing, over all slider movements initiated by the user, the number of related attributes automatically adjusted by each movement. This measure therefore takes into account the fact that users might not click and drag all eight attribute sliders in their assigned condition, and that some users might click and drag an attribute slider multiple times.
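The two objective measures just described amount to a simple aggregation over a user's logged slider movements. The sketch below illustrates the computation; the event log is invented for illustration (it loosely mimics a user in the medium-transparency condition, where a price-slider movement adjusts zero other attributes).

```python
# Sketch of the objective manipulation-check measures. Each entry in the
# log is one slider movement the user initiated, recorded as the number of
# related attributes the RA auto-adjusted in response. The log is made up.

def trade_off_counts(movements):
    """Return (total trade-offs demonstrated, average per slider movement)."""
    total = sum(movements)
    per_move = total / len(movements) if movements else 0.0
    return total, per_move

# Hypothetical log: 12 slider movements; the 0 is a price-slider movement,
# which does not propagate to other attributes.
log = [2, 3, 2, 0, 3, 2, 3, 2, 3, 3, 2, 3]
total, avg = trade_off_counts(log)
```

Averaged over users, numbers of this kind produce the per-condition figures reported in Table 2-5 (e.g., 28.97 total and 2.02 per movement in the medium condition).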
Table 2-5 Manipulation Check
Group        Total number of attribute     Average number of attribute        Users’ awareness of trade-offs
             trade-offs demonstrated       trade-offs per slider movement     (7-point scale, one-tailed test)
Control       0 a                           0 a                                4.61 a
Low TOT      10.32 b                        0.93 b                             4.91 b
Medium TOT   28.97 c                        2.02 c                             5.22 c
High TOT     45.14 d                        3.28 d                             5.52 d
Average      20.75                          1.53                               5.06
Note: TOT: trade-off transparency. Different superscripts in the same column indicate that the difference between means is significant (p < 0.05).

We also calculated the average number of trade-offs demonstrated per slider movement, derived by dividing the total number of attribute trade-offs displayed to the user by the total number of slider movements initiated by the user. This breakdown offers insights to RA designers regarding how trade-off transparency should be designed to achieve desirable outcomes (see Section 2.6.2 for details). The comparisons among the four trade-off transparency treatments in terms of the objective number of trade-offs demonstrated and users’ awareness of attribute trade-offs [5] are reported in Table 2-5. For each measure, each pairwise comparison between treatments was significant (p < 0.05), supporting that the manipulation of the trade-off transparency feature was successful.

2.5.3 EFFECT OF TRADE-OFF TRANSPARENCY LEVELS

A 2 X 4 ANOVA [6] on perceived enjoyment indicates that trade-off transparency significantly affects perceived enjoyment (Table 2-6), while the shopping task and the interaction effect were not significant. Similar results were obtained for product diagnosticity (Table 2-7). Contrast results detail the differences among the various levels of trade-off transparency for product diagnosticity and enjoyment (Table 2-8). Table 2-8 indicates that the medium and high levels of trade-off transparency were both perceived to have significantly higher perceived enjoyment than the control group; thus H1 is largely supported.
All three levels of trade-off transparency were perceived to have significantly higher product diagnosticity than the control group, fully supporting H2.
To test whether enjoyment with the trade-off-transparent RA follows an inverted U-shaped curve as the level of trade-off transparency increases, we conducted three contrast tests: one between the control group and the low level of trade-off transparency, one between the low and medium levels, and one between the medium and high levels. The differences (Table 2-8) for these three pairs were -0.255 (p>0.05), -1.08 (p<0.001), and 0.471 (p=0.047). Figure 2-4 shows that the relationship between enjoyment and the three levels of trade-off transparency resembles an inverted U-shaped curve. Similarly, for product diagnosticity (Figure 2-5, Table 2-8), the three contrast differences between the control group and the low level, between the low and medium levels, and between the medium and high levels were -0.525 (p<0.01), -0.40 (p=0.037), and 0.483 (p=0.012), respectively, which indicates an inverted U-shaped trend as well. Thus, H3 and H4 were supported.

Note 5: Trade-off awareness was specifically developed for this study and was measured using the following items: 1) I was aware of the trade-offs among the product attributes when I indicated my product preferences; 2) The number of product trade-offs to consider was high when I indicated my product preferences to the recommendation agent; 3) When I indicated my product preferences to the recommendation agent, there was a significant number of product attribute trade-offs to consider.
Note 6: As trade-off transparency is an ordinal variable, it was analyzed using ANOVA rather than PLS.

Table 2-6 ANOVA Summary Table for Perceived Enjoyment
Independent Variable | Sum of Squares | df | Mean Square | F | Sig.
Trade-off transparency | 42.838 | 3 | 14.279 | 12.787 | 0.000
Shopping task (shopping for friend vs. yourself) | 0.229 | 1 | 0.229 | 0.205 | 0.651
Trade-off transparency x shopping task | 2.167 | 3 | 0.722 | 0.647 | 0.586

Table 2-7 ANOVA Summary Table for Product Diagnosticity
Independent Variable | Sum of Squares | df | Mean Square | F | Sig.
Trade-off transparency | 21.567 | 3 | 7.189 | 11.037 | 0.000
Shopping task (shopping for friend vs. yourself) | 1.360 | 1 | 1.360 | 2.088 | 0.151
Trade-off transparency x shopping task | 1.113 | 3 | 0.371 | 0.570 | 0.636

Table 2-8 MANOVA Contrast Results
Contrast | Perceived Enjoyment (Estimate, Sig.) | Perceived Product Diagnosticity (Estimate, Sig.)
Low TOT vs. control | 0.255, 0.279 | 0.525, 0.010
Medium TOT vs. control | 1.338, 0.000 | 0.925, 0.000
High TOT vs. control | 0.867, 0.000 | 0.442, 0.021
Medium vs. low TOT | 1.08, 0.000 | 0.400, 0.037
High vs. low TOT | 0.612, 0.010 | -0.083, 0.661
High vs. medium TOT | -0.471, 0.047 | -0.483, 0.012
Note: TOT = trade-off transparency.

Figure 2-4 Effect of Trade-off Transparency on Perceived Enjoyment
Figure 2-5 Effect of Trade-off Transparency on Product Diagnosticity

2.5.4 TEST OF THE RESEARCH MODEL
We analyzed the structural model using Partial Least Squares (PLS) structural equation modeling, a component-based approach (Lohmöller 1989). PLS allows the simultaneous testing of the measurement model (the psychometric properties of the scales used to measure a variable) and the estimation of the structural model (the strength and direction of the relationships between the variables). We used the software SmartPLS 2.0 (Ringle et al. 2005) to conduct the analyses.
2.5.4.1 Measurement Model
Assessments of measurement models should examine: (1) individual measurement item reliability, (2) internal consistency, and (3) discriminant validity (Barclay et al. 1995).
To support individual item reliability, we performed confirmatory factor analysis to examine the loadings of the individual measurement items on their intended constructs and compared these to the recommended tolerance of 0.70 (Barclay et al. 1995; Chin 1998). All of the measurement items met this threshold (Table 2-9). To show internal consistency of the constructs, we calculated composite reliability and Cronbach's alpha for each construct. All met suggested tolerances (>0.70, Fornell and Larcker 1981) (Table 2-10). The diagonal elements in Table 2-10 represent the square roots of the average variance extracted (AVE) of the latent variables, while the off-diagonal elements are the correlations between latent variables. For adequate discriminant validity, the square root of the AVE of any latent variable should be greater than the correlations between that latent variable and the other latent variables (Barclay et al. 1995). All construct pairs met this requirement. Moreover, as shown in Table 2-9, the indicators of a given construct load more highly on it than the indicators of any other construct do, and these same indicators load more highly on their intended construct than on any other construct. This lends further support to discriminant validity.
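The Fornell-Larcker discriminant-validity criterion applied here can be expressed as a short check; the helper function is our own sketch, and the input values are rough approximations derived from the diagonal and off-diagonal entries of Table 2-10, shown only for illustration:

```python
# Sketch of the Fornell-Larcker criterion: sqrt(AVE) of each construct must
# exceed that construct's correlation with every other construct.
import math

# Illustrative AVEs (squares of the Table 2-10 diagonals) and correlations
# for three of the constructs; values approximated, not the study's raw data.
ave = {"ENJ": 0.83, "PD": 0.72, "DQ": 0.73}
corr = {
    ("ENJ", "PD"): 0.45,
    ("ENJ", "DQ"): 0.58,
    ("PD", "DQ"): 0.56,
}

def fornell_larcker_ok(ave, corr):
    """True if every construct's sqrt(AVE) exceeds all its correlations."""
    for (a, b), r in corr.items():
        if math.sqrt(ave[a]) <= abs(r) or math.sqrt(ave[b]) <= abs(r):
            return False
    return True
```

With the illustrative values above, every square-root AVE (about 0.85-0.91) exceeds the corresponding correlations, so the check passes, mirroring the pattern reported in Table 2-10.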
Table 2-9 Loading and Cross Loading of Measures
Item | TOT | ENJ | PD | DQ | DE | INT
Enjoyment (ENJ1) | 0.266 | 0.885 | 0.373 | 0.460 | -0.215 | 0.638
Enjoyment (ENJ2) | 0.283 | 0.931 | 0.366 | 0.530 | -0.303 | 0.601
Enjoyment (ENJ3) | 0.347 | 0.919 | 0.407 | 0.558 | -0.300 | 0.657
Enjoyment (ENJ4) | 0.218 | 0.871 | 0.459 | 0.543 | -0.296 | 0.571
Enjoyment (ENJ5) | 0.269 | 0.916 | 0.444 | 0.529 | -0.269 | 0.637
Product Diagnosticity (PD1) | 0.308 | 0.434 | 0.835 | 0.537 | -0.175 | 0.449
Product Diagnosticity (PD2) | 0.330 | 0.366 | 0.871 | 0.464 | -0.125 | 0.363
Product Diagnosticity (PD3) | 0.240 | 0.341 | 0.839 | 0.404 | -0.160 | 0.281
Decision Quality (DQ1) | 0.311 | 0.484 | 0.519 | 0.863 | -0.279 | 0.383
Decision Quality (DQ2) | 0.282 | 0.494 | 0.510 | 0.880 | -0.258 | 0.399
Decision Quality (DQ3) | 0.242 | 0.495 | 0.467 | 0.893 | -0.311 | 0.449
Decision Quality (DQ4) | 0.228 | 0.503 | 0.409 | 0.770 | -0.189 | 0.557
Decision Effort (DE1) | -0.061 | -0.271 | -0.167 | -0.307 | 0.931 | -0.313
Decision Effort (DE2) | -0.049 | -0.281 | -0.167 | -0.345 | 0.928 | -0.304
Decision Effort (DE3) | -0.112 | -0.327 | -0.268 | -0.384 | 0.919 | -0.326
Decision Effort (DE4) | -0.128 | -0.233 | -0.127 | -0.094 | 0.678 | -0.229
Intention to Use (INT1) | 0.347 | 0.657 | 0.432 | 0.515 | -0.330 | 0.975
Intention to Use (INT2) | 0.382 | 0.669 | 0.411 | 0.511 | -0.328 | 0.981
Intention to Use (INT3) | 0.394 | 0.686 | 0.441 | 0.524 | -0.318 | 0.979

Table 2-10 Internal Consistency and Discriminant Validity of Constructs
Construct | CR | CA | ENJ | PD | DQ | DE | INT
Enjoyment (ENJ) | 0.96 | 0.94 | 0.91 | | | |
Product Diagnosticity (PD) | 0.89 | 0.81 | 0.45 | 0.85 | | |
Decision Quality (DQ) | 0.91 | 0.87 | 0.58 | 0.56 | 0.85 | |
Decision Effort (DE) | 0.92 | 0.87 | -0.32 | -0.21 | -0.34 | 0.86 |
Intention to use RA (INT) | 0.98 | 0.97 | 0.68 | 0.44 | 0.52 | -0.34 | 0.98
Note: CR = composite reliability; CA = Cronbach's alpha. Diagonal elements are the square roots of AVE.

Figure 2-6 Results of Research Model (**p<0.01, ***p<0.001)

Since all of the items were measured with the same method, we first tested for common method variance using Harman's one-factor test (Podsakoff and Organ 1986).
Using a principal component analysis of all of the variables measured in the study, we found multiple factors with eigenvalues greater than one, and no single factor explained the majority of the variance, indicating that common method bias was not a significant concern. In addition, we tested for multicollinearity among the five variables in the structural equation model by calculating the variance inflation factor (VIF) for each construct in the model. All of the VIFs were lower than 2, well below the value of 10 at which Tabachnik and Fidell (1996) and Thatcher and Perrewé (2002) suggest collinearity biases results; thus, collinearity did not influence the results.
2.5.4.2 Structural Model
We next analyzed the structural model (Figure 2-3) to examine the significance and strength of the relationships hypothesized in H5-H10. Results shown in Figure 2-6 indicate that enjoyment positively influenced perceived decision quality (β=0.41; p<0.001) and negatively influenced perceived decision effort (β=-0.29; p<0.001), supporting H5 and H6. Product diagnosticity positively influenced perceived decision quality (β=0.37; p<0.001), supporting H7, but it did not influence perceived decision effort (β=-0.09; p>0.05); thus, H8 was not supported. Finally, perceived decision quality positively influenced intention to use the RA (β=0.47; p<0.001), and perceived decision effort negatively influenced that intention (β=-0.19; p<0.01), supporting H9 and H10. Perceived decision quality and perceived decision effort jointly explained 31 percent of the variance in intention to use the RA, with perceived decision quality contributing the larger proportion of that explanation. A summary of the outcomes of hypothesis testing is presented in Table 2-11.
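The collinearity check described above can be sketched generically as follows; the data are simulated and the function is a standard VIF computation (regressing each predictor on the others), offered as an illustration rather than the study's actual procedure:

```python
# Sketch of a variance inflation factor (VIF) check: for each predictor,
# VIF = 1 / (1 - R^2), where R^2 comes from regressing that predictor on
# the remaining predictors. Simulated data, not the study's.
import numpy as np

def vif(X):
    """X: (n, k) matrix of predictor scores; returns one VIF per column."""
    n, k = X.shape
    out = []
    for j in range(k):
        y = X[:, j]
        # design matrix: intercept plus all other columns
        Z = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        r2 = 1 - resid.var() / y.var()
        out.append(1.0 / (1.0 - r2))
    return out

rng = np.random.default_rng(0)
X = rng.normal(size=(160, 4))  # 160 subjects, 4 latent-variable scores
vifs = vif(X)                  # near-independent columns give VIFs near 1
```

With uncorrelated simulated columns the VIFs stay near 1; strongly correlated constructs would push them toward the problematic range above 10.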
Table 2-11 Hypotheses Testing Results
Hypothesis | Supported?
H1: The trade-off transparency feature positively influences perceived enjoyment. | Yes
H2: The trade-off transparency feature positively influences product diagnosticity. | Yes
H3: Perceived enjoyment with the trade-off-transparent RA will follow an inverted U-shaped curve as the level of trade-off transparency increases. | Yes
H4: Product diagnosticity with the trade-off-transparent RA will follow an inverted U-shaped curve as the level of trade-off transparency increases. | Yes
H5: Perceived enjoyment is positively related to perceived decision quality. | Yes
H6: Perceived enjoyment is negatively related to perceived decision effort. | Yes
H7: Higher perceived product diagnosticity leads to higher perceived decision quality. | Yes
H8: Higher perceived product diagnosticity leads to lower perceived decision effort. | No
H9: Perceived decision quality is positively related to intention to use RAs. | Yes
H10: Perceived decision effort is negatively related to intention to use RAs. | Yes

2.5.5 SUPPLEMENTARY ANALYSIS ON THE EFFECT OF PRODUCT DIAGNOSTICITY ON DECISION EFFORT
The insignificant result for the effect of product diagnosticity on perceived decision effort was not expected. We hypothesized that a better understanding of product attribute trade-offs leads to better-matched products being recommended in the first place, which should save users effort. However, this hypothesis was not supported. One possibility is that while users saved effort evaluating product recommendations because the RA recommended better-matched products, they also spent more effort understanding the trade-off relationships among product attributes, so that the two effects cancelled each other out. To further investigate this possibility, we compared the time spent in indicating product preferences and the time spent in evaluating product recommendations between the trade-off-transparent RAs and the control group (see Figure 2-7).
The results indicated that, as compared to the control group, subjects using the medium and high levels of the trade-off-transparency feature spent significantly more time (p<0.05) indicating their product preferences, but subjects using any of the three levels of the trade-off-transparency feature spent significantly less time (p<0.05) evaluating product recommendations. When these two time components, preference indication and product evaluation, were added up, no difference in total time was found between the trade-off-transparent RAs and the traditional RA. The bottom line is that while perceived decision effort is not affected either positively or negatively by product diagnosticity, perceived decision quality improved due to higher product diagnosticity.

Figure 2-7 Time in Preference Indication and Product Evaluation

2.5.6 SUPPLEMENTARY ANALYSIS ON PREFERENCE UPDATES
Recall that half of the subjects (N=80) were asked to shop for a product for themselves and the other half for a fictitious friend. The group of participants who shopped for themselves provided objective data for us to understand whether users' perceived product diagnosticity had indeed been affected by the use of a trade-off-transparent RA. Our reasoning is that if consumers better understand the product attribute trade-offs with the trade-off transparency feature, they will be more likely to update the product preferences indicated to the RA, as compared to the product preferences they indicated before the experiment. To measure the influence of the trade-off-transparent RA on users' preference updates, we identified the deviation of users' initial indication of product preferences at the beginning of the experiment (see Section 2.4.2) from the final attribute preferences they input to the RA (Figure 2-2).
For example, if a user initially indicated a preferred price of $300-$400 but input a price higher than $400 to the RA, we considered this one instance of deviation. We only considered deviations on those attributes rated "most important" or "important" by the users. We counted how many such instances of value deviation occurred for each subject and generated a deviation score for each subject; we then compared the deviation scores between the experimental groups using a trade-off-transparent RA and the group using the traditional RA. In short, a deviation score of 1 means that a subject modified his/her original preference range for one attribute considered "most important" or "important." The average score for the three trade-off-transparent RAs was 0.89 (0.58, 1.00, and 1.10 for the low, medium, and high levels of trade-off transparency, respectively), and the score for the control group was 0.29. The difference was statistically significant (p<0.001). This means that the trade-off-transparent RA was effective in informing users about the trade-offs among product attributes, so that users better understood the product attributes and changed the initial preferences they had indicated at the very beginning of the experiment. This objective measurement of preference updates corroborated Hypothesis 2 regarding the effect of the trade-off-transparent RA on perceived product diagnosticity; the correlation (0.34, p<0.01) between objective preference updates and perceived product diagnosticity lent support to this argument. Further, the correlation (0.23, p<0.05) between objective preference updates and perceived decision quality substantiated the argument for H7 (i.e., the effect of perceived product diagnosticity on perceived decision quality).
2.5.7 DISCUSSION
The results support the theorization that the trade-off-transparent RA is effective in improving perceived enjoyment and perceived product diagnosticity.
Levels of trade-off transparency make a difference in improving users' perceived enjoyment and product diagnosticity: while the medium level of trade-off transparency leads to the optimal levels of enjoyment and product diagnosticity, the high level generates a counterproductive effect. Perceived enjoyment improves perceived decision quality and reduces perceived decision effort, and product diagnosticity also positively influences perceived decision quality without compromising perceived decision effort. Perceived decision quality significantly increases intention to use an RA, and perceived decision effort significantly decreases this intention.
The effect of the trade-off transparency feature on product diagnosticity and perceived enjoyment is consistent with the S-O-R model, which asserts that environmental cues (stimuli) can change the organism's cognition and affect. The trade-off-transparent RA not only informs users which product attributes are related to each other, but also sheds light on exactly how users' attribute choices are related to, and constrained by, one another. This can make users aware of potential attribute trade-offs that they had not previously recognized, leading to better product diagnosticity. In addition, as more cues (dynamic animation vs. written text) were provided in the trade-off-transparent RA than in the traditional RA, it triggered more sensory channels and, in general, was more emotionally attractive (Nisbett and Ross 1980; Jiang and Benbasat 2007b). Moreover, given the exploratory nature of the experience during interaction with the trade-off-transparent RA, users' positive affect was aroused (Kettanurak et al. 2001), which led to high perceived enjoyment. Multiple IS studies have underscored the importance of both cognitive and affective perceptions in the context of online shopping (Gefen et al. 2003; Koufaris 2002; Van der Heijden 2003, 2004).
The evaluation criteria for the trade-off-transparent RA included both cognitive (product diagnosticity) and affective (enjoyment) measures of the user experience with an RA. Trading off a desired attribute represents a loss to the consumer (Lazarus 1991), and conventional wisdom seeks to minimize the extent of necessary trade-offs in a trade-off situation (Luce et al. 2001). The results indicated that, through proper interface design, the attribute trade-offs of a product can be appropriately communicated to consumers and lead to better cognitive and affective outcomes.
The predicted effect of product diagnosticity in reducing perceived decision effort was not supported. However, we expect that product diagnosticity could reduce perceived decision effort in the real world for two reasons. First, we provided 50 product alternatives in the experiment, a number chosen on the basis of prior literature so as not to overwhelm consumers. In reality, the number of products would be far greater than 50; e.g., Amazon.com offers over 4,000 laptop computer alternatives. Thus, the study is a very conservative test of the effects of RAs in terms of reducing perceived decision effort. As the number of product alternatives increases, the effects of higher product diagnosticity are expected to be more robust in reducing decision effort. Second, the supplementary analysis indicates that the time spent in understanding attribute trade-offs can be recouped in the product evaluation stage. We contend that the former time component is a one-shot investment and will be greatly reduced when the RA is used a second time or more. In other words, as users' familiarity with the trade-off-transparent RA increases over time, perceived decision effort will be greatly reduced in the long run.
2.6. CONTRIBUTIONS, LIMITATIONS, FUTURE RESEARCH, AND CONCLUSIONS
2.6.1 THEORETICAL CONTRIBUTIONS
The results of the study make important theoretical contributions.
First of all, the RA interface used to elicit product preferences is critically important (Kamis et al. 2010), but how to design RA interfaces that enable users to provide better input has not been established. Most extant RA studies (e.g., Häubl and Trifts 2000; Hess et al. 2005-2006; Kamis et al. 2008, 2010; Komiak and Benbasat 2006; Tam and Ho 2005; Wang and Benbasat 2009) only elicit users' product preferences without informing them of attribute trade-offs, in which case users might misspecify their product preferences, provide unrealistic attribute preferences to the RA, and end up being presented with unmatched product choices. Consequently, users might form negative perceptions of the RA and stop utilizing it (Wang and Benbasat 2007). We advance the RA literature by proposing the trade-off-transparent RA to address this issue, and we address this knowledge gap by applying S-O-R theory to explain the differences between the trade-off-transparent RA and the traditional RA.
We have assessed the impact of trade-off-transparent RAs relative to the traditional RA in terms of enjoyment and product diagnosticity. Trade-off awareness is beneficial to accurate decision making (Delquie 2003), but may generate unfavourable feelings (Luce et al. 1999). We have demonstrated that with the trade-off-transparent RA, users not only gain a better understanding of attribute trade-offs, but also experience positive emotions with the interface. This result highlights the feasibility of introducing trade-off awareness to users without jeopardizing their positive emotional experience.
We have augmented the Effort-Accuracy Framework by proposing enjoyment and product diagnosticity as two antecedents of perceived decision quality and perceived decision effort.
This is important in that previous research has primarily focused on how RA characteristics directly impact perceived decision quality and decision effort (Xiao and Benbasat 2007), while the underlying mechanisms of why certain RA characteristics lead to higher decision quality and lower decision effort have been largely ignored. Recent decision support studies have recognized the importance of examining decision process variables when studying online RAs (Kamis et al. 2008). To the best of our knowledge, this is the first study to examine the role of enjoyment and product diagnosticity within the Effort-Accuracy Framework. This study will help researchers better understand why high perceived decision quality and low perceived decision effort can be achieved through the use and adoption of RAs. This augmented Effort-Accuracy Framework can also serve as a framework for evaluating alternative RA interface designs.
We have investigated the effects of different levels of trade-off transparency. Task complexity is an important factor that affects users' evaluation of RAs (Xiao and Benbasat 2007). While recent RA studies have investigated the effects of the number of product attributes (e.g., Jiang and Benbasat 2007a) and the number of product alternatives (e.g., Kamis et al. 2008), limited attention has been paid to examining the effects of the trade-off relationships within a product. According to Wood's (1988) classification of task complexity, the number of product attributes and the number of product alternatives belong to component complexity, while the relationships among product attributes fall under coordinative complexity. Thus, we also contribute to the broader literature on task complexity, as previous studies on the role of task complexity have predominantly focused on component complexity, and there has been little research into another important dimension of task complexity: coordinative complexity.
This study contributes to this knowledge gap by studying how the number of trade-off relationships revealed by an RA influences users' evaluations. We have shown that the impact of trade-off transparency levels on enjoyment and product diagnosticity is nonlinear. As we predicted, both variables followed an inverted U-shaped path as the trade-off transparency level increased. Showing such nonlinear effects on both variables is a significant contribution to the theoretical as well as practical understanding of the dynamics of how users interact with an RA. The inverted U-shaped relationship between the trade-off transparency level and enjoyment (and product diagnosticity) indicates that increasing the number of trade-off relationships demonstrated in a preferential choice task does not guarantee an increase in enjoyment and product diagnosticity. In fact, consumers may be overwhelmed if too many trade-off relationships are revealed, and their enjoyment and product diagnosticity can decrease. As research on the design of the RA interface grows, we hope these results will highlight the need for researchers to consider more than simple linear effects.
Regarding the effects of enjoyment and product diagnosticity on perceptions of decision quality and decision effort, the results provide some unique insights. Both enjoyment and product diagnosticity improve perceived decision quality without increasing perceived decision effort. In particular, a higher level of enjoyment has a negative impact on the perception of decision effort. Effort and accuracy are an inherent trade-off in the consumer's decision-making process (Payne et al. 1993), and prior empirical research has shown that higher decision quality is typically associated with higher decision effort (Schafer et al. 2002; Fasolo et al. 2005).
We demonstrate that, through proper interface design, it is possible to achieve both objectives, better perceived decision quality and lower perceived decision effort, simultaneously.
2.6.2 PRACTICAL CONTRIBUTIONS
While the preceding comments focus on theoretical developments, the results regarding the impact of trade-off transparency on user perceptions have practical implications for online companies, particularly those that have mass customization capabilities and desire to introduce user customization of products on their websites. On one hand, the trade-off-transparent RA enables consumers to better understand product attribute trade-offs and provide appropriate attribute input to the RA, so that the RA can make better product recommendations and thus lead to better perceived decision quality. On the other hand, the interactive interface of a trade-off-transparent RA can increase one's enjoyment and thus enhance the shopping experience, which also leads to better perceived decision quality. Meanwhile, users with higher enjoyment are more engaged with the interface and easily forget the passage of time. Together, these effects will motivate users to return to such websites and continue to utilize RAs in product choice. Thus, practitioners are advised to incorporate the trade-off transparency function into the RA design on their websites.
Regarding which level of trade-off transparency to employ in practice, our results indicate that the medium level of trade-off transparency leads to the best outcomes in terms of perceived enjoyment and product diagnosticity. Indeed, even a low level of trade-off transparency significantly improves product diagnosticity over the control group. In addition, a low level of the trade-off transparency feature requires only a small amount of a user's time to understand the relationships between price and non-price attributes, yet it significantly reduces the time spent in product evaluation (see Section 2.5.5).
Thus, practitioners who desire a quick implementation of the trade-off transparency feature might start with the low level of trade-off transparency and gradually upgrade to the medium level.
We found a curvilinear relationship between perceived enjoyment and the number of trade-off relationships revealed to the user. A similar curvilinear relationship was found between product diagnosticity and the trade-off relationships demonstrated. Both of these findings highlight the danger of overwhelming consumers with too much information. Under higher levels of trade-off transparency, consumers may become less interested in the website interface and risk making poorer decisions. Thus, when designing an RA interface, practitioners should select the appropriate product trade-off relationships to demonstrate. Table 2-5 sheds light on how exactly to classify low, medium, and high levels of trade-off transparency. For example, a medium level of trade-off transparency means that, on average, approximately two attribute trade-offs should be demonstrated per slider movement initiated by a user. We believe these numbers, representing each trade-off transparency level, can be generalized to other contexts with different products, given the longstanding evidence that the human mind can juggle only a limited amount of information in working memory (e.g., Miller 1956).
2.6.3 LIMITATIONS AND FUTURE RESEARCH
There are several limitations to this study that should be noted, which open avenues for future research. First, as the experiment's participants were mostly university students, readers should exercise caution in generalizing the results of this study to other demographic groups. To generalize the study results, it will be necessary to conduct additional studies with different subject demographics and in different settings.
The second limitation is that the study was conducted in a context in which the participants evaluated an RA in the early stage of their interaction with it. When users become more familiar with the RA, the model results may differ. For example, it is possible that the perceived effort to understand an attribute trade-off will be reduced as users become more accustomed to it, and as such, the effect of product diagnosticity on perceived decision effort may become significant. In addition, perceived effort may accordingly exert a stronger influence on user intention. Future research is required to further examine the relative importance of various factors on post-adoption perceptions of and behaviours toward online RAs. Third, the laptop computer examined in this study is considered a utilitarian product. It is possible that the effect of trade-off transparency on perceived enjoyment will be higher for a hedonic product. Future studies should consider other product types (e.g., hedonic products) when replicating this research in order to examine this possibility.
2.6.4 CONCLUSIONS
We have filled a research gap in understanding the role of product diagnosticity and enjoyment in influencing users' perceived decision quality and decision effort by proposing and testing an augmented Effort-Accuracy Framework. The inclusion of these two variables not only sheds light on the decision process, but also demonstrates how higher perceived decision quality can be achieved without trading off decision effort. This augmented Effort-Accuracy Framework can be adopted to evaluate alternative RA interface designs in the future. Additionally, we have extended previous RA research by proposing the trade-off transparency feature.
Such a design focus is important, as scant attention has been paid to IT artifacts and to the design and development of such artifacts in MIS research (Benbasat and Barki 2007; Benbasat and Zmud 2003; Orlikowski and Iacono 2001). Based on the S-O-R model, we have evaluated the advantages of the trade-off-transparent RAs relative to the traditional RA in terms of enjoyment and product diagnosticity. Providing the trade-off transparency function is more costly and complicated for designers. It is thus important to determine whether a trade-off-transparent RA that elicits user preferences will enable users to better enjoy and understand the product and, subsequently, culminate in better decision quality and lower effort perceptions. The results indicate that it is feasible for users to be aware of attribute trade-offs while maintaining a favourable degree of enjoyment. Finally, in contrast to past research that has focused on component complexity, we have examined the role of coordinative complexity (the level of trade-off transparency) in influencing perceived enjoyment and product diagnosticity. The results not only contribute to the task complexity and task-technology fit literature, but also inform RA developers as to which kind of RA is more beneficial, and in what circumstances.

CHAPTER 3 THE EFFECTS OF FEEDBACK MECHANISMS ON PERCEIVED DECISION QUALITY AND PERCEIVED DECISION EFFORT

3.1 INTRODUCTION
With the rapid development of technology, customers are increasingly overwhelmed by the amount of product information they are presented with. For example, a web retailer such as Amazon.com offers over 18 million items to consumers. Given the sheer volume of information available over the Internet, customers have relied on different kinds of recommendation sources to handle this daunting volume of data within a limited time.
The importance of recommendation sources in the context of product purchases has long been discussed in the marketing literature (Murray 1991), which suggests that information sources such as word-of-mouth and recommendations from consumers or experts influence consumers' decision making (Duhan et al. 1997; Gilly et al. 1998; Price and Feick 1984; Stamm and Dube 1994; Swearingen and Sinha 2001; Senecal and Nantel 2004). To assist customers in choosing the right product among the large variety available on the Internet, popular websites provide different types of product recommendations. Among the more frequently observed recommendations are those similar to Barnesandnoble.com's "Customers who bought this also bought", the consumer hotel ratings provided by expedia.com, and the "experts' pick" offered by ESPN. In addition to consumer and expert recommendations, the Internet also provides customers with a relatively new type of information source that assists them in choosing the "right" products (Ricci and Werthner 2007): online recommendation agents (RAs), which assist consumers by eliciting their product needs and then making product recommendations that satisfy those needs (Xiao and Benbasat 2007). Recommendation agents have become increasingly critical to the success of online commerce (Ahn 2007; Leavitt 2006; Kamis et al. 2008, 2010; Wang and Benbasat 2009). In order to confront growing competitive pressures (Ahn 2007), retailers with an online presence, such as BestBuy and Sears, have developed RAs that help consumers select a product that best matches their needs. Among these three prevalent sources of product choice information (consumers, experts, and RAs), which is better?
Recommendations from consumers are more likely to offer honest advice, because consumers have nothing to gain and have experienced the products. Experts have professional expertise and know the details of a product better. However, consumer and expert recommendations do not necessarily fit a user's preferences, resulting in additional effort in product search. In contrast, the products recommended by RAs are expected to be more personalized than those of experts and consumers, because RAs recommend based on a user's specifically indicated preferences. As a small cost of personalization, RAs require users to indicate their product preferences as a first step, while expert and consumer recommendations do not involve the initial preference elicitation process required by RAs. Given the lack of personalization of consumer and expert recommendations relative to an RA, is it possible to add a preference-matching function to these two sources to enhance users' perceived decision quality and reduce users' perceived decision effort? Can this function be applied to an RA as well, to further improve its preference-matching and effort-reduction capability? In this study, we propose the user feedback function, which refers to a mechanism allowing users to update their preferences and/or indicate the desirability of the initial product recommendations they are provided. The recommendations could be from experts, consumers, or an RA. For example, when presented with laptop computers initially recommended by consumers or experts, users can utilize the feedback mechanism to indicate that they prefer product "A" in the recommended set over the others and request "show me more products like product A"; the decision aid will then recommend similar laptop computers to them. User feedback is an important component of a user's decision making (West et al. 1999; McGinty and Smyth 2007).
According to the Theory of Preferential Choice (Coombs 1952), users have preferred values on certain attributes of a product. User feedback fulfills the desire of a consumer to indicate his/her product preferences. With user feedback, a decision-making aid can build a more complete understanding of the user's product preferences and requirements, and adapt the products it recommends to the needs of that user (McGinty and Smyth 2007). Theoretically, products that better fit customers' preferences will be in greater demand and have the potential to boost firms' profitability (Peppers and Rogers 1993; Winer 2001). Recognizing that it might be difficult to satisfy users' needs in one cycle, BestBuy's RA provides one type of feedback mechanism (i.e., attribute-based feedback, defined below) so that users can refine the initial set of recommended products from the RA. Surprisingly, few studies have examined how to incorporate user feedback into e-commerce recommendation sources. For this reason, the comprehensive review of RA research by Xiao and Benbasat (2007) did not include feedback mechanisms in its integrated model. Given the lack of research on feedback and the importance attached to user feedback by scholars (e.g., West et al. 1999; McGinty and Smyth 2007) and practitioners (e.g., BestBuy), we evaluate the efficacy of three types of user feedback in an e-commerce setting in terms of perceived decision quality and perceived decision effort. Preferential choice tasks entail selecting among alternatives based on one's preferences on the attributes describing those alternatives (Payne et al. 1993), suggesting that alternatives and attributes are two of the key components of preferential choice problems. Thus, we study feedback mechanisms from the perspectives of attributes, alternatives, and the combination of attributes and alternatives.
Attribute-based feedback allows users to provide or update their preferences on one or more individual attributes of a product after examining the initial recommendations they receive, whether from consumers, experts, or an RA. Alternative-based feedback allows the user to indicate a preference for one product in the recommended set over another, following the initial recommendation, and returns products similar to the one the user prefers. As a hybrid of attribute-based and alternative-based feedback, integrated feedback not only allows users to indicate a preference for one product alternative in the recommended set over another, it also optionally allows users to provide feedback on one or more product attributes of the preferred alternative (McGinty and Smyth 2007). Specifically, we examine how the effects of the three feedback mechanisms are moderated by the recommendation source (consumers and experts vs. an RA). With expert and consumer recommendations, customers do not actively customize the recommended products themselves. With an RA (Xiao and Benbasat 2007), a user needs to provide input regarding his/her preferences on the product attributes that the RA uses as criteria when searching for products that match those preferences. While this input process can lead to personalized product recommendations, online consumers might not have the motivation or ability to deliberate on product attributes accurately before they see the product alternatives (Murray et al. 2010). Given that trade-offs are involved in the usage of different recommendation sources (e.g., an RA), we examine how recommendation sources moderate the effects of feedback mechanisms.
In summary, to advance theory and practice, Study 2 aims to answer the following research questions:

1) How do recommendation sources moderate the effect of different types of user feedback (attribute-based feedback, alternative-based feedback, and integrated feedback) on perceived decision quality?

2) Are attribute-based feedback, alternative-based feedback, and integrated feedback better than the absence of any feedback in terms of perceived decision effort?

The rest of this chapter is structured as follows: the next section reviews the theoretical foundations and relevant literature, followed by Section 3.3, where we develop the hypotheses to be tested. Section 3.4 describes the research method and reports the experimental results. A discussion of the findings, implications for theory and practice, and conclusions are provided in Section 3.5.

3.2 LITERATURE REVIEW AND THEORETICAL FOUNDATIONS

We first introduce the feedback mechanisms investigated in this study, and then briefly review the Effort–Accuracy Framework and the Theory of Preferential Choice, which provide a theoretical grounding for our investigation into the impacts of feedback mechanisms and the moderating role of recommendation sources.

3.2.1 FEEDBACK MECHANISMS

User feedback on the desirability of the initial recommendations is an important component of decision making in general and of RAs in particular (McGinty and Smyth 2007). User feedback is an excellent way to supplement expert and consumer recommendations, because these two sources do not have a complete understanding of the user's product preferences and requirements. Through the user feedback process, preference-matching recommendations may then be generated. Further, feedback gives users the perception that they control the recommendations rather than the other way around (West et al. 1999).
In a similar vein, feedback is expected to improve users' perceived interactivity, which refers to the extent to which users can participate in modifying the form and content of a mediated environment in real time (Steuer 1992; Jiang et al. 2010). As the synchronous response to the feedback provides the user with a prompt acknowledgement of his/her input, it can instil positive user attitudes toward the recommendation system (Kettanurak et al. 2001; Jiang and Benbasat 2007b; Griffith et al. 2001). Specifically, control over the exchange of information has been demonstrated to lead to increased pleasure (Ariely 2000). Websites with a high level of control appeal to consumers by allowing them to act in a variety of ways during their shopping experience. Such control triggers the flow experience, which is described as intrinsically enjoyable (Novak et al. 2000).

While user feedback improves the personalization level of expert and consumer recommendations, it is also useful in the case of an RA. As asserted by West et al. (1999), it would be valuable for an RA to elicit users' preferences and, after providing a set of initial product recommendations, allow users to provide feedback concerning those recommendations in order to fine-tune the suggested product offerings. Feedback plays a critical role in many RAs, and the capability to adapt precisely to the needs of target users is the key feature of an RA that distinguishes it from traditional information-retrieval systems (McGinty and Smyth 2007).

Three types of feedback have been mentioned in the literature (e.g., McGinty and Smyth 2007). Attribute-based feedback allows users to provide preference information on each individual attribute as part of their feedback (e.g., "Show me products with a price between $200-$400, a hard drive between 200GB and 300GB," etc.).
Alternative-based feedback allows the user to indicate a preference for one recommended product over another (e.g., "Show me more products like product B"), and returns products similar to the one the user indicates a preference for. Integrated feedback, combining alternative-based and attribute-based feedback, allows a user to provide feedback on his/her preferred product alternative in the recommended set over another, plus optional feedback on the values of one or more attributes of the preferred product (e.g., "Show me more products like product A but with a price between $200-$400"). To our knowledge, no empirical studies have compared the differences among these three types of feedback.

3.2.2 THEORY OF HUMAN INFORMATION PROCESSING AND EFFORT–ACCURACY FRAMEWORK

The Theory of Human Information Processing stipulates that because humans have a limited cognitive capacity to process information, it is not feasible for them to evaluate all available alternatives in detail before making a choice. While electronic shopping environments make it possible for consumers to access large amounts of product information, consumers still need to process increasingly more information with the same limited processing capacity (West et al. 1999). Therefore, individuals seek to attain a satisfactory, though not necessarily optimal, level of achievement (Simon 1955). Thus, the gist of the Effort–Accuracy Framework is that the primary objectives of a decision maker are to maximize accuracy (decision quality) and minimize decision effort (Payne 1982; Payne et al. 1993). One method of coping with information overload is to filter and omit information (Farhoomand and Drury 2002; Senecal 2003). Another is to use decision support tools, such as RAs, to perform resource-intensive information processing tasks, thus freeing up some of the human processing capacity for decision making (Häubl and Trifts 2000).
Extensive research has applied the Effort–Accuracy Framework and the Theory of Human Information Processing to evaluate the design and use of technology-based decision-making tools (Todd and Benbasat 1991, 1992; Häubl and Trifts 2000; Hostler et al. 2005). For example, Benbasat and Todd (1992, 1996) have demonstrated that the support features of RAs may lead to less perceived decision effort.

3.2.3 THEORY OF PREFERENTIAL CHOICE AND DECISION STRATEGIES

Preferential choice is a well-developed research area in many disciplines, including marketing, sociology, psychology, and economics, and is a pervasive and remarkably important human behavior (Carroll et al. 1991). Preferential choice problems normally entail a large number of alternatives and attributes. The gist of the Theory of Preferential Choice is that an individual may have an ideal value for an attribute, such as the capacity of a hard drive on a laptop computer, and that his/her utility falls off as the attribute value either increases or decreases away from that ideal (Coombs 1974). The assumption that each individual has a preferential desire for a certain attribute is consistent with the concept of rationality used by economic theorists. Most economists define rationality in terms of consistency principles and evaluate the rationality of behavior using principles of consistency within a system of preferences and beliefs (Rieskamp et al. 2006). The most important principle of preferential choice is perfect consistency, or invariant preferences across occasions (Rieskamp et al. 2006). That is, a person's choice of a potential alternative is largely based on what s/he prefers. To make a preferential choice, users leverage different types of decision-processing strategies.
Stevenson (1979) describes twelve strategies applicable to preferential choice problems, among which the weighted additive (WADD) and elimination-by-aspects (EBA) strategies are the two most commonly studied (e.g., Jarvenpaa 1989; Payne 1976; Todd and Benbasat 1999, 2000). These strategies have been implemented in various types of RAs (e.g., Wang and Benbasat 2009; Lee and Benbasat, forthcoming).

The WADD strategy is the most information-intensive of the twelve. It considers the value of each alternative on all of the relevant attributes as well as the relative importance (weight) of each attribute to the decision maker (Stevenson 1979; Payne et al. 1993). The rule develops a weighted value for each attribute by multiplying the weight by the value, and sums over all attributes to arrive at an overall evaluation of an alternative.

The EBA strategy evaluates each alternative along various attributes, and any alternative that violates a threshold value specified by the user for an attribute is eliminated. Unlike the WADD strategy, users' attribute preferences are not comprehensively processed (e.g., lower-valued attributes are not compensated for by higher-valued attributes).

In addition to WADD and EBA, we review another strategy, the equal weight strategy (EQW), which has been advocated as a highly accurate simplification of the decision-making process (Dawes 1979; Einhorn and Hogarth 1975; Thorngate 1980). The EQW strategy examines all alternatives and all attribute values for each alternative. However, EQW ignores information about the relative importance of each attribute. It assumes that the attribute values can be expressed on a common scale of value. Thus, EQW is a special case of WADD.
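As an illustration (our own sketch, not part of the experimental materials), the three strategies can be expressed over products represented as dictionaries of normalized attribute scores; the attribute names, weights, and thresholds below are hypothetical:

```python
# Illustrative sketch of the three decision strategies reviewed above.
# Products are dicts of attribute scores in [0, 1], higher meaning better.
# Names, weights, and thresholds are hypothetical examples.

def wadd(products, weights):
    """Weighted additive: sum of importance-weight * value over all attributes."""
    return max(products,
               key=lambda p: sum(weights[a] * v for a, v in p["attrs"].items()))

def eba(products, thresholds):
    """Elimination-by-aspects: drop any alternative that violates a threshold."""
    return [p for p in products
            if all(p["attrs"][a] >= t for a, t in thresholds.items())]

def eqw(products):
    """Equal weight: WADD with all weights equal; importance is ignored."""
    return max(products, key=lambda p: sum(p["attrs"].values()))

laptops = [
    {"name": "A", "attrs": {"cpu": 0.9, "battery": 0.4, "price": 0.6}},
    {"name": "B", "attrs": {"cpu": 0.6, "battery": 0.8, "price": 0.7}},
]
print(wadd(laptops, {"cpu": 0.6, "battery": 0.2, "price": 0.2})["name"])  # A
print([p["name"] for p in eba(laptops, {"battery": 0.5})])                # ['B']
print(eqw(laptops)["name"])                                               # B
```

Note how WADD and EQW can pick different laptops from the same set: the CPU-heavy weighting favors A, while ignoring importance favors B, whose attribute values sum higher.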
3.3 HYPOTHESIS DEVELOPMENT

The objective of this study is to investigate (1) the differences among attribute-based feedback, alternative-based feedback, and integrated feedback in terms of perceived decision quality, with recommendation sources as the moderating factor (consumer and expert recommendations vs. RAs); and (2) the effect of attribute-based feedback, alternative-based feedback, and integrated feedback, over the lack of feedback, in terms of perceived decision effort. We draw on the Theory of Preferential Choice to guide our hypotheses on the interaction effect of user feedback and recommendation sources on perceived decision quality, and on the Theory of Human Information Processing to guide the effect of user feedback on perceived decision effort.

Marketing theory suggests that customers use information sources to reduce the uncertainty and risk associated with product purchases (Murray 1991). Consumer and expert recommendations have generally been preferred over non-personal sources, such as an online store (Gilly et al. 1998; Lin 2010). However, results about the superiority of consumer over expert recommendations have been mixed. For example, Senecal and Nantel (2004) and Wang and Doong (2010) found that experts were viewed as more credible, while the reverse was found by Chen (2008). Therefore, in this study, we include both consumer and expert recommendations, such as the five "most popular products" by computershopper.com and the "Editor's Pick" by shopstyle.com. We are also interested in the role of an RA's recommendations, as RAs are becoming more important in providing personalized support and reducing information overload for customers in their decision-making process (Kwon and Chung 2010; Wang and Benbasat 2009), and have been incorporated in websites such as BestBuy.com, Projectorcentral.com, ProCompare.com, and Myproductadvisor.com.
The common characteristics that distinguish consumer and expert recommendations from RAs are that the former require no input (i.e., product preference indication) from users and provide non-personalized recommendations; thus, consumer and expert recommendations are grouped into one category as a counterpart to RAs in this study.

3.3.1 EFFECT OF FEEDBACK MECHANISMS ON PERCEIVED DECISION QUALITY (IN THE PRESENCE OF CONSUMER AND EXPERT RECOMMENDATIONS)

With expert and consumer recommendations, users first see the recommended product alternatives. Given that the product recommendations from consumers and experts are not personalized, users will not necessarily pick these products. Based on the Theory of Preferential Choice, users have a desire to indicate their specific preferences on product attributes and to choose a product that fits those preferences. Attribute-based feedback allows users to express their preferences so that the recommendations reflect their overall desires, which leads to higher perceived decision quality. Meanwhile, since attribute-based feedback reduces the amount of information (i.e., unmatched products) to be processed, and thus frees human information processing capacity for evaluating the highly matched products, it should have a positive impact on perceived decision quality. In addition, as attribute-based feedback returns highly matched products, users should be less concerned about the possibility of missing an unexamined high-quality product. In contrast, as the absence of attribute-based feedback does not provide consumers with the opportunity to indicate their preferences, a consumer may need to spend significant effort to find a desired product, and may still wonder whether there are other, more preference-matching product options that s/he has not seen; this would lower perceived decision quality.
Alternative-based feedback, in contrast, will not necessarily improve users' decision quality as compared to the absence of feedback. It does not allow users to express their preferences on specific product attributes. Instead, it enables users to express a preference for one recommended product over the other recommended ones. This lack of specificity is inherently ambiguous with respect to the user's intent, as it provides only limited capacity for guiding the recommendation process (McGinty and Smyth 2007). For example, a product with five attributes that have four values each can come in 4^5 = 1,024 different product versions. However, someone using alternative-based feedback can only indicate a preference through one of the few recommended products, which is far too imprecise to capture the user's specific preferences. As these alternatives do not necessarily match users' preferences, using alternative-based feedback is less likely to return products that closely match users' true preferences. Further, while users can formulate their attribute preferences8 by comparing the alternatives, they might still wonder whether there are other possible attribute values that they have not been exposed to through the product alternatives. If they do wonder, there is no way for them to indicate these attribute preferences.

As compared to alternative-based feedback, integrated feedback additionally enables users to express a preference on each specific product attribute. For example, users may state: "Overall, I like product B, but can you show me some products similar to it with a price between $200-$400." In fact, at one extreme, users can utilize integrated feedback in a fashion similar to attribute-based feedback by anchoring on any one product and then requiring the system to show products similar to it with a price between $200-$400, a hard drive capacity between 100GB-200GB, a screen size of 10-12 inches, etc.
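The version count above is simple to verify: with five attributes taking four values each, the number of distinct configurations is four raised to the fifth power.

```python
# Five attributes, four possible values each: count of distinct product versions.
versions = 4 ** 5
print(versions)  # 1024
```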
To summarize, as compared to alternative-based feedback or the absence of any feedback, attribute-based feedback (e.g., "show me laptops with a hard drive within the range of 400GB-500GB," etc.) or integrated feedback (e.g., "show me laptops like Product A with a hard drive within the range of 400GB-500GB," etc.) allows users to indicate their own product attribute preference(s) more precisely, thereby fulfilling their desire to express their preferences, as postulated by the Theory of Preferential Choice. Thus, we hypothesize that:

Hypothesis 1: When the initial product recommendations are from consumers or experts, attribute-based feedback (H1a) and integrated feedback (H1b) will lead to higher perceived decision quality than the absence of any feedback.

Hypothesis 2: When the initial product recommendations are from consumers or experts, attribute-based feedback (H2a) and integrated feedback (H2b) will lead to higher perceived decision quality than alternative-based feedback.

8 To illustrate, in customized kitchen design shops, customers are often led through a showroom, and service representatives encourage customers to indicate what they like and don't like about the exhibited kitchens. These preferences can then be used to help select an appropriate alternative.

3.3.2 EFFECT OF FEEDBACK MECHANISMS ON PERCEIVED DECISION QUALITY (IN THE PRESENCE OF AN RA)

The Theory of Preferential Choice indicates that users have a desire to express exactly what they want. If the value of an attribute (e.g., the screen size of a laptop) of a product deviates from their ideal preference, this product will be less desirable than one having the ideal value, with the values of other attributes remaining constant.
The unique feature of an RA is its preference elicitation function, which asks for users' preferences on, and the importance of, each product attribute; the RA then assists users in screening large numbers of alternatives, narrowing down the consideration set, and providing matching products (Todd and Benbasat 1994; Singh and Ginzberg 1996). Users then evaluate these preference-matching products suggested by the RA in more detail. As not all preference-matching products returned by the RA are equal in exactly matching users' needs, alternative-based feedback (e.g., "Show me more products like product B") or integrated feedback (e.g., "Show me more products like product B, with an optional update on one or more attributes of product B") can direct the RA to return another set of product alternatives similar to the one a user specifies as closely matching his/her needs, which is expected to further improve users' perceived decision quality, in part due to the effect of the "illusion of control." Prior Decision Support System (DSS) research has discovered that people tend to overestimate their decision performance when they have control over their decision-making process (Davis and Kottemann 1994; Kottemann et al. 1994). For example, Davis and Kottemann (1994) conducted an experiment to investigate the illusion of control caused by what-if analysis. They found that subjects strongly believed that what-if analysis was superior to unaided decision making even though no actual performance difference was detected. Applying this reasoning to the current context, we argue that since alternative-based feedback and integrated feedback allow consumers to control and interact with the feedback mechanisms, the illusion of control causes consumers to be more confident about their choice.

Attribute-based feedback is less likely to further improve perceived decision quality, because consumers have already indicated their product attribute preferences to the RA.
Using attribute-based feedback entails changing the preferential value for one or more attributes that the user indicated to the RA at the very beginning. According to the Theory of Preferential Choice, most users have preferred values for product attributes and might not want to deviate from these values. Based on the foregoing arguments that alternative-based feedback or integrated feedback can assist users in further refining the preference-matching products returned by the RA, and given the limited additional value of attribute-based feedback relative to an RA's initial preference elicitation function, we hypothesize that:

Hypothesis 3: When the initial product recommendations are from RAs, alternative-based feedback (H3a) and integrated feedback (H3b) will lead to higher perceived decision quality than the absence of any feedback.

Hypothesis 4: When the initial product recommendations are from RAs, alternative-based feedback (H4a) and integrated feedback (H4b) will lead to higher perceived decision quality than attribute-based feedback.

3.3.3 EFFECT OF FEEDBACK MECHANISMS ON PERCEIVED DECISION EFFORT

Guided by the Theory of Preferential Choice, we have hypothesized an interaction effect between feedback mechanisms and recommendation sources on perceived decision quality (i.e., fit with preferences). However, an interaction on perceived decision effort is not expected, because the Theory of Preferential Choice does not provide an adequate basis for anticipating the nature of such an interaction. Instead, we rely on the Theory of Human Information Processing to guide the hypothesis on perceived decision effort. Users have limited cognitive capacity to process information (Payne 1982; Payne et al. 1988). As feedback mechanisms can take users' indicated preferences and recommend products that fulfill their needs, it may not be necessary for users to spend time and effort considering all of the products available on a website.
Without the support of a feedback mechanism, given the limited cognitive capacity of most people and their potential desire to collect as much information about products as their cognitive limitations allow, users will more likely feel overwhelmed. Further, as reviewed in Section 3.2.1, the interactive nature of feedback permits users to carry on a two-way communication with a system, which is expected to arouse users' pleasure. This is due to the exploratory nature of the experience, the autonomy that users are able to exert over a system, and the sense of fulfillment that users experience (Kettanurak et al. 2001; Jiang and Benbasat 2007). As an enjoyable process influences a user's perception of the passage of time (Agarwal and Karahanna 2000), we hypothesize the following:

Hypothesis 5: A website with attribute-based feedback (H5a), alternative-based feedback (H5b), or integrated feedback (H5c) will lead to lower perceived decision effort than a website with no feedback mechanism9.

3.4 METHODOLOGY

The proposed hypotheses were tested through an online experiment with a 3 × 4 between-subjects design (see Table 3-1). The first factor was the type of recommendation source (i.e., RA, consumers, and experts). The second factor was the type of feedback mechanism (i.e., no feedback, alternative-based feedback, attribute-based feedback, and integrated feedback). Accordingly, twelve simulated shopping web sites were constructed. According to a power analysis for between-subjects designs, 192 subjects (16 subjects per group) assure sufficient statistical power of 0.8 for a medium effect size (f = .25) (Cohen 1988).
Table 3-1 Experimental Design

Sources \ Feedback | Attribute-based | Alternative-based | Integrated | No Feedback (Control)
RA                 | Group 1         | Group 2           | Group 3    | Group 4
Consumers          | Group 5         | Group 6           | Group 7    | Group 8
Experts            | Group 9         | Group 10          | Group 11   | Group 12

9 It is unclear whether there is an interaction effect between feedback mechanism and recommendation source on perceived decision effort. In the case of consumer and expert recommendations, attribute-based feedback requires more input from the user but is more precise than alternative-based feedback. In the case of an RA, attribute-based feedback may require more input from the user, but it is also more familiar to users than alternative-based feedback.

3.4.1 TASK AND PROCEDURES

The task given to the subjects was to select a product from one of the shopping web sites specifically designed for this study. Before the subjects were assigned to one of the 12 shopping web sites (see Table 3-1), each subject went through a baseline web site, which served as a benchmark for evaluating the particular web sites (treatments), based on adaptation-level theory (Helson 1964). This theory suggests that people's judgments are based on three criteria: 1) the sum of their past experiences; 2) the context and background of a particular experience; and 3) a stimulus. Keeping this in mind, we randomly assigned subjects to different treatment conditions to ensure that the subjects' past experiences were averaged out and thus homogeneous across conditions. Additionally, because a common benchmark was provided for all subjects, we could be more confident that the context and background of their experimental experiences were equivalent, such that disparities across conditions were caused only by the different treatment stimuli. This approach of using a common benchmark is consistent with those used by Jiang and Benbasat (2004-2005; 2007b) and Kamis et al. (2008).
The baseline web site was designed with products recommended by a store without any feedback mechanism. The store recommended five products10 in the form of a comparison matrix, which provided a convenient summary of the product information, with rows displaying product attributes and columns displaying product models. Subjects could also see the rest of the eighty-three products in the store, which were displayed five at a time on subsequent pages. In the first task with the baseline web site, each subject was required to shop for a laptop computer for a family member. In the second task, the subjects were required to buy a laptop for themselves. After finishing the two shopping tasks, the subjects were asked to treat the baseline web site as a benchmark against which to judge the experimental site. The steps that each subject went through are summarized in Table 3-2.

10 The five products recommended by the store were dominated by another five of the 88 products in the database. The store did not recommend the best products because a store has the potential to make biased recommendations favoring products that are overstocked in inventory or carry high profit margins.

Table 3-2 Experimental Procedure

Step 1: Fill in a questionnaire in order to record demographic and control variables.
Step 2: Go through the baseline web site to pick a product for a family member.
Step 3: Go through one of the twelve experimental web sites to pick a product for themselves.
Step 4: Fill in a questionnaire regarding perceptions of the experimental web site as compared to the baseline one.

3.4.2 EXPERIMENTAL WEB SITE DESIGN

Twelve simulated shopping web sites were constructed to test the above hypotheses and to address the possible combinations of features considered in the present study (i.e., three types of recommendation sources and four types of feedback mechanisms, including one control).
As with the baseline website, each of the twelve websites featured the same eighty-eight laptop computers, which were available from leading online commercial websites at the time of the study. Based on these commercial websites, eight key product attributes and features (price, hard drive, processor, memory, screen size, video card, weight, and battery) were included for each product. The rationale behind this number of product attributes was that providing ten or more product attributes tends to significantly decrease people's ability to cognitively process alternatives (Malhotra 1982) and has been demonstrated to significantly decrease decision quality (Wan et al. 2009).

3.4.2.1 Manipulation of the Recommendation Sources

To obtain recommendations from the RA (Figure 3-1), users needed to indicate their preferred product attributes and their importance before they could get the top 5 recommended product alternatives from the RA. For example, a subject could specify that the desired hard drive capacity falls into the range of 401 to 500 GB, that the price falls into the range of $400 to $600, and, finally, how important these requirements are to him/her (Figure 3-1). The RA then ranked the products based on the user's indicated preferences and presented the top five recommendations that best matched the user's preferences on the first page of the recommendations. Subjects could also see the ranking of the rest of the 83 products; these were displayed five at a time on subsequent pages in decreasing level of match. Since they had the option of reviewing on their own all the product alternatives available in the comparison matrix, they could voluntarily decide whether or not to follow the RA's advice. For the experts' and consumers' recommendations, 5 product alternatives were presented to users on the product page of the website (see Figure 3-2).
The five products were identical in the expert and consumer conditions, and were carefully selected to ensure that they were both popular products in the market (i.e., highly rated by consumers on CNET) and recommended by computer experts (i.e., based on the CNET editors' recommendations). Users were informed that these alternatives were based on consumers' and experts' opinions, respectively. This way of manipulating the consumer and expert sources eliminated any confounding between the recommended products and the recommendation sources11. Detailed treatments are presented in Table 3-3. As in the RA source treatment, subjects could also see the rest of the 83 products. Subjects also had the option of reviewing on their own all the product alternatives available in the web site, and they could voluntarily decide whether or not to follow the source's advice.

Table 3-3 Manipulation of Recommendation Sources
RA: "The automated recommendation agent has ranked the products for you based upon the preferences you indicated (i.e., specification for each laptop attribute). The top 5 recommended laptops are shown on the first page."
Consumers: "Our store recommends these 5 laptops on the first page. They are highly recommended by consumers from CNET, an independent technological organization that compiles data for laptops that are rated highly by consumers."
Experts: "Our store recommends these 5 laptops on the first page. They are highly recommended by experts from CNET, an independent technological organization that compiles data for laptops that are rated highly by experts."

11 We included both consumer and expert recommendation sources because the literature is unclear as to which is more effective (Senecal and Nantel 2004; Wang and Doong 2010; Chen 2008).
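The RA's ranking step described above (score each laptop by whether its attribute values fall within the user's preferred ranges, weight each match by the stated importance, and sort in decreasing order of match) can be sketched as follows. This is an illustrative reconstruction under assumed attribute names and an assumed scoring rule, not the experimental system's actual implementation.

```python
# Minimal sketch of preference-based RA ranking. The scoring rule is an
# assumption: an attribute scores 1 if its value lies in the user's
# preferred range, 0 otherwise, weighted by the stated importance (1-7).

def score(product, preferences):
    """preferences maps attribute -> ((low, high), importance)."""
    total = 0.0
    for attr, ((low, high), weight) in preferences.items():
        match = 1.0 if low <= product[attr] <= high else 0.0
        total += weight * match
    return total

def recommend(products, preferences, page_size=5):
    """Rank all products by decreasing match; the first page holds the top 5."""
    ranked = sorted(products, key=lambda p: score(p, preferences), reverse=True)
    return ranked[:page_size], ranked[page_size:]

# Hypothetical catalog entries and user input for illustration.
laptops = [
    {"model": "A", "price": 550, "hard_drive": 450},
    {"model": "B", "price": 900, "hard_drive": 750},
    {"model": "C", "price": 480, "hard_drive": 320},
]
prefs = {"price": ((400, 600), 7), "hard_drive": ((401, 500), 5)}
top, rest = recommend(laptops, prefs)
print([p["model"] for p in top])  # model A matches both ranges, so it ranks first
```

Products beyond the first page would simply be the remainder of the sorted list, which mirrors how subjects could page through the other 83 products in decreasing order of match.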
Figure 3-1 RA's Preference Elicitation

Figure 3-2 Recommendations from Experts

3.4.2.2 Manipulation of Feedback Mechanisms

In addition to manipulating the three recommendation sources, we also manipulated three types of feedback mechanisms as well as a control condition (no feedback). In each of the feedback conditions, subjects could utilize the feedback mechanism to indicate or update their product preferences. Examples of the three types of feedback are shown in Table 3-4 and Figures 3-3, 3-4, and 3-5. In terms of the decision strategy supported by the feedback mechanisms (see Section 3.2.3), both attribute-based feedback and alternative-based feedback used the equal weight (EQW) decision strategy: attribute-based feedback applies it to the attribute values the user indicates (see Figure 3-3), whereas alternative-based feedback applies it to the attribute values of the preferred alternative the user indicates (see Figure 3-4). The integrated feedback combined the EQW strategy (for the preferred alternative indicated) with the EBA strategy (for the preferred attributes indicated).

Table 3-4 Different Types of RA Feedback
Alternative-based feedback: "Show me more products like product B."
Attribute-based feedback: "Show me more products with a hard drive in the range of 401 to 500 GB, a price between $200 and $400, a battery life between 2 and 4 hours, etc."
Integrated feedback: "Show me more products like B but with a hard drive in the range of 401 to 500 GB."

Figure 3-3 Attribute-based Feedback

Figure 3-4 Alternative-based Feedback

Figure 3-5 Integrated Feedback

3.4.3 CONSTRUCT MEASUREMENT

The measures for all dependent variables were adopted from scales validated in prior studies. Product knowledge and task involvement were used as control variables. All of the survey items and their sources are shown in Table 3-5.
Based on the adaptation theory (Helson 1964) discussed earlier, all values of the dependent variables are comparative values based on a common reference point, i.e., the baseline web site in the first shopping task. The dependent variables were measured using an 11-point scale ranging from "1st laptop website" (1) to "2nd laptop website" (11), where the neutral point (6) indicates that the subject perceived no difference between the second shopping web site and the first, and the point 11 indicates that the subject believed the measurement item reflected his/her experience in the 2nd website much more strongly than in the 1st.

Table 3-5 Measurement Items of the Dependent Variables

Perceived Decision Quality (Widing and Talarzyk 1993; Wang and Benbasat 2009)
1. Laptops that suited my preferences were suggested by the recommendation agent.
2. Laptops that best matched my needs were provided by the recommendation agent.
3. I would choose from the same set of alternatives provided by the recommendation agent on my future purchase occasion.

Perceived Decision Effort (Pereira et al. 2000; Wang and Benbasat 2009)
1. The laptop selection task that I went through was complex.
2. The task of selecting a laptop computer using the website was complex.
3. Selecting a laptop computer using the website required effort.
4. The task of selecting a laptop computer using the website took time.

Product Knowledge (Sharma and Patterson 2000; Eisingerich and Bell 2008)
1. I possess good knowledge of laptops.
2. I can understand almost all the specifications (e.g., memory, hard drive) of laptops.
3. I am familiar with basic laptop specifications (e.g., memory, CPU).

Task Involvement
The product selection task that I have experienced in the website was:
Irrelevant / Relevant to me
Of no concern / Of concern to me
Didn't matter / Mattered to me
Meant nothing to me / Meant a lot to me
Unimportant / Important
(McQuarrie and Munson 1992)

3.4.4 DATA ANALYSIS

3.4.4.1 Subject Background Information

Participants in this study were 192 e-commerce shoppers from a North American panel accessed via a marketing research firm. Of these, 64.3% were female, and the mean age was 38.8 years, with the majority (77.9%) between 31 and 50 years old. Forty-two percent of the participants had a bachelor's degree or higher, 26% had a two-year college degree, and the rest had a high school diploma or lower. On average, the subjects had been using the Internet for 12.8 years and spent over 29 hours on the Internet each week; over 79% used the Internet for at least 15 hours each week. In general, they were familiar with online shopping (6.26/7). The average reported knowledge level of the product used in the task, laptop computers, was 5.02/7. No significant differences were found across the treatment conditions on these four factors: Internet usage years, Internet usage hours per week, familiarity with online shopping, and product knowledge.

3.4.4.2 Construct Reliability and Validity

Construct reliability was assessed using Cronbach's alpha. The alphas for perceived decision quality and perceived decision effort were 0.887 and 0.865, respectively; both met the suggested threshold (>0.70; Fornell and Larcker 1981). To aid interpretability of the results, we report an average of the measurement items used in the scale for each dependent variable (1 to 11). To test construct validity, a factor analysis of the perceived decision quality and perceived decision effort items was performed with varimax rotation (Table 3-6). Without exception, all items loaded highly (loadings ≥ 0.82) on their associated factors, confirming the convergent validity of the factors (Barclay et al. 1995; Chin 1998).
Two factors emerged, with no cross-construct loadings exceeding 0.30; in other words, the loadings of a given construct's indicators were higher than their loadings on any other construct. In all cases, the differences between an item's loading and its cross-loading were greater than 0.50. This lends further support to discriminant validity.

Table 3-6 Loadings and Cross-Loadings of Measures (columns: Perceived Decision Quality, Perceived Decision Effort)
Perceived Decision Quality 1: 0.885, 0.10
Perceived Decision Quality 2: 0.923, -0.047
Perceived Decision Quality 3: 0.902, -0.018
Perceived Decision Effort 1: -0.037, 0.826
Perceived Decision Effort 2: -0.043, 0.856
Perceived Decision Effort 3: 0.070, 0.846
Perceived Decision Effort 4: -0.055, 0.842

3.4.4.3 Results on the Effect of Recommendation Sources and Feedback

As a manipulation check, we asked whether or not subjects were aware of the recommendation source in the study. Five subjects who reported not noticing the recommendation source in their assigned treatment were eliminated. Computer logs were checked to determine whether the subjects used the feedback mechanisms; those who did not utilize their respective feedback mechanisms were excluded from the data analysis, resulting in a final sample size of 192.

In addition, we measured perceived interactivity to gauge subjects' perceptions of the different types of feedback, on the assumption that a user who did not use the feedback function would be less likely to perceive the website as interactive. Results indicated that integrated feedback was perceived to be the most interactive, followed by attribute-based feedback, alternative-based feedback, and the absence of any feedback; all pairwise comparisons among the four feedback conditions were significant (p < 0.05).

We aggregated the consumer and expert treatments into one condition, as we found no significant differences between these two treatments in terms of perceived decision quality and perceived decision effort.
The results were robust regardless of the type of feedback mechanism provided in the consumer and expert treatments. After all, the only difference between the consumer and expert treatments was the wording ("consumers" vs. "experts"; see Table 3-3), as both recommended the same five products.

Decision Quality

There were significant effects of the feedback mechanisms and of the interaction between feedback mechanisms and recommendation sources on perceived decision quality (see the ANOVA results in Table 3-7) after controlling for product knowledge and task involvement. The significant interaction effect suggests that the effects of the feedback mechanisms were moderated by the recommendation sources (Figure 3-6); therefore, it was investigated in more detail (Glass and Hopkins 1996; Huck 2000) (Table 3-8 and Table 3-9). Specifically, for subjects in the treatments with consumer and expert recommendations, attribute-based feedback and integrated feedback led to significantly higher perceived decision quality than the control group, supporting H1a and H1b. Further, attribute-based feedback and integrated feedback led to significantly higher perceived decision quality than alternative-based feedback, supporting H2a and H2b.

Table 3-7 ANOVA Summary Table for Perceived Decision Quality
Independent Variable: Sum of Squares, df, Mean Square, F, Sig.
Product Knowledge: 0.153, 1, 0.153, 0.051, 0.822
Task Involvement: 22.137, 1, 22.137, 7.379, 0.007
User Feedback: 29.992, 3, 9.997, 3.332, 0.021
Recommendation Source: 0.807, 1, 0.807, 0.269, 0.605
User Feedback x Recommendation Source: 43.251, 3, 14.417, 4.806, 0.003

Table 3-8 Mean and Standard Deviation for Decision Quality (cells: Mean, SD, N)
Attribute-based Feedback: Total 8.01, 1.89, 48; RA 7.15, 1.81, 16; Consumer & Expert 8.57, 1.76, 32
Alternative-based Feedback: Total 7.94, 1.82, 48; RA 8.70, 1.67, 16; Consumer & Expert 7.50, 1.77, 32
Integrated Feedback: Total 8.45, 1.84, 48; RA 8.39, 1.88, 16; Consumer & Expert 8.49, 1.85, 32
Control (no feedback): Total 7.28, 1.66, 48; RA 7.05, 1.73, 16; Consumer & Expert 7.41, 1.64, 32
Total: 7.94, 1.84, 192; RA 7.84, 1.89, 64; Consumer & Expert 8.00, 1.82, 128

Table 3-9 Contrast Results on Perceived Decision Quality (mean difference: former feedback minus the latter; significance in parentheses)
Attribute vs. Alternative: RA -1.54 (p = 0.008); Consumer & Expert 1.07 (p = 0.016)
Attribute vs. Integrated: RA -1.23 (p = 0.025); Consumer & Expert 0.07 (p = 0.86)
Attribute vs. Control: RA 0.10 (p = 0.86); Consumer & Expert 1.15 (p = 0.010)
Alternative vs. Integrated: RA 0.31 (p = 0.57); Consumer & Expert -0.99 (p = 0.022)
Alternative vs. Control: RA 1.64 (p = 0.006); Consumer & Expert -0.08 (p = 0.850)
Integrated vs. Control: RA 1.33 (p = 0.02); Consumer & Expert 1.07 (p = 0.014)

For subjects in the RA treatment, both alternative-based and integrated feedback led to significantly higher perceived decision quality than the control group; therefore, H3a and H3b were supported. Further, alternative-based feedback and integrated feedback led to significantly higher perceived decision quality than attribute-based feedback, supporting H4a and H4b.

Figure 3-6 Results on Perceived Decision Quality

Decision Effort

Feedback mechanisms significantly affected perceived decision effort, while the source effect and the interaction effect were not significant at p = 0.05 (Table 3-10).
Post hoc contrast tests (see Table 3-11 and Table 3-12) revealed that attribute-based feedback, alternative-based feedback, and integrated feedback were all associated with significantly lower perceived decision effort than the control group (i.e., the absence of feedback), thus supporting H5a, H5b, and H5c.

Table 3-10 ANOVA Summary Table for Perceived Decision Effort
Independent Variable: Sum of Squares, df, Mean Square, F, Sig.
Product Knowledge: 1.401, 1, 1.401, 0.561, 0.455
Task Involvement: 3.27, 1, 3.27, 1.254, 0.264
Feedback: 21.18, 3, 7.06, 2.705, 0.047
Recommendation Source: 6.893, 1, 6.893, 2.760, 0.098
Feedback x Recommendation Source: 7.057, 3, 2.352, 0.942, 0.422

Table 3-11 Mean and Standard Deviation for Perceived Decision Effort (Mean, SD, N)
Attribute-based Feedback: 4.89, 2.11, 48
Alternative-based Feedback: 5.34, 1.55, 48
Integrated Feedback: 5.37, 1.83, 48
Control (no feedback): 5.98, 1.06, 48
Total: 5.39, 1.72, 192

Table 3-12 Contrast Results on Perceived Decision Effort (contrast value; significance in parentheses)
Attribute vs. Control: -1.08 (p = 0.001)
Alternative vs. Control: -0.63 (p = 0.019)
Integrated vs. Control: -0.61 (p = 0.035)
Attribute vs. Alternative: -0.45 (p = 0.212)
Attribute vs. Integrated: -0.48 (p = 0.203)
Alternative vs. Integrated: -0.03 (p = 0.934)

A summary of the outcomes of hypothesis testing is presented in Table 3-13.

Table 3-13 Hypotheses Testing Results
H1: When the initial product recommendations are from consumers or experts, attribute-based feedback (H1a) and integrated feedback (H1b) lead to higher perceived decision quality than the absence of any feedback. Supported: Yes
H2: When the initial product recommendations are from consumers or experts, attribute-based feedback (H2a) and integrated feedback (H2b) lead to higher perceived decision quality than alternative-based feedback.
Supported: Yes
H3: When the initial product recommendations are from RAs, alternative-based feedback (H3a) and integrated feedback (H3b) lead to higher perceived decision quality than the absence of any feedback. Supported: Yes
H4: When the initial product recommendations are from RAs, alternative-based feedback (H4a) and integrated feedback (H4b) lead to higher perceived decision quality than attribute-based feedback. Supported: Yes
H5: Attribute-based (H5a), alternative-based (H5b), and integrated (H5c) feedback are superior to the absence of any feedback in terms of perceived decision effort. Supported: Yes

3.4.4.4 Supplementary Analysis on the Integrated Feedback

As noted earlier, for subjects in the RA treatment, the use of integrated feedback led them to perceive higher decision quality than the attribute-based feedback and control conditions; for subjects in the consumer and expert treatment, integrated feedback led them to perceive higher decision quality than the alternative-based feedback and control conditions. Further, in using integrated feedback, subjects in the RA condition made on average 1.1 attribute requests (i.e., uses of the "but change" function) when they indicated their preference for one alternative (i.e., "show products similar to this model"), whereas users in the consumer and expert condition made 2.3 attribute requests. This indicates that users utilized integrated feedback in a fashion more like alternative-based feedback when the initial recommendations were from an RA, but used it in a manner similar to attribute-based feedback when the initial recommendations were from consumers or experts. That is, when given integrated feedback, the subjects with expert or consumer advice relied more on attribute-based searches than those with an RA.
3.4.5 DISCUSSION

We compared the perceptual differences for the interaction effects between feedback mechanisms and recommendation sources in terms of perceived decision quality, and the differences among the various types of feedback mechanisms in terms of perceived decision effort. In terms of perceived decision quality, we found that attribute-based and integrated feedback are better than the absence of any feedback and than alternative-based feedback when the recommendations are from consumers and experts, while alternative-based and integrated feedback are better than the absence of any feedback and than attribute-based feedback when the recommendations are from RAs. It should be noted that integrated feedback appears to be the best feedback overall regardless of the recommendation source: in each condition, it is statistically indistinguishable from the better-performing single-strategy feedback (attribute-based or alternative-based, respectively).

RAs assist users in performing the tedious and processing-intensive work of screening and sorting products based on users' expressed preferences, and return to users the recommended products that best match those preferences. Under this condition, alternative-based feedback and integrated feedback are more desirable than attribute-based feedback in helping users find new products that still stay close to their initial preferences. While attribute-based feedback allows users to update their preferences, users are less likely to find it useful, as they may not want to deviate from the preferences they originally indicated to the RA. This is consistent with Preferential Choice Theory, which holds that individuals have a preference for a certain value (e.g., 17 inches) of a variable (e.g., screen size) of an object (e.g., a laptop computer), with the values of the other attributes remaining constant. On the other hand, with consumer and expert recommendations, users were not exposed to a personalized set of products.
Under this condition, attribute-based and integrated feedback are more appropriate than alternative-based feedback in helping users express their precise preferences on product attributes. In our operationalization of attribute-based feedback, a laptop with 8 product attributes, each taking 4 values, can have 4^8 = 65,536 different attribute combinations. Users of attribute-based feedback could indicate any one of these 65,536 preference combinations, while someone using alternative-based feedback could only indicate a preference through one of the 5 recommended products. Thus, attribute-based feedback provides far more opportunity (65,536 options vs. 5) for users to indicate their attribute preferences. Alternative-based feedback, by contrast, prevents users from providing an exact preferred value for an attribute; particularly when none of the initially recommended products appeals to a user, alternative-based feedback may lead nowhere.

We found that attribute-based feedback, alternative-based feedback, and integrated feedback all greatly decreased perceived decision effort relative to the absence of any feedback. This indicates that the initial effort of indicating product preferences through the three types of feedback (i.e., the feedback command) is relatively small compared to the effort that would otherwise have been spent screening and comparing a comprehensive list of product alternatives. Although users of integrated feedback usually need to contemplate both a preferred product alternative and preferred attributes, whereas alternative-based feedback requires only a preferred alternative and attribute-based feedback only preferred attributes, users still perceived integrated feedback as requiring significantly less effort than no feedback, which is particularly encouraging.
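The asymmetry in expressive power between the feedback types can be made concrete with a short sketch. The attribute names, the level encoding, and the five hypothetical recommended products below are illustrative assumptions, not the experimental system's implementation.

```python
from itertools import product as cartesian

# Assumed encoding: 8 attributes, each with 4 discrete levels (0-3).
ATTRIBUTES = ["price", "hard_drive", "processor", "memory",
              "screen", "video_card", "weight", "battery"]
LEVELS = range(4)

# Attribute-based feedback: the user may request any combination of
# levels, so the preference space holds 4^8 = 65,536 points.
attribute_space = sum(1 for _ in cartesian(LEVELS, repeat=len(ATTRIBUTES)))
print(attribute_space)  # 65536

# Alternative-based feedback: the user can only point at one of the
# five recommended products, so only five preference anchors exist.
recommended = [dict(zip(ATTRIBUTES, combo)) for combo in
               [(0, 1, 2, 3, 0, 1, 2, 3), (1, 1, 1, 1, 1, 1, 1, 1),
                (2, 0, 2, 0, 2, 0, 2, 0), (3, 3, 3, 3, 0, 0, 0, 0),
                (0, 0, 1, 1, 2, 2, 3, 3)]]
alternative_space = len(recommended)
print(alternative_space)  # 5

# Integrated feedback ("like product B, but with hard drive level 2"):
# start from one anchor and override selected attributes.
query = dict(recommended[1])   # "show me more products like B"
query["hard_drive"] = 2        # "... but change the hard drive"
print(query["hard_drive"])  # 2
```

The integrated query illustrates why this mechanism flexes between the other two: with zero overrides it behaves like alternative-based feedback, and with many overrides it approaches attribute-based feedback, matching the usage pattern observed in the supplementary analysis.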
We found no difference between consumer and expert recommendations in terms of perceived decision quality and effort. Previous findings on the differences between consumer and expert recommendations in terms of trust, however, have been mixed: for example, Senecal and Nantel (2004) and Wang and Doong (2010) found that experts were viewed as more credible, while Chen (2008) found the reverse. To further investigate these results, we examined how the consumer and expert sources were manipulated in these studies. One possible explanation is that a source attributed to a third party is perceived to be more trustworthy than a source attributed to the online store itself. In our study, as both sources were based on the same third party, CNET, they were not perceived to be different.

3.5. CONTRIBUTIONS AND CONCLUSIONS

3.5.1 THEORETICAL CONTRIBUTIONS

Study 2 makes significant theoretical contributions. First, we examine three types of feedback mechanisms that induce perceptual differences among users, namely attribute-based feedback, alternative-based feedback, and integrated feedback. Although these three mechanisms have been discussed in the decision-making literature (e.g., McGinty and Smyth 2007), little research has incorporated feedback mechanisms into the design of an RA or an e-commerce website, and an empirical examination of their effects has not been attempted. That is why, in the extensive review of RA research by Xiao and Benbasat (2007), user feedback aspects of RAs were missing from their comprehensive model. To the best of our knowledge, this is the first study to examine the efficacy of three types of feedback mechanisms in terms of perceived decision effort and perceived decision quality. This study will help researchers better understand individuals' online choice-making behaviors and their use and adoption of feedback mechanisms.
Second, we advance the RA literature by demonstrating that the incorporation of alternative-based feedback and integrated feedback leads to better perceived decision quality than the absence of any feedback and than attribute-based feedback. As mentioned earlier, alternative-based feedback and integrated feedback have been overlooked in the RA literature (e.g., Kamis et al. 2008; 2010). The few designs with mechanisms that can be considered feedback (Wang and Benbasat 2009) only allowed users to iterate over their initial input to the RA as many times as they wished, which is similar to the attribute-based feedback provided in this study. We contribute to the RA literature by showing that alternative-based and integrated feedback with an RA are superior to attribute-based feedback with an RA for achieving better perceived decision quality.

We also highlight the importance of feedback mechanisms for consumer and expert recommendations. Prior research has shown that consumers give greater consideration to, and ultimately are more likely to choose, highly preference-matched personalized content (Tam and Ho 2006; Ho et al. 2008) and to pay for preference-matched goods and services (Randall et al. 2007). As consumer and expert recommendations are not necessarily personalized, their impact on users' perceived decision quality and effort is limited. Thus, future research on recommendation sources should take feedback mechanisms such as attribute-based or integrated feedback into consideration.

We also contribute to the research on recommendation sources. While recent studies have investigated various types of recommendation sources, comparisons between human and technological sources have rarely been made. For example, Wang and Doong (2010) and Chen (2008) investigated the recommendation sources of experts and consumers, but gave no consideration to the relative effects of an RA.
Given that using an online RA is becoming more popular in current e-commerce environments, there has been a surprising lack of empirical research investigating the differences among online RAs, human experts, and the "wisdom of crowds". This study addresses this knowledge gap by bringing the three recommendation sources together in one comparison set.

Finally, we contribute to the preferential choice literature by highlighting the fact that users have a desire to indicate their own preferences. This is evident in the fact that users still want to choose a product that reflects their desires and needs, even when presented with recommendations from experts or consumers. In contrast, when users had already indicated their product preferences to the RA, they found attribute-based feedback less useful, as they wanted to stay close to the preferences they had originally indicated to the RA.

3.5.2 PRACTICAL CONTRIBUTIONS

This study also has significant implications for practitioners. First, the results suggest that feedback mechanisms can generally improve users' perceived decision quality and reduce users' perceived decision effort. Accordingly, e-commerce designers should provide different types of feedback mechanisms to facilitate online consumers' decision making. Integrated feedback is highly recommended, as it has been demonstrated to lead to higher perceived decision quality than the lack of feedback regardless of the recommendation source (RA or consumer and expert recommendations). This is because users can utilize integrated feedback in different ways depending on the environment: when the initial recommendations are from consumers or experts, users make more use of the attribute function in the integrated feedback, and less use of it with an RA's recommendations.
Despite the recent concern that online consumers might lack the motivation or ability to deliberate over a full set of product attributes (Murray and Häubl 2009), we found that users want to express their precise preferences on product attributes during the preferential choice task, either before they see the product alternatives (i.e., with RAs) or after (i.e., with expert and consumer recommendations). Thus, online merchants should have such mechanisms in place at users' disposal.

For RA designers, our results show that the traditional attribute-based feedback does not necessarily produce the best results. In terms of user feedback, it is actually alternative-based or integrated feedback that raises perceived decision quality to a higher level. Thus, when designing an RA interface, practitioners should select the appropriate feedback function to augment the capabilities of the RA.

For website retailers that have consumer and expert recommendations in place but have not utilized RAs, it should be noted that consumer and expert sources with attribute-based or integrated feedback eventually lead to the same level of decision quality as RA usage. In particular, users do not need to go through the initial preference elicitation process and indicate importance weights for product attributes. Thus, as a "shortcut" to compete with other online merchants that offer RAs, these retailers are advised to add attribute-based or integrated feedback to their consumer or expert recommendations to provide better-matching products to their customers.

3.5.3 LIMITATIONS AND FUTURE RESEARCH

As with all studies, there are limitations and opportunities for future research. First, we investigated only three types of feedback in the current study.
Future research should investigate the effects of other types of feedback, such as rating-based feedback, which allows users to rate the initial product recommendations instead of just picking one recommended product as their preferred alternative. Second, the laptop examined in this thesis is generally considered a high-involvement product, since it is costly and has multiple product features; it is beyond the scope of this study to consider several product categories simultaneously. Subsequent studies should consider other product types (e.g., low-involvement or experiential products) when replicating the research in order to generalize the findings. Another possible stream of research is to use our study as a starting point to evaluate whether the results change under potential moderators, such as user product knowledge.

3.5.4 CONCLUSIONS

Given the large number of products on e-commerce websites and the large population of users searching for and comparing products online (Ricci and Werthner 2007), online firms are increasingly utilizing decision aids to assist online shoppers in arriving at informed decisions. By examining consumers' perceptions of decision quality and decision effort toward three online feedback mechanisms, we contribute to theoretical advances in consumer decision making in e-commerce. Notably, this research fills a gap in the RA literature by highlighting the role of alternative-based and integrated feedback in further improving users' perceived decision quality. Additionally, we fill a void in the literature by investigating how the effectiveness of feedback is moderated by recommendation sources. In doing so, we advance Preferential Choice Theory in the context of online choice making supported by recommendation sources and feedback mechanisms. Likewise, we offer guidelines for promoting the effective provision of online decision support services for consumers.
CHAPTER 4 THE EFFECTS OF MULTIPLE RECOMMENDATION SOURCES ON USERS' CHOICE BEHAVIOR AND PERCEIVED DECISION QUALITY

4.1 INTRODUCTION

With the proliferation of e-commerce, more and more websites are providing product recommendations from multiple sources, such as online recommendation agents (RAs), consumers, and experts, to assist customers in choosing a suitable product and saving time when selecting from a wide variety of product alternatives. For example, Bestbuy.com, a well-recognized multinational retailer of technology and entertainment products and services, provides both RAs and consumer ratings on its website; Computershopper.com provides both computer expert and consumer ratings of electronic products; and Procompare.com offers both RAs and expert recommendations on computer products. Are these recommendation sources (RAs, consumers, and experts) useful in influencing customers' choices? Do all three sources have the same impact?

Given the prevalence of product recommendations in an online context, the impact of online recommendation sources has received some attention in academic research. These sources have been demonstrated to greatly facilitate consumers' decision processes and assist them in making better choices. For instance, RAs, which assist consumers by eliciting their product preferences and then making product recommendations that satisfy those needs (Xiao and Benbasat 2007), have been shown to influence perceived decision quality (Wang and Benbasat 2009). However, are customers adequately equipped to express their product attribute needs and able to evaluate the subsequently recommended products?

As alternatives to an RA's recommendations, some customers rely on other information sources, such as expert and consumer opinions, to assist them in their purchase decisions. The effectiveness of these personal sources can be traced back to the marketing and communication literature.
For example, consumers' ratings have played an increasingly important role in consumers' purchase decisions as a form of word-of-mouth recommendation (Chen and Xie 2008). Positive expert recommendations have been shown to enhance consumers' attitudes and behavioural intentions (Wang 2008). However, are expert and consumer recommendations effective when compared to RAs under all circumstances?

While prior studies have extended our knowledge of the effect of recommendation sources, relatively little research has examined which sources are preferred by which types of consumers and which sources are more effective under what circumstances. Although a few studies have examined the recommendation efficacy of consumers and experts together (Senecal and Nantel 2004; Wang and Doong 2010), very few have compared them to a truly interactive online RA. In addition, subjects in these studies were exposed to a single recommendation source only; to our knowledge, no studies have examined the relative effect of one source over the others when all three sources (RAs, consumers, and experts) are presented to consumers together.

Websites can readily provide multiple sources, and it is somewhat naïve to think that customers will rely solely on one source without consulting others. It is therefore important to consider the more realistic situation in which multiple sources are simultaneously available. Thus, an opportunity exists to study this phenomenon, such as recommendation convergence and divergence among these sources in an e-commerce context. Recommendation convergence refers to the existence of products jointly recommended by two or more sources. Recommendation divergence means that no products have been jointly recommended by two or more sources. When consumers are presented with recommendations from multiple sources, a key issue is that the recommendations made by different sources will not necessarily be in agreement.
Recommendations made by an RA based upon a consumer's specifically expressed attribute needs may be quite different from what most other consumers express or what experts would generally recommend. However, a subset of the recommended products might sometimes overlap among these sources, such as between experts and an RA. Given these possible divergent and convergent recommendations, how would a consumer make use of them? Studying these issues will contribute to our knowledge of how customers react to recommendation divergence and convergence from multiple recommendation sources.

While some online merchants provide multiple types of recommendation sources on a single website, others do not, whether by choice or because of cost implications. This raises interesting questions as to whether the availability of different recommendation sources influences consumers' choice behavior and beliefs concerning their choices.

Grounded in the Elaboration-Likelihood Model (ELM), we investigate whether expert and consumer recommendations have a stronger influence than RAs on the choice selection of consumers with low product knowledge, and whether RAs have a stronger influence than expert and consumer recommendations on choice selection when consumers' task involvement is high. In addition, we draw upon Cognitive Dissonance Theory to predict that consumers will avoid cognitive dissonance by choosing a product recommended by multiple sources rather than by a single source. Specifically, we pose the following three research questions:

1) Do product knowledge and task involvement influence consumers' adherence to the recommendation of one source (e.g., RA) versus the others (e.g., experts and consumers)?
2) What is the impact of adherence to certain recommendation sources (RAs vs. experts and consumers) on perceived decision quality?
3) How does recommendation convergence among multiple sources influence consumers' choice selection and perceived decision quality?

The structure of Chapter 4 is as follows. In the next section, theoretical foundations for studying the impact of multiple recommendation sources are reviewed. Associated hypotheses are presented in Section 4.3. An overview of the methodology and the procedures of the experiment is then described in Section 4.4, followed by contributions and conclusions in Section 4.5.

4.2 THEORETICAL FOUNDATIONS

With the aim of studying the role of product knowledge and task involvement in consumers' reliance upon one source over the others, and how consumers make choices when recommendations from multiple sources converge, we review the Elaboration Likelihood Model (ELM) and Cognitive Dissonance Theory. We also review Preferential Choice Theory to explain, in the current context, what constitutes a quality message as stipulated in ELM.

4.2.1 THEORY OF PREFERENTIAL CHOICE

Preferential choice is a well-developed research area in many disciplines, including marketing, sociology, psychology, and economics, and is a pervasive and remarkably important human behavior (Carroll et al. 1991). Preferential choice problems normally entail a large number of alternatives and attributes. The gist of Preferential Choice Theory is that an individual may have an ideal value for an attribute, such as the capacity of a hard drive on a laptop computer, and that his/her utility falls off as the attribute value moves away from that ideal in either direction (Coomb 1974). The assumption that each individual has preferences over attribute values is consistent with the concept of rationality used by economic theorists. Most economists define rationality in terms of consistency principles and evaluate the rationality of behavior using principles of consistency within a system of preferences and beliefs (Rieskamp et al. 2006).
The most important principle of preferential choice is perfect consistency, or invariant preferences across occasions (Rieskamp et al. 2006). That is, a person's choice of a particular alternative is predicated on what s/he prefers. Based on this theory, we use perceived decision quality (i.e. fit with preferences) as the dependent variable in this study.

4.2.2 ELABORATION-LIKELIHOOD MODEL (ELM)

ELM posits that persuasion can act via a central or a peripheral route. Elaboration in ELM refers to "the extent to which a person scrutinizes the issue-relevant arguments contained in the persuasive communication" (Petty and Cacioppo 1986, p. 7). When elaboration is high, the recipient is influenced through the central route to persuasion; i.e., people are more likely to carefully examine the content and consider other issue-relevant material (e.g., personal preferences on product attributes). As a result, people are influenced more by argument content. When the likelihood of elaboration is low, the peripheral route is more influential; i.e., people attempt to minimize their cognitive effort. Hence, they are more likely to be influenced by heuristic cues, such as source factors (e.g., experts' opinions), without careful consideration of the argument content (Petty and Cacioppo 1986; O'Keefe 2002).

An extensive empirical literature has tested ELM in different contexts and has conceptualized variables (e.g., argument quality or source) as acting through the central or peripheral route depending on the study context. According to ELM, the likelihood of elaboration will be high when people are highly motivated to process the arguments and/or when they have a high level of ability to do so. When either of these two factors is at a low level, the likelihood of elaboration is expected to be low (O'Keefe, 1990).
4.2.3 COGNITIVE DISSONANCE THEORY

Cognitive Dissonance Theory (Festinger, 1957) stipulates that when an individual holds two or more elements of knowledge that are relevant to each other but inconsistent with one another, a state of discomfort (i.e. dissonance) is created. For example, dissonance may be aroused after being exposed to conflicting information from different sources. When dissonance occurs, people have a motivational drive to reduce it. They do this by changing their attitudes, beliefs, and/or actions, seeking and recalling consonant information, and avoiding dissonant information.

Although most studies applying Cognitive Dissonance Theory have focused on the post-decision situation, some marketing scholars (Bawa & Kansal, 2008; O'Neil & Palmer, 2004) have recently expanded the application of the theory beyond post-decision and post-purchase situations by examining the role of cognitive dissonance in broader contexts. For instance, Kim (2011) utilized Cognitive Dissonance Theory to explain how customers process information when facing a word-of-mouth message that is incongruent with their existing beliefs, contending that "cognitive dissonance can occur not only in a post-purchase situation but also at various stages of the consumption process" (Kim 2011, p. 98). While numerous studies have used Cognitive Dissonance Theory to explain banking system usage (Bhattacherjee 2001) and consumer behavior (Kim 2011), to our knowledge no study to date has applied the theory to the context of multiple-source recommendations in order to investigate the role of recommendation convergence and divergence in influencing consumers' choice decisions.
4.3 HYPOTHESES DEVELOPMENT

4.3.1 ROLES OF PRODUCT KNOWLEDGE AND TASK INVOLVEMENT

Within the ELM framework, two types of factors – message and source – have been identified as influencing persuasion through the central and peripheral routes (Petty et al. 1997; Petty and Wegener 1998). As reviewed in Section 4.2.2, the likelihood of elaboration will be high when people have a high level of ability to understand the strength of a message. Specifically, higher perceived product knowledge could affect the extent of information processing through ability factors, because ability provides sufficient background to discern the merits of strong arguments and the flaws in weak ones (Petty and Cacioppo 1986; Petty 1994).

Further, people with differing levels of product knowledge may weigh evaluative criteria differently in evaluating and using information (Alba and Hutchinson 1987). According to Alba and Hutchinson (1987), people with less product knowledge are more likely to weigh highly those attributes that are easily understood or peripheral, such as opinions from the "wisdom of the crowds". In contrast, consumers with high product knowledge will place more weight on functional product information, such as inherent product quality. In the context of the preferential choice problem, a quality product is one that fits a consumer's preferences, according to the Theory of Preferential Choice, which posits that an individual may have certain preferences on product attributes (Coomb 1974). Thus, we expect that the source of the recommendation, such as experts or consumers, acts as a peripheral cue. These cues have been termed the "authority heuristic" (Sundar 2009), "herd behavior" (Chen 2008), or the "bandwagon heuristic" (Sundar 2007).
As such, consumer and expert recommendations will be weighed more heavily by consumers with low product knowledge, because they do not have sufficient knowledge to specify the exact value for each attribute (Rathnam 2005), nor can they gauge the importance of each product attribute. It is expected that they will find the use of an RA less informative. As they cannot fully rely on their own product knowledge to perform product evaluations successfully (Sharma and Patterson 2000), such as evaluating products recommended by an RA, they might instead rely on relational and tangible cues, characteristics of recommendation sources (e.g., experts and consumers), to form their product evaluations. In contrast, as consumers with higher product knowledge can process product information more effectively (Eisingerich and Bell 2008), they are better able to articulate their preferences for product attributes to the RA, and to evaluate whether an RA's product recommendations fit their preferences. Their reliance on an RA's recommendations will therefore be higher. Thus, we hypothesize that:

H1. Consumers with high product knowledge are more likely to rely on an RA's recommendations to make a choice decision rather than on the recommendations from experts or consumers.

Source factors (e.g., credibility) are generally peripheral factors, as they have been found to operate under conditions of low but not high task involvement (Johnston and Coolen 1995), except when the arguments in the message are ambiguous, in which case the source will bias the processing of the communication (Chaiken and Maheswaran 1994). Indeed, "If one's personal relevance of an argument is low, one puts less effort to read and think about the argument, thus is more influenced by the length of the argument, rather than the argument content itself" (O'Keefe 2002, p. 145). Similarly, when involvement is low, people will be more easily influenced by the information source than by the personalized content.
In contrast, when involvement is high, people will be more influenced by the personalized content than by the source. Thus, we hypothesize that:

H2. Consumers with high task involvement are more likely to rely on an RA's recommendation to make a choice decision than on the recommendations from experts or consumers.

The previous two hypotheses predict which types of consumers, and under what circumstances, will be more likely to adhere to a certain recommendation source. To capture the potential downstream effects of adherence to a source's advice in a context with multiple recommendation sources, we investigate an important consequence: higher perceived decision quality. The RA literature indicates that by eliciting consumers' preferences on product attributes, an RA can screen large numbers of alternatives and enable individuals to make complex decisions with high quality (Todd and Benbasat 1994; Singh and Ginzberg 1996). We contend that this will be true even in a context with multiple recommendation sources, where consumers have the opportunity to access and review what experts and consumers have recommended. Their eventual consideration of an RA's advice indicates that they have weighed the advantages and disadvantages of each source, and thus have a clearer understanding of which source's recommendation best fits their preferences. Thus, we hypothesize that:

H3. As compared to those relying only on consumer and/or expert recommendations to reach their choice decision, consumers who rely on an RA's recommendations will perceive a higher level of decision quality12.

4.3.2 THE EFFECT OF RECOMMENDATION CONVERGENCE FROM MULTIPLE SOURCES

Different sources might recommend different sets of products. Sometimes, a subset of one source's recommended products may overlap with another source's recommendations. How will recommendation convergence influence a consumer's behavior and perceived decision quality?
We draw on Cognitive Dissonance Theory (Festinger, 1957) to guide the following hypotheses regarding consumers' behavior and beliefs when multiple sources provide both convergent and divergent product recommendations. When consumers experience conflicting recommendations from multiple sources (i.e. consumers, experts, or the RA), they may face cognitive dissonance. When dissonance occurs, it can be reduced by searching for information that supports one side (i.e. consonant information). Research shows that the preference for consonant information increases with the level of cognitive dissonance, as adding consonant information helps to further stabilize the cognitive system (Frey, 1986). Thus, we expect that consumers will seek to reduce their dissonance by searching for convergent recommendations among the three sources. Further, the presence of jointly recommended products provides an opportunity for the consumer to avoid the dissonant recommendations provided by different sources. In contrast, if consumers do not follow the convergent recommendations, they are more likely to face greater dissonance, because the convergent recommendations will present stronger conflicting information compared to what they chose. Thus, we predict that:

H4. When consumers are presented with both recommendations that are convergent among multiple sources and recommendations that are divergent among multiple sources, consumers are more likely to follow the convergent recommendations.

12 We do not hypothesize an effect on perceived decision effort because it is not clear whether users will spend less effort in using certain recommendation sources. Prior literature has indicated that RAs reduce users' effort in decision making. However, users who rely on consumer and expert recommendations are less motivated in making a decision, which should also lead to less perceived effort.
When consumers have lower levels of cognitive dissonance, they will have higher levels of confidence and less regret in their choices, as convergent recommendations provide greater assurance for their decisions. Previous research has found that a lower level of cognitive dissonance reduces perceived risk (Nandan, 2005) and is a strong predictor of perceived usefulness (Bhattacherjee 2001) and repatronage behavior (Kim 2011). Thus, we predict that:

H5. Consumers following convergent recommendations will perceive a higher level of decision quality than those following divergent recommendations.

4.4 METHODOLOGY

An online experiment with six groups, manipulating the recommendation convergence among multiple sources (six types; see Table 4-2), was implemented to test the proposed hypotheses.

4.4.1 SHOPPING TASK AND INCENTIVES

Subjects were randomly assigned to one of the six experimental groups. The subjects' task was to choose a tablet for themselves from one of the shopping websites specifically designed for this study. A tablet is a complete personal mobile computer, larger than a mobile phone or personal digital assistant, integrated into a flat touch screen and primarily operated by touching the screen. Tablets became popular after Apple introduced the iPad in 2010, and tens of new tablets were soon announced to compete with it (IDC 2010). In this study, each experimental website contained the same 30 tablets, all of which were available from leading online commercial websites at the time of the study, after the iPad became available online from multiproduct sellers. Based on these commercial websites, seven key product attributes and features (price, storage, processor, memory, screen size, weight, and battery) were included for each product.
The rationale for limiting the number of product attributes was that a person's span of immediate memory is about 7 +/- 2 items (Miller 1956); providing 10 or more product attributes tends to drastically decrease human abilities to cognitively process alternatives (Malhotra 1982) and has been demonstrated to decrease decision quality significantly (Wan et al. 2009).

E-commerce shoppers were recruited from a North American panel accessed via a marketing research firm. Individuals were provided with a point-based incentive (redeemable for various prizes available through the marketing firm) for their assistance in the study. In addition, they were told that serious and honest participants would be awarded a $15 Visa Gift Card. Previous research (e.g., Mao and Benbasat 2000) has indicated that this is important, as it serves to motivate subjects to view the experiment as a serious online experience session and to increase their involvement.

4.4.2 EXPERIMENTAL PROCEDURES AND WEB SITE DESIGN

Each of the six websites was designed with three recommendation sources, namely, RAs, experts, and consumers. In each experimental condition, consumers could freely choose which of the available recommendation sources to seek advice from, as well as the sequence in which to access them (see Figures 4-1, 4-2, and 4-3). They might access a single recommendation source alone, or they might access all of the recommendation sources available on the assigned website. When the RA source was accessed, consumers could indicate their preferred product attributes and the weights assigned to the attributes (Table 4-1 and Figure 4-2) in order to get the top five recommended products from the RA (see Figure 4-3, automated agent's recommendations).
For example, a subject could specify a battery attribute of 6 hours and a price attribute of $350, etc., and the importance he/she attached to these requirements (Figure 4-2). The RA then ranked the products based on the consumer's indicated preferences and presented the top five products that best matched those preferences on the first page of the website (Figure 4-3). Subjects could also see the ranking of the rest of the twenty-five products; these were displayed five at a time on the subsequent pages, in decreasing level of match. Since consumers had the option of reviewing on their own all the product alternatives available on the website, they could voluntarily decide whether or not to follow the RA's advice.

Table 4-1 Manipulation of Recommendation Sources (statements presented to consumers)
RA: "The agent has ranked the products for you based on your indicated overall preferences. The top five recommended tablets are shown on the first page, followed by the next five recommended on the next page."
Experts: "The five products on the first page are highly recommended by experts from CNET, an independent technological organization that compiles data for tablets that are rated highly by experts."
Consumers: "The five products on the first page are highly recommended by consumers from CNET, an independent technological organization that compiles data for tablets that are rated highly by consumers."

Figure 4-1 Screen of the Initial Product Page (Note: The position (left, middle, and right) of each source in the websites was counterbalanced in the experiment.)

Figure 4-2 Website with Three Recommendation Sources Accessed

Figure 4-3 Website with RA's Recommendations Generated

When the experts' recommendation source was accessed, five product alternatives based on the CNET editors' recommendations were presented to consumers and were stated as being recommended by the experts (see Table 4-1 for the descriptions).
Subjects could also see the rest of the twenty-five products, which were displayed five products at a time on the subsequent pages (Figure 4-2). A similar manipulation was applied to the consumers' recommendation source (see Table 4-1 and Figure 4-2).

4.4.3 MANIPULATION OF THE OUTPUT OF SOURCE RECOMMENDATIONS

The proposed hypotheses were tested through an online experiment with six groups (see Table 4-2). Experts' recommendations were based on the CNET editors' top five recommendations13 and remained constant across the six experimental groups. Consumers' recommendations were based on CNET consumers' highest ratings of the five tablet products in the market, and two of these (hereafter Products D and E) overlapped with the experts' recommendations. We used these five consumer-recommended products in Groups 5 and 6. To create conditions (Groups 1-4) in which consumers' recommendations diverged from the experts', we replaced two of the consumer-recommended products (Products D and E) with those rated 6th and 7th by consumers, so that there was no overlap between expert and consumer recommendations. For the RAs, we manipulated the RA's output (Groups 2-6) to create different types of recommendation convergence among the three sources. Detailed manipulations of the RA's output are described in Table 4-2.

Table 4-2 Manipulation of RA's Output
Group 1 (Control): Control group (the RA could recommend 0-5 products that overlapped with expert and/or consumer recommendations).
-- The RA's recommendations were not manipulated and were based entirely on the product preferences the consumer indicated to the RA.
Group 2 (None): All three sources recommended five distinct products.
-- When the RA's supposedly top five recommendations contained a product that was commonly recommended by consumers and/or experts, this overlapped product was swapped with the RA's next recommended product (e.g., top 6th).
Group 3 (RA & Experts): The RA and experts had two commonly recommended products.
-- When the RA's supposedly top five recommendations contained a product that was commonly recommended by consumers, this overlapped product was swapped with the RA's next recommended product (e.g., top 6th).
-- When the RA's supposedly top five recommendations contained more than two of the experts' recommended products, the third one was swapped with the RA's next recommended product (e.g., top 6th).
-- When the RA contained 0 (or 1) of the experts' recommended products, the highest- (or second-highest-) ranked experts' products outside of the RA's top five were swapped with the RA's top 5th (and top 4th) recommendations.

13 CNET editors only recommended 5 tablet products.

Group 4 (RA & Consumers): The RA and consumers had two commonly recommended products.
-- When the RA's supposedly top five recommendations contained a product that was commonly recommended by experts, this overlapped product was swapped with the RA's next recommended product (e.g., top 6th).
-- When the RA contained more than two of the consumers' recommended products, the third one was swapped with the RA's next recommended product (e.g., top 6th).
-- When the RA contained 0 (or 1) of the consumers' recommended products, the highest- (or second-highest-) ranked consumers' products outside of the RA's top five were swapped with the RA's top 5th (and top 4th) recommendations.

Group 5 (Experts & Consumers): Experts and consumers had two commonly recommended products (Products D and E).
-- When the RA's supposedly top five recommendations contained a product that was recommended by either experts or consumers, this overlapped product was swapped with the RA's next recommended product (e.g., top 6th).

Group 6 (All three sources): All three sources had two commonly recommended products (Products D and E).
-- When the RA's supposedly top five recommendations contained a product that was commonly recommended by consumers and/or experts (not Products D and E), this overlapped product was swapped with the RA's next recommended product (e.g., top 6th).
-- When the RA's supposedly top five recommendations contained only one of Products D and E, the missing product was swapped with the RA's top 5th recommendation.
-- When the RA's supposedly top five recommendations contained neither Product D nor Product E, both D and E were swapped with the RA's top 5th and 4th recommendations.

4.4.4 CONSTRUCT MEASUREMENTS

Table 4-3 lists the measurements for the dependent variables. The measures for product knowledge, task involvement, and recommendation convergence were adopted from scales validated in prior studies. Product knowledge and recommendation convergence were measured using a Likert scale ranging from strongly disagree (1) to strongly agree (7). A semantic differential scale was used to measure task involvement on a seven-point rating scale.

Table 4-3 Measurements (construct: items; source)
Recommendation Adherence: Determine from which source's top five recommendations a product was selected. (Developed for this study)
Adherence to Convergent Recommendations: Identify whether a selected product was commonly recommended among the top five of the various recommendation sources. (Developed for this study)
Product Knowledge: 1. I possess good knowledge of tablets. 2. I can understand almost all the specifications (e.g., memory, hard drive) of tablets. 3. I am familiar with basic tablet specifications (e.g., memory, CPU). (Sharma and Patterson 2000; Eisingerich and Bell 2008)
Task Involvement: The product selection task that I have experienced in the website was: Irrelevant/Relevant to me; Of no concern/Of concern to me; Didn't matter/Mattered to me; Meant nothing to me/Meant a lot to me; Unimportant/Important. (McQuarrie and Munson 1992)
Recommendation Convergence (Manipulation Check): 1. I realized that the same product(s) were recommended in different sources' top five recommendations. 2. I observed that different sources recommended the same product(s) in their top five recommendations. 3. I found that different sources tended to agree on what top five products should be recommended. (Adapted from Miranda and Bostrom 1993-1994)

4.5 DATA ANALYSIS

4.5.1 SAMPLE CHARACTERISTICS

250 e-commerce shoppers participated in this study. Among them, 43.3% were male. The mean age was 47.4 years, and the majority of participants (62.9%) were between 31 and 59 years old. 42.5% of the participants had a bachelor's degree or higher, 23% had a two-year college degree, and the rest had a high school diploma or lower. On average, the subjects had been using the Internet for 14.5 years and spent 28.7 hours on the Internet each week; 76.4% of the participants used the Internet for at least 15 hours each week. In general, they were familiar with online shopping (6.25/7). The average reported knowledge level of the product used in the task (tablets) was 3.88/7.0.

4.5.2 MANIPULATION CHECK

Computer logs were checked to determine whether subjects accessed each of the three recommendation sources. This was done by checking whether subjects accessed the three links shown in Figure 4-1. Table 4-4 shows that 17.2% of the total sample accessed one source only, 22.4% accessed two sources only, and the rest, 60.4%, accessed all three sources. A Chi-square test indicates that consumers were significantly (p<0.05) more likely to access the expert and consumer recommendations than the RAs.
Table 4-4 Statistics for Number of Sources Accessed
(columns: Number of Sources Accessed | Sample | Sample/250 | Experts | Consumers | RAs)
1 | 43 | 17.2% | 20 | 13 | 10
2 | 56 | 22.4% | 54 | 45 | 13
3 | 151 | 60.4% | 151 | 151 | 151
TOTAL | 250 | 100% | 225 | 209 | 174
PERCENTAGE | | | 90% | 83.6% | 69.6%

Another manipulation check was performed among the six experimental groups in terms of perceived recommendation convergence among the three sources. Results (see Table 4-5) indicate that Group 2 (i.e. no convergence) was perceived to have a lower level of recommendation convergence than each of the other five groups (p<0.05), while no difference was found among any of the other five groups.

Table 4-5 Perceived Convergence
(columns: Convergence Level | Mean | SD | N)
Group 1 (Control) | 5.39 | 1.39 | 28
Group 2 (No convergence) | 4.33 | 1.76 | 27
Group 3 (RA & experts) | 5.68 | 1.37 | 23
Group 4 (RA & consumers) | 5.27 | 1.34 | 25
Group 5 (Experts & consumers) | 5.24 | 1.42 | 24
Group 6 (All 3 sources) | 5.39 | 1.11 | 24
Total | 5.20 | 1.46 | 151

4.5.3 ADHERENCE TO THE RECOMMENDATION SOURCES

Table 4-6 reports consumers' adherence to the three recommendation sources. It indicates that the RA had the highest recommendation adherence rate (40.61%), followed by the experts (38.29%) and then consumers (34.53%).

Table 4-6 Statistics for Recommendation Adherence
(columns: Number of Sources Accessed | Experts | Consumers | RAs)
1 | 17 | 6 | 8
2 | 16 | 27.5 | 8.5
3 | 53.2 | 38.6 | 54.2
Number of consumers that adhered to the respective source | 86.16 | 72.16 | 70.66
Number of consumers that accessed the respective source | 225 | 209 | 174
Percent of consumers that adhered to the recommendation of the source accessed | 38.29% | 34.53% | 40.61%

Note: If consumers adhered to one source's recommendation, that source obtains one point. If consumers adhered to two sources' recommendations, each source counts 0.5. If consumers adhered to three sources' recommendations, each source counts 0.333.
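The fractional scoring rule in the note to Table 4-6 can be expressed in a few lines of code. This is an illustrative reconstruction of the stated rule, not the analysis script used in the study; the function and variable names are our own.

```python
def adherence_points(adhered_sources):
    """Split one choice's credit equally among all sources whose
    top-five recommendations contained the chosen product."""
    if not adhered_sources:
        return {}
    share = 1.0 / len(adhered_sources)  # 1, 0.5, or 0.333 for 1-3 sources
    return {source: share for source in adhered_sources}

# Example: a chosen product recommended by both the RA and the experts
print(adherence_points(["RA", "Experts"]))  # each source gets 0.5
```

Summing these shares over all participants yields the fractional row totals reported in Table 4-6 (e.g., 27.5 for consumers among those who accessed two sources).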
4.5.4 TESTING OF THE HYPOTHESES

H1 (and H2) proposed that consumers with high product knowledge (task involvement) are more likely to rely on an RA's recommendations, rather than the recommendations from experts and consumers, to reach their choice decision. We regressed consumers' recommendation adherence (coded 1 for those who relied on the RA and 0 otherwise) on product knowledge and task involvement, with product importance (footnote 14) as a control variable, and found that both product knowledge (β=0.14, p<0.05) and task involvement (β=0.16, p<0.05) had significant effects, indicating that consumers with high product knowledge or task involvement relied more on recommendations from the RA than on those from experts or consumers, thereby supporting H1 and H2.

14. Sample measurement items for product importance are 1) For me, tablets are important; and 2) For me, tablets are fun.

H3 compared the level of perceived decision quality between consumers who relied on the RA's recommendations and those who did not. An independent t-test reveals that those who relied on the RA's recommendations (N=97) perceived a higher level of decision quality (mean difference=0.34, p<0.01) than those who relied on consumer and/or expert recommendations only (N=153), supporting H3 (footnote 15). As a post-hoc test, we performed two further t-tests to examine whether reliance on expert or consumer recommendations made a difference. We coded those who relied on expert recommendations as 1 and 0 otherwise; the results suggest that no significant difference (p>0.05) exists between subjects who relied on expert recommendations and those who did not. Similarly, we coded those who relied on consumer recommendations as 1 and 0 otherwise; no significant difference (p>0.05) was found between these two groups.
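The adherence regression above can be sketched as follows. This is a minimal illustration with simulated data, not the study's dataset, and it fits a linear probability model by ordinary least squares for simplicity; the original analysis may have used a different estimator for the binary outcome.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 250  # matches the study's sample size; the data themselves are simulated

# Simulated 7-point-scale predictors.
knowledge = rng.normal(4.0, 1.0, n)
involvement = rng.normal(4.5, 1.0, n)
importance = rng.normal(5.0, 1.0, n)  # control variable

# Simulate RA adherence (1/0) rising with knowledge and involvement.
logit = -4.0 + 0.4 * knowledge + 0.4 * involvement
adhered = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

# Regress adherence on the predictors plus an intercept.
X = np.column_stack([np.ones(n), knowledge, involvement, importance])
beta, *_ = np.linalg.lstsq(X, adhered, rcond=None)
print(dict(zip(["intercept", "knowledge", "involvement", "importance"],
               np.round(beta, 3))))
```

With real data, the coefficients on knowledge and involvement play the role of the reported β=0.14 and β=0.16.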
To test whether consumers are more likely to select a product that was jointly recommended by multiple sources (H4), we compared the percentage of consumers selecting an overlapped product with the probability of randomly selecting an overlapped product (see Table 4-8 for the breakdown by experimental group). In calculating the latter, we controlled for the number of times a product was presented to the consumer: a product appearing twice (once in each of two sources) has twice the probability of being chosen by chance as a product appearing only once (in a single source). Thus, if consumers were choosing randomly, the probability of choosing an overlapped product in Group 3 (RA & expert convergence) is 26% (2 overlapped products × 2 appearances each, in the RA's and the experts' lists, divided by the 15 recommendation slots). A one-sample t-test comparing the percentage of consumers (48%) following the convergent recommendations with the percentage expected under random selection (31.8%) indicates that consumers were more likely to follow convergent recommendations than divergent recommendations (p<0.001), supporting H4.

H5 evaluated whether those following convergent recommendations would perceive a higher level of decision quality. In line with the hypothesis, we excluded Group 2 (no convergence), in which customers were not presented with convergent recommendations, resulting in a sample size of 124. An independent t-test comparing those who selected an overlapped product (N=60) and those who did not (N=64) revealed a significant difference (mean difference=0.41, p<0.05) in perceived decision quality, supporting H5.

15. Recommendation convergence between the RA & consumers, between the RA & experts, and among the RA, consumers & experts was not found to be significantly better than the RA alone in terms of perceived decision quality (p>0.05).
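The chance baselines used here and in Table 4-8 reduce to a one-line calculation, sketched below. This is our illustration and the function name is ours, not the authors'.

```python
def chance_of_overlap(n_overlapped, n_sources_sharing, total_slots=15):
    """Probability of randomly picking an overlapped product when each of
    n_overlapped products appears in n_sources_sharing of the lists, and the
    three sources expose total_slots (3 x 5) recommendation slots."""
    return n_overlapped * n_sources_sharing / total_slots

# Group 3 (RA & experts): 2 overlapped products, each appearing twice.
print(round(chance_of_overlap(2, 2), 3))  # 0.267, reported as 26% in the text
# Group 6 (all 3 sources): 2 overlapped products, each appearing three times.
print(round(chance_of_overlap(2, 3), 3))  # 0.4, the 40% baseline in Table 4-8
```

Comparing each group's observed selection rate against this baseline gives the difference column of Table 4-8.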
A post-hoc analysis found that acceptance of convergent recommendations between consumers & the RA, and between experts & the RA, each led to higher perceived decision quality (p<0.05) than acceptance of convergent recommendations between consumers and experts. A summary of the outcomes of hypothesis testing is presented in Table 4-7.

4.5.5 SUPPLEMENTARY ANALYSIS ON CONVERGENT RECOMMENDATIONS

The above analyses reveal that, overall, convergent recommendations impact consumers' choice behaviour and perceived decision quality. We further analyzed which source combinations of convergent recommendations have the strongest impact. The results for each group are presented in Table 4-8. We found that consumers were more likely to choose an overlapped product in Group 1 (Control) and Group 3 (convergence between the RA & experts). A further analysis of Group 1 (Control) found that although an overlapped product between the RA & experts had roughly the same chance of being chosen at random as one between the RA & consumers (18.9% vs. 20.6%), consumers were still more likely (mean difference=22.6%, p<0.05) to choose an overlapped product between the RA & experts than between the RA & consumers, corroborating the finding in Group 3 that consumers were more likely to choose a product jointly recommended by the RA and experts.

Table 4-7 Hypotheses Testing Results

Hypothesis | Supported?
H1. Consumers with high product knowledge are more likely to rely on an RA's recommendations to make a choice decision rather than the recommendations from experts or consumers. | Yes
H2. Consumers with high task involvement are more likely to rely on an RA's recommendations to make a choice decision than the recommendations from experts or consumers. | Yes
H3. As compared to those relying only on consumer and/or expert recommendations to reach their choice decision, consumers who rely on an RA's recommendations will perceive a higher level of decision quality. | Yes
H4.
When consumers are presented with both recommendations that are convergent among multiple sources and recommendations that are divergent among multiple sources, consumers are more likely to follow the convergent recommendations. | Yes
H5. Consumers following convergent recommendations will perceive a higher level of decision quality than those following divergent recommendations. | Yes

Table 4-8 Percent Selecting an Overlapped Product

Convergence Level | Percent of consumers choosing overlapped products | Probability of choosing an overlapped product (if by chance) | Difference | Sig. (p value)
Group 1 (Control) | 87.5% | 39.6% (=2.97×2/15) | 47.9% | 0.000
Group 2 (No convergence) | 0% | 0% | 0% | N/A
Group 3 (RA & experts) | 60.9% | 26% (=2×2/15) | 34.9% | 0.003
Group 4 (RA & consumers) | 32.1% | 26% (=2×2/15) | 6.1% | 0.500
Group 5 (Experts & consumers) | 29.2% | 26% (=2×2/15) | 3.2% | 0.741
Group 6 (All 3 sources) | 34.6% | 40% (=2×3/15) | -5.38% | 0.576

4.5.6 SUPPLEMENTARY ANALYSIS ON ADHERENCE TO SOURCES AND CONVERGENT RECOMMENDATIONS

A 2 (adherence to the RA's advice or not) × 2 (adherence to overlapped products or not) ANOVA (see footnote 16) on perceived decision quality was conducted to investigate the relative effects of adherence to the RA's advice and adherence to overlapped products. Table 4-9 indicates that adherence to the RA's advice significantly affects perceived decision quality, while adherence to overlapped products and the interaction effect were not significant. Similarly, we performed two further ANOVAs for adherence to consumers' (Table 4-10) and experts' (Table 4-11) advice respectively. Table 4-10 reveals that adherence to overlapped products has a significant main effect on perceived decision quality, but adherence to consumers' advice does not. Similarly, Table 4-11 shows that adherence to overlapped products has a significant main effect on perceived decision quality, but adherence to experts' advice does not.
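The 2 × 2 ANOVA logic can be reproduced on a balanced toy dataset, sketched below. The ratings are simulated, not the study's (unbalanced) data; with equal cell sizes the sums of squares partition cleanly.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 30  # observations per cell in this balanced toy design
# Factor A: adhered to the RA's advice (0/1); factor B: chose an overlapped
# product (0/1). We simulate a main effect of A only.
data = {(a, b): rng.normal(4.5 + 0.5 * a, 1.0, n)
        for a in (0, 1) for b in (0, 1)}

all_vals = np.concatenate(list(data.values()))
grand = all_vals.mean()
mean_a = {a: np.r_[data[(a, 0)], data[(a, 1)]].mean() for a in (0, 1)}
mean_b = {b: np.r_[data[(0, b)], data[(1, b)]].mean() for b in (0, 1)}

ss_a = 2 * n * sum((m - grand) ** 2 for m in mean_a.values())
ss_b = 2 * n * sum((m - grand) ** 2 for m in mean_b.values())
ss_within = sum(((v - v.mean()) ** 2).sum() for v in data.values())
ss_cells = n * sum((v.mean() - grand) ** 2 for v in data.values())
ss_ab = ss_cells - ss_a - ss_b  # interaction sum of squares

df_err = 4 * (n - 1)
for name, ss in [("RA adherence", ss_a), ("overlap", ss_b), ("interaction", ss_ab)]:
    f = ss / (ss_within / df_err)  # each effect has df = 1
    print(f"{name}: F = {f:.3f}, p = {stats.f.sf(f, 1, df_err):.3f}")
```

The F and p values printed here correspond, column for column, to the entries reported in Tables 4-9 through 4-11 for the real data.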
Table 4-9 ANOVA Summary for Perceived Decision Quality (RAs)

Independent Variable | Sum of Squares | df | Mean Square | F | Sig.
Adherence to RA's advice (Yes vs. No) | 5.408 | 1 | 5.408 | 4.553 | 0.035
Adherence to overlapped products (Yes vs. No) | 0.002 | 1 | 0.002 | 0.002 | 0.965
Adherence to RA's advice × Adherence to overlapped products | 0.006 | 1 | 0.006 | 0.005 | 0.945

16. A single 2×2×2×2 ANOVA cannot be performed because there are no data in three cells: (overlap, non-RA, non-consumer, expert), (overlap, RA, non-consumer, non-expert), and (overlap, non-RA, consumer, non-expert). In theory, overlap requires adherence to at least two sources.

Table 4-10 ANOVA Summary for Perceived Decision Quality (Consumers)

Independent Variable | Sum of Squares | df | Mean Square | F | Sig.
Adherence to consumers' advice (Yes vs. No) | 0.299 | 1 | 0.299 | 0.247 | 0.620
Adherence to overlapped products (Yes vs. No) | 5.03 | 1 | 5.03 | 4.16 | 0.043
Adherence to consumers' advice × Adherence to overlapped products | 4.711 | 1 | 4.711 | 3.896 | 0.050

Table 4-11 ANOVA Summary for Perceived Decision Quality (Experts)

Independent Variable | Sum of Squares | df | Mean Square | F | Sig.
Adherence to experts' advice (Yes vs. No) | 0.461 | 1 | 0.461 | 0.387 | 0.535
Adherence to overlapped products (Yes vs. No) | 4.753 | 1 | 4.753 | 3.998 | 0.048
Adherence to experts' advice × Adherence to overlapped products | 6.669 | 1 | 6.669 | 5.610 | 0.019

4.5.7 DISCUSSION

We applied ELM to examine what types of consumers, and under what circumstances, rely on particular sources of recommendations to select a product, and the consequent beliefs of doing so. We further applied Cognitive Dissonance Theory to investigate how recommendation convergence across different sources influences consumers' choice behaviours and perceived decision quality.
We found that consumers with high (low) product knowledge are more likely to depend on an RA (experts and consumers) to make their choices, and likewise consumers with high (low) involvement are more likely to depend on an RA (experts and consumers). This is consistent with ELM, in that ability and/or motivation are two of the most important factors that influence consumers' elaboration. Consumers in a low elaboration likelihood state, lacking the task involvement or product knowledge to deliberate thoughtfully, tend to be persuaded by peripheral cues, such as consumers' and experts' recommendations. Conversely, those in a high elaboration likelihood state are more likely to engage in careful scrutiny and thoughtful processing of an information message (i.e., the personalized content provided by an RA). They therefore tend to be more persuaded by an RA than by peripheral cues such as consumers' and experts' recommendations, which carry less weight because those sources do not know a consumer's specific preferences and cannot provide personalized product recommendations.

In addition, consumers are more likely to choose a product jointly recommended by multiple sources. This is congruent with Cognitive Dissonance Theory, which states that consumers will try to avoid dissonant information and seek consonant information. In this study's context, consonant information was provided through the overlapped products that were jointly recommended by two or three sources. Under this condition, a consumer who follows a divergent recommendation, contradicting the convergent recommendations, would likely experience cognitive dissonance. According to the theory, it follows that consumers would be less likely to engage in that activity, i.e., following divergent recommendations.

We found that those who adhered to the RA's advice perceived a significantly higher level of decision quality.
This is reasonable, as RAs provide personalized product recommendations based on the consumer's indicated preferences, while consumer and expert recommendations are general in nature. We also found that following convergent recommendations leads to higher perceived decision quality. This is because when sources provide divergent recommendations, consumers have a need to reduce dissonance and seek consonant information. Our supplementary analysis further reveals that not all convergent recommendations are equal: only recommendations convergent with the RA lead to higher perceived decision quality.

4.6 CONTRIBUTIONS
4.6.1 THEORETICAL CONTRIBUTIONS

This study makes significant theoretical contributions. First, while recent studies in IS have investigated various types of recommendation sources, little empirical research has examined the impact of multiple recommendation sources on consumers' choice behaviour and perceived decision quality. For example, Wang and Doong (2010) investigated two types of recommendation sources, experts and consumers; however, these sources were not integrated in one website, and the effects of an RA were not considered. Given that online RAs are becoming more popular in current e-commerce environments, there has been a surprising lack of empirical research investigating consumers' choice behaviour and perceived decision quality when they are presented with recommendations from online RAs, human experts, and the "wisdom of crowds". This study addresses this gap by bringing the three recommendation sources together in one website and examining the individual and joint effects (i.e., convergence) of each source.

Second, we use ELM as the theoretical basis to explain the effect of one recommendation source relative to another on choice selection. This study is the first of its kind to apply ELM to the integration of three recommendation sources.
Specifically, we empirically investigate the effect of an RA relative to the expert and consumer sources in influencing consumers' choice behaviour and beliefs. The results suggest that an RA is more influential for consumers with high product knowledge or high task involvement, while expert and consumer recommendations have more impact on consumers with low product knowledge or low task involvement. Thus, we contribute to the literature by investigating the interplay among the three types of recommendation sources, consumers' product knowledge, and task involvement.

The recent paper "Personalization without Interrogation" by Murray and Häubl (2009) highlighted a danger in an RA's preference elicitation process: RAs have historically asked far more questions than a human employee would before providing advice (Murray and Häubl 2009), yet online consumers might not have the motivation or ability to deliberate over a set of product attributes before they see the product alternatives (Murray et al. 2010). In this study, we empirically test this assertion by presenting recommendations from an RA, consumers, and experts for consumers to peruse. We found that, indeed, consumers with low product knowledge (i.e., ability) and low task involvement (i.e., motivation) adhere less to an RA's advice.

Finally, this study extends the application of Cognitive Dissonance Theory to a new context by examining consumers' reactions to recommendation convergence from multiple sources. While many previous studies have made significant contributions to strengthening and validating Cognitive Dissonance Theory, most have focused predominantly on post-decision and post-purchase situations, and a few on explaining how customers process information that is incongruent with their existing beliefs. However, no prior study has applied the theory to the context of multiple recommendation sources with both convergent and divergent recommendations.
This study suggests the applicability of Cognitive Dissonance Theory to the evaluation of recommendation convergence from multiple sources.

4.6.2 PRACTICAL CONTRIBUTIONS

This study also has significant implications for practitioners. We found that each recommendation source (RAs, consumers, or experts) has its own adherents, with each accounting for between 34.53% and 40.61% of adherence (Table 4-6). This suggests that e-commerce merchants should strive to incorporate all three recommendation sources in a single website to accommodate the different needs of consumers. Providing one source alone may result in the loss of roughly two-thirds of customers, and providing two sources may result in the loss of the remaining one-third. It is surprising that few websites in the market provide all three of these sources simultaneously.

Practitioners should be aware that different recommendation sources appeal to consumers with different levels of product knowledge and task involvement. Even with recommendations available from experts and consumers, consumers with high product knowledge or high task involvement still prefer to use an RA that can provide personalized product recommendations to fit their needs. It is difficult to fulfill all types of consumers' needs across different conditions through a single recommendation source.

By providing different recommendation sources on one website, practitioners can further leverage the recommendations from different sources to create recommendation convergence. Source convergence can reduce consumers' cognitive dissonance and improve their perceived decision quality. As we manipulated different types of recommendation convergence, among all three sources and among any two sources, it was clear that recommendation convergence impacts consumers' choices. Accordingly, practitioners are advised to provide such combinations of recommendations to their online consumers.
In this study, we found that perceived decision quality can be improved in two ways: reliance on an RA's recommendations to make a choice, and the selection of convergent recommendations among multiple sources. In particular, it is clear that convergent recommendations between an RA and experts exert the strongest influence on consumers' recommendation acceptance (Table 4-8). Taken together, it appears that an RA is the most important recommendation source for improving perceived decision quality. Thus, online merchants should give serious consideration to implementing RAs.

4.6.3 LIMITATIONS AND FUTURE RESEARCH

There are several limitations to this study that should be noted, and which open avenues for future research. First, the products examined in this study are generally considered search products, as most of their key attributes can be evaluated online. As expert and consumer recommendations may carry more weight in influencing consumers' perceptions of experiential products, future studies should consider other product types (e.g., experiential products) when replicating this research in order to generalize the findings. Second, we considered only three recommendation sources. With the proliferation of social networks, subsequent studies would benefit from including additional sources, such as online friends and family members.

4.6.4 CONCLUSIONS

Online firms are increasingly utilizing RAs, consumers' opinions, and, to a lesser extent, experts' opinions to assist online shoppers in making informed decisions. By examining customers' choice behaviour and perceived decision quality when they face recommendations from multiple sources, we contribute to theoretical advances in consumer decision making in e-commerce.
In particular, this research fills a gap by offering insight into the individual and joint roles of multiple recommendation sources in shaping consumers' product choices and perceived decision quality. In terms of the individual effect of each source, we empirically investigate which source is preferred by which type of consumer, and under what circumstances. In terms of the joint roles of multiple sources, we examine the effect of source convergence on consumers' choice behaviour and perceived decision quality. In doing so, we advance our understanding of how and why multiple recommendation sources can influence consumers' decisions and beliefs; a consumer's choice of an overlapped product reflects their search for consonant recommendations. The results can also be used to advise practitioners on the design and implementation of multiple sources of recommendations.

CHAPTER 5: CONCLUSIONS

5.1 SUMMARY OF THE THESIS

Grounded in the Input-Process-Output Framework, this thesis consists of three experimental studies, each of which examines the effects of the design of an RA's input, process, and output interfaces, respectively, on users' perceived decision quality, perceived decision effort, and choice behaviour. The results of the three empirical studies provide answers to the research questions that initially motivated the research:

1) (Study 1) Will the trade-off-transparency feature improve consumers' perceived product diagnosticity and perceived enjoyment?

The trade-off-transparency feature interactively demonstrates the trade-offs among product attributes. The results from Study 1 reveal that, as compared to the absence of the trade-off-transparency feature, its presence leads to higher perceived enjoyment when the level of trade-off transparency is medium or high.
A low level, as compared to an absence of trade-off transparency, does not lead to higher perceived enjoyment. However, trade-off transparency always leads to higher perceived product diagnosticity, regardless of whether the level of trade-off transparency is low, medium, or high.

2) (Study 1) Which level of trade-off transparency will lead to optimal perceived product diagnosticity and perceived enjoyment?

Perceived product diagnosticity and perceived enjoyment follow an inverted U-shaped curve as the level of trade-off transparency increases. A medium level of trade-off transparency (i.e., on average two attribute trade-offs demonstrated per slider movement initiated by a user) leads to the highest perceived product diagnosticity and perceived enjoyment. At either a low or a high level of trade-off transparency, perceived product diagnosticity and perceived enjoyment both drop relative to the medium level.

3) (Study 1) How does trade-off transparency help to achieve both better perceived decision quality and lower perceived decision effort simultaneously?

Trade-off transparency achieves both better perceived decision quality and lower perceived decision effort through its impact on perceived product diagnosticity and perceived enjoyment. Study 1 demonstrates that perceived enjoyment leads to better perceived decision quality and lower perceived decision effort simultaneously, while perceived product diagnosticity leads to better decision quality without compromising perceptions of decision effort.

4) (Study 2) How do the recommendation sources moderate the effect of different types of user feedback (attribute-based feedback, alternative-based feedback, and integrated feedback) on perceived decision quality?

Study 2 finds that the recommendation sources moderate the effects of the different types of feedback mechanisms on perceived decision quality.
Specifically, when a recommendation agent is provided, alternative-based feedback and integrated feedback lead to higher perceived decision quality than either attribute-based feedback or the absence of any feedback. When consumer or expert recommendations are provided, attribute-based feedback and integrated feedback lead to higher perceived decision quality than either alternative-based feedback or the absence of any feedback.

5) (Study 2) Are attribute-based feedback, alternative-based feedback, and integrated feedback better than the absence of feedback in terms of perceived decision effort?

Study 2 reveals that all three feedback mechanisms (attribute-based feedback, alternative-based feedback, and integrated feedback) lead to lower levels of perceived decision effort than the absence of any feedback. This is because user feedback saves users the effort of screening and comparing a comprehensive list of product alternatives.

6) (Study 3) Will product knowledge and user involvement influence users' adherence to the recommendations of one source (e.g., the RA) versus the others (e.g., experts and consumers)?

Results from Study 3 reveal that users possessing high (low) product knowledge are more likely to depend upon an RA (experts and consumers) to make their choices, while users with low (high) involvement are more likely to depend on experts and consumers (RAs).

7) (Study 3) What is the impact of adherence to an RA's advice on perceived decision quality?

As compared to those relying only on consumer and/or expert recommendations to reach their choice decision, users who rely on the RA's recommendations perceive a higher level of decision quality.

8) (Study 3) How does recommendation convergence among multiple sources influence consumers' choice selection and perceived decision quality?
Study 3 supported the argument that when users are presented with both recommendations that are convergent among multiple sources and recommendations that are divergent among multiple sources, they are more likely to follow the convergent recommendations. In addition, users following convergent recommendations perceive a higher level of decision quality than those following divergent recommendations.

5.2 CONTRIBUTIONS
5.2.1 THEORETICAL CONTRIBUTIONS

This dissertation addresses three important gaps in the RA literature, associated with the input, process, and output components of an RA respectively. It first proposes and examines the advantages of the trade-off-transparent RA relative to the traditional RA, and indicates which level of trade-off transparency leads to the best perceived decision quality and decision effort. It then examines the differences among four feedback conditions (attribute-based feedback, alternative-based feedback, integrated feedback, and the absence of any feedback), demonstrating the advantage of adding alternative-based and integrated feedback to an RA, and of adding attribute-based and integrated feedback to consumer and expert recommendation sources, to improve users' perceived decision quality. Finally, it investigates the relative efficacy of an RA as compared to expert and consumer recommendations, and how users make a choice when multiple sources provide convergent and divergent recommendations. For Human-Computer Interaction (HCI) researchers, this thesis deepens the understanding of how the interface design of RAs and of consumer and expert recommendations can improve perceptions of decision quality and decision effort regarding recommendations from RAs, consumers, and experts.

Specifically, the thesis makes the following theoretical contributions.
First, this research advances our knowledge by demonstrating that trade-off transparency features can introduce trade-off awareness to users without jeopardizing their positive emotional experience. Trade-off awareness is beneficial to accurate decision making (Delquie 2003), but how to make users aware of attribute trade-offs has been largely unknown. We have demonstrated that with the trade-off-transparent RA, users not only gain a better understanding of attribute trade-offs, but also experience positive emotions with the interface.

We advance the decision support literature by proposing an augmented Effort-Accuracy Framework that includes perceived enjoyment and perceived product diagnosticity as two antecedents of perceived decision quality and perceived decision effort. This is important because previous research has primarily focused on how RA characteristics directly impact perceived decision quality and decision effort (Xiao and Benbasat 2007), while the underlying mechanisms explaining why certain RA characteristics lead to better decision quality and decision effort have been largely ignored. This research highlights perceived enjoyment and perceived product diagnosticity as two such mechanisms.

We also contribute to the broader literature on task complexity, as previous studies on the role of task complexity have predominantly focused on component complexity while neglecting an important dimension: coordinative complexity. We address this knowledge gap by studying how coordinative complexity (manifested in the number of trade-off relationships revealed by an RA) influences users' evaluations.

Going beyond previous research, this thesis empirically examines the impact of three feedback mechanisms on perceived decision effort and perceived decision quality.
There is a paucity of research examining the effects of different kinds of feedback mechanisms on users' perceptions; indeed, feedback mechanisms were not included in the comprehensive review of RA research by Xiao and Benbasat (2007). We address this gap by demonstrating that incorporating alternative-based feedback and integrated feedback into an RA leads to better perceived decision quality than the absence of feedback or attribute-based feedback.

By placing the RA's recommendations alongside expert and consumer recommendations in one website, this research enhances our understanding of the effectiveness of an RA relative to recommendations from experts and consumers. It also contributes to the literature by investigating the interplay among the three types of recommendation sources, users' product knowledge, and task involvement.

Finally, this research extends the application of Cognitive Dissonance Theory to a new context by examining users' reactions to recommendation convergence from multiple sources. While many previous studies have made significant contributions to strengthening and validating Cognitive Dissonance Theory, no prior study has applied the theory to the context of multiple recommendation sources in order to investigate the role of source convergence in users' choice behaviour.

5.2.2 PRACTICAL CONTRIBUTIONS

The results of this thesis have significant implications for designing online RAs. First, when eliciting consumers' preferences for product attributes, RA practitioners are advised to provide a trade-off-transparent RA rather than a traditional one, as the trade-off-transparent RA provides greater product diagnosticity and an improved experience (e.g., perceived enjoyment) for consumers, which ultimately leads to higher perceived decision quality and lower perceived decision effort.
Without the trade-off transparency feature, users might provide unrealistic attribute preferences to the RA and end up being presented with unsuitable product choices. Consequently, users might form negative perceptions of the RA and stop using it (Wang and Benbasat 2007). However, while providing trade-off transparency is generally beneficial, practitioners should select an appropriate number of attribute trade-off relationships to demonstrate, as showing all available trade-offs might exceed consumers' cognitive capabilities, thereby lessening the efficacy of the feature.

Second, it is recommended that practitioners implement the integrated feedback feature, as it enhances consumers' perceived decision quality without increasing perceived decision effort, regardless of the recommendation source (RA, consumers, or experts). Beyond integrated feedback, firms should understand that a "one size fits all" approach to feedback mechanisms may not produce the desired effects, given the contingent effect of the recommendation sources. Specifically, to improve perceived decision quality, RAs are better matched with alternative-based feedback, while consumer and expert recommendations are better matched with attribute-based feedback.

In addition, it is recommended that website merchants provide multiple recommendation sources. By doing so, they avoid the risk of losing potential customers, as different types of customers (with low or high product knowledge, low or high task involvement) may prefer different recommendation sources; users might leave a website simply because it does not provide their preferred source. Additionally, website merchants should highlight any recommendation convergence among the different sources, so that users experience less cognitive dissonance and greater assurance in their decisions.
Finally, we found that perceived decision quality can be improved in two ways: reliance on the RA's recommendations to make a choice, and the selection of convergent recommendations among multiple sources. In particular, convergent recommendations between an RA and experts exert the strongest influence on users' recommendation acceptance. Taken together, it appears that an RA is the most important recommendation source for improving perceived decision quality. Thus, online merchants should give serious consideration to implementing RAs.

Overall, the investigation of the design factors of RAs makes significant contributions to the IS literature and to practice. Although the behavioral beliefs of perceived usefulness (or perceived decision quality) and perceived ease of use (or perceived decision effort) have been important to understanding RA adoption (Baier and Stüber 2010; Kamis et al. 2008), these beliefs offer few design prescriptions or actionable IT design criteria (Benbasat and Barki 2007). Indeed, there has been increasing concern in the IS research community regarding the scant attention paid to the IT artifact and to the design and development of such artifacts (Benbasat and Barki 2007; Benbasat and Zmud 2003; Orlikowski and Iacono 2001). The question of "What IT artifacts are beneficial?" posed by business managers is a critical class of research question in the IS discipline (March and Storey 2008; Hevner et al. 2004). To make IS research truly useful to practitioners, it is important to extend the scope of technology adoption research to include IT artifact design and evaluation (Venkatesh and Bala 2008).
Study 1 proposes a novel trade-off transparency feature that has been demonstrated to improve perceived decision quality and reduce perceived decision effort; Study 2 proposes adding alternative-based feedback functions to an RA, which improves perceived decision quality; Study 3 examines how an RA's output should be connected with recommendations from experts and consumers, which significantly impacts users' choice behavior and beliefs. All of these provide actionable IT design guidelines for practitioners.

5.3 LIMITATIONS AND SUGGESTIONS FOR FUTURE RESEARCH

As with all studies, there are limitations and opportunities for future research. First, we controlled and implemented a medium level of component complexity (i.e., 8 product attributes) in the experimental application in order to better assess the impact of different levels of coordinative complexity (i.e., trade-off transparency). Future research should investigate the effects of both component complexity and coordinative complexity on perceived diagnosticity and perceived enjoyment, with the goals of isolating and comparing differing levels of component complexity and coordinative complexity, as well as examining potential interactions between the two constructs. For example, it is possible that when component complexity is high (e.g., over 20 product attributes), even low levels of coordinative complexity will have a negative impact on users' perceived diagnosticity and perceived enjoyment. It is also possible that when component complexity is low (e.g., fewer than 5 product attributes), even high levels of coordinative complexity will not have a negative effect.

At the same time, the products (i.e., laptops and tablets) examined in this thesis are generally considered high-involvement products, since they are costly and have multiple product features.
However, it is beyond the scope of this thesis to consider several product taxonomies simultaneously within a single set of studies. Subsequent studies should consider other product types (e.g., low-involvement products or experiential products) when replicating the research in order to generalize the findings.

Third, Study 3 found that one-third of users relied on an RA to make their choices. Future research would benefit from examining whether more users will adhere to an RA's recommendations when the results of Study 1 and Study 2 are incorporated to design a better RA, with both a medium level of trade-off transparency and alternative-based feedback.

This research also opens additional avenues for future research. One possible stream is to utilize the model proposed in Study 1 (Chapter 2) as a framework to evaluate other kinds of RA design. Another is to investigate how the inverted U-shaped curve could be changed by other possible moderators, such as user product knowledge, in order to examine whether the peak point is the same for different types of users.

Future research should also examine the impact of other recommendation sources, such as those from friends or family members. This is particularly important with the growth of online social networks. Prior research has indicated that weak-tie sources such as expert or consumer recommendations are more conducive to the flow of information, while strong-tie sources such as friends are more conducive to the flow of influence (Brown and Reingen 1987; Duhan et al. 1997). Thus, different results might be found with sources from friends or family members.

This research has focused primarily on how to make the RA's recommendations more personalized to users. Another potential stream of research would be to personalize the RA interface through which the online RA interacts with the consumer.
For example, if a high level of trade-off transparency were found to lead to the best perceived enjoyment and product diagnosticity for individuals with high product knowledge, then an RA interface with a high level of trade-off transparency should be presented to these users, rather than presenting a medium level of trade-off transparency to all types of users. Future research might also benefit from designing a doubly personalized RA that offers both a personalized interface and a personalized product recommendation.

5.4 CONCLUSIONS

In this dissertation, we propose a novel design of an RA interface by enabling it to interactively demonstrate trade-offs among product attributes (i.e., the trade-off transparency feature) to improve consumers' perceived enjoyment and perceived product diagnosticity. We also examine to what extent the trade-offs among product attributes should be revealed to the user. We investigate how different types of recommendation sources (RAs vs. consumers and experts) moderate the effect of the three types of user feedback on perceived decision quality and perceived decision effort. We also examine why a user adheres to a particular source's recommendations (RAs vs. consumers and experts) and its consequences, and how a user makes a choice when the product recommendations from multiple sources converge. This dissertation contributes significantly to the literatures on RAs, information systems, e-commerce, and recommendation sources. Meanwhile, it provides concrete design implications for RA designers and e-commerce merchants.

REFERENCES

Agarwal, R., and Karahanna, E. 2000. “Time Flies When You’re Having Fun: Cognitive Absorption and Beliefs about Information Technology Usage,” MIS Quarterly (24:4), pp. 665-694.
Ahn, H. J. 2006-2007. “Utilizing Popularity Characteristics for Product Recommendation,” International Journal of Electronic Commerce (11:2), pp. 59-80.
Alba, J. W., and Hutchinson, J. W.
1987. “Dimensions of Consumer Expertise,” Journal of Consumer Research (13:4), pp. 411-454.
Andrews, J., and Shimp, T. 1990. “Effects of Involvement, Argument Strength, and Source Vividness on Central and Peripheral Processing of Advertising,” Psychology and Marketing (7:3), pp. 195-214.
Baddeley, A. 1992. “Working Memory: The Interface between Memory and Cognition,” Journal of Cognitive Neuroscience (4:3), pp. 281-288.
Bagozzi, R. P. 1986. Principles of Marketing Management, Chicago: SRA.
Baker, J., Grewal, D., and Parasuraman, A. 1994. “The Influence of Store Environment on Quality Inferences and Store Image,” Journal of the Academy of Marketing Science (22:4), pp. 328-339.
Barclay, D., Higgins, C., and Thompson, R. 1995. “The Partial Least Squares (PLS) Approach to Causal Modeling: Personal Computer Adoption and Use as an Illustration,” Technology Studies (2), pp. 285-324.
Bawa, A., and Kansal, P. 2008. “Cognitive Dissonance and the Marketing of Services: Some Issues,” Journal of Services Research (8:2), pp. 31-51.
Belk, R. 1975. “Situational Variables and Consumer Behavior,” Journal of Consumer Research (2:3), pp. 157-164.
Benbasat, I., and Barki, H. 2007. “Quo Vadis, TAM?” Journal of the Association for Information Systems (8:4), pp. 211-218.
Benbasat, I., and Todd, P. 1992. “The Use of Information in Decision Making: An Experimental Investigation of the Impact of Computer-Based Decision Aids,” MIS Quarterly (16:3), pp. 373-393.
Benbasat, I., and Todd, P. 1996. “The Effects of Decision Support and Task Contingencies on Model Formulation: A Cognitive Perspective,” Decision Support Systems (17:4), pp. 241-252.
Benbasat, I., and Zmud, R. W. 2003. “The Identity Crisis within the IS Discipline: Defining and Communicating the Discipline’s Core Properties,” MIS Quarterly (27:2), pp. 183-194.
Berman, B. 2002. “Should Your Firm Adopt a Mass Customization Strategy?” Business Horizons (45:4), pp. 51-60.
Bettman, J. R., Luce, M. F., and Payne, J. W. 1998.
“Constructive Consumer Choice Processes,” Journal of Consumer Research (25:3), pp. 187-217.
Bhattacherjee, A. 2001. “Understanding Information Systems Continuance: An Expectation-Confirmation Model,” MIS Quarterly (25:3), pp. 351-370.
Brown, J. J., and Reingen, P. H. 1987. “Social Ties and Word-of-Mouth Referral Behavior,” Journal of Consumer Research (14), pp. 350-362.
Carroll, J. D., and De Soete, G. 1991. “Toward a New Paradigm for the Study of Multiattribute Choice Behavior,” American Psychologist (46), April, pp. 342-351.
Chaiken, S., and Maheswaran, D. 1994. “Heuristic Processing Can Bias Systematic Processing: Effects of Source Credibility, Argument Ambiguity, and Task Importance on Attitude Judgment,” Journal of Personality and Social Psychology (66), pp. 460-473.
Chase, W. G., and Simon, H. A. 1973. “Perception in Chess,” Cognitive Psychology (4), January, pp. 55-81.
Chen, Y. 2008. “Herd Behavior in Purchasing Books Online,” Computers in Human Behavior (24), pp. 1977-1992.
Chen, Y., and Xie, J. 2008. “Online Consumer Review: Word-of-Mouth as a New Element of Marketing Communication Mix,” Management Science (54:3), pp. 477-491.
Chin, W. W. 1998. “The Partial Least Squares Approach for Structural Equation Modeling,” in Modern Methods for Business Research, G. A. Marcoulides (ed.), Lawrence Erlbaum, Mahwah, NJ, pp. 295-336.
Chin, W. W. 2001. PLS-Graph User’s Guide, C.T. Bauer College of Business, University of Houston, USA.
Cohen, J. 1988. Statistical Power Analysis for the Behavioral Sciences, Lawrence Erlbaum Associates, Hillsdale, NJ.
Coombs, L. C. 1974. “The Measurement of Family Size Preferences and Subsequent Fertility,” Demography (11:4), pp. 587-611.
Csikszentmihalyi, M. 1977. Beyond Boredom and Anxiety, Jossey-Bass, San Francisco, CA.
Cyr, D., Head, M., and Ivanov, A. 2009.
“Perceived Interactivity Leading to E-Loyalty: Development of a Model for Cognitive-Affective User Responses,” International Journal of Human-Computer Studies (67:10), pp. 850-869.
Cyr, D., Hassanein, K., Head, M., and Ivanov, A. 2007. “The Role of Social Presence in Establishing Loyalty in e-Service Environments,” Interacting with Computers (19:1), pp. 43-56.
Davis, F. D. 1989. “Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology,” MIS Quarterly (13:3), pp. 319-339.
Davis, F. D., and Kottemann, J. E. 1994. “User Perceptions of Decision Support Effectiveness: Two Production Planning Experiments,” Decision Sciences (25:1), pp. 57-78.
Dawes, R. 1979. “The Robust Beauty of Improper Linear Models in Decision Making,” American Psychologist (34), pp. 571-582.
Delquie, P. 2003. “Optimal Conflict in Preference Assessment,” Management Science (49:1), pp. 102-115.
Drolet, A., and Luce, M. F. 2004. “The Rationalizing Effects of Cognitive Load on Emotion-Based Trade-Off Avoidance,” Journal of Consumer Research (31:1), pp. 63-77.
Duhan, D. F., Johnson, S. D., Wilcox, J. B., and Harrell, G. D. 1997. “Influences on Consumer Use of Word-of-Mouth Recommendation Sources,” Journal of the Academy of Marketing Science (25:4), pp. 283-295.
Einhorn, H. J., and Hogarth, R. M. 1975. “Unit Weighting Schemes for Decision Making,” Organizational Behavior and Human Performance (13), pp. 171-192.
Eisingerich, A. B., and Bell, S. J. 2008. “Perceived Service Quality and Customer Trust: Does Enhancing Customers’ Service Knowledge Matter?” Journal of Service Research (10:3), p. 256.
Eroglu, S. A., Machleit, K. A., and Davis, L. M. 2001. “Atmospheric Qualities of Online Retailing: A Conceptual Model and Implications,” Journal of Business Research (54:5), pp. 177-184.
Eroglu, S. A., Machleit, K. A., and Davis, L. M. 2003. “Empirical Testing of a Model of Online Store Atmospherics and Shopper Responses,” Psychology & Marketing (20:2), pp. 139-150.
Fasolo, B., McClelland, G. H., and Lange, K. A. 2005. “The Effect of Site Design and Interattribute Correlations on Interactive Web-Based Decisions,” in Online Consumer Psychology: Understanding and Influencing Behavior in the Virtual World, C. P. Haughvedt, K. Machleit, and R. Yalch (eds.), Lawrence Erlbaum Associates, Mahwah, NJ, pp. 325-344.
Felix, D., Niederberger, C., Steiger, P., and Stolze, M. 2001. “Feature-Oriented vs. Needs-Oriented Product Access for Non-Expert Online Shoppers,” in Towards the E-Society: E-Commerce, E-Business, and E-Government, B. Schmid, K. Stanoevska-Slabeva, and V. Tschammer (eds.), Springer, New York, pp. 399-406.
Festinger, L. 1957. A Theory of Cognitive Dissonance, Stanford, CA: Stanford University Press.
Fiore, A. M., and Kim, J. 2007. “An Integrative Framework Capturing Experiential and Utilitarian Shopping Experience,” International Journal of Retail & Distribution Management (35:6), pp. 421-442.
Fornell, C., and Larcker, D. F. 1981. “Evaluating Structural Equation Models with Unobservable Variables and Measurement Error,” Journal of Marketing Research (18), pp. 39-50.
Frey, D. 1986. “Recent Research on Selective Exposure to Information,” in Advances in Experimental Social Psychology (Vol. 19), L. Berkowitz (ed.), San Diego, CA: Academic Press, pp. 41-80.
Frisch, D., and Clemen, R. T. 1994. “Beyond Expected Utility: Rethinking Behavioral Decision Research,” Psychological Bulletin (116), pp. 46-54.
Gefen, D., Karahanna, E., and Straub, D. W. 2003. “Trust and TAM in Online Shopping: An Integrated Model,” MIS Quarterly (27:1), pp. 51-90.
Gilly, M.
C., Graham, J. L., Wolfinbarger, M. F., and Yale, L. J. 1998. “A Dyadic Study of Interpersonal Information Search,” Journal of the Academy of Marketing Science (26:2), pp. 83-100.
Glass, G. V., and Hopkins, K. D. 1996. Statistical Methods in Education and Psychology, Allyn & Bacon, Needham Heights, MA.
Goldstein, W. M., Barlas, S., and Beatie, J. 2001. “Talk About Tradeoffs: Judgments of Relative Importance and Contingent Decision Behavior,” in Conflict and Tradeoffs in Decision Making, E. U. Weber, J. Baron, and G. Loomes (eds.), New York: Cambridge University Press.
Grenci, R. T., and Todd, P. A. 2002. “Solutions-Driven Marketing,” Communications of the ACM (45:2), pp. 64-71.
Gretzel, U., and Fesenmaier, D. R. 2006-2007. “Persuasion in Recommender Systems,” International Journal of Electronic Commerce (11:2), pp. 81-100.
Griffith, D. A., Krampf, R. F., and Palmer, J. W. 2001. “The Role of Interface in Electronic Commerce: Consumer Involvement with Print Versus On-Line Catalogs,” International Journal of Electronic Commerce (5:4), pp. 135-153.
Häubl, G., and Trifts, V. 2000. “Consumer Decision Making in Online Shopping Environments: The Effects of Interactive Decision Aids,” Marketing Science (19:1), pp. 4-21.
Hedgcock, W., and Rao, A. R. 2009. “Trade-off Aversion as an Explanation for the Attraction Effect: A Functional Magnetic Resonance Imaging Study,” Journal of Marketing Research (46), pp. 1-13.
Helson, H. 1964. Adaptation-Level Theory, New York: Harper & Row.
Hess, T. J., Fuller, M., and Campbell, D. E. 2009. “Designing Interfaces with Social Presence: Using Vividness and Extraversion to Create Social Recommendation Agents,” Journal of the Association for Information Systems (10:12), pp. 889-919.
Hess, T., Fuller, M., and Mathew, J. 2005-2006. “Involvement and Decision-Making Performance with a Decision Aid: The Influence of Social Cues, Multimedia, and Gender,” Journal of Management Information Systems (22:3), pp. 15-54.
Hoffman, D. L., and Novak, T. P. 1996.
“Marketing in Hypermedia Computer-Mediated Environments: Conceptual Foundations,” Journal of Marketing (60:3), pp. 50-68.
Hoque, A., and Lohse, G. 1999. “An Information Search Cost Perspective for Designing Interfaces for Electronic Commerce,” Journal of Marketing Research (36:3), pp. 387-394.
Hostler, R. E., Yoon, V. Y., and Guimaraes, T. 2005. “Assessing the Impact of Internet Agent on End Users’ Performance,” Decision Support Systems (41:1), pp. 313-323.
Huck, S. W. 2000. “Mixed ANOVAs,” in Reading Statistics and Research, Addison Wesley Longman.
Huffman, C., and Kahn, B. E. 1998. “Variety for Sale: Mass Customization or Mass Confusion?” Journal of Retailing (74:4), pp. 491-513.
IDC Press Release. “IDC Forecasts 7.6 Million Media Tablets to be Shipped Worldwide in 2010,” http://www.idc.com/about/viewpressrelease.jsp?containerId=prUS22345010&sectionId=null&elementId=null&pageType=SYNOPSIS.
Igbaria, M., Parasuraman, S., and Baroudi, J. J. 1996. “A Motivational Model of Microcomputer Usage,” Journal of Management Information Systems (13:1), pp. 127-143.
Jacoby, J. 2002. “Stimulus-Organism-Response Reconsidered: An Evolutionary Step in Modeling (Consumer) Behavior,” Journal of Consumer Psychology (12:1), pp. 51-57.
Jarvenpaa, S. L. 1989. “The Effect of Task Demands and Graphical Format on Information Processing Strategies,” Management Science (35:3), pp. 285-303.
Jiang, Z., and Benbasat, I. 2007a. “The Effects of Presentation Formats and Task Complexity on Online Consumers’ Product Understanding,” MIS Quarterly (31:3), pp. 475-500.
Jiang, Z., and Benbasat, I. 2007b. “Investigating the Influence of Interactivity and Vividness on Online Product Presentations,” Information Systems Research (18:4), pp. 454-470.
Jiang, Z., Chan, J., Tan, B., and Chua, W. 2010. “Effects of Interactivity on Website Involvement and Purchase Intention,” Journal of the Association for Information Systems (11:1), pp. 34-59.
Jiang, Z., and Benbasat, I. 2004/2005.
“Virtual Product Experience: Effects of Visual and Functional Control of Products on Perceived Diagnosticity and Flow in Electronic Shopping,” Journal of Management Information Systems (21:3), pp. 111-147.
Johnston, L., and Coolen, P. 1995. “A Dual Processing Approach to Stereotype Change,” Personality and Social Psychology Bulletin (21), pp. 660-673.
Kamis, A., Stern, T., and Ladik, D. M. 2010. “A Flow-Based Model of Web Site Intentions When Users Customize Products in Business-to-Consumer Electronic Commerce,” Information Systems Frontiers (12:2), pp. 157-168.
Kamis, A., Koufaris, M., and Stern, T. 2008. “Using an Attribute-Based Decision Support System for User-Customized Products Online: An Experimental Investigation,” MIS Quarterly (32:1), pp. 159-177.
Kempf, D. S., and Smith, R. E. 1998. “Consumer Processing of Product Trial and the Influence of Prior Advertising: A Structural Modeling Approach,” Journal of Marketing Research (35), pp. 325-337.
Kettanurak, V., Ramamurthy, K., and Haseman, W. D. 2001. “User Attitude as a Mediator of Learning Performance Improvement in an Interactive Multimedia Environment: An Empirical Investigation of the Degree of Interactivity and Learning Styles,” International Journal of Human-Computer Studies (54:4), pp. 541-583.
Kim, Y. S. 2011. “Application of the Cognitive Dissonance Theory to the Service Industry,” Services Marketing Quarterly (32:2), pp. 96-112.
Komiak, S. X., and Benbasat, I. 2006. “The Effects of Personalization and Familiarity on Trust in and Adoption of Recommendation Agents,” MIS Quarterly (30:4), pp. 941-960.
Koufaris, M. 2002. “Applying the Technology Acceptance Model and Flow Theory to Online Consumer Behavior,” Information Systems Research (13:2), pp. 205-223.
Lazarus, R. S. 1991. Emotion and Adaptation, Oxford University Press, New York.
Leavitt, N. 2006. “Recommendation Technology: Will It Boost E-Commerce?” IEEE Computer Society (39:5), pp. 13-16.
Lee, Y. E., and I.
Benbasat. Forthcoming. “Effects of Attribute Conflicts on Consumers’ Perceptions and Acceptance of Product Recommendation Agents: Extending the Effort-Accuracy Framework,” Information Systems Research.
Lin, C. S., Tzeng, G. H., Chin, Y. C., and Chang, C. C. 2010. “Recommendation Sources on the Intention to Use E-Books in Academic Digital Libraries,” The Electronic Library (28:6), pp. 844-857.
Locke, E. A., Shaw, K. N., Saari, L. M., and Latham, G. P. 1981. “Goal Setting and Task Performance: 1969-1980,” Psychological Bulletin (90), pp. 125-152.
Lohmöller, J. B. 1989. Latent Variables Path Modeling with Partial Least Squares, Physica Verlag, Heidelberg.
Luce, M. F., Bettman, J. R., and Payne, J. W. 2001. Emotional Decisions: Tradeoff Difficulty and Coping in Consumer Choice, The University of Chicago Press, Chicago.
Luce, M. F., Bettman, J. R., and Payne, J. W. 1999. “Emotional Tradeoff Difficulty and Choice,” Journal of Marketing Research (36:2), pp. 143-159.
Malhotra, N. K. 1982. “Information Load and Consumer Decision Making,” Journal of Consumer Research (8), pp. 419-430.
Mao, J., and Benbasat, I. 2000. “The Use of Explanations in Knowledge-Based Systems: Cognitive Perspectives and a Process-Tracing Analysis,” Journal of Management Information Systems (17:2), pp. 153-179.
McGinty, L., and Smyth, B. 2007. “Adaptive Selection: An Analysis of Critiquing and Preference-Based Feedback in Conversational Recommender Systems,” International Journal of Electronic Commerce (11:2), pp. 35-57.
McQuarrie, E., and Munson, J. 1992.
“A Revised Product Involvement Inventory: Improved Usability and Validity,” in Advances in Consumer Research (Vol. 19), J. Sherry and B. Sternthal (eds.), Provo, UT: Association for Consumer Research, pp. 97-114.
Mehrabian, A., and Russell, J. A. 1974. An Approach to Environmental Psychology, MIT Press, Cambridge, MA.
Miller, R. S., and Lefcourt, H. M. 1982. “The Assessment of Social Intimacy,” Journal of Personality Assessment (46:5), pp. 514-518.
Miranda, S. M., and Bostrom, R. P. 1993-94. “The Impact of Group Support Systems on Group Conflict and Conflict Management,” Journal of Management Information Systems (10:3), pp. 63-95.
Moore, D. M., Burton, J. K., and Myers, R. J. 1996. “Multiple-Channel Communication: The Theoretical and Research Foundations of Multimedia,” in Handbook of Research for Educational Communications and Technology, D. Jonassen (ed.), Simon & Schuster MacMillan, New York, pp. 851-875.
Morrison, J., and Vogel, D. 1998. “The Impacts of Presentation Visuals on Persuasion,” Information & Management (33:3), pp. 125-135.
Murray, K. B., Liang, J. P., and Häubl, G. 2010. “ACT 2.0: The Next Generation of Assistive Consumer Technology Research,” Internet Research (20:3), pp. 232-254.
Murray, K. B. 1991. “A Test of Services Marketing Theory: Consumer Information Acquisition Activities,” Journal of Marketing (55:3), pp. 10-25.
Murray, K. B., and Häubl, G. 2009. “Personalization without Interrogation: Towards More Effective Interactions between Consumers and Feature-Based Recommendation Agents,” Journal of Interactive Marketing (23:2), pp. 138-146.
Nadkarni, S., and Gupta, R. 2007.
“A Task-Based Model of Perceived Website Complexity,” MIS Quarterly (31:3), pp. 501-524.
Nakatsu, R., and Benbasat, I. 2006. “Designing Intelligent Systems to Handle System Failures: Enhancing Explanatory Power with Less Restrictive User Interfaces and Deep Explanations,” International Journal of Human-Computer Interaction (21:1), pp. 55-72.
Nandan, S. 2005. “An Exploration of the Brand Identity-Brand Image Linkage: A Communications Perspective,” Journal of Brand Management (12:4), pp. 264-278.
Nisbett, R., and Ross, L. 1980. “Assigning Weights to Data: The ‘Vividness Criterion,’” in Human Inference: Strategies and Shortcomings of Social Judgment, R. Nisbett and L. Ross (eds.), Prentice-Hall, Inc., Englewood Cliffs, NJ.
Norman, D. 1988. The Design of Everyday Things, Currency Doubleday.
Novak, T. P., Hoffman, D. L., and Yung, Y. F. 2000. “Measuring the Customer Experience in Online Environments: A Structural Modeling Approach,” Marketing Science (19:1), pp. 22-42.
Nysveen, H., Pedersen, P. E., and Thorbjørnsen, H. 2005. “Explaining Intention to Use Mobile Chat Services: Moderating Effects of Gender,” Journal of Consumer Marketing (22:5), pp. 247-256.
O’Keefe, D. J. 2002. Persuasion: Theory & Research (2nd ed.), Thousand Oaks, CA: Sage.
O’Keefe, D. J. 1990. Persuasion: Theory and Research, Newbury Park, CA: Sage.
O’Neill, M., and Palmer, A. 2004. “Cognitive Dissonance and the Stability of Service Quality Perceptions,” Journal of Services Marketing (18), pp. 433-449.
Orlikowski, W. J., and Iacono, C. S. 2001. “Research Commentary: Desperately Seeking the ‘IT’ in IT Research—A Call to Theorizing the IT Artifact,” Information Systems Research (12:2), pp. 121-134.
Palanivel, K., and Sivakumar, R. 2010. “A Study on Implicit Feedback in Multicriteria E-Commerce Recommender System,” Journal of Electronic Commerce (11:2), pp. 140-156.
Parboteeah, D. V., Valacich, J. S., and Wells, J. D. 2009. “The Influence of Website Characteristics on a Consumer’s Urge to Buy Impulsively,” Information Systems Research (20:1), pp.
60-78.
Pavlou, P. A., and Fygenson, M. 2006. “Understanding and Predicting Electronic Commerce Adoption: An Extension of the Theory of Planned Behavior,” MIS Quarterly (30:1), pp. 115-144.
Payne, J. W. 1976. “Task Complexity and Contingent Processing in Decision Making: An Information Search and Protocol Analysis,” Organizational Behavior and Human Performance (16), pp. 366-387.
Payne, J. W. 1982. “Contingent Decision Behavior,” Psychological Bulletin (92:2), pp. 382-402.
Payne, J. W., Bettman, J. R., and Johnson, E. 1988. “Adaptive Strategy Selection in Decision Making,” Journal of Experimental Psychology: Learning, Memory, and Cognition (14:3), pp. 534-552.
Payne, J. W., Bettman, J. R., and Johnson, E. J. 1993. The Adaptive Decision Maker, Cambridge University Press, Cambridge, England.
Peppers, D., and Rogers, M. 1993. The One to One Future, Currency Doubleday, New York, NY.
Svenson, O. 1979. “Process Descriptions of Decision Making,” Organizational Behavior and Human Performance (23:1), pp. 86-112.
Pereira, R. E. 2000. “Optimizing Human-Computer Interaction for the Electronic Commerce Environment,” Journal of Electronic Commerce Research (1:1), pp. 23-44.
Petty, R. E., and Cacioppo, J. T. 1986. “The Elaboration Likelihood Model of Persuasion,” in Advances in Experimental Social Psychology (Vol. 19), L. Berkowitz (ed.), New York: Academic Press, pp. 123-205.
Petty, R. E., and Wegener, D. T. 1998. “Attitude Change: Multiple Roles for Persuasion Variables,” in The Handbook of Social Psychology (4th ed., Vol. 1), D. Gilbert, S. Fiske, and G. Lindzey (eds.), New York: McGraw-Hill, pp. 323-390.
Petty, R. E., Wegener, D. T., and Fabrigar, L. R. 1997. “Attitude and Attitude Change,” Annual Review of Psychology (48), pp. 609-647.
Podsakoff, P., and Organ, D.
1986. “Self-Reports in Organizational Research: Problems and Prospects,” Journal of Management (12:4), pp. 531-544.
Price, L. L., and Feick, L. F. 1984. “The Role of Recommendation Sources in External Search: An Informational Perspective,” in Advances in Consumer Research (Vol. 11), T. Kinnear (ed.), Provo, UT: Association for Consumer Research, pp. 250-255.
Randall, T., Terwiesch, C., and Ulrich, K. 2007. “User Design of Customized Products,” Marketing Science (26:2), pp. 268-280.
Rathnam, G. 2005. “Interaction Effects of Consumers’ Product Class Knowledge and Agent Search Strategy on Consumer Decision Making in Electronic Commerce,” IEEE Transactions on Systems, Man and Cybernetics (35:4), p. 556.
Ricci, F., and Werthner, H. 2007. “Introduction to the Special Issue: Recommender Systems,” International Journal of Electronic Commerce (11:2), pp. 5-9.
Rieskamp, J., Busemeyer, J. R., and Mellers, B. A. 2006. “Extending the Bounds of Rationality: Evidence and Theories of Preferential Choice,” Journal of Economic Literature (44:3), pp. 631-661.
Ringle, C. M., Wende, S., and Will, A. 2005. SmartPLS, Hamburg, Germany: University of Hamburg.
Schafer, J. B., Konstan, J. A., and Riedl, J. 2002. “Meta-Recommendation Systems: User-Controlled Integration of Diverse Recommendations,” paper presented at the 11th International Conference on Information and Knowledge Management, McLean, VA, November.
Senecal, S. 2003. Essays on the Influence of Online Relevant Others on Consumers’ Online Product Choices, unpublished doctoral dissertation, HEC Montreal, University of Montreal.
Senecal, S., and Nantel, J. 2004. “The Influence of Online Product Recommendations on Consumers’ Online Choices,” Journal of Retailing (80:2), pp. 159-169.
Sharma, N., and Patterson, P. G. 2000. “Switching Costs, Alternative Attractiveness and Experience as Moderators of Relationship Commitment in Professional, Consumer Services,” International Journal of Service Industry Management (11), November, pp. 470-490.
Sherman, E., Mathur, A., and Smith, R. B. 1997. “Store Environment and Consumer Purchase Behavior: Mediating Role of Consumer Emotions,” Psychology and Marketing (14:4), pp. 361-378.
Simon, H. A. 1955. “A Behavioral Model of Rational Choice,” Quarterly Journal of Economics (69), pp. 99-118.
Singh, D. T., and Ginzberg, M. J. 1996. “An Empirical Investigation of the Impact of Process Monitoring on Computer-Mediated Decision Making Performance,” Organizational Behavior and Human Decision Processes (67:2), pp. 156-169.
Sproull, L., Subramani, M., Kiesler, S., Walker, J. H., and Waters, K. 1996. “When the Interface Is a Face,” Human-Computer Interaction (11), pp. 97-124.
Stamm, K., and Dube, R. 1994. “The Relationship of Attitudinal Components to Trust in Media,” Communications Research (2:1), pp. 105-123.
Steuer, J. 1992. “Defining Virtual Reality: Dimensions Determining Telepresence,” Journal of Communication (42:4), pp. 73-93.
Stolze, M., and Nart, F. 2004. “Well-Integrated Needs-Oriented Recommender Components Regarded as Helpful,” in CHI 2004 Extended Abstracts on Human Factors in Computing Systems, Vienna, Austria, April 24-29, p. 1571.
Sun, H., and Zhang, P. 2008. “An Exploration of Affect Factors and Their Role in User Technology Acceptance: Mediation and Causality,” Journal of the American Society for Information Science and Technology (59:8), pp. 1252-1263.
Sundar, S. S. 2007. “The MAIN Model: A Heuristic Approach to Understanding Technology Effects on Credibility,” in Digital Media, Youth, and Credibility, M. J. Metzger and A. J. Flanagin (eds.), Cambridge, MA: The MIT Press, pp. 72-100.
Sundar, S. S., Xu, Q., and Oeldorf-Hirsch, A. 2009. “Authority vs. Peer: How Interface Cues Influence Users,”
Proceedings of the 27th International Conference Extended Abstracts on Human Factors in Computing Systems (CHI’09), 27, 4231-4236. Swanson, R. A. and Law, B. 1993. “Whole-Part-Whole Learning Model,” Performance Improvement Quarterly (6:1), pp. 43-53. Swearingen, K., and Sinha, R. 2001. “Beyond Algorithms: An HCI Perspective on Recommender Systems,” paper presented at the ACM SIGIR Workshop on Recommender Systems, New Orleans, LA, September 13. Sweller, J. 1988. “Cognitive Load During Problem Solving: Effects on Learning,” Cognitive Science 12, pp.257–285 Tabachnick, B. G., L. S. Fidell, Using Multivariate Statistics (3rd ed.), HarperCollins, New York, 1996. Tam, K.Y., and Ho, Y. S. 2005. “Web Personalization as a Persuasion Strategy: An Elaboration Likelihood Model Perspective,” Information Systems Research (16:3). pp. 271-291. Tan, C.H., Teo, H.H., and Benbasat  I., 2010. “Evaluation of Decision Aids for Comparison- 124  Shopping: An Empirical Test of the Effectiveness of Decision Support in Different Information Load Conditions” Information Systems Research, pp. 305-326. Thatcher J. B., P. L. Perrewé. 2002. “An Empirical Examination Of Individual Traits As Antecedents To Computer Anxiety And Computer Self-Efficacy,” MIS Quart. (26:4), pp. 381-396. Thorngate, W. (1980). Efficient decision heuristics. Behavioral Science, 25, 219–225. Todd, P., and Benbasat, I. 1996. “The Effects of Decision Support and Task Contingencies On Model Formulation: A Cognitive Perspective,” Decision Support Systems 17, pp. 241– 252. Todd, P., and Benbasat, I. 2000. “Inducing Compensatory Information Processing through Decision Aids that Facilitate Effort Reduction: An Experimental Assessment,” Journal of Behavioral Decision Making (13:1), pp. 91-106. Van der Heijden, H. 2003. “Factors Influencing the Usage of Websites: The Case of a Generic Portal in the Netherlands,” Information & Management (40:6), pp. 541-549. Van der Heijden, H. 2004. 
“User Acceptance of Hedonic Information Systems,” MIS Quarterly (28:4), pp. 695-704. Venkatesh, V., Morris, M. G., Davis, G. B., and Davis, F. D. 2003. “User Acceptance of Information Technology: Toward a Unified View,” MIS Quarterly (27:3), pp. 425-478. Wan Y, S Menon, A. Ramaprasad, 2009. “The Paradoxical Nature of Electronic Decision Aids on Comparison-Shopping: The Experiments and Analysis,” Journal of Theoretical and Applied Electronic Commerce Research (4) pp.80. Wang  A. Consensus and disagreement between online peer and expert recommendations, Int. J. Internet Marketing and Advertising, Vol. 4, No. 4, 2008, p328-349. Wang H.C. and Doong H.S.2010, “Argument form and spokesperson type: The recommendation strategy of virtual salespersons,” International journal of information management (30:6), pp.493-501 Wang, W. and I. Benbasat, 2009. “Interactive Decision Aids for Consumer Decision Making in e- Commerce: The Influence of Perceived Strategy Restrictiveness” MIS Quarterly (33:2), pp. 293-320. Wang, W., and Benbasat, I. 2007. “Recommendation Agents for Electronic Commerce: Effects of Explanation Facilities on Trusting Beliefs,” Journal of Management Information Systems (23: 4), pp.217-246. 125  West, P., Ariely, D., Bellman, S., Bradlow, E., Huber, J., Johnson, E., Kahn, B., Little, J., Schkade‚ D. 1999.  “Agents to the Rescue?” Marketing Letters (10:3), pp.285-300. Widing, R. E. and Talarzyk, W. W. 1993. “Electronic Information Systems for Consumers: an Evaluation of Computer-Assisted Formats in Multiple Decision Environments,” Journal of Marketing Research (30:2), pp.125-141. Winer, R.S. (2001), “A framework for customer relationship management”, California Management Review, Vol. 43, Summer, pp. 89-105. Wood, R. E. 1986. Task complexity: Definition of the construct. Organizational behavior and human decision processes, 37(1): 60-82. Xiao, B., and Benbasat, I. 2007. 
“E-Commerce Product Recommendation Agents: Use, Characteristics, and Impact,” MIS Quarterly (31:1), pp. 137-209. 
