UBC Theses and Dissertations

Design of trustworthy online recommendation agents: explanation facilities and decision strategy support. Wang, Weiquan (2005)

Full Text

DESIGN OF TRUSTWORTHY ONLINE RECOMMENDATION AGENTS: EXPLANATION FACILITIES AND DECISION STRATEGY SUPPORT

by

Weiquan WANG

Master of Management Science & Engineering, Tsinghua University, 2000
Bachelor of Physics Engineering, Tsinghua University, 1997
Bachelor of Economics, Tsinghua University, 1997

A THESIS SUBMITTED IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in THE FACULTY OF GRADUATE STUDIES (Business Administration)

THE UNIVERSITY OF BRITISH COLUMBIA

September 2005

© Weiquan WANG, 2005

ABSTRACT

Due to advances in Web-based technologies, ample opportunities exist to utilize knowledge-based systems for facilitating online consumer decision-making and for providing recommendation services for consumers. This thesis focuses on online recommendation agents that offer shopping advice based on user-specified needs and preferences. Because of the high risks and uncertainties inherent in online environments, effective recommendation agents need to be trustworthy. By extending interpersonal trust to trust in technological artifacts, consumers' trust in a recommendation agent is defined to include three belief components: competence, benevolence, and integrity. This thesis examines user acceptance of online recommendation agents and trust formation in the agents, and it empirically investigates agent features and capabilities that increase trust in the agents and thereby raise the chance of user acceptance. Two important agent capabilities are tested: 1) explanation facilities and 2) decision strategy support. An integrated Trust-TAM (Technology Acceptance Model) was tested, and the results show that trust in agents influences consumers' behavioral intentions. Trust in agents exerts a direct impact on the intentions to adopt recommendation agents as well as an indirect impact via the perceived usefulness of the agents. Written protocols were collected and analyzed to identify the major processes that build and inhibit consumers' trust in recommendation agents. The results highlight the important roles of several processes in cultivating and inhibiting agent trust, such as expectation confirmation, utility assessment, and information sharing. Regarding explanation facilities, this research tests three types of explanations: how explanations, why explanations, and guidance. The results indicate that the use of different types of explanations increases different trusting beliefs: the use of how explanations increases competence and benevolence beliefs; the use of why explanations increases the benevolence belief; and the use of guidance increases the integrity belief. The impact of decision strategy support on consumers' trust and adoption of online recommendation agents was also investigated, together with explanation facilities. Three types of recommendation agents with different levels of decision strategy support were compared. Both the benefits and costs of providing a high level of decision strategy support were examined.
The results suggest that recommendation agents with decision strategy support capabilities and explanation facilities deliver benefits to users (e.g., they are perceived as more useful and trustworthy) and have a higher chance of being adopted, provided that using the agents does not require much additional effort. This research addresses an important gap in our current understanding of trustworthy online recommendation agents. It also makes a key contribution by empirically testing the effects of explanation facilities and decision strategy support on consumers' trust in and acceptance of online recommendation agents.

TABLE OF CONTENTS

ABSTRACT
TABLE OF CONTENTS
LIST OF TABLES
LIST OF FIGURES
ACKNOWLEDGEMENTS

CHAPTER 1: INTRODUCTION
1.1 Research Objective and Questions
1.2 Method and Main Findings
1.3 Structure of the Thesis

CHAPTER 2: LITERATURE REVIEW
2.1 Prior Studies on Recommendation Agents
2.2 A Focus on Explanation Facilities
2.3 A Focus on Decision Strategy Support
2.4 Dependent Variables Investigated in the Literature and this Thesis
2.5 Trust in Technological Artifacts and Online Recommendation Agents
2.6 Defining Trust in Online Recommendation Agents

CHAPTER 3: IMPACT OF EXPLANATIONS ON TRUST IN ONLINE RECOMMENDATION AGENTS
3.1 Introduction
3.2 Recommendation Agents for E-Commerce
3.3 Hypothesis Development
3.3.1 Impact of How and Why Explanations
3.3.2 Impact of Guidance
3.3.3 Control Variables
3.4 Research Method
3.4.1 Pilot Test on Explanation Validation
3.4.2 Participants, Incentives, and Experimental Tasks and Procedures
3.4.3 Measures
3.5 Data Analysis and Findings
3.5.1 Manipulation Check
3.5.2 ANCOVA Results
3.6 Discussion and Implications
3.6.1 Discussion of Findings
3.6.2 Limitations and Future Research
3.6.3 Implications for Research and Practice
Appendix for Chapter 3

CHAPTER 4: TRUST-TAM FOR ONLINE RECOMMENDATION AGENTS
4.1 Introduction
4.2 Hypothesis Development
4.3 Results
4.3.1 Data Analysis for the Measurement Model
4.3.2 Data Analysis for the Structural Model
4.4 Discussion
4.4.1 Summary and Discussion of Results
4.4.2 Limitations
4.4.3 Implications and Future Research
Appendix for Chapter 4

CHAPTER 5: ANALYSIS OF TRUST FORMATION PROCESSES IN ONLINE RECOMMENDATION AGENTS
5.1 Introduction
5.2 Process Theories of Initial Trust Formation
5.2.1 Personality-Based
5.2.2 Institution-Based
5.2.3 Calculative-Based
5.2.4 Heuristic-Based
5.2.5 Process-Based
5.2.6 Knowledge-Based
5.3 Research Method
5.3.1 Written Protocol versus Verbal Protocol Analysis
5.4 Data Analysis
5.5 Results
5.6 Discussion
5.6.1 Summary of Findings
5.6.2 Limitations and Future Research
5.6.3 Theoretical and Practical Implications

CHAPTER 6: IMPACT OF DECISION STRATEGY SUPPORT AND EXPLANATION FACILITIES ON TRUST AND ADOPTION OF ONLINE RECOMMENDATION AGENTS
6.1 Overview
6.2 Theoretical Background
6.2.1 Preferential Choice Strategies
6.2.2 Theory of System Restrictiveness
6.3 Hypotheses Development
6.3.1 Recommendation Agents with Different Levels of Decision Strategy Support
6.3.2 Role of Agent Strategy Restrictiveness
6.3.2.1 Impact of Decision Strategy Support and Explanation Facilities on Perceived Agent Strategy Restrictiveness
6.3.2.2 Impact of Perceived Strategy Restrictiveness on Trust and Perceived Usefulness of Agents
6.3.3 Role of Perceived Agent Transparency
6.3.3.1 Impact of Decision Strategy Support and Explanation Facilities on Perceived Agent Transparency
6.3.3.2 Impact of Perceived Agent Transparency on Trust
6.3.4 Role of Cognitive Effort
6.3.4.1 Impact of Decision Strategy Support and Explanation Facilities on Cognitive Effort
6.3.4.2 Impact of Cognitive Effort on Perceived Ease-of-Use of Agents
6.3.5 Trust-TAM
6.4 Research Method
6.4.1 Independent Variables and Experimental Design
6.4.2 Control Variables
6.4.3 Dependent Variables
6.4.4 Measurement Development
6.4.5 Participants
6.4.6 Experimental Tasks and Procedures
6.5 Data Analysis and Results
6.5.1 Demographic Data
6.5.2 Manipulation Checks
6.5.3 Measurement Model
6.5.4 ANCOVA Results
6.5.4.1 Perceived Strategy Restrictiveness
6.5.4.2 Perceived Agent Transparency
6.5.4.3 Cognitive Effort
6.5.5 Structural Model
6.6 Discussion
6.7 Limitations, Contributions, and Future Research
6.7.1 Limitations and Future Research
6.7.2 Contributions
Appendices
Appendix 6-1 Screen Shots for Experimental Agents
Appendix 6-2 Natural GOMS Language Analyses
Appendix 6-3 Measurement Items

CHAPTER 7: CONCLUSIONS AND FUTURE RESEARCH
7.1 Summary of the Thesis
7.2 Contributions
7.3 Future Research

REFERENCES

LIST OF TABLES

Table 2-1 Prior Agent Studies
Table 3-1 Examples of how explanations, why explanations, and guidance
Table 3-2 Construct Attributes
Table 3-3 Frequency Distribution of Explanation Use
Table 3-4 Means and Standard Deviations of the Trusting Beliefs for Various Experimental Conditions
Table 3-5 Results of ANCOVA (Dependent Variable: Competence Belief)
Table 3-6 Results of ANCOVA (Dependent Variable: Benevolence Belief)
Table 3-7 Results of ANCOVA (Dependent Variable: Integrity Belief)
Table 4-1 Differences between this Study and Previous Trust-TAM Studies
Table 4-2 Construct Attributes
Table 4-3 Factor Loadings and Cross-Loadings
Table 4-4 Structural Model Results
Table A4-1 Previous TAM Extension Studies
Table 5-1 A Summary of Trust Building Mechanisms
Table 5-2 Trust Building/Inhibiting Processes and Examples
Table 5-3 Major Trust-Building Processes
Table 5-4 Major Trust-Inhibiting Processes
Table 5-5 χ² Tests Comparing Processes for Different Trusting Beliefs
Table 5-6 χ² Tests Comparing Trust-Building vs. Trust-Inhibiting Processes
Table 6-1 3×2 Full Factorial Experimental Design
Table 6-2 Explanation Validation Results
Table 6-3 Examples of how explanations, why explanations, and guidance
Table 6-4 Demographic Data
Table 6-5 Distribution of Strategy Choice for the Hybrid Agents
Table 6-6 Frequency Distributions of Explanation Use
Table 6-7 Means (Standard Deviations) of Dependent Variables
Table 6-8 Internal Consistencies, AVEs, and Correlations of Constructs
Table 6-9 Loadings and Cross-Loadings of Measures
Table 6-10 MANCOVA Results
Table 6-11 ANCOVA Results (DV: Perceived Strategy Restrictiveness)
Table 6-12 Scheffe Comparisons for Perceived Strategy Restrictiveness
Table 6-13 Scheffe Comparisons for Perceived Strategy Restrictiveness, with Explanations
Table 6-14 Scheffe Comparisons for Perceived Strategy Restrictiveness, without Explanations
Table 6-15 ANCOVA Results (DV: Perceived Agent Transparency)
Table 6-16 ANCOVA Results (DV: Consideration Set Size)
Table 6-17 ANCOVA Results (DV: Decision Time in Minutes)
Table 6-18 ANCOVA Results (DV: Perceived Cognitive Effort)
Table 6-19 Scheffe Comparisons for Perceived Cognitive Effort, with Explanations
Table 6-20 Scheffe Comparisons for Perceived Cognitive Effort, without Explanations
Table 6-21 Dummy Codes for Agent Types
Table 6-22 A Summary of Hypothesis Testing Results
Table 6-23 Total Effects (Direct and Indirect) on Intentions to Adopt Agents, Trust, PU, and PEOU
Table A6-1 Relevant Operators in this Study
Table A6-2 NGOMSL Analyses for AC Agents
Table A6-3 NGOMSL Analyses for EBA Agents
Table A6-4 NGOMSL Analyses for Hybrid Agents
Table A6-5 Summary of NGOMSL Analyses and Estimated Execution Time

LIST OF FIGURES

Figure 2-1 Variables Investigated in Prior Agent Studies
Figure 3-1 Agent-User Dialogue from the Experimental Agent
Figure 3-2 Recommendations from the Experimental Agent
Figure 4-1 Research Model
Figure 4-2 PLS Results
Figure 5-1 PLS Results for Competence Belief
Figure 5-2 PLS Results for Benevolence Belief
Figure 5-3 PLS Results for Integrity Belief
Figure 6-1 DSS System Restrictiveness and Agent Strategy Restrictiveness
Figure 6-2 Research Model
Figure 6-3 Explanation Validation Example
Figure 6-4 Perceived Strategy Restrictiveness
Figure 6-5 Perceived Agent Transparency
Figure 6-6 Consideration Set Size
Figure 6-7 Decision Time (minutes)
Figure 6-8 Perceived Cognitive Effort
Figure 6-9 PLS Structural Model Testing Results
Figure A6-1(a) Question Page for Hybrid Agents
Figure A6-1(b) Question Page for Hybrid Agents - "Non-Essential Preference" Chosen
Figure A6-1(c) Question Page for Hybrid Agents - "Essential Preference" Chosen
Figure A6-1(d) Question Page for Hybrid Agents - How Explanation Shown
Figure A6-1(e) Question Page for Hybrid Agents - Guidance on Strategy Shown
Figure A6-1(f) Question Page for AC Agents
Figure A6-1(g) Question Page for EBA Agents
Figure A6-1(h) Result Page for Hybrid Agents and AC Agents
Figure A6-1(i) Result Page for Hybrid Agents and AC Agents - How Explanation Shown
Figure A6-1(j) Result Page for EBA Agents

ACKNOWLEDGEMENTS

Having reached this important milestone in my academic career, I want to extend my deepest gratitude and appreciation to those who gave me their
valuable time, insights, and support in this endeavor. I am most grateful to my advisor, Professor Izak Benbasat, for his guidance and continuous support over the years. It has been a privilege working with him. I simply could not imagine going through this journey with any mentor other than Izak. He will always serve as a model of professionalism for me in my academic career. I would also like to thank Professors Charles Weinberg and Dale Griffin for spending time giving me valuable insights on this work. I am also grateful to my thesis examiners, Professors Jai-Yeol Son, Cristina Conati, and Mark Silver, who took the time to evaluate my work and provided excellent comments. I also could not have survived the process without the friendship and support of my fellow students - Dongmin Kim, Zhenhui Jiang, Lingyun Qiu, Sherrie Komiak, and Young Eun Lee. We shared many of our fun and difficult moments of doing a Ph.D. Thank you for the good moments and constructive discussions that we had. A number of other individuals were instrumental in moving this dissertation forward. Kevin Chen and Victor Wong contributed extensive efforts in designing the experimental websites. Steve Doak and Glen Wheeler have been helpful in editing this thesis. I also want to thank the hard-working graduate students who provided research assistance: Lei Zhu coded the written protocols, and Joyce Hou and Irene Pan conducted many of the experiments. Finally, many thanks also go to the experimental participants. I would have accomplished far less had it not been for their conscientious effort. I deeply thank my family - my wife, Yanan Dong, who has accompanied me through all the ups and downs of this research career, and my parents and sister - without their endless giving and persistent love and support, this thesis and much of my research work could not have been possible.

CHAPTER 1: INTRODUCTION

Good customer service and support are the key factors that attract consumers and keep them loyal to an online store (Reibstein, 2002). Currently, with the proliferation and advances of Web-based technologies, many opportunities are being created for online firms to better serve their customers. In particular, online recommendation agents are becoming increasingly available on websites to provide customers with shopping assistance (Rust and Kannan, 2003), to help buyers and sellers reduce information overload (Maes, 1994), and to improve consumers' decision quality (Haubl and Trifts, 2000). Acting on behalf of consumers, recommendation agents provide advice to assist in shopping activities (Maes et al., 1999). Without proper support, consumers may be limited in their ability to evaluate products, inasmuch as they cannot consult with salespeople as they can in conventional shopping environments (Kim and Yoo, 2000). Thus, the challenge of choosing a product on the Web can be alleviated by an interface with a recommendation agent that informs and directs customer choices (Grenci and Todd, 2002). This dissertation focuses on an important but under-studied type of agent: content-filtering product recommendation agents (Ansari et al., 2000; Maes et al., 1999). These are software entities that carry out some set of operations on behalf of consumers and provide shopping advice about the product(s) consumers should purchase, based on their needs and preferences (Ansari et al., 2000).
Such agent technologies, for example those provided by www.ActiveDecisions.com and www.myProductAdvisor.com, have been utilized to provide value-added services for consumers in a variety of firms, including Yahoo! and Amazon.com. Nevertheless, a survey conducted by Burke (2002) shows that a significant percentage (21%) of online shoppers have negative reactions towards such recommendation services. Consequently, the need has been expressed to investigate designs that lead to more effective agent technologies for e-commerce. One important factor that has emerged in electronic environments is consumer trust in online recommendation agents. Since consumers delegate a range of tasks to the agents that act on their behalf, if a consumer does not trust the agent, the recommendations and advice will likely be rejected. Moreover, trust is becoming increasingly important in online shopping environments because of the possibility that e-vendors or agent providers might engage in opportunistic behaviors (e.g., by taking advantage of consumers and providing biased recommendations), and because of the lack of cues available to assess the quality of recommendation services (Gefen et al., 2003b). In a focus group experiment, Andersen, Hansen and Andersen (2001) found that trust in recommendation agents is the most important expectation users have. Nevertheless, trust in online recommendation agents remains an under-investigated area. Previous trust studies have mainly focused on interpersonal and organizational trust. Few studies have examined the role of agent trust in users' acceptance of recommendation agents, the formation processes of trust in the agents, or agent capabilities that can enhance consumers' trust in the agents.

1.1 Research Objective and Questions

The main objective of this research is to study consumers' acceptance of online recommendation agents and their trust formation in the agents, and to empirically investigate agent features and capabilities that can increase trust in recommendation agents and lead to a higher chance of agent acceptance. In this dissertation, two important agent capabilities are examined: 1) explanation facilities and 2) decision strategy support. The ability to explain knowledge and reasoning, referred to as explanation facilities, is a critical component of intelligent systems, including recommendation agents. By making the performance of a system transparent to users, explanation facilities promote user trust and influence user acceptance of the system (Gregor and Benbasat, 1999; Hayes-Roth and Jacobstein, 1994). Nevertheless, few studies have empirically tested the impact of explanation facilities on consumers' trust in recommendation agents. Online recommendation agents help consumers reduce information overload and improve decision quality (e.g., Haubl and Trifts, 2000; Maes, 1994), but they also restrict consumers to the decision processes that are supported by the agents (Silver, 1990). Due to the large number of product alternatives that are available in an online store or across a variety of online stores, the decision strategy supported by an agent must be able to effectively narrow down the product alternatives and retain the products that are most suitable for consumers. Accordingly, the design of decision strategy support is a challenging issue for online recommendation agents.
The findings reported in this thesis point to the need for more user control over, and choice of, agent strategies so that agents can be less restrictive and better able to cater to consumers' strategy preferences. As a result, the agents become more trustworthy and useful. In this dissertation, we extend interpersonal trust to trust in technological artifacts and define trust in online recommendation agents to include three belief components: competence, benevolence, and integrity (Komiak, 2003; McKnight et al., 2002a). Competence-belief means that an individual believes that the agent has the ability, skills, and expertise to perform effectively in specific domains; benevolence-belief means that an individual believes that the agent cares about and acts in the individual's interests; and integrity-belief means that an individual believes that the agent adheres to a set of principles (e.g., honesty and promise keeping) that are acceptable to the individual. In particular, this thesis examines the following questions:

1) What is the nature of trust in online recommendation agents? How important is trust in agents vis-a-vis other antecedents of agent adoption (i.e., perceived usefulness and perceived ease-of-use of agents)?

2) How do consumers form trust in online recommendation agents? What are the most powerful building and inhibiting processes?

3) What types of explanations should be embedded in online recommendation agents? Does the use of explanations influence consumers' trust in agents? If so, how, and to what extent?

4) Does a high level of decision strategy support that allows user control and choice of decision strategy influence consumers' trust or other perceptions that lead to user adoption of recommendation agents? If so, how, and to what extent? What is the role of explanations in providing decision strategy support?

1.2 Method and Main Findings

To investigate the above research questions, two laboratory experiments were conducted. Experiment 1 is aimed at addressing the first three questions. Different types of explanations were manipulated to test their differential roles in increasing consumers' trust. To examine the nomological validity of trust in recommendation agents and user acceptance of the agents, the TAM (Technology Acceptance Model) is extended with the construct of trust in agents, and relevant user perceptions and intentions were surveyed in the experiment. At the end of the experiment, participants were also asked to justify their trust levels in the recommendation agents. Their justifications were analyzed to understand the major processes that build or inhibit agent trust. The fourth question is addressed in Experiment 2. Different levels of decision strategy support were implemented in three types of agents supporting different choice strategies. Both the benefits and the costs of the decision strategy support capabilities were investigated. Explanation facilities were manipulated to examine the relative importance of the two agent capabilities (i.e., decision strategy support and explanation facilities) as well as their joint effects on users' perceptions and evaluations of the agents. In addition, this study investigated important variables that mediate the impact of the two agent capabilities on the antecedents of users' adoption of an agent.
The major findings of the two experiments are summarized as follows:

• Trust in agents influences consumers' behavioral intentions, exerting a direct impact on intentions to adopt recommendation agents as well as an indirect impact via the perceived usefulness of the agents.

• Overall, consumers' expectations, utility assessment, and knowledge about an agent were the most important processes influencing their trust levels in the agent. Further, different trusting beliefs are formed via different processes. Also, different processes are involved in increasing or decreasing a trusting belief, indicating an asymmetric structure of trust building and trust inhibiting.

• Explanation facilities play an important role in enhancing trust in online recommendation agents. The use of different types of explanations increases different trusting beliefs: the use of how explanations increases consumers' beliefs in agent competence and benevolence, the use of why explanations increases consumers' beliefs in agent benevolence, and the use of guidance increases consumers' beliefs in agent integrity.

• Decision strategy support, implemented via user control and user choice of decision strategies, influences perceived agent strategy restrictiveness, which has an impact on consumers' trust and perceived usefulness of an agent. Explanation facilities influence trust in the agent by improving perceived agent transparency.

• Explanations should accompany the provision of decision strategy support capabilities so that consumers can easily understand the capabilities and apply them properly. As well, the decision strategy support capabilities provided by an agent will not be effective unless the cognitive effort in learning and using the capabilities is low.

1.3 Structure of the Thesis

The remainder of the thesis is structured as follows. Chapter 2 reviews the literature on online recommendation agents. The nature of trust in technological artifacts is also discussed, and trust in online recommendation agents is defined. Chapter 3 focuses on the impact of explanation facilities on consumers' trust in online recommendation agents. Three types of explanations are investigated: how explanations, why explanations, and guidance. How explanations reveal the line of reasoning used by agents, on the basis of consumer needs and product preferences, and they outline the logical processes involved in reaching final recommendations. Why explanations justify the importance and purpose of agents' questions to consumers, in addition to providing justifications for the recommendations provided after the consultation is complete. Guidance provides objective knowledge regarding the potential constraints brought about by different choices for the questions in the agent-user dialogue. The different effects of the three types of explanations on different trusting beliefs are examined utilizing an analysis of covariance. In Chapter 4, an integrated Trust-TAM is tested to understand user acceptance of online recommendation agents and to reveal the relative importance of trust vis-a-vis other antecedents of intentions to adopt recommendation agents. A Partial Least Squares analysis is applied to test the structural model. In Chapter 5, the written protocols in which participants justified their trust levels in recommendation agents are analyzed.
Based on prior research and the written protocols, an agent trust formation scheme comprising 12 processes is developed to code the written protocols. The major processes that build and inhibit different trusting beliefs are identified from the protocol analysis, and they are further quantitatively analyzed to predict participants' trusting beliefs. Chapter 6 tests the impact of decision strategy support on consumers' trust and adoption of online recommendation agents, together with explanation facilities. Three types of recommendation agents, with different levels of decision strategy support, were compared. The role of explanations in realizing the benefits of high decision strategy support is also investigated. To further understand the impact of explanations on trust, the mediating role of perceived agent transparency is examined. This chapter covers both the impact of decision strategy support and explanations on user perceptions, individually and jointly, and the consequence of these perceptions (i.e., behavioral intentions). Finally, Chapter 7 summarizes the studies conducted, outlines the major contributions of this research, and provides suggestions for future research.

CHAPTER 2: LITERATURE REVIEW

This chapter first reviews previous research in the area of recommendation agents and identifies a research gap: the design of trustworthy online recommendation agents with explanation facilities and decision strategy support. The two agent features and capabilities examined in this dissertation are then briefly discussed in sections 2.2 and 2.3. Section 2.4 reviews the dependent variables investigated in the literature and in this thesis, section 2.5 discusses the nature of trust in technological artifacts and online recommendation agents, and section 2.6 defines trust in recommendation agents.

2.1 Prior Studies on Recommendation Agents

Web-based intelligent recommendation agents have been the focus of research for many years. Much research has been conducted due to the wide application of recommendation agents in online environments. Table 2-1 summarizes the prior studies that have examined the impact of recommendation agents and their various features and capabilities. For each study, the independent variables, dependent variables, moderators, and key findings are listed.

Table 2-1 Prior Agent Studies

Impact of Agents

Haubl and Trifts (2000). Independent variables: Recommendation Agent (RA); Comparison Matrix (CM). Dependent variables: amount of search for product information; consideration set size and quality; decision quality (purchase of a non-dominated alternative, switching to another alternative, and degree of confidence). Moderator: N/A. Key findings: RA increases consumer decision quality, reduces the amount of information search and consideration set size, and increases consideration set quality; CM reduces consideration set size and increases consideration set quality.

Haubl and Murray (2003). Independent variable: Recommendation Agent (RA). Dependent variables: consumers' preference construction; preference persistence. Moderator: N/A. Key findings: the inclusion of an attribute in a RA renders this attribute more prominent; the preference-construction effect persists beyond the initial shopping experience.

Urban et al. (1999). Independent variables: Virtual Advisor (VA); information-intensive site. Dependent variables: trust; intentions to use the VA; willingness to pay for the recommendation service. Moderator: N/A. Key findings: the VA increases consumer trust in the website; most consumers would be willing to pay for the service; half of the participants preferred the VA site while the other half preferred the information-intensive site.

Lai and Yang (2000). Independent variable: five types of browsing agents (recommendation agent, new-content agent, search agent, customized agent, and personal-status agent). Dependent variables (only the customized agents were evaluated): effectiveness (number of preferred products) and efficiency (number of clicks); user satisfaction. Moderator: N/A. Key finding: the customized agents increase browsing effectiveness and efficiency and user satisfaction.

Choi et al. (2001). Independent variable: anthropomorphic agent (AA). Dependent variables: telepresence; social presence; attitude towards ads and brand; intentions to revisit the website and to purchase. Moderator: N/A. Key findings: ads with an AA generate higher telepresence and social presence than ads without the agent; people exposed to the ads with an agent have a more favorable attitude toward the ads and a higher intention to revisit the website.

Different Types of Agents

Ansari et al. (2000). Independent variables: Bayesian-based recommendation agent (BSRA); collaborative-filtering-based recommendation agents (CFRA). Dependent variables: marginal likelihood of data; deviance information criterion; predictive ability. Moderators: customer heterogeneity; movie heterogeneity. Key findings: BSRA has better predictive power than CFRA; BSRA can make recommendations even when CFRA cannot.

Lee et al. (2002). Independent variables: agent types (profile-based vs. self-expressed-preference-based); product types (frequently purchased commodities vs. less frequently purchased products). Dependent variables: preference accuracy; ability to adapt to consumer preference changes. Moderator: N/A. Key finding: the two agents perform better than the benchmark method (k-NN method) for the two types of products, respectively.

User-Agent Dialogue Design

Komiak (2003). Independent variable: needs-based vs. attribute-based dialogues. Dependent variables: internalization; trust (cognitive and emotional); intentions to adopt agents as decision aids and to delegate shopping decisions to the agents. Moderator: N/A. Key findings: needs-based dialogues increase consumers' perceived internalization of the agents, which increases their cognitive trust in the agents; cognitive trust influences consumers' intentions to adopt the agents as decision aids, and this effect is fully mediated by their emotional trust in the agents.

Norman (1994). General discussion of user-agent interaction forms and other issues.

Preference Elicitation

Aggarwal and Vaidyanathan (2003). Independent variables: conjoint-type full-profile ratings; self-explicated ratings. Dependent variable: product attribute preference ranking and weights. Moderator: N/A. Key finding: a mismatch was found between these two methods.

Olson and Widing II (2002). Independent variable: interactive agents (user-inputted attribute importance weights) vs. passive agents (equal weights). Dependent variables: decision quality; liking of the agent; decision-making time. Moderator: N/A. Key finding: passive agents perform as well as or even better than the interactive agents on all the dependent variables.

Moon (2000). Independent variables: reciprocity; sequence. Dependent variable: willingness to reveal intimate information. Moderator: N/A. Key finding: subjects were more willing to reveal intimate information in the reciprocal and gradual conditions.

Explanation Facilities

Sinha and Swearingen (2002). Independent variable: perceived transparency of collaborative recommender systems. Dependent variables: liking of the system; confidence in the recommendations. Moderator: N/A. Key finding: users like, and feel more confident about, recommendations that they perceive as more transparent.

Herlocker et al. (2000). Independent variable: explanations. Dependent variables: filtering performance of the system; user acceptance of the system. Moderator: N/A. Key findings: 86% of survey respondents preferred the systems with explanations; filtering performance could not be evaluated due to the research design.

Strategy Design

Tan (2003). Independent variables: hyperlink-based aid; attribute-screen aid; weight-attribute-screen aid; sorting aid. Dependent variables: decision accuracy; consideration time; consideration set quality; consideration set size. Moderator: information load. Key findings: more sophisticated aids reduce cognitive effort but do not increase decision quality; the impact on decision quality depends on the information load.

Pereira (2000). Independent variables: search strategy (elimination-by-aspects, weighted average method, profile building, and simple hypertext); product class knowledge. Dependent variables: satisfaction with the decision process; confidence in the decision; trust in recommendations; propensity to purchase; perceived cost savings; perceived cognitive effort. Moderators: amount of information; degree of control (skipping attribute/cut-off values, specifying confidence levels, and the option of returning to preference elicitation stages). Key findings: subjects with high product class knowledge had more positive affective reactions towards agents using WAD and EBA strategies, while subjects with low product class knowledge preferred the profile-building-based agents; the two moderators reduced the above interaction effects.

Widing and Talarzyk (1993). Independent variables: linear weighted-average aid (LINEAR); cutoff aid (CUTOFF); a simple list of alternatives (RANDOM). Dependent variables: actual decision quality (switching to alternatives); perceived effort; actual decision time; perceived decision time. Moderator: correlation between product attributes (negative vs. non-negative). Key findings: in choice sets with negatively correlated attributes, LINEAR leads to superior decisions in comparison with CUTOFF and RANDOM, and CUTOFF takes longer than LINEAR and RANDOM; in choice sets with non-negatively correlated attributes, decision quality and decision time were the same across the three aids.

Interface Design

Cassell (2000). General discussion: suggestions on the role and implementation of embodied conversational agent interfaces with humanlike features and various conversational functions and modes.

Dehn and Mulken (2000). Review paper: an overview of empirical studies on the effects of animated interface agents, and the implications and limitations of these studies.

Lucente (2000). General discussion of conversational interfaces for agents with multiple modes and an affable personality.

Hook (2000). General discussion of usability issues for better user-agent interaction design.

Lester and Stone (1997). Framework: proposes a framework for animated pedagogical agents with enhanced believability.

Reilly (1997). Framework: proposes a framework and method for demonstrating an artistic agent personality during user-agent interactions.

Bickmore and Cassell (2001). Discussion: proposes a model of social dialogues aimed at increasing agent trustworthiness.

Bates (1994). General discussion of the roles of emotion in believable agents.

Carroll (1987). General discussion of interface design issues for advice-giving expert systems.

User Characteristics, Contextual Issues, and Other Factors

Kaplan et al. (2001). Independent variables: predictive ability information; locus of control; involvement. Dependent variable: decision aid reliance. Moderator: N/A. Key findings: decision makers are more likely to rely on a decision aid 1) when predictive information is not disclosed and 2) when they have an external locus of control; reliance on decision aids is more influenced by involvement for decision makers with an internal locus of control than for those with an external locus of control.

King and Hill (1994). Conceptual framework for the design of agents based on consumer characteristics (goal orientation, autonomy, and expertise). Key point: the design of different types of agents should incorporate different user characteristics.

Reneau and Blanthorne (2001). Independent variables: irrelevant distractor information; information sequence. Dependent variables: judgment accuracy; judgment confidence. Moderator: N/A. Key findings: judgments are more accurate when diagnostic information is presented later and when no irrelevant information is presented; judgment confidence is not influenced by these two factors.

Komiak (2003). Independent variable: familiarity. Dependent variables: trust (cognitive and emotional); intentions to adopt agents as decision aids and to delegate shopping decisions to the agents. Moderator: N/A. Key findings: familiarity with the agent significantly influences cognitive trust, which mediates the effect of familiarity on emotional trust; emotional trust influences consumers' intentions to adopt agents as decision aids.

General Discussion Regarding Agent Impact, Design, and Applications

Maes et al. (1999) introduce the applications of various recommendation agents in different stages of consumer decision making. Maes (1994) discusses the role of agents in reducing work and information overload. O'Keefe and McEachern (1998) provide a general discussion of the facilities of online decision support for different stages in consumer decision processes. West et al. (1999) examine the various roles that agents perform and discuss design issues and the goals to be achieved. Grenci and Todd (2002) discuss the role of customer decision support systems for online consumer decision making.
Russo (2002) proposes a set of design rules and principles for recommendation agents and discusses several key considerations, such as credibility, trust, and control, for their design. Smith (2002) discusses the applications of shopbots and their impact on both consumers and retailers. Redmond (2002) identifies two types of agents and discusses their impact in the short and long run. Iacobucci et al. (2000) examine the functions of intelligent agents used by service providers on the Internet. Ma and Paul (2003) examine the functionalities of web-based consumer decision support systems (WCDSS) at different stages of the online purchasing process and propose effects of WCDSS on perceived information quality, decision time, consumer satisfaction, and intentions to use. He and Leung (2002) survey various applications of recommendation agents for business-to-consumer and business-to-business e-commerce. Montgomery et al. (2003) discuss design issues for shopbots based on a utility model of consumer purchasing behavior that considers both the intrinsic value of the product and its attributes and the disutility aspects. Kauffman et al. (1999) propose a framework for agent sophistication and several other design concepts for long-lived Internet agents, such as intelligence, validation, concurrency, recovery, monitoring, and interactivity. Milojicic (2002) discusses various mobile agent applications. Du, Eldon, and Chang (2003) provide a framework for mobile agent-based applications.

Figure 2-1 Variables Investigated in Prior Agent Studies (summarized here in text). Independent variables: agent availability; different types of agents; agent-user dialogue design (needs-based, attribute-based); preference elicitation (self-expressed, profile rating, reciprocity, sequence); explanations; agent strategies; interface; familiarity; user characteristics (locus of control, involvement, goal orientation, autonomy, expertise); and other factors (distractor information). Moderators: information load; degree of control (skipping attributes/cutoff values, specifying confidence levels, options to return back); product class knowledge; trust propensity; preference for effort saving vs. decision quality; and product types. Dependent variables: performance (decision quality, consideration set quality, predictive power, ability to generate recommendations); cognitive effort (amount of information search, consideration set size, time, perceived effort); trust (in the agent, in the website); satisfaction; attitude (liking); confidence; tele- and social presence of the media; intentions/willingness (agent adoption, task delegation, revisiting the website, willingness to pay for the service, personal information/preference revelation); preference construction; and other intervening variables (internalization, transparency).

As shown in table 2-1, there are four main areas of focus in the prior studies: 1) the impact of recommendation agents, 2) comparisons of different types of recommendation agents, 3) the impact of particular design features and agent capabilities, and 4) other issues such as user characteristics and conceptual discussions about the impact, design, and applications of recommendation agents.

Figure 2-1 summarizes the main independent variables, moderators, and dependent variables that were examined in the prior empirical studies. Regarding the effects of recommendation agents, prior studies (e.g., Haubl and Murray, 2003; Haubl and Trifts, 2000) have compared shopping with recommendation agents to shopping without them and have empirically demonstrated effects such as an increase in consumers' decision quality, a reduction in cognitive effort (i.e., the amount of information search and consideration set size), and an increase in trust in e-vendors. Two studies have compared different types of recommendation agents. Most recommendation agents in current use fall into two classes (Ansari et al., 2000). The first class is based on a collaborative filtering method (Ansari et al., 2000; Maes et al., 1999). This method does not require consumers to explicitly inform an agent about their needs and preferences; it uses customer characteristics, mainly based on customer profiles or identified from current and past purchases, to classify customers into groups. Recommendations are then generated based on the products most frequently chosen by others in the same group. This technique best fits situations where inferences of product attributes from consumer needs are difficult, as they are for products based less on utilitarian considerations and more on subjective taste (e.g., books and music), and it may be unsuitable for complex products (Russo, 2002). The second class, known as content-filtering-based agents (Ansari et al., 2000), generates recommendations on the basis of agent-user dialogues (Russo, 2002): consumers answer several questions asked by the recommendation agents regarding their needs and product preferences, and the agents provide shopping recommendations based on their answers. In addition to these two classes of agents, many studies have proposed new methods and algorithms that are variants of the above two (e.g., incorporating other statistical methods). As this review shows, a large number of recommendation agent types are found in the literature, and many studies have been conducted in this area.

This research examines product-brokering recommendation agents that use content-filtering techniques. These agents provide shopping advice on what to buy based on user-specified needs and preferences. Overall, this type of agent is under-investigated, but it holds great potential for e-commerce because such agents can provide not only the most-needed advice for a variety of complex products but also more accurate advice (Russo, 2002). Many commercial systems, for example those offered by MyProductAdvisor.com, WiseUncle.com, and ActiveDecisions.com, are based on the content-filtering method. Although many positive effects of recommendation agents have been confirmed in prior studies, many online shoppers still have negative reactions to the recommendation services, and they are not willing to disclose their preferences and profiles to obtain recommendations (Burke, 2002). Recommendation agents need to be better designed so that they can deliver their value more effectively and have a higher chance of adoption (Burke, 2002). Accordingly, assessing the impact of various design features for recommendation agents holds great potential for research.
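
To make the contrast between the two classes concrete, the following sketch renders each filtering approach in a few lines of Python. It is purely illustrative and is not drawn from the thesis or any cited system: the data, function names, and similarity measure are invented, and real implementations use far richer models.

```python
from collections import Counter

# Invented purchase histories: user id -> set of purchased product ids.
PURCHASES = {
    "u1": {"p1", "p2"},
    "u2": {"p1", "p3"},
    "u3": {"p2", "p4"},
}

def collaborative_filter(target, purchases, k=2):
    """Class 1: recommend what the k most similar customers bought.

    No needs or preferences are elicited; similarity here is simply
    the overlap of purchase histories (real systems use richer profiles).
    """
    own = purchases[target]
    peers = sorted((u for u in purchases if u != target),
                   key=lambda u: len(purchases[u] & own), reverse=True)[:k]
    votes = Counter(p for u in peers for p in purchases[u] - own)
    return [p for p, _ in votes.most_common()]

# Invented catalogue: product id -> attribute values.
CATALOGUE = {
    "c1": {"price": 900, "zoom": 3},
    "c2": {"price": 1400, "zoom": 10},
}

def content_filter(constraints, catalogue):
    """Class 2: recommend products satisfying needs stated in a dialogue.

    `constraints` maps an attribute to a predicate derived from the
    consumer's answer to the agent's question about that attribute.
    """
    return [pid for pid, attrs in catalogue.items()
            if all(ok(attrs[a]) for a, ok in constraints.items())]

print(collaborative_filter("u1", PURCHASES))                      # ['p3', 'p4']
print(content_filter({"price": lambda v: v <= 1000}, CATALOGUE))  # ['c1']
```

The key difference visible in the sketch is the input each class consumes: the purchase behavior of other customers in the first case, and the consumer's own explicitly stated needs and preferences in the second.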

As summarized in table 2-1 and figure 2-1, prior studies about particular agent design issues fall into one of five aspects: 1) consumer preference elicitation, 2) agent-user dialogue design, 3) agent strategy design, 4) explanation facility design, and 5) interface design. Several studies have examined issues related to preference elicitation. Aggarwal and Vaidyanathan (2003) investigated two different ways to elicit users' preferences: conjoint-type full-profile ratings and self-explicated ratings. An agent may infer consumers' preferences for attributes and levels on the basis of their ratings of several alternative products, or it may directly ask for their evaluations of various attributes and levels. They found that the two methods yield different consumer preferences. Moon (2000) examined the elicitation of intimate information[1] via computers and the factors (i.e., reciprocity and sequence) that influence consumers' willingness to reveal their preferences and other intimate information. Haubl and Murray (2003) investigated the impact of the elicitation of different product attributes on consumers' preference construction.

[1] Intimate information is related to disclosers' innermost emotions, attitudes, and feelings (Derlega, 1988). Examples of intimate disclosures include "I am so ashamed of..." or "I feel so guilty about...".

Regarding agent-user dialogue design, Norman (1994) conceptually discussed issues related to how people interact with computer agents. However, only one study has been found that empirically examined the design of agent-user dialogues. Komiak (2003) examined needs-based versus attribute-based questions in agent-user dialogues and found that needs-based questions increased consumers' feelings of agent internalization, which enhances their trust in an agent. More research is needed to better design agent-user dialogues (Norman, 1994). For example, the effectiveness of personalized agent-user dialogues and the impact of user control over agent-user interactions are promising research areas. Regarding agent interface design, many studies have discussed the topic conceptually, but few empirical studies were found; hence more research is needed. For example, the impacts of social and anthropomorphic interfaces need to be empirically examined. Although the above three areas deserve more research, this dissertation focuses on the other two important agent capabilities (i.e., explanation facilities and decision strategy support), which are highly relevant to IS research and practice. These two aspects are firmly rooted in important IS theories, such as explanation theories for intelligent systems (e.g., Gregor and Benbasat, 1999) and the theory of system restrictiveness (e.g., Silver, 1991b). However, no empirical studies have been found that test these theories in the context of web-based recommendation agents. This research aims at bridging these gaps. In sections 2.2 and 2.3, these two aspects are briefly introduced and the choice of them is further justified.

2.2 A Focus on Explanation Facilities

The ability to explain knowledge and reasoning, referred to as explanation facilities, is considered to be one of the critical components of intelligent and knowledge-based systems (KBSs), including decision support systems and online recommendation agents (Dhaliwal and Benbasat, 1996; Gregor and Benbasat, 1999; Hayes-Roth and Jacobstein, 1994; Mao and Benbasat, 2000; Ye and Johnson, 1995). Many prior studies have examined the impact of explanation facilities on user acceptance of advice from KBSs (Ye and Johnson, 1995) and on knowledge dissemination (Gregor, 2001; Mao and Benbasat, 1998; Mao and Benbasat, 2001). Extending this line of research, we examine the role and impact of explanation facilities for online recommendation agents. Recent research on KBSs suggests that the role of explanations continues to be fundamentally important (Gregor, 2001). For example, an Internet-delivered patient-advocate system requires an explanation facility to adequately meet the needs of patients (Miksch et al., 1997). Sinha and Swearingen (2002) surveyed 12 participants regarding how their understanding of the recommendation generation process affected their choice confidence and their preferences among several collaborative-filtering-based recommender systems. Overall, they found that users like the systems and feel more confident about recommendations when explanation facilities are provided. In another study, Herlocker et al. (2000) investigated the impact of explanations provided by collaborative-filtering-based agents on the filtering performance and user acceptance of the agents. Twenty-one explanation components were investigated.[2] Seventy-eight participants were surveyed to identify the most compelling components and their effects. They found that explanations improve the acceptance of collaborative-filtering-based agents. Nevertheless, so far no guidelines exist for the provision of explanation types particularly for content-filtering-based recommendation agents. To our knowledge, this research is the first study that proposes an appropriate set of explanations for them and advances our knowledge about the impact of explanation facilities for online recommendation agents. In practice, many online recommendation agents still lack sufficient explanatory capabilities. For example, the recommendation agents available from www.activedecisions.com, one of the most well-known recommendation agent providers, do not explain why certain questions were asked to elicit consumers' preferences or how conclusions were reached. The results of this research will provide practitioners with design guidelines for explanation facilities for online recommendation agents.

[2] Indeed, these explanations include all relevant information and descriptions about the agents, the products, and the presentation format of information (e.g., whether or not the movie won awards, or whether movie ratings were presented in a histogram or a complex graph). They did not investigate the differential impacts of different types of explanations.
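
Chapter 3 develops the three explanation types (how, why, and guidance) in detail. As a preview, the following minimal sketch shows one way a content-filtering agent could attach such explanations to a step of the agent-user dialogue. It is a hypothetical illustration, not the thesis's experimental implementation; all explanation texts and identifiers are invented.

```python
# Invented explanation texts for one dialogue step of a hypothetical
# content-filtering agent; the three types follow the Chapter 3 typology.
EXPLANATIONS = {
    # How: reveal the line of reasoning that leads to recommendations.
    "how": ("Products failing your stated requirement are screened out; "
            "the remainder are ranked by fit with your other preferences."),
    # Why: justify the importance and purpose of asking this question.
    "why": ("Your budget lets the agent exclude products you would not "
            "consider, so the final advice serves your interests."),
    # Guidance: objective knowledge about the consequences of an answer.
    "guidance": ("A very low budget ceiling may eliminate every product "
                 "with the camera resolution you requested earlier."),
}

def render_question(question, requested):
    """Show a dialogue question plus whichever explanations the user opens."""
    lines = [question]
    lines += [f"[{kind.upper()}] {EXPLANATIONS[kind]}" for kind in requested]
    return "\n".join(lines)

print(render_question("What is the most you are willing to spend?",
                      ["why", "guidance"]))
```

In this rendering the explanations are optional, opened on demand, which matches the general idea that their use (rather than their mere presence) is what can influence trusting beliefs.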

2.3 A Focus on Decision Strategy Support

Online recommendation agents help consumers reduce information overload and improve decision quality (e.g., Haubl and Trifts, 2000; Maes, 1994), but they also restrict consumers to the decision processes that are supported by the agents (Silver, 1990). Due to the large number of product alternatives available in an online store or across a variety of online stores, the decision strategy supported by an agent must be able to narrow down the product alternatives effectively and retain the products that are most suitable for consumers. Several studies have investigated the strategies used by recommendation agents. For example, Tan (2003) examined the impact (i.e., on decision accuracy, consideration time, consideration set quality, and consideration set size) of different strategies used by screening aids (hyperlink-screen aid, attribute-screen aid, and weight-attribute-screen aid) and a sorting aid, and the interaction between these agents and information load. He found that agents with normative strategies (weight-attribute-screen) 1) reduce consumers' cognitive effort and 2) improve decision quality when the information load is high. In another study, Pereira (2000) explored the interaction between search strategies (elimination-by-aspects, weighted average method, collaborative-filtering method, and simple hypertext) and subjects' product class knowledge (low, high), in terms of users' satisfaction with the decision process, confidence in the decision, trust in the recommendations, propensity to purchase, perceived cost savings, and perceived cognitive effort. He found that participants with high product class knowledge had more positive affective reactions towards agents using the weighted average and elimination-by-aspects strategies, while subjects with low product class knowledge preferred the collaborative-filtering-based agents.

Nevertheless, the decision strategies utilized by the agents explored in all these studies were pre-configured, and each agent employed only one decision strategy. Users had no control over the decision strategies that an agent employed. The concepts of user control and system restrictiveness were proposed by Silver (1988; 1990; 1991a; 1991b) as important design concepts in the context of decision support systems (DSS). Agent restrictiveness is defined as the extent to which a recommendation agent limits its users' choices and decision-making processes. Since an online recommendation agent includes a finite set of functional capabilities, when a decision-maker relies on an agent to solve a problem, his or her decision-making process is constrained by the agent's capabilities (Silver, 1991b). When a consumer's desired decision strategy is not supported by an agent, the agent will be perceived to be restrictive, and the final recommendations may not fit the consumer's needs. As a result, the consumer may develop negative perceptions of the agent and may not trust it. However, few studies have investigated the impact of agent restrictiveness on consumers' perceptions and evaluations of online recommendation agents. Working from the theory of system restrictiveness, supplemented by behavioral decision theories (e.g., Payne et al., 1993; Todd and Benbasat, 1992), another focus of this research is to investigate whether or not allowing user control and choice of strategies will influence users' perceptions and evaluations of an agent. Both the benefits and the costs of such a design are analyzed. Our review of current agent applications shows that most agents do not allow users control over the strategy the agents employ.
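
The two strategy families that recur in these studies, the noncompensatory elimination-by-aspects (EBA) rule and the compensatory weighted average (weighted additive) rule, can be stated compactly in code. The sketch below is an illustration under invented data and cutoffs, not the implementation used in any of the cited studies or in the experimental agents of Chapter 6 (whose EBA and AC agents are built around strategies of this kind).

```python
# Attribute scores in [0, 1] for invented products; higher is better.
PRODUCTS = {
    "A": {"price": 0.9, "battery": 0.4, "screen": 0.8},
    "B": {"price": 0.6, "battery": 0.9, "screen": 0.7},
    "C": {"price": 0.3, "battery": 0.8, "screen": 0.9},
}

def eliminate_by_aspects(products, cutoffs):
    """EBA (noncompensatory): process aspects in order of importance and
    drop every alternative that fails a cutoff; no trade-offs allowed."""
    survivors = dict(products)
    for attr, cutoff in cutoffs:
        survivors = {k: v for k, v in survivors.items() if v[attr] >= cutoff}
    return sorted(survivors)

def weighted_additive(products, weights):
    """Weighted average (compensatory): a weakness on one attribute can be
    offset by strengths on others; pick the highest overall score."""
    def score(attrs):
        return sum(w * attrs[a] for a, w in weights.items())
    return max(products, key=lambda k: score(products[k]))

print(eliminate_by_aspects(PRODUCTS, [("price", 0.5), ("battery", 0.5)]))           # ['B']
print(weighted_additive(PRODUCTS, {"price": 0.5, "battery": 0.3, "screen": 0.2}))   # A
```

The contrast matters for restrictiveness: an agent that hard-wires only one of these rules forces users who prefer the other rule into a mismatched decision process.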
Nevertheless, the decision strategies utilized by the agents explored in all these studies were pre-configured, and each agent employed only one decision strategy. Users had no control over the decision strategies that an agent employed. The concepts of user control and system restrictiveness have been proposed by Silver (1988; 1990; 1991a; 1991b) as important design concepts in the context of decision support systems (DSS). Agent restrictiveness is defined as the extent to which a recommendation agent limits its users' choices and decision-making processes. Since an online recommendation agent includes a finite set of functional capabilities, when a decision-maker relies on an agent to solve a problem, his or her decision-making process is constrained by the agent's capabilities (Silver, 1991b). When a consumer's desired decision strategy is not supported by an agent, the agent will be perceived to be restrictive and the final recommendations may not fit the consumer's needs. As a result, the consumer may have negative perceptions about the agent and may not trust it. However, few studies have investigated the impact of agent restrictiveness on consumers' perceptions and evaluations of online recommendation agents.

Working from the theory of system restrictiveness, supplemented by behavioral decision theories (e.g., Payne et al., 1993; Todd and Benbasat, 1992), another focus of this research is to investigate whether or not allowing user control and choice of strategies will influence users' perceptions and evaluations of an agent. Both the benefits and the costs of such a design are analyzed. Our review of current agent applications shows that most agents do not allow users control over the strategy employed by the agents.

Due to the high uncertainties involved in online environments (e.g., uncertainty about the quality of an agent's recommendations due to users' inability to touch or feel the products or to communicate with salespersons face-to-face), user control is desirable in that, with appropriate control, users "can be reasonably confident that no major, unpleasant surprises will occur" (Merchant, 1984). As a result, user control can effectively reduce uncertainty perceptions and increase trust (Das and Teng, 1998; Gefen et al., 2003b; Pereira, 2000). Also, the large number of product alternatives available in Internet shopping environments requires that the strategies employed by an agent be designed to narrow down the product alternatives and retain the products that are most suitable for consumers. Therefore, the research on the decision strategy support capability has significant practical implications as well.

2.4 Dependent Variables Investigated in the Literature and this Thesis

The major dependent variables that have been used in prior studies to measure the impacts of recommendation agents and their features and capabilities mainly fall into five categories, as summarized in figure 2-1: 1) agent performance (e.g., decision quality, predictive power), 2) cognitive effort (e.g., consideration set size, perceived effort), 3) user evaluations and perceptions (e.g., attitude, confidence, trust), 4) intentions (e.g., adoption intentions), and 5) other variables, including intervening variables (e.g., internalization, transparency).

The core dependent variable explored in this research is consumers' trust in online recommendation agents. Compared with the other variables, trust in online recommendation agents is not well understood and was under-investigated in prior studies. The nature of trust in online recommendation agents is discussed in section 2.5 and defined in section 2.6.

Trust in a recommendation agent significantly influences consumers' intentions to adopt the agent (e.g., Komiak, 2003). Consumers delegate some shopping tasks (e.g., finding a product that fits their needs and preferences) to recommendation agents, and the agents work on behalf of the consumers. However, consumers may have some concerns, such as whether or not the agents have the ability to understand consumers' needs and know all potential products that fit those needs, and whether or not the agents are biased toward certain manufacturers. These concerns reduce consumers' trust in the agents, and consumers may even perceive relying on such agents to be detrimental. Consequently, consumers may not be willing to use the recommendation services. In online shopping environments, these concerns are not uncommon due to the lack of proven guarantees that e-vendors or agent providers will not engage in opportunistic behaviors and the lack of cues to judge the quality of recommendation services (Gefen et al., 2003b). Andersen, Hansen and Andersen (2001) found that trust in recommendation agents is the most important expectation users have. Therefore, the impact of important agent capabilities on users' trust in an agent is examined in this dissertation. In particular, this research focuses on consumers' initial trust in online recommendation agents, which is formed after consumers have their first experience with the agents.
While we recognize the importance of the evolving nature of trust, our focus on initial trust is mainly because consumers' perceptions of uncertainty and risk about using recommendation agents are especially salient when they are not familiar with the agents during the initial contact (McKnight et al., 2002b). Therefore, sufficient initial trust in the agents is needed to overcome these perceptions. Otherwise, consumers can easily switch to other websites with a click of the mouse.

Another variable is consumers' intentions to adopt recommendation agents. The value of recommendation services comes from consumers' adoption and use of these services. This research will investigate the effects of the agent features and capabilities mentioned above and examine important intervening variables that influence consumers' adoption decisions.

The mediating variables leading to consumers' trust and intentions to adopt online recommendation agents belong to the other categories of dependent variables mentioned earlier. For the cognitive effort related variables, consideration set size, decision time, and perceived cognitive effort were measured in Experiment 2.

Regarding agent performance, previous studies investigated decision quality or other similar performance-related variables in certain situations. Some of them investigated collaborative-filtering-based agents (e.g., Ansari et al., 2000). In these studies, existing data (e.g., a set of product alternatives preferred by each particular user) are available to examine the recommendation quality by checking whether or not the recommendations were actually the products preferred by users.

This research will measure a surrogate variable, participants' competence belief in recommendation agents (one of the trusting beliefs). Competence-belief in an agent means that an individual believes that the agent has the ability, skills, and expertise to perform effectively in specific domains (McKnight et al., 2002a). When consumers make decisions based on an agent's recommendations, their decision quality is influenced by whether or not the agent has the ability and expertise to find suitable products. Therefore, the competence belief in an agent is used as a surrogate variable for the subjective measure of decision quality.

Other studies used artificial data for product alternatives, including non-dominated alternatives (Haubl and Trifts, 2000). A product alternative is non-dominated if no other alternative is superior on one attribute without, at the same time, being inferior on at least one other attribute (Haubl and Trifts, 2000). This enables researchers to measure decision quality by checking to what extent the non-dominated alternatives were recommended. In this research, factual product data are preferred because dealing with the trade-offs among product alternatives is actually one of the key tasks that an agent performs (Haubl and Murray, 2003). [Footnote 3: Admittedly, factual data may or may not include non-dominated alternatives.] It is less necessary to deal with trade-offs if there are non-dominated alternatives, because they are better than all the dominated alternatives on all attributes. For factual product data, the best product alternative is difficult to determine because trade-offs are involved in the evaluation of product attributes, and the best choice depends on different consumers' needs and preferences.
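A minimal sketch of the dominance check implied by this definition may help make it concrete. The alternatives and attribute values below are invented, and higher values are assumed to be better.

```python
# An alternative is dominated if some other alternative is at least as
# good on every attribute and strictly better on at least one.

def dominates(a, b):
    """True if alternative a dominates alternative b."""
    return (all(a[k] >= b[k] for k in a)
            and any(a[k] > b[k] for k in a))

def non_dominated(alternatives):
    """Keep only the alternatives that no other alternative dominates."""
    return {
        name: attrs
        for name, attrs in alternatives.items()
        if not any(dominates(other, attrs)
                   for other_name, other in alternatives.items()
                   if other_name != name)
    }

cameras = {
    "A": {"zoom": 3, "resolution": 5},
    "B": {"zoom": 4, "resolution": 5},  # dominates A
    "C": {"zoom": 2, "resolution": 8},  # non-dominated: trade-off with B
}
print(sorted(non_dominated(cameras)))  # ['B', 'C']
```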
Another objective way to measure decision quality is whether or not a shopper, after making a purchase decision, changes his/her mind and switches to another alternative when he/she is exposed to all product alternatives (Haubl and Trifts, 2000). Given the large number of product alternatives in online environments, it is also difficult to ask participants to choose one product from hundreds of alternatives. Accordingly, an objective determination of agent decision quality is difficult. Instead, the perceived competence of an agent is used as a surrogate variable.

Finally, this dissertation also identifies other important mediating variables (e.g., perceived cognitive effort, perceived agent transparency, and perceived agent restrictiveness) that influence consumers' trust and intentions to adopt an online recommendation agent. These mediating variables help to explain how the agent capabilities influence consumers' evaluations of an agent.

2.5 Trust in Technological Artifacts and Online Recommendation Agents

The importance of trust in online environments has been addressed in many studies (e.g., Gefen et al., 2003b; Jarvenpaa et al., 2000; McKnight and Chervany, 2001; Pavlou, 2003). However, the trust targets in most prior studies are humans, and the nature and role of trust in technological artifacts remain unclear.

Trust is a social construction that originates from interpersonal relationships (Sztompka, 1999). The connection between trust and technological artifacts has been the subject of debate in many studies that have explored whether or not technological artifacts can be recipients of trust, and whether it is valid to ascribe human characteristics to technological artifacts (Chopra and Wallace, 2003; Corritore et al., 2003). Some researchers have been opposed to attributing trustworthiness to technological artifacts and have argued that recipients of trust must possess consciousness and agency (Friedman et al., 2000). Humans exhibit these faculties, but "technological artifacts have not yet been produced in substance and structure that warrant in any stringent sense the attribution of consciousness or agency" (Friedman et al., 2000, p. 36). Friedman and Millett (1997) reported that among the 29 male undergraduate computer science majors they interviewed, 83 percent attributed aspects of agency - either decision-making or intentions - to computers, but only 21 percent consistently held computers morally responsible for errors. Thus, the study concluded that users are not totally engaged in social relationships with technology, given that computers are not perceived as completely responsible for the consequences of their use.

Other researchers have agreed that users attribute human characteristics to technological artifacts, but this has been accepted with a measure of caution. Kiesler and Sproull (1997) have argued that any such attribution is an "as if" response rather than a true attribution of humanity, i.e., the characterization "may not extend much further than the situation in which the user is tested" (pp. 196-197). Reeves and Nass (1996) have found that after participating in controlled experiments, individuals might think that their social behavior toward technological artifacts and the personality they have assigned to the technology are not wholly appropriate.
Arguably, computers do not have motivations involving a "self" and dispositions toward social relationships. Nevertheless, it has been demonstrated empirically that people indeed perceive some human properties in technological artifacts during their interactions with the technology (Dryer, 1999; Reeves and Nass, 1996).

The other side of the academic debate, favoring the attribution of trustworthiness to technological artifacts, is supported by a large amount of evidence. Conceptually, Sztompka (1999) has argued that trust in a person and trust in a technology are not fundamentally different, because behind all human-made technologies there stand people who design, operate, and control them. Empirically, Reeves and Nass are among the most prominent researchers who have argued convincingly that people treat computers as social actors and apply social rules to them (Reeves and Nass, 1996). After conducting more than 30 empirical studies on this issue, they have found that even technologically sophisticated people treat technological artifacts (e.g., computers) as if they were human beings, rather than simple tools. People are polite to computers, respond to praise they receive from computers, view them as teammates, and easily assign personalities (e.g., dominance, friendliness, and helpfulness) to them. Such social responses apply not only to sophisticated conversational computer agents (Cassell and Bickmore, 2000), but even to computer systems with simple text interfaces (Nass et al., 1997; Reeves and Nass, 1996). Thus, there is ample and convincing evidence that justifies the treatment of technological artifacts as recipients of social and relational aspects of trust.

Furthermore, a variety of studies have extended the attribution of trustworthiness to abstract and technical systems, as well as to intelligent computer agents (Komiak and Benbasat, 2004; Muir and Moray, 1996). For example, Muir and her collaborators (e.g., Muir, 1987; 1996; Muir and Moray, 1996) have included a dimension of morality (e.g., responsibility) in their definition of trust in machines and automation. In their experiments, participants were able to evaluate the responsibility of machines in the processes of building users' trust. Similarly, in a study of embodied conversational agents by Cassell and Bickmore (2000), trust was defined as a composite of benevolence and credibility. An agent's benevolence was demonstrated through past examples of benevolent behavior such as third-party affiliations or participation in interaction-based social rituals, such as greetings.

Additionally, empirical evidence has indicated that there are no significant differences between the components of trust in humans and those in technological artifacts. Notably, Jian, Bisantz and Drury (2000) conducted a word-elicitation study to understand the similarities and differences among human-human trust, trust in human-machine relationships, and trust in general. Their results indicate that particular components of trust are similar across these three types of trust. Even in cases of trust in machines, participants use words like "integrity," "honesty," "cruelty," and "harm" to characterize machine behavior.
To summarize, while it may at first appear debatable that technological artifacts can be objects of trust and that people assign human properties to them, evidence from a variety of relevant literature supports this argument. People respond socially to technological artifacts and perceive that they possess human characteristics (e.g., motivation, integrity, and personality). In particular, research findings have demonstrated that the components of trust in humans and in technological artifacts do not differ significantly. This indicates that people not only utilize technological artifacts as tools, but also form social and trusting relationships with them.

2.6 Defining Trust in Online Recommendation Agents

Based on the supporting evidence, we define trust in online recommendation agents as an extension of interpersonal trust, which has been extensively studied in the recent literature of Information Systems (IS) and other disciplines. Recent literature in IS has discussed four general approaches to defining trust (Gefen et al., 2003b; McKnight et al., 2002a): 1) a belief or a collection of beliefs (Bhattacherjee, 2002); 2) emotional feelings (Komiak, 2003); 3) an intention (Mayer et al., 1995); and 4) a combination of these elements (McKnight et al., 2002a). Following the belief approach, McKnight et al. (2002a) have defined trusting beliefs as a trustor's perception that the trustee has attributes that are beneficial to the trustor. This belief leads to behavioral intentions. In the emotional feeling approach, Komiak and Benbasat (2002) have defined trust in terms of feelings of security, comfort, and lack of fear. In the intention approach, Mayer et al. (1995) have defined trust as "the willingness of a party to be vulnerable to the actions of another party based on the expectation that the other will perform a particular action important to the trustor, irrespective of the ability to monitor or control that other party." In the last approach, researchers have treated these elements, such as trusting beliefs and intentions, as components of trust (McKnight et al., 2002a).

This dissertation focuses on trusting beliefs, because trusting beliefs have been identified as important antecedents of trusting intentions and emotional trust (e.g., Komiak, 2003; McKnight et al., 2002a). According to the theory of reasoned action, people's beliefs influence their behavioral intentions (Ajzen and Fishbein, 1980). Also, previous studies have provided empirical evidence that emotions are evoked primarily by cognition (Kahn et al., 2002), and emotional trust in recommendation agents (RAs) has been studied in prior research (Komiak, 2003). Accordingly, this dissertation follows the belief approach to trust.

Adapting the definitions of trust from Komiak (2003) and McKnight et al. (2002a), the current study defines trust in a recommendation agent as an individual's beliefs in an agent's competence, benevolence, and integrity. These three trusting beliefs have been well accepted in many recent studies (McKnight et al., 2002a).
According to McKnight et al. (2002a), competence-belief means that an individual believes that the agent has the ability, skills, and expertise to perform effectively in specific domains; benevolence-belief means that an individual believes that the agent cares about her and acts in her interests; and integrity-belief means that an individual believes that the agent adheres to a set of principles (e.g., honesty and promise keeping) that she finds acceptable.

As mentioned earlier, our study concentrates on initial trust. Generally, the definition of trust discussed here applies to different temporal contexts, including the initial stage of trust formation (Koufaris and Hampton-Sosa, 2004). More detailed discussions of the meaning of trust and general approaches to conceptualizing it can be found in several other studies that have comprehensively reviewed the trust literature (e.g., Gefen et al., 2003b; Mayer et al., 1995; McKnight et al., 2002a).

CHAPTER 3: IMPACT OF EXPLANATIONS ON TRUST IN ONLINE RECOMMENDATION AGENTS

3.1 INTRODUCTION

This chapter empirically investigates how and to what extent the use of explanation facilities increases consumers' trust in online recommendation agents. Explanation facilities are important components of intelligent systems, including recommendation agents. By making the performance of a system transparent to users, explanation facilities promote user trust and influence user acceptance of an agent (Gregor and Benbasat, 1999; Hayes-Roth and Jacobstein, 1994). An explanatory capability is thought necessary to imitate behavior that has been demonstrated to be a characteristic of consultations with human experts (Gregor, 2001). Explanation facilities were rated by users as the fourth most important factor among 87 knowledge-based system (KBS) capabilities (Stylianou et al., 1992). Recent research on KBSs shows that the role of explanations continues to be fundamentally important (Gregor, 2001). However, so far no guidelines exist for the provision of explanation types, particularly for online recommendation agents, and few studies have empirically tested the impact of explanation facilities on consumers' trust in online recommendation agents. In this light, the research questions investigated in this chapter include: 1) what types of explanations should be embedded in online recommendation agents, and 2) whether or not the use of explanations influences consumers' initial trust in the agents, and if so, how and to what extent.

This study identifies the characteristics of online recommendation agents that may hamper consumers' trust-building in the agents. The central premise of this study is that explanations can be used to overcome these constraints, and thus facilitate consumers' trust in the agents. A laboratory experiment was conducted to test the effects of different types of explanations on consumers' trust in recommendation agents.

The remainder of this chapter is organized as follows. Section 3.2 introduces the topic of recommendation agents in e-commerce. Section 3.3 reviews the literature and theoretical foundations, and develops the hypotheses to be tested. Section 3.4 describes the research method and section 3.5 reports the results. This chapter concludes with a discussion of the findings and their implications, as well as the limitations of this research and potential directions for future research.
3.2 RECOMMENDATION AGENTS FOR E-COMMERCE

This study focuses on content-filtering recommendation agents that provide consumers with advice about what to buy (Ansari et al., 2000). The agents generate recommendations on the basis of agent-user dialogues (Russo, 2002), in which consumers answer several questions asked by the recommendation agents regarding their needs and product preferences, and the agents provide shopping recommendations based on their answers. Figure 3-1 is a screen shot of the agent-user dialogue in an experimental platform developed for the current study, and Figure 3-2 gives an example of shopping recommendations arising from the agent-user dialogue. This simulated recommendation agent provides shopping advice for digital cameras.

[Figure 3-1 Agent-User Dialogue from the Experimental Agent: a screen shot of the myCameraAdvisor.com dialogue, in which each question (e.g., "How far away are the subjects that will be focused on most often by the digital camera?") offers several answer options, an importance rating for the criterion on a 1-9 scale, and buttons for Guidance, Why Explanations, and How Explanations.]

[Figure 3-2 Recommendations from the Experimental Agent: a screen shot of the recommendation results page, in which a how explanation states that the fit score for the top model is 93, calculated as Fit_Score = 100 - Sum(Importance_Level x Attribute_Needs_Gap); when a product attribute satisfies the user's needs, Attribute_Needs_Gap equals 0, and otherwise it equals 1 or 2 depending on how big the gap is. In the example, a total gap score of 7 resulted from unsatisfied requirements on manual features and on the number of pictures to be taken at a time.]
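The fit-score heuristic shown in Figure 3-2 can be sketched in a few lines. The attribute names, importance levels, and gap values below are invented for illustration; the experimental agent's actual implementation is not reproduced here.

```python
# A sketch of the fit-score formula from Figure 3-2:
# Fit_Score = 100 - sum(Importance_Level * Attribute_Needs_Gap)

def fit_score(importance, gaps):
    """importance: attribute -> importance level (1-9, from the dialogue)
    gaps: attribute -> needs gap (0 if the need is satisfied, else 1 or 2)
    """
    return 100 - sum(importance[a] * gaps[a] for a in gaps)

importance = {"zoom": 5, "resolution": 7, "manual_features": 3}
gaps = {"zoom": 0, "resolution": 0, "manual_features": 2}
print(fit_score(importance, gaps))  # 100 - (3 * 2) = 94
```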
Recommendation agents employed in e-commerce environments are different from traditional KBSs in at least two aspects.

First, there is an agency relationship between an agent and its users (the principals), because users depend on the agent to find suitable products on their behalf (Bergen and Dutta, 1992). The agency relationship leads to two key concerns - information asymmetry and opportunism. Information asymmetry means that an agent has more information than the principal with respect to the target behavior (i.e., the "procedures" that an agent applies to generate recommendations). Without such knowledge, the principal cannot completely verify the skills and abilities of the agent (Eisenhardt, 1989). Opportunism follows from the fact that the agent can take advantage of users because they lack oversight of the agent. Since the agent is provided and owned by the website and e-vendor, it could be programmed in a way that focuses only on higher profits for the e-vendor. Therefore, there could be goal incongruence between the agent and its users. It is not clear if the agent performs solely for the benefit of its users, or if it provides recommendations that favor its provider, who may be a merchant or manufacturer.

Second, users possess a high amount of choice discretion in the agent-user dialogues. When recommendation agents are used to support consumer decision-making, consumers' inputs, composed of their needs, product uses, and preferences, are very flexible. However, lacking adequate knowledge about products, consumers may improperly employ this choice discretion to express their needs. For most products, product attributes are correlated (Widing II and Talarzyk, 1993). For example, a lower price usually means fewer advanced features for a product, and therefore consumers may have to make trade-offs regarding their choices and needs. Without sufficient product knowledge, users may overestimate their real needs and end up with a very powerful product but at a very high price. Consequently, the high amount of discretion may lead to negative user perceptions of the agents.

These two issues hamper consumer trust building in online recommendation agents. The central premise of this study is that appropriate explanations need to be provided to deal with these two issues so that trust-building in recommendation agents will be enhanced.

3.3 HYPOTHESIS DEVELOPMENT

In general, previous research has focused on two areas of study, referred to as KBS explanations and DSS (decision support systems) guidance. KBS explanations deliver knowledge about a KBS's actions to make it more transparent to its users - what the system does, how it works, and why its actions are appropriate (Gregor and Benbasat, 1999). On the other hand, studies of DSS guidance (Barkhi, 2001; Limayem and DeSanctis, 2000; Mahoney et al., 2003; Silver, 1991a; Wilson and Zigurs, 1999) have focused on presenting knowledge that can adequately guide decision makers using the system (e.g., about how to proceed or what input values to use). Based on the explanation literature, three types of explanations (i.e., how and why explanations from KBS explanations, and guidance) will be evaluated as part of online recommendation agents.
Given the three belief components of trust discussed in section 2.6 (i.e., competence, benevolence, and integrity), we will explore: 1) which types of explanations and knowledge can serve as a direct way to deal with the potential trust obstacles and thus increase consumer trust, and 2) through which trusting beliefs such improvements are realized.

3.3.1 IMPACT OF HOW AND WHY EXPLANATIONS

Various KBS explanations and different ways to classify them have been summarized in the literature (Dhaliwal and Benbasat, 1996; Gregor and Benbasat, 1999). One approach to classifying explanations is based on the nature of explanation queries (e.g., what, why, how, when, and where) (Wick and Slagle, 1989). In particular, the how (referred to in some studies as "lines of reasoning") and why explanations are of interest in this study because they directly deal with the two main concerns identified with the agency relationship (i.e., information asymmetry and opportunism) and facilitate trust building in recommendation agents. How explanations reveal the line of reasoning used by agents based on consumer needs and product preferences, and they outline the logical processes involved in reaching final recommendations. Why explanations justify the importance and purpose of agents' questions to consumers, in addition to providing justifications for the recommendations provided after the consultation is complete. To elaborate on these explanations, examples are provided in table 3-1. These examples come from the digital camera recommendation agent used in the experiment conducted for this study.

Table 3-1 Examples of how explanations, why explanations, and guidance

A Question in the Agent-User Dialogue: How far away are the subjects that will be focused on most often by the digital camera? 1) I don't need my camera to focus on anything other than subjects in the immediate vicinity. 2) I want a camera that will focus on subjects at a moderate distance. 3) I want a camera that will focus on subjects from far away. 4) I don't have an opinion on this.

How Explanation: Your distance from the subjects you want to focus on will determine the suitable zoom level of a digital camera. If you want a camera that will focus on subjects further away, the camera with a stronger optical zoom level will have higher priority in my recommendations. Specifically, the four options will determine the following zoom levels: 1) 2X optical zoom and below. 2) Between 2X and 5X optical zoom. 3) 4X optical zoom and above. 4) No minimum requirement in zoom capability.

Why Explanation: The purpose of asking this question is to know what kinds of photos you will often take. It is quite useful to take photos at different distances. For example, for portraits of family and friends, subjects are close to a camera, but for many scenery or artistic photos, subjects may be far from your camera.

Guidance: Most digital cameras can take pictures beyond the immediate vicinity. However, cameras capable of taking pictures from very far away will be more expensive. As well, your choices will be more limited (only about 20%). Hence, be careful not to over-estimate your needs.

The how and why explanations, first introduced in MYCIN (Buchanan and Shortliffe, 1984), remain the foundations of most explanation facilities in current KBS applications (Dhaliwal and Benbasat, 1996).
Since users are ultimately responsible for, and are impacted by, their choices, they will tend to reject advice from an agent whose reasoning they do not understand (Hollnagel, 1987). Most studies have suggested that explanation facilities can make the advice from a KBS more acceptable to users and more effective in influencing their beliefs (Ye and Johnson, 1995). Regarding trust, because reliable trust measures were not available until recently, most studies have used surrogate variables to measure it (Lerch et al., 1997). A full coverage of the findings of KBS explanation studies can be found in Gregor and Benbasat (1999).

In the present study, how explanations inform the principal (the user) about the "procedures" that an agent applies to generate recommendations. How explanations are viewed as links between what buyers know, i.e., their needs, intended uses, preferences, and so forth, and what they need to know, i.e., the product attributes that satisfy their needs, uses, and preferences (Russo, 2002). With appropriate expertise, the agents are able to generate product recommendations that satisfy users' needs and do not miss good product alternatives. However, not explaining agents' behaviors in terms of how they generate the recommendations would create a "knowledge gap" between the agents and their users, in that users lack information regarding the reasoning process of the agent and, as a result, are unable to verify the agent's expertise and ability (Eisenhardt, 1989). How explanations alleviate the information asymmetry barrier to trust building by bridging this "knowledge gap."

Furthermore, how explanations reveal the underlying reasoning processes that govern the agent's decision making and thus demonstrate the skills, competencies, and expertise that enable agents to generate recommendations. In discussing human trust in an automated system, Lee and Moray (1992) suggest that a system's technical competence is perceived by human operators through their understanding of the underlying processes governing the system's behavior. Muir and Moray (1996) have suggested that trust in machines is based primarily on user perceptions of the expertise of the machine. The skills and expertise that an entity (functioning as a trustee) demonstrates increase its competence in the eyes of its users (the trustors) and hence increase the likelihood that users will trust the entity (Hovland et al., 1953). Therefore,

H3-1a: Consumers will have higher competence beliefs in the recommendation agents with how explanations than in those without how explanations.

Benevolence and integrity beliefs are different from the competence belief (McKnight et al., 2002a). How explanations demonstrate an agent's skills and expertise in generating product recommendations and deal with the "knowledge gap" between a recommendation agent and its users, which is directly related to the competence belief. Nevertheless, how explanations may have some influence on the other trusting beliefs as well. For example, by sharing the underlying reasoning processes with consumers, an agent might be perceived to be benevolent. Since there is no direct theoretical or empirical evidence to indicate the extent to which how explanations will influence benevolence or integrity beliefs, the impact of how explanations on benevolence and integrity beliefs may not be significant enough.
Accordingly, a null effect is predicted and its test is considered exploratory.

H3-1b(c): There will be no difference in consumers' benevolence (integrity) beliefs in the recommendation agents with and without how explanations.

The provision of why explanations is also justified by the existence of an agency relationship between recommendation agents and their users. Users of an agent may perceive the existence of agent opportunism due to this agency relationship (Bergen and Dutta, 1992). Users may be concerned that the agent "works" for the provider (the online store or manufacturer), and they may question whether or not the agent puts their interests first. Therefore, not explicitly explaining the good "intention" of the recommendation agents would create an "intention gap." Why explanations in this study are used to demonstrate the agents' endeavors and purposes in satisfying users' needs, interests, and preferences; they convey the agents' goodwill towards users. Consequently, why explanations bridge the potential "intention gap" perceived by users, and thus alleviate their concerns about the agent's opportunism. Given the virtual nature of an Internet-based recommendation agent, there are fewer cues available for consumers to infer an agent's motivation, compared with salespersons in physical stores, whose motivations may be discerned from their appearance, attitude, tone, and so forth. Instead, why explanations can be used effectively to convey the agent's benevolence in providing recommendation services.

Studies on trust in automated systems suggest that a system's ability to communicate its motivation enhances users' perception of the system's intention (e.g., Muir and Moray, 1996). Motives and intentions are important factors in conveying an impression of benevolence toward others (Cook and Wall, 1980), and by extension, toward computer systems as well. Trust emerges when a party identifies and understands another party's goals and intentions better (Doney and Cannon, 1997). Therefore:

H3-2a: Consumers will have higher benevolence beliefs in the recommendation agents with why explanations than in those without why explanations.

Similar to H3-1b and H3-1c, there is no direct theoretical or empirical evidence to indicate the extent to which why explanations will influence competence or integrity beliefs. We also hypothesize that:

H3-2b(c): There will be no difference in consumers' competence (integrity) beliefs in the recommendation agents with and without why explanations.

3.3.2 IMPACT OF GUIDANCE

In addition to how and why explanations, we also suggest the provision of guidance in recommendation agents. While interacting with online recommendation agents, consumers have high discretion over which options to select to express their needs in the agent-user dialogues. However, as noted in section 3.2, without sufficient product knowledge and expertise, consumers may be unable to express their needs properly, leading to unsuitable product recommendations from the agent (Komiak, 2003). As a result, consumers may have negative perceptions toward the agents. By supplying consumers with relevant knowledge, the provision of guidance aims at overcoming this obstacle and facilitating consumer trust in recommendation agents.
Silver (1991a) has defined guidance in the context of DSS as knowledge provided in a system to "enlighten or sway its users as they structure and execute their decision-making processes - that is, as they choose among and use the system's functional capabilities" (p. 107). The effects of guidance have been empirically tested by several recent studies. Applying guidance to multi-criteria decision-making models, Limayem and DeSanctis (2000) observed that guidance enables groups to achieve greater model understanding, increases decision satisfaction, improves perceptions of decision quality, and fosters comfort and respect with the technology. To our knowledge, the impact of guidance on user trust building in online recommendation agents is still an unexplored area.

Silver (1990; 1991a) has suggested that the need for and effects of guidance depend on a system characteristic, system restrictiveness, which refers to the way a system limits its users' choices and decision-making processes. When users have greater discretion in making choices and judgments, as is the case in the task environment used in our study, guidance should be of greater value (Silver, 1991a). In the current study, users' discretion lies in their "freedom" to choose different options in the user-agent dialogues. High user discretion works best when users have the relevant knowledge so that they can make choices and judgments wisely and express their needs properly. However, since consumers are not necessarily experts in the product domain they are dealing with, agents need to guide consumers by providing them with relevant knowledge so that they can express their needs properly in the agent-user dialogues.

Unbiased product information is among the information that online shoppers want most (Nielsen et al., 1999). Marketing research has shown that many people have negative perceptions about salespersons because salespersons are under pressure to achieve their sales quotas and may provide biased suggestions and guidance (Kopp, 1993). Agent integrity and objectivity are among the main concerns that consumers have when using online recommendation agents and virtual shopping assistants (Komiak et al., 2005). Consumers need objective product information and expertise to make their judgments and choices in the agent-user dialogues. For example, based on their needs, users can choose a corresponding option for each question in an agent-user dialogue, but they also need to know the costs (i.e., a price increase or a limitation of the product choice range) of obtaining the product feature that satisfies their needs.

Guidance is not necessarily intended to steer decision makers in a given direction. In some situations it can be suggestive, making judgmental recommendations to users, but in others it may simply be informative, providing users with unbiased, pertinent information (Silver, 1991a). In the context of online recommendation agents, we define guidance as the knowledge about potential constraints applied to different choices for each question in the user-agent dialogue, and about how to adjust user needs and preferences accordingly. [Footnote 4: Our definition and operationalization of guidance does not include knowledge explaining how to use different operators or menus (Silver, 1991), because the agent developed is very simple to use.] Table 3-1 presents an example of guidance provided by the experimental agent. Guidance in this study provides objective knowledge regarding the potential constraints brought about by different choices for the questions in the agent-user dialogue.
Using guidance, consumers will be informed about the potential costs of having different product features (e.g., a higher price or limited product choices). Although consumers have high input choice discretion and can express their needs freely, guidance helps consumers make trade-offs and express their needs more properly, influencing the generation of the final recommendations. Without such guidance, consumers may choose too many desired features or overestimate their needs. Accordingly, the agent might provide products with advanced features and a high price, or provide very few recommendations. As a result, consumers may perceive the agents to be biased. By exposing the constraints of different options, guidance enhances consumers' perceptions of the agents' objectivity and honesty. A trustee is deemed to exhibit high integrity when a trustor believes the trustee has a strong sense of justice, honesty, and objectivity (Mayer et al., 1995). Therefore:

H3-3a: Consumers will have higher integrity beliefs in the recommendation agents with guidance than in those without guidance.

Similar to H3-1b(c) and H3-2b(c), due to the lack of direct theoretical or empirical evidence to indicate the extent to which guidance will influence competence or benevolence beliefs, we hypothesize that:

H3-3b(c): There will be no difference in consumers' competence (benevolence) beliefs in the recommendation agents with and without guidance.

How explanations, why explanations, and guidance focus on three independent types of knowledge, namely, agent expertise and line of reasoning in recommendation generation, agent motivation, and objective information for informed inputs. There is no theoretical or empirical evidence to indicate that there will be interactions among how explanations, why explanations, and guidance with respect to different trusting beliefs. Therefore,

H3-4: There will be no interaction effects among how explanations, why explanations, and guidance with respect to consumers' competence, benevolence, or integrity beliefs.

3.3.3 CONTROL VARIABLES

Prior research into explanations and trust suggests a number of additional factors that should be controlled due to their potential influence on trust. Among them are consumers' trust propensity, product expertise, and preferences for effort-saving versus decision quality.

Trust propensity is a personality trait that affects the likelihood that an entity will exhibit trust (Lee and Turban, 2001; Mayer et al., 1995). As consumers develop trust in trustees, they look for cues and information about the trustees. The trust propensity of individual consumers magnifies, or reduces, the effectiveness of the cues and information provided by trustees (Lee and Turban, 2001). McKnight et al. (2002a) have likewise suggested that consumers' disposition to trust should influence their trusting beliefs. Hence, trust propensity is included in this study as a control variable in analyzing the impact of explanations on trusting beliefs.

Many explanation studies (e.g., Dhaliwal and Benbasat, 1996; Ye and Johnson, 1995) have shown that domain expertise influences the use and effects of explanations.
Customers' product expertise provides a foundation for their understanding of the explanations and, therefore, will influence the effects of these explanations.

Consumers' preferences for effort-saving versus decision quality are also included as a control variable because they affect consumers' intentions to use and adopt a computerized decision aid (Todd and Benbasat, 1999). Use and assimilation of the explanations will require cognitive effort, and hence consumers' preference for effort-saving versus decision quality should impact their use and understanding of explanations, and consequently the effects of explanation use.

3.4 RESEARCH METHOD

To examine the effects of the three types of explanations on consumer trust, a 2 x 2 x 2 factorial experimental design was employed. The manipulated factors are how explanations (with, without), why explanations (with, without), and guidance (with, without). All three factors were manipulated between participants.

For the experiment, a recommendation agent for digital cameras that makes recommendations based on the preferences and needs specified by consumers was developed based on a content-filtering method. Our experimental recommendation agent was built to simulate two well-known operational recommendation agents: one from www.ActiveDecisions.com and the other from www.dealtime.com. They are leading agent providers and are widely used by many websites, such as RadioShack.ca and Sony.com. In order to elicit users' preferences and needs, an agent-user dialogue was used to simulate the dialogues presented in other studies (Russo, 2002) and in commercial applications.

Digital cameras were chosen for two reasons. First, content-filtering-based agent technology works best for relatively complex products (Russo, 2002), and indeed several commercial agents have already been developed for use in the marketing and sale of digital cameras. Digital cameras have a variety of attributes (e.g., zoom and resolution) that require a certain level of expertise from consumers. In addition, an informal survey that we conducted indicated that many undergraduate students do not have digital cameras, although they are interested in them. This ensures a high motivation level for the participants in the experiment.

A user-invoked method was used for explanation provision (Gregor and Benbasat, 1999). In order to reduce users' effort in getting the explanations, pop-up windows were used. When a user pointed the mouse at an explanation icon, an explanation window would appear automatically, and then disappear when the mouse was moved away. A pilot test indicated that most users liked the use of "pop-up" mechanisms to provide explanations.

3.4.1 PILOT TEST ON EXPLANATION VALIDATION

In an effort to assess the face validity and definitional accuracy of the explanations that were incorporated into the agent prototype, a pilot test was conducted. Definitional accuracy refers to how faithfully an explanation represents an operationalization of the definition of its class (Dhaliwal, 1993). In the pilot test, eight graduate students who are experienced digital camera users were asked to classify the explanations to be examined in this study into one of the three categories (how explanation, why explanation, and guidance) or none of them. 92% of the how explanations, 80% of the why explanations, and 90% of the guidance were correctly classified.
The explanations thus appeared to be consistent with their definitions. The suggestions from the pilot test regarding clarity in wording were incorporated into the explanations used for the main experiment.

3.4.2 PARTICIPANTS, INCENTIVES, AND EXPERIMENTAL TASKS AND PROCEDURES

A total of 120 students at a large North American university were recruited for the experiment, with fifteen participants randomly assigned to each of the eight treatment groups. Based on Cohen (1988), this sample size ensures sufficient statistical power (about 80%) at the significance level of .05, assuming a medium effect size (f = .25) for the main effects as well as the interaction effects of the three types of explanations, based on previous empirical studies (Dhaliwal, 1993).
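The power reasoning behind this sample size can be reproduced from the noncentral F distribution. The sketch below assumes a single main effect (one numerator degree of freedom) in the 2 x 2 x 2 design; it is an illustration of the Cohen (1988) calculation, not the procedure actually used in the study.

```python
# Power for Cohen's f = .25 on a main effect in the 2 x 2 x 2 design,
# N = 120 (8 cells of 15), alpha = .05.
from scipy.stats import f, ncf

N, cells = 120, 8
effect_f, alpha = 0.25, 0.05
df_num, df_den = 1, N - cells            # main-effect df, error df
lam = effect_f ** 2 * N                  # noncentrality parameter
f_crit = f.ppf(1 - alpha, df_num, df_den)
power = 1 - ncf.cdf(f_crit, df_num, df_den, lam)
print(f"power = {power:.2f}")            # roughly 0.78, i.e., about 80%
```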
To avoid potential biases in their evaluations, only individuals who did not already own digital cameras were invited to participate in the study. [Footnote 5: The pilot test revealed that participants who already own digital cameras based their evaluations primarily on whether or not the agents recommended the model that they own. Hence, their evaluations might be biased.] This filtering is justified because most consumers may need extra shopping advice when they first buy a product like a digital camera, and do not have sufficient expertise and experience.

The experiment proceeded as follows. A research assistant first trained participants how to use and navigate the assigned Web interface using a tutorial agent that had the same features as the experimental agent. Then, each participant was asked to finish two tasks, first choosing a digital camera for a good friend and then selecting another camera for a close family member. The order of the two tasks was counter-balanced. After each task, the participants were directed to an online form to write down their choice and its justifications. There was no time limit for the tasks. Two tasks were used instead of one in order to ensure that participants had sufficient interactions to evaluate the agent. [Footnote 6: Our pilot test showed that many participants were not very confident in evaluating the agent after completing only one task. After two tasks, participants' evaluations of the recommendation agents reached a relatively stable level and they had no difficulties in completing the questionnaire.] Finally, after the two tasks, participants were asked to complete a questionnaire, which included the measures of the dependent variables, including the trust measures for this chapter and several other constructs analyzed in chapter 4. At the end of the questionnaire, participants were asked to answer several open-ended questions to justify their trust levels in the agents. The written protocols were used to elicit the factors and processes that lead to trust formation, and they are analyzed in chapter 5.

Each participant was guaranteed a monetary compensation for his/her participation ($15). In order to motivate participants to view the experiment as a serious online shopping session and to increase their involvement, the top 25% performers were offered an extra amount ($25), and the participant with the best performance would be offered $200. The participants were told before the experiment that they would be asked to provide their justifications for their choices, and that their performance would be judged based on these justifications. [Footnote 7: As in many other experimental studies (e.g., Mao and Benbasat, 2000), asking participants to provide justifications for their choices is very helpful in making their involvement more serious. However, this may change the goal of the tasks from problem-solving to both problem-solving and learning, which may induce some differences in consumers' use of explanations, and subsequently their evaluations and perceptions of the agent (Gregor and Benbasat, 1999). Hence, this could be a limitation of this study.] The main criterion for the judgment is the extent to which their justifications are appropriate and convincing in supporting their choice of digital camera.

3.4.3 MEASURES

This study used validated scales for all constructs. The measures for the three trusting beliefs in recommendation agents that were developed and validated by Komiak (2003) have been adapted for the current study. Although Komiak's (2003) trust measure was developed particularly for online recommendation agents, the measure is similar to the web trust measure developed by McKnight et al. (2002a). Measures for the control variables have been adapted from Lee and Turban (2001), Davis (1989), and Komiak (2003). All measurement items are listed in the Appendix of this chapter. Since the constructs were measured by multiple items, summated scales based on the average scores of the multiple items were used in the analysis (Hair et al., 1998). Responses were recorded on a nine-point Likert scale, with the endpoints labeled "Extremely disagree" and "Extremely agree." [Footnote 8: We used a nine-point scale rather than five- or seven-point scales to cover the relatively vague valuations in consumers' initial trust formation.]

3.5 DATA ANALYSIS AND FINDINGS

The descriptive statistics are provided in table 3-2. We used Partial Least Squares (PLS), as implemented in PLS Graph version 3.0, to assess the psychometric properties of the trust measures; a detailed measurement validation is reported together with the other measures in chapter 4. The measures for the three trusting beliefs have good reliabilities (Cronbach alphas > .70, as indicated in table 3-2) and satisfactory discriminant and convergent validity. We examined the item loadings, composite reliability of constructs, and average variance extracted (AVE). All of the measures display strongly positive loadings that are significant at the .001 level, indicating high individual item reliability. No item loads higher on another construct than it does on the construct it is designed to measure, and the square root of each construct's AVE is greater than the correlations between the construct and the others. Therefore, the trust measures have good discriminant and convergent validity.

Table 3-2 Construct Attributes

Variable                 | Mean | s.d. | Reliability (alpha) | 1     | 2     | 3     | 4    | 5
1. Competence            | 5.55 | 1.39 | .85                 |       |       |       |      |
2. Benevolence           | 6.18 | 1.29 | .77                 | .65** |       |       |      |
3. Integrity             | 6.04 | 1.21 | .75                 | .34** | .51** |       |      |
4. Trust Propensity      | 5.36 | .89  | .90                 | .26** | .20*  | -.08  |      |
5. Effort vs. Quality(a) | 7.03 | 1.43 | .77                 | -.06  | -.004 | .05   | -.14 |
6. Product Expertise     | 3.00 | 1.39 | 1.00(b)             | .07   | .13   | -.002 | .04  | .08

Note: ** p < .01 (2-tailed), * p < .05 (2-tailed). Off-diagonal elements are interconstruct correlations. (a) Preference for effort-saving versus decision quality. (b) Product expertise is measured by a single item.
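For readers who wish to reproduce such reliability and validity checks, the following sketch computes Cronbach's alpha and the AVE-based discriminant validity comparison. The item loadings are placeholders, not the study's data; only the .65 correlation is taken from table 3-2.

```python
# Cronbach's alpha for a multi-item scale, and the check that the square
# root of a construct's AVE exceeds its correlations with other constructs.
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) array of Likert-scale responses."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

def ave(loadings):
    """Average variance extracted from standardized item loadings."""
    return float(np.mean(np.square(loadings)))

loadings = np.array([0.85, 0.82, 0.78, 0.80])  # hypothetical item loadings
r_competence_benevolence = 0.65                # from table 3-2
print(np.sqrt(ave(loadings)) > r_competence_benevolence)  # True: about .81 > .65
```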
A manipulation check was conducted first to examine the extent of participants' use of the explanations. Then, a 2 x 2 x 2 factorial ANCOVA was conducted to test the effects of explanations on consumer trust in recommendation agents, after accounting for the effects of covariates (Hair et al., 1998).

3.5.1 MANIPULATION CHECK

Participants' navigation screens were recorded unobtrusively as videos by screen-capture software (Camtasia Recorder 3.0) during the experiments. The author and a research assistant reviewed these videos to independently count participants' use of explanations. [Footnote 9: Because Java scripts were used to provide explanations, the client computer and server cannot record participants' use of explanations automatically. The count of explanation use is accomplished objectively. The agreement is close to perfect (98%). We used the average scores of the two judges in the calculations of the explanation use rate.] Table 3-3 reports the distribution of explanation usage rates in the "with" particular explanation groups. On average, 42% of the how explanations, 34% of the why explanations, and 47% of the guidance were viewed by participants. These usage numbers refer to the average percentage of explanations that were viewed by participants in the different treatment groups. [Footnote 10: The details for the calculation of these percentage numbers are as follows. In the agent-user dialogue, there are 14 questions, and after the dialogue, in most cases participants will get 5 recommended products. In the case of the "with why explanations" group, for example, there will be one why explanation for each question and one for each recommended product. Hence, in this group, participants will get 14 why explanations, plus 5 why explanations if they get 5 recommendations. After the experiment, we counted how many explanations were viewed by each participant based on the videos. In our example, if a participant viewed 10 why explanations for the 14 questions and 3 why explanations for the 5 recommended products, then, for this participant, the usage rate for the why explanations is (10+3)/(14+5) = 68.4%. For how explanations and guidance, the same calculation method is used. The numbers reported in this chapter are the averages of the usage percentages for all participants in the "with" particular explanation conditions.] For each type of explanation, all participants who were given explanations viewed at least some of them. Overall, the average usage rates are quite high compared with other empirical studies (e.g., Dhaliwal, 1993). Hence, the three types of explanations in the experimental system are deemed well designed and extensively used. [Footnote 11: The extensive usage of explanations also rules out an alternative explanation for the impact of explanations. Because the explanation contents were not automatically provided, they needed to be invoked by participants. Accordingly, the impact of explanations should not be simply due to the presence of the explanation icons (i.e., availability). Participants obtained the explanation contents, which generated the impact of explanations.]

Table 3-3 Frequency Distribution of Explanation Use

% of explanations used (pc)* | How: number of subjects | Why: number of subjects | Guidance: number of subjects
pc = 0                       | 0  | 0  | 0
0 < pc < 10%                 | 8  | 4  | 5
10% < pc < 30%               | 9  | 27 | 11
30% < pc < 50%               | 16 | 13 | 17
50% < pc < 70%               | 23 | 13 | 16
pc > 70%                     | 4  | 3  | 11
Total                        | 60 | 60 | 60

Note *: For each type of explanation that was provided to participants in the recommendation agent, the total number of explanations ranged from 26 to 38. Explanations were provided for each question in the agent-user dialogues as well as for each recommendation, if any, after the dialogues. The total number of explanations varied because participants got different numbers of recommendations after the agent-user dialogues.
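The usage-rate calculation described in footnote 10 reduces to a one-line ratio; the sketch below simply restates the footnote's worked example.

```python
# Footnote 10's worked example: 10 of 14 question explanations and 3 of 5
# recommendation explanations viewed gives (10 + 3) / (14 + 5).
def usage_rate(viewed_questions, total_questions, viewed_recs, total_recs):
    return (viewed_questions + viewed_recs) / (total_questions + total_recs)

print(f"{usage_rate(10, 14, 3, 5):.1%}")  # 68.4%
```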
" The extensive usage of explanations also rules out an alternative explanation for the impact of explanations. Because the explanation contents were not automatically provided, they need to be invoked by participants. Accordingly, the impact of explanations should not be simply due to the presence of the explanation icons (i.e., availability). Participants obtained the explanation contents, which generated the impact of explanations. -47-Chapter 3: Impact of Explanations dialogues. The total number of explanations varied because participants got different numbers of recommendations after the agent-user dialogues. 3.5.2 A N C O V A R E S U L T S O f the 120 participants, 62 were female and 58 were male, and 109 were undergraduate and 11 were graduate students. Most participants were in their early 20s. 97% of the participants had more than two years of Internet use experience. No significant differences were found among participants, who were randomly assigned to different treatment groups, with respect to participants' gender, age, Internet experience, online shopping experience, comfort levels with using computers and shopping online. A N C O V A was conducted to examine the effects of the three types of explanations on the three trusting beliefs. We first examined whether the data satisfy the statistical assumptions of A N C O V A (Hair et al., 1998). Data normality was assessed visually and by the skewness and kurtosis statistics. The skewness and kurtosis statistics indicate that all three dependent variables have normal distributions. Levene statistics indicate no significant differences in different groups. Hence, the assumption of equal error variance of different groups is also satisfied. Among the three control variables, only trust propensity affected the dependent variable scores significantly at the .05 level. Given that we only invited volunteers who did not have digital cameras to participate in the experiment, the product expertise levels of participants are quite low (average score is 3.0 on a 9-point scale, see table 3-2). Therefore, product expertise levels may not have adequate variances to explain the trust levels. Similarly, the cognitive effort of using the experiment agent is quite low, leading to its inability to explain the trust levels. Hence, in the later analysis, only trust propensity was included as covariates. Group means of trust beliefs are reported in Table 3-4 and A N C O V A results are shown in Tables 3-5 ~ 3-7. - 4 8 -Chapter 3: Impact o f Explanations Table 3-4 Means and Standard Deviations of the Trusting Beliefs for Various Experimental Conditions Experimental Condition N Competence Benevolence Integrity Mean Std. Deviation Mean Std. Deviation Mean Std. 
Group means of the trusting beliefs are reported in table 3-4, and the ANCOVA results are shown in tables 3-5 to 3-7.

Table 3-4 Means and Standard Deviations of the Trusting Beliefs for the Experimental Conditions

Condition                  | N  | Competence Mean (s.d.) | Benevolence Mean (s.d.) | Integrity Mean (s.d.)
How explanations: Without  | 60 | 5.23 (1.45)            | 5.94 (1.21)             | 5.89 (1.23)
How explanations: With     | 60 | 5.86 (1.24)            | 6.41 (1.33)             | 6.19 (1.17)
Why explanations: Without  | 60 | 5.46 (1.46)            | 5.88 (1.38)             | 5.91 (1.31)
Why explanations: With     | 60 | 5.63 (1.33)            | 6.47 (1.13)             | 6.17 (1.08)
Guidance: Without          | 60 | 5.48 (1.28)            | 6.09 (1.21)             | 5.78 (1.21)
Guidance: With             | 60 | 5.61 (1.51)            | 6.26 (1.37)             | 6.29 (1.15)

Table 3-5 Results of ANCOVA (Dependent Variable: Competence Belief)

Source                       | DF  | Sum of Squares | Mean Square | F    | p-value
How explanations             | 1   | 12.64          | 5.15        | 6.85 | .010
Why explanations             | 1   | 0.02           | 0.02        | .01  | .908
Guidance                     | 1   | 0.47           | 0.47        | .26  | .611
Trust Propensity (covariate) | 1   | 15.24          | 15.24       | 8.53 | .004
How × Why                    | 1   | 3.37           | 3.37        | 1.89 | .172
How × Guidance               | 1   | 0.17           | 0.17        | .09  | .759
Why × Guidance               | 1   | 0.19           | 0.19        | .10  | .747
How × Why × Guidance         | 1   | 0.32           | 0.32        | .18  | .675
Error                        | 111 | 198.32         | 1.79        |      |

Table 3-6 Results of ANCOVA (Dependent Variable: Benevolence Belief)

Source                       | DF  | Sum of Squares | Mean Square | F    | p-value
How explanations             | 1   | 6.83           | 6.83        | 4.38 | .039
Why explanations             | 1   | 7.69           | 7.69        | 4.93 | .028
Guidance                     | 1   | .82            | .82         | .53  | .469
Trust Propensity (covariate) | 1   | 4.71           | 4.71        | 3.03 | .085
How × Why                    | 1   | 1.17           | 1.17        | .75  | .388
How × Guidance               | 1   | .34            | .34         | .22  | .641
Why × Guidance               | 1   | .06            | .06         | .04  | .847
How × Why × Guidance         | 1   | .04            | .04         | .02  | .879
Error                        | 111 | 172.96         | 1.55        |      |

Table 3-7 Results of ANCOVA (Dependent Variable: Integrity Belief)

Source                       | DF  | Sum of Squares | Mean Square | F    | p-value
How explanations             | 1   | 2.65           | 2.65        | 1.90 | .171
Why explanations             | 1   | 2.83           | 2.83        | 2.03 | .158
Guidance                     | 1   | 7.96           | 7.96        | 5.70 | .019
Trust Propensity (covariate) | 1   | 1.64           | 1.64        | 1.18 | .280
How × Why                    | 1   | 2.47           | 2.47        | 1.77 | .186
How × Guidance               | 1   | .061           | .061        | .04  | .835
Why × Guidance               | 1   | .34            | .34         | .25  | .620
How × Why × Guidance         | 1   | .76            | .76         | .55  | .461
Error                        | 111 | 155.02         | 1.39        |      |

All hypotheses except H3-1b are supported. The use of how explanations has significant, positive effects on users' competence belief (F(1,111) = 6.85, p < .05, f = 0.25¹²) and benevolence belief (F(1,111) = 4.38, p < .05, f = 0.19) in the recommendation agent, but not on the integrity belief (F(1,111) = 1.90, p > .1), at the .05 level. The use of why explanations significantly and positively affects users' belief in the agent's benevolence (F(1,111) = 4.93, p < .05, f = 0.21) but not the competence (F(1,111) = .01, p > .1) or integrity (F(1,111) = 2.03, p > .1) beliefs. The use of guidance significantly and positively affects users' belief in the agent's integrity (F(1,111) = 5.70, p < .05, f = 0.23) but has no significant effect on the competence (F(1,111) = 0.26, p > .1) or benevolence (F(1,111) = 0.53, p > .1) beliefs. As expected, there are no statistically significant two- or three-way interactions among how explanations, why explanations, and guidance. Regarding the control variable, trust propensity positively and significantly influences only the competence belief at the .05 level.

¹² Cohen's (1988) effect size (f) for the ANCOVA was used.

We also re-tested the hypotheses by dropping participants with very low usage of explanations from the sample. The ANCOVA was re-performed three times, excluding participants who used only 1, 2 or fewer, and 3 or fewer explanations, respectively, of any one explanation type (how, why, guidance). In all of these conditions, none of the conclusions changed: no effects gained or lost statistical significance at the 5% level, and the changes in the F and p values were slight. Therefore, the results regarding the effects of explanations are quite robust.
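For readers who want to reproduce this kind of analysis, the sketch below shows how a 2 × 2 × 2 factorial ANCOVA with a covariate, Cohen's f, and a post-hoc power check can be run in Python. The column names and the simulated data are assumptions for illustration only; note that the reported effect sizes are consistent with f = sqrt(SS_effect / SS_error), e.g., sqrt(12.64 / 198.32) ≈ 0.25 for how explanations on the competence belief.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols
    from statsmodels.stats.power import FTestAnovaPower

    rng = np.random.default_rng(0)
    n = 120
    # Assumed stand-in data: factor codings (0 = without, 1 = with) plus covariate
    df = pd.DataFrame({"how": rng.integers(0, 2, n), "why": rng.integers(0, 2, n),
                       "guidance": rng.integers(0, 2, n),
                       "propensity": rng.normal(5.4, 0.9, n)})
    df["competence"] = 5 + 0.6 * df["how"] + 0.3 * df["propensity"] + rng.normal(0, 1.3, n)

    # Sum-to-zero contrasts so that Type III sums of squares are meaningful
    model = ols("competence ~ C(how, Sum) * C(why, Sum) * C(guidance, Sum) + propensity",
                data=df).fit()
    anova = sm.stats.anova_lm(model, typ=3)

    # Cohen's f for one effect: sqrt(SS_effect / SS_error)
    f = np.sqrt(anova.loc["C(how, Sum)", "sum_sq"] / anova.loc["Residual", "sum_sq"])

    # Post-hoc power to detect a medium effect (f = .25) with 120 observations
    print(FTestAnovaPower().power(effect_size=0.25, nobs=120, alpha=0.05, k_groups=2))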
Since H3-1b, H3-1c, H3-2b, H3-2c, H3-3b, and H3-3c hypothesize that the predictor has no main effect on the dependent variables, and no significant effects were actually detected, a power analysis was conducted to examine the potential for Type II error (Cohen, 1988). The likelihood of detecting medium effects (f = .25) was about 80 percent at an alpha level of .05, while the likelihood of detecting small effects (f = .10) was under 50 percent.

3.6 DISCUSSION AND IMPLICATIONS

3.6.1 DISCUSSION OF FINDINGS

This experimental study provides strong evidence that explanation provision enhances consumers' initial trust in online recommendation agents. More importantly, it reveals that different explanation types influence different trusting beliefs: consumers' beliefs in the competence of recommendation agents can be increased by the use of how explanations, while their beliefs in the benevolence and integrity of the agents can be increased by the use of why explanations and guidance, respectively.

The results also show that how explanations increase users' benevolence belief in the recommendation agents. Two possible explanations could account for this unpredicted effect. The first is that the questions in the agent-user dialogue are needs-based. Instead of asking users to specify product attribute levels directly (e.g., the zoom level of a digital camera), the agent inquires about users' preferences for using the product and their needs (e.g., "How far are the subjects that will be focused on most often by the digital camera?"). These questions may convey cues that the agent considers users' needs, and such goodwill in the agent's motivation may enhance users' beliefs in the agent's benevolence (Cook and Wall, 1980). Moreover, when users see the underlying reasoning in the how explanations, they may become more confident that the agent cares about their needs, as reflected in the nature of the needs-based questions, and as a result their beliefs about the agent's benevolence will be further enhanced. A second possible explanation is that sharing an agent's underlying reasoning processes with its users may lead users to recognize the agent's benevolence (McKnight et al., 2002b). When an agent is more forthcoming in revealing its reasoning, its behavior is more predictable in the users' eyes. Consequently, users may be relieved of concerns arising from the agency relationship, and their beliefs in the agent's benevolence may increase.

3.6.2 LIMITATIONS AND FUTURE RESEARCH

Before discussing the implications of this study, we first consider its limitations. Firstly, applying this study's findings to other types of recommendation agents requires caution. The present study focuses on one type of recommendation agent, namely a content-filtering-based product-brokering agent. Explanations embedded in other recommendation agents (e.g., collaborative-filtering-based agents) might lead to different outcomes, so additional research is needed.

Secondly, the experimental agent is based on needs-based questions in the agent-user dialogue. However, the agent-user dialogue can be designed in other ways; for example, the agent may ask users directly about their preferences for product attributes.
In this case, why explanations may be even more important because, in addition to explaining the agent's motives, they bridge the gap between the attribute levels to be chosen and what users know, i.e., their needs and intended uses. Hence, another potential research topic is to explore the types of explanations needed, and their impact, when the agent-user dialogues are designed in different ways.

Thirdly, explanations could be implemented differently. Although the validation results demonstrate that the explanations implemented in this study possess satisfactory definitional accuracy, the effectiveness of explanations in enhancing trust might differ when their contents change. Additional research is required to test whether the results still hold when the contents of explanations are represented differently from the present study. Enhancing the design of explanation contents has the potential to further increase the effectiveness of explanations.

Fourthly, a variety of other explanations provided in traditional KBSs may have the potential to increase trust. The selection of explanations explored in this study was based primarily on the two characteristics of online recommendation agents that introduce obstacles to trust building. Other important explanations, e.g., terminological explanations and justifications for reasoning processes (Gregor and Benbasat, 1999), were not addressed in this study due to cognitive effort considerations: users may be overloaded if too many explanations are provided (Gregor and Benbasat, 1999). This study controlled for subjects' preferences for effort saving versus decision quality, which had no effects on trusting beliefs. It is possible that the cognitive cost of using the explanations provided in this study is low, so including additional explanations may be feasible. Therefore, other types of explanations that may further enhance consumers' trust in agents deserve attention in future research.

Lastly, since the experimental participants were university students, readers should exercise caution in generalizing the results of this study to other demographic groups. In addition, only one type of product was used. Further research with different participant samples and different types of products is suggested.

3.6.3 IMPLICATIONS FOR RESEARCH AND PRACTICE

Notwithstanding these limitations, this study makes significant contributions to research and practice. The main contribution to IS research is an understanding of customer trust building in online recommendation agents via the use of explanations. The importance of explanations for intelligent systems is well recognized in the IS literature (Dhaliwal and Benbasat, 1996; Gregor and Benbasat, 1999), but empirical testing with validated trust measures has thus far been inadequate. Furthermore, previous studies have produced only generalized suggestions that explanations are influential for user acceptance of KBSs and for improving user trust in the advice provided. This study, in contrast, integrates two streams of explanation use research, KBS explanations and DSS guidance studies, and reveals their complementary impact on trust building with explanation facilities: different types of explanations increase consumer trust via different trust components.
The competence belief is increased by the use of how explanations; the benevolence belief is increased by the use of how and why explanations; and the integrity belief is increased by the use of guidance. The effects of explanations thus depend largely on the contents and types of explanations that are provided.

The primary contribution for practice is an effective approach to storing knowledge in online recommendation agents to improve consumers' perceptions of agent trustworthiness. In Internet environments, consumers may want to learn from every transaction in order to become more knowledgeable and self-reliant (Saint-Onge, 1998). Explanations in recommendation agents can achieve two goals. First, why and how explanations facilitate the flow of knowledge from recommendation agents to their users. This knowledge improves users' understanding of and trust in the agents; furthermore, it improves users' knowledge of the particular domains of the agents' expertise, e.g., digital cameras. Second, the flow of knowledge in the form of guidance from agents to consumers improves the way consumers convey their needs and preferences to the agent; as a consequence, the recommendations and advice that consumers receive from the agents fit their particular needs and goals much better. These two goals demonstrate that codifying and storing knowledge in recommendation agents, and sharing it with customers, are very useful when providing shopping advice to consumers. Therefore, with adequate and appropriate knowledge embedded within them, recommendation agents can be a cost-effective way for companies to provide electronic customer service that facilitates online consumer decision-making.

APPENDIX FOR CHAPTER 3

Trust - Competence
1. This virtual advisor¹³ is like a real expert in assessing digital cameras.
2. This virtual advisor has the expertise to understand my needs and preferences about digital cameras.
3. This virtual advisor has the ability to understand my needs and preferences about digital cameras.
4. This virtual advisor has good knowledge about digital cameras.
5. This virtual advisor considers my needs and all important attributes of digital cameras.

Trust - Benevolence
1. This virtual advisor puts my interest first.
2. This virtual advisor keeps my interests in its mind.
3. This virtual advisor wants to understand my needs and preferences.

Trust - Integrity
1. This virtual advisor provides unbiased product recommendations.
2. This virtual advisor is honest.
3. I consider this virtual advisor to be of integrity.

Perceived Usefulness
1. Using this virtual advisor enabled me to find suitable digital cameras more quickly.
2. Using this virtual advisor improved the quality of the analysis/search I performed to find suitable digital cameras.
3. Using this virtual advisor made the search task for digital cameras easier to do.
4. Using this virtual advisor enhanced my effectiveness in finding suitable digital cameras.
5. Using this virtual advisor gave me more control over the digital camera search task.
6. Using this virtual advisor allowed me to accomplish more analysis than would otherwise have been possible.
7. Using this virtual advisor greatly enhanced the quality of my judgments.
8. Using this virtual advisor conveniently supported all the various types of analysis needed to find suitable digital cameras.
9. Overall, I found this virtual advisor useful in finding suitable digital cameras.

Perceived Ease of Use
1. My interaction with the virtual advisor is clear and understandable.
2. It is easy to get the virtual advisor to do what I want it to do.
3. Learning to use the virtual advisor is easy for me.
4. It was easy for me to find a suitable digital camera using the virtual advisor.
5. Overall, I found the virtual advisor easy to use.

¹³ We used the term "virtual advisor" to refer to the recommendation agent because participants in our pilot test suggested that "virtual advisor" is easier to understand than "recommendation agent."

Intentions to Adopt Recommendation Agents
1. I am willing to use this virtual advisor as an aid to help with my decision about which product to buy.
2. I am willing to let this virtual advisor assist me in deciding which product to buy.
3. I am willing to use this virtual advisor as a tool that suggests to me a number of products from which I can choose.

Trust Propensity
1. It is easy for me to trust a person/thing¹⁴.
2. My tendency to trust a person/thing is high.
3. I tend to trust a person/thing, even though I have little knowledge of it.
4. Trusting someone or something is difficult for me.

¹⁴ Clumping "person" and "thing" together might be problematic because there might be some differences between people's reactions to trust in things versus trust in persons. This is therefore also a limitation of this study.

Preference for Effort Saving vs. Decision Quality
1. I am willing to examine the product attributes very carefully in order to make sure that the product fits my preferences perfectly.
2. I prefer to shop hard in order to get exactly what I want.
3. My time is valuable. As soon as I find a product that is adequate for my needs, I will buy it.¹⁵

¹⁵ This item was dropped because the reliability was not satisfactory when it was included.

Product Expertise
1. I am an expert in digital cameras.

Essay/Self-report Questions for Product Expertise:
1. When, if ever, is "resolution" important for digital cameras?
2. When, if ever, is "zoom" important for digital cameras?

CHAPTER 4: TRUST-TAM FOR ONLINE RECOMMENDATION AGENTS

4.1 INTRODUCTION

This chapter empirically justifies the importance of trust in online recommendation agents by testing an integrated Trust-TAM (Technology Acceptance Model). As reviewed in chapter 2, the nature of trust in technological artifacts such as online recommendation agents is still an under-studied area: are the dimensions of trust in the agents similar to those of interpersonal trust? If consumers form trust in the agents, how important is the social and relational aspect of trust in their decision to adopt the agents? In chapter 3, we reported results about the impact of explanations on consumers' trust in online recommendation agents and confirmed that consumers form a certain level of trust in recommendation agents. This chapter reports the second part of the first experiment, concerning the role of trust in determining consumers' adoption of online recommendation agents.

This chapter considers the nature of the technology being studied as well as the online context, and it empirically examines the nomological validity of trust in agents by testing the integrated Trust-TAM model for online recommendation agents. In so doing, this research reveals the relative importance of initial trust vis-à-vis the other use antecedents in TAM, i.e.,
perceived usefulness (PU) and perceived ease of use (PEOU), in influencing consumers' adoption of online recommendation agents. PU is a measure of an individual's subjective assessment of the utility offered by the recommendation agent, while PEOU is an indicator of the cognitive effort needed to learn and to utilize the recommendation agent to make a choice (Davis, 1989; Gefen et al., 2003b). The research results indicate that consumers' initial trust directly influences their decisions to adopt online recommendation agents and also influences their perceptions of the usefulness of the agents.

The next section of this chapter develops the hypotheses to be tested. The research method was introduced in the previous chapter. Section 4.3 reports the results, and the chapter concludes with a discussion of the results and the implications of the findings.

4.2 HYPOTHESIS DEVELOPMENT

The theory of reasoned action (TRA) (Ajzen and Fishbein, 1980) is generally recognized as the best starting point for studying the determinants of individuals' behavior, including their adoption of technology (Sheppard et al., 1988). TAM, which is based on TRA, identifies two key use antecedents (i.e., PU and PEOU) for users' adoption of a technology. The predictive power of PU and PEOU for individuals' technology acceptance has been empirically confirmed by numerous studies (e.g., Lee et al., 2003); a comprehensive discussion is found in Venkatesh et al. (2003).

Previous TAM studies have examined a variety of information technologies (IT) (Venkatesh et al., 2003). In particular, Gentry and Calantone (2002) tested three models explaining behavioral intentions to adopt shopbots (recommendation agents): TRA (Ajzen and Fishbein, 1980; Fishbein and Ajzen, 1975), the Theory of Planned Behavior (TPB) (Ajzen, 1985; 1989; 1991), and TAM (Davis, 1989). They found that TAM explains more variance in shopbot adoption than TRA and TPB.

Consumers use recommendation agents to get shopping advice regarding what product to buy as well as where to buy it. The recommendation agents investigated in this study provide shopping advice on what product to buy. According to TAM, agents that are more useful and easier to use will be employed more readily. Additionally, PU is influenced by the amount of effort users must expend to use the technology (Davis, 1989): an agent that requires less effort and is easier to use will be perceived to be more useful. Therefore,

H4-1: PU of an online recommendation agent will positively affect consumers' intentions to adopt the agent.

H4-2: PEOU of an online recommendation agent will positively affect consumers' intentions to adopt the agent.

H4-3: PEOU of an online recommendation agent will positively affect PU of the agent.

Although TAM is considered the dominant model in information technology (IT) acceptance research (Gefen et al., 2003b; Koufaris, 2002), as pointed out by Davis (1989), more research is needed to address how other variables may influence usefulness, ease of use, and acceptance. In addition to the constructs that are part of TRA and TPB, other factors that contribute to the explanatory power of TAM could be considered in light of user characteristics, task contexts, and the nature of particular technologies (Moon and Kim, 2001).
We identify these factors in the appendix of this chapter, which provides a non-exhaustive summary of studies that have focused on TAM and its extensions.¹⁶

¹⁶ Due to the large number of articles that have been published using TAM, an exhaustive review of TAM studies is beyond the scope of this study.

To account for user characteristics, researchers have examined TAM with the inclusion of constructs such as gender (Gefen et al., 1997; Venkatesh and Morris, 2000), culture (Gefen et al., 1997), training and prior experience with the technology being studied (Davis, 1989; Davis et al., 1989; Gefen et al., 2003a; Igbaria et al., 1995; Taylor and Todd, 1995a; Venkatesh and Morris, 2000; Venkatesh et al., 2003), and Web skills (Koufaris, 2002). With regard to contexts, issues that have been studied include: 1) voluntary versus mandatory use (e.g., Venkatesh et al., 2003), and 2) offline versus online use for work or shopping (e.g., Gefen et al., 2003b).

Recently, a growing number of studies have examined TAM in the context of online shopping. A key question here is whether online consumers think and behave differently from their offline counterparts, and researchers have identified several characteristics of online environments that may lead them to do so.

First, the impersonal and virtual nature of the Internet puts physical distance between buyers and sellers, and between buyers and products, in online shopping environments (Ba, 2001; Yoon, 2002). The distance between buyers and products is emphasized by the absence of direct methods for online buyers to evaluate products, whereas in physical stores they can understand products better by touching or feeling them. Furthermore, online shopping environments lack human network attributes. Unlike physical shopping environments, where consumers can communicate with salespersons face-to-face, on the Internet fewer audio, visual, and other sensory channels are available for consumers to interact with salespersons and vendors. Consumers are consequently less able to judge product quality and vendor credibility prior to completing purchases, and hence face high uncertainty in their online shopping (e.g., Ba, 2001).

Second, online shopping environments have produced a new spectrum of unregulated activities: e-vendor behavior is difficult to monitor, and legislation governing online shopping, both in substance and in enforcement, is still far from mature (Hamelink, 2001). E-vendors can easily take advantage of online consumers (Gefen et al., 2003b), generating high consumer risk.

Third, online consumers can easily switch among different online vendors, and thus can access more product and vendor choices. This makes consumers more powerful; consequently, maintaining high consumer loyalty is difficult for e-vendors in online shopping environments (Koufaris, 2002). Simultaneously, it compels buyers to consider more options, making their decision-making processes more complicated (Maes et al., 1999).

Researchers have considered the nature of online environments and consequently extended TAM with constructs such as trust (Gefen et al., 2003a; 2003b), playfulness (Moon and Kim, 2001), and flow (Koufaris, 2002). Specifically, trust is well recognized as a key success factor for e-commerce (e.g., Gefen et al., 2003b; McKnight and Chervany, 2001; Ratnasingham, 1998; Urban et al., 2000).
Research has shown that trust can effectively address the main issues related to the three characteristics discussed above by reducing environmental uncertainty, complexity, and risk, and by enhancing consumer loyalty (Jarvenpaa and Tractinsky, 1999; Jarvenpaa et al., 2000). If online shoppers do not trust an e-vendor, they will generally stay away from its online store (Jarvenpaa and Tractinsky, 1999; Reichheld and Schefter, 2000). Arguably, the issues related to the online context also apply to online recommendation agents. Therefore, as asserted by Gefen et al. (2003a; 2003b), in the present study trust is expected to operate as an antecedent of consumers' intentions to adopt online recommendation agents.

TAM has satisfactory explanatory power across various technologies. However, the impact of the nature of the particular technology utilized is not yet well understood; thus there is a need for extensions of TAM. Online recommendation agents are perceived to be more than just technologies or tools: they are virtual shopping agents and advisors. Recommendation agents elicit consumer needs and preferences and act on behalf of a principal (the consumer) by reflecting her specific needs and preferences. According to Reeves and Nass's Theory of Social Responses to Computers (Reeves and Nass, 1996), consumers treat computerized agents as social actors and form social relationships with them that involve trust. Moreover, web-based recommendation agents are not owned by individual users, and there is an agency relationship between an agent and its users (Bergen and Dutta, 1992). Therefore, the trust issues associated with recommendation agents are important and complicated, inasmuch as users may have concerns about the competence of an agent to satisfy their needs, as well as concerns about whether an agent is working on their behalf rather than on behalf of a web merchant or manufacturer. Trust can help consumers overcome these concerns and encourage them to adopt the agents. The benevolence of agents can be conveyed by informing users that the agents care about user needs and preferences, and their integrity can be promoted by providing unbiased recommendations and guidance for users.

In sum, although TAM can explain technology acceptance across different technologies, user populations, and contexts, the disparities between online and offline contexts and the special nature of recommendation agent technologies indicate that, in addition to PU and PEOU, trust may also contribute to explaining user acceptance of Web-based technologies for online shopping.

The integrated Trust-TAM provides a framework to test the nomological validity of trust in technological artifacts. If the construct of trust in online recommendation agents, defined to include three trusting beliefs (competence, benevolence, and integrity), is valid, it should have predictive power for consumers' adoption of the agents. Trust has been empirically validated as an important predictor of intended website use by online shoppers (Gefen et al., 2003a; 2003b; Pavlou, 2003). These studies considered the characteristics of online shopping environments discussed earlier and employed trust as a proxy for dealing with those characteristics. Consumers' trust in an e-vendor reduces their concerns about the uncertainty, complexity, and risk of online shopping, thus increasing their intentions to use the e-vendor's website (Gefen et al., 2003a; 2003b).
Gefen et al. (2003b) conducted a field study of experienced online shoppers regarding their online book- or CD-shopping experiences. They found that consumer trust in e-vendors is as important to e-commerce adoption intentions as the other TAM use antecedents, PU and PEOU. In another study, Gefen et al. (2003a) conducted a free-simulation experiment to compare the relative importance of consumer trust in an e-vendor vis-à-vis the TAM use antecedents for new and repeat customers. They found that repeat consumers' purchase intentions were influenced both by their trust in the e-vendor and by their perceptions of website usefulness, while potential consumers were influenced only by their trust in the e-vendor.

Trust is particularly important when consumers interact with recommendation agents for the first time and have a limited understanding of the agents' behavior. During this initial time frame, consumers' perceptions of uncertainty and risk in using the agents are particularly salient (McKnight et al., 2002b). If consumers do not have sufficient initial trust in a website or an online recommendation agent, they can easily switch to others (Koufaris and Hampton-Sosa, 2004). McKnight, Cummings and Chervany (1998) have found that high initial trust is not only necessary, but also pragmatic and possible. In the context of an organization, high initial trust generally exists among new employees (McKnight et al., 1998). In the online recommendation agent context, Komiak (2003) found that consumers form a certain level of trust in recommendation agents during their initial interactions with them, and that this initial trust significantly influences their intentions to adopt the agents, although that study examined only one antecedent (i.e., trust). Similarly, we hypothesize that:

H4-4: Initial trust in an online recommendation agent will positively affect consumers' intentions to adopt the agent.

It is worthwhile to point out that in prior studies integrating trust into TAM, the trust objects are e-vendors rather than technologies. To the best of our knowledge, this is the first study to examine the validity of the integrated Trust-TAM in explaining online recommendation agent adoption with computerized agents as the object of trust. Also, prior studies examined consumers' intentions to purchase through a website, while this study focuses on consumers' intentions to adopt recommendation agents to get shopping advice. Table 4-1 summarizes the key differences between the Trust-TAM models examined in this study and in prior studies.

Table 4-1 Differences between this Study and Previous Trust-TAM Studies

                      | This Study                                        | Prior Trust-TAM Studies (Gefen et al. 2003a; 2003b; Pavlou 2003)
Trust targets         | Online recommendation agents                      | E-vendors
PU and PEOU targets   | Online recommendation agents                      | Websites
Behavioral intentions | Intentions to adopt agents to get shopping advice | Intentions to use a website and purchase on the website

Trust should also increase the perceived usefulness (PU) of online recommendation agents. Prior research has demonstrated that PU is determined by at least two factors. The first is the PEOU of the agent, as predicted in H4-3; the other is the benefits that users expect to achieve from using the agents (Davis, 1989; Gefen et al., 2003b).
Users may perceive agents to be untrustworthy for a number of reasons: 1) the agents may not have appropriate expertise in the task domain, 2) they may function in the interests of web merchants or manufacturers rather than those of consumers, or 3) they may lack integrity. In such cases, consumers may believe that benefits will not easily be derived from these agents and be less likely to adopt them, perhaps even seeing their adoption as detrimental. The agency relationship between agents and their consumers makes such situations likely to occur, and consumer concerns regarding these issues are not uncommon given the potential for harmful opportunistic behavior and the higher risks inherent in online environments (Gefen et al., 2003b). As a result, consumers' expectations of gaining benefits from using agents, and hence their perceptions of usefulness, will be influenced by their trust in the agents.

H4-5: Initial trust in an online recommendation agent will positively affect PU of the agent.

The integrated Trust-TAM model investigated by Gefen et al. (2003b) also suggests that PEOU increases trust. Gefen et al. have argued that this impact is generated through consumer perceptions that a web merchant is investing in relationships with consumers, and that by doing so the merchant "signals a commitment to the relationship" (p. 65). This argument also applies to online recommendation agents. Ease of use demonstrates that agent providers have expended effort in designing the agents and that they care about users. Conversely, users may perceive difficult-to-use agents as less capable and less considerate, and thus may lower their trust in the agents.

H4-6: PEOU of an online recommendation agent will positively affect trust in the agent.

4.3 RESULTS

The method was introduced in chapter 3. Before the results are reported, the measures used in this chapter are first described. We used existing validated scales for all constructs. In addition to the trust measures explained in chapter 3, measures of PU, PEOU, and intentions to adopt were adapted from Davis's (1989) scale. All measurement items are listed in the appendix of chapter 3, and the construct means and standard deviations are reported in table 4-2.

Table 4-2 Construct Attributes

Variable               | Mean | s.d. | Composite Reliability | Cronbach Alpha | 1     | 2     | 3     | 4     | 5     | 6
1. Competence          | 5.55 | 1.39 | .89                   | .85            | .79ᵃ  |       |       |       |       |
2. Benevolence         | 6.18 | 1.29 | .87                   | .77            | .65** | .84   |       |       |       |
3. Integrity           | 6.04 | 1.21 | .86                   | .75            | .34** | .51** | .82   |       |       |
4. PU                  | 5.68 | 1.06 | .93                   | .90            | .70** | .48** | .36** | .90   |       |
5. PEOU                | 6.88 | 1.02 | .83                   | .73            | .59** | .48** | .46** | .42** | .70   |
6. Intentions to Adopt | 7.03 | 1.29 | .93                   | .89            | .48** | .46** | .21*  | .54** | .42** | .76

ᵃ Diagonal elements are square roots of the average variance extracted (AVE); off-diagonal elements are inter-construct correlations. * indicates that the correlation is significant at the .05 level (2-tailed); ** indicates that the correlation is significant at the .01 level (2-tailed).

We used Partial Least Squares (PLS), as implemented in PLS Graph version 3.0, for data analysis. Based on a component-based estimation approach, PLS was used to assess the psychometric properties of all measures and subsequently to examine the structural relationships proposed earlier, as illustrated in figure 4-1.

Figure 4-1 Research Model

4.3.1 DATA ANALYSIS FOR THE MEASUREMENT MODEL
Because the three trusting beliefs correlate highly with each other, McKnight et al. (2002a) have suggested that trust be modeled as a reflective second-order factor.¹⁷ This second-order construct of trust in online recommendation agents is composed of three sub-constructs (i.e., competence, benevolence, and integrity), which are also measured as reflective. Following previous studies that involve second-order factors (e.g., Chwelos et al., 2001), in PLS we used the factor scores of each first-order trusting belief as indicators of the second-order construct of trust in agents.

¹⁷ As argued by Chwelos et al. (2001), "the distinction between formative and reflective constructs is not always clear-cut" (p. 312). Given that, conceptually, the three trusting beliefs need not covary, we also modeled the second-order construct of trust as formative, and the results showed the same patterns: no paths gained or lost statistical significance, no significant paths changed in sign, and the changes in the path values were very slight. Therefore, the results of this study should not be an artifact of our modeling decisions.

To assess the reliability (individual item reliability and internal consistency) and validity of the constructs, we examined the item loadings, the composite reliability of the constructs, and the average variance extracted (AVE). All of the reflective constructs and sub-constructs display strongly positive loadings that are significant at the .001 level, indicating high individual item reliability. Furthermore, all composite reliabilities and Cronbach's alphas in table 4-2 are greater than .70, which is considered a benchmark for acceptable reliability (Barclay et al., 1995). The AVE measures the variance captured by the indicators relative to measurement error (Fornell and Larcker, 1981), and it should be greater than .50 to justify using a construct (Barclay et al., 1995). Table 4-2 indicates adequate AVEs for all constructs.

Barclay et al. (1995) suggest two criteria for examining discriminant validity. The first requires that the square root of each construct's AVE be greater than the correlations between that construct and the others, indicating that the construct shares more variance with its own measures than with other constructs. This criterion is satisfied by the current data, as demonstrated in table 4-2. The second criterion requires that no item load higher on another construct than it does on the construct it is designed to measure. The factor- and cross-loadings reported in table 4-3 demonstrate adequate discriminant validity, except for one item, PEOU4, which loads equally highly on PEOU and PU and hence was dropped from later analysis.
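The two discriminant validity criteria above are mechanical enough to express in a few lines of code. The sketch below illustrates the AVE, composite reliability, and Fornell-Larcker computations under assumed inputs; using the competence and benevolence loadings from table 4-3, it reproduces the corresponding diagonal values (.79 and .84) in table 4-2.

    import numpy as np

    def ave(loadings):
        """Average variance extracted from standardized indicator loadings."""
        lam = np.asarray(loadings)
        return float((lam ** 2).mean())

    def composite_reliability(loadings):
        """CR = (sum lambda)^2 / ((sum lambda)^2 + sum(1 - lambda^2))."""
        lam = np.asarray(loadings)
        num = lam.sum() ** 2
        return float(num / (num + (1 - lam ** 2).sum()))

    def fornell_larcker_ok(aves, corr):
        """Each construct's sqrt(AVE) must exceed its correlations with all others."""
        root = np.sqrt(np.asarray(aves))
        off = np.abs(np.asarray(corr, dtype=float))
        np.fill_diagonal(off, 0.0)
        return bool(np.all(root[:, None] > off))

    aves = [ave([.83, .89, .76, .76, .71]),  # competence: sqrt(AVE) is about .79
            ave([.86, .89, .75])]            # benevolence: sqrt(AVE) is about .84
    corr = [[1.0, .65], [.65, 1.0]]          # r(competence, benevolence) = .65
    print([round(a ** 0.5, 2) for a in aves], fornell_larcker_ok(aves, corr))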
Table 4-3 Factor Loadings and Cross-Loadings

Item  | CMPT | BNVL | INTG | PU  | PEOU | INTN
CMPT1 | .83  | .48  | .31  | .59 | .52  | .38
CMPT2 | .89  | .54  | .22  | .55 | .43  | .32
CMPT3 | .76  | .39  | .25  | .58 | .45  | .40
CMPT4 | .76  | .61  | .22  | .51 | .44  | .38
CMPT5 | .71  | .57  | .35  | .58 | .57  | .44
BNVL1 | .57  | .86  | .46  | .47 | .42  | .50
BNVL2 | .57  | .89  | .44  | .41 | .46  | .40
BNVL3 | .50  | .75  | .37  | .34 | .36  | .25
INTG1 | .31  | .44  | .78  | .29 | .34  | .26
INTG2 | .26  | .40  | .87  | .27 | .33  | .12
INTG3 | .26  | .41  | .81  | .35 | .46  | .12
PU1   | .57  | .35  | .24  | .78 | .50  | .57
PU2   | .53  | .43  | .35  | .78 | .60  | .56
PU3   | .58  | .44  | .32  | .84 | .62  | .47
PU4   | .60  | .37  | .25  | .86 | .64  | .48
PU5   | .47  | .35  | .23  | .66 | .62  | .28
PU6   | .42  | .23  | .23  | .68 | .47  | .25
PU7   | .50  | .33  | .36  | .65 | .45  | .25
PU8   | .50  | .38  | .23  | .70 | .52  | .38
PU9   | .65  | .46  | .32  | .88 | .63  | .60
PEOU1 | .51  | .36  | .36  | .55 | .76  | .35
PEOU2 | .27  | .24  | .23  | .32 | .64  | .35
PEOU3 | .34  | .33  | .33  | .44 | .66  | .11
PEOU4 | .55  | .44  | .32  | .73 | .73  | .46
PEOU5 | .33  | .31  | .38  | .41 | .74  | .27
INTN1 | .43  | .51  | .19  | .45 | .37  | .91
INTN2 | .47  | .42  | .19  | .58 | .46  | .93
INTN3 | .40  | .42  | .19  | .52 | .41  | .89

(CMPT = competence; BNVL = benevolence; INTG = integrity; PU = perceived usefulness; PEOU = perceived ease of use; INTN = intentions to adopt.)

4.3.2 DATA ANALYSIS FOR THE STRUCTURAL MODEL

The results of the structural model from PLS, including path coefficients, explained variances, and significance levels, are illustrated in figure 4-2. The total effects of the three antecedents, as well as their direct and indirect effects, are reported in table 4-4.

Figure 4-2 PLS Results. The loadings of the three trusting beliefs on the second-order trust construct are: competence .87, benevolence .88, integrity .68; the R² for intention to adopt agents is .36 (** indicates a significance level of .01; * indicates a significance level of .05; n.s. indicates a non-significant path).

Table 4-4 Structural Model Results

Hypothesis                        | Standardized Path Coefficient (direct effect) | t-value for Path | Indirect Effect | Total Effect
H4-1: PU → Adoption Intentions    | .45                                           | 3.97             | -               | .45
H4-2: PEOU → Adoption Intentions  | n.s.                                          | -                | .25             | .25
H4-3: PEOU → PU                   | .30                                           | 3.64             | .28             | .58
H4-4: Trust → Adoption Intentions | .20                                           | 2.13             | .23             | .43
H4-5: Trust → PU                  | .50                                           | 8.04             | -               | .50
H4-6: PEOU → Trust                | .56                                           | 7.76             | -               | .56

Our analysis indicates that all of the hypotheses except H4-2 are supported by the data from the experiment. Consumers' initial trust and PU have significant impacts on their intentions to adopt recommendation agents, while PEOU does not; therefore, H4-1 and H4-4 are supported, while H4-2 is not. Consumers' initial trust and PEOU significantly influence their PU of the agents, supporting H4-5 and H4-3. PEOU also significantly influences consumers' trust in the agents, supporting H4-6. The significant results regarding the impact of trust on PU and on intentions, as well as the impact of PEOU on trust, confirm the nomological validity of trust in online recommendation agents.

Consumers' initial trust directly influences their intentions to adopt the recommendation agents, while also exhibiting indirect effects through consumers' increased PU of the agents. The results listed in table 4-4 indicate that PU exerts the most determinative influence on intentions to adopt in terms of direct effects. The total effects of PU and trust on intentions are similar, and both are stronger than those of PEOU. The impact of PEOU on intentions to adopt agents is fully mediated by PU and trust. This finding is not uncommon: many other TAM studies (e.g., Davis, 1989) have found that PEOU is mediated by PU, and Gefen et al. (2003a) also found that PEOU is mediated by trust, though this was tested only with experienced consumers.
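As a small arithmetic check, the indirect effects in table 4-4 follow from products of the path coefficients in figure 4-2. The sketch below reproduces two of them; it only illustrates the path-tracing rule and does not re-estimate the model.

    # Path coefficients from figure 4-2 / table 4-4
    pu_to_intent, trust_to_pu, peou_to_trust = 0.45, 0.50, 0.56

    # Trust -> PU -> Intentions: 0.50 * 0.45 = 0.225, reported as .23
    print(round(trust_to_pu * pu_to_intent, 2))
    # PEOU -> Trust -> PU: 0.56 * 0.50 = 0.28, matching the reported indirect effect
    print(round(peou_to_trust * trust_to_pu, 2))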
The variance in adoption intentions explained by trust, PU, and PEOU in this study is 36 percent, which is relatively high compared to the results of Gefen et al. (2003a), who found that 27 percent of the variance in purchase intentions was explained by trust and PU. This confirms the validity of the integrated Trust-TAM for explaining online recommendation agent adoption. Furthermore, the relative importance of the three trusting beliefs in predicting adoption intentions is revealed by their loadings on the second-order trust construct, which are all significant at the .001 level. Consumers' initial beliefs in the competence (.87) and benevolence (.88) of online recommendation agents have similar, and higher, importance than their beliefs in the integrity (.68) of the agents during their deliberations about adopting the agents.

4.4 DISCUSSION

4.4.1 SUMMARY AND DISCUSSION OF RESULTS

This study has explored the nature of trust in online recommendation agents. Based on the theoretical and empirical work described in the literature, we extended interpersonal trust to trust in online recommendation agents. Data from this study confirm the nomological validity of trust in recommendation agents and the validity of Trust-TAM for online recommendation agents. The significant loadings of the three trusting beliefs (competence, benevolence, and integrity) indicate that all of them hold for trust in online recommendation agents. When interacting with online recommendation agents, consumers appear to treat computer agents as "social actors" and to perceive human characteristics (e.g., benevolence and integrity) in the agents.

Regarding the integrated Trust-TAM, this study reaches results similar to those of other trust and TAM studies, even though the trust objects in this study are technological artifacts. The analysis shows that consumers' initial trust in agents affected the PU of the agents and their intentions to adopt the agents. However, unlike Gefen et al. (2003a), who found that potential customers' purchase intentions were influenced only by their trust in e-vendors, we found that for new consumers, both trust in agents and PU of the agents have direct effects on adoption intentions. Consumers perceive online recommendation agents not only as support tools for online shopping, but also as "social actors" (virtual advisors) with human characteristics. Both the usefulness of the agents as tools that provide recommendations and consumers' trust in the agents as virtual assistants influence consumers' intentions to adopt the agents.

One factor that may explain the above discrepancy is the different behavioral intentions explored in the different studies, as summarized in table 4-1. This study focuses on consumers' intentions to use agents to get recommendations. Consumers did not delegate the whole purchase task to the agents, and they did not have to act upon the product recommendations provided; therefore, relatively low risks were involved. Conversely, Gefen et al. (2003a) explored consumers' purchase intentions. Purchase behaviors involve high uncertainties and risks (e.g., financial loss, personal information and privacy concerns). In such situations, trust might be more salient, and it may constitute a powerful determinant of purchase intentions for potential consumers.

4.4.2 LIMITATIONS

The major limitations of the first experiment were reported in chapter 3. In addition, one issue should be addressed regarding the analytic methodology used in the current study. The potential for common method variance exists because measurements of all of the constructs in this study were collected at the same point in time and via the same instrument (Straub et al., 1995). To test for common method bias, we applied Harman's one-factor test to the data from the current experiment (Podsakoff and Organ, 1986). We performed an exploratory factor analysis on all the variables; no single dominant factor emerged, and no single factor accounted for a majority of the covariance in the variables, suggesting that common method bias is not a concern in the present study.
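A common way to run such a one-factor check is to extract unrotated factors from all survey items and inspect how much variance the first factor captures. The sketch below is a generic illustration on a simulated item matrix (here using PCA as the extraction method); it is not the study's actual analysis.

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    X = rng.normal(size=(120, 28))                    # stand-in: 120 participants x 28 items
    X = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)  # standardize items
    shares = PCA().fit(X).explained_variance_ratio_

    # Common method bias is suspected when a single unrotated factor dominates
    # (a frequently used heuristic is more than half of the total variance).
    print(f"First factor explains {shares[0]:.1%} of the variance")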
4.4.3 IMPLICATIONS AND FUTURE RESEARCH

Due to advances in Web-based technologies, there are ample opportunities to utilize knowledge-based systems to facilitate online consumer decision-making and to provide recommendation services for consumers. However, because of the high risks and uncertainties inherent in online environments, consumers must trust Web-based technologies in order for them to be effective. Interpersonal trust has been the focus of many previous studies, while trust in technological artifacts remains an under-researched area. The present study has implications for information systems research on the role of trust in users' acceptance of online recommendation agents.

Results from this study and the prior literature show that the nature of trust in technological artifacts is not fundamentally different from interpersonal trust. Therefore, trust theories from the interpersonal domain may generally apply to trust in technological artifacts. Nevertheless, there might be unique elements of trust in technological artifacts, and more research is needed to examine whether its conceptualization should be extended to include other relevant beliefs. For example, Lerch et al. (1993) and Muir (1994) have explored other machine-trust-related beliefs, such as reliability, predictability, dependability, and faith, as they relate to machines. In addition, to further extend this line of research on the relational aspect of trust in agents, researchers may need to identify the emotional elements in consumer trust in online recommendation agents (Komiak and Benbasat, 2004; Rempel et al., 1985).

The issue of different targets of trust also deserves further research. The relative importance of different trust dimensions might differ across trust targets. Although the effect of trust in recommendation agents on the intentions to adopt the agents has been confirmed in this study, the role of agents in consumers' trust in e-vendors, and the reciprocal impacts of agent and vendor trust, have not been studied. Urban et al. (2000) suggest that recommendation agent technology is an effective way of promoting consumer trust in e-vendors. Trust in e-vendors can also be extended to online recommendation agents via the transference process (Doney and Cannon, 1997). In particular, for initial trust building, consumers rely on other relevant sources (e.g., the e-vendors and their websites) to judge the trustworthiness of recommendation agents. Additional empirical research is needed to investigate such mutual influences.

As shown in table 4-1, in contrast to Gefen et al. (2003a; 2003b) and Pavlou (2003), this study focuses on trust in, and adoption of, online recommendation agents.
Those previous studies tested similar models in which e-vendors were the trust targets, and PU and PEOU were measured in relation to websites. However, how consumers' perceptions and use of the recommendation agents available on a company's website influence their perceptions and use of the website itself is still an open research question. In addition to the reciprocal impacts of agent and vendor trust, it is possible that the PU, PEOU, and adoption of the agents will influence consumers' PU, PEOU, and use of the websites. Furthermore, it is not clear whether trust in agents will influence consumers' purchase behavior directly, or only indirectly through consumers' trust in the e-vendor and the PU of the website. The relationships between the two adoption models (i.e., adoption of agents and adoption of websites) require future research.

This study also suggests a new perspective for IT acceptance research and provides an initial blueprint for investigating relationship building with technologies. In TAM (Davis, 1989), the dominant IT adoption model, IT acceptance is determined by rational processes focusing on expected outcomes such as usefulness. As summarized by Gefen and Straub (2003), "in that line of research, social aspects were secondary, if mentioned at all; social aspects were studied in the context of how they influenced perceived usefulness and ease of use" (p. 21). Recently, several studies (e.g., Gefen et al., 2003a; Gefen et al., 2003b; Gefen and Straub, 2003; Pavlou, 2003) have looked into the role of social factors (e.g., trust) as direct antecedents of behavioral intentions. However, the social factors in most of these studies were examined in the context of interpersonal relationships between consumers and e-vendors; the "social" relationships between consumers and technologies, such as agents, were largely ignored. This study, on the other hand, confirms the importance of such perceptions and highlights the role of relational factors in consumers' intentions to adopt online technologies.

Furthermore, the findings of this research can be extended to other decision support systems and knowledge-based systems as well. The integrated Trust-TAM may provide a more complete model to explain user acceptance of these systems. When consumers utilize decision support systems or knowledge-based systems, they face a set of risks and uncertainties (e.g., whether a system is reliable and has the competence to improve task performance). Therefore, trust in the system is a factor in system acceptance.

This study also has important practical implications for the design of effective online recommendation agents. In particular, relational and social relationships between consumers and online recommendation agents are important, as positive relationships induce consumer trust in agents and promote agent adoption. A strong, personal connection to customers via web technologies should be one of the key goals of web vendors. Designers could employ several social relationship-building mechanisms to induce consumer trust in the agents (Komiak et al., 2005). For example, designers may consider creating personalized agents that know individual users' backgrounds and greet them when they initiate agent applications. Anthropomorphic features (e.g., a human-like body with gestures and emotional reactions) can also be designed into the agents (Qiu, 2002).
There are other important agent capabilities that enhance the trustworthiness of online recommendation agents. As key trust-building mechanisms, the appropriate explanation facilities studied in chapter 3 should be embedded in online recommendation agents. In addition, Xiao and Benbasat (2002) have investigated the internalization capabilities of recommendation agents. Agent internalization refers to an agent's ability to understand users' real needs and to apply those needs when generating recommendations (Xiao and Benbasat, 2002). A clear example of high internalization is an agent that effectively elicits consumers' desires by asking appropriate needs-based questions. Their results indicate that consumers invest greater trust in recommendation agents with higher internalization capabilities. By incorporating these important trust-inducing features, recommendation agents can provide more effective services, gain a higher chance of user acceptance, and further promote consumer intentions to transact with web vendors.

APPENDIX FOR CHAPTER 4

Table A4-1 Previous TAM Extension Studies

Davis (1989)
  Constructs: PU, PEOU, Usage/BI
  Technologies: PROFS email; XEDIT file editor; Chart-Master and Pendraw graphics systems
  User characteristics: Not investigated; experienced users for one study and new users for the other
  Context: Offline, voluntary use for work
  Findings: PU → Usage/BI; PEOU → PU

Davis et al. (1989)
  Constructs: PU, PEOU, A, BI, Usage
  Technologies: WriteOne word processing
  User characteristics: Experience (through a longitudinal study: both initial and one semester later)
  Context: Offline, voluntary use for work
  Findings: PU → A; PEOU → A¹; PEOU → PU; A → BI¹; PEOU → BI¹; PU → BI; BI → Usage

Mathieson (1991)
  Constructs: PU, PEOU, A, BI
  Technologies: Spreadsheet, calculator
  User characteristics: Not investigated
  Context: Offline, voluntary use for work
  Findings: PU → A; PEOU → A; PEOU → PU; PU → BI; A → BI

Adams et al. (1992)
  Constructs: PU, PEOU, Usage
  Technologies: Voice mail and email, WordPerfect, Lotus 1-2-3, Harvard Graphics
  User characteristics: Not investigated
  Context: Offline, voluntary use for work
  Findings: PU → Usage⁵; PEOU → Usage⁶; PEOU → PU

Igbaria et al. (1995)
  Constructs: EV, PEOU, PU, Usage
  Technologies: Micro-computer
  User characteristics: User training, computer experience
  Context: Offline, voluntary use for work
  Findings: EV → PEOU; EV → PU; PEOU → PU; PU → Usage

Chau (1996)
  Constructs: PEOU, near-term PU, long-term PU, BI
  Technologies: Word, Excel
  User characteristics: Not investigated
  Context: Offline, voluntary use for work
  Findings: PEOU → Near-term PU; PEOU → BI; Near-term PU → Long-term PU; Near-term PU → BI; Long-term PU → BI
Gefen et al. (1997)
  Constructs: PU, PEOU, SPIR, Gender, Usage
  Technologies: Email
  User characteristics: Gender, culture
  Context: Offline, voluntary use for work
  Findings: Gender → PU; Gender → PEOU; Gender → SPIR; SPIR → PU; PU → Usage

Taylor and Todd (1995a)
  Constructs: PU, PEOU, A, SN, PBC, BI, Usage
  Technologies: Computer resource center
  User characteristics: Prior experience
  Context: Offline, voluntary use for work and study
  Findings: PEOU → PU; PU → A; PEOU → A¹; PU → BI; SN → BI; PBC → BI; BI → Usage; PBC → Usage¹

Taylor and Todd (1995b)
  Constructs: PU, PEOU, A, BI, Usage
  Technologies: Computer resource center
  User characteristics: Not investigated; most participants were familiar with the technologies
  Context: Offline, voluntary use for work and study
  Findings: PEOU → PU; PU → A; PEOU → A; PU → BI; BI → Usage

Venkatesh and Morris (2000)
  Constructs: PU, PEOU, SN, BI
  Technologies: Various systems for data and information retrieval
  User characteristics: Gender; experience (through a longitudinal study: post-training, after one month, and after three months)
  Context: Offline, voluntary use for work
  Findings: PU → BI; PEOU → BI; PEOU → PU; SN → BI²

Venkatesh et al. (2003)
  Constructs: PU, PEOU, SN, BI
  Technologies: Sophisticated organizational technologies (e.g., Portfolio Analyzer)
  User characteristics: Experience (through a longitudinal study: post-training, after one month, and after three months)
  Context: Offline, voluntary and mandatory use
  Findings: PU → BI; PEOU → BI³; SN → BI⁴

Gefen et al. (2003a)
  Constructs: PU, PEOU, BI, Trust, Familiarity
  Technologies: WWW website
  User characteristics: Experience with online stores
  Context: Online, voluntary use for shopping
  Findings: PEOU → PU; PU → BI¹; Trust → BI; Familiarity → BI; Familiarity → PEOU; Familiarity → Trust

Gefen et al. (2003b)
  Constructs: PU, PEOU, BI, Trust
  Technologies: WWW website
  User characteristics: Not investigated (experienced shoppers only)
  Context: Online, voluntary use for shopping
  Findings: PEOU → PU; PEOU → Trust; PEOU → BI; PU → BI; Trust → PU; Trust → BI

Koufaris (2002)
  Constructs: PU, PEOU, BI, Flow (PC, Enjoyment, Concentration)
  Technologies: WWW website
  User characteristics: Web skills
  Context: Online, voluntary use for shopping
  Findings: PU → BI; Enjoyment → BI

Lederer et al. (2000)
  Constructs: PU, PEOU, Usage
  Technologies: WWW (work-related Internet newsgroups)
  User characteristics: Not investigated (mostly experienced Internet users)
  Context: Online, voluntary use for work
  Findings: PU → Usage; PEOU → Usage

Moon and Kim (2001)
  Constructs: PU, PEOU, Playfulness, A, BI, Usage
  Technologies: WWW websites
  User characteristics: Not investigated
  Context: Online, voluntary use
  Findings: PU → A; PEOU → A; Playfulness → A; Playfulness → PEOU; PEOU → PU; PU → BI; Playfulness → BI; A → BI; BI → Usage

Gentry and Calantone (2002)
  Constructs: PU, PEOU, A, BI
  Technologies: Shopping bots on the WWW
  User characteristics: Not investigated
  Context: Online, voluntary use for shopping
  Findings: PEOU → PU; PU → A; PU → BI

Legend: A = attitude; BI = behavioral intentions; SN = subjective norm; PBC = perceived behavioral control; PC = perceived control; SPIR = social presence and information richness; EV = external variables (e.g., individual, system, and organizational characteristics).

¹ The relationship is significant only when participants use the software/technology/computer center/website for the first time.
² This relationship is significant only for the post-training and after-one-month tests, but not for the after-three-months test.
³ In the voluntary settings, this relationship is significant only for the post-training test; in the mandatory settings, it is significant for the post-training and after-one-month tests.
⁴ This relationship is significant only for the post-training test in the mandatory settings.
(5) The relationship is significant except for the WordPerfect and Harvard Graphics cases.
(6) The relationship is significant except for the email and voice mail cases.

CHAPTER 5: ANALYSIS OF TRUST FORMATION PROCESSES IN ONLINE RECOMMENDATION AGENTS

5.1 INTRODUCTION

This chapter identifies the key processes through which consumers' trust in online recommendation agents is enhanced or inhibited. Given the high risks and uncertainties involved in the use of agent technologies in online environments, as discussed in chapter 4, building trustworthy online recommendation agents is a challenging task. There is little empirical research on the antecedents and formation of trust in the context of online recommendation agents. This study investigates the processes that build and inhibit (18) trust in recommendation agents.

Fragility is one of the important qualities of trust (Ba and Pavlou, 2002; Blomqvist, 1997; Gallivan, 2001; Jarvenpaa and Leidner, 1998; Kramer, 1999; Lewicki and Bunker, 1995; McKnight et al., 1998; Meyerson et al., 1996; Ring, 1996; Ruppel et al., 2002-2003; Shankar et al., 2002; Shneiderman, 2000; Slovic, 1993; Worchel, 1979). Researchers have noticed the asymmetry between trust-building and trust-inhibiting processes in the domain of interpersonal perceptions (e.g., Slovic, 1993), and many have argued that trust is easier to destroy than to create (e.g., Shneiderman, 2000; Slovic, 1993). Especially during the initial stage of trust formation, when there is no previous interaction history to rely on, trust is fragile and may be easily destroyed because consumers' perceptions are inherently fluid (Lewicki and Bunker, 1995; Papadopoulou et al., 2001; Ring, 1996). Recent IS research calls attention to the unique factors and processes that act as technology usage inhibitors (Cenfetelli, 2004), but there are few empirical studies that focus on the processes leading to the deterioration and decline of trust in recommendation agents.

To open the "black box" of consumers' trust-building and trust-inhibiting processes, we analyzed the written protocols that participants used to justify their trust levels after interacting with a recommendation agent in Experiment 1. Using a prior-research-driven approach (Boyatzis, 1998) and building mainly on the classification scheme of Komiak (2003) and Komiak and Benbasat (2003), a new agent trust formation scheme was developed to code the written protocols. As reported in chapters 3 and 4, participants' trust perceptions were surveyed in Experiment 1. The major trust-building and trust-inhibiting processes resulting from the protocol analysis were then used to predict participants' trust levels in a Partial Least Squares (PLS) analysis. Using both an interpretive approach and a quantitative analysis provides more confidence in revealing the most influential trust formation processes. Based on these trust formation processes, guidelines for the design of trustworthy online recommendation agents are discussed.

(18) The inhibiting processes that lead to trust deterioration and decline are different from the concept of distrust (e.g., Lewicki and Bunker, 1995; Kramer, 1999). The issue of distrust is also important but is beyond the scope of this study.

The chapter proceeds as follows. Section 5.2 reviews the process theories of initial trust formation. Section 5.3 describes the research method.
Section 5.4 presents the data analysis and describes the coding scheme. The results are reported in section 5.5, and the chapter ends with a discussion of the results and of the limitations and implications of this work.

5.2 PROCESS THEORIES OF INITIAL TRUST FORMATION

Drawing from research in various disciplines (e.g., psychology, sociology, political science, economics, marketing, and organizational sciences), process theories of initial trust formation shed light on how initial trust forms in interpersonal and organizational contexts (Brashear et al., 2003; Doney and Cannon, 1997; Gefen, 2004; Gefen et al., 2003b; Komiak, 2003; Komiak and Benbasat, 2003; Lewicki and Bunker, 1995; McKnight et al., 1998; Zucker, 1986). To analyze consumers' trust formation processes in online recommendation agents, we followed Boyatzis's (1998) prior-research- and theory-driven approach to develop a coding scheme. This study used Komiak's (2003; see also Komiak and Benbasat, 2003) scheme of trust formation in recommendation agents as a basis for the development of the coding scheme, but it also covered other major processes discussed in other studies.

Based on McKnight et al. (1998), Doney and Cannon (1997), Zucker (1986), Lewicki and Bunker (1995), and Komiak (2003), table 5-1 categorizes the major trust-building processes into six clusters: (1) personality based, (2) institution based, (3) calculative based, (4) heuristic based, (5) process based, and (6) knowledge based.

Table 5-1 A Summary of Trust Building Mechanisms. The table covers six studies: McKnight et al. (1998), on initial trust among employees in an organization; Doney and Cannon (1997), on buyer trust in a supplier firm and its salesperson; Zucker (1986), on general trust; Lewicki and Bunker (1995), on general trust; Komiak (2003), on consumer trust in recommendation agents; and this study, on consumer trust in recommendation agents. The processes, grouped by cluster, are:

1. Personality based: dispositional process (also termed "trusting stance and faith in humanity").
2. Institution based: situational normality; structural assurance (also "formal societal structures and intermediary mechanisms"); risk awareness.
3. Calculative based: calculative process (also "calculus process").
4. Heuristic based: categorization processes (also "characteristic-based process"); illusions of control process; transference process; media assessment.
5. Process based: expectation confirmation (also "process-based process"); verification process; utility assessment; control process.
6. Knowledge based: information sharing process; prediction process; "unknown" awareness process; competence assessment (also "capacity process"); benevolence assessment (also "intentionality assessment"); integrity assessment.

Note: in the original table, check marks indicate which processes each study examined; parenthetical comments give the different terms used by individual studies; several processes were combined where a study placed them into one broad category (e.g., a single "knowledge-based process").

5.2.1 PERSONALITY-BASED

Personality-based trust-building mechanisms mean that people with different cultural backgrounds, personality types, and developmental experiences differ considerably in their general predisposition or tendency to trust others (Kramer, 1999; Mayer et al., 1995).
They play an important role in the initial stage of a relationship, when there are limited resources for judging the other party's trustworthiness (Gefen et al., 2003b; Mayer et al., 1995; McKnight et al., 1998; Rotter, 1971). The dispositional process is personality based: it is triggered by a person's general tendency to trust or not to trust others (Gefen et al., 2003b; Mayer et al., 1995; McKnight et al., 1998; Rotter, 1971). Recent research found that disposition to trust is important in establishing interpersonal and organizational trust in e-commerce contexts, as well as in building technology trust (Lee and Turban, 2001; McKnight et al., 2002a; McKnight et al., 2004). Ample evidence exists from both laboratory and field studies that people differ significantly in their general tendency to trust others, including technology (e.g., Kramer, 1999). Some people have a tendency to trust high-technology systems while others do not (Komiak et al., 2005).

5.2.2 INSTITUTION-BASED

Institution-based trust-building mechanisms are triggered by environmental structures that make an environment trustworthy (McKnight et al., 2002a; McKnight et al., 1998). With institution-based trust, people believe that the structures necessary for a successful outcome are in place. Structural assurance and situational normality are the two main institution-based trust-building processes discussed in the literature (McKnight et al., 2002a; McKnight et al., 1998). Structural assurance refers to an assessment of trust based on contextual conditions such as regulations, guarantees, safety nets, promises, legal recourse, or other procedures that are in place (McKnight et al., 1998; Zucker, 1986). Situational normality refers to an assessment of trust based on a situation being normal and properly ordered (Lewis and Weigert, 1985; McKnight et al., 1998).

Similar to structural assurance, another process in this category is the risk awareness process. This process occurs when consumers are aware of potential risks in using recommendation services in online environments. The existence of risk is one of the main reasons trust is needed (Mayer et al., 1995). With regulations, guarantees, or legal recourse, consumers face fewer risks and hence are more likely to trust agents (Pennington et al., 2004). Conversely, being aware of risks (e.g., personal information leakage) in online environments reduces consumers' trust in agents. Consumers' perceived risk is quite high in the initial stage of relationship building, due to the virtual nature of online environments and the lack of prior interaction history (Gefen et al., 2003b).

5.2.3 CALCULATIVE-BASED

The calculative process (19) occurs when consumers consider the costs of and/or the rewards for an agent acting in an untrustworthy way (Doney and Cannon, 1997; Gefen et al., 2003b; Lewicki and Bunker, 1995). When the costs of acting in an untrustworthy manner (e.g., providing biased recommendations) and of being caught and penalized exceed the benefits, consumers may infer that acting in that way would be contrary to the agent's interests, and thus that the agent can be trusted (Akerlof, 1970; Gefen et al., 2003b). Gefen et al. (2003b) found that calculative-based beliefs positively affect consumer trust in e-vendors.

(19) McKnight et al. (1998) considered this process personality based, while other studies (e.g., Doney and Cannon, 1997; Brashear et al., 2003) examined it as a situational factor. A pilot examination of the written protocols indicated that, in most cases, this process is invoked by specific factors related to the agent provider. Therefore, we treated it as a process separate from the dispositional process.
5.2.4 HEURISTIC-BASED

Heuristic-based trust relies on first impressions or environmental cues rather than experiential interactions. It was termed cognition-based trust by McKnight et al. (1998), who suggested two processes, 1) the categorization process and 2) illusions of control, which apply to situations where consumers do not have prior first-hand experience with an agent.

The categorization process means that people put the other person in the same category as themselves, or place another person into a general category of persons. People perceive others in the same group to be trustworthy because those who are grouped together tend to share common goals and values (Kramer et al., 1996). Also, people form positive trusting beliefs about another by generalizing from the favorable category into which that person was placed (McKnight et al., 1998).

Illusions of control means that people try to assure themselves that things are under their personal control. In interpersonal and organizational contexts, this process indicates that, in an uncertain situation, people take small actions to try to influence others and test whether they are trustworthy (McKnight et al., 1998). McKnight et al. (1998) have suggested that such illusions of control help facilitate consumers' trust formation. This process also applies to the consumer-agent relationship in that a recommendation agent is inanimate in nature and can be easily controlled by its users. Accordingly, some perceptions of control are not illusions but real (Komiak, 2003). Because it involves agent-user interaction, we categorized it as a process-based trust-building mechanism in section 5.2.5.

In addition to the above two processes, there are two other heuristic-based processes: 1) the transference process and 2) media assessment. The transference process was proposed by Doney and Cannon (1997) and empirically examined by Stewart (2003). Trust can be transferred from one entity, or from a credible proof source, to others (Milliman and Fugate, 1998). When consumers interact with an agent for the first time, they identify other entities and heuristic cues closely associated with the recommendation agent (e.g., a general impression of the website or the reputation history of the e-vendor) to infer the agent's trustworthiness.

The media assessment process occurs when people evaluate an agent's interface and the Internet medium in determining their trust levels (Komiak, 2003; Komiak et al., 2005). A pleasant and easy-to-use interface indicates that a recommendation agent cares about users' effort and enjoyment and invests its own effort in the agent-consumer relationship (Gefen et al., 2003b). Consequently, the perceived ease of use of an agent delivered through the Internet medium influences consumers' trust in the agent.

5.2.5 PROCESS-BASED

Process-based trust formation was proposed by Zucker (1986). To determine their trust levels, consumers assess the other party's behavior and performance during their interactions.
This mechanism consists of four processes: expectation confirmation, the verification process, utility assessment, and the control process.

Expectation Confirmation. Consumers use their expectations as a benchmark to judge the trustworthiness of online recommendation agents (Kramer, 1999). As Kramer (1999) suggested, "individuals' judgments about others' trustworthiness are anchored, at least in part, on their a priori expectations about others' behavior" (p. 576). Consumers trust an agent when they believe the agent behaves as they would expect (Gefen and Straub, 2003). When an agent's actions do not conform to consumers' expectations, trust in the agent will be lower (Kramer, 1999; Lewicki and Bunker, 1995; Sitkin and Roth, 1993).

In self-efficacy theory, Bandura (1997) distinguished between two kinds of expectancy beliefs: outcome expectations, i.e., beliefs that certain behaviors will lead to certain outcomes (e.g., the belief that practice will improve one's performance), and efficacy and process expectations, i.e., beliefs about whether one can effectively perform the behaviors necessary to produce the outcome (e.g., "I can practice sufficiently hard to win the next tennis match"). Similarly, consumers have two types of expectations when using recommendation agents. One is outcome expectations, which concern the final recommendations (e.g., "I should get several Olympus cameras"). Consumers spend time and effort in using an agent and thus may expect to get a set of appropriate recommendations in return. When agent recommendations match these expectations, consumer trust emerges; otherwise, consumer trust might decline. The other is process and behavior expectations, which relate to the activities that the agent should conduct in order to achieve certain outcomes (e.g., "the agent should ask me about my brand preferences"). Consumers may use their own knowledge to generate expectations regarding agent behavior. When an agent "behaves" in an expected way, consumers' trust in the agent will be higher.

Verification Process. This process has been proposed and validated in Komiak (2003) and Komiak et al. (2005). Being able to verify the performance of an agent against other resources greatly facilitates consumers' trust in the agent (Donnelly, 1994). In initial relationship building, when consumers lack the knowledge to check the credibility of the agent and do not have an interaction history to rely on, they may consult a trusted third party (Komiak et al., 2005). For example, with their friends or product reviewers, consumers can verify whether the recommendations provided are good and whether the agent's claims are true. Conversely, a lack of means to check the advice from the agent makes the verification process difficult and may inhibit consumer trust. The verification process occurs especially when consumers have doubts or suspicions about an agent and therefore need to verify its advice against other sources (Komiak et al., 2005). The absence of a third party for verification will result in a lack of supporting evidence, hence the doubts and suspicions will remain (Donnelly, 1994), leading to trust deterioration and decline.

Utility Assessment.
In a task-based context, utility assessment is one of the most important components of personal relationships, and the relationship perceived to provide the most value will be characterized as trustworthy in terms of competence and motive (Atkinson and Butcher, 2003). Extending this line of reasoning, we propose that the benefits gained from using recommendation agents provide consumers with a foundation for judging an agent's competence and benevolence.

The causal relationship between trust and perceived usefulness has been modeled in different ways. For example, Koufaris and Hampton-Sosa (2004) advocated that the perceived usefulness of a website influences consumers' initial trust in an online firm, while Gefen et al. (2003b) suggested that trust influences usefulness perceptions. In chapter 4, we also suggested that usefulness perceptions influence trust. The inclusion of the utility assessment process, which was identified in our pilot examination of the written protocols, is justified by the reciprocal influence of many psychological processes (Eagly and Chaiken, 1993). We agree that there might be psychological processes through which consumers' trust in an agent influences their utility assessment. Nevertheless, in the initial interaction with an agent, the utility that consumers gain from using the agent can serve as a basis for building the trust relationship (Atkinson and Butcher, 2003). Consumers may generate an overall positive evaluation of an agent when they perceive it to be useful, and such positive evaluation will reinforce and enhance their trust in the agent. The psychological processes of trust and utility assessment can be reciprocal and influence each other in the early stages of trust formation.

Control Process. In an organizational context, Luhmann (1979) and Das and Teng (1998) proposed a theory concerning the relationship between trust and control. Control mechanisms have an impact on trust because, with proper control mechanisms, the perceived level of uncertainty is reduced and "the attainment of desirable goals becomes more predictable" (Das and Teng, 1998, p. 493). This uncertainty reduction principle applies to trust formation in online recommendation agents in the initial stage. With more control over an agent, consumers are less likely to be misguided by the agent and thus more likely to trust it (Komiak et al., 2005). As suggested in Silver (1991b), users' control over an agent involves, among other things, the number of choices provided by the agent, the tendency of the agent to influence the consumer's decision making, and the consumer's opportunity to express his or her needs.

5.2.6 KNOWLEDGE-BASED

Knowledge-based trust relies on information about the trust target (Gefen et al., 2003b; Lewicki and Bunker, 1995). Trust develops when people gain knowledge about the other party (Gefen et al., 2003b), allowing them to interpret and predict the other party's behavior (Doney and Cannon, 1997; Lewicki and Bunker, 1995). When interacting with an online recommendation agent, consumers have opportunities to understand and predict agent behavior using information provided by the agent (Komiak, 2003; Komiak et al., 2005). Three processes are included in this cluster (20): 1) the information sharing process, 2) the prediction process, and 3) the "unknown" awareness process.

Information Sharing Process.
As discussed in chapter 3, information asymmetry has been regarded as a major obstacle to trust building, due to the agency relationship between consumers and recommendation agents (Ba, 2001; Das and Teng, 1998; Donnelly, 1994; Hawkes et al., 1989; Whitener et al., 1998). Without sufficient explanation facilities, users lack information about an agent's actions (i.e., its procedures for generating recommendations) and motives (i.e., its motivations for eliciting user requirements), leaving them unable to verify agent competence and motivation. Trust is reduced when the other party behaves without explanation (Gefen and Straub, 2003). Openness and willingness to share information are important approaches to conveying agent trustworthiness (Das and Teng, 1998; Whitener et al., 1998).

(20) As shown in table 5-1, there are three other knowledge-based processes (i.e., competence, benevolence, and integrity assessment). Given that we define trust in recommendation agents to include competence, benevolence, and integrity beliefs, these three processes are tautological to the trust definition and thus were not included in this study.

Information sharing takes place when consumers get explanations regarding the agent's behavior, motivation, and so on. Gregor and Benbasat (1999) suggested that explanations are important components of intelligent systems, including online recommendation agents, because, by making the performance of systems transparent to their users, they improve user trust in the systems. System transparency and understandability increase consumer trust in online recommendation agents (Aubert and Kelsey, 2003; Herlocker et al., 2000; Hertzum et al., 2002; Lee and Turban, 2001; Sinha and Swearingen, 2001; Sinha and Swearingen, 2002). Through appropriate explanations, consumers know more about the agent: they better understand the agent's actions and thus can evaluate its competence, and they are aware of its motivation and thus can evaluate the extent of its benevolence. Information sharing also reduces the level of behavioral uncertainty, which, in turn, increases consumer trust (Kwon and Suh, 2004). In addition to such explanations, other information is needed as well, such as detailed product information, which indicates that the agent is knowledgeable about a product (Komiak et al., 2005).

Prediction Process. In essence, knowledge-based trust is grounded in the other's predictability (Lewicki and Bunker, 1995). The prediction process was empirically examined by Doney and Cannon (1997) in the context of buyer-seller relationships and by Brashear et al. (2003) in the context of manager-salesperson relationships. It concerns whether the agent is reliable, consistent, and predictable in consumers' eyes. When consumers are able to predict an agent's behavior (e.g., it follows certain reasonable rules and is reliable), they may perceive the agent to be competent (as demonstrated by its use of those rules) and less likely to take advantage of them. As a result, consumers will be more likely to trust the agent.

"Unknown" Awareness Process. This process was proposed and examined in Komiak (2003) and Komiak et al. (2005). In initial relationship building, the parties have limited knowledge about each other and might have difficulty justifying a high level of trust. Gambetta (1988) suggested that trust is particularly relevant in conditions of ignorance with respect to the unknown actions of others.
When consumers are aware of some "unknown" about an agent, their trust in the agent will be lower. In contrast, when consumers have favorable perceptions of the agent's competence, benevolence, and integrity, they will suspend worrying about the "unknown" (Komiak, 2003; Komiak et al., 2005). Therefore, the occurrence of this process inhibits consumer trust in an agent.

The six major clusters of trust formation mechanisms above provide insights into how initial trust forms and expand Komiak's (2003) trust formation scheme. Using a verbal-protocol-based process-tracing approach, Komiak (2003) and Komiak and Benbasat (2003) identified nine processes of agent trust formation. They explored the processes involved in trust formation from the initial interaction between customers and a recommendation agent through the second and third interactions, and also examined the different processes for cognitive and emotional trust. Another similar study is Komiak et al. (2005), which compared the trust-building processes in virtual recommendation agents versus real salespersons.

The present study differs from the aforementioned studies in several respects. First, we employed a written-protocol-based rather than a verbal-protocol-based method (the methodological differences are addressed in section 5.3.1). Second, in addition to qualitatively coding the written protocols, a quantitative analysis was conducted to test the predictive power of the main trust-building and trust-inhibiting processes in PLS. The above studies identified major processes by the frequency of the various processes. However, the frequency of a process does not necessarily indicate its effect in forming consumers' trust perceptions. This study integrated an interpretive approach with a quantitative analysis, allowing us to further validate the major processes that influence consumers' trust levels in online recommendation agents. The advantages of combining qualitative and quantitative analysis approaches have been widely recognized (Kaplan and Duchon, 1988; Mingers, 2001). In this study, the multi-method approach minimizes the threat of mono-method variance, enables us to identify the cognitive processes rather than relying on self-reports only, and increases the robustness of the results, because the interpretive findings from the protocol analysis are further validated through a quantitative analysis.

Moreover, the recommendation agent investigated in the present study is provided and owned by the e-vendor, while the agent in Komiak (2003) was assumed to be owned by consumers. Trust issues are more salient in this study because more uncertainties and risks are involved when consumers delegate their tasks to a recommendation agent owned by another entity. In such a situation, consumers may be more concerned about whether the agent works for the e-vendor or for themselves.

5.3 RESEARCH METHOD

As described in chapter 3, after completing the experimental tasks, participants were surveyed regarding their trusting beliefs (i.e., quantitative data) and then were asked to answer several open-ended questions to justify their trust levels in the agents (i.e., qualitative data). The three essay questions relevant to this chapter are: 1) Do you believe the virtual advisor's competence? Why? 2) Do you believe the virtual advisor's benevolence? Why? and 3) Do you believe the virtual advisor's integrity? Why?
These three questions correspond to the three belief components (i.e., competence, benevolence, and integrity) of trust in online recommendation agents as defined in chapter 2.

5.3.1 WRITTEN PROTOCOL VERSUS VERBAL PROTOCOL ANALYSIS

We used a written-protocol-based method to elicit the reasons, processes, and factors that lead to the development and decline of consumers' trust in online recommendation agents (Todd and Benbasat, 1987). A variety of process-tracing methods is available, such as verbal protocols, written protocols, computer logs, information display boards, and the tracing of eye movements. Among them, the verbal protocol approach has been argued to be powerful because it provides the greatest data richness (Todd and Benbasat, 1987). For example, it provides access to what information is examined and what analysis or assessment has been conducted by the problem solver. However, we chose written protocol analysis mainly because our interest is in investigating the underlying reasons and processes that lead to trust building and decline, rather than in revealing the different activities that consumers conducted during the shopping tasks or the decision strategies that consumers used to make their shopping choices. As discussed in Todd and Benbasat (1987), verbal protocol analysis is appropriate for conditions where participants are required to report only the contents of short-term memory (i.e., "what" the problem solver/decision maker is doing, what information is being used, and so forth). In contrast, written protocol analysis is appropriate for justifying why participants have performed in a certain way and for justifying their overall evaluations of their experiences. Thus, written protocols were collected in this study.

5.4 DATA ANALYSIS

Each written protocol was broken into episodes (the unit of coding), and each episode contained at most one trust-building or trust-inhibiting process. On average, each participant provided about six episodes in total for the three essay questions. Around 65% of the episodes relate to trust-building processes and 35% to trust-inhibiting processes.

This study used a prior-research-driven and theory-driven approach to analyze the written protocols (Boyatzis, 1998). The process theories of trust formation reviewed earlier helped us interpret the protocols and guided us in identifying the potential trust-building and trust-inhibiting processes in the written protocols. A preliminary analysis of the written protocols culminated in a coding scheme with six clusters of trust-building/inhibiting processes, as informed by the process theories of trust formation (i.e., personality based, institution based, calculative based, heuristic based, process based, and knowledge based). Each cluster includes one or more individual processes. In total, 12 processes (21) were identified as potential trust-building/inhibiting processes in online recommendation agents: 1) dispositional process, 2) risk awareness process, 3) transference process, 4) media assessment, 5) calculative process,

(21) All of the 12 processes apply to both trust building and trust inhibiting. For some of them, the trust-inhibiting process should strictly be named in the opposite way (e.g., expectation disconfirmation, lack of information sharing). For simplicity of description, in most cases we use only the trust-building names to refer to both trust building and trust inhibiting.
6) expectation confirmation, 7) verification process, 8) utility assessment, 9) control process, 10) information sharing process, 11) prediction process, and 12) "unknown" awareness process. The coding scheme with the 12 processes, along with examples from the written protocols for each process, is shown in table 5-2.

Three processes discussed in section 5.2 were not included in the coding scheme: 1) the categorization process, 2) illusions of control, and 3) situational normality. The categorization process and illusions of control mainly apply to situations where consumers do not have prior first-hand experience with an agent (McKnight et al., 1998), while in this study participants interacted with an agent to get shopping advice. Our preliminary analysis did not find any protocols related to these two processes. Situational-normality-related protocols were not found either: online recommendation agents are an emerging technology, and many websites currently do not provide online recommendation services. Therefore, we did not include situational normality in the coding scheme.

Two judges (the author and another research assistant) independently coded the protocols. For each episode, the judges decided: 1) which process it belongs to, if any, and 2) whether it is trust building or trust inhibiting. Another column, labeled "N," was used for episodes coded as involving no process.

Table 5-2 Trust Building/Inhibiting Processes and Examples (for each process: sources, a trust-building example, and a trust-inhibiting example from the written protocols)

Personality based.
1) Dispositional Process (DP). Sources: Gefen et al. (2003); Mayer et al. (1995); McKnight et al. (1998); Rotter (1971). Building: "I think that I do believe in the competence of the virtual advisor because well, I trust high-tech things." Inhibiting: "... However, in most websites, electronic advisors might be biased."

Institution based.
2) Risk Awareness Process (RK). Sources: Komiak (2003); Mayer et al. (1995); Pennington et al. (2004). Building: "... As well there is guarantee for purchasing the product." Inhibiting: "I don't feel so secure in using these services in a virtual environment."

Calculative based.
3) Calculative Process (CL). Sources: Akerlof (1970); Doney and Cannon (1997); Gefen et al. (2003). Building: "... it has no reason to hide any information from me or to try 'push' a certain product to make a quick sale." Inhibiting: (no instances were found).

Heuristic based.
4) Transference Process (TR). Sources: Doney and Cannon (1997); Stewart (2003); Milliman and Fugate (1998). Building: "... and because I had good overall impression of the website." Inhibiting: "I would have some drawbacks on this because the online store doesn't really have a long reputation history."
5) Media Assessment (MA). Sources: Komiak (2003); Komiak et al. (2005). Building: "Yes, because it is user-friendly and ..." Inhibiting: "But a cartoon character might have been nicer; this virtual advisor is not 'physically attractive'."

Process based.
6) Expectation Confirmation (ET). Sources: Kramer (1999); Komiak et al. (2005); Lewicki and Bunker (1995); Sitkin and Roth (1993). Building: "Yes - because his recommendations match my expectations and prior knowledge of digital cameras." Inhibiting: "It would be better if the advisor can provide some outputs of the cameras so that the customers can really see the qualities of the cameras."
7) Verification Process (VF). Sources: Donnelly (1994); Komiak et al. (2005). Building: (no instances were found). Inhibiting: "I will also consult my friends who are familiar with digital cameras, as well as experienced digital camera salespeople for their further advice on the recommended models suggested by the virtual advisor."
8) Utility Assessment (UT). Source: Atkinson and Butcher (2003). Building: "... The advisor does provide a useful service." Inhibiting: "... but it wasn't as useful to the extent of a salesperson."
9) Control Process (CT). Sources: Luhmann (1979); Das and Teng (1998); McKnight et al. (1998); Komiak et al. (2005). Building: "... you can always change your answers if the camera they have chosen does not suit your needs." Inhibiting: "... I want to consider more choices when I am buying a digital camera."

Knowledge based.
10) Information Sharing Process (IS). Sources: Gefen and Straub (2003); Das and Teng (1998); Whitener et al. (1998); Komiak et al. (2005). Building: "yes, the how explanations present me all his reasonable reasoning in choosing the cameras." Inhibiting: "... but I would like more explanation, so that people like me (who don't know anything about digital cameras) would benefit more."
11) Prediction Process (PD). Sources: Lewicki and Bunker (1995); Doney and Cannon (1997); Brashear et al. (2003). Building: "It'd get the job done by following all the steps prescribed, and get consistent results if the input information is the same." Inhibiting: "I don't think it is able to select the best product of the best price, except by random chance."
12) "Unknown" Awareness Process (UN). Sources: Komiak (2003); Komiak et al. (2005). Building: (no instances were found). Inhibiting: "I do not know and therefore I am skeptical and try to make my own decisions (I try not to get persuaded)."

5.5 RESULTS

To assess the reliability of the coding scheme and ensure the validity of the analysis, Cohen's Kappa coefficient (Cohen, 1960) was used to measure inter-coder agreement (Todd and Benbasat, 1987). The Kappa coefficient is 0.71, indicating that the agreement between the two coders is substantial; Landis and Koch (1977, p. 165) suggest kappa values from 0.61 to 0.80 as a benchmark of high agreement.
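To make the agreement statistic concrete, the following is a minimal sketch, not the procedure actually used in this study, of how Cohen's Kappa can be computed from two coders' categorical episode labels; the labels below are hypothetical and merely reuse the process codes from table 5-2.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two coders' categorical labels of the same episodes."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: proportion of episodes both coders labeled identically.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected chance agreement, from each coder's marginal label frequencies.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codes: one process label per episode (e.g., "IS" = information sharing).
coder1 = ["IS", "ET", "UT", "UN", "ET", "IS", "CT", "ET"]
coder2 = ["IS", "ET", "UT", "ET", "ET", "IS", "CT", "PD"]
print(round(cohens_kappa(coder1, coder2), 2))  # the study reports 0.71 on its own data
```

Kappa corrects the raw agreement rate for the agreement two coders would reach by chance given their label frequencies, which is why it is preferred to simple percentage agreement for coding-scheme reliability.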
The average of the two judges' coding results was used for further analysis, which aimed at: 1) identifying the major processes for building and inhibiting trusting beliefs; 2) examining the differences in the processes for different trusting beliefs; 3) examining, for each trusting belief, the differences between the trust-building and the trust-inhibiting processes; and 4) testing the predictive power of the major processes on consumers' trusting belief scores.

The top five major trust-building processes and top five major trust-inhibiting processes for each trusting belief are listed in tables 5-3 and 5-4. For each process, the total number of episodes related to trust formation across all participants was summed, and likewise the total number of episodes related to trust decline. For each trusting belief, the top five building (or inhibiting) processes cover from 76% to 92% of all building (or inhibiting) processes for that belief across all participants. Each of the other processes accounts for less than 6% of all building or inhibiting processes for a trusting belief and accordingly was not considered a major process.

Table 5-3 Major Trust-Building Processes (percentage of all building episodes for each belief*)
Competence: Information Sharing (25%); Utility Assessment (25%); Expectation Confirmation (25%); Prediction Process (10%); Transference Process (6%).
Benevolence: Expectation Confirmation (33%); Utility Assessment (14%); Calculative Process (11%); Information Sharing (10%); Control Process (9%).
Integrity: Expectation Confirmation (32%); Control Process (12%); Prediction Process (12%); Information Sharing (10%); Calculative Process (10%).
*: The percentage = (total number of episodes in which a particular process contributed to the formation of a particular trusting belief, across all participants) / (total number of episodes related to the formation of that trusting belief, across all participants and all processes).

Table 5-4 Major Trust-Inhibiting Processes (percentage of all inhibiting episodes for each belief)
Competence: Expectation Disconfirmation (38%); Information Sharing (lack of) (21%); Dispositional Process (negative) (13%); Verification Process (lack of) (11%); Utility Assessment (negative) (10%).
Benevolence: Expectation Disconfirmation (27%); Dispositional Process (negative) (21%); Transference Process (negative) (12%); Calculative Process (negative) (7%); "Unknown" Awareness (7%).
Integrity: Dispositional Process (negative) (27%); Expectation Disconfirmation (24%); "Unknown" Awareness (20%); Information Sharing (lack of) (8%); Calculative Process (negative) (8%); Transference Process (negative) (7%).

χ² tests were used to examine the differential structure of the building and inhibiting processes across the different trusting beliefs. The tests were based on all 12 processes. The results reported in table 5-5 indicate that different trusting beliefs are cultivated via different processes: according to table 5-3, only the information sharing and expectation confirmation processes appear among the top five building processes for all three trusting beliefs. Likewise, different trusting beliefs are inhibited via different processes: according to table 5-4, only the expectation disconfirmation and dispositional processes are among the top five inhibiting processes for all three trusting beliefs.

Table 5-5 χ² Tests Comparing Processes for Different Trusting Beliefs
Building: χ²(df = 22) = 61.7, p < .0001
Inhibiting: χ²(df = 22) = 44.3, p = 0.003

χ² tests based on all 12 processes were also used to examine the differences between the building and inhibiting processes for each trusting belief. The results in table 5-6 indicate that, for each trusting belief, the building processes are statistically different from the inhibiting processes.

Table 5-6 χ² Tests Comparing Trust-Building vs. Trust-Inhibiting Processes
Competence: χ² = 40.3, p < .0001
Benevolence: χ² = 39.3, p < .0001
Integrity: χ² = 52.9, p < .0001
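As an illustration of the kind of test reported in tables 5-5 and 5-6, the following sketch computes a χ² test of independence over a 2 x 12 contingency table of process frequencies; the episode counts are hypothetical, not the study's data.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical episode counts per process (12 processes) for two distributions,
# e.g., trust-building vs. trust-inhibiting episodes for one trusting belief.
building   = np.array([2, 1, 5, 3, 4, 12, 0, 9, 4, 10, 4, 0])
inhibiting = np.array([4, 2, 1, 2, 1,  8, 3, 2, 1,  5, 2, 6])

table = np.vstack([building, inhibiting])  # 2 x 12 contingency table
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, df = {dof}, p = {p:.4f}")  # df = (2-1)*(12-1) = 11
```

A small p-value indicates that the distribution of episodes over the 12 processes differs between the two groups, which is the form of evidence the tables above report for beliefs (df = 22 when three beliefs are compared) and for building versus inhibiting (two groups per belief).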
To further examine the power of the major trust-building and trust-inhibiting processes in predicting consumers' trusting beliefs, a PLS analysis was performed. The unit of analysis is the individual participant. For each participant, the counts of the major trust-building processes and of the major trust-inhibiting processes were used as input. The multi-item trusting belief measures were modeled as reflective indicators of their corresponding latent constructs, which were used as the dependent variables in PLS. Three separate PLS analyses were conducted, one for each trusting belief.

PLS is especially suitable for exploratory studies in the theory-building stage (Barclay et al., 1995). PLS was also chosen because it does not require normally distributed data for estimating parameters: the number of episodes for any one process from any one participant is between 0 and 2 and is not normally distributed. The measurement items and measurement validation for the trusting beliefs are described in chapter 4.

The PLS results, including the significant path coefficients and significance levels, are shown in figures 5-1, 5-2, and 5-3 for competence, benevolence, and integrity, respectively. The results reveal the relative predictive power of the different processes on the trusting belief scores. The information sharing, expectation confirmation, and utility assessment processes significantly contribute to the formation of the competence belief, while expectation disconfirmation, lack of information sharing, negative utility assessment, and lack of verification lead to its decline. Expectation confirmation, the calculative process, utility assessment, and the control process significantly induce the benevolence belief, while only the "unknown" awareness process inhibits it. Expectation confirmation, the prediction process, and the dispositional process significantly increase the integrity belief, while the "unknown" awareness process, the negative calculative process, and the negative transference process significantly reduce it.

Figure 5-1 PLS Results for Competence Belief (R² = .41; *** p < .01, ** p < .05, bootstrap method; only significant paths are shown; processes in white boxes build trust while those in black boxes inhibit it)

Figure 5-2 PLS Results for Benevolence Belief (R² = .29; *** p < .01, ** p < .05, bootstrap method; only significant paths are shown; processes in white boxes build trust while those in black boxes inhibit it)

Figure 5-3 PLS Results for Integrity Belief (*** p < .01, ** p < .05, * p < .1, bootstrap method; only significant paths are shown; processes in white boxes build trust while those in black boxes inhibit it)

5.6 DISCUSSION

5.6.1 SUMMARY OF FINDINGS

Trust formation in online recommendation agents is an under-studied area. Applying both qualitative and quantitative analyses, this study explored the processes of consumers' initial trust formation in recommendation agents. Using a prior-research-driven approach, Komiak's (2003) trust formation scheme was revised in this study based on the process theories of initial trust formation (e.g., Lewicki and Bunker, 1995; McKnight et al., 1998) and on the written protocols collected here. The major processes for promoting and hampering consumers' trusting beliefs in online recommendation agents were identified. These major processes explained a significant portion of the variance in the trusting belief scores (see figures 5-1, 5-2, and 5-3).
The results reveal different formation processes for different trusting beliefs. Although some processes (e.g., expectation confirmation) are important for all three trusting beliefs, certain processes relate only to particular trusting beliefs. For example, the "unknown" awareness process mainly leads to the decline of benevolence and integrity, the information sharing process is most influential for the formation and reduction of the competence belief, and the dispositional process contributes only to inhibiting the integrity belief. These results indicate that consumers form different trusting beliefs in different ways and also lend support to the view that distinct trusting beliefs exist in the initial stage of trust formation (McKnight et al., 2002a).

The written protocols confirm that consumers' trust-building and trust-inhibiting processes co-exist in the initial formation stage (Cacioppo and Berntson, 1994; Cenfetelli, 2004). Some processes contribute to both trust building and trust inhibiting in online recommendation agents. For example, expectation confirmation increases the competence belief and expectation disconfirmation leads to its decline; utility assessment both facilitates and inhibits the competence belief. But many processes' impacts are asymmetric. For example, the calculative, expectation confirmation, utility assessment, and control processes only increase consumers' benevolence belief and do not decrease it. Conversely, the "unknown" awareness process only leads to benevolence decline and does not increase the benevolence belief.

5.6.2 LIMITATIONS AND FUTURE RESEARCH

In addition to the limitations discussed in chapter 3, this chapter has two further limitations. First, the experimental tasks did not involve real purchases, although we asked participants to treat the tasks as if they were real. Participants were only asked to decide which camera models to buy, with the support of a recommendation agent. Future research is needed to examine trust formation processes in a real shopping environment, where consumers' risk and trust perceptions might be stronger and, as a result, different processes might be involved.

Second, the written protocol method also has some limitations. We used retrospective protocols (i.e., protocols collected after the experimental tasks). The advantage of using written rather than verbal protocols is that the elicitation of textual narratives does not interfere with the ongoing consumer-agent interactions during the experimental tasks (Todd and Benbasat, 1987). But it requires participants to recall cognitive processes formed earlier, and participants may systematically reorganize their perceptions and cognitions (Todd and Benbasat, 1987). Additionally, the essay questions used to elicit the written protocols may have caused participants to be more self-reflective about the formation of their trust beliefs than usual (Gould, 1997). Nevertheless, because the most powerful and influential processes identified in the protocol analysis were further validated in the PLS analysis, the findings should be robust.

This research identified several trust formation processes that have been largely ignored in the literature.
The results show that process-based and knowledge-based mechanisms (e.g., expectation confirmation, utility assessment, information sharing, and "unknown" awareness) play important roles in trust formation in online recommendation agents. However, few studies have focused on these important processes, and they deserve future research.

5.6.3 THEORETICAL AND PRACTICAL IMPLICATIONS

This study makes significant contributions to research and practice. The main contribution to research is that it enriches trust theory by revealing the trust formation processes in recommendation agents for online shopping. Trust in online recommendation agents is an important issue in that it directly influences users' adoption of the agents (Komiak, 2003; see also chapter 4). Prior trust research mainly focused on interpersonal and organizational trust. This study bridges a gap in the understanding of trust formation in technological artifacts, including online recommendation agents.

Furthermore, this study identified an asymmetric structure of trust-building and trust-inhibiting processes. Cenfetelli (2004) has considered inhibitors and enablers to be dual factors in technology usage and has called for research into inhibiting perceptions. In interpersonal and organizational contexts, the fragile nature of initial trust has long been recognized (Lewicki and Bunker, 1995), and McKnight et al. (1998) discussed the conditions under which trust can be fragile or robust. To sustain consumers' trust in online recommendation agents, we need to focus not only on the trust-building processes but also on the processes that lead to trust deterioration and decline (Lewicki and Bunker, 1995).

This study has important practical implications for the design of trustworthy online recommendation agents. It reveals that, overall, the most important processes for trust building are expectation confirmation, utility assessment, and information sharing, while the most influential processes leading to trust decline are the "unknown" awareness process and expectation disconfirmation. Different processes can be engendered by different agent design features and functionalities.

Expectation confirmation. Understanding and meeting consumers' expectations facilitates trust building in online recommendation agents, and expectation disconfirmation inhibits consumers' trust. As deduced from the written protocols, participants with different levels of product expertise expect an agent to behave in different ways. Participants with a low level of expertise expected the agent to ask questions related to basic camera features, while those with a high level of expertise expected the agent to elicit their requirements on both basic and advanced features. Accordingly, agent-user dialogues should be personalized, with different types of questions and different levels of sophistication used for different segments of consumers (Russo, 2002).

A second solution is that recommendation agents should elicit the importance levels of consumers' different preferences and take them into account when making recommendations. These importance levels reflect consumers' expectations of the priority that the agents should give their preferences and that the recommended products should satisfy. Consumers expect to get recommendations that possess the most important features.
A third solution is that recommendation agents should be able to provide more product recommendations upon consumers' request. Consumers spend time and effort interacting with an agent and, in return, expect to see a certain number of recommendations. When few suitable products match consumers' stated preferences, consumers should be able to get more recommendations, even sub-optimal ones, or other products (e.g., best sellers) should be recommended (Aggarwal and Vaidyanathan, 2003). Our written protocols indicate that the expectation disconfirmation process occurred when no or very few recommendations were provided.

Information sharing. As discussed in chapter 3, this process can be prompted by the explanation facilities provided by recommendation agents. This chapter provides additional evidence regarding the impact of explanation facilities on consumers' trust formation in online recommendation agents. We compared the group of participants in the "with explanations" condition to those in the "without explanations" condition and found that when explanations are available, the number of episodes related to knowledge-based trust-building processes increased nearly four-fold (i.e., from 3.5 to 16.5), while the number of trust-inhibiting episodes dropped from 17 to only 1. This confirms the important role of explanation facilities in promoting knowledge-based trust-building processes and reducing trust-inhibiting processes. Accordingly, an agent should inform consumers of what it does, how it performs, and why it behaves in a certain way, so that consumers can better understand the agent's behavior; it should also provide consumers with the necessary knowledge and guidance so that they can make informed choices in the consumer-agent dialogues. Such explanations make the agent more open, understandable, and considerate in consumers' eyes. Additionally, consumers need detailed product information, as well as experts' and other consumers' comments on the recommended products. Such information provides a foundation for consumers to judge the agent's ability, intentions, and integrity, and facilitates their trust in the agent.

Utility assessment. One of the major benefits of using recommendation agents is facilitating consumers' decision-making. One weakness of current agent applications is that consumers' power to evaluate product recommendations is limited. One solution is to embed multimedia technologies into agent applications to facilitate consumers' product evaluation and preference expression. Using virtual reality technology together with recommendation agents enables consumers not only to express their preferences and get recommendations, but also to evaluate the recommended products and construct their preferences. This can greatly facilitate consumers' decision-making, and such an approach will deliver more utility and benefits for users. Jiang, Wang, and Benbasat (forthcoming) proposed a multimedia-based interactive advising technology that embeds the user-agent dialogues into virtual-reality-enabled product images.

"Unknown" awareness process. When consumers do not have sufficient interactions with, or information about, online recommendation agents, they will have difficulty judging the agents' trustworthiness, and the "unknown" awareness process will be invoked. The explanations mentioned earlier may help lessen such "unknown" awareness.
As deduced from the written protocols, feedback about an agent from other consumers can supply additional information about the agent. Providing a discussion forum about an agent can elicit such feedback. The forum should allow consumers to ask questions about the agent and get answers from other consumers, as well as to post comments about the agent to share with others. These additional functionalities help reduce the "unknown" awareness process and, consequently, prevent trust decline and deterioration.

By incorporating various agent features and capabilities that promote the trust-building processes and reduce the trust-inhibiting processes, recommendation services in online environments can be more effective and gain a higher chance of user adoption. Consumers' online shopping can be facilitated by an interface with a trustworthy recommendation agent that guides and directs customer choices, leading to more positive online shopping experiences (Grenci and Todd, 2002).

CHAPTER 6: IMPACT OF DECISION STRATEGY SUPPORT AND EXPLANATION FACILITIES ON TRUST AND ADOPTION OF ONLINE RECOMMENDATION AGENTS

6.1 Overview

Online recommendation agents help consumers reduce information overload and improve decision quality (e.g., Haubl and Trifts, 2000; Maes, 1994), but they also restrict consumers to using only those decision processes that are supported by the agents (Silver, 1990). Silver (e.g., 1988) introduced the concept of agent restrictiveness, which refers to the extent to which a recommendation agent limits its users' choices and decision-making processes. Agent restrictiveness has been suggested to be an important factor influencing user perceptions and evaluations of an agent (Silver, 1988; 1990; 1991b). When a consumer's desired decision strategies are not supported by an agent, the agent is perceived as restrictive, and the final recommendations may not fit the consumer's preferences. As a result, the consumer may form a negative perception of the agent and may not trust it. However, few studies have investigated the impact of agent restrictiveness on consumers' trust in and adoption of online recommendation agents.

In this chapter, the role of agent restrictiveness in influencing consumer trust and adoption of online recommendation agents is examined. Agent restrictiveness is manipulated through the level of decision strategy support provided by an agent. An agent with a high level of decision strategy support allows consumers to express their preferences for decision strategies and would be perceived as less restrictive. Further, to choose their decision strategies effectively, consumers should be able to understand the strategies supported by the agent and how the agent works (Beaulieu and Jones, 1998; Dhaliwal and Benbasat, 1996). Therefore, the role of agent transparency (i.e., the extent to which an agent can be easily understood) is also examined. Since explanation facilities have been suggested to be effective in increasing agent transparency (Gregor and Benbasat, 1999), this study examines how explanations help consumers best utilize the decision strategy support functionality.
In addition, utilizing the strategy support functionality and choosing among decision strategies lead to different levels of cognitive effort (Todd and Benbasat, 1999). According to the cognitive effort perspective in behavioral decision theory (e.g., Payne et al., 1993; Todd and Benbasat, 1992) and the notion of the Production Paradox (Carroll and Rosson, 1987), consumers' preferences for effort vs. decision accuracy play an important role in their use of an agent's capabilities, influencing both their evaluations and their acceptance of a recommendation agent. Consumers may not bother spending extra effort learning about an agent or trying its additional features. Thus, the role of cognitive effort is also examined.

A second laboratory experiment was conducted to investigate the above issues. The results confirm the presence of a significant impact of perceived agent restrictiveness on consumers' trust in recommendation agents, and reveal the relative effects of perceived agent restrictiveness, perceived agent transparency, and perceived cognitive effort on consumers' trust and intentions to adopt the agents. This study also shows that, to make the decision strategy support functionality effective, explanations should be provided and the cognitive effort involved in utilizing the decision strategy support should be minimal. Otherwise, the decision strategy support functionality may not be effectively utilized.

This chapter contributes to research by clarifying the roles of perceived agent restrictiveness, perceived agent transparency, and cognitive effort in influencing consumers' trust and adoption of online recommendation agents. Both the benefits and costs of the decision strategy support provided by online recommendation agents are examined, and the design implications are discussed.

The remainder of this chapter is organized as follows. Section 6.2 discusses the theoretical background for the current study. Section 6.3 develops hypotheses. The research method and the results of hypothesis testing are reported in sections 6.4 and 6.5, and the chapter concludes with a discussion of the results, limitations, and contributions of the study and some future research areas.

6.2 Theoretical Background

6.2.1 PREFERENTIAL CHOICE STRATEGIES

In this study, product recommendation agents are utilized to help consumers make choices in the context of a multialternative, multiattribute preferential online shopping task (Keeney and Raiffa, 1976). Such tasks require decision-makers to choose one product among a number of alternatives available in an online store, or across a variety of online stores. Each alternative is described by a common set of attributes. Svenson (1979) describes 12 strategies that are applicable to preferential choice problems. Among them, Additive Compensatory (AC) and Elimination by Aspects (EBA) are the two most commonly studied strategies (e.g., Jarvenpaa, 1989; Payne, 1976; Todd and Benbasat, 1999; 2000), and both are applicable to online recommendation agents (e.g., Pereira, 2000; Tan, 2003).

The AC strategy is a compensatory strategy. It is based on the evaluation of one alternative at a time with regard to all relevant attributes. Each attribute is assigned a weight indicating the importance level of the attribute. A score for each alternative is determined by summing the products of each attribute's transformed (i.e., normalized) value and its weight. Once the computations are completed for all alternatives, the one with the highest weighted total score is chosen. In applying the EBA strategy, all alternatives are evaluated along one attribute, and any alternatives that violate the threshold value for that attribute are eliminated. This process is repeated for each attribute until a single alternative remains. Under the EBA strategy, lower-value attributes are not compensated for by higher-value ones; hence, good alternatives might be eliminated prematurely. The AC strategy, in contrast, allows high-value attributes to compensate for low-value ones. Therefore, the decision quality of the AC strategy is superior to that of the EBA strategy, and the AC strategy is regarded as a normative strategy (e.g., Payne et al., 1993; Todd and Benbasat, 2000). A minimal computational sketch of the two strategies is given below.
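To make the two strategies concrete, the following is a minimal sketch; it is illustrative only, not the implementation used by the experimental agents, and the camera data, weights, and thresholds are invented for the example.

```python
# Minimal sketch of the AC and EBA choice strategies (illustrative only).

def additive_compensatory(alternatives, weights):
    """AC: score each alternative as the weighted sum of its normalized
    attribute values and return the one with the highest total score."""
    def score(alt):
        return sum(weights[attr] * alt[attr] for attr in weights)
    return max(alternatives, key=score)

def elimination_by_aspects(alternatives, thresholds):
    """EBA: process attributes one at a time (most important first) and
    eliminate every alternative that violates that attribute's threshold."""
    remaining = list(alternatives)
    for attribute, cutoff in thresholds:  # ordered by importance
        remaining = [alt for alt in remaining if alt[attribute] >= cutoff]
        if len(remaining) <= 1:           # may end with one, or none at all
            break
    return remaining

# Hypothetical cameras with attribute values normalized to [0, 1].
cameras = [
    {"name": "A", "zoom": 0.9, "resolution": 0.4, "price": 0.8},
    {"name": "B", "zoom": 0.5, "resolution": 0.9, "price": 0.6},
    {"name": "C", "zoom": 0.2, "resolution": 0.7, "price": 0.9},
]
weights = {"zoom": 0.5, "resolution": 0.3, "price": 0.2}
thresholds = [("zoom", 0.4), ("resolution", 0.5), ("price", 0.5)]

print(additive_compensatory(cameras, weights)["name"])                   # -> A
print([c["name"] for c in elimination_by_aspects(cameras, thresholds)])  # -> ['B']
```

Note how the EBA pass eliminates camera A, the AC winner, because its resolution falls below the threshold; this illustrates the premature elimination of good alternatives discussed above.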
6.2.2 THEORY OF SYSTEM RESTRICTIVENESS

The use of online recommendation agents restricts consumers to certain decision processes that are supported by an agent (Silver, 1990). In the context of DSS, Silver (1990) defines system restrictiveness as the extent to which, and the manner in which, a DSS limits its users' choices and decision-making processes. Since a DSS provides various operators (e.g., arithmetic functions) to facilitate users' decision-making, the restrictiveness of a DSS is determined by factors related to these operators. Silver (1991a) suggested that a DSS can limit decision-making processes by restricting the: 1) set of operators available, 2) inputs to the operators (data, parameters, and control values), 3) outputs from the operators (representations), 4) sequencing of operators, and 5) modification and creation of operators. In the context of needs-based recommendation agents, consumers interact with the agents via other means (e.g., agent-user dialogues) to express their needs and preferences. Accordingly, for online recommendation agents, the five ways of influencing agent restrictiveness mentioned above are adapted to: 1) questions in the dialogue, 2) possible replies to the questions, 3) outputs / recommendations after answering the questions, 4) sequencing of the questions, and 5) decision strategies.

Figure 6-1 DSS System Restrictiveness and Agent Strategy Restrictiveness: (a) DSS; (b) Recommendation Agents

This study focuses on the last aspect of restrictiveness, the one related to decision strategies. Based on the depiction of DSS restrictiveness in figure 6-1a (Silver, 1991a), the decision strategy restrictiveness of recommendation agents refers to the extent to which all possible decision strategies are supported by the agents (figure 6-1b). Silver (e.g., 1988; 1991b) suggested that the perceived restrictiveness of an agent varies from one user to another. Subjective restrictiveness is determined by how users perceive the set of possible and supported decision-making processes. When an agent provides support for certain decision-making processes, users who prefer these processes perceive the agent to be less restrictive than those who do not. Therefore, what matters for consumers' evaluations of an agent is the subjective restrictiveness of the agent.
Silver also discussed the level of restrictiveness that a system should offer (e.g., Silver, 1990). With excessive restrictiveness (i.e., a DSS provides very limited support for decision-making), most preferred decision processes are not supported by the DSS; hence, consumers may choose not to use the system. Conversely, with too little restrictiveness, a system may overwhelm users by presenting a large number of different capabilities (e.g., many strategy options and parameters), so that users may not be able to choose effectively among the many features. Therefore, although reducing agent restrictiveness has a positive influence on an agent's quality (Halloran et al., 1978; Nelson et al., 2005; Swanson and Ramiller, 1987; Wang and Strong, 1996), the agent should be carefully designed so that its capabilities can be effectively utilized by users. Factors to be considered include the understandability of an agent (e.g., Silver, 1990) and users' effort in using agent capabilities (e.g., Todd and Benbasat, 1999).

6.3 Hypotheses Development

This study compares three types of recommendation agents that implement different decision strategies. In addition to the AC-based and EBA-based agents, a new type of recommendation agent that uses a hybrid strategy is proposed (supporting both AC and EBA strategies, see section 6.3.1). Consumers' perceived restrictiveness of the three types of agents is hypothesized to differ (see section 6.3.2). The relative levels of perceived agent transparency and cognitive effort in using the three types of agents are also examined (see sections 6.3.3 and 6.3.4). Hypotheses are also presented to study the interaction effects between the explanation facilities and the decision strategy support on perceived restrictiveness and perceived transparency (see section 6.3.3). In addition, the impacts of these perceptions on trust and on the antecedents of user acceptance of an agent in TAM (i.e., PU and PEOU) are evaluated. Finally, the Trust-TAM model is tested (see section 6.3.5). The research model is depicted in figure 6-2.

Figure 6-2 Research Model (Note: +, -, and * indicate a positive, negative, and interaction relationship, respectively; 0 indicates no predicted significant correlation.)

6.3.1 RECOMMENDATION AGENTS WITH DIFFERENT LEVELS OF DECISION STRATEGY SUPPORT

As described in section 6.2, the AC-based and EBA-based recommendation agents (hereafter referred to as AC agents and EBA agents, respectively) each apply only one choice strategy. As implemented in Experiment 1, both the AC and EBA agents in this study utilized agent-user dialogues to elicit user needs and preferences. The agents first determine the product features or attribute levels inferred from consumer preferences and then evaluate the available products using the inferred product features or attribute levels (Russo, 2002). When an alternative does not have an inferred product feature or does not fit an inferred attribute level, the AC agents deduct from the alternative's fit score the product of the importance weight and the gap between the user requirement and the level that the alternative can satisfy²², while the EBA agents eliminate the alternative from further consideration.
This study proposes a new type of recommendation agent using both AC and EBA strategies, namely, the Hybrid agents. When answering each question in the agent-user dialogue to express preferences, consumers can choose whether or not the preference can be traded off against others. This was implemented by asking participants if their preference was essential or non-essential (see figure A6-1(a) in appendix 6-1). When participants chose "non-essential," an importance level was elicited on a 9-point scale (see figure A6-1(b) in appendix 6-1). For the set of "essential" preferences, the EBA strategy was applied. For the set of "non-essential" preferences, the AC strategy was applied.

²² The formula used to calculate the fit score is $\text{Fit Score} = 100 - \sum_{i=1}^{r} \left( \text{ImportanceWeight}_i \times \text{Attribute\_Needs\_Gap}_i \right)$, where $r$ is the number of questions in the agent-user dialogue. To simplify the calculation, three levels of Attribute_Needs_Gap were used. When a product attribute (or feature) satisfies user needs (or preferences), the Attribute_Needs_Gap equals 0. Otherwise, it equals either 1 or 2, depending on the size of the gap between the attribute level and the user's needs and preferences. For example, if a consumer prefers a small camera, the Attribute_Needs_Gap equals 1 for medium-size cameras and 2 for large cameras, while for small cameras it equals 0.
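The following is a minimal sketch of the Hybrid logic described above, combining an EBA pass over the "essential" preferences with the fit-score (AC) formula from footnote 22. The attribute coding, gap function, and data are assumptions made for illustration, not the experimental implementation.

```python
# Sketch of the Hybrid strategy: an EBA pass over "essential" preferences,
# then the AC fit score over "non-essential" ones (illustrative only).

def attribute_needs_gap(product, pref):
    """Gap of 0, 1, or 2 between the product's attribute level and the
    desired level, mirroring the three levels used in footnote 22."""
    return min(abs(product[pref["attribute"]] - pref["desired_level"]), 2)

def hybrid_recommend(products, preferences):
    # EBA step: eliminate products that miss any "essential" preference.
    survivors = [p for p in products
                 if all(attribute_needs_gap(p, pref) == 0
                        for pref in preferences if pref["essential"])]
    # AC step: fit score of 100 minus the weighted gaps over the
    # "non-essential" preferences.
    scored = [(100 - sum(pref["weight"] * attribute_needs_gap(p, pref)
                         for pref in preferences if not pref["essential"]), p)
              for p in survivors]
    return sorted(scored, key=lambda pair: pair[0], reverse=True)

# Hypothetical data: attribute levels coded as integers
# (e.g., camera size: 0 = small, 1 = medium, 2 = large).
products = [{"name": "X", "size": 0, "zoom": 2},
            {"name": "Y", "size": 1, "zoom": 2},
            {"name": "Z", "size": 0, "zoom": 0}]
preferences = [
    {"attribute": "zoom", "desired_level": 2, "essential": True},
    {"attribute": "size", "desired_level": 0, "essential": False, "weight": 9},
]
for fit_score, product in hybrid_recommend(products, preferences):
    print(product["name"], fit_score)   # X 100, then Y 91; Z is eliminated
```

In this example, camera Z is eliminated outright by the essential zoom preference, while the non-essential size preference merely lowers Y's fit score, showing how essential preferences act as EBA thresholds and non-essential ones as AC weights.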
6.3.2 ROLE OF AGENT STRATEGY RESTRICTIVENESS

6.3.2.1 IMPACT OF DECISION STRATEGY SUPPORT AND EXPLANATION FACILITIES ON PERCEIVED AGENT STRATEGY RESTRICTIVENESS

According to Silver's theory of system restrictiveness, users' perceptions of restrictiveness are determined by the extent to which their preferred decision processes are supported. The difference between the Hybrid agents and the two other types (AC agents and EBA agents) is that the Hybrid agents support both AC and EBA strategies while the other two support only one strategy. As suggested by Todd and Benbasat (2000), "in practice, individuals typically employ hybrid strategies as opposed to pure approaches" (p. 96). This is especially true in online shopping environments, where a large number of alternatives are available in an online store or across a variety of stores. Applying the EBA strategy can easily reduce the number of alternatives, but frequently a null recommendation set is obtained, as good alternatives might be eliminated prematurely (Tan, 2003). On the other hand, the AC strategy can produce a list of good alternatives with a fit score for each, but may present too many alternatives to consumers. Hybrid agents are thus desirable because the appropriate use of both AC and EBA strategies not only reduces the number of alternatives but also retains the alternatives with high overall quality.

Silver (1988) investigated user perceptions of the system restrictiveness of DSS with different types and amounts of supporting operators (e.g., Data Sort) that users can choose from and control. Silver compared three types of DSS, namely, "Lotus 1-2-3," the "Multi-Attribute-Software System" (MASS), and the "Elimination by Aspector." The "Elimination by Aspector" applied a single choice rule (i.e., "elimination by aspects"), which is a subset of the decision rules supported by MASS, while MASS applied a subset of the choice rules supported by "Lotus 1-2-3." Silver found that users' perceptions of restrictiveness were overall lower when more decision rules were supported by a DSS. Similarly, by supporting both EBA and AC strategies that consumers can choose from and control, Hybrid agents will be perceived as less restrictive than either AC agents or EBA agents. For AC agents and EBA agents, consumers' perceived restrictiveness should be at a similar level, because each implements one strategy. Therefore, we hypothesize that:

H6-1a: Hybrid agents will be perceived to be less restrictive than AC agents.

H6-1b: Hybrid agents will be perceived to be less restrictive than EBA agents.

H6-1c: AC agents will be perceived to be as restrictive as EBA agents.

In addition to the availability of different strategies supported by recommendation agents, consumers should understand the agents' decision strategy support capabilities before they can use them fully (Silver, 1988). Hybrid agents provide consumers with control over the agent strategies rather than pre-determining them, as AC and EBA agents do. To exercise this control effectively, consumers need a good mental model of the agents, and the agent features should be clearly explained (Beaulieu and Jones, 1998). If consumers cannot understand the differences between the strategies provided by a Hybrid agent, or how the different strategies are used by the agent, they may not choose the strategies they want to employ. Therefore, the perceived restrictiveness of a Hybrid agent depends on whether or not consumers understand the range of strategies that are supported. Without explanations, consumers may not realize the strategy options provided by a Hybrid agent, and thus their perceived restrictiveness levels for Hybrid, AC, and EBA agents will be similar. Therefore,

H6-2a: With explanation use, Hybrid agents will be perceived to be less restrictive than AC or EBA agents; AC agents will be perceived to be as restrictive as EBA agents.

H6-2b: Without explanations, Hybrid agents will be perceived to be as restrictive as AC and EBA agents.

6.3.2.2 IMPACT OF PERCEIVED STRATEGY RESTRICTIVENESS ON TRUST AND PERCEIVED USEFULNESS OF AGENTS

Consumers' perceived restrictiveness of an agent influences their attitudes, beliefs, and other perceptions toward the agent (Silver, 1991a). This study focuses on the two major antecedents of user adoption of online recommendation agents in Trust-TAM: 1) trust in the agents and 2) perceived usefulness (PU) of the agents.

Online recommendation agents with low perceived strategy restrictiveness allow consumers to express their decision strategy preferences more freely. Thus, consumers perceive such agents to be similar to themselves in terms of the decision strategies that can be utilized. Prior research shows that similarity plays an important role in persuasion, and that people who are perceived to be similar to their audience are more influential in changing attitudes and beliefs (Komiak, 2003; Miller, 1984; Simons et al., 1970). Earle and Cvetkovich (1995) and Stewart (2003) also suggested that people tend to trust others who are similar to themselves. In the context of human-computer interaction, Nass and Lee's (2001) results demonstrate that users trust a computer more if they perceive that its personality is similar to their own.
Therefore, reducing the perceptions of agent restrictiveness is posited to enhance consumers' trust in an agent.

The perception of low agent restrictiveness can also facilitate consumers' trust in a recommendation agent via two other mechanisms: 1) empowering consumer control over the agent, and 2) confirming consumers' expectations (Komiak, 2003; Komiak et al., forthcoming). In an organizational context, Luhmann (1979) and Das and Teng (1998) proposed a theory of the relationship between trust and control. User control has an impact on trust because, with proper control mechanisms, the perceived level of uncertainty is reduced (Luhmann, 1979) and "the attainment of desirable goals becomes more predictable" (Das and Teng, 1998, p. 493). Whitener et al. (1998) also found that, in the organizational context, "participation in decision-making and delegating control are key components of trustworthy behavior" (p. 517). Low agent restrictiveness means that consumers have more control over the agent by deciding which strategy should be employed; thus, it facilitates consumers' trust in an agent.

Expectation confirmation means that when an agent behaves in the way consumers expect, trust in the agent increases (Komiak et al., forthcoming). Low agent restrictiveness, i.e., an agent's ability to adapt its decision strategies according to the wishes of consumers, enables the agent to find suitable products in a way that is expected by the consumer; thus, it increases consumers' trust. Therefore, it is hypothesized that:

H6-3: Perceived agent restrictiveness will negatively influence consumers' trust in online recommendation agents.

Perceived agent restrictiveness can also influence other antecedents in TAM, such as the PU of an agent. The PU of an agent is determined in part by the utility that consumers can derive from the use of the agent (Davis, 1989). The utility of using a recommendation agent comes from: 1) narrowing down the alternatives so as to reduce the consumer's effort in examining them (Tan, 2003), and 2) finding good recommendations that are based on consumer preferences, including their needs and strategy preferences (Komiak, 2003). As mentioned earlier, by using both the EBA and AC strategies, an agent can effectively reduce the number of alternatives and also retain the good alternatives. Lower agent restrictiveness provides consumers with an opportunity to express their strategy preferences more freely. As a result, the recommendations from the agent will better cater to their needs and preferences and the final recommendations will be more suitable; hence, the agent will be perceived as more useful. Therefore, it is hypothesized that:

H6-4: Perceived agent restrictiveness will negatively influence consumers' PU of online recommendation agents.

6.3.3 ROLE OF PERCEIVED AGENT TRANSPARENCY

Providing a high level of decision strategy support allows consumers to make choices among decision strategies and reduces their perceptions of agent restrictiveness. Nevertheless, reducing agent restrictiveness can be a double-edged sword (Silver, 1990). If too many choices and functionalities are provided by an agent, the agent becomes complicated to use, and it is more difficult for consumers to understand how it works (Silver, 1990).
6.3.3.1 IMPACT OF DECISION STRATEGY SUPPORT AND EXPLANATION FACILITIES ON PERCEIVED AGENT TRANSPARENCY

The notion of system transparency has been mentioned in many previous studies (Gregor and Benbasat, 1999). According to the Merriam-Webster Dictionary, transparency means being easily detected, readily understood, or seen through. Sinha and Swearingen (2001) surveyed the transparency levels of various online music recommender systems. They found that users perceived different systems to be very different in system transparency and that their confidence in and preference for a system were closely related to their perceptions of its transparency.

In an earlier study, Montgomery and Benbasat (1983) suggested that the sophistication of a computer system influences its transparency. If users can easily understand the process of recommendation generation, they will perceive the system as more transparent (Herlocker et al., 2000). A sophisticated agent is difficult to understand, and is thus less transparent to users. Increased agent functionality (e.g., choosing among different decision strategies rather than using a single pre-determined strategy) may make it more difficult for consumers to understand the recommendation-making processes, as the strategies employed by the agents become more complex; this will reduce agent transparency.

To compensate for these side effects of high decision strategy support, explanations are used to increase agent transparency (Gregor and Benbasat, 1999; Herlocker et al., 2000; Muramatsu and Pratt, 2001). As examined in chapter 3, explanation use increases consumers' trust in recommendation agents, though the mediating role of transparency has not been investigated. Explanations inform agent users about, for example, what the system does, how it works, and why its actions are appropriate (Gregor and Benbasat, 1999), and they facilitate consumers' understanding of an agent. Therefore, it is hypothesized that:

H6-5: Use of explanations will increase consumers' perceived agent transparency of online recommendation agents.

With these considerations in mind, if a Hybrid agent does not provide explanations, it will be perceived to be less transparent than the AC or EBA agents, while AC agents will be perceived to be as transparent as the EBA agents, since both types employ one decision strategy. Moreover, when the decision strategies deployed by Hybrid agents include both EBA and AC strategies, agent behaviors and motivations can be well explained with a set of appropriate explanations (Herlocker et al., 2000), and thus the recommendations will be easily understood. For complex systems, consumers will use explanations for learning and understanding purposes (Gregor and Benbasat, 1999). As a result, with explanation facilities, agent transparency can be maintained. That is, regardless of whether a single strategy or a hybrid strategy is employed, perceived agent transparency will remain high when the agent provides consumers with detailed explanations. An interaction will occur between explanation facilities and the agent decision strategy support. Therefore,

H6-6a: With explanation use, Hybrid agents will be perceived to be as transparent as AC agents or EBA agents.
H6-6b: Without explanations, Hybrid agents will be perceived to be less transparent than AC agents or EBA agents; AC agents will be perceived to be as transparent as EBA agents.

6.3.3.2 IMPACT OF PERCEIVED AGENT TRANSPARENCY ON TRUST

A transparent agent means that consumers can easily understand and see through the agent and its recommendations. In so doing, consumers can be relieved of the concerns related to information asymmetry that would otherwise hamper their trust in the agent, as discussed in chapter 3. Sinha and Swearingen's (2002) survey results indicate that users' high perceptions of agent transparency increase their confidence in the agents. The transparency and user understanding of a medium are important for gaining user trust in the medium (Lee and Turban, 2001). Based on observations from informal interviews and conversations with users of a variety of interactive systems, Nickerson (1999) and Hertzum et al. (2002) found that the transparency of a system facilitates users' conceptualization of the system and thus increases user trust in it. Therefore, it is hypothesized that:

H6-7: Perceived agent transparency will positively influence consumer trust in online recommendation agents.

6.3.4 ROLE OF COGNITIVE EFFORT

The joint use of decision strategy support and explanation facilities is predicted to reduce agent restrictiveness while keeping transparency levels high. Nevertheless, the use of different features of an agent requires cognitive effort, which is an important determinant of consumer behavior (e.g., Todd and Benbasat, 1992). According to the Production Paradox (Carroll and Rosson, 1987), end-users would like to accomplish as much work as possible and may not bother spending extra effort learning about an agent or trying any advanced features. From the cognitive effort perspective (e.g., Todd and Benbasat, 1992), the conservation of effort is more important than an increase in decision quality. Consumers' cognitive cost plays an important role in their use of agent functionalities, and in their evaluations and acceptance of the recommendation agents.

6.3.4.1 IMPACT OF DECISION STRATEGY SUPPORT AND EXPLANATION FACILITIES ON COGNITIVE EFFORT

Silver (1990) suggested that a DSS that presents many different features and options requires extra cognitive effort. Similarly, the functionalities supported by a Hybrid agent (i.e., selecting and switching between different decision strategies) will consume extra effort. Beaulieu and Jones (1998) suggest that shifting the balance of control towards users requires consumers to make deliberate decisions to exercise their control over choices, thus increasing their cognitive effort in using an agent. Therefore, it is posited that consumers' cognitive effort will be higher in using the Hybrid agents, compared to the AC or EBA agents, which do not require users to choose from different strategies.

To estimate the levels of effort required to use the three types of agents, we conducted Natural GOMS Language (NGOMSL) analyses as devised by Kieras (1997). NGOMSL analyses are widely applied to predict the execution time of using a computer system.
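For intuition, the following is a keystroke-level sketch of how such execution-time predictions are assembled from primitive operators. The operator times follow common GOMS conventions, and the per-agent operator counts are hypothetical, chosen only to roughly reproduce the relative differences reported below; the actual analyses are in appendix 6-2.

```python
# Keystroke-level sketch of an NGOMSL-style execution-time prediction
# (illustrative; the real operator counts are in appendix 6-2).
OPERATOR_TIME = {
    "K": 0.28,  # press a key or button (seconds)
    "P": 1.10,  # point with the mouse
    "M": 1.35,  # mental preparation
}

def execution_time(operator_counts):
    """Predicted execution time for a task, given operator counts."""
    return sum(OPERATOR_TIME[op] * n for op, n in operator_counts.items())

# Hypothetical operator counts for answering the 11 dialogue questions.
# AC agents add importance-weight selections; Hybrid agents further add
# an essential vs. non-essential choice per question.
tasks = {
    "EBA":    {"K": 11, "P": 11, "M": 11},
    "AC":     {"K": 22, "P": 13, "M": 12},
    "Hybrid": {"K": 24, "P": 16, "M": 13},
}
for name, counts in tasks.items():
    print(name, round(execution_time(counts), 1), "s")
# EBA ~30.0 s, AC ~36.7 s, Hybrid ~41.9 s: with these assumed counts the
# Hybrid agent takes roughly 14% longer than AC and 39% longer than EBA.
```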
The NGOMSL analyses show that, prior to examining recommendations, Hybrid agents require about 15% longer execution time than the AC agents and about 41% longer execution time than the EBA agents, assuming all questions in the agent-user dialogues are answered by users and the agents are used only once (see appendix 6-2 for details).

Consumers also need to spend effort in utilizing the explanation facilities. In the context of learning a word-processing system, Carroll and Kay (1985) found that excessive advice and explanations consumed a high amount of the user's cognitive effort. Todd and Benbasat (1992; 1994b; 1999; 2000) also noted the cognitive effort needed to understand and use a decision aid with explanations. The use of explanations will thus increase the user's cognitive effort.²³

²³ The use of explanations was not considered in the NGOMSL analyses because it is difficult to estimate the time and effort involved in using the explanations, compared with other operators (e.g., moving the mouse).

Frequently used objective measures for cognitive effort, after using an agent, are the consideration set size and the decision time (Haubl and Trifts, 2000; Tan, 2003; Todd and Benbasat, 1992). A consideration set is defined as the set of alternatives that a consumer considers seriously before decision-making (Hauser and Wernerfelt, 1990). It is an indicator of the effort that a consumer spends in examining the recommendations from an agent. Haubl and Trifts (2000) found that the consideration set size is smaller when consumers use a recommendation agent. When explanations are provided, consumers can more easily judge the recommendations from an agent and will be more confident in the recommendations. Accordingly, users' consideration set sizes will be smaller and decision times shorter when explanation facilities are provided. Therefore, it is hypothesized that:

H6-8: The use of explanations reduces consumers' consideration set size when using recommendation agents (H6-8a: Hybrid agents, H6-8b: AC agents, H6-8c: EBA agents).

H6-9: The use of explanations reduces consumers' decision time when using recommendation agents (H6-9a: Hybrid agents, H6-9b: AC agents, H6-9c: EBA agents).

In addition, when consumers utilize a Hybrid agent, which can better represent their decision strategy preferences, their consideration set size and decision time will be reduced, since the recommendations would better reflect their preferences and they would be more confident in the agent's recommendations. Nevertheless, as previously stated, to obtain these benefits, consumers must first understand the agents and their capabilities. Only when explanations are provided can the consideration set size become smaller and decision times shorter. As mentioned earlier, while the EBA strategy eliminates product alternatives more easily than the AC strategy does, it may also eliminate good alternatives; the AC strategy may thus produce better recommendations. Although AC agents may provide more recommendations than the EBA agents, consumers' consideration set sizes as well as decision times will be similar in using these two agents.
Therefore, we hypothesize:

H6-10a: With explanation use, the consideration set size of consumers using Hybrid agents will be smaller than that of consumers using AC agents or EBA agents; the consideration set size of consumers using AC agents will be similar to that of consumers using EBA agents.

H6-10b: Without explanations, the consideration set size of consumers using Hybrid agents will be similar to that of consumers using AC agents or EBA agents.

H6-11a: With explanation use, the decision time of consumers using Hybrid agents will be less than that of consumers using AC agents or EBA agents; the decision time of consumers using AC agents will be similar to that of consumers using EBA agents.

H6-11b: Without explanations, the decision time of consumers using Hybrid agents will be similar to that of consumers using AC agents or EBA agents.

Considering both the savings in cognitive effort due to the smaller consideration set size when explanations are provided in the Hybrid agents, and the extra cognitive effort needed to utilize the additional features provided by Hybrid agents, consumers' perceived overall cognitive effort should be the same for the three types of agents (Hybrid, AC, and EBA agents). Without explanations, however, consumers may still expend effort in utilizing the additional features of a Hybrid agent, but cannot save any effort by reducing their consideration set size. Consequently, without explanations, consumers' perceived cognitive effort is greater with the Hybrid agents than with the AC or EBA agents. In comparing the AC and EBA agents, the NGOMSL analyses show that AC agents require more effort to use than EBA agents (see appendix 6-2). Still, the EBA agents may produce an empty set of recommendations due to the elimination strategy. Consumers thus may have to use the EBA agents several times to refine their preferences and arrive at recommendations. As a result, the perceived cognitive effort in using these two agents will be similar. Therefore,

H6-12a: With explanation use, the perceived cognitive effort of consumers using Hybrid agents will be the same as that of consumers using AC agents or EBA agents.

H6-12b: Without explanations, the perceived cognitive effort of consumers using Hybrid agents will be greater than that of consumers using AC or EBA agents; the perceived cognitive effort of consumers using AC agents will be similar to that of consumers using EBA agents.

Similarly, although the use of explanations requires cognitive effort, considering both the effort spent in using the explanations and the reduced effort in examining fewer recommendations, the use of explanations should not influence consumers' perceived cognitive effort. Therefore,

H6-13: Use of explanations will not influence consumers' perceived cognitive effort in using recommendation agents (H6-13a: Hybrid agents, H6-13b: AC agents, H6-13c: EBA agents).

6.3.4.2 IMPACT OF COGNITIVE EFFORT ON PERCEIVED EASE-OF-USE OF AGENTS

The major impact of perceived cognitive effort is on the perceived ease-of-use (PEOU) of an agent, which is determined in part by how much effort consumers must spend in using the recommendation agents (Davis, 1989; Gefen et al., 2003b).
Therefore, it is hypothesized:

H6-14: Perceived cognitive effort (H6-14a), consideration set size (H6-14b), and decision time (H6-14c) will negatively influence consumers' PEOU of recommendation agents.

6.3.5 TRUST-TAM

The left part of the research model (see figure 6-2) examines the impact of decision strategy support and explanation facilities on perceived agent restrictiveness, perceived agent transparency, consideration set size, decision time, and perceived cognitive effort, as well as the effects of these variables on the antecedents in Trust-TAM (i.e., trust, PU, and PEOU). This section lists the hypotheses for the right side of the research model, which replicates the model investigated in chapter 4.

H6-15: PU of recommendation agents will positively affect consumers' intentions to adopt the agents.

H6-16: PEOU of recommendation agents will positively affect consumers' intentions to adopt the agents.

H6-17: PEOU of recommendation agents will positively affect PU of the agents.

H6-18: Trust in recommendation agents will positively affect consumers' intentions to adopt the agents.

H6-19: Trust in recommendation agents will positively affect PU of the agents.

H6-20: PEOU of recommendation agents will positively affect consumers' trust in the agents.

6.4 Research Method

A laboratory experiment was conducted to test the hypotheses listed in section 6.3. Section 6.4.1 describes the experimental design and treatments. Control variables are introduced in section 6.4.2, and the dependent variables and the development of several new measures are introduced in sections 6.4.3 and 6.4.4. Section 6.4.5 describes the participants, and section 6.4.6 presents the experimental procedures.

6.4.1 INDEPENDENT VARIABLES AND EXPERIMENTAL DESIGN

The two main independent variables are 1) decision strategy support (as implemented in the three types of agents with different support levels: Hybrid, AC, and EBA agents) and 2) use of explanations. The experimental agent from Experiment 1 was expanded to incorporate the experimental treatments for this experiment. Some questions in the agent-user dialogue were updated based on comments from the pilot tests for this study and from digital camera experts. In total, there were 11 questions in the agent-user dialogue. The product database was also updated to include digital camera models that were available on the market at the time of the experiment.

A three (agent type: Hybrid, AC, and EBA agents) × two (explanations: with and without) between-subject factorial design was used (see table 6-1). This design allows for the determination of the relative impacts of the different levels of decision strategy support and of the explanation facilities, individually and in interaction with each other.

Table 6-1 3 × 2 Full Factorial Experimental Design

                         Types of Agent (Different Decision Strategy Support)
                         Hybrid Agent    AC Agent    EBA Agent
Explanations   With      Group 1         Group 2     Group 3
               Without   Group 4         Group 5     Group 6

Explanation facilities were studied in chapter 3. Three types of explanations (i.e., how explanations, why explanations, and guidance) were found to be effective in enhancing trust in online recommendation agents.
These three types of explanations were also used in this study, but the content of the explanations was adjusted according to the changes in the agents' decision strategies. As in Experiment 1, explanations were also provided for each of the final recommendations. In addition to the three types of explanations for each question in the agent-user dialogues, guidance for choosing different strategies was provided for the Hybrid agents with explanations.

To assess the face validity and definitional accuracy of the updated explanations that were incorporated into the agent prototype, a pilot test similar to the one in Experiment 1 was conducted. Eight graduate students majoring in Management Information Systems (MIS) were asked to classify each of the explanations examined in this study into one of the three types (i.e., how explanations, why explanations, and guidance) or none of them. In addition, to gauge the certainty of their judgments, participants were asked to indicate on a five-point scale to what extent the explanation fit well with the definition of the type of explanation chosen. A validation example is provided in figure 6-3.

Figure 6-3 A Validation Example (the form presents an explanation for a question in the agent-user dialogue and asks the judge to read it carefully, classify it as a how explanation, why explanation, guidance, or none, provide comments on the explanation, and then rate, on a five-point scale from "Not sure/guess" to "Very sure," agreement with the statement "This explanation fits well with the definition of the type of explanation I chose")

The validation results are reported in table 6-2. Most of the explanations were correctly classified, and the certainty levels were high (average scores on a five-point scale: 4.6, 4.3, and 4.7 for how explanations, why explanations, and guidance, respectively). The explanations thus appeared to be consistent with their definitions. The suggestions made during the pilot test regarding clarity of wording were incorporated into the explanations used for the main experiment.

Table 6-2 Explanation Validation Results (columns: intended explanation type; rows: type chosen by the judges)

                      How explanations    Why explanations    Guidance
How explanations      96.6%               0.0%                2.3%
Why explanations      0.0%                81.8%               2.3%
Guidance              3.4%                18.2%               95.5%
None                  0.0%                0.0%                0.0%
Total                 100%                100%                100%

Table 6-3 provides examples of the finalized how explanations, why explanations, and guidance for a question in the agent-user dialogue. The guidance for the strategy choice, shown for all questions, is as follows: I will use different approaches to make recommendations based on your choice of "non-essential" or "essential" preference. If you want me to recommend only those cameras that exactly satisfy your desired choice to this question, please select the "essential preference."
" On the contrary, if you want me to recommend all cameras that fit your overall preferences quite well but might not exactly satisfy your desired choice to this question, please select the "non-essential preference. " Please be advised that choosing "essentialpreference" or very high importance levels in the "non-essential preference" will significantly reduce the number of recommendations that I could provide. " -128-Chapter 6: Decision Strategy Support & Explanation Facilities Table 6-3 Examples of how explanations, why explanations, and guidance A Question in the Agent-User Dialogue How far will you be from most of the subjects that you photograph? 1) . Immediate vicinity 2) . A moderate distance or less 3) . Far away, in addition to immediate and moderate vicinity How Explanation Your distance from the subjects you want to focus on most often will determine the suitable zoom level of a digital camera. If you choose "non-essential preference", cameras with your desired optical zoom level will be given higher priority in my recommendations; if you choose "essential preference", I will only recommend cameras with your desired optical zoom level. Specifically, the four options will determine the following zoom levels: 1) 2X optical zoom and below. 2) Between 2X and 5X optical zoom. 3) 4X optical zoom and above. Why Explanation The purpose for asking this question is to know what kinds of photos you will take most often. It is quite useful to take photos at different distances. For example, for portraits of family and friends, your subjects may be close to your camera, but for many scenery or artistic photos, the subjects may be far from your camera. Guidance Most digital cameras can take pictures beyond the immediate vicinity. However, cameras capable of taking pictures from very far away will be more expensive. As well, your choices will be more limited (only about 20% of cameras can focus on distant objects). Hence, be careful not to over-estimate your needs. Appendix 6-1 provides screenshots of agent interfaces in different experimental conditions. A sample question in the agent-user dialogue of the Hybrid agent with explanations is presented in figure A6- l (a ) . When the "non-essential preference" button (i.e., A C strategy) was clicked, participants were prompted to indicate an importance weight (figure A6-l (b)) . Alternatively, participants could choose the "essential preference," indicating an E B A strategy (figure A6- l (c) ) . For the agents in the "with" explanation conditions, three buttons (i.e., a "How" button for how explanations, a "Why" button for why explanations, and a "Guidance" -129-Chapter 6: Decision Strategy Support & Explanation Facilities button for guidance) were provided for each question in the agent-user dialogue"14. When one of the buttons is clicked, the corresponding explanation is shown in the area below the "Explanation / Guidance" icon (figure A6- l (d ) in appendix 6-1). For the Hybrid agent with explanations, underneath the choice of "non-essential preference" or "essential preference," a button with the text of "Guidance on Non-Essential vs. Essential" is provided. Cl icking the button shows the content of the guidance for choosing different strategies (figure A6-l(e)) . The sample question pages for the A C and E B A agents are shown in figure A 6 -1(f) and A6- l (g ) , respectively. The final recommendation pages for the Hybrid and A C agents with a fit score for each recommended product are shown in figure A6- l (h ) . 
Figure A6-1(i) shows the interface when an explanation for a recommendation is clicked and presented. For the EBA agents, the final recommendation page is shown in figure A6-1(j), which does not have fit scores for the recommendations. For the "without" explanation conditions, the interfaces are similar to those of the corresponding "with" explanation conditions, except that the "How", "Why", "Guidance", and "Guidance on Non-Essential vs. Essential" buttons are not provided.

²⁴ Note that this is different from Experiment 1, where the three types of explanations were manipulated in a 2 × 2 × 2 factorial design.

6.4.2 CONTROL VARIABLES

All control variables used in Experiment 1 were captured again, including consumers' trust propensity, product expertise, and preferences for effort-saving vs. decision quality. In addition, another control variable, product involvement, was included. Product involvement is defined as a person's motivational state (i.e., arousal, interest, drive) towards an object, based on the relevance or importance of a product class (Koufaris, 2002; Zaichkowsky, 1985; 1994). Product involvement has been shown to influence various consumer behaviors, including search behavior, information processing, and the extensiveness of the decision-making process (e.g., Celsi and Olson, 1988; Dholakia, 2001). A consumer who is very involved in a product is likely to spend more time and effort in deciding what to buy. On the other hand, a consumer who does not consider a purchase decision to be important, and who is only marginally involved in the product, will make the choice without much deliberation or cognitive effort. Therefore, product involvement is expected to influence consumers' willingness to spend effort and the extent to which they use the agent and its embedded features. As a result, the effectiveness of the agent will be influenced.

6.4.3 DEPENDENT VARIABLES

Among the various dependent variables investigated in this study, PU, PEOU, trust, intentions to adopt recommendation agents, and perceived cognitive effort have well-established multi-item measures in the literature. Measures for PU and PEOU were adapted from Koufaris (2002) and Davis (1989). Measures for intentions to adopt recommendation agents, perceived cognitive effort, and trusting beliefs were adapted from Venkatesh (2000; 2003), Pereira (2000), and McKnight et al. (2002a), respectively.

Two variables, consideration set size and decision time, used objective measures. As in prior studies (e.g., Haubl and Trifts, 2000; Tan, 2003), consideration set size was operationalized as the number of products that were examined, by clicking the detailed product information links, before participants made their choice. Participants' navigation screens during the experiment were recorded unobtrusively as videos by screen capture software (Camtasia Recorder 3.0), and the numbers of products examined by participants were counted by reviewing the recorded videos. The times spent by participants on the experimental tasks (decision times) were recorded on experimental minutes sheets by the research assistant.

Based on Moore and Benbasat (1991), new measures were developed for two dependent variables: 1) perceived agent restrictiveness (perceived strategy restrictiveness, in particular) and 2) perceived agent transparency.
Silver (1988) investigated how the users of different types of DSS differ in their perceptions of system restrictiveness. Nevertheless, Silver did not directly measure user perceptions of restrictiveness, but asked participants to provide the relative positions of different DSS by ranking their perceptions of restrictiveness for these systems. Given the between-subject design used in this experiment, Silver's approach was not applicable, and new measures were developed. The development process and results are reported in section 6.4.4.

Perceived agent transparency was measured by Sinha and Swearingen (2002) using a single item: "Do you understand why the system recommended this item to you?" Since it is difficult to verify the reliability and validity of a variable measured by a single item, multi-item measures were developed in this study.

For the control variables, the trust propensity measures were adapted from McKnight et al. (2002a). Four sub-constructs were included in the trust propensity construct: 1) competence, 2) benevolence, 3) integrity, and 4) trusting stance. For product involvement, the enduring product involvement for the product category of digital cameras was measured, using Zaichkowsky's (1985; 1994) simplified 10-item measure, which is widely applied in consumer behavior studies. Measures for participants' preferences for effort-saving vs. decision quality were adapted from Komiak (2003). Participants' product knowledge was measured in both subjective and objective ways: Flynn and Goldsmith's (1999) subjective measures for product knowledge were used, and the objective measures were based on those of Komiak (2003). The measurement items for all dependent and control variables are provided in appendix 6-3.

6.4.4 MEASUREMENT DEVELOPMENT

To develop measures for perceived strategy restrictiveness and perceived agent transparency, we followed the approach developed by Moore and Benbasat (1991). The starting point for the scale development was the previous empirical and theoretical literature. We could not find well-developed measures, however, and so began by generating a sample of measurement items with the help of four participants (two graduate students, one from the Department of Psychology and the other from the Faculty of Science, and two assistant professors in MIS). These participants were provided with definitions of the two constructs from prior studies and dictionaries and were asked to suggest measurement items. Then, to further check content validity, the instrument was submitted to a panel of graduate students majoring in MIS to obtain their views on which items were appropriate to include. This procedure generated seven items for perceived strategy restrictiveness and another seven items for perceived agent transparency.

A card sorting exercise (Moore and Benbasat, 1991) was next used in the scale development process. Each item was printed on a 3 × 5-inch index card. The three trusting belief measures that were validated in McKnight et al. (2002a) were adapted and shuffled into a random order together with the newly created measures for presentation to the judges. Two rounds of this exercise were conducted.
In the first round, four master's and doctoral students were asked to sort the items into separate categories, based on the similarities and differences among the items, and then to label the underlying construct for each of their categories. The average inter-judge raw agreement rate was 73%. The judges also provided comments on ambiguous or unclear items. In this process, only one item of perceived strategy restrictiveness was identified as being too ambiguous. All other items were refined according to the judges' comments and retained for the next sorting round. In the second round, another four master's and doctoral students were asked to sort the refined and retained items based on the construct definitions. A "too ambiguous/doesn't fit" category was also included so that the judges were not forced to fit any item into a particular predefined category. This round of sorting ended with an average raw agreement rate of 95%, indicating a very high reliability (Moore and Benbasat, 1991). This process also helped establish the discriminant validity of the items. The resulting measures for perceived strategy restrictiveness and perceived agent transparency contained six and seven items (appendix 6-3), respectively. These measures were further validated in the main experiment, and the validation results are reported in section 6.5.3.

6.4.5 PARTICIPANTS

Participants in this experiment were recruited campus-wide from the University of British Columbia. The majority were university students. The sample size was determined by the estimated effect size, the intended significance level, and the desired statistical power. As in most behavioral studies, we chose 0.05 as the significance level and 0.8 as the level of statistical power (Cohen, 1988). Based on the effect sizes from Experiment 1, the effect size of the impact of explanation use on trust was assumed to be around medium. To our knowledge, no prior studies have empirically studied the impact of decision strategy support or the interaction between explanation use and decision strategy support, thus a medium effect size was assumed (Cohen, 1988, p. 390). Based on these criteria, 26 participants per cell (156 participants in total) were needed for this study.

To reduce potential evaluation bias, we planned to screen out those experimental volunteers who owned or had purchased digital cameras before, as we had done in Experiment 1. Nevertheless, we found that many student volunteers had digital cameras, and hence it would be difficult to recruit a sufficient number of participants if they were all screened out. Instead, in the online form used to sign up for the experiment, an additional question was asked to elicit volunteers' expertise levels: "How much do you know about digital cameras?"²⁵ A seven-point scale was used, ranging from "Not at all" (1) to "Expert" (7). In addition to the volunteers who did not own and had not purchased a digital camera before, we invited those having expertise levels below 4 on the 7-point scale. Given that our experimental agent platform had a limited number of cameras in its database, this screening process helps reduce the influence of this limitation on participants' evaluations of the agent. In addition, this screening process is valid in that those who own or have purchased digital cameras and have high expertise in digital cameras may not need advice on what to buy, and thus do not belong to the population of interest.

²⁵ This measure was employed to screen out subjects. It differs from product expertise, which was used as a control variable in the experiment and was measured by multiple items (appendix 6-3).
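For reference, a minimal sketch of the a priori power analysis behind the sample-size estimate above, using statsmodels. The thesis cites Cohen's (1988) tables; the assumption here that power is computed for the three-level agent-type factor is ours.

```python
# A priori power analysis: total N for comparing the three agent types
# at a medium effect size (Cohen's f = 0.25), alpha = .05, power = .80.
from statsmodels.stats.power import FTestAnovaPower

n_total = FTestAnovaPower().solve_power(
    effect_size=0.25,  # medium effect (Cohen's f)
    alpha=0.05,        # significance level
    power=0.80,        # statistical power
    k_groups=3,        # Hybrid vs. AC vs. EBA agents
)
print(round(n_total))      # roughly 156-160 participants in total
print(round(n_total / 6))  # ~26 per cell of the 3 x 2 design
```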
6.4.6 EXPERIMENTAL TASKS AND PROCEDURES

Shopping tasks similar to those in Experiment 1 were used in this study. Participants were asked to complete two tasks. One was to choose a digital camera for a good friend as a wedding gift, with the cost to be shared with four other friends. The other was to select a digital camera for a close family member. Participants were informed that the tasks were flexible and that they could make as many assumptions as they wished.

All participants were randomly assigned to one of the treatment groups. Each participant was guaranteed monetary compensation ($15) for participating. To motivate participants to view the experiment as a serious online shopping session and to increase their involvement, participants were informed before the experimental tasks that 25% of them could receive an additional award of $5 to $100 based on their performance. They were also informed that they would be asked to provide justifications for their shopping choices and that their performance would be judged based on how convincing those justifications were in supporting their choices.

The experimental procedures were the same as those in Experiment 1. A research assistant first trained participants on how to use and navigate the assigned Web interface, using recorded videos that showed a tutorial agent with the same features as the experimental agent for that condition. Then, each participant was asked to complete the two tasks described above. The order of the two tasks was counter-balanced. After each task, participants were directed to an online form to write down their choice and their justifications for it. The tasks had no time limit. Finally, after performing the two tasks, participants were asked to complete a questionnaire that included the measures of the dependent variables.

6.5 Data Analysis and Results

This section begins by reporting demographic data about the participants in the experiment. Manipulation check results are reported in section 6.5.2. A multivariate analysis of covariance (MANCOVA) was conducted as the starting point to test whether or not the experimental treatments have an overall direct impact on the dependent variables (i.e., perceived strategy restrictiveness, perceived agent transparency, perceived cognitive effort, decision time, and consideration set size) (Hair et al., 1998), with follow-up analyses of covariance (ANCOVA) performed on the individual variables (section 6.5.4). A Partial Least Squares (PLS) analysis, as implemented in PLS-Graph 3.00 (Chin, 2001), was used to assess the measurement properties of the dependent variables and the structural model.²⁶ The measurement properties of the dependent variables are presented in section 6.5.3, and the results of the structural model are reported in section 6.5.5.

PLS was chosen over LISREL for two main reasons (Barclay et al., 1995). First, to our knowledge, this is the first study to test the role of perceived strategy restrictiveness and perceived agent transparency in building consumers' trust in agents; thus, the focus is on theory development rather than theory testing.
Second, the sample size (N=156) is sufficient for a PLS analysis, whereas LISREL would require a larger sample, given the complexity of our structural model and the many paths that must be estimated.

6.5.1 DEMOGRAPHIC DATA

Table 6-4 outlines the characteristics of the participants who volunteered for the experiment. The majority were undergraduate students, generally around 20 years of age, which matches Internet user demographic data (Johnson, 2005). Overall, most participants were experienced Internet users. About two-thirds were male and one-third were female.

Table 6-4 Demographic Data

                                           # of Participants   Percentage
Gender
  Male                                     106                 67.9%
  Female                                   50                  32.1%
Age
  18-20                                    91                  58.3%
  21-25                                    54                  34.6%
  26-30                                    5                   3.2%
  31-35                                    4                   2.6%
  36-40                                    0                   0
  41 and above                             2                   1.3%
Internet Use Experience
  Less than 1 year                         1                   0.6%
  Between 1 and 3 years                    3                   1.9%
  Between 3 and 6 years                    68                  43.6%
  Longer than 6 years                      84                  53.8%
Year in School
  Freshman                                 31                  19.9%
  Sophomore                                50                  32.1%
  Junior                                   34                  21.8%
  Senior                                   29                  18.6%
  Graduate                                 5                   3.2%
  Other                                    7                   4.5%
Internet Usage
  Less than 30 minutes per day             3                   1.9%
  Between 30 minutes and 1 hour per day    36                  23.1%
  Between 1 and 2 hours per day            39                  25.0%
  More than 2 hours per day                78                  50.0%

In the background questionnaire, participants were asked about their attitudes towards computers and web-shopping risk. Their comfort levels with the Internet and online shopping were also assessed. No significant differences on any of these variables were found between the subjects randomly assigned to the different experimental groups. Likewise, no significant differences were found between groups with respect to participants' age, gender, Internet usage, and Internet experience.
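The sketch below shows the kind of randomization check reported above, testing whether a background variable (here, gender) is distributed evenly across the experimental groups; the file and column names are hypothetical.

    # Chi-square test of independence between experimental group and a
    # categorical background variable.
    import pandas as pd
    from scipy.stats import chi2_contingency

    df = pd.read_csv("participants.csv")            # hypothetical data file
    table = pd.crosstab(df["group"], df["gender"])  # counts per group x category
    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.3f}")  # p > .05 -> groups comparable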
6.5.2 MANIPULATION CHECKS

Manipulation checks were conducted for the two experimental treatments. To utilize the decision strategy in the Hybrid agents, participants were required to choose either "non-essential preference" (i.e., the AC strategy) or "essential preference" (i.e., the EBA strategy) for each question, after deciding to reply to that question. Participants were given a reminder message in the "explanations/guidance" window if they forgot to make the choice. If the "non-essential preference" was chosen, participants were required to indicate an importance weight, and a reminder message was likewise given if they forgot to do so.

Table 6-5 reports the distribution of strategy choices made by participants in the two Hybrid agent conditions (i.e., one "with" explanations and the other "without" explanations). Because participants were allowed to change and refine their choices, the distributions based on initial choices and on final choices are summarized separately. On average, about 40% of participants' choices were "essential preference," while the remaining choices were "non-essential." The results show that most subjects utilized both the EBA strategy and the AC strategy, indicating that a Hybrid strategy was employed. Therefore, our implementation of the Hybrid agents is considered to be successful.

Table 6-5 Distribution of Strategy Choice for the Hybrid Agents

% of "Essential Preference" chosen    # of participants (%), Hybrid agent    # of participants (%), Hybrid agent
(% of "Non-essential Preference"      "with" explanations                    "without" explanations
chosen)                               Initial Choice    Final Choice         Initial Choice    Final Choice
100% (0%)                             2 (7.6%)          2 (7.6%)             1 (3.8%)          0 (0%)
75.0%-99.9% (0.1%-25.0%)              0 (0%)            0 (0%)               1 (3.8%)          1 (3.8%)
50.0%-74.9% (25.1%-50.0%)             6 (23.1%)         5 (19.2%)            8 (30.8%)         9 (34.6%)
25.0%-49.9% (50.1%-75.0%)             10 (38.5%)        8 (30.8%)            11 (42.3%)        11 (42.3%)
0.1%-24.9% (75.1%-99.9%)              5 (19.2%)         8 (30.8%)            4 (15.4%)         4 (15.4%)
0% (100%)                             3 (11.5%)         3 (11.5%)            1 (3.8%)          1 (3.8%)
Average % of "Essential Preference"   40.1%             35.7%                42.1%             40.8%
Average % of "Non-Essential
Preference"                           59.9%             64.3%                57.9%             59.2%

For the AC agents, participants were required to indicate an importance weight after deciding to reply to a question, and were reminded if they forgot to do so. The strategy was then executed by the agent. For the EBA agents, the EBA strategy was executed by the agent after participants answered the questions in the agent-user dialogue.

Explanation use was counted by reviewing the videos recorded during the experiment, using counting procedures similar to those in Experiment 1. On average, 31% of the how explanations, 22% of the why explanations, and 25% of the guidance items were viewed by participants. The average usage rates are similar to those in other empirical studies (e.g., Dhaliwal, 1993), but lower than those in Experiment 1. One possible reason is that in Experiment 1, many participants in the "with" explanation conditions were provided with only one or two types of explanations, whereas in this experiment, all participants in the "with" explanation conditions were provided with all three types. When all three types were available, consumers might have been more selective in choosing explanations to view, lowering the usage rates. The detailed frequency distributions of explanation use are reported in table 6-6. Overall, the three types of explanations in the experimental agents were used extensively, and the experimental manipulations were quite successful.

Table 6-6 Frequency Distributions of Explanation Use

% of explanations used (pc)    How explanations     Why explanations     Guidance
                               (# of subjects)      (# of subjects)      (# of subjects)
pc = 0 (a)                     5                    5                    9
0 < pc < 10%                   17                   25                   26
10% < pc < 30%                 21                   20                   12
30% < pc < 50%                 19                   16                   16
50% < pc < 70%                 13                   4                    4
pc > 70%                       2                    2                    5
Total                          72 (b)               72                   72

(a) Some participants did not view any explanations of a particular type, and two participants did not view any explanations of any of the three types. Dropping these two participants did not change the hypothesis testing results; therefore, all participants were retained in the current analysis.
(b) In total, 78 participants were assigned to the "with" explanations conditions, but for 6 of them the navigation and screens were not successfully recorded. Therefore, only 72 participants' explanation use rates were analyzed.
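For readers who wish to reproduce such a tabulation, the sketch below bins per-participant usage rates into the categories of table 6-6, assuming a hypothetical log with counts of explanations available and viewed per participant.

    # Bin per-participant how-explanation usage rates into table 6-6 categories.
    import pandas as pd

    log = pd.read_csv("explanation_log.csv")        # hypothetical per-participant log
    rate = log["how_viewed"] / log["how_available"] # proportion of explanations viewed

    bins = [-0.001, 0.0, 0.10, 0.30, 0.50, 0.70, 1.0]
    labels = ["pc = 0", "0 < pc <= 10%", "10% < pc <= 30%",
              "30% < pc <= 50%", "50% < pc <= 70%", "pc > 70%"]
    print(pd.cut(rate, bins=bins, labels=labels).value_counts().sort_index())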
6.5.3 MEASUREMENT MODEL

All dependent variables, except for the three objective measures (i.e., consideration set size, decision time, and recommendation adoption), were modeled as reflective constructs. Descriptive data for the dependent variables are shown in table 6-7. Following Barclay et al.'s (1995) approach to testing measurement models in PLS, we examined: 1) individual item reliability; 2) internal consistency; and 3) discriminant validity.

Individual item reliability is examined through the loadings of the measures on their corresponding constructs (table 6-9). Most of the loadings exceed 0.7, complying with the common rule of thumb for good item reliability. The loadings of some items for perceived strategy restrictiveness and perceived agent transparency were between 0.56 and 0.70. As suggested by Barclay et al. (1995), this is not uncommon for newly developed measures, and one should retain as many items as possible unless an item shows significant deviations. Further examination revealed that these items have good content validity and that their loadings are not very low, so all items were retained at this stage.

Table 6-7 Means (Standard Deviations) of Dependent Variables

Explanations  Agent    PSR          PAT          PCE          CMPT         BNVL         INTG
With          Hybrid   3.62 (1.09)  5.25 (0.95)  2.71 (1.21)  4.80 (1.22)  4.60 (1.03)  5.02 (0.84)
              AC       3.79 (0.90)  5.22 (0.98)  2.47 (1.02)  4.99 (0.78)  4.60 (0.96)  4.76 (0.81)
              EBA      4.69 (0.89)  5.08 (0.99)  3.45 (1.21)  3.57 (1.48)  4.10 (1.37)  4.88 (1.06)
              Total    4.03 (1.06)  5.18 (0.96)  2.84 (1.20)  4.45 (1.34)  4.44 (1.15)  4.89 (0.90)
Without       Hybrid   4.29 (1.04)  3.87 (1.34)  3.31 (1.29)  3.87 (1.54)  3.67 (1.07)  4.20 (0.93)
              AC       3.98 (0.90)  4.49 (0.81)  2.39 (0.94)  4.64 (0.98)  4.17 (1.10)  4.55 (0.77)
              EBA      4.38 (0.82)  4.38 (1.30)  3.15 (1.37)  3.62 (1.63)  3.77 (1.19)  4.39 (0.71)
              Total    4.22 (0.93)  4.25 (1.20)  2.95 (1.27)  4.04 (1.46)  3.87 (1.13)  4.38 (0.81)
Total         Hybrid   3.95 (1.11)  4.56 (1.35)  3.01 (1.28)  4.33 (1.46)  4.14 (1.14)  4.61 (0.97)
              AC       3.88 (0.90)  4.86 (0.96)  2.43 (0.97)  4.81 (0.89)  4.39 (1.05)  4.65 (0.79)
              EBA      4.54 (0.86)  4.73 (1.20)  3.25 (1.29)  3.59 (1.54)  3.94 (1.28)  4.63 (0.93)
              Total    4.12 (1.00)  4.72 (1.18)  2.90 (1.23)  4.25 (1.41)  4.15 (1.17)  4.63 (0.89)

PSR = Perceived Strategy Restrictiveness; PAT = Perceived Agent Transparency; PCE = Perceived Cognitive Effort; CMPT = Competence belief in agents; BNVL = Benevolence belief in agents; INTG = Integrity belief in agents.
Table 6-7 Means (Standard Deviations) of Dependent Variables (cont'd)

Explanations  Agent    PU           PEOU         INTN         SIZE (a)       TIME (a,b)     RCMA (a)
With          Hybrid   4.99 (1.45)  5.59 (0.77)  4.59 (1.76)  9.20 (7.10)    35.81 (14.72)  1.71 (0.57)
              AC       5.23 (0.98)  5.79 (0.76)  4.56 (1.47)  17.23 (15.09)  38.38 (14.88)  1.69 (0.43)
              EBA      4.29 (1.44)  5.28 (1.01)  3.64 (2.04)  14.60 (10.81)  32.19 (10.00)  1.02 (0.71)
              Total    4.84 (1.35)  5.56 (0.87)  4.26 (1.80)  13.72 (11.86)  35.46 (13.47)  1.48 (0.66)
Without       Hybrid   4.34 (1.65)  5.35 (1.26)  3.78 (1.82)  17.42 (21.77)  32.38 (12.78)  1.44 (0.75)
              AC       5.00 (0.84)  5.38 (0.92)  4.69 (1.60)  16.58 (9.72)   35.58 (17.42)  1.40 (0.58)
              EBA      4.12 (1.50)  5.02 (1.08)  3.51 (1.69)  15.29 (19.01)  28.81 (14.35)  1.23 (0.81)
              Total    4.49 (1.41)  5.25 (1.09)  4.00 (1.76)  16.46 (13.35)  32.26 (15.04)  1.36 (0.71)
Total         Hybrid   4.67 (1.57)  5.47 (1.04)  4.19 (1.82)  13.39 (16.69)  34.10 (13.76)  1.58 (0.67)
              AC       5.11 (0.91)  5.59 (0.86)  4.63 (1.52)  16.90 (12.57)  36.98 (16.11)  1.55 (0.53)
              EBA      4.21 (1.46)  5.15 (1.05)  3.58 (1.85)  14.94 (15.22)  30.50 (12.37)  1.12 (0.76)
              Total    4.66 (1.39)  5.41 (1.00)  4.13 (1.78)  15.09 (14.88)  33.86 (14.32)  1.42 (0.68)

PU = PU of Agents; PEOU = PEOU of Agents; INTN = Intentions to Adopt Agents; SIZE = Consideration Set Size; TIME = Decision Time; RCMA = Recommendation Adoption.
(a) These three are objective measures. (b) Decision time is measured in minutes.

Internal consistency was examined using the composite reliability developed by Fornell and Larcker (1981), a reliability measure similar to Cronbach's alpha. Both composite reliability and Cronbach's alpha are reported in table 6-8. The benchmark for acceptable reliability is 0.7; all constructs met this criterion, indicating that the measures have good internal consistency.

In PLS, discriminant validity can be examined by two criteria. The first is that a construct should share more variance with its own measures than with the other constructs in the model. The Average Variance Extracted (AVE) suggested by Fornell and Larcker (1981) was used: the square root of a construct's AVE should be greater than that construct's correlations with the other constructs (Barclay et al., 1995). Table 6-8 shows the square roots of the AVEs and the correlations between constructs; the results met this criterion. The second criterion is that no item should load more highly on a construct other than the one it is intended to measure. The loadings and cross-loadings of the measures are shown in table 6-9. An examination of the matrix reveals that all items except one item for perceived cognitive effort (PCE2) satisfied this criterion. Therefore, PCE2 was dropped from further analysis, and all other items were retained.

Table 6-8 Internal Consistencies, AVEs, and Correlations of Constructs

                                       Alpha  IC    1        2        3        4        5        6        7        8        9        10       11      12
1. Perceived Strategy Restrictiveness  0.73   0.82  0.66
2. Perceived Agent Transparency        0.92   0.93  -0.36**  0.80
3. Perceived Cognitive Effort          0.91   0.91  0.51**   -0.58**  0.79
4. Competence belief in agents         0.91   0.95  -0.52**  0.56**   -0.73**  0.91
5. Benevolence belief in agents        0.72   0.88  -0.40**  0.54**   -0.71**  0.67**   0.81
6. Integrity belief in agents          0.82   0.93  -0.59**  0.59**   -0.73**  0.81**   0.57**   0.88
7. PU of Agents                        0.94   0.84  -0.44**  0.52**   -0.52**  0.64**   0.50**   0.70**   0.80
8. PEOU of Agents                      0.84   0.87  -0.21**  0.59**   -0.38**  0.46**   0.45**   0.47**   0.61**   0.79
9. Intentions to Adopt Agents          0.97   0.99  -0.54**  0.49**   -0.64**  0.84**   0.55**   0.76**   0.58**   0.43**   0.98
10. Consideration Set Size (a)         -      -     0.21**   -0.43**  0.41**   -0.39**  -0.29**  -0.41**  -0.32**  -0.26**  -0.35**  --
11. Decision Time (a)                  -      -     0.00     -0.16    0.21**   -0.14    -0.08    -0.09    -0.15    -0.04    -0.09    0.51**   --
12. Recommendation Adoption (a)        -      -     -0.32**  0.36**   -0.40**  0.42**   0.33**   0.49**   0.19*    0.15     0.33**   -0.51**  -0.17*  --

IC = internal consistency (composite reliability).
(a) These three constructs are objective measures, each with a single item.
(b) The diagonal entries are the square roots of the AVEs; the lower triangle contains the correlations between constructs.
** Correlation is significant at the 0.01 level (2-tailed). * Correlation is significant at the 0.05 level (2-tailed).
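For concreteness, the sketch below computes composite reliability, AVE, and the square root of AVE used in the Fornell and Larcker (1981) criterion from a vector of standardized loadings; the loading values are illustrative, not the study's data.

    # Composite reliability and AVE from standardized loadings
    # (Fornell and Larcker, 1981).
    import numpy as np

    loadings = np.array([0.85, 0.80, 0.78, 0.72])   # illustrative item loadings
    errors = 1 - loadings**2                        # error variance per item

    cr = loadings.sum()**2 / (loadings.sum()**2 + errors.sum())
    ave = (loadings**2).sum() / ((loadings**2).sum() + errors.sum())

    # Discriminant validity check: sqrt(AVE) should exceed the construct's
    # correlations with every other construct.
    print(f"CR = {cr:.2f}, AVE = {ave:.2f}, sqrt(AVE) = {np.sqrt(ave):.2f}")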
Table 6-9 Loadings and Cross-Loadings of Measures

        PSR     PAT     PCE     PU      PEOU    CMPT    BNVL    INTG    INTN    SIZE    TIME    RCMA
PSR1    0.592   -0.285  0.393   -0.317  -0.306  -0.346  -0.204  -0.185  -0.367  0.159   -0.023  -0.252
PSR2    0.719   -0.221  0.394   -0.397  -0.326  -0.390  -0.352  -0.031  -0.427  0.177   0.072   -0.159
PSR3    0.758   -0.398  0.391   -0.468  -0.345  -0.466  -0.393  -0.307  -0.435  0.317   0.144   -0.259
PSR4    0.560   -0.114  0.271   -0.216  -0.167  -0.289  -0.206  -0.063  -0.243  -0.042  -0.124  -0.161
PSR5    0.623   -0.111  0.248   -0.254  -0.175  -0.348  -0.158  -0.054  -0.253  0.000   -0.083  -0.244
PSR6    0.691   -0.211  0.304   -0.340  -0.227  -0.447  -0.341  -0.115  -0.366  0.101   -0.061  -0.199
PAT1    -0.333  0.759   -0.445  0.427   0.333   0.512   0.452   0.389   0.434   -0.340  -0.060  0.254
PAT2    -0.265  0.805   -0.418  0.407   0.441   0.378   0.384   0.449   0.316   -0.273  -0.108  0.171
PAT3    -0.238  0.592   -0.413  0.317   0.356   0.312   0.188   0.302   0.274   -0.340  -0.144  0.240
PAT4    -0.331  0.880   -0.561  0.553   0.561   0.578   0.547   0.547   0.467   -0.369  -0.121  0.340
PAT5    -0.249  0.860   -0.448  0.439   0.400   0.418   0.398   0.508   0.376   -0.346  -0.186  0.309
PAT6    -0.256  0.845   -0.505  0.518   0.490   0.552   0.451   0.523   0.423   -0.412  -0.161  0.364
PAT7    -0.329  0.837   -0.476  0.445   0.443   0.482   0.422   0.533   0.431   -0.367  -0.113  0.335
PCE1    0.554   -0.502  0.809   -0.635  -0.538  -0.736  -0.468  -0.339  -0.634  0.402   0.137   -0.349
PCE2    0.464   -0.525  0.757   -0.748  -0.575  -0.782  -0.593  -0.411  -0.678  0.388   0.113   -0.494
PCE3    0.252   -0.337  0.744   -0.447  -0.480  -0.399  -0.235  -0.161  -0.354  0.214   0.188   -0.233
PCE4    0.420   -0.479  0.803   -0.641  -0.606  -0.598  -0.446  -0.264  -0.542  0.493   0.212   -0.425
PCE5    0.422   -0.451  0.888   -0.570  -0.611  -0.536  -0.353  -0.291  -0.454  0.238   0.216   -0.246
PCE6    0.249   -0.433  0.699   -0.348  -0.520  -0.314  -0.302  -0.283  -0.281  0.144   0.139   -0.087
PU1     -0.415  0.487   -0.593  0.900   0.632   0.657   0.550   0.436   0.728   -0.301  -0.064  0.327
PU2     -0.500  0.502   -0.673  0.922   0.645   0.733   0.550   0.410   0.790   -0.362  -0.161  0.359
PU3     -0.477  0.492   -0.631  0.914   0.586   0.702   0.632   0.395   0.736   -0.371  -0.196  0.349
PU4     -0.488  0.556   -0.747  0.896   0.565   0.847   0.600   0.427   0.783   -0.399  -0.096  0.495
PEOU1   -0.266  0.345   -0.508  0.423   0.777   0.345   0.314   0.378   0.394   -0.089  0.014   0.184
PEOU2   -0.334  0.567   -0.562  0.536   0.808   0.525   0.441   0.405   0.434   -0.386  -0.085  0.319
PEOU3   -0.342  0.394   -0.555  0.663   0.806   0.533   0.454   0.373   0.548   -0.244  -0.079  0.270
PEOU4   -0.342  0.433   -0.661  0.500   0.834   0.414   0.375   0.284   0.388   -0.205  -0.109  0.277
CPA1    -0.548  0.532   -0.668  0.738   0.524   0.917   0.643   0.418   0.732   -0.355  -0.057  0.430
CPA2    -0.518  0.519   -0.687  0.729   0.540   0.918   0.633   0.412   0.683   -0.391  -0.097  0.464
CPA3    -0.505  0.598   -0.712  0.808   0.581   0.939   0.669   0.428   0.766   -0.414  -0.153  0.504
CPA4    -0.409  0.391   -0.454  0.549   0.325   0.720   0.488   0.427   0.428   -0.250  0.010   0.317
BNA1    -0.356  0.471   -0.499  0.591   0.476   0.631   0.850   0.518   0.538   -0.327  -0.142  0.203
BNA2    -0.411  0.486   -0.448  0.530   0.412   0.606   0.818   0.452   0.464   -0.315  -0.149  0.244
BNA3    -0.255  0.250   -0.242  0.386   0.266   0.391   0.728   0.518   0.357   -0.051  -0.060  -0.049
ITA1    -0.080  0.494   -0.376  0.365   0.382   0.326   0.461   0.842   0.361   -0.195  -0.080  0.089
ITA2    -0.186  0.447   -0.277  0.336   0.351   0.329   0.419   0.810   0.290   -0.222  -0.064  0.132
ITA3    -0.150  0.474   -0.267  0.358   0.353   0.438   0.483   0.794   0.316   -0.187  0.045   0.176
ITA4    -0.241  0.450   -0.269  0.396   0.331   0.409   0.568   0.726   0.388   -0.231  -0.031  0.080
AAI1    -0.548  0.481   -0.629  0.825   0.535   0.761   0.581   0.413   0.971   -0.338  -0.089  0.331
AAI2    -0.531  0.498   -0.629  0.814   0.560   0.732   0.577   0.441   0.983   -0.347  -0.115  0.307
AAI3    -0.505  0.467   -0.610  0.819   0.529   0.733   0.539   0.404   0.981   -0.330  -0.071  0.324
SIZE1   0.210   -0.434  0.406   -0.394  -0.293  -0.409  -0.322  -0.262  -0.346  1.000   0.510   -0.511
TIME1   0.004   -0.155  0.213   -0.142  -0.084  -0.093  -0.155  -0.041  -0.094  0.510   1.000   -0.171
RCMA1   -0.319  0.363   -0.400  0.422   0.330   0.494   0.195   0.149   0.328   -0.511  -0.171  1.000
6.5.4 ANCOVA RESULTS

Since we predicted direct impacts of the experimental treatments on multiple dependent variables (i.e., perceived strategy restrictiveness, perceived agent transparency, perceived cognitive effort, decision time, and consideration set size), MANCOVA was applied first to test whether the experimental treatments have an overall impact on these dependent variables (Hair et al., 1998). This study also tested several control variables, as described in section 6.4.2. Since only the four dispositional trust beliefs (competence, benevolence, integrity, and trusting stance) were significantly correlated with some of these dependent variables, only these four controls were included in the MANCOVA, and in the ANCOVAs thereafter. The MANCOVA results are presented in table 6-10. Overall significant main effects of both explanation use and decision strategy support were revealed, which allows us to examine the impact of explanation use and decision strategy support on the individual dependent variables via ANCOVA.

Table 6-10 MANCOVA Results

                                              Wilks' Lambda   DF        F-Value   P-Value
Explanation Use                               0.781           5, 138    7.72      <0.001
Decision Strategy Support                     0.748           10, 276   4.30      <0.001
Explanation * Decision Strategy Support       0.944           10, 276   0.80      0.63
Dispositional Trust - Competence (DTC)        0.947           5, 138    1.54      0.18
Dispositional Trust - Benevolence (DTB)       0.966           5, 138    0.98      0.43
Dispositional Trust - Integrity (DTI)         0.992           5, 138    0.23      0.95
Dispositional Trust - Trusting Stance (DTT)   0.942           5, 138    1.71      0.14
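The sketch below illustrates the MANCOVA-then-ANCOVA sequence described above using statsmodels formulas; the file and variable names are hypothetical stand-ins for the study's data, and this is not the software actually used in the thesis.

    # Overall multivariate test followed by a univariate ANCOVA.
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols
    from statsmodels.multivariate.manova import MANOVA

    df = pd.read_csv("experiment2.csv")  # hypothetical data file

    # MANCOVA: five DVs, two treatments plus their interaction, four covariates.
    mv = MANOVA.from_formula(
        "PSR + PAT + PCE + TIME + SIZE ~ C(expl) * C(strategy) + DTC + DTB + DTI + DTT",
        data=df)
    print(mv.mv_test())  # Wilks' lambda and related statistics per effect

    # Follow-up ANCOVA on one dependent variable.
    model = ols("PSR ~ C(expl) * C(strategy) + DTC + DTB + DTI + DTT", data=df).fit()
    print(sm.stats.anova_lm(model, typ=3))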
6.5.4.1 PERCEIVED STRATEGY RESTRICTIVENESS

The ANCOVA results for the impact on perceived strategy restrictiveness are shown in table 6-11. Decision strategy support exerts a significant, negative impact on consumers' perceived strategy restrictiveness of online recommendation agents. To compare the three types of agents, a Scheffe test for multiple comparisons was conducted (table 6-12). The results indicate that the differences between Hybrid agents and EBA agents and between AC agents and EBA agents are significant, while the difference between Hybrid agents and AC agents is not. Therefore, hypothesis H6-1(b) is supported while H6-1(a) and H6-1(c) are not.

Table 6-11 ANCOVA Results (DV: Perceived Strategy Restrictiveness)

Source                                           DF    Sum of Squares   Mean Square   F       p-value
Explanations                                     1     0.86             0.86          0.97    0.33
Decision Strategy Support                        2     15.40            7.70          8.68    <0.001
Explanations * Decision Strategy Support         2     6.58             3.29          3.71    0.03
Trust Propensity - Competence (covariate)        1     2.92             2.92          3.29    0.072
Trust Propensity - Benevolence (covariate)       1     <0.01            <0.01         <0.01   0.943
Trust Propensity - Integrity (covariate)         1     <0.01            <0.01         <0.01   0.960
Trust Propensity - Trusting Stance (covariate)   1     0.36             0.36          0.41    0.525
Error                                            146   129.52           0.88

Table 6-12 Scheffe Comparisons for Perceived Strategy Restrictiveness

Group A        Group B        Mean Difference (A-B)   Significance
Hybrid Agent   AC Agent       0.07                    0.94
               EBA Agent      -0.58                   0.008
AC Agent       Hybrid Agent   -0.07                   0.94
               EBA Agent      -0.65                   0.003

Table 6-11 also shows a significant interaction effect between decision strategy support and explanation use. Figure 6-4 compares the perceived strategy restrictiveness levels of the three types of agents in the "with" explanations versus "without" explanations conditions. With explanation use, the level of perceived strategy restrictiveness is lowest for the Hybrid agents and increases in the order of AC agents and EBA agents; without explanations, the level of perceived strategy restrictiveness is lowest for the AC agents.

[Figure 6-4 Perceived Strategy Restrictiveness: line graph of group means for the Hybrid, AC, and EBA agents, with vs. without explanations]

To test H6-2, we conducted two separate ANOVAs with Scheffe comparisons: one in the "with" explanation conditions and the other in the "without" explanation conditions. In the "with" explanation conditions, decision strategy support exerts a significant main effect (p<0.001), while in the "without" explanation conditions it has no significant main effect (p>0.1). The Scheffe test in the "with" explanation conditions shows that the differences between Hybrid agents and EBA agents, and between AC agents and EBA agents, are significant, while the difference between Hybrid agents and AC agents is not (see table 6-13). Therefore, hypothesis H6-2(a) is partially supported. None of the Scheffe comparisons in the "without" explanation conditions is significant (see table 6-14). Therefore, hypothesis H6-2(b) is supported.

Table 6-13 Scheffe Comparisons for Perceived Strategy Restrictiveness, with explanations

Group A        Group B        Mean Difference (A-B)   Significance
Hybrid Agent   AC Agent       -0.17                   0.79
               EBA Agent      -1.07                   <0.001
AC Agent       Hybrid Agent   0.17                    0.79
               EBA Agent      -0.90                   0.003

Table 6-14 Scheffe Comparisons for Perceived Strategy Restrictiveness, without explanations

Group A        Group B        Mean Difference (A-B)   Significance
Hybrid Agent   AC Agent       0.31                    0.46
               EBA Agent      -0.10                   0.93
AC Agent       Hybrid Agent   -0.31                   0.46
               EBA Agent      -0.40                   0.27
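As a worked check on these comparisons, the sketch below implements Scheffe's procedure directly from its standard formula; the MSE and error df are taken from table 6-11 and the group means from table 6-7, so the computed p-value can be compared with the 0.008 reported in table 6-12.

    # Scheffe multiple comparison for a pairwise contrast among k groups.
    from scipy.stats import f as f_dist

    def scheffe_p(mean_a, mean_b, n_a, n_b, mse, k, df_error):
        """p-value for the contrast mean_a - mean_b under Scheffe's procedure."""
        f_stat = (mean_a - mean_b) ** 2 / (mse * (1.0 / n_a + 1.0 / n_b) * (k - 1))
        return f_dist.sf(f_stat, k - 1, df_error)

    # Hybrid (3.95) vs. EBA (4.54) on perceived strategy restrictiveness,
    # 52 participants per agent type, MSE = 0.88, df_error = 146.
    print(round(scheffe_p(3.95, 4.54, 52, 52, 0.88, 3, 146), 3))  # ~0.007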
6.5.4.2 PERCEIVED AGENT TRANSPARENCY

The ANCOVA results for the impact of decision strategy support and explanation use on perceived agent transparency are shown in table 6-15. A significant, positive main effect of explanation use on perceived agent transparency was detected. Therefore, hypothesis H6-5 is supported.

Table 6-15 ANCOVA Results (DV: Perceived Agent Transparency)

Source                                           DF    Sum of Squares   Mean Square   F        p-value
Explanations                                     1     28.43            28.43         24.34    <0.001
Decision Strategy Support                        2     2.43             1.22          1.04     0.36
Explanations * Decision Strategy Support         2     2.81             1.41          1.20     0.30
Trust Propensity - Competence (covariate)        1     1.34             1.34          1.15     0.29
Trust Propensity - Benevolence (covariate)       1     3.17             3.17          2.72     0.10
Trust Propensity - Integrity (covariate)         1     1.09             1.09          0.93     0.34
Trust Propensity - Trusting Stance (covariate)   1     <0.001           <0.001        <0.001   0.99
Error                                            146   170.51           1.17

Nevertheless, neither the main effect of decision strategy support nor its interaction with explanation use is significant (see also figure 6-5). The use of explanations uniformly increases perceived agent transparency: regardless of whether or not explanations are provided, the perceived agent transparency levels are similar across the three agents. Therefore, H6-6(a) is supported, while H6-6(b) is partially supported.

[Figure 6-5 Perceived Agent Transparency: line graph of group means for the Hybrid, AC, and EBA agents, with vs. without explanations]

6.5.4.3 COGNITIVE EFFORT

Three variables were used to measure cognitive effort: consideration set size, decision time, and perceived cognitive effort. The ANCOVA results for these three variables are presented in tables 6-16 to 6-18, respectively, and the group means are shown in figures 6-6 to 6-8.

Table 6-16 ANCOVA Results (DV: Consideration Set Size)

Source                                           DF    Sum of Squares   Mean Square   F      p-value
Explanations                                     1     188.15           188.15        0.84   0.36
Decision Strategy Support                        2     310.66           155.33        0.69   0.50
Explanations * Decision Strategy Support         2     534.15           267.07        1.19   0.31
Trust Propensity - Competence (covariate)        1     149.59           149.59        0.67   0.42
Trust Propensity - Benevolence (covariate)       1     23.36            23.36         0.10   0.75
Trust Propensity - Integrity (covariate)         1     59.10            59.10         0.26   0.61
Trust Propensity - Trusting Stance (covariate)   1     91.06            91.06         0.41   0.53
Error                                            142   31843.34         224.24

Table 6-17 ANCOVA Results (DV: Decision Time - minutes)

Source                                           DF    Sum of Squares   Mean Square   F      p-value
Explanations                                     1     451.08           451.08        2.19   0.14
Decision Strategy Support                        2     972.08           486.04        2.36   0.10
Explanations * Decision Strategy Support         2     4.05             2.03          0.01   0.99
Trust Propensity - Competence (covariate)        1     209.63           209.63        1.02   0.31
Trust Propensity - Benevolence (covariate)       1     4.47             4.47          0.02   0.88
Trust Propensity - Integrity (covariate)         1     36.68            36.68         0.18   0.67
Trust Propensity - Trusting Stance (covariate)   1     4.32             4.32          0.02   0.89
Error                                            146   30032.46         205.70

Table 6-18 ANCOVA Results (DV: Perceived Cognitive Effort)

Source                                           DF    Sum of Squares   Mean Square   F      p-value
Explanations                                     1     0.06             0.06          0.05   0.83
Decision Strategy Support                        2     23.05            11.52         8.82   <0.001
Explanations * Decision Strategy Support         2     5.62             2.81          2.15   0.12
Trust Propensity - Competence (covariate)        1     6.84             6.84          5.24   0.02
Trust Propensity - Benevolence (covariate)       1     0.21             0.21          0.16   0.69
Trust Propensity - Integrity (covariate)         1     0.01             0.01          0.01   0.94
Trust Propensity - Trusting Stance (covariate)   1     7.41             7.41          5.67   0.02
Error                                            146   190.72           1.31

With respect to consideration set size and decision time, none of the main effects or interaction effects of explanation use and decision strategy support was significant.
Therefore, H6-8 and H6-9 are not supported. To test H6-10 and H6-11, we also split the sample along the explanation dimension and conducted separate ANOVAs with Scheffe tests. The only significant difference was found between the consideration set sizes of the Hybrid agents and the AC agents when explanations were provided: the consideration set size for consumers using the Hybrid agents with explanations was significantly smaller than that for consumers using the AC agents with explanations (see figure 6-6). Therefore, H6-10(a) and H6-11(a) are partially supported, and H6-10(b) and H6-11(b) are supported.

[Figure 6-6 Consideration Set Size: line graph of group means for the Hybrid, AC, and EBA agents, with vs. without explanations]

[Figure 6-7 Decision Time (minutes): line graph of group means for the Hybrid, AC, and EBA agents, with vs. without explanations]

[Figure 6-8 Perceived Cognitive Effort: line graph of group means for the Hybrid, AC, and EBA agents, with vs. without explanations]

With regard to perceived cognitive effort, the main effect of decision strategy support was significant, while the main effect of explanation use and the interaction effect were not. We again conducted two separate ANOVA tests with Scheffe comparisons in the "with" and "without" explanation conditions. In the "with" explanation conditions, no significant difference was found between the Hybrid agents and the two other types of agents, but the perceived cognitive effort for the AC agents was significantly lower than for the EBA agents (table 6-19). Therefore, H6-12(a) is partially supported. In the "without" explanation conditions, a significant difference was found between the Hybrid and the AC agents (table 6-20). Therefore, partial support was found for hypothesis H6-12(b). To evaluate the impact of explanation use on perceived cognitive effort, we conducted three separate ANOVAs for the three types of agents. None of the impacts was significant. Therefore, hypotheses H6-13(a), (b), and (c) were all supported.

Table 6-19 Scheffe Comparisons for Perceived Cognitive Effort, with explanations

Group A        Group B        Mean Difference (A-B)   Significance
Hybrid Agent   AC Agent       0.24                    0.76
               EBA Agent      -0.64                   0.14
AC Agent       Hybrid Agent   -0.24                   0.76
               EBA Agent      -0.88                   0.03

Table 6-20 Scheffe Comparisons for Perceived Cognitive Effort, without explanations

Group A        Group B        Mean Difference (A-B)   Significance
Hybrid Agent   AC Agent       0.92                    0.03
               EBA Agent      0.15                    0.90
AC Agent       Hybrid Agent   -0.92                   0.03
               EBA Agent      -0.76                   0.09

6.5.5 STRUCTURAL MODEL

In the structural model, the variable of explanation use was coded as a dummy variable. [Footnote 27: We thank Dr. Wynne Chin for his advice on the use of dummy variables for experimental treatment and interaction variables in PLS. The use of dummy variables in PLS has appeared in other studies as well (e.g., Yoo and Alavi, 2001).] The "without" explanation conditions were coded as "0" and the "with" explanation conditions were coded as "1". For decision strategy support, two dummy variables, HYAC and HYEBA, were used for the three types of agents (table 6-21); these two dummy variables served as formative indicators for the decision strategy support construct in the PLS structural model. For the interaction between explanation use and decision strategy support, two indicators were created as the products of the two strategy dummy variables with the explanation dummy variable; these product indicators were modeled as formative indicators for the interaction construct. The coding is summarized in table 6-21 and sketched in the code example below.

Table 6-21 Dummy Codes for Agent Types

Agent Type     HYAC   HYEBA
Hybrid Agent   0      0
AC Agent       1      0
EBA Agent      0      1
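A minimal sketch of this coding scheme follows; the data file and column names are hypothetical, but the resulting indicator columns correspond to table 6-21 and to the two product indicators described above.

    # Dummy coding of the treatments and the interaction product indicators.
    import pandas as pd

    df = pd.read_csv("experiment2.csv")  # hypothetical data file

    df["EXPL"] = (df["condition_expl"] == "with").astype(int)  # 0 = without, 1 = with
    df["HYAC"] = (df["agent"] == "AC").astype(int)             # 1 only for AC agents
    df["HYEBA"] = (df["agent"] == "EBA").astype(int)           # 1 only for EBA agents

    # Product indicators for the explanation x strategy interaction construct.
    df["EXPL_HYAC"] = df["EXPL"] * df["HYAC"]
    df["EXPL_HYEBA"] = df["EXPL"] * df["HYEBA"]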
Figure 6-9 shows the results of the structural model. For the direct impacts of decision strategy support and explanation use, as well as their interaction effects, on perceived strategy restrictiveness, perceived agent transparency, and cognitive effort (decision time, consideration set size, and perceived cognitive effort), PLS delivers results similar to those of the ANCOVAs. Explanation use exerts a significant main effect on perceived agent transparency, while decision strategy support has a significant main effect on perceived cognitive effort. The interaction effects on perceived strategy restrictiveness are also significant.

Perceived strategy restrictiveness is negatively and significantly correlated with both trust in agents and the PU of agents. Therefore, H6-3 and H6-4 are supported. Perceived agent transparency is positively and significantly correlated with trust in agents, so H6-7 is supported. Regarding the impact of cognitive effort, only perceived cognitive effort is negatively and significantly correlated with the PEOU of agents; the two objective measures (decision time and consideration set size) are not significantly correlated with the PEOU of agents. Therefore, H6-14(a) is supported, but neither H6-14(b) nor H6-14(c) is.

The Trust-TAM part of the model generated the same conclusions as those in chapter 4. The only non-significant path is between PEOU and intentions to adopt agents. Both PU and trust positively and significantly influence the intentions to adopt agents. PEOU positively and significantly influences both PU and trust, and trust also positively and significantly influences PU. Therefore, H6-15, H6-17, H6-18, H6-19, and H6-20 are supported, while H6-16 is not. A summary of the hypothesis testing results from both the ANCOVA and PLS analyses is provided in table 6-22.

[Figure 6-9 PLS Structural Model Testing Results: path diagram linking decision strategy support, explanation use, and their interaction to perceived strategy restrictiveness, perceived agent transparency, and cognitive effort, and onward through trust, PU, and PEOU to agent adoption intentions (R-squared = 0.71); ** = significant at .01, * = significant at .05, n.s. = non-significant path]
Table 6-22 A Summary of Hypothesis Testing Results

Hypothesis                                                                                    Supported?
H6-1a: Hybrid agents will be perceived to be less restrictive than AC agents.                 No
H6-1b: Hybrid agents will be perceived to be less restrictive than EBA agents.                Yes
H6-1c: AC agents will be perceived to be as restrictive as EBA agents.                        No
H6-2a: With explanation use, Hybrid agents will be perceived to be less restrictive than
AC or EBA agents; AC agents will be perceived to be as restrictive as EBA agents.             Partially
H6-2b: Without explanations, Hybrid agents will be perceived to be as restrictive as AC
and EBA agents.                                                                               Yes
H6-3: Perceived agent restrictiveness will negatively influence consumers' trust in online
recommendation agents.                                                                        Yes
H6-4: Perceived agent restrictiveness will negatively influence consumers' PU of online
recommendation agents.                                                                        Yes
H6-5: Use of explanations will increase consumers' perceived agent transparency of online
recommendation agents.                                                                        Yes
H6-6a: With explanation use, Hybrid agents will be perceived to be as transparent as AC
or EBA agents.                                                                                Yes
H6-6b: Without explanations, Hybrid agents will be perceived to be less transparent than
AC or EBA agents; AC agents will be perceived to be equally transparent as EBA agents.        Partially
H6-7: Perceived agent transparency will positively influence consumer trust in online
recommendation agents.                                                                        Yes
H6-8: The use of explanations reduces consumers' consideration set size when using
recommendation agents (H6-8a: Hybrid agents; H6-8b: AC agents; H6-8c: EBA agents).            No (a, b, c)
H6-9: The use of explanations reduces consumers' decision time when using recommendation
agents (H6-9a: Hybrid agents; H6-9b: AC agents; H6-9c: EBA agents).                           No (a, b, c)
H6-10a: With explanation use, the consideration set size of consumers using Hybrid agents
will be smaller than that of consumers using AC or EBA agents; the consideration set size
of consumers using AC agents will be similar to that of consumers using EBA agents.           Partially
H6-10b: Without explanations, the consideration set size of consumers using Hybrid agents
will be similar to that of consumers using AC and EBA agents.                                 Yes
H6-11a: With explanation use, the decision time of consumers using Hybrid agents will be
smaller than that of consumers using AC or EBA agents; the decision time of consumers
using AC agents will be similar to that of consumers using EBA agents.                        Partially
H6-11b: Without explanations, the decision time of consumers using Hybrid agents will be
similar to that of consumers using AC and EBA agents.                                         Yes
H6-12a: With explanation use, the perceived cognitive effort of consumers using Hybrid
agents will be similar to that of consumers using AC or EBA agents.                           Partially
H6-12b: Without explanations, the perceived cognitive effort of consumers using Hybrid
agents will be greater than that of consumers using AC or EBA agents; the perceived
cognitive effort of consumers using AC agents will be similar to that of consumers using
EBA agents.                                                                                   Partially
H6-13: Use of explanations will not influence consumers' perceived cognitive effort in
using recommendation agents (H6-13a: Hybrid agents; H6-13b: AC agents; H6-13c: EBA agents).   Yes (a, b, c)
H6-14: Perceived cognitive effort (H6-14a), consideration set size (H6-14b), and decision
time (H6-14c) will negatively influence consumers' PEOU of recommendation agents.             Yes (a); No (b); No (c)
H6-15: PU of recommendation agents will positively affect consumers' intentions to adopt
the agents.                                                                                   Yes
H6-16: PEOU of recommendation agents will positively affect consumers' intentions to
adopt the agents.                                                                             No
H6-17: PEOU of recommendation agents will positively affect PU of the agents.                 Yes
H6-18: Trust in recommendation agents will positively affect consumers' intentions to
adopt the agents.                                                                             Yes
H6-19: Trust in recommendation agents will positively affect PU of the agents.                Yes
H6-20: PEOU of recommendation agents will positively affect consumers' trust in the agents.   Yes

6.6 Discussion

This study provides strong evidence for the important and differential roles of user perceptions of strategy restrictiveness, agent transparency, and cognitive effort in influencing the antecedents in Trust-TAM: PU, PEOU, and trust. In addition, the total effects of these three perceptions on consumers' intentions to adopt an agent were calculated, as were their effects on the three Trust-TAM antecedents. A total effect is operationalized as the sum of the direct effect (the standardized path coefficient) and the indirect effects (the products of the corresponding standardized path coefficients along each indirect path) (Barclay et al., 1995). The results show that perceived agent transparency exerts the highest impact on trust in an agent, perceived strategy restrictiveness has the greatest influence on the PU of an agent, and perceived cognitive effort most strongly influences the PEOU of an agent (table 6-23).

Table 6-23 Total Effects (Direct and Indirect) on Intentions to Adopt Agents, Trust, PU, and PEOU

Perceptions                          Adoption Intentions   Trust   PU     PEOU
Perceived Cognitive Effort           0.19                  0.19    0.24   0.71
Perceived Strategy Restrictiveness   0.24                  0.25    0.49   --
Perceived Agent Transparency         0.24                  0.43    0.23   --

In addition, more than 50% of the variance in each of the Trust-TAM antecedents is explained by the three user perceptions, and 71% of the variance in intentions to adopt an agent is explained by the three antecedents (figure 6-9). Thus, the research model is successful in identifying important factors that influence user adoption of agents, as well as their antecedents.
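The total-effect calculation itself is mechanical; the sketch below spells it out for one perception, with placeholder path values rather than the model's actual coefficients.

    # Total effect = direct path + sum of products of coefficients along each
    # indirect path (Barclay et al., 1995).
    from math import prod

    def total_effect(direct, indirect_paths):
        """direct: standardized direct path; indirect_paths: tuples of coefficients."""
        return direct + sum(prod(path) for path in indirect_paths)

    # Illustrative: a perception routed to adoption intentions through trust,
    # and through trust then PU; the numbers are placeholders, not study results.
    print(total_effect(0.0, [(0.43, 0.40), (0.43, 0.30, 0.25)]))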
Overall, the experimental treatments successfully manipulated these three important perceptions. As predicted, an interaction effect between decision strategy support and explanation use was detected for perceived strategy restrictiveness: the benefits of Hybrid agents can only be achieved when explanations are provided. For example, without explanations, the levels of perceived strategy restrictiveness of the three types of agents are the same (H6-2b). It is therefore concluded that explanations should accompany the provision of decision strategy capabilities in an agent, so that consumers can easily understand the features and apply them properly.

One key prediction that was not well supported is the difference between Hybrid agents and AC agents: we did not find a significant difference in perceived strategy restrictiveness between these two types of agents (H6-1a). One possible reason is that the cognitive effort needed to use the Hybrid agents is higher than that for the AC agents. Different types of effort are involved in making an agent-aided choice (e.g., Todd and Benbasat, 1991), such as the effort of learning and using agent features and the effort of examining recommendations from the agent. Although the consideration set size of participants using Hybrid agents with explanations was significantly smaller than that of participants using AC agents with explanations (H6-10a), the levels of perceived cognitive effort were the same for participants using either type of agent (H6-12a). The cognitive effort of learning and using the Hybrid agents is therefore quite high, as estimated in the NGOMSL analyses: consumers need to spend extra effort to learn and understand the strategy choice and to exercise their control. According to the cognitive effort perspective in the behavioral decision theories studied in DSS research (Todd and Benbasat, 1991; 1992; 1994a; 1994b; 2000), the conservation of effort is more important than increases in decision quality. The high effort involved in using the Hybrid agents might therefore have prevented participants from fully utilizing the agent features. An empirical study by Todd and Benbasat (2000) showed that decision aids can induce the use of normatively oriented strategies when the normative strategy is easier to execute than competing alternative strategies. Similarly, it is concluded that additional features provided by an agent will not be effective unless the cognitive effort of learning and using these features is low.

We also found a significant difference in perceived strategy restrictiveness between AC agents and EBA agents (table 6-11). AC agents were perceived to be less restrictive than EBA agents, indicating that participants perceived AC agents as providing a higher level of decision process support. As discussed in section 6.2.1, the AC strategy provides superior recommendations compared to the EBA strategy (e.g., Todd and Benbasat, 1999). In addition, the perceived cognitive effort of participants using EBA agents was significantly higher than that of participants using AC agents (table 6-19), even though the NGOMSL analyses show that EBA agents require less effort than AC agents. A review of the recorded videos from the experiment shows that, on average, participants used the AC agents 4.6 times and the EBA agents 8.6 times to obtain a proper set of recommendations. A use was counted whenever participants used the agent to get recommendations, or whenever they returned to the agent-user dialogue page, changed their answers, and got recommendations again. For a shopping task, the agents were often used repeatedly, and the EBA agents were used more often than the AC agents, leading to higher cognitive effort in using the EBA agents. Again, the AC agents are superior to the EBA agents, not only because of the higher quality of their recommendations but also because of the lower effort needed to use them.

6.7 Limitations, Contributions, and Future Research

6.7.1 LIMITATIONS AND FUTURE RESEARCH

This study has a number of limitations, including some in common with Experiment 1, such as possible common method bias, the testing of content-filtering, needs-based agents only, and the use of university students as the sample. Regarding common method bias, we applied Harman's one-factor test to the data from the current experiment (Podsakoff and Organ, 1986). An exploratory factor analysis was conducted on all dependent variables except the objective variables; no single factor emerged, and no single factor accounted for a majority of the covariance in the variables.
This suggests that common method bias is not a major concern in the present study.

Another limitation is that this study investigates only one type of agent restrictiveness, namely strategy restrictiveness. Other types of restrictiveness exist, as proposed by Silver (1990), such as communication restrictiveness, which relates to the set of questions and their sequence in the agent-user dialogues. An agent may or may not allow a consumer to choose a subset of questions to answer, and the questions in the dialogue could be presented in a fixed order or answered according to the user's preferences. These features could also influence a consumer's perceived restrictiveness of an agent, but in different ways, and future research is needed to examine these differential impacts. Such an investigation should also address the optimal level of agent restrictiveness: as Silver (1990) points out, with too little restrictiveness, users can become overloaded by the choices they must make.

A second area for future research is to further reduce the cognitive effort of using the Hybrid agents, which, as discussed earlier, may explain why the Hybrid agents were not as effective as expected. For example, a default strategy could be preselected in the strategy choice so that consumers do not need to spend extra effort choosing a strategy, even though they are still allowed to do so. Since the AC strategy is closest to a normative strategy (Todd and Benbasat, 1999), and given that the AC agents performed better on most of the dependent variables than the EBA agents, the default could be set to the AC strategy to gain more of the benefits of using the Hybrid agent. Future research could examine whether reducing the cognitive effort of using Hybrid agents generates more benefits.

Third, this study focuses on consumers' initial trust, based on their first-time use of an un-branded recommendation agent. The effects of decision strategy support and explanation facilities on consumers' trust in and adoption of online recommendation agents remain unclear when consumers interact with the agents repeatedly. The generalizability of the results to consumers who have experience with agents is not immediately obvious and warrants future research. The impacts of the brand name of the online vendor and the reputation of the recommendation agent were also not examined in this study and likewise deserve future research.

Lastly, this study examined users' intentions to adopt recommendation agents to obtain product recommendations, but it did not explore how consumers act upon the recommendations. It remains unclear how consumers' purchase behavior will be influenced by the use of recommendation agents. For example, future research is needed to examine whether consumers are more willing to follow the recommendations and purchase the recommended products if they perceive the recommendation agents to be more useful and trustworthy.

6.7.2 CONTRIBUTIONS

Notwithstanding these limitations, this study makes significant contributions to both research and practice. The contribution to research is two-fold. First, this study empirically investigates an important feature of trustworthy online recommendation agents: agent restrictiveness.
In particular, the impact of decision strategy support on agent restrictiveness was examined and its important role confirmed. By enabling consumers to control and configure agent decision strategies, recommendation agents can adapt to different consumers' needs and are perceived to be less restrictive. A certain level of user control is desirable in agent applications, especially because of the high risk and uncertainty of online shopping (Gefen et al., 2003b); user control over an agent can help reduce such uncertainties and increase the user's trust in the agent. A new measure of perceived strategy restrictiveness was developed and, to our knowledge, this is the first empirical study to test its impact on consumers' trust in and PU of the agents.

Second, in addition to perceived agent restrictiveness, the impacts of agent transparency and cognitive effort on consumers' trust and PEOU of the agents were empirically tested. Although system transparency has been discussed in general terms in the IS literature (Gregor and Benbasat, 1999; Silver, 1991a), empirical examination of its impact and antecedents has been lacking. More importantly, together with perceived strategy restrictiveness, this study reveals the relative importance of these variables for the key antecedents in Trust-TAM. As suggested by Davis (1989), the factors that influence the antecedents of user acceptance of a technology need to be identified, as was done in this study.

This study also has significant implications for practitioners. Most recommendation agents currently deployed on websites use predefined decision strategies; to our knowledge, the design of flexible and malleable agent strategies is largely unexplored. In online environments, user control helps reduce risk and uncertainty perceptions and increases consumers' trust levels (Das and Teng, 1998). When a recommendation agent is provided by an unfamiliar e-vendor, a perceived loss of control is one of the main obstacles to initial trust formation (Das and Teng, 1998). Reducing agent restrictiveness by giving users control over the agent (e.g., allowing them to choose different decision strategies) addresses this obstacle and thus facilitates users' trust in the agent. By considering both the benefits and the costs to consumers of reduced agent restrictiveness, this study provides practical guidelines for the design of flexible agent strategies.

This chapter investigated the interaction effects of explanation facilities and decision strategy support on perceived agent transparency, strategy restrictiveness, and cognitive effort. The results inform designers that effective user control and additional agent features require that: 1) a set of appropriate explanations be provided, so that users can understand the features and employ them properly; and 2) using the features not induce much cognitive effort, so that users are willing to use them. The results of this study and Experiment 1 show that the use of how explanations, why explanations, and guidance significantly increases the perceived transparency of an agent and enhances consumers' trusting beliefs in it. To reduce the effort of exercising user control, as discussed earlier, default settings might be used.
Defaults can be set to a normative strategy (e.g., the AC strategy for the strategy choice) or inferred from the user's background and previous interaction history. A transparent agent with additional features, which empowers user control but does not require much additional effort, delivers more benefits to users (e.g., it is more useful and trustworthy) and can provide more effective recommendation services, leading to a higher chance of user adoption.

Appendices

APPENDIX 6-1 SCREENSHOTS FOR EXPERIMENTAL AGENTS

The screenshots of the experimental agents are summarized below; only the interface elements needed to follow the text are described.

- Figure A6-1(a) Question Page for Hybrid Agents: a question in the agent-user dialogue ("What are you going to do with your pictures?") with three options (save in electronic formats only; print in sizes around 5" x 7" in addition to saving; print in larger sizes, at least 8" x 10", in addition to saving), the "Non-Essential Preference"/"Essential Preference" choice, a link to guidance on non-essential versus essential, and an "Explanation/Guidance" window. Question categories (Budget/Brand, Main Features, Preferences, Extras) appear as tabs, with a "Get Recommendations" button.

- Figure A6-1(b) Question Page for Hybrid Agents, "Non-Essential Preference" Chosen: the same question page with an "Importance of this criterion" scale (1-9) displayed.

- Figure A6-1(c) Question Page for Hybrid Agents, "Essential Preference" Chosen: the same question page without the importance scale.
If you choose "non-essential preference", cameras with your desired optical zoom level will be given higher priority in my recommendations; if you choose "essential preference", I will only recommend cameras with your desired optical zoom level. Specifically, the three options will determine the following zoom levels: 1) 2X optical zoom and Below. 2) Between 2X and 5X optical zoom. 3) 4X optical zoom and above. Figure A6-1 (d) Question Page for Hybrid Agents - How Explanation Shown -168-Chapter 6: Decision Strategy Support & Explanation Facilities • When the guidance on "non-essential" versus "essential" was clicked and presented in the Hybrid agents with explanations: 3) How far will you be from most of the subjects that you photograph? j 1) I m m e d i a t e v ic in i ty • 2) A m o d e r a t e d i s t a n c e or l e s s 3) F a i a w a y , in a d d i t i o n to i m m e d i a t e a n d m o d e r a t e v ic in i ty ( How j r Why ) Reset Non-Essential Preference OR Essential Preference ( Guidance on Non-Essential versus Essential ) Importance of this criterion 1 2 3 4 5 S 7 8 9 Explanation/Guidance I will use different approaches to make recommendations based on your choice of 'non-essential' or 'essential' preference. If you want me to recommend only those cameras that exactly satisfy your desired choice to this question, please select the 'essential preference'. On the contrary, if you want me to recommend all cameras that fit your overall preferences quite well but might not exactly satisfy your desired choice to this question, please select the 'non-essential preference'. Please be advised that choosing 'essential preference' or very high importance levels in the 'non-essential preference' will significantly reduce the amount of recommendations that I could provide. Figure A6-1 (e) Question Page for Hybrid Agents - Guidance on Strategy Shown • A question in the agent-user dialogue for the A C agents with explanations: | Budget ?3rarid| | Vain features [ | Preferences^] |" Extras | Camera Size Get Recommendations 4) What are you going to do with your pictures? 1) Save tfnitn in e lect ron ic formats only 2) Print p ic tures in s i z e s around 5 " x 7" . in addit ion to sav ing t h e m in e lec t ron ic formats 3) Print p ic tures in larger s i z e s (at least 8 " >: 10"), in addi t ion to sav ing therr in e lec t ron ic fo rmats 11 My^J ^•Guidance) Reset Importance of this criterion 1 2 3 4 5 6 7 3 9 Low High Explanation/Guidance Figure A6-l(f) Question Page for AC Agents -169-Chapter 6: Decision Strategy Support & Explanation Facilities • A question in the agent-user dialogue for the E B A agents with explanations: Budget/"Ml Man restores Preferences Extras Distance Printing Camera Size 3) How far will you be from most of the subjects that you photograph? 
[Screenshot: the question "3) How far will you be from most of the subjects that you photograph?" with options 1) Immediate vicinity; 2) A moderate distance or less; 3) Far away, in addition to immediate and moderate vicinity. No importance scale or preference-type selector is shown.]
Figure A6-1(g) Question Page for EBA Agents

• Recommendation page after the agent-user dialogues of the Hybrid and AC agents with explanations:
[Screenshot: "I have 43 recommendations for you," with each recommendation listing a fit score (e.g., "Recommendation: 1 (Fit Score 100%) Panasonic DMCFX5"), brand, price, zoom, and resolution, plus How/Why buttons.]
Figure A6-1(h) Result Page for Hybrid Agents and AC Agents

• When the how explanation was clicked and presented in the recommendation page of the Hybrid agents and AC agents with explanations:
[Screenshot: the same recommendation page with the how explanation shown below:] "The fit score for this camera is 100. I first eliminated all cameras that do not have any one of your desired attribute levels, inferred from your essential needs and preferences. Then, I generated a score for each of the remaining cameras measuring how well they fit your non-essential preferences. The score was determined by starting with 100 points, then subtracting points for missing features. The number of points that were subtracted (the gap score) were calculated by determining whether the camera completely or only partly lacked the attribute level you desired, then adjusting the weight of this gap depending ..." (the remainder is cut off in the screenshot)
Figure A6-1(i) Result Page for Hybrid Agents and AC Agents - How Explanation Shown

• Recommendation page after the agent-user dialogues of the EBA agents with explanations:
[Screenshot: "I have 4 recommendations for you," with each recommendation listing brand, price, zoom, and resolution, plus How/Why buttons.]
Figure A6-1(j) Result Page for EBA Agents
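The fit-score logic quoted in Figure A6-1(i) combines an elimination step with a compensatory scoring step. The following Python sketch is a speculative reconstruction: the screenshot is cut off before the exact weighting rule, so the binary gap, the weight normalization, and all names here are illustrative assumptions rather than the implemented algorithm.

```python
def fit_score(camera, essential_prefs, non_essential_prefs):
    """Hypothetical sketch of the fit-score logic quoted in Figure A6-1(i)."""
    # Elimination phase (EBA): drop any camera missing an essential level.
    for attr, acceptable in essential_prefs.items():
        if camera.get(attr) not in acceptable:
            return None
    # Compensatory phase (AC): start from 100 points and subtract a weighted
    # gap score for each unsatisfied non-essential preference.
    score = 100.0
    total_weight = sum(w for _, w in non_essential_prefs.values()) or 1.0
    for attr, (desired, weight) in non_essential_prefs.items():
        gap = 0.0 if camera.get(attr) == desired else 1.0  # binary gap here
        score -= 100.0 * gap * (weight / total_weight)
    return score

# Example with made-up data: the camera passes elimination on zoom but loses
# weighted points for the resolution gap: 100 - 100 * (6 / 9) = 33.3.
camera = {"zoom": "3x optical", "resolution": "3.2 megapixel", "brand": "Canon"}
print(round(fit_score(camera,
                      {"zoom": {"3x optical"}},
                      {"resolution": ("4 megapixel", 6),
                       "brand": ("Canon", 3)}), 1))  # -> 33.3
```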
APPENDIX 6-2 NATURAL GOMS LANGUAGE ANALYSES

Natural GOMS Language (NGOMSL) was developed as a cognitive modeling method by Kieras (1997). It is based on GOMS models, which have been used by software designers to model user behavior in terms of Goals, Operators, Methods, and Selection rules. According to Kieras (1997), goals are what users are trying to accomplish. Methods are series of procedures that describe how to accomplish goals; the procedures consist of operators, which are elementary actions taken to accomplish the goals. If more than one method is available to accomplish a goal, selection rules specify which method should be used to reach a given goal. A family of GOMS models has been successfully applied to many practical design problems (Kieras, 1997). Among the various GOMS models, NGOMSL has been widely used to predict the execution and learning times for users carrying out a task with a system. Accordingly, we use NGOMSL to predict the execution time of using recommendation agents as a measure of the users' effort to use the agents.

Based on Kieras (1997), execution time can be predicted by summing the NGOMSL statement time, primitive operator time, mental operator time, and waiting time. The execution time for each NGOMSL statement is 0.1 second. Waiting time is set to 0 because the retrieval of information is almost instantaneous in this study. The times for the different operators are as suggested by Kieras (1997) (see Table A6-1). Our experimental recommendation agents do not require using a keyboard, so only two standard operators and the mental operator are needed.

Table A6-1 Relevant Operators in this Study
Operator | User Activity | Time (sec.)
P | Point with mouse to a target on the display | 1.1
BB | Push and release mouse button rapidly | 0.2
M | Mental act of thinking or perception | 1.2

The assumptions for our NGOMSL analyses are as follows:
• Participants are average non-secretary typists.
• Participants use recommendation agents.
• The hyperlink for a recommendation agent is visible on the webpage presented to participants.
• All questions are answered in the agent-user dialogues.
• Participants use the agent once and do not change their answers in a back-and-forth manner.
• Answering a question to express a user's needs and preferences requires four mental operators, since the user needs to read the question and its three options.
• Participants do not need to find the options for a question, as they are presented together with the question.
• For AC or Hybrid agents, participants do not need to find the location of importance levels, as they are presented together with the question.
• Hands begin and end on the mouse.

The detailed NGOMSL analyses for AC agents, EBA agents, and Hybrid agents are described in Tables A6-2, A6-3, and A6-4. The analyses are based on a decision task that involves r questions (attributes). In our agent interface, four categories of questions are presented (i.e., "Budget/Price," "Main Features," "Preferences," and "Extras"; see figures in Appendix 6-1). Under each category, several attribute buttons can be found (two buttons for "Budget/Price," three for "Main Features," four for "Preferences," and two for "Extras"). By clicking on a button, a question is presented below it to elicit the user's needs and preferences.

AC agents involve (31+17r) statements, (5+3r) P, (5+3r) BB, and (10+7r) M operators (see Table A6-2).
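Given these operator counts and the unit times above, the execution-time estimate reported later in Table A6-5 can be reproduced directly. The following is a minimal sketch in Python; the function and variable names are ours, and the formulas are those just stated:

```python
# Unit times: 0.1 s per NGOMSL statement plus the operator times in Table A6-1.
STATEMENT_T, P_T, BB_T, M_T = 0.1, 1.1, 0.2, 1.2

def execution_time(statements, p, bb, m, waiting=0.0):
    """Predicted execution time = statement time + operator times + waiting."""
    return statements * STATEMENT_T + p * P_T + bb * BB_T + m * M_T + waiting

r = 11  # number of questions (attributes) in the experimental task
ac = execution_time(31 + 17 * r, 5 + 3 * r, 5 + 3 * r, 10 + 7 * r)
print(round(ac, 1))  # -> 175.6 seconds, the AC value reported in Table A6-5
```

The EBA and Hybrid counts derived below can be plugged into the same function.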
AC agents have seven high-level goals to accomplish: 1) choosing the agent to find a product alternative, 2) choosing categories of questions, 3) choosing attributes to express their needs and preferences, 4) answering the questions to express their needs and preferences, 5) assigning importance weights to attributes, 6) deciding whether or not more attributes are to be chosen, and 7) deciding whether or not more categories of questions are to be presented. Users choose to use the agent once, while they repeatedly choose a category of questions and decide whether or not to continue in each category. Users also repeatedly choose an attribute, answer its question, assign an importance weight, and decide whether or not to continue, for each attribute.

Table A6-2 NGOMSL Analyses for AC Agents
(Each row lists: step description | # of statements | total # of statements | operator type | # of operators | total # of operators.)

Method for goal: choose a product alternative
Step 1: Accomplish goal: choose the agent | 1 | 1 | - | - | -
Step 2: Accomplish goal: choose a category of questions | 1 | 4 | - | - | -
Step 3: Accomplish goal: choose an attribute | 1 | r | - | - | -
Step 4: Accomplish goal: answer the question | 1 | r | - | - | -
Step 5: Accomplish goal: choose an importance level | 1 | r | - | - | -
Step 6: Decide: if no more questions left, then return with sub-goal accomplished; otherwise, go to Step 3 | 1 | r | - | - | -
Step 7: Decide: if no more categories left, then return with goal accomplished; otherwise, go to Step 2 | 1 | 4 | - | - | -

Method for goal: choose the recommendation agent
Step 1: Decide to choose the agent | 1 | 1 | M | 1 | 1
Step 2: Find the agent link | 1 | 1 | M | 1 | 1
Step 3: Point to the agent link | 1 | 1 | P | 1 | 1
Step 4: Click the agent link | 1 | 1 | BB | 1 | 1
Step 5: Return with goal accomplished | 1 | 1 | - | - | -

Method for goal: choose a category of questions
Step 1: Decide to choose a category | 1 | 1 | M | 1 | 4
Step 2: Find the category | 1 | 4 | M | 1 | 4
Step 3: Point to the category | 1 | 4 | P | 1 | 4
Step 4: Click the category | 1 | 4 | BB | 1 | 4
Step 5: Return with goal accomplished | 1 | 4 | - | - | -

Method for goal: choose an attribute
Step 1: Decide to choose an attribute | 1 | r | M | 1 | r
Step 2: Find the question | 1 | r | M | 1 | r
Step 3: Point to the question | 1 | r | P | 1 | r
Step 4: Click the question | 1 | r | BB | 1 | r
Step 5: Return with goal accomplished | 1 | r | - | - | -

Method for goal: answer the question
Step 1: Decide to answer the question | 1 | r | M | 4 | r
Step 2: Point to the option | 1 | r | P | 1 | r
Step 3: Click the option | 1 | r | BB | 1 | r
Step 4: Return with goal accomplished | 1 | r | - | - | -

Method for goal: choose an importance level
Step 1: Decide to choose a level | 1 | r | M | 1 | r
Step 2: Point to the level | 1 | r | P | 1 | r
Step 3: Click the level | 1 | r | BB | 1 | r
Step 4: Return with goal accomplished | 1 | r | - | - | -

EBA agents require (31+12r) statements, (5+2r) P, (5+2r) BB, and (10+6r) M operators (Table A6-3). EBA agents have six high-level goals to accomplish: 1) choosing the agent to find a product alternative, 2) choosing categories of questions, 3) choosing desired attributes to express their needs and preferences, 4) answering the questions to express their needs and preferences, 5) deciding whether or not more attributes are available to choose, and 6) deciding whether or not more categories of questions are to be presented. Users choose to use an agent once, but must repeatedly choose a category of questions and decide whether or not to continue, for each category. They also repeatedly choose an attribute, answer its question, and decide whether or not to continue, for each attribute.
Table A6-3 NGOMSL Analyses for EBA Agents
(Same column layout as Table A6-2.)

Method for goal: choose a product alternative
Step 1: Accomplish goal: choose the agent | 1 | 1 | - | - | -
Step 2: Accomplish goal: choose a category of questions | 1 | 4 | - | - | -
Step 3: Accomplish goal: choose an attribute | 1 | r | - | - | -
Step 4: Accomplish goal: answer the question | 1 | r | - | - | -
Step 5: Decide: if no more questions left, then return with sub-goal accomplished; otherwise, go to Step 3 | 1 | r | - | - | -
Step 6: Decide: if no more categories left, then return with goal accomplished; otherwise, go to Step 2 | 1 | 4 | - | - | -

Method for goal: choose the recommendation agent
Step 1: Decide to choose the agent | 1 | 1 | M | 1 | 1
Step 2: Find the agent link | 1 | 1 | M | 1 | 1
Step 3: Point to the agent link | 1 | 1 | P | 1 | 1
Step 4: Click the agent link | 1 | 1 | BB | 1 | 1
Step 5: Return with goal accomplished | 1 | 1 | - | - | -

Method for goal: choose a category of questions
Step 1: Decide to choose a category | 1 | 1 | M | 1 | 4
Step 2: Find the category | 1 | 4 | M | 1 | 4
Step 3: Point to the category | 1 | 4 | P | 1 | 4
Step 4: Click the category | 1 | 4 | BB | 1 | 4
Step 5: Return with goal accomplished | 1 | 4 | - | - | -

Method for goal: choose an attribute
Step 1: Decide to choose an attribute | 1 | r | M | 1 | r
Step 2: Find the question | 1 | r | M | 1 | r
Step 3: Point to the question | 1 | r | P | 1 | r
Step 4: Click the question | 1 | r | BB | 1 | r
Step 5: Return with goal accomplished | 1 | r | - | - | -

Method for goal: answer the question
Step 1: Decide to answer the question | 1 | r | M | 4 | r
Step 2: Point to the option | 1 | r | P | 1 | r
Step 3: Click the option | 1 | r | BB | 1 | r
Step 4: Return with goal accomplished | 1 | r | - | - | -

Table A6-4 NGOMSL Analyses for Hybrid Agents
(Same column layout as Table A6-2.)

Method for goal: choose a product alternative
Step 1: Accomplish goal: choose the agent | 1 | 1 | - | - | -
Step 2: Accomplish goal: choose a category of questions | 1 | 4 | - | - | -
Step 3: Accomplish goal: choose an attribute | 1 | r | - | - | -
Step 4: Accomplish goal: answer the question | 1 | r | - | - | -
Step 5: Accomplish goal: choose "essential" vs. "non-essential" | 1 | r | - | - | -
Step 6: Accomplish goal: choose an importance level (if non-essential) | 1 | r/2 | - | - | -
Step 7: Decide: if no more questions left, then return with sub-goal accomplished; otherwise, go to Step 3 | 1 | r | - | - | -
Step 8: Decide: if no more categories left, then return with goal accomplished; otherwise, go to Step 2 | 1 | 4 | - | - | -
Method for goal: choose the recommendation agent
Step 1: Decide to choose the agent | 1 | 1 | M | 1 | 1
Step 2: Find the agent link | 1 | 1 | M | 1 | 1
Step 3: Point to the agent link | 1 | 1 | P | 1 | 1
Step 4: Click the agent link | 1 | 1 | BB | 1 | 1
Step 5: Return with goal accomplished | 1 | 1 | - | - | -

Method for goal: choose a category of questions
Step 1: Decide to choose a category | 1 | 1 | M | 1 | 4
Step 2: Find the category | 1 | 4 | M | 1 | 4
Step 3: Point to the category | 1 | 4 | P | 1 | 4
Step 4: Click the category | 1 | 4 | BB | 1 | 4
Step 5: Return with goal accomplished | 1 | 4 | - | - | -

Method for goal: choose an attribute
Step 1: Decide to choose an attribute | 1 | r | M | 1 | r
Step 2: Find the question | 1 | r | M | 1 | r
Step 3: Point to the question | 1 | r | P | 1 | r
Step 4: Click the question | 1 | r | BB | 1 | r
Step 5: Return with goal accomplished | 1 | r | - | - | -

Method for goal: answer the question
Step 1: Decide to answer the question | 1 | r | M | 4 | r
Step 2: Point to the option | 1 | r | P | 1 | r
Step 3: Click the option | 1 | r | BB | 1 | r
Step 4: Return with goal accomplished | 1 | r | - | - | -

Method for goal: choose "essential" vs. "non-essential"
Step 1: Decide to choose a strategy preference | 1 | r | M | 1 | r
Step 2: Find the choice | 1 | r | M | 1 | r
Step 3: Point to the choice | 1 | r | P | 1 | r
Step 4: Click the choice | 1 | r | BB | 1 | r
Step 5: Return with goal accomplished | 1 | r | - | - | -

Method for goal: choose an importance level (if non-essential)
Step 1: Decide to choose a level | 1 | r/2 | M | 1 | r/2
Step 2: Point to the level | 1 | r/2 | P | 1 | r/2
Step 3: Click the level | 1 | r/2 | BB | 1 | r/2
Step 4: Return with goal accomplished | 1 | r/2 | - | - | -

Hybrid agents include (31+20.5r) statements, (5+3.5r) P, (5+3.5r) BB, and (10+8.5r) M operators (Table A6-4). Hybrid agents have eight high-level goals to accomplish: 1) choosing the agent to find a product alternative, 2) choosing categories of questions, 3) choosing the attributes on which they want to express their needs and preferences, 4) answering the questions to express their needs and preferences, 5) deciding whether their needs and preferences are "essential" or "non-essential", 6) assigning importance weights to the "non-essential" preference-related attributes, 7) deciding whether or not more attributes are to be chosen, and 8) deciding whether or not more categories of questions are available. Users choose to use the agent once, while they repeatedly choose a category of questions and decide whether or not to continue in each category. They also choose an attribute, answer its question, choose an "essential" or "non-essential" preference, assign importance weights for "non-essential" preferences, and then decide whether or not to continue, for each attribute.

Table A6-5 summarizes the NGOMSL analyses and the estimated execution times for the three types of agents. The results show that the Hybrid agents' execution times are about 15% longer than the AC agents' and about 41% longer than the EBA agents'.
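As a quick arithmetic check of these two percentages, taking the execution times in Table A6-5 below as given:

```python
# Estimated execution times from Table A6-5 (r = 11), in seconds.
ac, eba, hybrid = 175.6, 142.6, 201.8
print(round((hybrid / ac - 1) * 100, 1))   # -> 14.9, i.e., about 15% longer
print(round((hybrid / eba - 1) * 100, 1))  # -> 41.5, i.e., about 41% longer
```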
Table A6-5 Summary of NGOMSL Analyses and Estimated Execution Time
Agent | NGOMSL analyses | Estimated execution time (r = 11)
AC agent | (31+17r) statements; (5+3r) P; (5+3r) BB; (10+7r) M | 175.6 s
EBA agent | (31+12r) statements; (5+2r) P; (5+2r) BB; (10+6r) M | 142.6 s
Hybrid agent | (31+20.5r) statements; (5+3.5r) P; (5+3.5r) BB; (10+8.5r) M | 201.8 s

APPENDIX 6-3 MEASUREMENT ITEMS

• Perceived Strategy Restrictiveness (PSR)
PSR1: I could select the way this virtual advisor processes my preferences in generating its recommendations. (r)
PSR2: This virtual advisor allowed me to specify my preferred approach for it to generate its recommendations. (r)
PSR3: I had limited control over the way this virtual advisor makes recommendations.
PSR4: The virtual advisor constrained my alternatives for possible approaches it can use to generate its recommendations.
PSR5: In the context of my preferred way of selecting a digital camera, the approach this virtual advisor uses to generate recommendations was rigid.
PSR6: In the context of my preferred way of selecting a digital camera, this virtual advisor's reasoning processes for generating recommendations were restricted.

• Perceived Agent Transparency (PAT)
PAT1: This virtual advisor made its reasoning process clear to me.
PAT2: It was readily apparent to me how this virtual advisor generates its recommendations.
PAT3: I could not understand how this virtual advisor is performing its job. (r)
PAT4: I could easily understand this virtual advisor's reasoning process.
PAT5: It was easy for me to understand the inner workings of this virtual advisor.
PAT6: I could understand why and how this virtual advisor recommends products to me.
PAT7: This virtual advisor's logic in providing advice was clear to me.

• Perceived Cognitive Effort (PCE)
PCE1: The task of selecting digital cameras using this virtual advisor was very frustrating.
PCE2: Using this virtual advisor, I easily found the information I wanted to help me decide what to buy. (r) (dropped)
PCE3: The task of selecting digital cameras using this virtual advisor took too much time.
PCE4: The task of selecting digital cameras using this virtual advisor was easy. (r)
PCE5: Selecting digital cameras using this virtual advisor required too much effort.
PCE6: The task of selecting digital cameras using this virtual advisor was too complex.

• Perceived Usefulness of Recommendation Agents (PU)
PU1: Using the virtual advisor can improve my shopping performance.
PU2: Using the virtual advisor can increase my shopping productivity.
PU3: Using the virtual advisor can increase my shopping effectiveness.
PU4: I found using the virtual advisor useful.

• Perceived Ease-of-Use of Recommendation Agents (PEOU)
PEOU1: Learning to use the recommendation virtual advisor would be easy for me.
PEOU2: My interaction with the recommendation virtual advisor was clear and understandable.
PEOU3: It would be easy for me to become skillful at using the recommendation virtual advisor.
PEOU4: I found the recommendation virtual advisor easy to use.

• Competence Belief in Recommendation Agents (CPA)
CPA1: The virtual advisor is competent and effective in providing digital camera recommendations.
CPA2: The virtual advisor performs its role of giving recommendations very well.
CPA3: Overall, the virtual advisor is a capable and proficient Internet digital camera recommendation provider.
CPA4: In general, the virtual advisor is very knowledgeable about digital cameras.

• Benevolence Belief in Recommendation Agents (BNA)
BNA1: I believe that the virtual advisor would act in my best interest.
BNA2: If I required help, the virtual advisor would do its best to help me.
BNA3: The virtual advisor is interested in my well-being, not just its own.

• Integrity Belief in Recommendation Agents (ITA)
ITA1: The virtual advisor is truthful in its dealings with me.
ITA2: I would characterize the virtual advisor as honest.
ITA3: The virtual advisor would keep its commitments.
ITA4: The virtual advisor is sincere and genuine.

• Agent Adoption Intentions (AAI)
AAI1: Assuming I had access to the system, I intend to use the virtual advisor.
AAI2: Assuming I had access to the system, I predict I would use the virtual advisor.
AAI3: Assuming I had access to the system, I plan to use the virtual advisor.

• Disposition to Trust - Competence (DTC)
DTC1: I believe that most professional people do a very good job at their work.
DTC2: Most professionals are very knowledgeable in their chosen field.
DTC3: A large majority of professional people are competent in their area of expertise.

• Disposition to Trust - Benevolence (DTB)
DTB1: In general, people really do care about the well-being of others.
DTB2: The typical person is sincerely concerned about the problems of others.
DTB3: Most of the time, people care enough to try to be helpful, rather than just looking out for themselves.

• Disposition to Trust - Integrity (DTI)
DTI1: In general, most folks keep their promises.
DTI2: I think people generally try to back up their words with their actions.
DTI3: Most people are honest in their dealings with others.

• Disposition to Trust - Trusting Stance (DTS)
DTS1: I usually trust people until they give me a reason not to trust them.
DTS2: I generally give people the benefit of the doubt when I first meet them.
DTS3: My typical approach is to trust new acquaintances until they prove I should not trust them.

• Preference for Effort-Saving or Decision Quality (PEQ)
PEQ1: I am willing to examine the product attributes very carefully in order to make sure that the product fits my preferences perfectly.
PEQ2: I prefer to shop extensively in order to get exactly what I want.
PEQ3: My time is valuable. As soon as I find a product that is adequate for my needs, I will buy it. (r)

• Product Involvement (PIV)
We would like to know how interested you are in digital cameras. Please use the series of descriptive words listed below to indicate your level of interest in digital cameras. (Each item is a bipolar scale with rating circles between the two anchors.)
PIV1: Important ... Unimportant
PIV2: Irrelevant ... Relevant (r)
PIV3: Means a lot to me ... Means nothing to me
PIV4: Unexciting ... Exciting (r)
PIV5: Dull ... Neat (r)
PIV6: Matter to me ... Doesn't matter to me
PIV7: Boring ... Interesting (r)
PIV8: Fun ... Not fun
PIV9: Appealing ... Unappealing
PIV10: Of no concern to me ... Of concern to me (r)

• Product Knowledge / Expertise - Subjective (PKS)
PKS1: I know pretty much about digital cameras.
PKS2: I do not feel very knowledgeable about digital cameras. (r)
PKS3: Among my circle of friends, I'm one of the "experts" on digital cameras.
PKS4: Compared to most other people, I know less about digital cameras. (r)
PKS5: When it comes to digital cameras, I really don't know a lot. (r)

• Product Knowledge / Expertise - Objective (PKO)
PKO1: When, if ever, is "resolution" important for digital cameras?
PKO2: When, if ever, is "zoom" important for digital cameras?

CHAPTER 7: CONCLUSIONS AND FUTURE RESEARCH

7.1 Summary of the Thesis

In online environments, consumers are deluged with an enormous amount of information about products and services from multiple e-vendors. Online recommendation agents provide customers with shopping assistance, but the lack of consumer trust in recommendation agents prevents the full utilization of their services. However, the nature of trust in online recommendation agents is not well understood, and few studies have empirically examined the key agent capabilities that can enhance consumers' trust in the agents and their acceptance. The overall objective of this thesis is to understand user acceptance of online recommendation agents and the formation processes of trust in the agents, and to empirically examine the role of key agent capabilities in facilitating consumers' trust in and adoption of the agents.

By integrating TAM with the construct of trust in recommendation agents, the important role of trust in influencing user adoption of the agents is confirmed. Consumers form a certain level of trust in agents that influences their behavioral intentions towards the agents. Trust exerts a direct impact on intentions to adopt recommendation agents as well as an indirect impact via the perceived usefulness of the agents.

To understand consumers' trust formation in online recommendation agents and identify the key agent features that stimulate these processes, a written protocol analysis was conducted. Using a prior-research-driven approach, an agent trust formation scheme that included 12 processes was developed to code the written protocols participants used to justify their trust levels in recommendation agents. To further examine the predictive power of the major processes identified from the protocol analysis, they were validated via a quantitative analysis. Overall, consumers' expectations, utility assessment, and knowledge about an agent are the most important factors facilitating or inhibiting consumers' trust formation in the agent. Based on these processes, agent features and capabilities that enhance consumers' trust in a recommendation agent are suggested.

Two agent capabilities are empirically investigated in this thesis: 1) explanation facilities and 2) decision strategy support. Regarding explanation facilities, this thesis empirically tests three types of explanations: how explanations, why explanations, and guidance. The characteristics of online recommendation agents that may hamper consumers' trust-building in the agents are discussed (i.e., the agency relationship between agents and users, and the high user discretion in agent-user dialogues), and the three types of explanations are suggested to facilitate trust-building by directly addressing these obstacles. The experimental results confirm that the use of explanations increases consumers' trust in recommendation agents.
Furthermore, the differential roles played by the different types of explanations were revealed: the use of how explanations increases consumers' beliefs in agent competence and benevolence, the use of why explanations increases consumers' belief in agent benevolence, and the use of guidance increases consumers' belief in agent integrity.

The second agent capability investigated is decision strategy support. Hybrid agents are proposed to enhance decision strategy support by allowing consumer control over agent strategies (i.e., choices of the AC and EBA strategies). By supporting both strategies, Hybrid agents not only effectively reduce the number of product alternatives via the EBA strategy, but also retain the high-quality alternatives via the AC strategy. In contrast, AC agents and EBA agents support only one strategy, and consumers do not have control over the strategies employed by the agents. When a consumer's desired decision strategy is not supported by an agent, the agent is perceived to be restrictive and the final recommendations may not fit the consumer's preferences. This research examines the impact of perceived strategy restrictiveness on consumers' trust in and perceived usefulness of the agents. Further, to effectively utilize the decision strategy support capability and make choices among decision strategies, consumers should be able to understand the strategies supported by the agent and know how the agent works (Beaulieu and Jones, 1998; Dhaliwal and Benbasat, 1996). We therefore examined the role of agent transparency. Explanations have been found to increase agent transparency, which leads to trust in the agents and helps users effectively utilize the decision strategy support functionality.

Surprisingly, the impact of decision strategy support is not fully confirmed as expected. In particular, even with explanations provided, the difference in perceived strategy restrictiveness between the Hybrid agents and the AC agents was not significant. One main reason is that the use of decision strategy support requires cognitive effort. This suggests that the extra cognitive effort required by additional agent features should be kept to a minimum; otherwise, the potential benefits would be limited.

7.2 Contributions

This thesis makes both theoretical and practical contributions. From a theoretical perspective, five major contributions are identified. First, this research tests an integrated Trust-TAM, which helps to explain the adoption of online recommendation agents. This research suggests a new perspective for studying IT acceptance: the relational aspects (including trust) are important determinants of behavioral intentions to adopt an online recommendation agent and deserve more research.

Second, this research advances the process theory of trust formation in online recommendation agents. Based on Komiak's (2003) scheme of trust formation in recommendation agents, as well as the process theory of trust in interpersonal and organizational contexts, a refined scheme with 12 processes is proposed and evaluated with written protocols. The asymmetric structure of trust-building and trust-inhibiting processes is also confirmed. This helps us better understand trust formation in online recommendation agents.

Third, this thesis empirically tests the impact of explanation facilities on trust in online recommendation agents.
The importance of explanations for intelligent systems is well recognized in the IS literature (Dhaliwal and Benbasat, 1996; Gregor and Benbasat, 1999), but empirical testing with validated trust measures has thus far been inadequate. Furthermore, previous studies have produced only generalized conclusions that explanations can help to improve user trust. This study, in contrast, integrates two streams of explanation use research, KBS explanations and DSS guidance studies, and reveals their complementary impact on trust-building: the different types of explanations increase consumer trust via different trusting beliefs.

Fourth, this study empirically investigates an important feature for trustworthy online recommendation agents: agent restrictiveness. In particular, the impact of agent decision strategy support on agent restrictiveness is examined. Perceived agent restrictiveness has been argued to influence user perceptions and evaluations of an agent, but to our knowledge, no prior studies have empirically tested its impact. A new measure for perceived strategy restrictiveness was developed and used to test its impact on consumers' trust in and PU of the agents.

Fifth, in addition to perceived agent restrictiveness, the influence of agent transparency and cognitive effort on consumers' trust and the PEOU of agents was empirically tested. Although system transparency has been generally discussed in the IS literature (Gregor and Benbasat, 1999; Silver, 1991a), empirical examination of its impact and antecedents is lacking. More importantly, together with perceived strategy restrictiveness, the relative impact of these variables on the important antecedents in Trust-TAM was evaluated. As suggested by Davis (1989), it is important to identify factors that influence the antecedents of user acceptance of a technology. This study contributes to research by identifying these factors.

This thesis also makes significant contributions to practice. The results of this thesis have several implications for the design of trustworthy online recommendation agents. First, the importance of trust in determining user acceptance of recommendation agents calls attention to the agent features that can induce trust-building processes and prevent trust-inhibiting processes. For example, expectation confirmation processes require designers to understand and confirm user expectations. Several solutions were suggested in chapter 4, such as personalized agent-user dialogues.

Second, this research offers an effective approach to storing knowledge in online recommendation agents. The three types of explanations explored in chapter 3 help to facilitate the flow of knowledge from recommendation agents to their users, while improving the way in which consumers convey their needs and preferences to the agents.
Effective user control and additional agent features require that: 1) a set of appropriate explanations be provided so that users can understand these features and employ them properly, and 2) using these features should not induce much cognitive effort so that users are wil l ing to use them. A transparent agent with additional features that empowers user control but does not require much additional effort, delivers more benefits to users (e.g., more useful and trustworthy) and provides more effective recommendation services and gains a higher chance of user adoption. 7.3 Future Research A s discussed in previous chapters, important future research areas are summarized as follows. First, more research is needed to examine other possible explanations for online recommendation agents. A s mentioned in chapter 3, other important explanations have been put forward, e.g., terminological explanations and justifications for reasoning processes (Gregor and Benbasat, 1999), that are not addressed in this study. They may further enhance consumers' trust in agents and deserve attention in future research. Second, this research identified several trust formation mechanisms that have been largely ignored in the literature, such as expectation confirmation and utility assessment. Future research is warranted to examine key agent features that might influence these processes, to facilitate consumers' trust in recommendation agents. A third research area is to investigate other methods that influence agent restrictiveness. Other types of restrictiveness were proposed by Silver (1990), such as -186-Chapter 7: Conclusions communication restrictiveness related to the set of questions and its sequence in the agent-user dialogues. It is worthwhile to also explore the optimum level of agent restrictiveness. A s pointed out by Silver (1990), with too little restrictiveness, users might be overloaded in making choices, while with excessive restrictiveness, the decision processes of users are not well supported. In conclusion, consumer trust in online recommendation agents has emerged as an important issue in electronic environments. Recommendation agents need to be transparent to users by providing a set of appropriate explanations, and to provide high decision strategy support to effectively narrow down the product alternatives, while retaining those products that are most suitable for consumers. Further, the additional features should not require much cognitive effort so that users w i l l be wi l l ing to employ them. Incorporating agent features that can facilitate consumers' trust in an agent encourages their acceptance of the agent, thereby strengthening users' intentions to transact with the Web vendors. -187-References REFERENCES Adams, D.A., Nelson, R.R., and Todd, P.A. "Perceived Usefulness, Ease of Use, and Usage of Information Technology: A Replication," MIS Quarterly (16:2), 1992, pp. 227-247. Aggarwal, P., and Vaidyanathan, R. "Eliciting Online Customers' Preferences: Conjoint vs Self-Explicated Attribute-Level Measurements," Journal of Marketing Management (19:1/2), 2003, pp.157-177. Ajzen, I. "From intentions to actions: A theory of planned behavior," In Action-control: From cognition to behavior, J. Kuhl and J. Beckman (eds.), Heidelberg Springer, New York, 1985, pp. 11-39. Ajzen, I. "Attitude Structure and Behavior," In Attitude Structure and Function, A . R. Pratkanis, S. J. Breckler and A. G. Greenwald (eds.), Lawrence Erlbaum Associates, Hillsdale, NJ, 1989, pp. 241-274. Ajzen, I. 
"The theory of planned behavior," Organizational Behavior and Human Decision Processes (50), 1991, pp. 179-211. Ajzen, I., and Fishbein, M . Understanding Attitudes and Predicting Social Behaviour, Prentice Hall, 1980. Akerlof, G.A. "The Market for "Lemons": Quality Under Uncertainty and the Market Mechanism," Quarterly Journal of Economics (84), 1970, pp. 488-500. Andersen, V. , Hansen, C.B., and Andersen, H.H.K., "Evaluation of Agents and Study of End-user needs and behaviour for E-commerce," COGITO Focus group experiment Report CHMI -01-01, Riso National Laboratory, 2001. Ansari, A. , Essegaier, S., and Kohli, R. "Internet Recommendation Systems," Journal of Marketing Research (37:3), 2000, pp. 363-375. Atkinson, S., and Butcher, D. "Trust in the Context of Management Relationships: An Empirical Study," S.A.M. Advanced Management Journal (68:4), 2003, pp. 24-33. Aubert, B.A. , and Kelsey, B.L. "Further Understanding of Trust and Performance in Virtual Teams," Small Group Research (34:5), 2003, pp. 575-618. Ba, S. "Establishing online trust through a community responsibility system," Decision Support Systems (31:4), 2001, pp. 323-336. Ba, S., and Pavlou, P.A. "Evidence of the Effect of Trust Building Technology in Electronic Markets: Price Premium and Buyer Behavior," MIS Quarterly (26:3), 2002, pp. 243-268. Bandura, A . Self-Efficacy: The Exercise of Control, Freeman, New York, 1997. -188-References Barclay, D., Thompson, R., and Higgins, C. "The Partial Least Squares (PLS) Approach to Causal Modeling: Personal Computer Adoption and Use as an Illustration," Technology Studies (2:2), 1995, pp. 285-309. Barkhi, R. "The Effects of Decision Guidance and Problem Modeling on Group Decision-Making," Journal of Management Information Systems (18:3), 2001, pp. 259-283. Bates, J. "The role of emotion in believable agents," Communications of the ACM (37:7), 1994, pp.122-125. Beaulieu, M . , and Jones, S. "Interactive searching and interface issues in the Okapi best match probabilistic retrieval system," Interacting with Computers (10:3), 1998, pp. 237-248. Bergen, M . , and Dutta, S. "Agency Relationships in Marketing: A Review of the Implications and Applications of Agency and Related Theories," Journal of Marketing (56:3), 1992, pp. 1-24. Bhattacherjee, A. "Individual Trust in Online Firms: Scale Development and Initial Test," Journal of Management Information Systems (19:1), 2002, pp. 211-241. Bickmore, T., and Cassell, J. "Relational Agents: A model and Implementation of Building User Trust," Proceedings of the SIGCHI'01, Seattle, W A , USA, 2001, pp. 396-403. Blomqvist, K. "The many faces of trust," Scandinavian Journal of Management (13:3), 1997, pp. 271-286. Boyatzis, R.E. Transforming Qualitative Information: Thematic Analysis and Code Development, SAGE Publications, California, 1998. Brashear, T.G., Boles, J.S., Bellenger, D.N., and Brooks, C M . "An Empirical Test of Trust-Building Processes and Outcomes in Sales Manager-Salesperson Relationships," Journal of the Academy of Marketing Science (31:2), 2003, pp. 189-200. Buchanan, B.G., and Shortliffe, E.H. Rule-based Expert Systems: The MYCIN Experiments of the Stanford Heuristic Programming Project, Addison-Wesley, Reading, M A , 1984. Burke, R.R. "Technology and the Customer Interface: What Consumers Want in the Physical and Virtual Store," Journal of the Academy of Marketing Science (30:4), 2002, pp. 411-432. Cacioppo, J.T., and Berntson, G. 
"Relationship between attitudes and evaluative space: A critical review, with emphasis on the separability of positive and negative substrates," Psychological Bulletin (115:3), 1994, pp. 401-423. Carroll, J .M., and Kay, D.S. "Prompting, Feedback, and Error Correction in the Design of a Scenario Machine," Proceedings of the Proceeding of CHI'85 Human Factors in Computing Systems, CA, 1985, pp. 149-153. -189-References Carroll, J .M., and McKendree, J. "Interface design issues for advice-giving expert systems," Communications of the ACM (30:1), 1987, pp. 14-32. Carroll, J .M., and Rosson, M . B . "Paradox of the Active User," In Interface Thought, J. M . Carroll (ed.) 1987, pp. 81-111. Cassell, J. "Embodied conversational interface agents," Communications of the ACM (43:4), 2000, pp. 70-78. Cassell, J., and Bickmore, T. "External Manifestation of Trustworthiness in the Interface," Communications of the A CM (43:12), 2000, pp. 50-56. Celsi, R.L., and Olson, J.C. "The Role of Involvement in Attention and Comprehension Processes," Journal of Consumer Research (15:2), 1988, pp. 210-224. Cenfetelli, R.T. "Inhibitors and Enablers as Dual Factor Concepts in Technology Usage," Journal of the Association for Information Systems (5:11-12), 2004, pp. 472-492. Chau, P .Y.K. "An empirical assessment of a modified technology acceptance model," Journal of Management Information Systems (13:2), 1996, pp. 185-204. Chin, W.W. "PLS-Graph User's Guide, Version 3.0," University of Houston, Houston, Texas, 2001. Choi, Y . K . , Miracle, G.E., and Biocca, F. "The Effects of Anthropomorphic Agents on Advertising Effectiveness and the Mediating Role of Presence," Journal of Interactive Advertising (2:1), 2001, Chopra, K., and Wallace, W.A. "Trust in electronic environments," Proceedings of the 36th Hawaii International Conference on System Sciences, Big Island, Hawaii, 2003, Chwelos, P., Benbasat, I., and Dexter, A.S. "Research Report: Empirical Test of an EDI Adoption Model," Information Systems Research (12:3), 2001, pp. 304-321. Cohen, J. "A Coefficient of Agreement for Nominal Scales," Educational and Psychological Measurement (20:1), 1960, pp. 37-46. Cohen, J. Statistical Power Analysis for the Behavioral Sciences, Lawrence Erlbaum Associate Publishers, 1988. Cook, J., and Wall, T. "New Work Attitude Measures of Trust, Organizational Commitment, and Personal Need Nonfulfillment," Journal of Occupational Psychology (53:1), 1980, pp. 39-52. Corritore, C.L. , Kracher, B., and Wiedenbeck, S. "On-line Trust: Concepts, Evolving Themes, A Model," International journal of Human-Computer Studies (58:6), 2003, pp. 737-758. -190-References Das, T.K., and Teng, B.-S. "Between Trust and Control: Developing Confidence in Partner Cooperation in Alliances," The Academy of Management Review (23:3), 1998, pp. 491-513. Davis, F.D. "Perceived Usefulness, Perceived Ease of Use and User Acceptance of Information Technology," MIS Quarterly (13:3), 1989, pp. 319-340. Davis, F.D., Bagozzi, R.P., and Warshaw, P.R. "User Acceptance of Computer Technology: A Comparison of Two Theoretical Models," Management Science (35:8), 1989, pp. 982-1003. Dehn, D.M. , and Mulken, S.v. "The impact of animated interface agents: a review of empirical research," International journal of Human-Computer Studies (52:1), 2000, pp. 1 -22. Dhaliwal, J.S. "An Experimental Investigation of the Use of Explanations Provided by Knowledge-Based Systems," unpublished Unpublished Doctoral Dissertation, University of British Columbia, 1993. 
Dhaliwal, J.S., and Benbasat, I. "The Use and Effects of Knowledge-based System Explanations: Theoretical Foundations and a Framework for Empirical Evaluation," Information Systems Research (7:3), 1996, pp. 342-362.

Dholakia, U.M. "A motivational process model of product involvement and consumer risk perception," European Journal of Marketing (35:11/12), 2001, pp. 1340-1360.

Doney, P.M., and Cannon, J.P. "An Examination of the Nature of Trust in Buyer-Seller Relationships," Journal of Marketing (61:1), 1997, pp. 35-51.

Donnelly, P. "Take My Word For It: Trust in the Context of Birding and Mountaineering," Qualitative Sociology (17:3), 1994, pp. 215-241.

Dryer, D.C. "Getting Personal With Computers: How to Design Personalities for Agents," Applied Artificial Intelligence (13:4), 1999, pp. 273-295.

Du, T.C., Li, E.Y., and Chang, A.-P. "Mobile agents in distributed network management," Communications of the ACM (46:7), 2003, pp. 127-132.

Eagly, A.H., and Chaiken, S. "Chapter 1: The Nature of Attitudes," In The Psychology of Attitudes, Harcourt Brace Jovanovich, New York, 1993.

Earle, T., and Cvetkovich, G.T. Social Trust: Toward a Cosmopolitan Society, Praeger, New York, 1995.

Eisenhardt, K.M. "Agency Theory: An Assessment and Review," Academy of Management Review (14:1), 1989, pp. 57-74.

Fishbein, M., and Ajzen, I. Belief, Attitude, Intention, and Behavior: An Introduction to Theory and Research, Addison-Wesley, Reading, MA, 1975.

Flynn, L.R., and Goldsmith, R.E. "A Short, Reliable Measure of Subjective Knowledge," Journal of Business Research (46:1), 1999, pp. 57-66.

Fornell, C., and Larcker, D. "Evaluating Structural Equation Models with Unobservable Variables and Measurement Error," Journal of Marketing Research (18:3), 1981, pp. 39-50.

Friedman, B., Kahn, P.H., Jr., and Howe, D.C. "Trust Online," Communications of the ACM (43:12), 2000, pp. 34-40.

Friedman, B., and Millett, L.I. "Reasoning about computers as moral agents: A research note," In Human Values and the Design of Computer Technology, B. Friedman (ed.), CSLI Publications, Stanford, CA, 1997, pp. 201-207.

Gallivan, M.J. "Striking a balance between trust and control in a virtual organization: a content analysis of open source software case studies," Information Systems Journal (11:4), 2001, pp. 277-304.

Gambetta, D. "Can we trust trust?," In Trust: Making and Breaking Cooperative Relations, D. Gambetta (ed.), Blackwell, Oxford, UK, 1988, pp. 154-175.

Gefen, D. "What Makes an ERP Implementation Relationship Worthwhile: Linking Trust Mechanisms and ERP Usefulness," Journal of Management Information Systems (21:1), 2004, pp. 263-288.

Gefen, D., Karahanna, E., and Straub, D.W. "Inexperience and Experience With Online Stores: The Importance of TAM and Trust," IEEE Transactions on Engineering Management (50:3), 2003a, pp. 307-321.

Gefen, D., Karahanna, E., and Straub, D.W. "Trust and TAM in Online Shopping: An Integrated Model," MIS Quarterly (27:1), 2003b, pp. 51-90.

Gefen, D., and Straub, D. "Managing user trust in B2C e-services," eService Journal (2:2), 2003, pp. 7-24.

Gefen, D., Straub, D.W., and Boudreau, M.-C. "Gender Differences in the Perception and Use of E-Mail: An Extension to the Technology Acceptance Model," MIS Quarterly (21:4), 1997, pp. 389-400.

Gentry, L., and Calantone, R. "A Comparison of Three Models to Explain Shop-Bot Use on the Web," Psychology and Marketing (19:11), 2002, pp. 945-956.

Gould, S.
"An interpretive study of purposeful, mood self-regulating consumption: The consumption and mood framework," Journal of Psychology & Marketing (14:4), 1997, pp. 395-426. -192-References Gregor, S. "Explanations from Knowledge-Based Systems and Cooperative Problem Solving: an Empirical Study," International Journal of Human-Computer Studies (54:1), 2001, pp. 81-105. Gregor, S., and Benbasat, I. "Explanations from Intelligent Systems: Theoretical Foundations and Implications For Practice," MIS Quarterly (23:4), 1999, pp. 497-530. Grenci, R.T., and Todd, P.A. "Solutions-Driven Marketing," Communications of the ACM (45:3), 2002, pp. 65-71. Hair, J.F., Anderson, R.E., Tatham, R.L., and Black, W.C. Multivariate Data Analysis, Prentice Hall, Englewood Cliffs, NJ, 1998. Halloran, D., Manchester, S., Moriarty, J., and Riley, R. "Systems Development Quality Control," MIS Quarterly (2:4), 1978, pp. 1-13. Hamelink, C.J. The Ethics of Cyberspace, Sage Publications, Thousand Oaks, C A , 2001. Haubl, G., and Murray, K . "Preference Construction and Persistence in Digital Marketplaces: The Role of Electronic Recommendation Agents," Journal of Consumer Psychology (13:1&2), 2003, pp. 75-91. Haubl, G., and Murray, K. "Preference Construction and Persistence in Digital Marketplaces: The Role of Electronic Recommendation Agents," Journal of Consumer Psychology (13:1&2), 2003, pp. 75-91. Haubl, G., and Trifts, V . "Consumer Decision Making in Online Shopping Environments: The Effects of Interactive Decision Aids," Marketing Science (19:1), 2000, pp. 4-21. Haubl, G., and Trifts, V . "Consumer Decision Making in Online Shopping Environments: The Effects of Interactive Decision Aids," Marketing Science (19:1), 2000, pp. 4-21. Hauser, J.R., and Wernerfelt, B. "An Evaluation Cost Model of Consideration Sets," Journal of Consumer Research (16:4), 1990, pp. 393-408. Hawkes, J .M., Mast, K.E . , and Swan, J.E. "Trust earning perceptions of sellers and buyers," Journal of Personal Selling and Sales Management (9:1), 1989, pp. 1-8. Hayes-Roth, F., and Jacobstein, N . "The State of Knowledge-Based Systems," Communications of the ACM(37:4), 1994, pp. 27-39. He, M . , and Leung, H . -f. "Agents in E-Commerce: State of the Art," Knowledge and Information Systems (4:3), 2002, pp. 257-282. Herlocker, J.L., Konstan, J.A., and Riedl, J. "Explaining Collaborative Filtering Recommendations," Proceedings of the Proceedings of the 2000 ACM Conference on Computer Supported Cooperative Work, Philadelphia, Pennsylvania, US, 2000, pp. 241-250. -193-References Hertzum, M . , Andersen, H.H.K., Andersen, V. , and Hansen, C.B. "Trust in Information Sources: Seeking Information from People, Documents, and Virtual Agents," Interacting with Computers (14:6), 2002, pp. 575-599. Hollnagel, E. "Commentary: Issues in Knowledge-Based Decision Support," International J. Man-Machine Studies (27:5-6), 1987, pp. 743-751. Hook, K. "Steps to Take before Intelligent User Interfaces Become Real," Interacting with Computers (12:4), 2000, pp. 409-426. Hovland, C.I., Janis, I.L., and Kelley, H.H. Communication and Persuasion, Yale University Press, New Haven, CT, 1953. lacobucci, D., Arabie, P., and A. , B. "Recommendation agents on the Internet," Journal of Interactive Marketing (14:3), 2000, pp. 2-11. Igbaria, M . , Guimaraes, T., and Davis, G.B. "Testing the determinants of microcomputer usage via a structural equation model," Journal of Management Information Systems (11:4), 1995, pp. 87-114. Jarvenpaa, S.L. 
"The Effect of Task Demands and Graphical Format on Information Processing Strategies," Management Science (35:3), 1989, pp. 285-303. Jarvenpaa, S.L., and Leidner, D.E. "Communication and Trust in Global Virtual Teams," Journal of Computer-Mediated Communication (3:4), 1998, Jarvenpaa, S.L., and Tractinsky, N . "Consumer Trust in an Internet Store: A Cross-Cultural Validation," Journal of Computer-Mediated Communication (5:2), 1999, pp. 1-35. Jarvenpaa, S.L., Tractinsky, N . , and Vitale, M . "Consumer trust in an Internet store," Information Technology and Management (1:1-2), 2000, pp. 45-71. Jian, J.Y., Bisantz, A . M . , and Drury, C.G. "Foundations for an Empirically Determined Scale of Trust in Automated Systems," International Journal of Cognitive Ergonomics (4:1), 2000, pp. 53-71. Jiang, Z., Wang, W., and Benbasat, I. "Multimedia-based interactive advising technology for online consumer decision support," Communications of the ACM), forthcoming, Johnson, C.A., "The North American Consumer: Online Retail Update," Forrester Research, 2005. Kahn, D., Pace-Schott, E., and Hobson, J.A. "Emotion and Cognition: Feeling and Character Identification in Dreaming," Consciousness and Cognition (11:1), 2002, pp. 34-50. Kaplan, B., and Duchon, D. "Combining Qualitative and Quantitative Methods in Information Systems Research: A Case Study," MIS Quarterly (12:4), 1988, pp. 571-587. -194-References Kaplan, S.E., Reneau, J.H., and Whitecotton, S. "The Effects of Predictive Ability Information, Locus of Control, and Decision Maker Involvement on Decision Aid Reliance," Journal of Behavioral Decision Making (14:1), 2001, pp. 35-50. Kauffman, R.J., March, S.T., and Wood, C A . "Design Principles for Long-Lived Internet Agents," International Journal of Intelligent Systems in Accounting, Finance, and Management (9:4), 1999, pp. 217-236. Keeney, R., and Raiffa, H. Decisions with Multiple Objectives: Preferences and Value Trade-Offs, John Wiley, New York, 1976. Kieras, D. "A Guide to GOMS Model Usability Evaluation Using N G O M S L , " In Handbook of Human-Computer Interaction, M . G. Helander, T. K. Landauer and P. V . Prabhu (eds.), Amsterdam: North-Holland, 1997, pp. 733-766. Kiesler, S., and Sproull, L. "'Social' Human - Computer Interaction," In Human Values and The Design of Computer Technology, B . Friedman (ed.) CSLI Publications, Stanford, C A , 1997, pp. 191-199. Kim, J., and Yoo, B. "Toward the Optimal Link Structure of the Cyber Shopping Mall ," International Journal of Human-Computer Studies (52:3), 2000, pp. 531-551. King, M.F. , and Hi l l , D.J. "Electronic Decision Aids: Integration of a Consumer Perspective," Journal of Consumer Policy (17:2), 1994, pp. 181-206. Komiak, S. "The Impact of Internalization and Familiarity on Trust and Adoption of Recommendation Agents," unpublished Dissertation, the University of British Columbia, 2003. Komiak, S., and Benbasat, I. "The Impact of Internalization and Familiarity on Trust and Adoption of Recommendation Agents," unpublished Working Paper 02-MIS-006, University of British Columbia, 2002. Komiak, S., and Benbasat, I. "The Formation of Trust and Distrust in Recommendation Agents in Repeated Interactions: A Process-tracing Analysis," Proceedings of the 5th International Conference on Electronic Commerce, Pittsburgh, Pennsylvania, 2003, pp. 287-293. Komiak, S., Wang, W., and Benbasat, I. 
"Comparing Customer Trust in Virtual Salespersons with Customer Trust in Human Salespersons," Proceedings of the 38th Hawaii International Conference on System Sciences (HICSS38), Big Island, Hawaii, 2005, pp. 1-9. Komiak, S., Wang, W., and Benbasat, I. "Trust Building in Virtual Salespersons versus in Human Salespersons: Similarities and Differences," e-Service Journal (4:1), forthcoming, -195-References Komiak, X.S. , and Benbasat, I. "Understanding Customer Trust in Agent-mediated Electronic Commerce, Web-mediated Electronic Commerce, and Traditional Commerce," Information Technology and Management (ITM) (5:1&2), 2004, pp. 181-207. Kopp, R.J. "Ethical Issues in Personal Selling and Sales Force Management," In Ethics in Marketing, N . C. Smith and J. A . Quelch (eds.), Richard D. Irwin, Homewood, Illinois, 1993, pp. 539-556. Koufaris, M . "Applying the Technology Acceptance Model and Flow Theory to Online Consumer Behavior," Information Systems Research (13:2), 2002, pp. 205-223. Koufaris, M . , and Hampton-Sosa, W. "The development of initial trust in an online company by new customers," Information and Management (41:3), 2004, pp. 377-397. Kramer, R . M . "Trust and Distrust in Organizations: Emerging Perspectives, Enduring Questions," Annual Review of Psychology (50), 1999, pp. 569-598. Kramer, R .M. , Brewer, M.B. , and Hanna, B.A. "Collective Trust and Collective Action: The Decision to Trust as a Social Decision," In Trust in Organizations: Frontiers of Theory and Research, R. M . Kramer and T. R. Tyler (eds.), Sage, Thousand Oaks, C A , 1996, pp. 357-389. Kwon, I.-W.G., and Suh, T. "Factors Affecting the Level of Trust and Commitment in Supply Chain Relationships," Journal of Supply Chain Management (40:2), 2004, pp. 4-14. Lai, H. , and Yang, T.-C. "A System Architecture for Intelligent Browsing on the Web," Decision Support Systems (28:3), 2000, pp. 219-239. Landis, J.R., and Koch, G.G. "The Measurement of Observer Agreement for Categorical Data," Biometrics (33), 1977, pp. 159- 174. Lederer, A . L . , Maupin, D.J., Sena, M.P., and Zhuang, Y . "The technology acceptance model and the World Wide Web," Decision Support Systems (29:3), 2000, pp. 269-282. Lee, J., and Moray, N . "Trust, Control Strategies, and Allocation of Functions in Human-Machine Systems," Ergonomics (35:10), 1992, pp. 1243-1270. Lee, M.K.O. , and Turban, E. "A Trust Model for Consumer Internet Shopping," International Journal of Electronic Commerce (6:1), 2001, pp. 75-91. Lee, W.-P., Liu, C.-H., and Lu, C.-C. "Intelligent Agent-based Systems for Personalized Recommendations in Internet Commerce," Expert Systems with Applications (22:4), 2002, pp. 275-284. Lee, Y . , Kozar, K .A . , and Larsen, K.R.T. "The Technology Acceptance Model: Past, Present, and Future," Communications of the Association for Information Systems (12:Article 50), 2003, pp. 752-780. -196-References Lerch, F.J., Prietula, M.J. , and Kim, J. "Measuring trust in machine advice," Working Paper, School of Business Administration, Eastern Michigan University, Ypsilanti, Michigan, USA, 1993. Lerch, F.J., Prietula, M.J., and Kulik., C.T. "The Turing Effect: The Nature of Trust in Expert System Advice," In Expertise in Context: Human and Manchine, P. J . Feltman, K. M . Ford and R. R. Hoffman (eds.), MIT Press, Cambridge, M A , 1997, Lester, J.C., and Stone, B.A. 
"Increasing Believability in Animated Pedagogical Agents," Proceedings of the Proceeding of the 1st International Conference on Autonomous Agents 97, Marina Del Rey, California, USA, 1997, pp. 16-21. Lewicki, R.J., and Bunker, B.B. "Trust in Relationships: A Model of Trust Development and Decline," In Conflict, Cooperation, and Justice, B. B. Bunker and J. Z. Rubin (eds.), Jossey-Bass, San Francisco, 1995, pp. 133-173. Lewis, J.D., and Weigert, A . "Trust as a Social Reality," Social Forces (63:4), 1985, pp. 967-985. Limayem, M . , and DeSanctis, G. "Providing Decisional Guidance for Multi-Criteria Decision Making in Groups," Information Systems Research (11:4), 2000, pp. 386-401. Lucente, M . "Conversational interfaces for e-commerce applications," Communications of the ACM(43:9), 2000, pp. 59-61. Luhmann, N . Trust and Power, Wiley, New York, 1979. Ma, Q., and Paul, S. "A Discussion of Web-based Consumer Decision Support Systems (WCDSS) and their Effectiveness," Proceedings of the Proceedings of the Ninth Americas Conference on Information Systems, Tampa, Florida, USA, 2003, pp. 2353-2361. Maes, P. "Agents that Reduce Work and Information Overload," Communications of the ACM (37:7), 1994, pp. 31-40. Maes, P., Guttman, R.H., and Moukas, A . G . "Agents that Buy and Sell," Communications of The ACM (42:3), 1999, pp. 81-91. Mahoney, L.S., Roush, P.B., and Bandy, D. "An investigation of the effects of Decisional Guidance and cognitive ability on decision-making involving uncertainty data," Information and Organization (13:2), 2003, pp. 85-110. Mao, J., and Benbasat, I. "Contextualized access to knowledge: theoretical perspectives and a process-tracing study," Information Systems Journal (8:3), 1998, pp. 217-239. Mao, J., and Benbasat, I. "The Use of Explanations in Knowledge-Based Systems: Cognitive Perspectives and A Process-Tracing Analysis," Journal of Management Information Systems (17:2), 2000, pp. 153-179. -197-References Mao, J., and Benbasat, I. "The Effects of Contextualized Access to Explanatory Knowledge on Judgment," International Journal of Human-Computer Studies (55:5), 2001, pp. 787-814. Mathieson, K. "Predicting User Intentions: Comparing the Technology Acceptance Model with the Theory of Planned Behavior," Information Systems Research (2:3), 1991, pp. 173-191. Mayer, R.C., Davis, J.H., and Schoorman, F.D. "An Integrative Model of Organizational Trust," Academy of Management Review (20:3), 1995, pp. 709-734. McKnight, D.H., and Chervany, N . L . "What Trust Means in E-Commerce Customer Relationships: An Interdisciplinary Conceptual Typology," International Journal of Electronic Commerce (6:2), 2001, pp. 35-59. McKnight, D.H., Choudhury, V. , and Kacmar, C. "Developing and Validating Trust Measures for e-Commerce: An Integrative Typology," Information Systems Research (13:3), 2002a, pp. 334-359. McKnight, D.H. , Choudhury, V . , and Kacmar, C. "The Impact of Initial Consumer Trust on Intentions to Transact with a Web Site: A Trust Building Model," Journal of Strategic Information Systems (11:3-4), 2002b, pp. 297-323. McKnight, D.H., Cummings, L . L . , and Chervany, N .L . "Initial Trust Formation in New Organizational Relationships," Academy of Management Review (23:3), 1998, pp. 473-490. McKnight, D.H., Kacmar, C , and Choudhury, V. "Dispositional Trust and Distrust Distinctions in Predicting High- and Low-Risk Internet Expert Advice Site Perceptions," e-Service Journal (3:2), 2004, pp. 35-58. Merchant, K . A . 
Control in Business Organizations, Pitman Publishing, Marshfield, M A , 1984. Meyerson, B., Weick, K .E . , and Kramer, R . M . "Swift Trust and Temporary Groups," In Trust in Organizations: Frontiers of Theory and Research, R. M . Kramer and T. R. Tyler (eds.), Sage Publications, 1996, pp. 166-195. Miksch, S., Cheng, K. , and Hayes-Roth., B. "An Intelligent System for Patient Health Care," Proceedings of the The Proceedings of the first international conference on Autonomous agents, Marina Del Ray, CA, 1997, pp. 458-465. Miller, C.T. "The Role of Performance Related Similarity in Social Comparison of Abilities: A Test of the Related Attributes Hypothese," Journal of Experimental Social Psychology (18:6), 1984, pp. 513-524. Milliman, R.E., and Fugate, D. "Using Trust Transference as a Persuasion Technique: An Empirical Field Investigation," Journal of Personal Selling and Sales Management (8:2), 1998, pp. 1-7. -198-References Mingers, J. "Combining IS Research Methods: Towards a Pluralist Methodology," Information Systems Research (12:3), 2001, pp. 240-259. Montgomery, A . L . , Hosanagar, K. , Krishnan, R., and Clay, K . B . "Designing a Better Shopbot," Working Paper, Carnegie Mellon University, Pittsburgh, 2003. Montgomery, I., and Benbasat, I. "Cost/Benefit Analysis of Computer Based Message Systems," MIS Quarterly (7:1), 1983, pp. 1-14. Moon, J.-W., and Kim, Y . - G . "Extending the T A M for a World-Wide-Web Context," Information and Management (38:4), 2001, pp. 217-230. Moon, Y . "Intimate Exchanges: Using Computers to Elicit Self-Disclosure From Consumers," Journal of Consumer Research (26:4), 2000, pp. 323-339. Moore, G.C., and Benbasat, I. "Development of an Instrument to Measure the Perceptions of Adopting an Information Technology Innovation," Information Systems Research (2:3), 1991, pp. 192-222. Muir, B . M . "Trust Between Humans and Machines, and the Design of Decision Aids," International Journal of Man Machine Studies (27:5-6), 1987, pp. 527-539. Muir, B . M . "Trust in Automation: Part I. Theoretical Issues in the Study of Trust and Human Intervention in Automated Systems," Ergonomics (37:11), 1994, pp. 1905-1922. Muir, B . M . "Trust in Automation: Part I. Theoretical Issues in the Study of Trust and Human Intervention in Automated Systems," Ergonomics (37:11), 1996, pp. 1905-1922. Muir, B . M . , and Moray, N . "Trust in Automation: Part II. Experimental Studies of Trust and Human Intervention in a Process Control Simulation," Ergonomics (39:3), 1996, pp. 429-460. Muramatsu, J . , and Pratt, W. "Transparent Queries: Investigation Users' Mental Models of Search Engines," Proceedings of the Proceedings of the 24th Annual International ACMSIGIR Conference on Research and Development in Information Retrieval, New Orleans, Louisiana, US, 2001, Nass, C , and Lee, K . N . "Does Computer-Synthesized Speech Manifest Personality? Experimental Tests of Recognition, Similarity-Attraction, and Consistency-Attraction," Journal of Experimental Psychology-Applied (7:3), 2001, pp. 171-181. Nass, C.I., Moon, Y. , Morkes, J., Kim, E.Y. , and Fogg, B .J. "Computers Are Social Actors: A Review of Current Research," In Human Values and the Design of Computer Technology, B. Friedman (ed.) CSLI Publications, Stanford, C A , 1997, pp. 137-162. -199-References Nelson, R.R., Todd, P.A., and Wixom, B.H. "Antecedents of Information and System Quality: An Empirical Examination Within the Context of Data Warehousing," Journal of Management Information Systems (21:4), 2005, pp. 199-235. 
Nickerson, R.S. "Why Interactive Computer Systems are Sometimes Not Used by People Who Might Benefit from them," International journal of Human-Computer Studies (51), 1999, pp. 307-321. Nielsen, J., Molich, R., Snyder, C , and Farrell, S. "E-Commerce User Experience," Nielsen Norman Group, Fremont, CA. , 1999. Norman, D.A. "How might people interact with agents," Communications of the ACM (37:7), 1994, pp. 68-71. O'Keefe, R .M. , and McEachern, T. "Web-Based Customer Decision Support Systems," Communications of the ACM (41:3), 1998, pp. 71 - 78. Olson, E.L., and Widing II, R.E. "Are interactive decision aids better than passive decision aids? A comparison With Implications for Information Providers on the Internet," Journal of Interactive Marketing (16:2), 2002, pp. 22-33. Papadopoulou, P., Andreou, A. , Kanellis, P., and Martakos, D. "Trust and Relationship Building in Electronic Commerce," Internet Research (11:4), 2001, pp. 322-332. Pavlou, P. A. "Consumer Acceptance of Electronic Commerce: Integrating Trust and Risk with the Technology Acceptance Model," International Journal of Electronic Commerce (7:3), 2003, pp. 101-134. Payne, J.W. "Taks Complexity and Contigent Processing in Decision Making: An Information Search and Protocol Analysis," Organizational Behavior and Human Performance (16), 1976, pp. 366-387. Payne, J.W., Bettman, J., R., and Johnson, E.J. The Adaptive Decision-maker, Cambridge University Press, New York, 1993. Pennington, R., Wilcox, H.D., and Grover, V . "The Role of System Trust in Business-to-Consumer Transactions," Journal of Management Information Systems (20:3), 2004, pp. 197-226. Pereira, R.E. "Optimizing Human-Computer Interaction for the Electronic Commerce Environment," Journal of Electronic Commerce Research (1:1), 2000, pp. 23-44. Podsakoff, P .M. , and Organ, D.W. "Self-Report in Organizational Research: Problems and Prospect," Journal of Management (12:4), 1986, pp. 531-544. -200-References Qiu, L. "How to Provide "Live Help": The Effects of Text-To-Speech Voice and 3D Avatar on Perception of Presence in Electronic Shopping," unpublished Mater's Thesis, The University of British Columbia, 2002. Ratnasingham, P. "The Importance of Trust in Electronic Commerce," Internet Research (8:4), 1998, pp. 313-321. Redmond, W.H. "The Potential Impact of Artificial Shopping Agents in E-Commerce Markets," Journal of Interactive Marketing (16:1), 2002, pp. 56-66. Reeves, B., and Nass, C. The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places, Cambridge University Press, New York, N Y , 1996. Reibstein, D.J. "What Attracts Customers to Online Stores, and What Keeps Them Coming Back?," Journal of the Academy of Marketing Science (30:4), 2002, pp. 465-473. Reichheld, F.F., and Schefter, P. "E-Loyalty: Your Secret Weapon on the Web," Harvard Business Review (78:4), 2000, pp. 105-113. Reilly, W.S.N. "A Methodology for Building Believable Social Agents," Proceedings of the Proceeding of the 1st International Conference on Autonomous Agents 97, Marina del Rey, California, United States, 1997, pp. 114-121. Rempel, J.K., Holmes, J.G., and Zanna, M.P. "Trust in close relationships," Journal of Personality and Social Psychology (49:1), 1985, pp. 95-112. Reneau, J.H., and Blanthorne, C. "Effects of information sequence and irrelevant distractor information when using a computer-based decision aid," Decision Sciences (32:1), 2001, pp. 145-163. Ring, P.S. 
"Fragile and Resilient Trust and Their Roles in Economic Exchange," Business & Society (35:2), 1996, pp. 148-175. Rotter, J.B. "Generalized Expectancies for Interpersonal Trust," American Psychologist (26:5), 1971, pp. 443-452. Ruppel, C , Underwood-Queen, L. , and Harrington, S.J. "e-Commerce: The Roles of Trust, Security, and Type of e-Commerce Involvement," e-Service Journal (2:2), 2002-2003, pp. 25-45. Russo, J.E. "Aiding Purchase Decisions on the Internet," Proceedings of the the Winter 2002 SSGRR International Conference on Advances in Infrastructure for Electronic Business, Education, Science, and Medicine on the Internet, Italy, 2002, Rust, R.T., and Kannan, P.K. "E-service: a new paradigm for business in the electronic environment," Communications of the ACM (46:6), 2003, pp. 37-42. -201-References Saint-Onge, H. "How Knowledge Management Adds Critical Value to Distribution Channel Management," Journal of Knowledge Management Practice (1), 1998, pp. Online available at: http://www.knowinc.com/saint-onge/article 1 .htm. Accessed on Apr. 11, 2000. Shankar, V., Urban, G.L., and Sultan, F. "Online trust: a stakeholder perspective, concepts, implications, and future directions," Journal of Strategic Information Systems (11:3-4), 2002, pp. 325-344. Sheppard, B.H. , Hartwick, J., and Warshaw, P.R. "The theory of reasoned action: a meta-analysis of past research with recommendations for modifications and future research," Journal of Consumer Research (15:3), 1988, pp. 325-343. Shneiderman, B. "Design Trust into Online Experiences," Communications of ACM(43:12), 2000, pp. 57-59. Silver, M.S. "User Perceptions of Decision Support System Restrictiveness: An Experiment," Journal of Management Information Systems (5:1), 1988, pp. 51-65. Silver, M.S. "Decision Support Systems: Directed and Nondirected Change," Information Systems Research (1:1), 1990, pp. 47-70. Silver, M.S. "Decisional Guidance for Computer-Based Support," MIS Quarterly (15:1), 1991a, pp. 105-122. Silver, M.S. Systems That Support Decision Makers: Description and Analysis, John Wiley and Sons, Chichester, 1991b. Simons, H.W., Berkowitz, N . , and Moyer, R.J. "Similarity, Credibilty and Attitude Change: A Review and a Theory," Psychological Bulletin (73:1), 1970, pp. 1-16. Sinha, R., and Swearingen, K. "Beyond Algorithms: An HCI Perspective on Recommender Systems," Proceedings of the ACM SIGIR Workshop on Recommender Systems, 2001, Sinha, R., and Swearingen, K . "The Role of Transparency in Recommender Systems," Proceedings of the CHI '02 extended abstracts on Human factors in computing systems, Minneapolis, Minnesota, USA, 2002, pp. 830-831. Sitkin, S.B., and Roth, N .L . "Explaining the Limited Effectiveness of Legalistic "Remedies" for Trust/Distrust," Organization Science (4:3), 1993, pp. 367-392. Slovic, P. "Perceived Risk, Trust, and Democracy," Risk analysis (13:6), 1993, pp. 675-682. Smith, M.D. "The Impact of Shopbots on Electronic Markets," Journal of the Academy of Marketing Science (30:4), 2002, pp. 446-454. Stewart, K.J. "Trust Transfer on the World Wide Web," Organization Science (14:1), 2003, pp. 5-17. -202-References Straub, D., Limayem, M . , and Karahanna-Evaristo, E. "Measuring System Usage: Implications for IS Theory Testing," Management Science (41:8), 1995, pp. 1328-1342. Stylianou, A.C. , Madey, G.R., and Smith, R.D. "Selection Criteria for Expert Systems Shells: A Socio-Technical Framework," Communications of the ACM (35:10), 1992, pp. 30-48. Svenson, O. 
"Process descriptions of decision making," Organizational Behavior and Human Performance (23:1), 1979, pp. 86-112. Swanson, E.B., and Ramiller, N.C. "Measuring the Effectiveness of Computer-Base Information Systems in the Financial Services Sector," MIS Quarterly (11:1), 1987, pp. 107-124. Sztompka, P. Trust: A Sociological Theory, Cambridge University Press, Cambridge, U K , 1999. Tan, C.-H. "Comparison-Shopping Websites: An Empirical Investigation on the Influence of Decision Aids and Information Load on Consumer Decision-Making Behavior," Proceedings of the Proceedings of the Twenty-Fourth International Conference on Information Systems, Seattle, Washington, 2003, Taylor, S., and Todd, P. "Assessing IT usage: the role of prior experience," MIS Quarterly (19:4), 1995a, pp. 561-570. Taylor, S., and Todd, P.A. "Understanding Information Technology Usage: A test of Competing Models," Information Systems Research (6:2), 1995b, pp. 144-176. Todd, P., and Benbasat, I. "Process Tracing Methods in Decision Support Systems Research: Exploring the Black Box," MIS Quarterly (11:4), 1987, pp. 493-512. Todd, P., and Benbasat, I. "An Experimental Investigation of the Impact of Computer Based Decision Aids on Decision Making Strategies," Information Systems Research (2:2), 1991, pp. 87-115. Todd, P., and Benbasat, I. "The Use of Information in Decision Making: An Experimental Investigation of the Impact of Computer Based Decision Aids," Management Information Systems Quarterly (16:3), 1992, pp. 373-393. Todd, P., and Benbasat, I. "The influence of decision aids on choice strategies under conditions of high cognitive load," IEEE Transactions on Systems, Man and Cybernetics (24:4), 1994a, pp. 537 - 547. Todd, P., and Benbasat, I. "The Influence of Decision Aids on Choice Strategies: An Experimental Analysis of the Role of Cognitive Effort," Organizational Behavior and Human Decision Processes (60:1), 1994b, pp. 36-74. Todd, P., and Benbasat, I. "Evaluating the Impact of DSS, Cognitive Effort, and Incentives on Strategy Selection," Information Systems Research (10:4), 1999, pp. 356-374. -203-References Todd, P., and Benbasat, I. "Inducing Compensatory Information Processing through Decision Aids that Facilitate Effort Reduction: An Experimental Assessment," Journal of Behavioral Decision Making (13:1), 2000, pp. 91 -106. Urban, G.L., Sultan, F., and Quails, W. "Design and Evaluation of a Trust Based Advisor on the Internet," Working paper, MIT, Cambridge, M A , 1999. Urban, G.L., Sultan, F., and Quails, W.J. "Placing Trust at the Center of Your Internet Strategy," Sloan management Review (42:1), 2000, pp. 39-48. Venkatesh, V. , and Davis, F.D. "A theoretical extension of the technology acceptance model: Four longitudinal field studies," Management Science (46:2), 2000, pp. 186-204. Venkatesh, V. , and Morris, M.G. "Why Don't Men Ever Stop to Ask for Directions? Gender, Social Influence, and Their Role in Technology Acceptance and Usage Behavior," MIS Quarterly (24:1), 2000, pp. 115-139. Venkatesh, V . , Morris, M.G. , Davis, G.B., and Davis, F.D. "User Acceptance of Information Technology: Toward a Unified View," MIS Quarterly (27:3), 2003, pp. 425-478. Wang, R.Y. , and Strong, D . M . "Beyond Accuracy: What Data Quality Means to Data Consumers," Journal of Management Information Systems (12:4), 1996, pp. 5-34. West, P .M. , Ariely, D., Bellman, S., Bradlow, E., Huber, J., Johnson, E., Kahn, B. , Little, J., and Schkade, D. "Agents to the Rescue?," Marketing Letters (10:3), 1999, pp. 285-300. 
Whitener, E .M. , Brodt, S.E., Korsgaard, M.A. , and Werner, J .M. "Managers as Initiators of Trust: An Exchange Relationship Framework for Understanding Managerial Trustworthy Behavior," Academy of Management Review (23:3), 1998, pp. 513-530. Wick, M.R., and Slagle, J.R. "An Explanation Facility for Today's Expert Systems," IEEE Expert (4:1), 1989, pp. 26-36. Widing II, R.E., and Talarzyk, W.W. "Electronic information systems for consumers: an evaluation of computer-assisted formats in multiple decision environments," Journal of Marketing Research (30:2), 1993, pp. 125-141. Wilson, E.V., and Zigurs, I. "Decisional Guidance and End-User Display Choices," Accounting, Management and Information Technology (9:1), 1999, pp. 49-75. Worchel, P. "Trust and Distrust," In The Social Psychology of Intergroup Relations, W. G. Austin and S. Worchel (eds.), Brooks/Cole Publishing, Monterey, C A , 1979, pp. 174-187. Xiao, S., and Benbasat, I. "The Impact of Internalization and Familiarity on Trust and Adoption of Recommendation Agents," unpublished Working Paper 02-MIS-006, University of British Columbia, 2002. -204-References Ye, L.R., and Johnson, P.E. "The Impacts of Explanation Facilities on User Acceptance of Expert Systems Advice," MIS Quarterly (19:2), 1995, pp. 157-172. Yoon, S.-J. "The antecedents and consequences of trust in online-purchase decisions," Journal of Interactive Marketing (16:2), 2002, pp. 47-63. Zaichkowsky, J.L. "Measuring the Involvement Construct," Journal of Consumer Research (12:3), 1985, pp. 341-352. Zaichkowsky, J.L. "The personal involvement inventory: Reduction, revision, and application to advertising," Journal of Advertising (23:4), 1994, pp. 59-70. Zucker, L . -G. "Production of trust: Institutional sources of economic structure, 1840-1920," Research in Organizational Behavior (8), 1986, pp. 53-111. -205-
