UBC Theses and Dissertations

The impact of internalization and familiarity on trust and adoption of recommendation agents. Komiak, Sherrie Yi Xiao, 2003.
The Impact of Internalization and Familiarity on Trust and Adoption of Recommendation Agents

by Sherrie Yi Xiao Komiak
B.Eng., Tsinghua University, 1994
M.Econ., Fudan University, 1997

A THESIS SUBMITTED IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in THE FACULTY OF GRADUATE STUDIES (Sauder School of Business)

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
June 15, 2003
© Sherrie Y. X. Komiak, 2003

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Abstract

Computer agent technology is a useful technology for a customer to deal with information overload and search complexity in e-commerce. This study focuses on how a product-brokering recommendation agent (RA) can be better designed so that a customer will be more likely to trust and then adopt the RA for the customer's purchase decision making. Our research model predicts that internalization and familiarity will increase both cognitive trust and emotional trust in an RA, which will then increase a customer's intention to adopt the RA either as a delegated agent or as a decision aid. Internalization refers to a customer's perception of how well an RA represents the customer's real needs.
A customer's familiarity with an RA is expected to increase from the initial interaction through repeated interactions. We conducted a lab experiment to test the model. We also used a process-tracing method to track the processes of trust formation. The results of the quantitative analysis (PLS analysis) support our hypotheses. (1) Internalization significantly increases a customer's intention to adopt an RA via increased cognitive trust and emotional trust. This implies that an RA should make sure a customer's specifications of product attributes (i.e. the expressed needs) actually represent the customer's real needs instead of just taking a customer's specifications as they are, and that the focus of RA design should shift from product filtering toward customer representation. (2) This study shows that RA adoption is not purely rational or cognitive. RA adoption is affected by both cognitive trust and emotional trust. Emotional trust in an RA directly increases a customer's intention to adopt an RA, and emotional trust fully mediates the impact of cognitive trust on the intention to adopt an RA. The effect of emotional trust on the intention to adopt an RA as a delegated agent is higher than its impact on the intention to adopt an RA as a decision aid, while the effect of cognitive trust on the intention to adopt an RA as a delegated agent is lower than its effect on the intention to adopt an RA as a decision aid. This implies that emotional trust should be investigated in IT adoption and trust research. (3) The intention to adopt an RA as a delegated agent and the intention to adopt an RA as a decision aid are different levels of RA adoption that merit separate research. They differ at the conceptual, measurement, and behavioral levels. (4) The total effects of familiarity on the intention to adopt an RA are low, indicating that a customer decides on RA adoption mainly during her initial interaction with an RA.
In addition, we used a process-tracing method to track how customer trust/distrust in an RA is formed. To the best of our knowledge, this is the first empirical study to trace the processes of trust/distrust formation. We developed two coding schemes in terms of: a) the source and the destination of a process, and b) trust/distrust formation via different ways that customers process their knowledge of an RA. Verbal protocols from 49 subjects, speaking 40 minutes per person on average, were collected. The results of the protocol analysis include: (1) Trust and distrust have different structures in terms of cognitive and emotional components. (2) Trust and distrust in an RA are formed via different ways that customers process their knowledge about an RA. (3) An RA needs to be designed in different ways in order to facilitate cognitive and emotional trust while preventing cognitive and emotional distrust. (4) Repeated interactions change neither the structures of trust/distrust nor the proportion of different trust/distrust formation processes. However, the quantities of processes in all categories drop by almost 50% from initial interactions to second interactions, and then reach a stable level. (5) The level of RA internalization changes neither trust/distrust structures nor the way that customers process their knowledge in order to form trust or distrust. (6) Finally, an RA needs to be designed in different ways in order to facilitate cognitive trust vs. emotional trust. Similarly, the RA needs to be designed in different ways in order to prevent cognitive distrust vs. emotional distrust.

Table of Contents

Abstract  ii
Table of Contents  v
List of Tables  viii
List of Figures  ix
Acknowledgements  x
Dedication  xi
Chapter 1. Introduction  1
1.1 Background  1
1.2 Research Objectives and Motivations  4
1.3 Significance of Research  6
1.4 Conduct of Research  8
1.5 Summary of Chapter 1  9
1.6 Organization of Dissertation  9
Chapter 2. Literature Review  10
2.1 Recommendation Agents in Agent-mediated E-commerce  10
2.2 Definitions of Customer Trust  14
2.3 Trust Formation Processes  18
2.4 Summary of Chapter 2  22
Chapter 3. Theoretical Foundations  24
3.1 Trust and Trustworthiness  24
3.2 The Nature of Trust  26
3.3 Trust in an IT and Trust in a Person  28
3.4 Internalization and Trust  29
3.5 Familiarity and Trust  33
3.6 Trust and IT Adoption  34
3.7 Summary of Chapter 3  36
Chapter 4. Research Model and Hypotheses  37
4.1 Research Model  37
4.2 RA Internalization: Low-internalization RA vs. High-internalization RA  37
4.3 Familiarity: Initial Interaction vs. Repeated Interactions  39
4.4 Customer Trust in an RA: a Proposed Trust Model  40
4.5 Intention to Adopt an RA  50
4.6 Summary of Chapter 4  51
Chapter 5. Research Method  53
5.1 Independent Variable: RA Internalization  53
5.2 Independent Variable: Familiarity  56
5.3 Dependent Variables: Measure Development  58
5.4 Protocol Data Collection  58
5.5 Subjects  59
5.6 Experimental Procedures  60
5.7 Summary of Chapter 5  62
Chapter 6. Results of Quantitative Analysis  63
6.1 Statistical Analysis: PLS  63
6.2 Measurement Model  63
6.3 Structural Model  64
6.4 Summary of Chapter 6  67
Chapter 7. Results of Protocol Analysis  69
7.1 Building Coding Schema  69
7.2 Coding  82
7.3 Protocol Analysis Results  86
7.4 Summary of Chapter 7  92
Chapter 8. Conclusions  94
8.1 Theoretical and Empirical Contributions  94
8.2 Managerial Implications  102
8.3 Limitations  106
8.4 Suggestions for Future Research  108
References  111

List of Tables

Table 1: Four Types of Recommendation Agents  136
Table 2: Examples of Recommendation Agents  137
Table 3: Experimental Design  138
Table 4: Subjects Who Thought Aloud  139
Table 5: Definitions and Measures for Independent Variables  140
Table 6: Definitions and Measures for Dependent Variables  141
Table 7: Definitions and Measures for Control Variables  143
Table 8: Reflective Constructs: Loadings and t-statistics  145
Table 9: Correlations of Latent Constructs  146
Table 10: Correlations between Measures and Latent Variables  147
Table 11: The Structural Model  148
Table 12: Tests for Mediating Effects  149
Table 13: Protocol Analysis: Overview  151
Table 14: The Results of Written Protocol Analysis  152
Table 15: Verbal Protocols: the Structures of Trust and Distrust  159
Table 16: Verbal Protocols: Trust and Distrust Formation Processes  160
Table 17: Verbal Protocols: the Structure of Trust in Repeated Interactions  161
Table 18: Verbal Protocols: the Structure of Distrust in Repeated Interactions  162
Table 19: Verbal Protocols: Trust Formation Processes in Repeated Interactions  163
Table 20: Verbal Protocols: Distrust Formation Processes in Repeated Interactions  164
Table 21: The Effects of RA Internalization on the Structures of Trust and Distrust  165
Table 22: The Effects of RA Internalization on Trust/Distrust Formation Processes  166
Table 23: Regression Analysis  167

List of Figures

Figure 1: The Research Model  128
Figure 2: Results of PLS Analysis  129
Figure 3: Two Judges Coded the Protocols Independently  130
Figure 4: Trust in Four Experimental Groups  131
Figure 5: Distrust in Four Experimental Groups  132
Figure 6: Trust Formation Processes in Four Groups  133
Figure 7: Distrust Formation Processes in Four Groups  134
Figure 8: The Scatter Plot  135

Acknowledgements

I am indebted to my advisor, Dr. Izak Benbasat, for his support, help, and guidance throughout the dissertation process. He has fulfilled his role as an advisor in every way desired. The suggestions provided by the other members of my dissertation committee, Dr. Charles Weinberg and Dr. Albert Dexter, are gratefully acknowledged. The suggestions from my dissertation committee have tremendously improved the quality of this manuscript. I also thank Weiquan Wang for his help with my protocol analysis and his feedback on building the trust model and the coding schemes. I thank Wynne Chin for allowing me to use PLS-Graph version 3.0 to conduct my quantitative data analysis. I appreciate the help of Lingyun Qiu and Kevin Chen with conducting my experiments and the help of Palash Bera and David Hood with verbal protocol transcription. I thank Dr. Jai-Yeol Son, Dr. Paul Chwelos, Dr. Carson Woo, Paul Komiak, Ron Cenfetelli, Nanda Kumar, Jack Zhenghui Jiang, Genevieve Bassellier, David Patient, and Dongmin Kim for their suggestions about this dissertation. Special thanks also to all the subjects who participated in my experiments.

Dedication

This dissertation is dedicated to my husband, Paul Komiak, my parents, Zhidong Xiao and Longmei Chen, and my brother, Xing Xiao, for their encouragement and support during my Ph.D. studies.

CHAPTER 1: INTRODUCTION

1.1 Background

As firmly established companies venture an increasing proportion of their activities and business into online environments, and many new businesses emerge centered around the Internet and the opportunities it has introduced, the growth of new electronic media, from the perspective of online customers, does not simply entail an increase in shopping opportunities, but also a challenge to digest all the information online.
One way for customers to improve their success in online shopping is to use computer agents (Maes, Guttman and Moukas, 1999; Moukas, Zacharia, Guttman and Maes, 2000; Nwana, Rosenschein, Sandholm, Sierra, Maes and Guttmann, 1998). Agent-mediated electronic commerce (e-commerce) encompasses buying and selling activities conducted on the Internet through interactions between a human customer and a customer agent (Komiak and Benbasat, forthcoming). Whereas a computer agent, in general, is a software entity that has been given sufficient autonomy and intelligence to enable it to carry out specific tasks with little or no human supervision (Nwana et al., 1998), a customer agent is more specifically a personalized computer agent that takes a customer's needs as its own preferences when it performs decision-making on behalf of the customer. In agent-mediated e-commerce, various customer agents automate processes or provide assistance to customers during several stages of the shopping process (Maes et al., 1999). Customers give instructions to customer agents, which then gather and process product and company information through the Internet, and may additionally make purchase decisions and process payments on behalf of the customer. Customer agents are personalized, autonomous, and intelligent computer agents (Maes et al., 1999). This dissertation focuses on product-brokering Recommendation Agents (RAs), a type of personalized computer agent that advises online customers about what to buy (product-brokering), based on each customer's personal needs (Ansari, Essegaier and Kohli, 2000; Maes et al., 1999). The results of this study can be further generalized and applied to the design and adoption of other customer agents, or even to other personalized technologies in e-commerce, because all the constructs in the research model exist in the relationships between a customer and other personalized technologies.
Given the large number of companies currently operating online, and the sheer number of product offerings from each online company (e.g. the eight million book titles that are available through www.amazon.com), recommendation agents represent a technology that can be very useful for customers, inasmuch as RAs enable customers to cope with information complexity and information overload (Chiasson, Hawkey, McAllister and Slonim, 2002; Hanani, Shapira and Shoval, 2001; Nwana et al., 1998). The goal of using RAs in e-commerce is to automatically direct the most valuable information to customers in accordance with their needs, and to help customers make optimal use of limited time (Hanani et al., 2001). Empirical studies show that the use of RAs reduces transaction costs and improves the effectiveness and efficiency of customers' decision-making (Haubl and Trifts, 2000; Lynch and Ariely, 2000). However, it is still not clear how customers form their attitudes toward RAs, or what considerations are involved in decisions to adopt RAs. It is also unclear how RAs can be better designed to promote RA adoption. An RA may be owned by a website (e.g. www.amazon.com), by a third party (e.g. www.mysimon.com), or by a customer (e.g. customers can buy an RA from www.copernic.com and install it on their own computers). Nwana et al. (1998) suggest that customers have to trust an RA before they adopt it. However, consumers do not automatically instill trust in an RA when they install the RA plug-in programs into their web browsers, or simply because the RA is a website feature used by companies previously deemed trustworthy by consumers. Consumers have to proceed through unique processes to establish a separate and unique measure of trust for that particular RA before they decide whether to adopt it.
E-commerce is shifting the user base for information technology (IT) away from a concentration on organizations or employees within the organizations, and increasingly towards customers external to the organizations (Straub and Watson, 2001). This trend leads in turn to significant value in understanding IT adoption and IT design from the perspective of customers. Therefore, this dissertation contributes to the body of current knowledge by exploring the process of designing RAs, customer trust in RAs (i.e. the attitudes customers take towards RAs), and the decisions made by customers to adopt RAs, all from the perspective of online customers. While prior research on RA design has taken specifications dictated by customers (e.g. preferences expressed for product attributes, such as CPU speed for notebook computers) as the basis for recommendation generation (Ansari et al., 2000; Fisk, 1996; Goker and Thompson, 2001; Sim and Chan, 2000; Smyth and Cotter, 2001; Zahir, 2002), this dissertation posits that the specifications a customer submits might not effectively represent the customer's real needs (e.g. a customer might misguidedly request a notebook computer's RAM to be 128MB while the customer needs to play computer games that would require at least 256MB of RAM). Therefore, it is valuable for an RA to go one step further than blindly accepting specifications from customers, and to internalize each customer's real needs by asking the customer the right questions in the right way. The current study tests and reveals the positive value of designing a sophisticated RA that internalizes a customer's real needs rather than simply taking a customer's specifications at face value, in terms of increased customer trust in an RA and the customer's increased intention to adopt an RA.
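The internalization idea above (probing whether a customer's expressed specifications match the needs implied by her intended use) can be illustrated with a minimal sketch. The use cases, attribute names, and minimum-spec values below are hypothetical illustrations, not details from the study:

```python
# Hypothetical sketch: an RA that "internalizes" a customer's needs by
# checking expressed specs against minimums implied by intended uses.
# The table of minimum specs is an illustrative assumption.
MIN_SPECS_BY_USE = {
    "gaming": {"ram_mb": 256},
    "word_processing": {"ram_mb": 64},
}

def internalize(expressed_specs, intended_uses):
    """Return follow-up questions wherever an expressed specification
    falls below the minimum implied by an intended use."""
    questions = []
    for use in intended_uses:
        for attr, minimum in MIN_SPECS_BY_USE.get(use, {}).items():
            expressed = expressed_specs.get(attr)
            if expressed is not None and expressed < minimum:
                questions.append(
                    f"You asked for {attr}={expressed}, but {use} typically "
                    f"needs at least {minimum}. Would you like to raise it?"
                )
    return questions

# The 128MB-vs-gaming mismatch from the text triggers one follow-up question:
qs = internalize({"ram_mb": 128}, ["gaming"])
```

Rather than filtering products against the 128MB specification as given, such an RA would surface the conflict and ask a clarifying question, which is the shift from product filtering to customer representation that the dissertation argues for.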
In addition, while prior research has predominantly treated IT, including RA technology, as a tool only, this dissertation adopts a theory of "social responses to computers" that posits that people will treat computer agents as social actors rather than as inanimate objects (Nass and Moon, 2000; Reeves and Nass, 1996).

1.2 Research Objectives and Motivations

This dissertation focuses on the effects of RA internalization and repeated interactions on the trust that customers invest in RAs, and the consequent effects of customer trust on RA adoption. More specifically, this research examines the following questions:

•  What are the effects of RA internalization on customer trust? RA internalization refers to a customer's perception of how well an RA represents the customer's real needs. In the context of customers' trust in RAs, the current study differentiates between cognitive trust and emotional trust (see section 4.4.1), and it develops new measures for each of them.

•  What are the effects of familiarity on customer trust? Familiarity refers to the extent to which customers are knowledgeable about the processes by which an RA derives its recommendations.

•  What are the effects of cognitive trust and emotional trust on a customer's intention to adopt an RA as a decision aid or as a delegated agent? The intention to adopt an RA as a delegated agent refers to the extent to which customers are willing to let RAs make decisions about which products to buy on their behalf. The intention to adopt an RA as a decision aid refers to the extent to which customers are willing to let RAs refine the lists of products offered to them. Based on recommendations made by RAs, customers will process the product information and make decisions about their purchases.

•  How does a customer form trust in a recommendation agent?
By opening the black box of trust formation processes, the specific factors involved and the unique relationships formed between human customers and Internet-based RAs will be illuminated and exposed to analysis.

Because RAs are personalized computer agents, the relationship between customers and RAs encompasses both representation and delegation. RAs represent individual customers, and the customers delegate various decisions to the RA. Delegation and representation therefore ought to be considered when designing RAs, because they influence customer trust in the RA and the decisions and actions involved in RA adoption. This dissertation provides a progressive framework and analysis to aid in understanding these interrelationships.

1.3 Significance of Research

Recommendation agents (RAs) are important applications of technology in e-commerce, empowering customers to deal with information complexity and information overload (Ansari et al., 2000; Ariely, 2000; Chiasson et al., 2002; Grenci and Todd, 2002; Haubl and Trifts, 2000; Maes et al., 1999; Moukas et al., 2000; Nwana et al., 1998). There is a need to study RA design and RA adoption from the perspective of online customers, because in e-commerce environments, the main IT user base has shifted from organizations and employees to individual online customers (Straub and Watson, 2001). It is important to know how a customer will adopt an RA. Two levels of IT adoption can be differentiated by the degree of responsibilities and activities customers delegate to RAs. A customer may adopt an RA to automate shopping decisions or to assist in making them (Maes et al., 1999). Therefore, the intention to adopt an RA as a delegated agent can be differentiated from the intention to adopt an RA as a decision aid.
An extensive review of previous studies has indicated that this is the first study to make such a differentiation, and to investigate the effects of customer trust on each of these two levels of RA adoption. For RAs to be effective, customers must trust them, because the customers allow the RAs to make decisions for them, or to help them in their decision-making (Nwana et al., 1998). The investment of a particular customer's trust in an RA is a process shaped by and exhibiting the customer's attitude toward the RA. Prior research into e-commerce has focused on cognitive components of trust in similar situations (Coleman, 1990; Hardin, 2002; Yamagishi, 2001), but it generally has ignored the emotional component of trust discussed in more general psychological and sociological studies (Miller, 2000). This dissertation investigates both cognitive trust and emotional trust (see section 4.4.1), and the effects of emotional trust and cognitive trust on customers' intentions to adopt RAs. This dissertation suggests that customer trust and RA adoption are not based upon purely cognitive and rational processes. In addition, in the course of preparing this dissertation no other study has been identified that empirically traces the processes of trust formation, although prior research has theoretically discussed trust-formation processes (Doney and Cannon, 1997; McKnight, Cummings and Chervany, 1998). Our protocol analysis has revealed that customers form both trust and distrust while they interact with RAs. However, the structure of trust and the structure of distrust are different, the processes of trust formation and distrust formation are likewise different, and moreover, the processes of cognitive trust formation and emotional trust formation are different from each other. This indicates that RAs should be designed both to promote trust formation and to prevent distrust formation, and additionally to promote both cognitive trust and emotional trust.
This study also ventures an exploration of RA design by studying the impact of RA internalization on customer trust. Prior research into knowledge-based systems (KBSs) and expert systems (ESs) has focused on explanations included with KBSs and ESs to help users understand the reasoning and logic employed by the systems (Gregor and Benbasat, 1999). However, in the design of RAs, the converse of these explanations should also be considered, with the objective of designing an RA that can more effectively and accurately understand and represent the real needs of customers. In addition, other prior research on RA design has focused on product filtering (Ansari et al., 2000), whereas this dissertation suggests that the focus of RA design should shift from product filtering to customer representation, inasmuch as RA internalization (i.e. representation of customers by RAs) may significantly increase the trust customers place in RAs and influence their intentions to adopt the RAs.

1.4 Conduct of Research

In the course of the present study, a laboratory experiment has been conducted to examine the effects of RA internalization and customer familiarity with RAs on customers' trust in RAs and on customers' intentions to adopt the RAs. Multiple methods of measurement have been applied to assess various issues addressed by the study. Questionnaire instruments have been used to assess customer trust and intentions to adopt RAs, and process tracing has been used to analyze the processes of trust formation. Subsequently, both quantitative and qualitative data analysis methods have been used to analyze the results of the empirical investigations.
More specifically, the quantitative analysis comprised the PLS tests of the research models; the qualitative analysis, on the other hand, included a protocol analysis of the development of trust and distrust in RAs as various combinations of cognitive trust and emotional trust, and it also considered the foundations of customer trust in the various ways that customers process their knowledge about RAs.

1.5 Summary of Chapter 1

Recommendation agents are a useful technology for e-commerce. This study examines how recommendation agents can be better designed (i.e. by emphasizing RA internalization) to increase customers' cognitive trust and emotional trust in recommendation agents, and in turn it analyzes the influence of increased cognitive trust and emotional trust on the customers' intentions to adopt recommendation agents as delegated agents or as decision aids. This study also investigates the impact of familiarity on the customers' cognitive trust and emotional trust. In addition, this study empirically traces the processes of trust formation in reference to recommendation agents, observing these as the processes of customer attitude changes toward the recommendation agents.

1.6 Organization of Dissertation

Previous studies and reports on RAs and other relevant topics are reviewed in the next chapter, followed by an explanation of the theoretical foundations of the current study in Chapter 3. Chapter 4 then details the research model and the hypotheses used for this study, and Chapter 5 describes the research methods. The results of the quantitative analysis are presented and discussed in Chapter 6, the results of the qualitative analysis are given in Chapter 7, and finally the overarching conclusions and comments on the significance of the current study are given in the eighth chapter.

CHAPTER 2: LITERATURE REVIEW

This chapter reviews previous research. Section 2.1 reviews literature on recommendation agents in agent-mediated e-commerce.
Section 2.2 surveys prior research on various definitions of customer trust, and Section 2.3 reviews prior studies on trust formation processes.

2.1 Recommendation Agents in Agent-mediated E-commerce

Literature on Recommendation Agents: Classifications and Attributes

Various styles and types of computer agents are used to automate or assist with decision-making processes for customers engaged in agent-mediated electronic commerce (Maes et al., 1999; Moukas et al., 2000). RAs are mainly used at the stages of product brokering (which product to buy?) and company brokering (where to buy?). The RAs can be classified into four types, as listed in Table 1, in terms of how they generate their recommendations; some specific examples of product-brokering RAs and company-brokering RAs are then given in Table 2. A variety of previous studies have identified three definitive attributes of computer agents (including recommendation agents): autonomy/agency, intelligence, and mobility (Do, March, Rich and Wolff, 2000; Gilbert, Aparicio, Atkinson, Brady, Ciccarino, Grosof, O'Connor, Osisek, Pritko, Spagna and Wilson, 1995; Moukas et al., 2000; Sinmao, 1999).

•  Autonomy/Agency is the degree of independence that an agent exhibits apart from specific directions given by consumers. RAs and other computer agents should be able to operate on the Internet whether the customers using them are online or offline (Do et al., 2000). Furthermore, a more autonomous agent can cooperate with other agents, while a less autonomous one may not be able to cooperate with others. In addition, an RA working as a decision aid is less autonomous than an RA operating as a delegated agent.

•  Intelligence refers to the amount of learned behavior and possible reasoning capacity that an agent may exhibit. For example, an RA giving shopping advice at an expert level is more intelligent than an RA that simply retrieves product information based on particular customers' preferences.
•  Mobility. A computer agent can be static and reside on a consumer's computer to manage and gather information on that one machine. If a static agent needs information from another computer, it sends a request to the other computer and waits for the information to be remitted. It typically uses a communication mechanism like remote procedure calling. In contrast, computer agents in agent-mediated e-commerce are mobile agents. They are free to travel among multiple hosts in a computer network, gathering information on multiple computers until their tasks are completed.

Literature on Recommendation Agent Design

Prior research on recommendation agent design has focused on the generation of product and company recommendations, facilitated by the employment of different algorithms for product/company/customer filtering based on specifications identified by customers. Fisk (1996) has designed a collaborative-filtering movie RA called MORSE. MORSE makes personalized film recommendations based on each customer's ratings of various films and other similar customers' film ratings. Similarly, Goker and Thompson (2001) have designed the Adaptive Place Advisor, a customized conversational RA that helps customers decide among different restaurants. This RA asks customers to specify the item characteristics they desire, and then constructs a user model, which contains preferences regarding items, attributes, values, value combinations, and diversification. The RA then retrieves suitable items from a case-base. Sim and Chan (2000), on the other hand, have designed and implemented an algorithm that uses multiple criteria to match buyers and sellers based on pre-specified customer profiles.
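The user-to-user collaborative-filtering idea behind MORSE-style RAs (recommend to a customer what similar customers rated highly) can be sketched as follows. The rating data and the cosine-similarity choice are illustrative assumptions for exposition, not details of Fisk's (1996) system:

```python
# Illustrative sketch of user-based collaborative filtering: predict a
# customer's rating of an unseen movie as the similarity-weighted average
# of other customers' ratings. All data here are made up.
from math import sqrt

ratings = {  # customer -> {movie: rating on a 1-5 scale}
    "ann": {"m1": 5, "m2": 1, "m3": 4},
    "bob": {"m1": 4, "m2": 2, "m3": 5},
    "eve": {"m1": 1, "m2": 5, "m3": 2},
    "cal": {"m1": 5, "m2": 1},  # cal has not rated m3 yet
}

def similarity(a, b):
    """Cosine similarity over the movies rated by both customers."""
    common = set(ratings[a]) & set(ratings[b])
    if not common:
        return 0.0
    dot = sum(ratings[a][m] * ratings[b][m] for m in common)
    na = sqrt(sum(ratings[a][m] ** 2 for m in common))
    nb = sqrt(sum(ratings[b][m] ** 2 for m in common))
    return dot / (na * nb)

def predict(customer, movie):
    """Similarity-weighted average of other customers' ratings of `movie`."""
    pairs = [(similarity(customer, other), r[movie])
             for other, r in ratings.items()
             if other != customer and movie in r]
    total = sum(s for s, _ in pairs)
    return sum(s * r for s, r in pairs) / total if total else None

# cal's tastes resemble ann's and bob's, who both liked m3,
# so the predicted rating for m3 is high.
prediction = predict("cal", "m3")
```

Note that this style of RA recommends from behavior alone; the dissertation's point in the surrounding discussion is that such filtering algorithms take the ratings or specifications as given rather than internalizing what the customer really wants.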
Shih, Chiu, Hsu and Lin (2002) have discussed an analysis mechanism based on the correlation among customers, product items, and product features; their study develops an algorithm to classify customers into groups, and recommendations are produced based on this classification. Smyth and Cotter (2000) have commented that mixing collaborative filtering with content-based filtering (constraint satisfaction filtering) seems to bring out the best in both methods. Smyth and Cotter (2001) have designed an Internet-based personalized television-program-listing service that sends registered customers daily TV guides specially compiled to suit their individualized viewing preferences. Zahir (2002) has designed a company-brokering RA by using the ratings from a price-comparison engine and incorporating personal preferences of online customers. One common assumption of the studies shown above is that customers have accurately and adequately expressed what they really want through the preferences they specify in the RAs. These studies focus on product/company/customer filtering algorithms, but they do not focus on how effectively the RAs represent what the customers really want.

Literature on Recommendation Agent Adoption

Some research literature has discussed the consequences of using RAs in e-commerce in theoretical terms only. Notably, Peterson and Merino (2003) have commented that the use of RAs is expected to reduce consumer search costs and to make consumer buying behavior more rational. Subsequent to a careful literature review, Smith (2002) has suggested that while RAs allow customers to search for prices and product characteristics from online retailers, they may benefit consumers at the expense of retailer margins in some circumstances; however, retailers retain numerous opportunities to differentiate their products, leverage brand names, set strategic prices, and reduce the effectiveness of searches by RAs.
Other literature has empirically demonstrated the consequences of adopting RAs in online shopping. Haubl and Trifts (2000), for example, have conducted an experiment showing that the use of RAs reduces consumers' efforts when searching for product information, decreases the size of their consideration sets while increasing their quality, and improves the quality of decisions about purchases. In addition, Lynch and Ariely (2000) have conducted a controlled experiment to demonstrate four primary empirical results. First, for differentiated products like wines, lowering the costs of searches for quality information reduces price sensitivity. Second, price sensitivity for wines common to multiple competing stores increases when cross-store comparison is simplified, as many analysts have assumed. However, easy cross-store comparisons have no effect on price sensitivity for unique wines. Third, making information sources more transparent by lowering all three search costs produces welfare gains for consumers in terms of higher customer satisfaction and higher customer retention probability. Fourth, when it is difficult to draw comparisons between stores, the empirical study indicates that the market share of common wines is proportional to their share of distribution; but when it is easier to compare different stores, the market share generated by wide distribution decreases significantly. All these results suggest incentives for retailers that carry differentiated goods to make information environments maximally transparent, but also impel them to avoid price competition by carrying more unique merchandise.
2.2  Definitions of Customer Trust

Definitions of Customer Trust in Traditional Commerce

Definitions of customer trust in traditional commercial settings can be categorized into three groups, based on prior research: conceptual types (McKnight and Chervany, 2001a), the direct objects of trust (McKnight and Chervany, 2001b), and trust in specific characteristics of trustees (McKnight and Chervany, 2001a). First, regarding conceptual types, most researchers define customer trust as beliefs customers have about the companies with whom they do business (Anderson and Weitz, 1989; Crosby, Evans and Cowles, 1990; Magrath and Hardy, 1989; Morgan and Hunt, 1994; Schurr and Ozanne, 1985; Swan, Trawick and Silva, 1985), while a few other researchers define customer trust as an attitude (Hallen and Sandstrom, 1991; Lagace and Gassenheimer, 1991) or as a disposition (Ganesan, 1994; Moorman, Deshpande and Zaltman, 1993; Strutton, Pelton and Tanner, 1996). Second, regarding the direct objects of trust, most researchers focus on customer trust in sales representatives (Crosby, et al., 1990; Doney and Cannon, 1997; Swan, Bowers and Richardson, 1999), while others address the trust customers invest in specific companies (Anderson and Narus, 1990; Magrath and Hardy, 1989). A few researchers have taken a broader approach, observing multi-faceted customer trust in sales representatives, products, and companies (Plank, Reid and Pullins, 1999).
Regarding trust in the characteristics of a trustee, some researchers have suggested that trust is delineated by three characteristics: in some cases identified as competence, integrity, and benevolence (Crosby, et al., 1990; Mayer, Davis and Schoorman, 1995), and in others as competence, goodwill, and contractual relationships (Sako, 1992). Other researchers have suggested that trust is based on only two characteristics: either a combination of competence and goodwill (Barber, 1983; Blomqvist, 1997; Doney and Cannon, 1997; McAllister, 1995; Sako, 1992), or the pairing of credibility and goodwill (Doney and Cannon, 1997; Ganesan, 1994). Alternatively, trust has been seen to depend on four primary factors: competence, benevolence, integrity, and predictability (McKnight and Chervany, 2001a). The definitions of each of these characteristics vary across particular studies as follows:

•  Trust in competence has been defined in various ways: the belief that a trustee has the ability or power to fulfill the needs of a trustor (McKnight and Chervany, 2001a), trust in a trustee's technical capabilities, skills and know-how (Blomqvist, 1997), trust that technically competent performance will be provided (Barber, 1983), the belief that a trustee is capable of fulfilling particular contracts (Sako, 1992), or trust in the skills, competencies, and perceived expertise that enable a trustee to perform effectively in specific domains (Mayer, et al., 1995).

•  Trust in integrity has been defined as the belief that a trustee commits to agreements in good faith, tells the truth, and fulfills its promises (McKnight and Chervany, 2001a), and in other research it has been defined as a trustor's perception that a trustee adheres to a set of principles that the trustor finds acceptable (Mayer, et al., 1995).
•  Trust in benevolence has been defined as the belief that a trustee cares about a trustor and is motivated to act in the trustor's interest (McKnight and Chervany, 2001a), the extent to which a trustee is genuinely interested in the trustor's welfare and motivated to pursue benefits for both of them (Doney and Cannon, 1997), the belief that a trustee has motives beneficial to the trustor when new conditions arise for which a commitment has not been made in advance (Ganesan, 1994), or the degree to which a trustee is believed to want to serve the interests of a trustor in a dyadic relationship, aside from the trustee's own egocentric profit motive (Mayer, et al., 1995).

•  Trust in goodwill has been defined alternately as the moral responsibility of trustees and their positive intentions toward trustors (Blomqvist, 1997), the extent to which a trustee can be depended on to bear moral responsibility for a trustor's welfare (Barber, 1983), or a trustor's belief in a trustee's open commitment to take initiatives for their mutual benefit while refraining from personally opportunistic behavior, even when given opportunities to act solely in their own interests (Sako, 1992).

•  Trust in credibility has been defined as a trustor's expectation that a trustee's word or written statement is reliable (Doney and Cannon, 1997), or a trustor's belief that a trustee has the expertise to perform the job satisfactorily (Ganesan, 1994).

•  Trust in predictability has been defined as a trustor's belief that a trustee's actions (good or bad) are consistent enough that the trustor can forecast them in particular situations (McKnight and Chervany, 2001a).

•  Contractual trust is defined as a trustor's belief that a trustee is honest and will fulfill the explicit and implicit requirements specified in contractual agreements (Sako, 1992).
Definitions of Customer Trust in Web-mediated E-commerce

Many researchers have discussed customer trust in web-mediated e-commerce without providing explicit definitions of the term in e-commerce contexts (Ba, Whinston and Zhang, 1999; Chrusciel and Zahedi, 1999; Hoffman, Novak and Peralta, 1999; Jarvenpaa, Tractinsky and Vitale, 2000; Noteberg, Christiaanse and Wallage, 1999; Urban, Sultan and Qualls, 2000). However, in several studies customer trust in web-mediated e-commerce has been defined as a willingness on the part of the customer to take risks (Ba and Pavlou, 2002; Lee and Turban, 2001; Lim, Sia, Lee and Benbasat, 2001), a willingness to believe a trustee (Fung and Lee, 1999), or as beliefs regarding particular characteristics of trustees (Menon, Konana, Browne and Balasubramanian, 1999; Stewart, 1999). These definitions are primarily adapted from definitions of trust developed in studies of conventional commerce.

Definitions of Customer Trust in Agent-Mediated E-Commerce

Although minimal prior research has addressed customer trust in agent-mediated e-commerce, several studies have focused on the use of computer agents in agent-mediated e-commerce (Doorenbos, Etzioni and Weld, 1997; Maes, et al., 1999; Moukas, et al., 2000; Shardanand and Maes, 1995), the design of computer agents (Ansari, et al., 2000; Zacha, Moukas and Maes, 1999), the effects of computer agents on customer decision-making quality (Haubl and Trifts, 2000), and the role of computer agents in product prices and competition (Crowston and MacInnes, 2001; Lynch and Ariely, 2000). Nwana, et al. (1998) have suggested that customers will trust computer agents only if they are assured that the agents will not compromise private information or deviate beyond their constraints; computer agents should be reliable systems that do not act unreasonably, and customers should always retain the right to approve purchases and agreements initiated by the agent.
Urban, Sultan and Qualls (1999) have reported that virtual personal advisors (i.e. RAs) can gain as much customer trust as dealers or salespeople may receive in conventional commercial situations. However, although Nwana et al. (1998) and Urban et al. (1999) have suggested a high potential for computer agents to earn customer trust, they have not explicitly defined customer trust in agent-mediated e-commerce, nor have they discussed the antecedents or consequences of customer trust.

2.3  Trust Formation Processes

Two notable papers have theoretically discussed the processes of trust formation, but no prior study has been found that empirically traces the processes of trust building. The first of these theoretical discussions, Doney and Cannon (1997), has identified five distinct processes by which trust can be developed in business relationships:

•  Calculative process: This process evolves when individuals or organizations calculate the costs and rewards of other parties cheating or remaining in relationships (Lindskold, 1978). To the extent that the benefits of cheating do not exceed the costs of being caught (factoring in the likelihood of being caught), a particular party may infer that it would be contrary to another party's best interest to cheat, and therefore the latter may be trusted (Akerlof, 1970). For example, businesses may pay premium prices for their purchases to ensure high levels of quality (Rao and Bergen, 1992). Essentially, buyers raise the costs of cheating, because suppliers caught acting "untrustworthy" may lose streams of revenue from future purchases.

•  Prediction process: The prediction process of developing trust relies on one party's ability to forecast another party's behavior. Because trust requires an assessment of a given party's credibility and benevolence, the individual or business making the assessment must have information about the other party's past behavior and promises.
Repeated interaction enables the party to interpret prior outcomes better, and therefore provides a basis for assessing predictability.

•  Competence process: The competence process, which may also be referred to as the capability process, involves determining a party's ability to meet its obligations, thereby focusing primarily on the credibility component of trust. For example, a salesperson might promise a customer prompt delivery, despite a supply being on allocation due to shortages. However, if the customer doubts whether the salesperson has the clout needed to elevate their order within a queue, the customer will be reluctant to trust the salesperson's word.

•  Goodwill process: Trust also emerges through interpretation and assessment of a party's motives. In the goodwill process, also referred to as the intentionality process, trustors interpret others' words and behavior in order to infer their intentions (Lindskold, 1978). People or groups motivated to help or reward a subject that is venturing trust will be more trusted than those suspected of harboring exploitative intentions. Inferences of benevolent intentions can also result when two parties develop shared values or norms (MacNeil, 1980), which enable one party to understand their partner's objectives and goals better (i.e. what drives their behaviours).

•  Transference process: Strub and Priest (1976) describe the extension pattern of gaining trust as using a third party's opinion of a trustee as a basis for judging whether the trustee is trustworthy. This suggests that trust can be transferred from one trusted source to another person or group with which the trustor has little or no direct experience (Milliman and Fugate, 1988; Strub and Priest, 1976). For example, a new salesperson representing a highly trusted firm would benefit from the buyer's past experience with the supplier firm.
Conversely, distrust can likewise be transferred when an individual lacks other information about someone with whom they consider doing business.

McKnight et al. (1998) have posited that the processes of trust formation are cognitive processes. Two types of processes are suggested.

•  The first type is a categorization process, in which people classify trustees into different groups and form their trust in trustees based on their perceptions of each group. Categorization processes include:

•  Unit grouping process: trustors ascribe trustees to the same category as themselves. Unit grouping quickly leads to high levels of trusting beliefs.

•  Reputation categorization process: trustors attribute characteristics to trustees based on second-hand information about the trustees. If a given trustee has a good reputation, a trustor will quickly develop trusting beliefs about that trustee, even without firsthand knowledge.

•  Stereotyping process: trustors identify particular trustees within a general category of agents. Stereotyping may be done either on a broad level, for example by using gender as a basis for judgment (Orbell, Dawes and Schwartz-Shea, 1994), or on a more specific level, such as the prejudices held for or against occupational groups, e.g. used-car salespeople (Adkins and Swan, 1980-81). Particularly during initial interactions with trustees, trustors may form stereotypes based on qualities of a trustee's voice (e.g. male or female, domestic or foreign; Baldwin, 1992) or based on physical appearances (Dion, Berscheid and Walster, 1972; Riker, 1971). Through the process of positive stereotyping, trustors can rapidly form positive trusting beliefs about trustees by drawing generalizations from a favorable category into which the trustees may be placed. Negative stereotyping, on the other hand, will lead to distrust through similar identification processes.
•  The second type of cognitive process in trust formation is the illusion of control process. An illusion of control is an unrealistically inflated perception by an individual or business of their own personal control in given situations (Langer, 1975; Taylor and Brown, 1988). The related process that helps build trusting beliefs is similar to the processes by which people become overconfident about their own judgments (Langer, 1975; Paese and Sniezek, 1991). After forming tentative beliefs, people seek clues and other indications to confirm those beliefs. Even without evidence, the effort exerted to watch for these indications tends to over-inflate people's confidence in their own judgments (Davis and Kotteman, 1994). Similarly, even a slight effort to confirm tentative trusting beliefs in another party may over-inflate one's confidence that high levels of trusting beliefs are warranted.

2.4  Summary of Chapter 2

This chapter reviews the literature on recommendation agents in e-commerce and on the meaning and formation of customer trust, revealing a paucity of prior research on customer trust in RAs. Therefore, this dissertation examines customer trust in an RA, treating RA design (RA internalization) as one important antecedent of customer trust, and viewing RA adoption as a consequence of customer trust. The literature reviewed in the course of this study has provided a foundation for the research questions and hypotheses in this dissertation.

This chapter illustrates that prior research on RA design has focused on product, company and customer filtering based on specifications dictated by customers; one assumption underlying previous research is that customers accurately and adequately express their own real needs through the specifications they give. This study, therefore, will challenge this assumption and investigate the value of efforts to reduce the gap between customer specifications (i.e.
their expressed needs) and their real needs. A smaller gap is reflected as high internalization of an RA, which means that the RA can represent the customer's real needs better. Prior research focuses on the cognitive component of trust, and it asserts that RA adoption results from cognitive and rational decisions based on the usefulness of RAs; thus, the emotional aspects of trust and RA adoption have been largely ignored. In contrast, this dissertation will examine the role of both emotional and cognitive trust in RA adoption. As noted in this chapter, the present dissertation is in part a response to the lack of prior research empirically tracing the processes of trust formation, and this study will fill that void.

CHAPTER 3: THEORETICAL FOUNDATIONS

This chapter discusses several theories about trust that have been used to develop the research model and the hypotheses presented in Chapter 4. The two related concepts of trust and trustworthiness are differentiated from each other in Section 3.1. Section 3.2 then discusses the literature on the nature of trust, which is used as the basis for a model of trust proposed later in this dissertation (Section 4.4). Section 3.3 addresses two theories claiming that trust in IT is not fundamentally different from trust in individual people, because these claims indicate that studies of interpersonal trust can be drawn upon as the basis for further hypotheses regarding customer trust in RAs (as practical examples of computer agents). In Section 3.4, research about the relationships between similarity and representation on the one hand and trust on the other is addressed, leading to observations that are used in Chapter 4 to rationalize hypotheses regarding the relationships between internalization and customer trust in RAs.
Prior research on familiarity and trust is discussed in Section 3.5 and is used later in this dissertation to justify hypotheses regarding the relationships between familiarity and trust. Finally, the consequences of trust are discussed in Section 3.6.

3.1  Trust and Trustworthiness

Before engaging in a discussion of trust, it is helpful to differentiate trust from trustworthiness. Trustworthiness, on the one hand, is a feature specific to a trustee, while trust, on the other hand, is a psychological state of a trustor. An individual (or business) that is deemed to be trustworthy is, by definition, worthy of confidence, and capable of being depended on by others. In contrast, trust is the state in which a trustor relies on the truthfulness of a trustee, has confidence in a trustee, relies on a trustee, or hopes or expects confidently that a trustee will behave as expected by the trustor. Trust is connected to trustworthiness, in that "the most important and most common ground for trust is the estimate of the trustworthiness of the target on which we are considering whether to confer trust" (Sztompka, 1999: 70). Trustors become involved in various kinds of "trust ratings", based on the knowledge (information) that they have about particular trustees (Coleman, 1990: 185). Sztompka (1999) has suggested two types of trustworthiness (71):

•  Primary trustworthiness: This includes the immanent traits of the trustee and features that the trustee may be said to "possess," e.g. an individual may be honest, or an institution may be efficient.

•  Derived trustworthiness: This entails features attributed to a trustee due to external influences in the context in which the trustee operates, e.g. there may be reliable execution of contracts, which allows trustors to trust business partners not to cheat them.

However, a trustor does not necessarily trust a trustworthy person, because the connections between trust and trustworthiness are conditional.
First, a trustor must possess knowledge about a trustee's trustworthiness. For example, a stranger may be trustworthy, but a trustor may withhold trust because the stranger is not known. In general, trustors usually cannot have full knowledge about the trustworthiness of trustees. Second, trustors must have the ability to judge the trustworthiness of trustees, given that knowledge about the trustees is available. For example, the hiring of a new faculty member in a university implies that the search committee trusts that the candidate is qualified in terms of research and teaching abilities. However, the information relevant to this judgment includes the candidate's education, past publications, teaching evaluations, papers in progress or under review, courses the candidate can teach, etc. Indeed, the complexity of trust ratings may limit the search committee's ability to judge the candidate's trustworthiness. Third, trust is in part independent of trustworthiness because it is more than a mere judgment. It is possible for a trustor to cognitively judge that a person is trustworthy but still feel unwilling to invest trust in this person. For example, Jones has constructed a situation in which she distrusts a salesman who has been recommended by a friend (Jones, 1996: 24). In this hypothetical situation, she believes that the salesman is trustworthy, yet cannot help but adopt an attitude of distrust toward him. Jones's assumption is that she cannot explain why she finds him untrustworthy and that she is, without realizing it, a terrible judge of trustworthiness.

3.2  The Nature of Trust

Blomqvist (1997), McKnight and Chervany (2001), and Dwyer and Lagace (1986) have provided comprehensive summaries and conceptualizations of trust using interdisciplinary approaches.
McKnight and Chervany's model (2001) consists of Disposition to Trust (a concept from psychology research), Institution-Based Trust (a concept from sociology), and Trusting Beliefs and Trusting Intentions (both of which are taken from social psychology). Blomqvist (1997), in comparison, has analyzed various levels and dimensions of trust in the fields of social psychology, philosophy, economics and marketing. He enumerates and differentiates among several related constructs, including competence, credibility, confidence, faith, hope, loyalty, and reliance, and then proposes that trust is an actor's expectation about a party's competence and goodwill. Dwyer and Lagace (1986) have proposed that trust can be conceptualized in one of three ways, each of which has been further analyzed in other studies: 1) trust as a personality trait or generalized expectancy about a trustee's competence and reliability (see also Anderson and Weitz, 1989; Anderson and Narus, 1990; Magrath and Hardy, 1989); 2) trust as a predisposition toward a party, or as a belief that a party will behave in a manner beneficial to others (see Morgan and Hunt, 1994); and 3) trust from the standpoint of behavior that involves risk, and which therefore reflects a willingness on the part of buyers to accept vulnerability for their part in transactions (discussed in Hawes, Rao and Baker, 1993; Mayer, et al., 1995; Moorman, et al., 1993; Swan, et al., 1985; Yamagishi, 2001). Particularly in sociological studies, some researchers have pointed out the existence of emotional elements of trust. Trust is based on a cognitive process (i.e. the perception of good rational reasons for trusting), an emotional base (i.e. a strong positive affect for the trustee), and behavioral enactment (i.e. the undertaking of a risky course of action on the confident expectation that all people involved in the situation will act competently and dutifully) (Lewis and Weigert, 1985).
More specifically, interpersonal trust in close relationships consists of predictability, dependability, and faith (Rempel, Holmes and Zanna, 1985). Faith reflects an emotional security on the part of individuals, which enables them to go beyond the available evidence and thereby to feel assured that their partners will be responsive and caring despite the vicissitudes of an uncertain future (Rempel, et al., 1985). Jones (1996) also suggests the existence of the emotional component of trust by defining trust as a kind of emotional attitude.

3.3  Trust in an IT and Trust in a Person

Two theories have claimed that trust in IT is not fundamentally different from trust in individual people; therefore, prior studies of interpersonal trust can be drawn upon as the basis for the hypotheses regarding customer trust in RAs in Chapter 4.

Reeves and Nass (1996): The Theory of Social Responses to Computers

The research model used in this dissertation applies the theory of social responses to computers, which argues that individuals treat computers like real people (Reeves and Nass, 1996). For example, customers may accurately recognize personality cues in a computer-generated voice (e.g. indicating extroversion or introversion) on a book-buying web site, and they may like and trust a computer voice more when it exhibits a personality similar to their own (Nass and Lee, 2001). A series of experimental studies has also demonstrated that people apply social rules and expectations to computers (Nass and Moon, 2000). Furthermore, people respond to computers as independent social actors (Sundar and Nass, 2000). Therefore, the present research model posits that customers will treat RAs like virtual salespeople working on behalf of the customers in e-commerce environments.
Sztompka (1999): Trust in Technology and Trust in a Person

Sztompka (1999) suggests that the difference between trust in a person and trust in technology is "not so striking and fundamental" (42), inasmuch as behind all human-made technologies there stand people who operate and control the technologies, and it is the people whom a trustor ultimately endows with trust. Trustors may be acquainted with the people behind the technologies, they may simply imagine the operators, or they may have some information about the operators. Therefore, trust in another person and trust in technology operate according to the same logic, because behind trust in a person or a technology, "there looms the primordial form of trust - in people, and their actions" (Sztompka, 1999: 46). Appearances notwithstanding, both a person and a technology are reducible to human actions, and "we ultimately trust human actions, and derivatively their effects, or products" (Sztompka, 1999: 46). Thus, in the case of trust in technology, "we trust those who design the technology, those who operate them, and those who supervise the operations" (Sztompka, 1999: 46).

3.4  Internalization and Trust

An RA with high internalization is one that customers perceive as effectively applying the customers' real needs as its own preferences. Therefore, when an RA has higher internalization, customers perceive more similarity between their needs and the RA's preferences, and they perceive that the RA encapsulates and serves their needs better. This subsection discusses the extant literature concerning similarity and representation and their effects on trust. Section 4.2 will subsequently use the literature discussed in this subsection to rationalize hypotheses regarding the relationships between internalization and customer trust in RAs.
Literature about Similarity and Trust

"People tend to trust others who are similar to them and to distrust those who are dissimilar from them" (Earle and Cvetkovich, 1995: 17). Put another way, people will trust other people who have had similar experiences to their own (Doney and Cannon, 1997), who hold similar values and norms (Armstrong and Yee, 2001; Fukuyama, 1995; MacNeil, 1980; Siegrist, Cvetkovich and Roth, 2000), and who pursue similar goals (Kramer, Brewer and Hanna, 1996). Furthermore, customers may recognize a computer's personality type, and therefore they may trust a particular computer more if the computer has a personality similar to their own (Nass and Lee, 2001; Nass, Moon, Fogg, Reeves and Dryer, 1995). In addition, similarity leads to attraction (Crosby, et al., 1990; Siegrist, et al., 2000), which in turn leads to customer trust in a salesperson (Doney and Cannon, 1997; Nicholson, Compeau and Sethi, 2001; Swan, Trawick, Rink and Roberts, 1988). The rationale for this tendency could be that "we are merely better at predicting the behavior of those most like ourselves" (Hardin, 1993: 512). This can also be stated more dramatically: "Not able to predict the future conduct of those who are different from us, we react to such uncertainty with suspicion. An extreme case of that is xenophobia, a priori distrust of strangers" (Sztompka, 1999: 80).

McKnight, Cummings, and Chervany (1998): Unit Grouping

McKnight et al. (1998) have suggested that trustors will place trustees in the same categories as themselves, and that they will tend to trust trustees with whom they have drawn this connection. The reason is that one group member will be more likely to form trusting beliefs toward another group member, due to shared goals and values (Kramer et al., 1996).
For example, Zucker, Darby, Brewer and Peng (1996) have found that common membership in an organization generates the trust needed for scientists to collaborate on research. Similarly, Brewer and Silver (1978) have found that people perceive members of the groups to which they belong to be more trustworthy than others excluded from their groups. These studies provide evidence that unit grouping quickly leads to high levels of trusting beliefs (McKnight et al., 1998). As an example of the rapid effects of unit grouping, in a study of prospective dating couples who had never met, Darley and Berscheid (1967) have found that a subject's knowledge that they would be paired with another subject tended to enhance beliefs about the other subject's characteristics. Given these rapid effects, the unit grouping process is particularly helpful for building trust during initial interactions between trustors and trustees (McKnight, et al., 1998).

Hardin (2002): Trust as Encapsulated Interest

Hardin suggests that one individual trusts another because the trustee encapsulates the trustor's interests (Hardin, 2002: 24). When a trustor trusts a trustee, in the sense of believing that the trustee's interests encapsulate the trustor's interests at least with respect to a particular matter to which the trustor trusts the trustee, it is natural that the trustor and the trustee share interests to some extent (Hardin, 2002: 144). However, the trustor and the trustee may share interests in two quite different ways. First, they may merely share interests causally, inasmuch as behavior that serves the trustor's interests serves the trustee's interests as well. Alternatively, trustees may genuinely adopt the interests of trustors as their own interests. The second case is trust accounted for as encapsulated interest (Hardin, 2002: 144). If trustees virtually assume trustors' interests as their own interests, the trustors can trust the trustees almost completely.
The shared interests are then virtually a purely coordinated interaction. There might be problems of information and understanding, but motivation will not be a source of problems (Hardin, 2002: 144).

Kelman (1958): The Theory of Attitude-Change Processes

Kelman (1958) has proposed a theory of attitude-change processes that distinguishes three levels of attitude change. A deep-level attitude change occurs when an individual accepts an influence because the content of the induced behavior (the ideas and the actions of which it is composed) is intrinsically rewarding. In these cases, individuals adopt the induced behavior because it is congruent with their value systems. They may consider the behavior useful for the solution of particular problems, or they may find it congenial to their needs. Thus, the satisfaction derived from such an attitude change is due to the content of the new behavior. Individuals who adopt induced behavior through deep-level attitude changes tend to perform the behavior whenever it is relevant, regardless of surveillance or salience. In other words, private acceptance of induced behavior results from deep-level attitude changes, and deep-level attitude changes result from individuals' perceptions that the induced behavior is congruent with their needs.

3.5  Familiarity and Trust

In the development of trust, "the familiarity of the trustee is undoubtedly a vital factor" (Luhmann, 1979: 33). Regarding the relationship between familiarity and trust, however, the results of prior research are equivocal. Some studies have suggested that familiarity increases trust. Familiarity with what is going on, why it is happening, and the parties involved may create trust in business relationships (Kumar, 1996). Familiarity increases inter-firm trust (Gulati, 1995), interpersonal trust (Rempel et al., 1985), and customer trust in online vendors (Gefen, Karahanna and Straub, 2003).
Through repeatedly making promises and delivering on them, a salesperson may gain the confidence of customers (Doyle and Roth, 1992; Swan et al., 1985). In the context of e-commerce, an empirical study has demonstrated that familiarity with a website increases customer trust in the online vendor providing the site (Gefen, 2000). In addition, familiarity increases the "illusion of control," fostering an unrealistically inflated perception of personal control (Langer, 1975; Taylor and Brown, 1988), and the illusion of control in turn encourages the development of trust (McKnight et al., 1998). The rationale that familiarity builds trust lies in the observation that "familiarity opens access to relevant information, and also diminishes the chances of manipulation and deceit" (Sztompka, 1999: 81). Familiarity allows trustors to assess the consistency of trustees' actions over time, and therefore to judge "whether trustworthy performance is continuous, typical, and in character" (Sztompka, 1999: 77). In short, "the better and longer we are acquainted with somebody, and the more consistent the record of trustworthy conduct, the greater our readiness to trust" (Sztompka, 1999: 72).

However, other research has suggested that the impact of familiarity on customer trust may be neutral. Familiarity denotes reliable expectations, whether favorable or unfavorable (Luhmann, 1979). Therefore, customers' trust in a trustee will increase or decrease as a function of their cumulative interactions (Kramer, 1999). Finally, two studies have found that trust is usually set at an elevated level in initial interactions (Kramer, 1994; McKnight et al., 1998). These results tend to disagree with the viewpoint that trust grows over time (i.e., the assumption that familiarity builds trust).

3.6 Trust and IT Adoption
IT adoption (e.g. RA adoption) is a positive action by customers toward IT; it is the customers' decision to depend on IT.
In the case of product-brokering RAs, RA adoption is a customer's decision to depend on an RA for decisions about which products to buy. The literature in this subsection has noted that trust leads to consequences that include IT adoption.

Sztompka (1999): The Functions of Trust
Sztompka (1999) has suggested that endowing individuals or businesses with trust evokes positive actions toward them. The uncertainty and risk surrounding their actions are lowered, and hence "possibilities of action increase proportionally to the increase in trust" (Luhmann, 1979: 40). Trustors then become more open toward trustees, and they become more amenable to engaging in new interactions and to entering into lasting relationships with trustees. Interactions with those parties whom trustors endow with trust are liberated from anxiety, suspicion, and watchfulness, and allow for more spontaneity and openness. Trustors are relieved of the necessity to monitor and control every move of others (Sztompka, 1999: 103). The exact opposite consequences are brought about by distrust. Trustors who distrust others are hesitant to initiate interactions (and therefore may forfeit important opportunities), they carefully check the moves made by others (and therefore remain constantly "on guard"), and they follow safe routines, avoiding any innovations (Sztompka, 1999: 104).

More Literature on the Consequences of Trust
Kelman's (1958) theory of attitude change indicates that customers' intentions to adopt RAs (i.e. the acceptance of RAs) are affected by the customers' trust in RAs (i.e. an attitude toward RAs). In addition, a meta-analysis in the marketing literature has demonstrated that customer trust in salespeople has a beneficial influence on the development of positive customer attitudes, intentions, and behavior toward the salespeople and the company (Swan et al., 1999).
Customer trust in a company's website positively influences customers' attitudes toward the company and customers' willingness to buy (Jarvenpaa et al., 2000; Lim et al., 2001; Wetsch and Cunningham, 1999). Customer trust in computer agents is positively related to the customers' willingness to delegate shopping activities to the agents (Nwana et al., 1998), and it is also positively related to compliance with the agents' recommendations (Gregor and Benbasat, 1999). Trust helps reduce the social complexity consumers face in e-commerce by allowing the consumers to subjectively rule out undesirable yet possible behaviour on the part of the e-vendor, including inappropriate use of purchase information. In this way, trust encourages customers to adopt websites (Gefen et al., 2003).

3.7 Summary of Chapter 3
This chapter has discussed literature on the nature of trust, comparisons between trust in people and trust in technology, the impacts of internalization (arising from perceptions of similarity and representation) on trust, the influences of familiarity on trust, and the consequences of trust. The literature in this chapter will be used to explain the derivation of hypotheses in Chapter 4.

CHAPTER 4: RESEARCH MODEL AND HYPOTHESES
4.1 Research Model
By drawing from prior literature in the fields of MIS (including issues of computer agents, knowledge-based systems, IT adoption, and trust in websites), marketing (particularly regarding customer trust in salespeople), sociology (concerning interpersonal trust), and psychology (regarding cognition, emotion and trust), six constructs at three levels have been organized into a research model for the current study, as depicted in Figure 1. RA internalization and familiarity are expected to affect cognitive trust and emotional trust in RAs, which in turn influence customers' intentions to adopt RAs as delegated agents or as decision aids.
The following subsections discuss each of the constructs depicted in Figure 1. Each is explained, and hypotheses are developed to specify the expected relationships among the constructs. Measure development is then discussed in Section 5.3.

4.2 RA Internalization: Low-internalization RA vs. High-internalization RA
RA internalization refers to customers' perceptions of how well RAs represent the real needs of the customers. Table 5 shows the measurement items. An RA may not effectively represent a customer's real needs. There is a gap between the needs expressed by customers and the customers' real needs when customers express their needs by answering an RA's questions (e.g. regarding the speed of a notebook computer and different types of screen technology). Customers' real needs may result from the customers' desire to play computer games or to run statistical software, although they may express needs that do not match these applications. The gap between expressed needs and real needs may arise because customers are not sure how to appropriately convert their real needs into expressed needs, or because the questions asked by RAs are incomplete, limited, and hard to understand. For example, a customer shopping for a notebook computer might not be sure whether 700 MHz is fast enough for playing computer games, or the customer may not be sure about the impact of increasing CPU speed on price or weight. Alternatively, customers may care about battery life, while this product attribute might be missing from an RA's question list. The customers will then be unable to express this need to the RA. In other situations, customers may be limited in expressing their needs because they cannot fully view a product online, or because they might not understand the technical terms used for particular product attributes.
In sum, limitations in the options and information offered by RAs, together with limitations in the technical knowledge of particular customers, may hamper the ability of customers to accurately express their real needs in a way that enables RAs to act on those needs. In relationships between customers and RAs, the term "internalization" is more accurate than "representation," because internalization more directly articulates the process by which customers identify themselves with RAs. Representation means only that RAs work on behalf of customers, whereas internalization implies that RAs (as agents) internalize the needs of customers (as principals), take those needs as their own preferences while working on the customers' behalf, and thereby represent customers' needs effectively.

RA internalization is worth examining for two reasons. First, prior research on RA design has focused on the generation of product and company recommendations through the employment of different algorithms of product, company and customer filtering based on customers' expressed needs (Ansari et al., 2000; Fisk, 1996; Goker and Thompson, 2001; Shih et al., 2002; Sim and Chan, 2000; Smyth and Cotter, 2001; Zahir, 2002). In contrast, this paper suggests that the focus of RA design should shift from product, company and customer filtering toward customer representation by RAs. This paper suggests that RA designers should be aware of the existence of the gap between customers' expressed needs and the customers' real needs, and that efforts should be put into identifying and reducing this gap. In practice, it is not good enough for RAs simply to ask customers questions and to accept the answers as the basis for generating recommendations. RAs should go one step further, to ensure that the customers' answers accurately represent what they really need.
Second, this paper aims to provide empirical evidence that it is beneficial to increase RA internalization, because internalization will significantly increase customer trust in RAs and promote intentions to adopt RAs. RA internalization is the representation of customers by RAs, and representation is an important component in the relationship between customers and RAs. Theoretical and empirical evidence of the positive effects of RA internalization on customer trust and on RA adoption can therefore justify future efforts to design and develop RAs that attain high levels of internalization.

4.3 Familiarity: Initial Interaction vs. Repeated Interactions
Familiarity refers to the extent to which customers are familiar with the processes by which RAs derive their recommendations. It represents the amount of knowledge customers possess about how RAs perform their tasks, and therefore it is expected to increase with repeated interactions. However, prior research on the impact of familiarity on customer trust has been equivocal (see section 3.5).

4.4 Customer Trust in an RA: A Proposed Trust Model
4.4.1 A Proposed Trust Model
Prior research into the nature of trust (Blomqvist, 1997; Currall, 1990; Doney and Cannon, 1997; Holmes, 1991; Jones, 1996; Kipnis, 1986; Lewis and Weigert, 1985; Luhmann, 1979; Mollering, 2001) provides the foundation for the model of trust developed in the current study, which explicitly differentiates cognitive trust from emotional trust.
• Cognitive trust is defined as a customer's rational expectation that a trustee will have the necessary competence, benevolence, and integrity to be relied upon.
1. Cognitive trust in a trustee's competence: the rational expectation that a trustee will have the ability to fulfill its obligations, as they are understood by the customer.
2. Cognitive trust in a trustee's integrity: the rational expectation that a trustee will make agreements in good faith, tell the truth, and fulfill promises.
3. Cognitive trust in a trustee's benevolence: the rational expectation that a trustee will care about the customer, and that the trustee will intend to act in the customer's interest. (Note: cognitive trust in the benevolence or integrity of an entity may exist only when the entity is viewed as a social actor. For example, people respond to a computer entity as an independent social actor (Sundar and Nass, 2000); therefore, cognitive trust in the integrity of a computer agent exists. In contrast, cognitive trust in a product's integrity does not exist, because people usually will not view a product as a social actor.)
• Emotional trust is defined as the extent to which a trustor feels secure and comfortable about relying on a trustee.

Cognitive trust should be differentiated from emotional trust, because the two aspects of trust are different in nature. Cognitive trust is embedded in the cognition and rationality of customers and grounded in good rational reasons, while emotional trust is an emotional security that enables customers to go beyond the available evidence and to feel assured and comfortable about relying on trustees (Mollering, 2001; Rempel et al., 1985). Therefore, instances of cognitive trust are judgments that are rationally justified, while instances of emotional trust are feelings that are partly rationalized and justified and partly neither rationalized nor justified. For example, the appearance and demeanor of a trustee will evoke spontaneous emotional trust or distrust. Stated in more detail, some aspects of physical appearance, like smiling and aggressive postures, have a biological rationale. Some have symbolic value indicating wealth, social rank, power, and by implication trustworthiness (designer clothes, famous labels worn on display, brand watches, luxurious cars, etc.)
(Sztompka, 1999: 79). However, these instances of emotional trust or distrust may be irrational, because appearance and demeanor do not necessarily indicate that trustees can or will do what trustors want them to do.

Emotional trust is a necessary part of trust for two reasons. First, all social interactions and human thoughts, including trust, include both cognitive and emotional characteristics (Barber, 1983; Lewis and Weigert, 1985). Second, trustors must feel trust (i.e. emotional trust) in addition to judging or thinking about trust (i.e. cognitive trust), due to the "unpredictable and uncontrollable actions" of trustees (Sztompka, 1999: 22-23), due to the necessarily incomplete knowledge trustors possess about trustees (Kollock, 1994; Luhmann, 1979; Mollering, 2001), and due to the nature of trust, inasmuch as "trust is a bet about the future contingent actions of others" (Sandholm, 1999: 25). As acknowledged by Luhmann (1979), "since other people have their own first-hand access to the world and can experience things differently they may consequently be a source of profound insecurity to me" (6); furthermore, "The clues employed to form trust ... do not supply complete information about the likely behavior of the person to be trusted. They simply serve as a springboard for the leap into uncertainty" (33).

Based on prior research (Mayer et al., 1995; McKnight and Chervany, 2001), cognitive trust can be separated into three components: cognitive trust in competence, in integrity, and in benevolence. However, emotional trust does not have corresponding components, because people's feelings tend to refer to a trustee as a whole. When people begin to analyze or assess particular aspects of trustees, trust moves from feeling toward cognition. In addition, our concept of "emotional trust" is mainly based on prior research including Swan et al. (1999), Lewis and Weigert (1985), and Rempel et al.
(1985), none of which differentiate any sub-component of emotional trust.

Emotional trust is different from cognitive trust in benevolence and integrity, inasmuch as emotional trust is an emotional phenomenon, while cognitive trust is a phenomenon of cognition. For example, if a customer thinks that a salesperson cares about customer interests more than about commission because the salesperson discourages the purchase of a particular product that might not perfectly suit the customer, then the customer will develop cognitive trust in the salesperson's benevolence. However, if a customer feels assured and comfortable because a salesperson is good-looking, that is instead an example of emotional trust.

What is new about this proposed trust model?
The model of trust developed in this dissertation is based on, but different from, prior research on trust. First, this trust model differentiates between cognitive trust and emotional trust, a division that is unlike the differentiation among trust in the competence, benevolence, and integrity of trustees drawn in previous studies (Mayer et al., 1995; McAllister, 1995). Second, the differentiation between cognitive trust and emotional trust is dissimilar from the differentiation made by other studies between trust and distrust (Kramer, 1999). Trust and distrust are the favorable and unfavorable expectations ventured by trustors; cognitive trust and emotional trust, on the other hand, are two components of trust. Third, the operative definition of cognitive trust is different from trusting beliefs (McKnight and Chervany, 2001; McKnight, Choudhury and Kacmar, 2002). Although trusting beliefs seem to be primarily cognitive, inasmuch as they are defined as a trustor's confident perception that a trustee has attributes that are beneficial to the trustor (McKnight et al., 2002: 337), they mix cognition with emotion, while the current trust model clearly distinguishes between cognitive trust and emotional trust.
It is useful to maintain this distinction in recognition of the assertion that cognitive trust is different from emotional trust in nature, and that cognitive trust and emotional trust fall along a continuum with potentially asymmetric effects. In addition, theories about trusting beliefs define the emotional aspects of trust as a confidence that other parties have desirable traits, while in the current study emotional trust is manifested in feelings of security and comfort when relying on trustees. In effect, the emotions in the two models are not the same. Finally, McKnight and Chervany (2001) and McKnight et al. (2002) have separated trusting beliefs (perceptions about trustee characteristics) from trusting intentions (willingness or intent to depend or rely on another party), while the current study combines them into cognitive trust (the rational expectation that a trustee will have the necessary characteristics to be relied upon).

Fourth, the present construct of emotional trust is different from various definitions of emotional trust in prior research. It is different from Lewis and Weigert's (1985) concept of emotional trust as a strong positive affect toward the trustee; their version of emotional trust is easily confused with the concepts of liking and affect. However, liking and affect are concepts distinct from trust (Chaudhuri and Holbrook, 2001). It is possible for customers to neither like nor dislike particular salespeople, but still to feel secure and comfortable about relying on the salespeople for their shopping decisions, because the salespeople look authoritative. Our concept of emotional trust is also different from earlier concepts of "affect" (Swan et al., 1999; Swan et al., 1988), which is defined as the emotion or affect of a buyer
Our definition of emotional trust omits feelings of insecurity, because trust is a state of favorable expectation. We expand the definition from simply feeling "secure," to combine "secure and comfortable," and we expand the scope of trustee from "salesperson" to include all entities that customers encounter in e-commerce or traditional commerce. These entities can be people or impersonal entities like websites. More importantly, Swan et al. (1999) and Swan et al. (1988) have not explained why cognitive trust and affect are two separate components of customer trust in salespeople. We justify this by relying on several other theories about trust (e.g. Holmes, 1991; Luhmann, 1979; Mollering, 2001). The current definition of emotional trust is also similar to, although not exactly the same as, concepts of "faith" (Rempel et al., 1985). Faith reflects an emotional security on the part of trustors, which enables them to go beyond the available evidence and to feel, with assurance, that their partners will be responsive and caring despite the vicissitudes of an uncertain future (Rempel et al., 1985). Faith is a component of interpersonal trust in close relationships (e.g. marriage) (Rempel, et al., 1985). The present version of emotional trust can be perceived to operate in both interpersonal and impersonal trust; it exists in both close and emotionally distant relationships. In addition, because notions of faith focus on caring and responsiveness, it bears similarity to the current interpretation of cognitive trust in benevolence. However, faith and cognitive trust in benevolence are different, in that faith is an emotional state, while cognitive trust in benevolence is a product of cognition.  45  Fifth, the current trust model is based on trust theories proposed by Mollering (2001), Luhmann (1979), and Holmes (1991), although it differs from these theories. 
Mollering (2001) proposes that trust "stands for a process in which we reach a point where our interpretations are accepted and our awareness of the unknown, unknowable and unresolved is suspended." Luhmann (1979) similarly suggests that there is an incalculable moment in which trust is formed and uncertainty is resolved. Holmes (1991), on the other hand, suggests that trust involves a "leap of faith," in which one sets aside doubt and moves forward into the unknown. However, none of these theories provides an explanation for the ability of people to suspend awareness of the unknown. The current trust model suggests that people may suspend their awareness of the unknown when they are emotionally involved with trustees and when they have developed intense emotional trust, and it asserts that emotional trust can be directly or indirectly based on awareness of the known, or directly based on awareness of the unknown.

4.4.2 Customer Trust in an RA
Customer trust in an RA means that customers trust the RA to recommend the products that best fit customer needs. Based on the trust model proposed in section 4.4.1, customer trust in an RA includes both cognitive trust and emotional trust.
• Cognitive trust in an RA is defined as a customer's rational assessment that an RA will have the competence, benevolence, and integrity to be relied upon. Table 6 shows the definitions and measurement items for cognitive trust in an RA's competence, cognitive trust in an RA's benevolence, and cognitive trust in an RA's integrity.
• Emotional trust in an RA is defined as a customer's feelings of security and comfort about relying on an RA. Table 6 shows the relevant measurement items.

Prior research on customer trust in IT has been predominantly concerned with cognitive trust (Coleman, 1990; Hardin, 2002; Yamagishi, 2001), largely ignoring emotional trust (Miller, 2000). This study investigates both.
The Impact of RA Internalization on Customer Trust in an RA
This study draws significantly on prior research on interpersonal trust in MIS, marketing, sociology, psychology, and communications to develop its hypotheses; based on theories posited by Reeves and Nass (1996) and by Sztompka (1999) (as discussed in section 3.3), trust in people and trust in technology are not fundamentally different.

It is possible that the impact of RA internalization on cognitive and emotional trust is neutral or even negative. For example, a customer may be a product expert who prefers to shop without help from any salesperson, in which case the customer may trust a low-internalization RA more, because the customer does not need the RA's help to ensure that the specifications selected accurately represent real needs, and because the reasoning logic in a low-internalization RA is simpler than that in a high-internalization RA.

However, we choose to hypothesize that the impact of RA internalization on customer trust in RAs (including cognitive and emotional trust) will be positive, for two reasons. First, higher internalization implies a higher perceived similarity between a customer and an RA in terms of making purchase decisions, because the customer perceives that a high-internalization RA better takes the customer's real needs as its own preferences. Prior research, reviewed in section 3.4, has suggested that similarity builds trust. Second, higher RA internalization implies the perception that an RA pays more attention to a customer, because a high-internalization RA may be perceived to work harder to understand a customer than a low-internalization RA. The perception of higher levels of attention then leads to higher levels of trust (Ramsey and Sohi, 1997). High internalization increases customers' cognitive trust in the competence of RAs, because high RA internalization includes the perception that RAs have a high ability to understand customer needs.
High internalization increases cognitive trust in the integrity and benevolence of RAs, because the RAs are perceived to take customer needs as their own preferences, and therefore the RAs may care for the customers' interests and may be motivated to be honest and unbiased in generating product recommendations for the customers. High internalization increases emotional trust because people tend to feel good about other people (or parties) who share goals and values in common with their own. Therefore,
H1: Higher RA internalization will increase cognitive trust in an RA's competence (H1a), benevolence (H1b), and integrity (H1c).
H2: Higher RA internalization will increase emotional trust in an RA.

The Impact of Familiarity on Customer Trust in an RA
This study hypothesizes that the impact of familiarity on customer trust in RAs will be positive for three reasons. First, as shown in section 3.5, the evidence suggesting a positive effect appears to be stronger than the evidence indicating neutral or negative effects. Second, familiarity builds trust because it creates contexts in which to interpret the behaviour of trusted parties (Gefen et al., 2003). With familiarity, customers can also predict RAs' actions better, and predictability builds trust (Doney and Cannon, 1997; McKnight and Chervany, 2001). An empirical study shows that familiarity with the functionality of particular websites increases customer trust in the online vendors operating the websites (Gefen, 2000). Third, RAs provide valuable advice to help customers reduce transaction costs and improve the effectiveness and efficiency of purchase decision-making (Haubl and Trifts, 2000; Lynch and Ariely, 2000). If valuable advice is received repeatedly, then customers will perceive that RAs are more congruent with their needs. When customers perceive that RAs are more congruent with their needs, trust in the RAs will increase (Kelman, 1958).
Thus,
H3: Higher familiarity will increase cognitive trust in an RA's competence (H3a), benevolence (H3b), and integrity (H3c).
H4: Higher familiarity will increase emotional trust in an RA.

Cognitive Trust and Emotional Trust
An empirical study has found that alcohol affects emotions through cognition (Curtin, Patrick, Lang, Cacioppo and Birbaumer, 2001). Another empirical study has observed that emotions are almost always evoked by cognition (Kahn, Pace-Schott and Hobson, 2002). Regarding customer trust in an RA, it is more likely for customers to feel comfortable and secure (emotional trust) when they cognitively trust an RA, but less likely for them to directly convert these feelings into rational assessments of the RA's characteristics (cognitive trust). Therefore,
H5: Higher cognitive trust in an RA's competence (H5a), benevolence (H5b), and integrity (H5c) will increase emotional trust in an RA.

4.5 Intention to Adopt an RA
Two levels of RA adoption can be differentiated in terms of the extent to which customers delegate their decisions to RAs. In agent-mediated e-commerce, various customer agents automate or assist in several stages of the shopping process on behalf of customers (Maes et al., 1999). The current research model suggests that adopting RAs either to automate or to assist customer decision-making constitutes two levels of RA adoption that merit separate research. This study differentiates between the intention to adopt an RA as a delegated agent and the intention to adopt an RA as a decision aid. Table 6 lists the definitions and measures for this distinction. At the conceptual level, the intention to adopt an RA as a delegated agent and the intention to adopt an RA as a decision aid are related but different. They are related because both are examples of RA adoption, and because both require cognitive trust and emotional trust in an RA. However, the two adoption intentions are different.
Adopting an RA as a delegated agent represents a higher degree of delegation to an RA. If the intention is to adopt an RA as a delegated agent, customers will accept the RA's advice without carefully examining its recommendations or explanations, and it is more effective for the RA to recommend very few products, or even only one. In contrast, if the intention is to adopt an RA as a decision aid, customers will tend to carefully examine the RA's recommendations and explanations before they make a purchase decision, and customers prefer that the RA recommend quite a few products so that they will have more choices.

The Impact of Customer Trust on the Intention to Adopt an RA
There is a paucity of research on customer trust in computer agents, such as RAs, in e-commerce (see Section 2.2). This study contributes by investigating the impact of customer trust on RA adoption in e-commerce. Based on the literature discussed in section 3.6, we expect the following.
Regarding the intention to adopt an RA as a delegated agent:
H6: Higher cognitive trust in an RA's competence (H6a), benevolence (H6b), and integrity (H6c) will support the intention to adopt an RA as a delegated agent.
H7: Higher emotional trust will increase the intention to adopt an RA as a delegated agent.
Regarding the intention to adopt an RA as a decision aid:
H8: Higher cognitive trust in an RA's competence (H8a), benevolence (H8b), and integrity (H8c) will support the intention to adopt an RA as a decision aid.
H9: Higher emotional trust will increase the intention to adopt an RA as a decision aid.

4.7 Summary of Chapter 4
This chapter presents the research model and draws upon prior research on trust in MIS, marketing, sociology, psychology, and communications to derive the hypotheses for the current research.
The research model leads to the hypotheses that RA internalization and familiarity will increase both cognitive trust and emotional trust in RAs, and that these higher levels of customer trust will lead to stronger intentions to adopt an RA as a delegated agent or as a decision aid.

CHAPTER 5: RESEARCH METHOD
This chapter describes the research methodology utilized for this dissertation. A laboratory experiment with one hundred participants randomly assigned to four groups, as depicted in Table 3, has been conducted to test the research model and hypotheses described in Chapter 4. In addition, a protocol-tracing method has been employed to explore the processes of trust formation; almost half of the subjects in each group thought aloud (see Table 4). This chapter explains the implementation of the two independent variables, the measures of the dependent variables, the protocol data collection, the subject characteristics, and the experimental procedures. The experimental study was carried out from February 2002 to March 2002. A total of five pilot tests were completed between September 2001 and February 2002. Pilot Test 1 involved 23 subjects in order to choose appropriate RAs and to test the experimental procedures. Pilot Test 2 included 17 subjects, and it collected protocol data in order to build coding schemes for protocol analysis. Pilot Test 3 (18 subjects for card sorting), Pilot Test 4 (162 subjects trying an RA), and Pilot Test 5 (six subjects for interviews) were conducted to develop new measures for dependent variables.

5.1 Independent Variable: RA Internalization
Two levels of RA internalization were implemented as the two RAs in our experiment: a high-internalization RA represents customers' real needs better than a low-internalization RA does. In other words, the two RAs in the experiment are similar in other aspects, while different in this respect. Compared to customers' real needs (e.g.
play games on a computer), customers' expressed needs (e.g. specifications of installed RAM) with the high-internalization RA are perceived by customers to be closer to those real needs than their expressed needs with the low-internalization RA.

In order to find two such RAs, we first reviewed the RAs currently available online, including those on www.amazon.com, www.cdnow.com, www.landsend.com, etc. We then decided to limit our choices to websites that provide at least two RAs simultaneously, since RAs from different websites differ greatly in terms of the product attributes considered, the product-filtering strategies, etc. These websites included www.amazon.com, www.activebuyerguide.com, etc. We chose www.activebuyerguide.com because other websites such as www.amazon.com are much better known, so it is more likely that our subjects had used the RAs on those websites prior to our experiment. www.activebuyerguide.com provided three RAs that are very similar in product filtering but differ in the way they understand and internalize customers' needs. We then conducted a pilot test with 23 subjects to test the internalization and complexity of the three RAs. One RA was dropped from further consideration because it asked customers many more questions than the other two, which made customers tired. The remaining two RAs were significantly different in terms of internalization. Finally, these two RAs were chosen from www.activebuyerguide.com as implementations of the two levels of RA internalization.

Both RAs selected for this study are constraint-satisfaction RAs: customers specify the product features they desire, and the RA then gives each customer a list of products, ordered by how effectively the products satisfy the constraints specified by the customers (Ansari et al., 2000; Maes et al., 1999). For the same product (e.g.
notebook computer), both RAs provide customers with the same product attributes and the same levels of each product attribute for comparative specifications. Both RAs also offer the same explanations for each product attribute and use the same strategies to filter the products available over the Internet. However, for the same product, the two RAs differ in their level of internalization. The high-internalization RA endeavours to ensure that the product attributes specified by customers actually represent the customers' real needs by asking customer-need-based questions and weight questions, while the low-internalization RA asks product-attribute-based questions and then uses whatever information customers specify as the basis for generating its recommendations.

Customer-need-based questions enable an RA to internalize customers' real needs better than product-attribute-based questions do. For example, the low-internalization RA asks customers shopping for notebook computers to specify how big the installed RAM should be (e.g. at least 256MB). The high-internalization RA additionally asks need-based ("get advice") questions, such as:

Activities such as downloading music, manipulating digital images, and playing computer games require more memory. Which best describes your needs?
○ I want a notebook computer for office tasks, reading e-mail, and surfing the web
○ I want to play a few games and run some multimedia programs
○ I'm a heavy multimedia user and need lots of notebook memory
○ I don't have an opinion on this

After a customer selects one of these options, the RA specifies the appropriate amount of RAM on behalf of the customer. The customer can then accept or reject the suggested specification. In addition, the high-internalization RA also asks weight questions, such as:

Compared to other features, is installed RAM somewhat important, very important, or extremely important?
With weight questions, customers' expressed needs in the high-internalization RA are closer to the real preferences that customers use in their shopping decision making. Therefore, with customer-need-based questions and weight questions, we expect that customers would perceive that they can express their needs more effectively with the high-internalization RA than with the low-internalization RA.

A manipulation check for RA internalization was conducted. At the end of each experimental session, after a subject had used one RA and answered all the questions about customer trust and the intention to adopt an RA, the subject was shown the other RA and was allowed to try it out for as long as he or she wanted. The subject was then asked to rate the internalization of both RAs by answering three seven-point Likert questions (see Table 5). The manipulation was successful: high-internalization RA (M = 5.8, SD = 1.0) vs. low-internalization RA (M = 4.4, SD = 1.4), t(99) = 8.32, p < 0.001.

5.2 Independent Variable: Familiarity

Two levels of familiarity were examined in the current research: initial interactions (low familiarity) vs. repeated interactions (high familiarity) between customers and RAs. According to background checks of the participants, none of the 100 subjects had used the RAs in the experiment before. The subjects in the initial-interaction groups used one RA to shop for one product: a notebook computer. The subjects in the repeated-interaction groups used the same RA to shop for three products: a notebook computer, a desktop PC, and a digital camera. These three products are similar (all electronic), with similar levels of complexity (the RAs considered approximately eleven attributes for each product).
Three products, rather than more, were selected because a pilot test with 23 subjects indicated that interactions beyond three often lead to subject fatigue, whereas three interactions are enough for subjects' trust to reach a steady level. In this experiment, after each subject finished shopping for one product, they were asked one question: to what extent did they agree with the statement "I trust this RA" on a 7-point scale? There was no significant difference between the trust level after the first interaction (M = 4.8, SD = 1.4) and the trust level after the second interaction (M = 4.9, SD = 1.6), t(48) = 0.3, p > .10, and no significant difference between the trust level after the second interaction (M = 4.9, SD = 1.6) and the trust level after the third interaction (M = 5.0, SD = 1.5), t(48) = 1.1, p > .10, which indicates that trust had reached a stable level by the end of the third interaction. In addition, there was no significant difference between the trust level after the first interaction and the trust level after the third interaction either, t(48) = 1.27, p > .10, indicating that the initial interaction is the critical time period for building customer trust in an RA.

A manipulation check for familiarity was conducted. After shopping for each product, subjects rated their agreement with the following statement: "I am familiar with how this RA makes its recommendations" (1 = not familiar, 7 = very familiar). The manipulation of familiarity was successful: a one-way ANOVA revealed a significant difference in familiarity, F(1, 98) = 5.4, p < .01, with the repeated-interaction group receiving higher scores (M = 5.5, SD = 1.1) than the initial-interaction group (M = 4.6, SD = 1.8).
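The two manipulation checks in Sections 5.1 and 5.2 (a paired t-test on the 100 subjects' ratings of both RAs, and a one-way ANOVA across the two familiarity groups) can be sketched with SciPy. The ratings below are synthetic, generated only to echo the reported means and SDs, not the study's actual data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Internalization check (Section 5.1): each of 100 subjects rated BOTH RAs -> paired t-test.
high_int = np.clip(rng.normal(5.8, 1.0, 100), 1, 7)  # 7-point ratings, clipped to scale
low_int = np.clip(rng.normal(4.4, 1.4, 100), 1, 7)
t_stat, t_p = stats.ttest_rel(high_int, low_int)  # paired: same raters, df = n - 1 = 99
print(f"internalization: t(99) = {t_stat:.2f}, p = {t_p:.2e}")

# Familiarity check (Section 5.2): two independent groups of 50 -> one-way ANOVA.
repeated = np.clip(rng.normal(5.5, 1.1, 50), 1, 7)
initial = np.clip(rng.normal(4.6, 1.8, 50), 1, 7)
f_stat, f_p = stats.f_oneway(repeated, initial)  # df = (1, 98)
print(f"familiarity: F(1, 98) = {f_stat:.2f}, p = {f_p:.4f}")
```

With only two groups, the one-way ANOVA is equivalent to an independent-samples t-test (F = t²), so either test would serve for the familiarity check.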
5.3 Dependent Variables: Measure Development

In the course of this research, new measures were developed for several variables: 1) cognitive trust in an RA's competence, 2) cognitive trust in an RA's benevolence, 3) cognitive trust in an RA's integrity, 4) emotional trust in an RA, 5) the intention to adopt an RA as a delegated agent, and 6) the intention to adopt an RA as a decision aid. The new measures were developed by adopting a) a three-step method of instrument development: scale creation, scale development (three rounds of card sorting with a total of eighteen subjects), and scale testing using factor analysis based on a pilot test with 162 subjects (Moore and Benbasat, 1991); and b) a PLS measurement model (Barclay, Thompson and Higgins, 1995). The measures are shown in Table 6. The validity and reliability of the new measures were further tested when they were used in our main experiment (with 100 subjects). The results of the validity and reliability tests are listed in Table 8, Table 9, and Table 10, and they will be explained in Section 6.2.

5.4 Protocol Data Collection

49 subjects, including about half of the subjects in each experimental group (Table 4 and Table 3), were asked to think aloud (i.e. verbal protocols) while they were using an RA for shopping. All 100 subjects were asked to answer three questions on paper (i.e. written protocols). The three questions are:
•  How much do you trust the competence of the RA? Why?
•  How much do you trust the goodwill of the RA? Why?
•  How comfortable do you feel about relying on the RA? Why?

To be specific, in the initial-interaction groups (Groups 1 and 3 in Table 4), each subject used an RA to shop for a notebook computer. While the subjects were using the RA, they were asked to think aloud. The researcher recorded what the subjects talked about. The subjects then answered the written protocol questions on paper.
Finally, they completed a questionnaire about trust and RA adoption. In the repeated-interaction groups (Groups 2 and 4 in Table 4), subjects used an RA to shop for a notebook computer, a desktop PC, and a digital camera. Within the separate shopping session for each product, the subjects were asked to think aloud while they were interacting with the RA. After the subjects finished shopping for one product, they answered the same written protocol questions. The subjects then proceeded to the next product's shopping session. After they finished shopping for all three products, they answered the questionnaire about trust and RA adoption.

5.5 Subjects

The subjects who participated in the experiment included 76 undergraduate and 24 graduate students from a business school; 54 of the subjects were male and 46 female. These subjects felt comfortable using computers (M = 6.4, SD = 1.0) and shopping online (M = 4.8, SD = 1.6). The average amount they spent shopping online in 2001 was CAN$345 (SD = 633). We believed that subjects who were interested in buying the three products used in the experiment (notebook computers, desktop PCs, and digital cameras) would be more likely to take the shopping tasks seriously; therefore, when recruiting subjects, we filtered them based on their interest in buying these three products. Background checks revealed that all of the subjects were interested in buying these products at the time they participated in the experiment. The incentive for participation was a monetary reward plus one course credit. In order to motivate the subjects to view the experiment as a serious online shopping session, the subjects were told before the experiment that one participant, selected by a random draw, would receive a $400 refund toward purchasing a product that had been selected during the experiment. All subjects volunteered to participate.
Subjects were randomly assigned to one of the four treatment groups listed in Table 3. Background checks indicated that there were no significant differences between the groups in terms of general feelings about using computers or shopping online. There was also no significant difference between the groups in terms of four control variables: control propensity, trust propensity, preference for decision quality vs. effort saving, and product expertise. The definitions, measures, and reliability tests of the control variables are shown in Table 7. The validity of the control variables (control propensity, trust propensity, and the preference for decision quality vs. effort saving) was tested in the card sorting. The results showed good validity for these measures.

5.6 Experimental Procedures

Each of the 100 subjects participated in the experiment individually. They were allowed to take as much time as they wanted to complete their shopping. Most subjects spent between 75 and 100 minutes on the activity. All subjects finished their shopping within 120 minutes. The procedures were as follows:
1) The subjects completed consent forms and background questionnaires.
2) The subjects took a tutorial to learn how to use an RA.
3) The subjects read an information sheet which stated that they would test a new RA: a computer company had developed the RA as a virtual shopping assistant, and the company hoped that the subject would buy and own the RA. The subjects were given this information in an attempt to control one possible confounding variable, the ownership of the RA.
4) The subjects interacted with an RA (Table 3). The subjects in Groups 1 and 2 used a low-internalization RA, while the subjects in Groups 3 and 4 used a high-internalization RA. The subjects in Groups 1 and 3 shopped for a notebook computer only. The subjects in Groups 2 and 4 shopped for a notebook computer, a desktop PC, and a digital camera.
Before subjects started shopping for a particular product, their product expertise was measured. After subjects had finished shopping for a particular product, a manipulation check for familiarity was conducted. At that point, the subjects were also asked to rate their agreement with the statement "I trust this RA." The purpose of this question was to check whether a subject's trust level would reach a stable level after three interactions.
5) The subjects completed a questionnaire about customer trust in an RA, the intention to adopt an RA, and the control variables.
6) The subjects completed manipulation checks for RA internalization. Each subject in the high-internalization RA groups used a low-internalization RA; each subject in the low-internalization RA groups used a high-internalization RA. The subjects could try the second RA for as long as they wanted. The subjects then rated RA internalization for both RAs by answering the three questions listed in Table 5.

5.7 Summary of Chapter 5

This chapter explains the implementation of the two independent variables, the measure development for the dependent variables, the protocol data collection, the subject characteristics, and the experimental procedures. Questionnaire instruments were used to measure customer trust in RAs and intentions to adopt RAs, while verbal protocols and written protocols were collected in order to understand the processes of trust formation. The reliability and validity of the measures for the dependent variables were evaluated; the results will be discussed in Section 6.2. A combination of positivist (quantitative) and interpretive (qualitative) approaches was employed. Chapter 6 will report the quantitative analysis results, and Chapter 7 will report the qualitative analysis results.

CHAPTER 6.
RESULTS OF QUANTITATIVE ANALYSIS

6.1 Statistical Analysis: PLS

For statistical analysis, methods based on partial least squares (PLS) were used, as implemented in PLS-Graph version 3.0. PLS was chosen over LISREL because this study focuses on theory development rather than theory testing (Barclay et al., 1995; Chin, 2000): this dissertation proposes a new research model by introducing RA internalization, emotional trust, cognitive trust, and two levels of RA adoption. Furthermore, the sample size (N = 100) is inadequate for LISREL analysis. PLS provides both a structural model and a measurement model for analyzing experimental data. The measurement model assesses the reliability and validity of the measures, and the structural model assesses the relationships among the theoretical constructs.

6.2 Measurement Model

All constructs in the current research have been modeled as reflective constructs, because their measurement items are manifestations of these constructs (Barclay et al., 1995) and because these items covary (Chin, 1998; Chin, 2000). PLS analyzes continuous variables rather than categorical variables; therefore, manipulation-check scores have been used for RA internalization and familiarity. All the measured items have been standardized. The measurement model for the current research shows good individual item reliability, good internal consistency, and good discriminant validity. Regarding individual item reliability, all the reflective constructs display strongly positive loadings (loadings > 0.7) and high levels of statistical significance for all items, as detailed in Table 8. Regarding internal consistency (reliability), the Cronbach's alpha of each construct exceeds 0.70 (Table 9), the suggested benchmark for acceptable reliability (Barclay et al., 1995).
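The two measurement-model statistics used in this section (Cronbach's alpha for internal consistency, and the average variance extracted for the discriminant-validity check) are straightforward to compute. A minimal NumPy sketch on synthetic data (the scores, loadings, and correlation below are illustrative, not the study's):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_subjects, n_items) score matrix of one construct:
    alpha = k/(k-1) * (1 - sum of item variances / variance of the summed scale)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

def ave(loadings):
    """Average variance extracted: mean of the squared standardized item loadings."""
    return np.mean(np.square(loadings))

# Four noisy indicators of one latent construct (synthetic, 100 "subjects")
rng = np.random.default_rng(3)
latent = rng.normal(5, 1, size=(100, 1))
items = latent + rng.normal(0, 0.6, size=(100, 4))
print(f"alpha = {cronbach_alpha(items):.2f}")  # should comfortably exceed 0.70 here

# Fornell-Larcker check: sqrt(AVE) of a construct should exceed its correlations
# with every other construct (loadings and correlation are hypothetical).
loadings = [0.85, 0.80, 0.78]
inter_construct_corr = 0.55
print(np.sqrt(ave(loadings)) > inter_construct_corr)  # prints True
```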
Table 9 shows good discriminant validity because the square root of the AVE (average variance extracted) is greater than the inter-construct correlations (Barclay et al., 1995; Fornell and Larcker, 1981). Table 10 also shows good discriminant validity because the correlation between each item and its corresponding latent variable is greater than the correlations between that item and the other latent variables. The latent variable scores for each subject have been generated using PLS-Graph version 3.0.

6.3 Structural Model

In PLS, users assess a model's validity primarily by examining R² and the structural paths, as one would with a regression model. Approximately 56% of the variance in intentions to adopt an RA as a delegated agent and 57% of the variance in intentions to adopt an RA as a decision aid are accounted for by emotional trust and cognitive trust in an RA. Approximately 39% of the variance in cognitive trust in an RA's competence, 56% of the variance in cognitive trust in an RA's benevolence, 30% of the variance in cognitive trust in an RA's integrity, and 70% of the variance in emotional trust in an RA are accounted for by the model. Five hypotheses (H1, H3, H5, H7, and H9) are supported (see Table 11), and eleven paths are significant (as noted in Figure 2 and Table 11). Thus, the fit of the overall model is good.

The findings displayed in Figure 2 and Table 11 support the primary hypotheses that RA internalization significantly increases both cognitive trust and emotional trust, which in turn increase customers' intentions to adopt an RA as a delegated agent or as a decision aid. The total effects of RA internalization on the two types of intentions to adopt RAs are 0.45 and 0.49 respectively. The total effects of familiarity on customers' intentions to adopt RAs are quite low: 0.06 on the intention to adopt an RA as a delegated agent and 0.10 on the intention to adopt an RA as a decision aid (Table 11).
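A total effect in a path model is the sum, over every directed path from one construct to another, of the products of the path coefficients along each path. A small sketch of that bookkeeping (the coefficients and construct names here are made up for illustration, not the study's estimates):

```python
# Hypothetical standardized path coefficients (illustrative only)
paths = {
    ("internalization", "cognitive_trust"): 0.5,
    ("internalization", "emotional_trust"): 0.1,
    ("cognitive_trust", "emotional_trust"): 0.6,
    ("emotional_trust", "adoption_intention"): 0.7,
}

def total_effect(src, dst, paths):
    """Sum, over every directed path from src to dst, of the product of coefficients."""
    total = 0.0
    for (a, b), coef in paths.items():
        if a != src:
            continue
        total += coef if b == dst else coef * total_effect(b, dst, paths)
    return total

# 0.5 * 0.6 * 0.7 (via cognitive trust) + 0.1 * 0.7 (direct to emotional trust) = 0.28
print(round(total_effect("internalization", "adoption_intention", paths), 2))  # 0.28
```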
These results indicate that customer intentions to adopt RAs are mainly determined during initial interactions with RAs. We next discuss each hypothesis test.

Cognitive Trust in an RA

RA internalization and familiarity both significantly increase cognitive trust in an RA's competence, benevolence, and integrity; thus H1 and H3 are supported (Table 11). Following the suggestions of Chin, Marcolin, and Newsted (1996) and Chin (2000), the interaction effects of internalization and familiarity were tested by adding an interaction construct. Neither the path coefficients from the interaction to cognitive trust in an RA's competence, benevolence, and integrity nor the path coefficient from the interaction to emotional trust exceeds the minimum meaningful level of 0.20 (Chin, 1998), which suggests that the interaction effect accounts for less than 4% of the variance in either cognitive trust or emotional trust. The correlation between familiarity and internalization is 0.40, which is not high either. Thus, further discussion of the interaction effects of internalization and familiarity has been omitted.

The impact of product expertise on cognitive trust is significant, while its impact on emotional trust is not. However, the path coefficient from product expertise to cognitive trust is lower than the minimum meaningful level of 0.20 suggested by Chin (1998); therefore, product expertise has been omitted from the PLS structural model.

Emotional Trust in an RA

RA internalization and familiarity do not significantly increase emotional trust in an RA directly; therefore, H2 and H4 are not supported (Table 11). Further analysis reveals that the hypothesized relation is absent because cognitive trust functions as a mediator. The method suggested by Baron and Kenny (1986) (http://users.rcn.com/dakenny/mediate.htm) was employed to test the mediating effects of cognitive trust.
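Baron and Kenny's procedure amounts to three regressions: the total effect of X on Y, the effect of X on the mediator M, and a joint regression of Y on M and X; full mediation is indicated when the direct effect of X vanishes once M is included. A sketch on synthetic data (the variable names and effect sizes are illustrative, not the study's estimates):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100
x = rng.normal(size=n)                  # e.g. RA internalization
m = 0.8 * x + rng.normal(0, 0.5, n)     # mediator, e.g. cognitive trust
y = 0.9 * m + rng.normal(0, 0.5, n)     # outcome built so mediation is FULL

def ols(dep, *preds):
    """OLS coefficients (intercept first) of dep on the given predictors."""
    X = np.column_stack([np.ones(len(dep)), *preds])
    beta, *_ = np.linalg.lstsq(X, dep, rcond=None)
    return beta

c = ols(y, x)[1]              # Step 1: total effect of x on y
a = ols(m, x)[1]              # Step 2: x predicts the mediator
_, b, c_prime = ols(y, m, x)  # Step 3: mediator and x entered together

print(f"total c = {c:.2f}, a = {a:.2f}, b = {b:.2f}, direct c' = {c_prime:.2f}")
# Full mediation: c' shrinks toward zero while b stays substantial.
```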
Regarding RA internalization, cognitive trust in an RA's benevolence fully mediates the impact of RA internalization on emotional trust in an RA, while cognitive trust in an RA's competence and cognitive trust in an RA's integrity only partially mediate the impact (Table 12). Regarding familiarity, cognitive trust in an RA's competence, benevolence, and integrity fully mediate the impact of familiarity on emotional trust in an RA (Table 12).

Cognitive Trust and Emotional Trust

Cognitive trust in an RA's competence, benevolence, and integrity significantly increases emotional trust; therefore, H5 is supported (Table 11).

Intention to Adopt an RA as a Delegated Agent

As shown in Table 11, emotional trust directly increases a customer's intention to adopt an RA as a delegated agent (H7 is supported), while cognitive trust in an RA's competence, benevolence, and integrity does not directly increase that intention (H6 is not supported). The reason is that emotional trust fully mediates the impact of cognitive trust on a customer's intention to adopt an RA as a delegated agent (Table 12). The total effects of cognitive trust on a customer's intention to adopt an RA as a delegated agent are positive (Table 11).

Intention to Adopt an RA as a Decision Aid

Emotional trust significantly increases a customer's intention to adopt an RA as a decision aid; thus H9 is supported (Table 11). However, cognitive trust in an RA's competence, benevolence, and integrity does not significantly increase a customer's intention to adopt an RA as a decision aid; therefore, H8 is not supported (Table 11). As with the previous pair of hypotheses, the reason is that emotional trust in an RA fully mediates the impact of cognitive trust in competence, benevolence, and integrity on a customer's intention to adopt an RA as a decision aid (Table 12).
The total effects of cognitive trust on a customer's intention to adopt an RA as a decision aid are positive (Table 11).

6.4 Summary of Chapter 6

This chapter reports the quantitative analysis results. The PLS analysis demonstrates the following:
•  High RA internalization significantly increases customers' cognitive trust in an RA, but does not directly increase emotional trust in an RA, because cognitive trust fully mediates the effect of RA internalization on emotional trust.
•  High familiarity significantly increases cognitive trust in an RA, but does not directly increase emotional trust in an RA, because cognitive trust fully mediates the effects of familiarity on emotional trust.
•  Emotional trust significantly affects intentions to adopt an RA either as a delegated agent or as a decision aid, while cognitive trust does not directly increase customers' intentions to adopt an RA, because emotional trust fully mediates the effects of cognitive trust on a customer's intention to adopt an RA as a delegated agent or as a decision aid.
•  The effect of emotional trust on intentions to adopt an RA as a delegated agent is higher than its effect on intentions to adopt an RA as a decision aid, while the effect of cognitive trust on intentions to adopt an RA as a delegated agent is lower than its effect on intentions to adopt an RA as a decision aid.

CHAPTER 7. RESULTS OF PROTOCOL ANALYSIS

Questionnaires were used in the current study to measure the levels of cognitive trust and emotional trust, while protocol analysis subsequently examines the processes involved in cognitive trust and emotional trust formation. Specifically, a total of 49 subjects were asked to think aloud, voicing their observations and opinions while they engaged in the shopping activities. As shown in Table 4, these 49 subjects were almost equally distributed among the four experimental groups.
A researcher taped the subjects' verbalizations, which continued for almost 40 minutes on average, and the tape-recorded verbal protocols were then transcribed. The transcribed verbal protocols contained 6,987 lines; the written protocols of these 49 subjects contained 717 lines; therefore, a total of 7,704 lines were analyzed. From this total, 3,141 lines of verbal protocols and 556 lines of written protocols were coded because they contained the processes of trust formation. The protocol analysis was conducted in two steps. First, coding schemes were built to determine the method for coding the results. Then the protocols were coded independently by two judges (the author and another Ph.D. student majoring in MIS at UBC). The protocol analysis reveals how an RA can be better designed to promote trust and to prevent distrust in RAs.

7.1 Building Coding Schemes

Two coding schemes were created for the current study, building on prior research (Doney and Cannon, 1997; McKnight et al., 1998), the proposed trust model (Section 4.4), and a pilot test with 17 subjects' verbal protocols. Coding Scheme 1 classified trust processes in terms of the destination and source of each process. Coding Scheme 2, on the other hand, sorted trust processes in terms of how customers process their knowledge about RAs when forming trust or distrust in RAs.

Coding Scheme 1

In Coding Scheme 1, trust processes were classified in terms of the destination and source of each process. The destination of a process refers to the component of trust that is generated by the process. Based on the trust model proposed in Section 4.4, the prevalent components of trust include cognitive trust in an RA's competence, cognitive trust in an RA's benevolence, cognitive trust in an RA's integrity, and emotional trust in an RA. The source of a process refers to the foundation for that trust process.
Coding Scheme 1 identifies two sources of trust:
1) The information that a trustor has about a trustee, for example, a customer's knowledge about an RA (Shapiro, 1987; Sztompka, 1999; Yamagishi and Yamagishi, 1994). Cognitive trust in an RA's competence, benevolence, and integrity is the result of the rational assessments customers make regarding an RA, based on what they know about the RA. The customers' feelings of security and comfort about relying on an RA (i.e. emotional trust) may also increase or decrease, depending on what the customers know about the RA.
2) Customer characteristics (including trust propensity, control propensity, general views about the Internet and human beings, etc.) also serve as sources of trust. Customer characteristics are independent of any knowledge about an RA:
"Rather they are derived from past history of relationships pervaded with trust or distrust, primarily in the family and later in other groups, associations, and organizations. These are the traces of a personal history of experiences with trust, petrified in the personality of the trusting agent (the trustor)... It is individual, biographical genealogy." (Sztompka, 1999: 70)
For example, a person with high trust propensity may trust the benevolence and integrity of a trustee beyond the level warranted by a prudent assessment of the available information (Yamagishi and Yamagishi, 1994).

Coding Scheme 1 includes four cognitive trust processes and three emotional trust processes.
1) Cognitive trust (CT) processes: Cognitive trust is a customer's rational assessment that an RA will have the necessary competence, benevolence, and integrity to be relied upon in the future. Cognitive trust is the result of cognitive trust processes, which are cognitive processes that interpret what a customer knows about an RA into rational assessments of the RA's competence, benevolence, and integrity.
Customer awareness of the unknown features of an RA may also affect cognitive trust in the RA. For example, customers may be aware that they do not know how an RA generates its recommendations, because the RA does not provide such explanations. Such awareness will decrease the customers' cognitive trust in the RA. Cognitive trust processes include four processes: C1, C2, C3, and C4.

C1) The Knowledge-based CT-Competence Process: A customer's cognitive trust in an RA's competence relies on determining the RA's ability to meet its obligations, as they are understood by the customer. The sources of this process are what customers know about an RA, and what they are aware of not knowing, including the RA's characteristics and actions. The competence process is similar to Doney and Cannon's capability process (1997). For example,
•  While using a product-brokering RA, one customer thought aloud, "These questions asked by the RA have covered all the important features of the product."
•  "He [the RA] knows so many attributes of the products."
•  "When I cannot find the product I want, the RA doesn't make helpful suggestions."

C2) The Knowledge-based CT-Benevolence Process: Customers' cognitive trust in an RA's benevolence emerges through their rational assessment of whether the RA will care about them and will be motivated to act in their interests (McKnight et al., 2001b). The sources of this process are what customers know about an RA, and what they are aware of not knowing, including the RA's characteristics and actions. The benevolence process is similar to Doney and Cannon's intentionality process (1997). For example,
•  "I think that this RA is trying to understand me."
•  "The RA is trying to be considerate."
•  One customer wanted to see pictures of the recommended notebook computers and found them in the RA. He thought aloud, "Just the fact that there's a product picture here, I would call that goodwill of the RA."

C3) The Knowledge-based CT-Integrity Process: Cognitive trust in an RA's integrity emerges through customers' rational assessments of whether the RA will commit to agreements in good faith, tell the truth, and fulfill promises (McKnight et al., 2001b). In the context of RAs, integrity refers to whether an RA is honest and unbiased. The sources of this process include the known qualities and characteristics of RAs and customers' awareness of unknown features of the RAs, including the RAs' characteristics and actions. This process is similar to Doney and Cannon's intentionality process (1997). For example,
•  "The RA is not biased toward any brand."
•  "I do trust the RA since it is all technical specifications and it is too hard to lie about them."
•  When a customer received a set of recommended notebook computers, all of which were Sony, he thought aloud, "Maybe the RA has some relationship with Sony, to push Sony only?"
•  "I trust the integrity of the RA, because I can get info about different vendors for the average price. There seems to be no particular interest in one particular retailer or vendor."

C4) The ET-Causing-CT Process: It has been theoretically suggested that emotion and cognition affect each other (Lewis and Weigert, 1985), and in some cases emotional trust (ET) can affect cognitive trust (CT, including cognitive trust in competence, benevolence, and integrity). This study intended to empirically test the existence of this ET-causing-CT process using the verbal and written protocols collected. However, the current research identified no examples of this process (emotional trust causing cognitive trust). One possible example would have been:
•  "I feel comfortable about relying on this RA; therefore I trust its competence."

2) Emotional Trust (ET) Processes: Emotional trust is customers' feelings of security and comfort about relying on an RA. Emotional trust processes are the processes by which a customer develops such feelings.
When such feelings accumulate and are sufficiently intense, customers will feel secure and comfortable about relying on an RA without worrying about what they do not know about it.

E1) The CT-Causing-ET Process: It has been theoretically suggested that emotion and cognition affect each other (Lewis and Weigert, 1985). This protocol analysis intended to test empirically whether cognitive trust triggers emotional trust. It is reasonable to believe that when customers develop rational assessments that an RA will act with competence, benevolence, and integrity, these assessments will make the customers feel more secure and comfortable about relying on the RA. Examples from our verbal protocols include:
• "I feel comfortable, because I trusted the RA's goodwill and ability."
• "I still believe the RA has the expertise; I feel comfortable about relying on the RA, except for the price."

E2) The Knowledge-based ET Process: This process is developed based on the trust model proposed by Komiak and Benbasat (forthcoming). Our pilot test with 17 subjects showed that knowledge-based ET processes actually took place in the context of RA usage. Emotional trust is triggered directly by a known fact, i.e., what customers know about the characteristics and actions of RAs. Customers' knowledge about an RA may trigger emotional trust through cognitive trust (i.e., the cognitive trust processes C1, C2, or C3, followed by the CT-causing-ET process E1), or it may trigger emotional trust directly (i.e., the E2 process). The knowledge involved in these two situations may differ. For example, the knowledge that "an RA is an expert on the recommended products" may trigger cognitive trust, and cognitive trust may then trigger emotional trust; in contrast, the knowledge that "an RA's interface design is nice" may trigger emotional trust directly.
Customers' knowledge about an RA may also provoke affective reactions to the RA (Doney and Cannon, 1997; Lewis and Weigert, 1985; Nicholson et al., 2001; Swan et al., 1988), and these affective reactions can inspire the customers' emotional trust. In addition, customers' awareness of the unknown about an RA may stimulate emotional trust directly. It is likely that customers will deal with the unknown by positing guesses or by examining the RA for answers. If customers can find the answers, or if they feel comfortable enough with their guesses, their emotional trust will increase. Otherwise, their emotional trust will decrease due to feelings of uncertainty and insecurity. Examples from our verbal and written protocols include:
• "I think that I feel comfortable about relying on the RA because I got detailed information about the recommended product."
• "Okay that's pretty good, the RA explains it [the meaning of one product attribute] to me, so that's pretty cool."
• "I like the fact that it has got checkboxes."
• "I'm not sure where I would find that here so that's a couple of points against the RA. I'm feeling a little bit limited there."

The processes above are E2 processes, because key words such as "cool," "like," and "feel" indicated that they generated emotional trust, and because the sources of the processes were the customers' knowledge about the RA.

E3) The Customer-characteristic-based ET Process: This process is developed based on our pilot test with 17 subjects. Customers' characteristics may trigger emotional trust directly, while these characteristics may also moderate cognitive trust processes. Customers' characteristics include their trust propensity, control propensity, risk concern, and their general faith in humanity (McKnight et al., 1998), as these characteristics are enacted in computer use or on the Internet.
We expect that emotional trust will be affected by these characteristics, especially during the initial interaction between a customer and an RA and when situations are novel and ambiguous. For example:
• When a customer was asked whether he felt comfortable and secure about relying on an RA, he said, "Yes, because I am comfortable using computers."
• "I don't feel secure about relying on the RA because I still want to see and try the real product before I decided to buy it."

Coding Scheme 1 Also Applies to Distrust
Coding Scheme 1 also applies to distrust formation, because distrust includes cognitive and emotional components that parallel those involved in trust. Distrust is the negative mirror-image of trust (Sztompka, 1999: 26). Cognitive distrust refers to the cognitive component of distrust: a customer's negative rational assessment of an RA's trust-related characteristics. Cognitive distrust likewise includes cognitive distrust in an RA's competence, benevolence, and integrity. Similar to cognitive trust, cognitive distrust is primarily knowledge-based. However, cognitive trust is a positive attitude toward an RA, based on cognitive and rational interpretations of positive knowledge about the RA, while cognitive distrust is a negative attitude toward an RA, based on cognitive and rational interpretations of negative knowledge about the RA. Emotional distrust is the emotional component of distrust. Similar to emotional trust, emotional distrust can be based on knowledge about an RA, or it can be based on characteristics of customers. However, customers may feel trust and distrust in different ways. Carolyn Gratton, researching the lived experience of trust, finds that people describe distrust or breaches of trust as bodily stiffness, alertness, or tenseness, while trust is described as lightness, relaxation, or calmness (Gratton, 1973: 281).
Coding Scheme 2
Based on prior research by Doney and Cannon (1997) and McKnight et al. (1998) that conceptualizes some processes of trust formation (see Section 2.3), Coding Scheme 2 proposes that both trust and distrust formation processes can be categorized according to how customers interpret their knowledge about an RA. A pilot test was conducted in which 17 subjects thought aloud while interacting with an RA. The protocol analysis in the pilot test discovered some new trust/distrust formation processes employed by customers, including I2 (Expectation Satisfaction), I4 (Awareness of the Unknown), I6 (Information Sharing), and I7 (Verification). Our pilot test also revealed processes suggested in prior theoretical research but not applicable to the context of using RAs. For example, it is interesting that the calculative process (Doney and Cannon, 1997) was not pervasive; the reason could be that it is not easy for customers to calculate the costs and benefits for an RA of cheating them. In addition, Doney and Cannon's (1997) transference process does not apply to this study, because this study focuses on trust/distrust formation while a customer is interacting with an RA, not before the direct interaction between the customer and the RA. The reputation categorization process (McKnight et al., 1998) does not apply, because we chose two RAs that none of the subjects had ever heard of before our experiment; we did so because one treatment in our experiment is the initial interaction between customers and an RA. The unit grouping process (McKnight et al., 1998) was rare in our pilot test; the reason could be that unit grouping may apply better to interpersonal relationships, or to customer trust in a computer agent with a personality or a human-like image, but not to the RAs used in our experiment. The stereotyping process (McKnight et al., 1998) was not pervasive in our pilot test either, possibly because the RAs were new to the customers in our experiment, and it was not easy for the customers to classify the RAs into any general category.

Coding Scheme 2 includes nine processes (I1-I9).

I1. Competence Assessment Process: This process is similar to the competence process proposed by Doney and Cannon (1997). When customers rationally assess that an RA's actions or characteristics demonstrate its ability to do its job well, they will trust it more. For example:
• "The RA's closer analysis of the recommended products made sense."
• "The fact that it communicates all these brands gives me a sense that it's fairly comprehensive."
• "It suggests competence, by the fact that it can suggest name brands like a salesperson."

I2. Expectation Satisfaction Process: This process is developed based on our pilot test. When an RA's actions satisfy customer expectations, customer trust in the RA will increase; when they do not, customer trust will decrease. For example:
• When an RA did not recommend any IBM notebook computer to a customer, he thought aloud: "Why no IBM? No IBM satisfied the requirements? ... But I know IBM does have a qualified model."
• "Okay, this is good, there's a picture of the recommended products, I am visually oriented."
• "The RA should explain to me the meanings of different options more clearly."
• "The RA came up with product recommendations that surprised me."

I3. Control Process: This process is developed based on the prediction process (Doney and Cannon, 1997), the illusion of control process (McKnight et al., 1998), and our pilot test. When customers feel that they have more control over an RA, they will trust it more (Ariely, 2000; Kipnis, 1986). Customers will also feel more secure and comfortable about relying on an RA due to their illusion of control.
Trust-building involves illusions according to trust theories (Holmes, 1991; Meyerson, Weick and Kramer, 1996) and empirical studies (Kramer, 1994; Nicholson et al., 2001), and the illusion of control is an unrealistically inflated perception of personal control that may affect the development of trust (Langer, 1975; Taylor and Brown, 1988). According to McKnight et al. (1998), the illusion of control helps to build trust. This process may be similar to the processes by which people become overconfident of their judgments (Langer, 1975; Paese and Sniezek, 1991). Mental assessments tend to increase customer confidence because they move a task further from the realm of chance and closer to the realm of will-based judgment (Langer, 1975). Therefore, the illusion of control, or the feeling of being in control, may trigger customer trust in an RA. Our pilot test showed that the feeling of being in control is more than an illusion - it is real. The control process involves the customers' interpretations of their perceptions, including being more familiar with how to use an RA, being more comfortable with the RA's easy-to-use functions, or having more choices among the many products recommended by the RA. The written protocol analysis clearly shows that the feeling of being in control leads to both trust and distrust in an RA (Table 14). The verbal protocol analysis has also verified this point (see Section 7.3). Some examples from our protocol analysis include:
• "That would reduce my trust in the RA's goodwill a little because I lose a little bit of the sense of control."
• "The RA also gives me more options; so I can select several different brands."
• "I've already shopped for one thing. I come back and the interface is very similar so that helps me to more quickly feel comfortable knowing how to navigate it."

I4. Awareness of the Unknown Process: This process is developed based on our pilot test and the trust model proposed by Komiak and Benbasat (forthcoming). Customers engage in a different process when they are aware of their own ignorance of particular characteristics of an RA. As one previous study has noted, "trust is particularly relevant in conditions of ignorance or uncertainty with respect to unknown or unknowable actions of others" (Gambetta, 1988: 218). The trust model proposed by Komiak and Benbasat (forthcoming) predicts that customers will develop feelings of uncertainty, uneasiness, and insecurity when they are aware of the unknown about an RA, and it is evident that customers may develop feelings of security and comfort by making reasonable guesses about the unknown quantities, based on what they know about the RA or on their favorable assessments of the RA's competence, benevolence, and integrity. When positive feelings are intense enough, customers will suspend worrying about the unknown. For example:
• When an RA could not generate any recommendation, a customer thought aloud, "I think it will increase my trust, if the RA tells me why there are no results, whether my price is too low, or something else. I do not know what's wrong. I really do not know. If I do not know, I do not feel good."
• When a customer realized that she did not understand what an RA did, she thought aloud, "I don't know why, it makes me uncomfortable."
• "The RA didn't really spell it out [i.e., why the RA ranks the recommended products in this way], but it inferred that one was cheaper than the other because of the quality. It is OK."

I5. Integrity Assessment Process: This process is similar to the goodwill process (Doney and Cannon, 1997). When customers rationally assess that an RA's actions or features demonstrate the RA's integrity, they will trust it more.
For example:
• "I trust the integrity of the RA, because I can get info about different vendors for the specified price. There seems to be no particular interest in one particular retailer or vendor."
• "The RA seems to provide an unbiased selection."

I6. Information Sharing Process: This process is developed based on our pilot test. When an RA explains its reasoning process explicitly or shares detailed product information with customers, the customers will trust the RA more. However, too much information may confuse or overwhelm the customers, which will decrease customer trust in the RA's competence or benevolence. For example:
• "I feel comfortable because I got detailed information about the product."
• "Okay, that's pretty good; the RA explains it to me so that's pretty cool."
• "Okay, I know the RA is evaluating 583 notebooks; this number gives some sense of comprehension."
• "I felt the RA is giving me a bit more information than I need. It confused me on price issues. Otherwise, for this product the RA is highly competent. It gave all the features of a notebook, which would be quite difficult for an actual salesperson to remember."

I7. Verification Process: This process is developed based on our pilot test. When customers are able to verify that what an RA claims is true, they will trust it more. Similarly, when customers can verify that the product recommendations made by the RA are good, they will trust the RA more. For example:
• "I trust this RA's recommendation. I had an IBM notebook and I was happy with it."
• "I a[m] fairly comfortable because I can verify the recommendation with the information provided."
• "Panasonic is a good name ... so this [recommended digital camera] is good."
• The RA recommended an HP desktop PC: "my friend has a HP PC, and she likes it!"

I8. Interface Process: This process is developed based on prior research (e.g., Sztompka, 1999; Swan et al., 1999) theoretically suggesting that trust may be built based on the trustee's appearance. Our pilot test showed that this also holds in the context of customer trust in an RA. Pleasant interfaces trigger both cognitive trust and emotional trust. Examples from our protocol analysis include:
• "The presentation of this RA is pleasing to the eye."
• "This is all on one screen and that many stuffs, it is not clear."
• "I notice a recommendation is in red. Negative feeling, and I'm not sure if it's good to interpret it or not but the red gives me a sort of blah feeling, maybe its all those years in school where red means negative. But that pushes me away from that one."

I9. Benevolence Assessment Process: This process is similar to the goodwill process (Doney and Cannon, 1997). When a customer rationally assesses that an RA's actions or features demonstrate the RA's benevolence, the customer will trust the RA more. For example:
• "The RA cares about my interests."
• I trust the benevolence of the RA, because "it does not try to overwhelm me with a lot of questions (e.g. monitor size)."

7.2 Coding
As shown in Figure 3, each transcribed protocol was analyzed: each line of the verbal and written protocols collected in the experiment was classified using the two coding schemes described in Section 7.1. Each line of protocol contains at most one process in Coding Scheme 2. Usually, one process in Coding Scheme 2 results in a particular component of trust or distrust (i.e., one process in Coding Scheme 1). In rare cases, however, one process in Coding Scheme 2 may result in two components of trust/distrust (i.e., two processes in Coding Scheme 1). For example, a customer thought aloud, "I think it's kind of strange, because for the brands for these five recommended products, four of the products are Sony; so there might be a little bit biased in the results."
This line contained one process (the verification process, I7), but it generated two components of trust: cognitive trust in the RA's integrity (C3) - "four of the products are Sony ... a little bit biased in the results" - and emotional trust (E2) - "kind of strange ... there might be ..."

To provide a basis for reliability assessment, two judges independently coded the protocols using the same coding schemes. The averages of the two judges' coding were used to develop the analysis included in this dissertation. The category C4 in Coding Scheme 1 has been omitted from further analysis because the coding results indicated no examples of C4, thus providing evidence that emotional trust does not directly trigger cognitive trust.

Inter-judge Agreement
Regarding assessments of reliability, Cohen's coefficient of agreement for the verbal protocol analysis was 79% (Cohen, 1960). The agreement level between the two coders regarding the analyzed verbal protocols was 86%: as shown in Table 13, a total of 6,987 lines were analyzed (all the transcribed protocols were analyzed), and the judges agreed on the coding of 6,012 of these lines. The agreement level regarding the coded verbal protocols was 69%: a total of 3,141 lines were coded (i.e., coded lines were those containing at least one process in Coding Scheme 1 or Coding Scheme 2), and the judges agreed on 2,166 of these lines. Regarding the written protocol analysis, Cohen's coefficient was 80%, the agreement on the analyzed written protocols was 80%, and the agreement on the coded written protocols was 74%. These measures indicate good inter-judge agreement.

Verbal Protocol Analysis and Written Protocol Analysis
The current study focuses on the results of analyzing verbal protocols because they are concurrent protocols - the subjects spoke their minds while they were actually using the RA.
Written protocols, on the other hand, were retrospective protocols, written after work with the RA had been concluded. Thus, the analysis of written protocols played only a supportive role in the overall protocol analysis. While customers were talking, they were usually able to describe how they processed their knowledge about the RA. For example, subjects said, "This RA knows a lot about the notebook computer," but they might not explicitly say, "Therefore, I trust the competence of this RA." When the subjects later answered written questions such as "Why do you trust the competence of this RA?", they answered that they trusted this RA's competence because the RA knew a lot about the product (i.e., the notebook computer). Therefore, based on the written protocols, the two judges coded the verbal protocol "the RA knows a lot about the notebook computer" as C1: the Knowledge-based CT-Competence Process.

Table 14 shows the results of the written protocol analysis - how each trust/distrust component mainly results from several trust/distrust formation processes. For example, 93% of C1 (Knowledge-based CT-Competence) results from I1 (Competence Assessment), I7 (Verification), I6 (Information Sharing), and I2 (Expectation Satisfaction). I1 mainly includes protocols such as "The RA is knowledgeable about products" and "The criteria represent all the important features of the product." I7 mainly refers to protocols such as "Word of mouth, brand names of the recommended products," "I can tell by the features on the products that ...," and "Previous experience with the recommended product." I6 mainly refers to "The RA provides all the info I would need to know; detailed info." I2 mainly means "The RA recommends the products that fit my expectations/requirements." Customers' answers to the questions about RAs' goodwill actually included their assessments of both the benevolence and the integrity of the RAs.
Table 14 separates the written protocols regarding the formation of cognitive trust in an RA's benevolence from those regarding cognitive trust in an RA's integrity, based on the definitions of the two constructs and on the two judges' coding. The results of the written protocol analysis (Table 14) helped with the verbal protocol analysis only when subjects talked about a trust/distrust formation process without specifying which component of trust/distrust the process would generate. This situation arose only for part of cognitive trust formation (C1, C2, and C3). Regarding emotional trust formation (E1, E2, and E3), subjects always expressed their emotions while they were thinking aloud.

Trust and Distrust Formation
The protocols revealed that customers formed both trust and distrust while interacting with an RA. Therefore, the transcribed verbal protocols and written protocols have been separated into two parts: trust formation and distrust formation. As shown in Table 13, of the 3,141 lines of coded verbal protocols, 53% (1,652 lines) relate to trust formation, while 47% (1,489 lines) relate to distrust formation. Of the 556 lines of coded written protocols, 68% (380 lines) relate to trust formation, while 32% (176 lines) relate to distrust formation.

7.3 Protocol Analysis Results
The verbal protocol analysis has been used to address the following research questions:
• What is the structure of customer trust in an RA, and what is the structure of customer distrust in an RA? Are these structures the same?
• How do customers process their knowledge about RAs in order to form trust or distrust? How can RAs be designed to facilitate the formation of trust in an RA or to prevent the formation of distrust?
• What are the effects of repeated interactions on the formation of trust and distrust?
• What are the effects of RA internalization on the formation of trust and distrust?
• How can RAs be designed to facilitate cognitive and emotional trust formation, and to prevent cognitive and emotional distrust?

There are no hypotheses for the research questions above, since the protocol analysis part of the study is exploratory and, to the best of our knowledge, original. The results of the verbal protocol analysis are reported because verbal protocols are concurrent protocols; their analysis therefore demonstrates how customers actually form trust or distrust in RAs while interacting with them.

The Structures of Trust and Distrust
As shown in Table 15, a total of 1,859 processes were coded as generating trust components, while 1,642 processes were coded as generating distrust components. The structures of trust and distrust differ significantly in terms of their cognitive vs. emotional components, χ²(1, N = 3501) = 427.9, p < 0.01.
• The instances of trust in an RA are mainly cognitive (66% cognitive trust and 34% emotional trust).
• The instances of distrust in an RA are nearly evenly split between cognitive and emotional components (48% cognitive distrust and 52% emotional distrust).
These results indicate that customers need solid rational reasons as the basis for building cognitive trust, whereas feelings about an RA, possibly without good rational reasons, are sufficient for distrust to form. The main component of cognitive trust in RAs is cognitive trust in the RA's competence (as measured in Table 15). This result indicates that customers focus on an RA's competence, rather than its benevolence or integrity, to build their cognitive trust in the RA. A possible reason is that, in our experiment, the subjects were asked to assume that they would own the RA; therefore the subjects' concerns about the RA's integrity or benevolence were not very strong.
This result may vary when the ownership of the RA changes, e.g., when the RA is owned by a seller on a website; future study is needed to test this. In addition, as shown in Table 15, the main component of emotional trust is knowledge-based emotional trust (E2). When trust and distrust formation are combined, 57% of the processes are cognitive and 43% are emotional.

Trust/Distrust Formation Processes
As shown in Table 16, a total of 1,516 processes were coded as trust formation processes, while 1,373 processes were coded as distrust formation processes. The distribution of the trust formation processes (I1-I9) differs from that of the distrust formation processes (I1-I9), χ²(8, N = 2888) = 503.5, p < 0.01. Furthermore, the processes that account for at least 10% of trust or distrust formation (identified in Table 16) are different. These results reveal that trust and distrust are formed differently.
• Almost 80% of trust results from four processes:
o I1 - competence assessment process, 33%
o I6 - information sharing process, 21%
o I7 - verification process, 14%
o I2 - expectation satisfaction process, 11%
• Almost 70% of the instances of distrust result from three processes:
o I4 - awareness of the unknown process, 28%
o I1 - competence assessment process, 22%
o I2 - expectation satisfaction process, 20%

The Effects of Repeated Interactions on Trust and Distrust Formation
The verbal protocols from the repeated-interaction groups (Groups 2 and 4 in Table 4) have been further separated by interaction number: the initial interaction between customers and RAs, the second interaction, and the third interaction; the average number of processes per subject during each interaction was then calculated.
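The structural comparisons reported in this section are chi-square tests of independence on process counts. As an illustrative sketch only (not the study's analysis script), the 2x2 trust-vs-distrust test can be computed by hand; the counts below are reconstructed from the rounded percentages in Table 15, so the resulting statistic will not match the exact value computed from the raw coded data and reported in the text.

```python
# Chi-square test of independence for a 2x2 table of process counts.
# Counts are reconstructed from rounded percentages (66%/34% of 1,859
# trust processes; 48%/52% of 1,642 distrust processes), so this is an
# approximation of the reported analysis, not a reproduction of it.
observed = [[1227, 632],   # trust processes: cognitive, emotional
            [788, 854]]    # distrust processes: cognitive, emotional

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
n = sum(row_totals)        # 3,501 processes in total, matching N in the text

# Pearson chi-square: sum over cells of (observed - expected)^2 / expected,
# where expected = row total * column total / grand total.
chi2 = sum(
    (observed[i][j] - row_totals[i] * col_totals[j] / n) ** 2
    / (row_totals[i] * col_totals[j] / n)
    for i in range(2)
    for j in range(2)
)
dof = (len(observed) - 1) * (len(observed[0]) - 1)  # 1 for a 2x2 table

# chi2 far exceeds the dof=1 critical value of 6.63 at p = 0.01, so the
# trust and distrust structures differ significantly on these counts too.
print(dof, round(chi2, 1))
```

The same computation applies to the other comparisons in this section (e.g., the 2x9 process-distribution tables), with the expected counts and degrees of freedom adjusted for the table's shape.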
Regarding the formation of each trust component (listed in Table 17), the results demonstrate the following:
• The absolute numbers of processes within every category drop by almost 50% from the initial interaction to the second interaction, and the numbers stabilize from the second interaction to the third.
• However, the proportion of processes forming each trust component remains the same from the initial interaction to the third interaction.
The same observations apply to the formation of distrust components (see Table 18), trust formation processes (see Table 19), and distrust formation processes (see Table 20) in repeated interactions.

The Effects of RA Internalization on Trust/Distrust Formation
The verbal protocols have also been segregated by level of RA internalization. The analysis results demonstrate the following:
• The structure of trust formation (depicted in Table 21) does not differ by RA internalization, χ²(1) = 0.01, p > 0.10.
• The structure of distrust formation (see Table 21) does not differ by RA internalization, χ²(1) = 0.10, p > 0.10.
• The proportion of trust formation processes (see Table 22) does not differ by RA internalization, χ²(8) = 5.63, p > 0.10.
• The proportion of distrust formation processes (see Table 22) does not differ by RA internalization, χ²(8) = 5.74, p > 0.10.

The verbal protocols have been further segmented by the four experimental groups participating in the study. The analysis results show that the level of RA internalization and repeated interactions (i.e., familiarity) do not affect the structure of trust (Figure 4) or the structure of distrust (Figure 5), because the proportional distribution of trust and distrust components remains the same across the four groups (listed in Table 3).
These results indicate that customers use stable structures (the proportional distribution of cognitive and emotional trust formation in Coding Scheme 1) to form trust and distrust in an RA, regardless of RA type or the number of interactions between the customer and the RA. In addition, the level of RA internalization and repeated interactions (i.e., familiarity) do not affect customers' trust formation processes (see Figure 6) or distrust formation processes (see Figure 7), because the proportional distribution of information-related processes remains the same across the four groups (listed in Table 3). These results indicate that customers use stable structures (the proportional distribution of the nine information-processing methods in Coding Scheme 2) to process their knowledge about an RA, regardless of RA type or the number of interactions between the customer and the RA.

Building Cognitive and Emotional Trust
In order to determine how to facilitate cognitive and emotional trust and how to prevent cognitive and emotional distrust, the verbal protocols have been separated by test subject (49 subjects in total). SPSS was then used to conduct regression analysis. Because the main component of cognitive trust is knowledge-based cognitive trust in an RA's competence (C1) and the main component of emotional trust is knowledge-based emotional trust (E2), the regression analysis used C1 and E2 as dependent variables and I1-I9 (the trust/distrust formation processes) as independent variables. The regression method used was Enter. The regression analysis results (Table 23) revealed the following:
• To facilitate cognitive trust in an RA's competence, the most important strategy is to design an RA with sufficient, robust, easy-to-access, easy-to-understand, and easy-to-use functions (I1).
It is also helpful when an RA shares detailed information with customers (I6), when it understands and behaves according to customer expectations (I2), and when it senses and responds to customer awareness of unknown factors (I4). For example, cognitive trust in an RA may be improved by increasing user involvement in RA design, by building a knowledge base into an RA that infers what customers expect (based on the movements of the customers' mice, webpage browsing, or personal shopping history), or by allowing customers to directly modify the reasoning process through the RA's interface.
• To facilitate emotional trust in an RA, the two most important strategies are creating a pleasant and intuitive interface (I8) and enabling customers to verify the RA's questions, information, and recommendations with their own prior knowledge (I7). It is also helpful to share detailed information with customers (I6) and to understand and behave according to customer expectations (I2).
• To prevent cognitive distrust, the two most important strategies are concealing any incompetent functions from customers (I1) and avoiding unexpected behavior (I2). For example, an incompetent RA might not recommend any products and not explain its failure. Customers will also be disappointed when an RA does not follow their expectation of ordering selections from "Minimum Price" to "Maximum Price," or when they expect that at least one model from a particular manufacturer (e.g., IBM) will fit their criteria and the RA does not recommend any computer from that company.
• To prevent emotional distrust, the key is to sense and respond to customer awareness of the unknown (I4). It also helps to avoid unpleasant interfaces (I8) and to enable customers to feel in control (I3).
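The regression design above can be sketched in a few lines. This is an illustrative sketch on synthetic data with assumed coefficients (the study itself used SPSS on the coded protocol counts); the "Enter" method corresponds to ordinary least squares with all nine predictors entered simultaneously.

```python
import numpy as np

# Illustrative sketch (synthetic data, hypothetical effect sizes): per
# subject, regress the count of C1 processes on the counts of the nine
# formation processes I1-I9, all entered at once.
rng = np.random.default_rng(42)
n_subjects = 49                              # 49 subjects, as in the study
I_counts = rng.poisson(lam=3.0, size=(n_subjects, 9)).astype(float)

# Assumed "true" effects for the simulation only: I1, I2, I4, and I6
# matter, loosely mirroring the pattern described for Table 23.
true_beta = np.array([0.8, 0.3, 0.0, 0.2, 0.0, 0.5, 0.0, 0.0, 0.0])
c1_counts = I_counts @ true_beta + rng.normal(scale=0.5, size=n_subjects)

# Ordinary least squares with an intercept (the "Enter" design).
X = np.column_stack([np.ones(n_subjects), I_counts])
beta_hat, *_ = np.linalg.lstsq(X, c1_counts, rcond=None)
print(beta_hat.round(2))   # intercept followed by the nine coefficients
```

Repeating the fit with E2 counts as the dependent variable gives the second regression described above.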
7.4 Summary of Chapter 7

This chapter opens the black box of trust and distrust formation in an RA by employing a protocol-tracing method. The results conclude that:

•	Trust and distrust have different structures in terms of cognitive and emotional components.

•	Trust and distrust in an RA are formed via different ways in which customers process their knowledge about an RA.

•	An RA needs to be designed in different ways in order to facilitate cognitive and emotional trust while preventing cognitive and emotional distrust.

•	Repeated interactions change neither the structures of trust/distrust nor the proportion of different trust/distrust formation processes. However, the quantities of processes in all categories drop by almost 50% from initial interactions to second interactions, and then reach a stable level.

•	The level of RA internalization changes neither trust/distrust structures nor the way that customers process their knowledge in order to form trust or distrust.

•	Finally, an RA needs to be designed in different ways in order to facilitate cognitive trust vs. emotional trust. Similarly, the RA needs to be designed in different ways in order to prevent cognitive distrust vs. emotional distrust.

Chapter 8. Conclusions

By gathering empirical support for the conceptual models proposed in this dissertation (the trust model proposed in Section 4.4, the research model in Section 4.1, and the model of trust/distrust formation processes in Section 7.1), through the application of both quantitative analysis and qualitative analysis (process tracing), this study provides several new insights into the roles of emotional trust and cognitive trust in RA adoption by customers in e-commerce, while also illuminating the black box of trust formation processes.
In addition, because the concepts of internalization, familiarity, trust, and adoption analysed in this research model also exist in relationships between customers and other personalized technologies in e-commerce, notably including personalized negotiation agents, the results of this research can be further generalized to the contexts of designing and adopting a variety of personalized technologies.

8.1 Theoretical and Empirical Contributions

8.1.1 Emotional Trust and Cognitive Trust

The present research contributes to theories regarding customer trust in e-commerce by proposing an innovative model of trust (in Section 4.4). This trust model extends prior research (Blomqvist, 1997; Currall, 1990; Doney and Cannon, 1997; Holmes, 1991; Kipnis, 1986; Lewis and Weigert, 1985; Luhmann, 1979; Mollering, 2001) by incorporating and explicitly differentiating between cognitive and emotional components of trust. This study also makes an empirical contribution to the growing body of literature concerning trust in e-commerce by developing new measures of cognitive trust and emotional trust (see Table 6). The protocol analysis portion of this study then extends the trust model proposed in Section 4.4 to include distrust, explicitly differentiating the cognitive components of distrust from the emotional components. The results of the protocol analysis empirically confirm the validity of the proposed trust model by revealing the existence of both cognitive and emotional components in trust and distrust. Given this empirical evidence, and related theoretical arguments claiming that the emotional component is necessary in the development of trust, considerations of the emotional aspects of trust should be included in future research on trust and distrust.
Prior research has been concerned primarily with cognitive components of trust while largely ignoring emotional components of trust and distrust; the present study therefore offers new directions and possibilities for future research.

The present study theoretically argues and empirically validates the assertion that trust and distrust have different structures in terms of their emotional and cognitive components. Results of the protocol analysis in this study reveal that trust is mainly cognitive, while distrust is equally cognitive and emotional (see Table 15).

This study also theoretically argues and empirically validates the supposition that awareness of unknown details and issues affects both trust and distrust, especially their emotional components. This finding contrasts with the conceptual trust models posited in prior research, which focus on the impact of awareness of known details on trust and distrust while largely overlooking the impact of awareness of unknown details. Thus, the potential impacts of customer awareness of unknown details on trust and distrust should be included in research regarding trust, because customer knowledge about trustees is necessarily incomplete (Simmel, 1964), and because awareness of unknown details must be suspended or resolved when customers venture their trust. This awareness of unknown considerations may be more important in e-commerce than in traditional commerce: customers are more likely to be aware of unknown details in e-commerce, due to the distance imposed between customers and sellers by the Internet and the unfamiliar technologies used in e-commerce, e.g. computer agents.

This study also empirically validates the relativization of trust due to differences between trustees (Sztompka, 1999).
The content of trust is variable for different trustees, because the expectations involved in trust are congruent or incongruent with the individual nature of particular trustees (Sztompka, 1999: 55). For example, according to Sztompka (1999: 55), when courts of law function as trustees, trust in integrity dominates the formation of trust; regarding administrative officials, trust in competence dominates; while regarding neighbors and informal acquaintances, trust in benevolence and trust in integrity have a dominant role. As a supplemental contribution to prior research, this study empirically reveals that customers' cognitive trust in an RA is primarily cognitive trust in the RA's competence, rather than cognitive trust in the RA's benevolence or integrity (see Table 15). This result indicates that, when the trustee is an RA (i.e. a technology), what customers predominantly expect from the RA is its competence, applied in the instrumental qualities of the actions taken by the RA (Sztompka, 1999: 53).

8.1.2 RA Internalization and Customer Trust

The current study offers a further contribution to knowledge about trust by conceptualizing that RA internalization affects RA adoption by increasing levels of cognitive trust and emotional trust in RAs. This conceptual model (and the research model in Section 4.1) extends marketing and sociological research into interpersonal trust (e.g. Barber, 1983; Bergen, Dutta and Walker, 1992; Blomqvist, 1997; Crosby et al., 1990; Currall, 1990; Doney and Cannon, 1997; Gratton, 1973; Hawes, Mask and Swan, 1989; Holmes, 1991; Kelman, 1958; Kramer et al., 1996; Lewis and Weigert, 1985; Luhmann, 1979; Luhmann, 1988; McAllister, 1995; McKnight et al., 1998; Miller, 2000; Mollering, 2001; Rempel et al., 1985; Swan et al., 1999; Sztompka, 1999) toward the context of customer trust in RAs (i.e.
computer agents), based on the rationale that trust in IT and trust in people are not fundamentally different from each other (Reeves and Nass, 1996; Sztompka, 1999). This study makes an additional contribution by rendering empirical evidence to support this conceptualization (Table 11). These results indicate that the key to promoting adoption of, and trust in, an RA is to design the RA so that it can effectively internalize a customer's real needs. When RA internalization is high, customers are more likely to perceive similarities between themselves and RAs, and they are more likely to develop a sense of "us" between RAs and themselves. This sense of "us" and sense of similarity are unique and important in relationships between customers and personalized technologies like RAs.

This study also makes an empirical contribution to the theoretical understanding of trust by revealing that the level of RA internalization affects neither the distribution of cognitive and emotional components in trust and distrust formation (see Table 21) nor the distribution of the different trust/distrust formation processes employed by customers (see Table 22). These results indicate that customers utilize the same methods to process their knowledge about RAs when forming trust and distrust, regardless of the level of RA internalization. However, the level of RA internalization has a significant and direct effect on the level of cognitive trust in the RA's competence, benevolence, and integrity (see Table 11), and it indirectly affects emotional trust through cognitive trust (see Table 12).

8.1.3 Familiarity and Customer Trust

Familiarity is the primary means of trust formation (Luhmann, 1988). This study extends prior research by investigating the impact of familiarity on cognitive trust and emotional trust separately, instead of treating trust as an indivisible phenomenon.
In addition, prior research has been inconclusive about whether the impact of familiarity on trust is positive, neutral, or negative (see Section 3.5). This study demonstrates that the level of familiarity significantly and directly increases the level of cognitive trust in an RA's competence, benevolence, and integrity (see Table 11), and it reveals that the level of familiarity indirectly increases the level of emotional trust in an RA through cognitive trust (see Table 12). These results suggest that the impact of familiarity on cognitive trust differs from its impact on emotional trust; therefore, future research on trust should look into the individual components of trust, rather than treating trust as one singular object of study. These results also add empirical evidence to the argument that customer familiarity with an RA increases trust in the RA.

Note that the path coefficients from familiarity to cognitive trust are relatively low (0.15-0.21, Table 11), although these impacts are significant. Moreover, the impact of familiarity on RA adoption is modest: a one-SD increase in the level of familiarity leads only to a 0.06-SD increase in a customer's intention to adopt an RA as a delegated agent, and a 0.10-SD increase in a customer's intention to adopt an RA as a decision aid (see Table 11). The reason for the low impact of familiarity on RA adoption lies in the low path coefficients from familiarity to cognitive trust. This low impact indicates that customers decide whether or not to adopt RAs primarily during their initial interactions with the RAs.

The protocol analysis results (Tables 17, 18, 19, and 20) further reveal why the initial interaction is critical for trust/distrust formation and RA adoption: customers intensely process their knowledge about an RA to form their trust/distrust in the RA during the initial interaction.
The reason is not that customers employ different processes to form trust/distrust in the initial interaction compared to repeated interactions, because repeated interactions change neither the structures (%) of trust/distrust (Coding Scheme 1) nor of trust/distrust formation (Coding Scheme 2). The real reason is the change in the intensity of trust/distrust formation: the intensity (the absolute number of processes) of trust/distrust and trust/distrust formation drops by almost 50% from the initial interaction to the second interaction, and then stays stable. This pattern can be explained as the indication of a learning curve. Customers spend more cognitive effort during initial interactions in order to learn about RAs and to form their trust or distrust in the RA; the cognitive effort required in subsequent interactions diminishes because customers are more familiar with the RA by then.

8.1.4 Two Levels of RA Adoption

This study makes a theoretical contribution to research efforts by differentiating between two levels of RA adoption, the intention to adopt an RA as a decision aid and the intention to adopt an RA as a delegated agent; prior literature has tended to treat them as a single construct. This study suggests that the two levels of RA adoption merit separate research because they differ at the conceptual, measurement, and behavioral levels.

At the conceptual level, these two intentions correspond to two levels of RA adoption: whether an RA will be adopted to automate decision-making processes, or simply to assist customers in making decisions. The protocol analysis for this study reveals that it is common for customers to adopt RAs as decision aids, while it is rare for customers to adopt RAs as delegated agents. However, both levels of RA adoption are present in e-commerce.
One example of adopting an RA as a delegated agent is found in the actions of Subject 17 in the experiment portion of this research, who thought aloud, "I trust the RA's advice and didn't know why." One example of adopting an RA as a decision aid, on the other hand, was exhibited by Subject 15, who said, "I feel satisfied on relying on the RA to search for specific products but not to decide on the purchase. I want to review its recommendation before I make a purchase."

At the measurement level, the PLS measurement model and factor analysis demonstrate that the two levels of intentions regarding RAs are distinct operational factors. The correlation between the two intentions is 0.5, which is not high enough to prove that the two intentions are a single unified object of study. In addition, the scatter plot (depicted in Figure 8) reveals no clear linear relationship between these two intentions.

At the behavioral level, emotional trust affects the two intentions differently (as noted in Table 11). Cognitive trust also affects the two intentions differently (see Table 11). Therefore, the intention to adopt an RA as a decision aid and the intention to adopt an RA as a delegated agent are two different levels of RA adoption which merit separate research.

8.1.5 The Roles of Emotional Trust and Cognitive Trust in RA Adoption

This study makes theoretical and empirical contributions by showing the significant effect of emotional trust on RA adoption. Emotional trust directly promotes intentions to adopt RAs as delegated agents or as decision aids (see Table 11); furthermore, emotional trust fully mediates the impact of cognitive trust on intentions to adopt RAs as delegated agents or as decision aids (see Table 12). These results demonstrate that future research on IT adoption should address emotional trust.
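The full-mediation reasoning here follows the Baron and Kenny (1986) logic cited in the reference list: the effect of a predictor shrinks toward zero once the mediator is controlled for. A minimal sketch of that logic on synthetic data, with x, m, and y as illustrative stand-ins for cognitive trust, emotional trust, and adoption intention (the coefficients are assumptions, not the dissertation's estimates):

```python
import random

def cov(a, b):
    """Sample covariance of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / (n - 1)

random.seed(7)
n = 500
x = [random.gauss(0, 1) for _ in range(n)]          # e.g. cognitive trust
m = [0.9 * xi + random.gauss(0, 0.5) for xi in x]   # e.g. emotional trust
y = [0.8 * mi + random.gauss(0, 0.5) for mi in m]   # e.g. adoption intention

# Step 1: total effect of x on y (simple regression slope).
total = cov(x, y) / cov(x, x)

# Step 2: regress y on x and m together; the x coefficient is the
# direct effect, computed from the two-predictor normal equations.
sxx, smm, sxm = cov(x, x), cov(m, m), cov(x, m)
sxy, smy = cov(x, y), cov(m, y)
det = sxx * smm - sxm ** 2
direct = (sxy * smm - smy * sxm) / det  # coefficient on x, controlling m
via_m = (smy * sxx - sxy * sxm) / det   # coefficient on m

print("total %.2f, direct %.2f, mediator %.2f" % (total, direct, via_m))
```

Because y depends on x only through m in this construction, the total effect is sizable while the direct effect collapses toward zero once m is in the model — the signature of full mediation.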
It is evident that IT adoption by customers, compared to IT adoption by organizations or by individuals operating as employees, is more likely to be affected by emotional trust, because customers using IT for personal decision-making tasks may have more freedom to act on their subjective feelings. Future research is needed on this topic.

In addition, this study contributes empirically to ongoing and future research by revealing that the impact of emotional trust on the intention to adopt an RA as a delegated agent is higher than its impact on the intention to adopt an RA as a decision aid, while the impact of cognitive trust on the intention to adopt an RA as a delegated agent is lower than its impact on the intention to adopt an RA as a decision aid (see Table 11). These results disclose the importance of
The results reveal the distribution of nine trust/distrust formation processes (see Table 16), and in repeated interactions (see Table 19 and Table 20). These results disclose that trust and distrust result from different ways that customers process their knowledge about RAs (see Table 16), and they indicate that cognitive and emotional components of trust and distrust are affected by different information-processing operations (see Table 23).  8.2 Managerial Implications Customers must have trust before they adopt any IT application (Nwana et al., 1998) or shop on a website (Ba et al., 1999; Brynjolfsson and Smith, 2000; Keen, 1997;  102  Ratnasingham, 1998; Jarvenpaa et al., 2000; L i m et al., 2001; Wetsch and Cunningham, 1999; Urban et al., 2000) in e-commerce environments. Therefore, designers of personalized technologies (like RAs) and personalized websites should utilize the findings of this study to augment their designs and to improve customer trust in their technology and their websites, which will then increase customer intentions to adopt the technologies and websites. First, it is not enough to only cognitively convince customers of a technology's or a website's competence, benevolence, and integrity (i.e. cognitive trust). It is also important to earn customers' feelings of security and comfort (i.e. emotional trust). Second, a technology or a website needs to be designed both to increase trust and to decrease distrust, because customers develop both trust and distrust while interacting with technology or websites. 
However, design features of IT and websites intended to increase trust and to decrease distrust should be different, because trust is predominantly cognitive trust while distrust is equally cognitive and emotional (see Table 15), and because almost 80% of trust results from four trust formation processes: the competenceassessment process, the information-sharing process, the verification process, and the expectation of satisfaction; and on the other hand almost 70% of distrust results from three distrust formation processes: the awareness of unknown considerations, the assessment of competence, and the expectation of satisfaction process (Table 16). These results indicate that RAs should be designed in different ways to promote trust formation as opposed to preventing distrust formation. To encourage customers to form trust in an R A , the most important strategy entails an R A exhibiting competent functions to customers, for example an ability to ask appropriate right questions or an  103  ability to provide explanations. A n R A also should share information with customers, enough to address to the customers' concerns (e.g. what is a particular product attribute, how to use an R A , and why an R A generates a particular recommendation), but RAs should not offer so much information that it might overwhelm customers. The best solution is an R A that can process all of the information that customers may want, and then allow the customers to control what level of details they want to access. In addition, an R A should provide questions, explanations, and recommendations containing examples or details that customers can verify with their own existing knowledge. In effect, an R A should confirm customer expectations. To prevent distrust formation, the most important strategy for an R A applies the ability to solve particular customers' awareness of the unknown. 
When customers recognize something that they do not know about the RA, ideally the RA should sense the customers' awareness of the unknown factor immediately and provide answers, so that something unknown to customers is converted into something known. This requirement for dynamic communication between customers and RAs is a challenge for RA design. In addition, in order to prevent distrust formation, an RA should not expose any incompetence, and it should avoid performing in a manner that customers do not expect.

Third, as shown in Section 7.3, the design features of an RA intended to increase cognitive trust and those intended to increase emotional trust should further differ from each other, because cognitive trust formation and emotional trust formation are affected by different information-processing methods (see Table 23). Similarly, designs intended to decrease cognitive distrust in an RA and designs intended to decrease emotional distrust should differ from each other (see Section 7.3 and Table 23).

In addition, the significant impact of RA internalization on customer trust and RA adoption demonstrates that IT and website designers should address the gap between the customer needs expressed in specifications (e.g. answers to the RA's questions about a computer) and customers' real needs (e.g. what customers will use the computer for). For example, when a sample customer was shopping with the low-internalization RA, he said: "This RA knew a lot about the product. It asked me a lot of good questions. The RA knew what it was talking about. However, I did not really know what the RA was talking about. I just specified some product attributes in order to get some recommendations. What I asked was not what I wanted. I do not trust the RA. I will not use it." The results in this study indicate that it is beneficial to make efforts to reduce or even eliminate this gap.
For an RA, it is important to encourage customers to develop a sense of "us" between themselves and the technology or the website, by increasing the perceived similarity between the customers and the technology or website. For example, an RA should ask more personalized questions which enable customers to effectively express what they really want. In effect, it is helpful to ask why customers need a particular product, instead of simply asking about preferred product attributes. It is also good to assign an RA a personality similar to the customer's own (Nass et al., 1995).

Future research is needed into strategies for designing high-internalization personalized technologies and websites (i.e. to reduce the gap between customers' expressed needs and their real needs). The task of developing internalization, however, is quite challenging because customer needs may be either explicit or implicit (Smyth and Cotter, 2001). Design elements that may be effective include features that instigate adaptive and dynamic interactions between RAs and customers.

The conclusions this study has reached regarding the impact of familiarity on customer trust and RA adoption suggest that first impressions are significant in e-commerce. IT and website design should strive to facilitate initial interactions between customers and IT or websites. For example, RA designers may provide intuitive and clear explanations, easy-to-use and attractive interfaces, and robust search engines (i.e. search engines that generate some recommendations for any search) in order to assist and impress new customers. Initial interactions are important because customers' trust/distrust formation is most intense during their initial interactions with IT, in contrast to subsequent interactions (see Tables 17, 18, 19, and 20).

8.3 Limitations

The present study involves multiple limitations. First, the ownership of an RA may affect customer trust in RAs.
RAs may be owned and controlled by company websites, by third-party websites, or by customers. It is likely that customers' trust in their own RA will be higher than their trust in an RA owned by a company website or by a third party. This study avoids this issue by asking the sample consumers to assume that they would own the RA. In the information sheet for the experiment, the subjects were told that they were going to use and evaluate an RA, and that a computer company had developed this RA to sell to customers as a virtual personal shopping assistant. Thus, while the ownership of RAs does not affect the testing of the causal model in this paper, RA ownership should be considered before generalizing the results of these experiments to other contexts.

Second, familiarity was measured with a single item; therefore the reliability of the measure could not be assessed.

Third, the order in which products were shopped for was not randomized in the repeated-interaction groups; therefore the effects of product order on cognitive and emotional trust could not be controlled. The order of the three products was not randomized because we assumed that, in repeated interactions, the subjects would learn about the RA, but would not learn about a product in the initial interaction and then apply the learned product knowledge to the other products in subsequent interactions. We measured the subjects' product expertise at the beginning of each interaction. In the repeated-interaction groups (Groups 2 and 4 in Table 3), the levels of average product expertise (measured on 7-point scales) were 2.2 in the initial interaction, 2.6 in the second interaction, and 2.2 in the third interaction. These numbers seem to support our assumption that the subjects did not learn about products in the repeated interactions; therefore, it may be acceptable not to randomize the order of products in this study. However, future study is needed on this issue.
Fourth, regarding cognitive trust in the competence of RAs (see Table 6), item zCTC3 should not have added "my needs" to two items (zCTC1 and zCTC2) that only discussed product issues. "...my needs" in zCTC3 referred to "my requirements - the needs expressed in terms of product attributes." This item was added because RA competence is related to the capability of RAs to give good product recommendations to particular customers: the RA should have good knowledge about products, and it should pay attention to each particular customer's requirements, the needs expressed in terms of product attributes. But it would have been better not to add "my needs" to zCTC3. Fortunately, the alpha for the measure of cognitive trust in an RA's competence (zCTC1, zCTC2, and zCTC3) was 0.74, which exceeds 0.70, the suggested criterion for acceptable reliability (Table 9). In addition, the loading of zCTC3 was significantly high (Table 8).

Fifth, better incentives could have been given to make subjects take the shopping tasks in our experiment more seriously. It is possible that a few subjects participated just to get one course credit, without much attention to their assessment of the RAs. We tried to address this issue by screening the subjects, so that all the subjects who participated were interested in buying the products in the experiment. We also offered a random draw for a $400 rebate toward the purchase of the selected product in order to make subjects take their shopping assignments seriously. The average numbers of verbal protocol processes per subject (Table 13.b) show that subjects put considerable thought into the experiment. However, the incentive might have been better if the random draw had been changed from "one subject wins $400" to "10 subjects win $40 each".

8.4 Suggestions for Future Research

First, future research is needed to test this research model in the contexts of RAs that are owned by the website of a company (e.g.
www.amazon.com, www.cdnow.com), by a third party (e.g. www.activebuyerguide; www.priceline.com), or by a customer. The different types of ownership may affect cognitive trust and emotional trust in RAs in different ways.

Second, this study provides evidence for the benefits of designing an RA with high internalization, in terms of increased customer trust and RA adoption rates. It is beyond the scope of the present study to address the design of RAs and why particular designs may lead to high internalization. However, future research on RA design with high-internalization potential is valuable both theoretically and practically. Possible antecedents of RA internalization are the type of questions asked by an RA (e.g. questions about the customer's needs in terms of what the product will be used for vs. questions about the customer's needs in terms of product attributes), the number of questions (how many is too few, how many is an appropriate number, and how many is too many), or the presentation and contents of explanations (virtual technologies, technical explanations, or examples expressed in simple pictures), etc. Future research is also needed on how to improve the internalization level for different types of RAs (Table 1). For example, for constraint-satisfaction RAs, asking customer-need-based questions instead of product-attribute-based questions may increase RA internalization. For collaborative-filtering RAs, better designs to segment customers and to assign a particular customer to a customer segment may increase RA internalization.

Third, the current research model assumes that customers may develop intentions to adopt RAs as delegated agents or as decision aids simultaneously. However, customer intentions to adopt an RA as a decision aid may come before intentions to adopt the RA as a delegated agent, because customers may want to use an RA as a decision aid first.
After a few successful trials, the customers may intend to adopt the RA as a delegated agent. In this way, adopting an RA as a decision aid and adopting an RA as a delegated agent may form a two-stage model in which the adoption of an RA as a decision aid occurs before the RA is adopted as a delegated agent. More research is needed on the relationship between the two levels of RA adoption.

Fourth, future research is needed into designing RAs differently depending on whether they are to be used as decision aids or as delegated agents. For example, if an RA is designed to be a delegated agent, it may help to allow customers to modify the reasoning processes within the RA so that it can more effectively internalize the logic particular customers employ in their shopping decisions. If an RA is designed to be a decision aid, on the other hand, allowing customers to modify the RA's reasoning logic may be unnecessary.

Fifth, the research model seems to be generally applicable to other customer agents in e-commerce, or even other personalized decision support systems, inasmuch as the constructs of internalization, familiarity, cognitive and emotional trust, and adoption all exist in relationships between customers and these personalized technologies. Therefore, testing the research model developed in this study in the contexts of other customer agents in e-commerce or other personalized DSS will be possible and valuable.

References:

1. Adkins, R.T. and Swan, J.E. "Improving the Public Acceptance of Salespeople through Professionalism," Journal of Personal Selling & Sales Management (2: Fall/Winter), 1980-81, pp. 32-38.

2. Akerlof, G.A. "The Market for 'Lemons': Quality under Uncertainty and the Market Mechanism," Quarterly Journal of Economics (84: August), 1970, pp. 488-500.

3. Anderson, E. and Weitz, B. "Determinants of Continuity in Conventional Industrial Channel Dyads," Marketing Science (8), 1989, pp. 310-323.

4. Anderson, J.C. and Narus, J.A.
"A Model of Distributor Firm and Manufacturer Firm Working Partnerships," Journal of Marketing (54), 1990. pp. 42-58.

5. Ansari, A., Essegaier, S. and Kohli, R. "Internet Recommendation Systems," Journal of Marketing Research (37:3), 2000. pp. 363-375.

6. Ariely, D. "Controlling the Information Flow: Effects on Consumers' Decision Making and Preferences," Journal of Consumer Research (27:September), 2000.

7. Armstrong, R.W. and Yee, S.M. "Do Chinese Trust Chinese? A Study of Chinese Buyers and Sellers in Malaysia," Journal of International Marketing (9:3), 2001. pp. 63-86.

8. Ba, S. and Pavlou, P.A. "Evidence of the Effect of Trust Building Technology in Electronic Markets: Price Premiums and Buyer Behavior," MIS Quarterly (26:3), 2002. pp. 243-268.

9. Ba, S., Whinston, A.B. and Zhang, H. "Building Trust in the Electronic Market through an Economic Incentive Mechanism," Proceedings of the Twentieth International Conference on Information Systems, 1999, pp. 208-213.

10. Baldwin, M.W. "Relational Schemas and the Processing of Social Information," Psychological Bulletin (112), 1992. pp. 461-484.

11. Barber, B. The Logic and Limits of Trust, Rutgers University Press, New Brunswick, NJ, 1983.

12. Barclay, D., Thompson, R. and Higgins, C. "The Partial Least Squares (PLS) Approach to Causal Modeling: Personal Computer Adoption and Use as an Illustration," Technology Studies (2:2), 1995. pp. 285-309.

13. Baron, R.M. and Kenny, D.A. "The Moderator-Mediator Variable Distinction in Social Psychological Research: Conceptual, Strategic, and Statistical Considerations," Journal of Personality and Social Psychology (51), 1986. pp. 1173-1182.

14. Bergen, M., Dutta, S. and Walker, O.C. "Agency Relationships in Marketing: A Review of the Implications and Applications of Agency and Related Theories," Journal of Marketing (56:3), 1992. pp. 1-24.

15. Blomqvist, K. "The Many Faces of Trust," Scandinavian Journal of Management (13:3), 1997. pp. 271-286.
16. Brewer, M.B. and Silver, M. "In-group Bias as a Function of Task Characteristics," European Journal of Social Psychology (8), 1978. pp. 393-400.

17. Brynjolfsson, E. and Smith, M.D. "Frictionless Commerce? A Comparison of Internet and Conventional Retailers," Management Science (46:4), 2000. pp. 563-585.

18. Chaudhuri, A. and Holbrook, M.B. "The Chain of Effects from Brand Trust and Brand Affect to Brand Performance: The Role of Brand Loyalty," Journal of Marketing (65:April), 2001. pp. 81-93.

19. Chiasson, T., Hawkey, K., McAllister, M. and Slonim, J. "An Architecture in Support of Universal Access to Electronic Commerce," Information and Software Technology (44:5), 2002. pp. 279-289.

20. Chin, W.W. "Issues and Opinion on Structural Equation Modeling," MIS Quarterly (22:1), 1998. pp. vii-xvi.

21. Chin, W.W. "Partial Least Squares for IS Researchers: An Overview and Presentation of Recent Advances Using the PLS Approach," Proceedings of the International Conference on Information Systems, Brisbane, Australia, 2000.

22. Chin, W.W., Marcolin, B.L. and Newsted, P.R. "A Partial Least Squares Latent Variable Modeling Approach for Measuring Interaction Effects: Results from a Monte Carlo Simulation Study and Voice Mail Emotion/Adoption Study," Proceedings of the International Conference on Information Systems, Cleveland, Ohio, 1996.

23. Chrusciel, D. and Zahedi, F.M. "Seller-Based vs. Buyer-Based Internet Intermediaries: A Research Design," Proceedings of the Fifth Americas Conference on Information Systems, 1999, pp. 241-243.

24. Coleman, J.S. Foundations of Social Theory, Harvard University Press, Cambridge, Massachusetts, 1990.

25. Crosby, L.A., Evans, K.R. and Cowles, D. "Relationship Quality in Services Selling: An Interpersonal Influence Perspective," Journal of Marketing (54:July), 1990. pp. 68-81.

26. Crowston, K. and MacInnes, I.
"The Effects of Market-Enabling Internet Agents on Competition and Prices," working paper, 2001 (accessed Dec. 8, 2002).

27. Currall, S.C. "The Role of Interpersonal Trust in Work Relationships," unpublished doctoral dissertation, Cornell University, 1990.

28. Curtin, J.J., Patrick, C.J., Lang, A.R., Cacioppo, J.T. and Birbaumer, N. "Alcohol Affects Emotion through Cognition," Psychological Science (12:6), 2001. pp. 527-531.

29. Davis, F.D. and Kottemann, J.E. "User Perceptions of Decision Support Effectiveness: Two Production Planning Experiments," Decision Sciences (25), 1994. pp. 57-77.

30. Dion, K.K., Berscheid, E. and Walster, E. "What Is Beautiful Is Good," Journal of Personality and Social Psychology (24), 1972. pp. 285-290.

31. Do, O., March, E., Rich, J. and Wolff, T. "Intelligent Agents and the Internet: Effects on Electronic Commerce and Marketing," (accessed in March 2001), https://mrmac-jr.scs.unr.edu/odie/paper.html

32. Doney, P.M. and Cannon, J.P. "An Examination of the Nature of Trust in Buyer-Seller Relationships," Journal of Marketing (61:2), 1997. pp. 35-51.

33. Doorenbos, R., Etzioni, O. and Weld, D. "A Scalable Comparison-Shopping Agent for the World Wide Web," Proceedings of the First International Conference on Autonomous Agents (Agents97), Marina del Rey, CA, 1997.

34. Doyle, S.X. and Roth, G.T. "Selling and Sales Management in Action: The Use of Insight Coaching to Improve Relationship Selling," Journal of Personal Selling and Sales Management (12:Winter), 1992. pp. 59-64.

35. Dwyer, F. and Lagace, R. "On the Nature and Role of Buyer-Seller Trust," Proceedings of the AMA Educators' Conference, Chicago, 1986, pp. 40-45.

36. Earle, T. and Cvetkovich, G.T. Social Trust: Toward a Cosmopolitan Society, Praeger, New York, 1995.

37. Fisk, D. "An Application of Social Filtering to Movie Recommendation," BT Technology Journal (14:4), 1996. pp. 124-132.

38. Fornell, C. and Larcker, D.
"Evaluating Structural Equation Models with Unobservable Variables and Measurement Error," Journal of Marketing Research (18), 1981. pp. 39-50.

39. Fukuyama, F.M. Trust: The Social Virtues and the Creation of Prosperity, Free Press, 1995.

40. Fung, R.K.K. and Lee, M.K.O. "EC-Trust (Trust in Electronic Commerce): Exploring the Antecedent Factors," Proceedings of the Fifth Americas Conference on Information Systems, 1999, pp. 517-519.

41. Gambetta, D. "Can We Trust Trust?" In Trust: Making and Breaking Cooperative Relations, D. Gambetta (Ed.), Blackwell, Oxford, UK, 1988. pp. 154-175.

42. Ganesan, S. "Determinants of Long-Term Orientation in Buyer-Seller Relationships," Journal of Marketing (58:April), 1994. pp. 1-19.

43. Gefen, D. "E-Commerce: The Role of Familiarity and Trust," Omega: International Journal of Management Science (28:6), 2000. pp. 725-737.

44. Gefen, D., Karahanna, E. and Straub, D.W. "Trust and TAM in Online Shopping: An Integrated Model," MIS Quarterly (27:1), 2003. pp. 51-90.

45. Gilbert, D., Aparicio, M., Atkinson, B., Brady, S., Ciccarino, J., Grosof, B., O'Connor, P., Osisek, D., Pritko, S., Spagna, R. and Wilson, L. "IBM Intelligent Agent Strategy," working paper, 1995.

46. Goker, M.H. and Thompson, C.A. "Personalized Conversational Case-Based Recommendation," Lecture Notes in Artificial Intelligence (1898), 2001. pp. 99-111.

47. Gratton, C. "Some Aspects of the Lived Experience of Interpersonal Trust," Humanitas (9), 1973. pp. 273-286.

48. Gregor, S. and Benbasat, I. "Explanations from Intelligent Systems: Theoretical Foundations and Implications for Practice," MIS Quarterly (23:4), 1999.

49. Grenci, R.T. and Todd, P.A. "Solutions-Driven Marketing," Communications of the ACM (45:3), 2002. pp. 64-71.

50. Gulati, R. "Does Familiarity Breed Trust? The Implications of Repeated Ties for Contractual Choice in Alliances," Academy of Management Journal (38:1), 1995. pp. 85-112.

51. Hallen, L.
and Sandstrom, M. "Relationship Atmosphere in International Business," In New Perspectives on International Marketing, 1991. pp. 108-125.

52. Hanani, U., Shapira, B. and Shoval, P. "Information Filtering: Overview of Issues, Research and Systems," User Modeling and User-Adapted Interaction (11:3), 2001. pp. 203-259.

53. Hardin, R. "The Street-Level Epistemology of Trust," Politics and Society (21:4), 1993. pp. 505-529.

54. Hardin, R. Trust and Trustworthiness, Russell Sage Foundation, New York, NY, 2002.

55. Haubl, G. and Trifts, V. "Consumer Decision-Making in Online Shopping Environments: The Effects of Interactive Decision Aids," Marketing Science (19:1), 2000. pp. 4-21.

56. Hawes, J.M., Mask, K.E. and Swan, J.E. "Trust Earning Perceptions of Sellers and Buyers," Journal of Personal Selling and Sales Management (9:Spring), 1989. pp. 1-8.

57. Hawes, J.M., Rao, C.P. and Baker, T.L. "Retail Salesperson Attributes and the Role of Dependability in the Selection of Durable Goods," Journal of Personal Selling and Sales Management (13:Fall), 1993. pp. 61-71.

58. Hoffman, D.L., Novak, T.P. and Peralta, M. "Building Consumer Trust Online," Communications of the ACM (42:4), 1999. pp. 80-85.

59. Holmes, J.G. "Trust and the Appraisal Process in Close Relationships," In Advances in Personal Relationships, W.H. Jones and D. Perlman (Ed.), 2, Jessica Kingsley, London, 1991. pp. 57-104.

60. Jarvenpaa, S.L., Tractinsky, N. and Vitale, M. "Consumer Trust in an Internet Store," Information Technology and Management (1), 2000. pp. 45-71.

61. Jones, K. "Trust as an Affective Attitude," Ethics (107), 1996. pp. 4-25.

62. Kahn, D., Pace-Schott, E. and Hobson, J.A. "Emotion and Cognition: Feeling and Character Identification in Dreaming," Consciousness and Cognition (11:1), 2002. pp. 34-50.

63. Keen, P.G.W. "Are You Ready for 'Trust' Economy?" Computer World (31:16), 1997. p. 80.

64. Kelman, H.C.
"Compliance, Identification, and Internalization: Three Processes of Attitude Change," Journal of Conflict Resolution (2:1), 1958. pp. 51-60.

65. Kipnis, D. "Trust and Technology," In Trust in Organizations: Frontiers of Theory and Research, R.M. Kramer and T.R. Tyler (Ed.), Sage, Thousand Oaks, CA, 1996. pp. 39-50.

66. Kollock, P. "The Emergence of Exchange Structures: An Experimental Study of Uncertainty, Commitment, and Trust," American Journal of Sociology (100:2), 1994. pp. 313-345.

67. Kramer, R.M. "The Sinister Attribution Error: Paranoid Cognition and Collective Distrust in Organizations," Motivation and Emotion (18), 1994. pp. 199-230.

68. Kramer, R.M. "Trust and Distrust in Organizations: Emerging Perspectives, Enduring Questions," Annual Review of Psychology (50), 1999. pp. 569-598.

69. Kramer, R.M., Brewer, M.B. and Hanna, B.A. "Collective Trust and Collective Action: The Decision to Trust as a Social Decision," In Trust in Organizations: Frontiers of Theory and Research, R.M. Kramer and T.R. Tyler (Ed.), Sage, Thousand Oaks, CA, 1996. pp. 357-389.

70. Kumar, N. "The Power of Trust in Manufacturer-Retailer Relationships," Harvard Business Review (74:6), 1996. pp. 93-106.

71. Lagace, R.R. and Gassenheimer, J.B. "An Exploratory Study of Trust and Suspicion toward Salespeople: Scale Validation and Replication," American Marketing Association Proceedings, 1991. pp. 121-127.

72. Langer, E.J. "The Illusion of Control," Journal of Personality and Social Psychology (32), 1975. pp. 311-328.

73. Lee, M.K.O. and Turban, E. "A Trust Model for Consumer Internet Shopping," International Journal of Electronic Commerce (6:1), 2001. pp. 75-91.

74. Lewis, D.J. and Weigert, A. "Trust as a Social Reality," Social Forces (63), 1985. pp. 967-985.

75. Lim, K.H., Sia, C.L., Lee, M.K.O. and Benbasat, I.
"How Do I Trust You Online, and If So, Will I Buy? An Empirical Study on Designing Web Contents to Develop Online Trust," working paper, 2001.

76. Lindskold, S. "Trust Development, the GRIT Proposal and the Effects of Conciliatory Acts on Conflict and Cooperation," Psychological Bulletin (85:4), 1978. pp. 772-793.

77. Luhmann, N. Trust and Power, Wiley, Chichester, UK, 1979.

78. Luhmann, N. "Familiarity, Confidence, Trust: Problems and Alternatives," In Trust: Making and Breaking Cooperative Relations, D. Gambetta (Ed.), Basil Blackwell, Oxford, 1988. pp. 94-109.

79. Lynch, J.G. and Ariely, D. "Wine Online: Search Costs Affect Competition on Price, Quality, and Distribution," Marketing Science (19:1), 2000. pp. 83-103.

80. MacNeil, I.R. The New Social Contract, Yale University Press, New Haven, CT, 1980.

81. Maes, P., Guttman, R.H. and Moukas, A.G. "Agents That Buy and Sell," Communications of the ACM (42:3), 1999.

82. Magrath, A.J. and Hardy, K.G. "A Conceptual Framework for Assessing the Level of Mutual Trust between Manufacturers and Their Resellers," Proceedings of the 5th IMP Conference, 1989.

83. Mayer, R.C., Davis, J.H. and Schoorman, F.D. "An Integrative Model of Organizational Trust," Academy of Management Review (20:3), 1995. pp. 709-734.

84. McAllister, D.J. "Affect- and Cognition-Based Trust as Foundations for Interpersonal Cooperation in Organizations," Academy of Management Journal (38:1), 1995. pp. 24-59.

85. McKnight, D.H. and Chervany, N.L. "Conceptualizing Trust: A Typology and E-Commerce Customer Relationships Model," Proceedings of the 34th Hawaii International Conference on System Sciences, Hawaii, 2001a.

86. McKnight, D.H. and Chervany, N.L. "What Trust Means in E-Commerce Customer Relationships: An Interdisciplinary Conceptual Typology," International Journal of Electronic Commerce (6:2), 2001b. pp. 35-59.

87. McKnight, D.H., Choudhury, V. and Kacmar, C.
"Developing and Validating Trust Measures for E-Commerce: An Integrative Typology," Information Systems Research (13:3), 2002. pp. 334-359.

88. McKnight, D.H., Cummings, L.L. and Chervany, N.L. "Initial Trust Formation in New Organizational Relationships," Academy of Management Review (23:3), 1998. pp. 473-490.

89. Menon, N.M., Konana, P., Browne, G.J. and Balasubramanian, S. "Understanding Trustworthiness Beliefs in Electronic Brokerage Usage," Proceedings of the 20th International Conference on Information Systems, Charlotte, NC, 1999, pp. 552-555.

90. Meyerson, D., Weick, K.E. and Kramer, R.M. "Swift Trust and Temporary Groups," In Trust in Organizations: Frontiers of Theory and Research, R.M. Kramer and R.T. Tyler (Ed.), Sage, Thousand Oaks, CA, 1996. pp. 166-195.

91. Miller, J. "Trust: The Moral Importance of an Emotional Attitude," Practical Philosophy (3:3), 2000. pp. 45-54.

92. Milliman, R.E. and Fugate, D. "Using Trust Transference as a Persuasion Technique: An Empirical Field Investigation," Journal of Personal Selling and Sales Management (8:August), 1988. pp. 1-7.

93. Mollering, G. "The Nature of Trust: From Georg Simmel to a Theory of Expectation, Interpretation and Suspension," Sociology - The Journal of the British Sociological Association (35:2), 2001. pp. 403-420.

94. Moore, G.C. and Benbasat, I. "Development of an Instrument to Measure the Perceptions of Adopting an Information Technology Innovation," Information Systems Research (2:3), 1991. pp. 192-222.

95. Moorman, C., Deshpande, R. and Zaltman, G. "Factors Affecting Trust in Market Research Relationships," Journal of Marketing (57), 1993. pp. 81-101.

96. Morgan, R.M. and Hunt, S.D. "The Commitment-Trust Theory of Relationship Marketing," Journal of Marketing (58), 1994. pp. 20-38.

97. Moukas, A., Zacharia, G., Guttman, R. and Maes, P.
"Agent-Mediated Electronic Commerce: An MIT Media Laboratory Perspective," International Journal of Electronic Commerce (4:3), 2000. pp. 5-21.

98. Nass, C. and Lee, K.N. "Does Computer-Synthesized Speech Manifest Personality? Experimental Tests of Recognition, Similarity-Attraction, and Consistency-Attraction," Journal of Experimental Psychology: Applied (7:3), 2001. pp. 171-181.

99. Nass, C. and Moon, Y. "Machines and Mindlessness: Social Responses to Computers," Journal of Social Issues (56:1), 2000. pp. 81-103.

100. Nass, C., Moon, Y., Fogg, B.J., Reeves, B. and Dryer, D.C. "Can Computer Personalities Be Human Personalities?" International Journal of Human-Computer Studies (43:2), 1995. pp. 223-239.

101. Nicholson, C.Y., Compeau, L.D. and Sethi, R. "The Role of Interpersonal Liking in Building Trust in Long-Term Channel Relationships," Journal of the Academy of Marketing Science (29:1), 2001. pp. 3-15.

102. Noteberg, A., Christiaanse, E. and Wallage, P. "The Role of Trust and Assurance Services in Electronic Channels: An Exploratory Study," Proceedings of the Twentieth International Conference on Information Systems, 1999, pp. 472-478.

103. Nwana, H., Rosenschein, J., Sandholm, T., Sierra, C., Maes, P. and Guttman, R. "Agent-Mediated Electronic Commerce: Issues, Challenges and Some Viewpoints," Proceedings of the 2nd International Conference on Autonomous Agents, Minneapolis, MN, 1998.

104. Orbell, J., Dawes, R. and Schwartz-Shea, P. "Trust, Social Categories, and Individuals: The Case of Gender," Motivation and Emotion (18), 1994. pp. 109-128.

105. Paese, P.W. and Sniezek, J.A. "Influences on the Appropriateness of Confidence in Judgment: Practice, Effort, Information, and Decision-Making," Organizational Behavior and Human Decision Processes (48), 1991. pp. 100-130.

106. Peterson, R.A. and Merino, M.C. "Consumer Information Search Behavior and the Internet," Psychology & Marketing (20:2), 2003. pp. 99-121.

107.
Plank, R.E., Reid, D.A. and Pullins, E.B. "Perceived Trust in Business-to-Business Sales: A New Measure," The Journal of Personal Selling & Sales Management (19:3), 1999. pp. 61-71.

108. Ramsey, R.P. and Sohi, R.S. "Listening to Your Customers: The Impact of Perceived Salesperson Listening Behavior on Relationship Outcomes," Journal of the Academy of Marketing Science (25:2), 1997. pp. 127-137.

109. Rao, A. and Bergen, M. "Price Premium as a Consequence of Lack of Buyers' Information," Journal of Consumer Research (19:December), 1992. pp. 412-423.

110. Ratnasingham, P. "The Importance of Trust in Electronic Commerce," Internet Research (8:4), 1998. pp. 313-321.

111. Reeves, B. and Nass, C. The Media Equation: How People Treat Computers, Televisions, and New Media Like Real People and Places, Cambridge University Press, 1996.

112. Rempel, J.K., Holmes, J.G. and Zanna, M.P. "Trust in Close Relationships," Journal of Personality and Social Psychology (49:1), 1985. pp. 95-112.

113. Riker, W.H. "The Nature of Trust," In Perspectives on Social Power, J.T. Tedeschi (Ed.), Aldine, Chicago, 1971. pp. 63-81.

114. Sako, M. Prices, Quality and Trust: Inter-Firm Relations in Britain and Japan, Cambridge University Press, Cambridge, 1992.

115. Sandholm, T. "The Best Terms for All Concerned," Communications of the ACM (42:3), 1999. pp. 84-85.

116. Schurr, P.H. and Ozanne, J.L. "Influences on Exchange Processes: Buyers' Preconceptions of a Seller's Trustworthiness and Bargaining Toughness," Journal of Consumer Research (11), 1985. pp. 939-953.

117. Shapiro, S. "The Social Control of Impersonal Trust," American Journal of Sociology (93:3), 1987. pp. 623-658.

118. Shardanand, U. and Maes, P. "Social Information Filtering: Algorithms for Automating 'Word of Mouth'," Proceedings of the Computer-Human Interaction Conference (CHI'95), Denver, Colorado, 1995, pp. 210-217.

119. Shih, T.K., Chiu, C.F., Hsu, H.H. and Lin, F.H.
"An Integrated Framework for Recommendation Systems in E-Commerce," Industrial Management & Data Systems (102:8-9), 2002. pp. 417-431.

120. Siegrist, M., Cvetkovich, G. and Roth, C. "Salient Value Similarity, Social Trust, and Risk/Benefit Perception," Risk Analysis (20:3), 2000. pp. 353-362.

121. Sim, K.M. and Chan, R. "A Brokering Protocol for Agent-Based E-Commerce," IEEE Transactions on Systems, Man and Cybernetics, Part C: Applications and Reviews (30:4), 2000. pp. 474-484.

122. Simmel, G. The Sociology of Georg Simmel, Free Press, New York, 1964.

123. Sinmao, M.V. "Intelligent Agents and E-Commerce," 1999.

124. Smith, M.D. "The Impact of Shopbots on Electronic Markets," Journal of the Academy of Marketing Science (30:4), 2002. pp. 446-454.

125. Smyth, B. and Cotter, P. "A Personalized Television Listings Service - Mixing the Collaborative Recommendation Approach with Content-Based Filtering Seems to Bring out the Best in Both Methods," Communications of the ACM (43:8), 2000. pp. 107-111.

126. Smyth, B. and Cotter, P. "Personalized Electronic Program Guides for Digital TV," AI Magazine (22:2), 2001. pp. 89-98.

127. Stewart, K.J. "Transference as a Means of Building Trust in World Wide Web Sites," Proceedings of the Twentieth International Conference on Information Systems, 1999, pp. 459-464.

128. Straub, D.W. and Watson, R.T. "Transformational Issues in Researching IS and Net-Enabled Organizations," working paper, 2001.

129. Strub, P.J. and Priest, T.B. "Two Patterns of Establishing Trust: The Marijuana User," Sociological Focus (9:4), 1976. pp. 399-411.

130. Strutton, D.E., Pelton, L.E. and Tanner, J.F.J. "Shall We Gather in the Garden: The Effect of Ingratiatory Behaviors on Buyer Trust in Salespeople," Industrial Marketing Management (25), 1996. pp. 151-162.

131. Sundar, S.S. and Nass, C. "Source Orientation in Human-Computer Interaction: Programmer, Networker, or Independent Social Actor?"
Communication Research (27:6), 2000. pp. 683-703.

132. Swan, J.E., Bowers, M.R. and Richardson, L.D. "Customer Trust in the Salesperson: An Integrative Review and Meta-Analysis of the Empirical Literature," Journal of Business Research (44:2), 1999. pp. 93-107.

133. Swan, J.E., Trawick, F.I., Rink, D.R. and Roberts, J. "Measuring Dimensions of Purchaser Trust of Industrial Salespeople," Journal of Personal Selling and Sales Management (8), 1988. pp. 1-9.

134. Swan, J.E., Trawick, F.I. and Silva, D.W. "How Industrial Salespeople Gain Customer Trust," Industrial Marketing Management (14), 1985. pp. 203-211.

135. Sztompka, P. Trust: A Sociological Theory, Cambridge University Press, Cambridge, UK, 1999.

136. Taylor, S.E. and Brown, J.D. "Illusion and Well-Being: A Social Psychological Perspective on Mental Health," Psychological Bulletin (103), 1988. pp. 193-210.

137. Urban, G.L., Sultan, F. and Qualls, W. "Design and Evaluation of a Trust-Based Advisor on the Internet," working paper, 1999.

138. Urban, G.L., Sultan, F. and Qualls, W. "Placing Trust at the Center of Your Internet Strategy," Sloan Management Review (42:1), 2000. pp. 39-48.

139. Wetsch, L.R. and Cunningham, P.H. "Measuring Determinants of Trust and Their Effects on Buying Intention for Online Purchase Decisions," working paper, 1999.

140. Xiao, S.Y. and Benbasat, I. "Understanding Customer Trust in Agent-Mediated Electronic Commerce, Web-Mediated Electronic Commerce, and Traditional Commerce," Information Technology and Management (ITM) (forthcoming), 2003.

141. Yamagishi, T. "Trust as a Form of Social Intelligence," In Trust in Society, K.S. Cook (Ed.), 2, Russell Sage Foundation, New York, 2001.

142. Yamagishi, T. and Yamagishi, M. "Trust and Commitment in the United States and Japan," Motivation and Emotion (18:2), 1994. pp. 129-166.

143. Zacharia, G., Moukas, A. and Maes, P.
"Collaborative Reputation Mechanisms in Electronic Marketplaces," Proceedings of the 32nd Hawaii International Conference on System Sciences, 1999.

144. Zahir, S. "Designing a Knowledge-Based Interface for Intelligent Shopping Agents," Journal of Computer Information Systems (43:1), 2002. pp. 31-41.

145. Zucker, L.G., Darby, M.R., Brewer, M.B. and Peng, Y. "Collaboration Structure and Information Dilemmas in Biotechnology: Organizational Boundaries as Trust Production," In Trust in Organizations: Frontiers of Theory and Research, R.M. Kramer and R.T. Tyler (Ed.), Sage, Thousand Oaks, CA, 1996. pp. 90-113.

Figure 1: The Research Model
[Path diagram: RA internalization (high vs. low) and familiarity (initial interaction vs. repeated interactions) lead to cognitive trust in an RA and emotional trust in an RA (H1-H5); the two trust constructs lead to intention to adopt an RA as a delegated agent and intention to adopt an RA as a decision aid (H6-H9).]

Figure 2: Results of PLS Analysis
Note 1: Only significant path coefficients are shown in the figure. For all path coefficients, see Table 11.
Note 2: ** means significant at the 0.05 level; * means significant at the 0.10 level.
Figure 3: Two Judges Coded the Protocols Independently
[Screenshot of the coding spreadsheet in which the two judges independently coded the think-aloud protocols; the image itself is not recoverable from the extraction.]

Figure 4: Trust in Four Experimental Groups
Trust formation in each group:

          C1-trust   C2-trust   C3-trust   ET
Group 1     62%        4%         3%       31%
Group 2     55%        7%         2%       37%
Group 3     57%        7%         4%       32%
Group 4     56%        6%         3%       35%

Figure 5: Distrust in Four Experimental Groups
[Distrust formation in each group; the chart data are not recoverable from the extraction.]

Figure 8: The Scatter Plot
[Scatter plot; horizontal axis: Int. Delegate.]

Table 1: Four Types of Recommendation Agents

Constraint-satisfaction RA (CSRA). A customer selects all the product features they desire. A CSRA then recommends a list of products that satisfy the customer's hard constraints, ordered by how well the products satisfy the customer's soft constraints (Ansari, et al., 2000; Maes, et al., 1999).

Collaborative-filtering RA (CFRA). A CFRA possesses expertise in particular marketing segments. It assigns each customer to a segment based on the customer's profile (i.e. background and attitudes), and then provides shopping advice based on previous purchase decisions made by other customers in the same segment.
(Ansari, et al., 2000; Maes, et al., 1999; Urban, et al., 1999).

Rule-based RA (RBRA). An RBRA has expertise in domain knowledge, based on which it will provide shopping advice to customers (Maes, et al., 1999; Urban, et al., 1999). For example, an RBRA for wine knows whether red wine is appropriate to accompany particular dishes, and whether a customer with heart disease should drink a particular kind of wine.

Data-mining RA (DMRA). DMRAs discover patterns in customer purchasing behavior, and then exploit these patterns to help customers find a diverse range of products that meet their needs (Maes, et al., 1999).

Table 2: Examples of Recommendation Agents

Product brokering (what to buy?):
Maes, Guttman, and Moukas, 1999 - PersonaLogic (www.personalogic.com)
Shardanand & Maes, 1995 - Firefly
Examples: www.cdnow.com; www.amazon.com; www.landsend.com; www.activedecision.com; www.activemuseum.com

Company brokering (where to buy?):
Moukas, Guttman, & Maes, 2000 - Tete-a-Tete
Doorenbos, Etzioni, & Weld, 1997 - Jango, BargainFinder
Examples: www.mysimon.com; www.shopper.com; www.shopping.com

Table 3: Experimental Design

                      Familiarity:
Internalization       Initial Interaction    Repeated Interactions
Low                   Group 1 (23)           Group 2 (25)
High                  Group 3 (28)           Group 4 (24)

Note: The subject numbers in each group were not equal because 100 random numbers (1, 2, 3, and 4) were generated and the 100 subjects were assigned to the four groups based on those random numbers.

Table 4: Subjects Who Thought Aloud

                      Familiarity:
Internalization       Initial Interaction    Repeated Interactions
Low                   Group 1 (11)           Group 2 (11)
High                  Group 3 (15)           Group 4 (12)

Table 5: Definitions and Measures for Independent Variables

Internalization (reflective) (new measure)
Definition: Internalization is a customer's perception of how well an RA represents the customer's real needs.
New  (zIntern1) This RA understands my needs.
New  (zIntern2) This RA knows what I want.
New  (zIntern3) This RA takes my needs as its own preferences.

Familiarity (reflective) (new measure)
Definition: Familiarity refers to a customer's familiarity with how an RA derives its recommendations.
New  (zFamil) I am familiar with how this RA makes its recommendations.

Table 6: Definitions and Measures for Dependent Variables (Source / Measures)

Cognitive trust in an RA's competence (reflective)
Definition: a customer's rational assessment that an RA has the capability to give a good product recommendation.
Plank, et al. 1999  (zCTC1) This RA is a real expert in assessing products.
(Hawes, et al., 1993)  (zCTC2) This RA has good knowledge about products.
New  (zCTC3) This RA considers all important product attributes and my needs.

Cognitive trust in an RA's benevolence (reflective)
Definition: a customer's rational assessment that an RA will care about the customer and intend to act in the customer's interest.
Swan et al. 1988; Crosby et al. 1990  (zCTB1) This RA puts my interest first.
Doney & Cannon 1997  (zCTB2) This RA is concerned with my needs and preferences.
New  (zCTB3) This RA wants to understand my needs and preferences.

Cognitive trust in an RA's integrity (reflective)
Definition: a customer's rational assessment that an RA will be honest and unbiased.
New  (zCTI1) This RA provides unbiased product recommendations.
Hawes et al. 1989  (zCTI2) This RA is honest.
New  (zCTI3) I consider this RA to possess integrity.

Emotional trust in an RA (reflective)
Definition: a customer's feelings of security and comfort about relying on an RA for decisions about what to buy.
Swan et al. 1988  (zET1) I feel secure about relying on this RA for my decision.
New  (zET2) I feel comfortable about relying on this RA for my decision.
New  (zET3) I feel confident about relying on this RA for my decision.
New  (zET4) I feel content about relying on this RA for my decision.
Intention to adopt an RA as a delegated agent (reflective)
Definition: the extent to which a customer is willing to let an RA make decisions about what to buy on behalf of the customer.
New     (zDeleg1) I am willing to delegate to this RA my decision about which product to buy.
New     (zDeleg2) I am willing to let this RA decide which product to buy on my behalf.

Intention to adopt an RA as a decision aid (reflective)
Definition: the extent to which a customer is willing to let an RA narrow down the available choices; the customer then evaluates the product information and makes the purchase decision.
New     (zDecAid1) I am willing to use this RA as an aid to help with my decision about which product to buy.
New     (zDecAid2) I am willing to let this RA assist me in deciding which product to buy.
New     (zDecAid3) I am willing to use this RA as a tool that suggests to me a number of products from which I can choose.

Table 7: Definitions and Measures for Control Variables

Control propensity (reflective) (source: new)
Definition: the degree of intensity of a customer's natural inclination to control other parties in general. Alpha = 0.87
(zControl1) I need to have full control of an RA before I fully trust it.
(zControl2) I am not willing to rely on an RA if I cannot fully control it.
(zControl3) Full control of an RA is necessary for me to feel comfortable with its advice.

Trust propensity (reflective) (source: Lee and Turban 2001)
Definition: the degree of intensity of a customer's natural inclination to trust other parties in general. Alpha = 0.84
(zTP1) It is easy for me to trust a person/thing.
(zTP2) My tendency to trust a person/thing is high.
(zTP3) I tend to trust a person/thing, even though I have little knowledge of it.
(zTP4) Trusting someone or something is not difficult.
Preference for Decision Quality (reflective) (source: new)
Definition: the extent to which a customer would like to improve decision quality at the expense of increased shopping effort (adapted from Todd & Benbasat, 1992). Alpha = 0.78
(zqlteft1) I am willing to examine the product attributes very carefully in order to make sure that the product fits my preferences perfectly.
(zqlteft2) I prefer to shop hard in order to get exactly what I want.

Product Expertise (source: new)
Definition: how well a customer knows the products.
Measure: an average of three questions, including both self-reporting and objective tests (source of the questions and answers: www.activebuyerguide.com).

Customer's expertise with notebook computers:
(1) Which type of processor is a hardware-software hybrid, designed to provide a lighter, cooler, and less power-consuming processor for your notebook: AMD Duron, AMD K6-2, AMD K6-III, Intel Celeron, Intel Pentium II, Intel Pentium III, PowerPC G3/750, Transmeta Crusoe, or PowerPC G4?
(2) Why is the screen of the notebook often the most expensive part of the machine? What are the two main types of screen technology for notebook computers? Which type is more expensive?
(3) Are you an expert with notebook computers? Please rate your expertise (1 means knowing very little, 4 means average, and 7 means knowing a lot).

Customer's expertise with desktop PCs:
(1) What is RAM? Is RAM important? Is installed RAM important? Why?
(2) What does processor (CPU) type mean? Why is it important?
(3) Are you an expert with desktop computers? Please rate your expertise (1 means knowing very little, 4 means average, and 7 means knowing a lot).

Customer's expertise with digital cameras:
(1) Why is resolution important? What are the four ranges of resolution?
(2) What is ease of download? What kinds are available? Please list at least 4 kinds.
(3) Are you an expert with digital cameras?
Please rate your expertise (1 means knowing very little, 4 means average, and 7 means knowing a lot).

Table 8: Reflective Constructs: Loadings and t-statistics

Construct / Item                                    Loading
Internalization
  zIntern1                                          0.93 ***
  zIntern2                                          0.93 ***
  zIntern3                                          0.92 ***
Cognitive Trust in an RA's Competence
  zctc1                                             0.84 ***
  zctc2                                             0.79 ***
  zctc3                                             0.79 ***
Cognitive Trust in an RA's Benevolence
  zctb1                                             0.88 ***
  zctb2                                             0.88 ***
  zctb3                                             0.85 ***
Cognitive Trust in an RA's Integrity
  zcti1                                             0.85 ***
  zcti2                                             0.90 ***
  zcti3                                             0.91 ***
Emotional Trust in an RA
  zet1                                              0.91 ***
  zet2                                              0.94 ***
  zet3                                              0.93 ***
  zet4                                              0.92 ***
Intention to Adopt an RA as a Delegated Agent
  zdeleg1                                           0.95 ***
  zdeleg2                                           0.93 ***
Intention to Adopt an RA as a Decision Aid
  zdecaid1                                          0.94 ***
  zdecaid2                                          0.85 ***
  zdecaid3                                          0.92 ***

Note: *** means that both the jackknife and the bootstrap t-statistics satisfy t > 3.174 and p < 0.001.

Table 9: Correlations of Latent Constructs

              Alpha  Internal  Familiar  CT-compt  CT-benev  CT-integ  ET    Int-Deleg  Int-Dec-Aid
Internal      0.92   0.93
Familiar      1.00   0.40      1.00
CT-compt      0.74   0.61      0.37      0.81
CT-benev      0.84   0.74      0.42      0.74      0.87
CT-integ      0.87   0.51      0.38      0.54      0.62      0.89
ET            0.95   0.64      0.33      0.78      0.75      0.61      0.93
Int-Delegate  0.88   0.49      0.15      0.57      0.56      0.46      0.75  0.94
Int-Dec-Aid   0.89   0.54      0.31      0.65      0.64      0.55      0.73  0.52       0.90

Note 1: Diagonal elements are the square roots of the average variance extracted (AVE). For adequate discriminant validity, diagonal elements should be greater than the corresponding off-diagonal elements (inter-construct correlations).
Note 2: Alpha for Cognitive Trust in an RA (including CT-compt, CT-benev, and CT-integ) = 0.90.
Note 3: Alpha for Customer Trust in an RA (including Cognitive Trust and Emotional Trust) = 0.94.

Table 10: Correlations between Measures and Latent Variables

           Internal  Familiar  CT-comp.  CT-benev.  CT-integ.  ET    Int.Delegate  Int.Dec.Aid
ZINTERN1    0.93      0.37      0.59      0.69       0.54      0.61   0.44          0.54
ZINTERN2    0.93      0.37      0.52      0.65       0.42      0.56   0.45          0.50
ZINTERN3    0.92      0.37      0.58      0.71       0.47      0.60   0.46          0.44
ZFAMIL1     0.40      1.00      0.37      0.42       0.38      0.33   0.15          0.31
ZCTC1       0.46      0.29      0.84      0.57       0.34      0.68   0.47          0.48
ZCTC2       0.40      0.24      0.79      0.52       0.47      0.57   0.38          0.54
ZCTC3       0.60      0.35      0.79      0.70       0.50      0.64   0.52          0.55
ZCTB1       0.60      0.40      0.66      0.88       0.58      0.66   0.54          0.54
ZCTB2       0.66      0.37      0.62      0.88       0.45      0.68   0.49          0.55
ZCTB3       0.67      0.32      0.66      0.85       0.60      0.61   0.43          0.60
ZCTI1       0.36      0.26      0.39      0.45       0.85      0.44   0.34          0.36
ZCTI2       0.44      0.30      0.45      0.55       0.90      0.50   0.40          0.48
ZCTI3       0.54      0.42      0.58      0.63       0.91      0.65   0.47          0.58
ZET1        0.59      0.32      0.70      0.67       0.56      0.91   0.66          0.67
ZET2        0.62      0.33      0.72      0.66       0.59      0.94   0.66          0.68
ZET3        0.61      0.30      0.76      0.76       0.58      0.94   0.75          0.71
ZET4        0.55      0.26      0.71      0.67       0.53      0.92   0.68          0.66
ZDELEG1     0.49      0.21      0.59      0.58       0.50      0.76   0.96          0.53
ZDELEG2     0.42      0.07      0.47      0.47       0.36      0.64   0.93          0.44
ZDECAID1    0.51      0.40      0.67      0.64       0.47      0.72   0.49          0.94
ZDECAID2    0.47      0.14      0.48      0.53       0.42      0.61   0.47          0.85
ZDECAID3    0.48      0.28      0.60      0.57       0.59      0.66   0.45          0.92

Note 1: All items are highly correlated with their corresponding latent variables, indicating good convergent validity.
Note 2: The correlation between each item and its corresponding latent variable is greater than the correlations between this item and the other latent variables, indicating good discriminant validity.
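The diagonal entries in Table 9 can be reproduced directly from the loadings in Table 8: for a reflective construct, AVE is the mean of the squared standardized loadings, and the Fornell-Larcker criterion compares the square root of AVE with the construct's correlations. A minimal sketch in Python (loadings and the 0.74 correlation are transcribed from Tables 8 and 9; numpy only):

```python
import numpy as np

def sqrt_ave(loadings):
    """Square root of average variance extracted (AVE)."""
    loadings = np.asarray(loadings, dtype=float)
    return float(np.sqrt(np.mean(loadings ** 2)))

# Standardized loadings from Table 8
internalization = [0.93, 0.93, 0.92]
ct_competence = [0.84, 0.79, 0.79]
emotional_trust = [0.91, 0.94, 0.93, 0.92]

print(round(sqrt_ave(internalization), 2))  # 0.93, the Table 9 diagonal
print(round(sqrt_ave(ct_competence), 2))    # 0.81
print(round(sqrt_ave(emotional_trust), 2))  # 0.93

# Fornell-Larcker check for Internalization: sqrt(AVE) must exceed its
# largest correlation with any other construct (0.74 with CT-benev in Table 9)
assert sqrt_ave(internalization) > 0.74
```

Each diagonal value in Table 9 can be checked against Table 8 this way.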
Table 11: The Structural Model

Path                               Direct   t (Jackknife)  t (Bootstrap)  Indirect  Total   Hypothesis
Internalization -> CT-competence    0.55      5.28**          5.43**                 0.55   H1a: S
Internalization -> CT-benevolence   0.68      9.80**         11.91**                 0.68   H1b: S
Internalization -> CT-integrity     0.43      3.36**          3.58**                 0.43   H1c: S
Internalization -> ET               0.12      0.62            0.96          0.48     0.60   H2: N (M)
Familiarity -> CT-competence        0.15      1.92**          1.60*                  0.15   H3a: S
Familiarity -> CT-benevolence       0.15      1.92**          2.66**                 0.15   H3b: S
Familiarity -> CT-integrity         0.21      1.74**          1.83**                 0.21   H3c: S
Familiarity -> ET                  -0.05     -0.89           -0.95          0.14     0.08   H4: N (M)
CT-comp -> ET                       0.47      4.33**          4.42**                 0.47   H5a: S
CT-comp -> Int.Delegate            -0.04     -1.21           -0.59          0.36     0.31   H6a: N (M)
CT-comp -> Int.Dec.Aid              0.12      0.59            0.80          0.22     0.34   H7a: N (M)
CT-bene -> ET                       0.22      1.29*           1.51*                  0.22   H5b: S
CT-bene -> Int.Delegate             0.02      0.03            0.19          0.17     0.19   H6b: N (M)
CT-bene -> Int.Dec.Aid              0.13      0.92            0.86          0.10     0.24   H7b: N (M)
CT-integ -> ET                      0.18      1.53*           1.73**                 0.18   H5c: S
CT-integ -> Int.Delegate            0.01      0.50            0.11          0.13     0.14   H6c: N (M)
CT-integ -> Int.Dec.Aid             0.11      1.06            1.18          0.08     0.19   H7c: N (M)
ET -> Int.Delegate                  0.76      8.57**          7.76**                 0.76   H8: S
ET -> Int.Dec.Aid                   0.48      3.77**          3.99**                 0.48   H9: S
Internalization -> Int.Delegate                                             0.45     0.45
Internalization -> Int.Dec.Aid                                              0.49     0.49
Familiarity -> Int.Delegate                                                 0.06     0.06
Familiarity -> Int.Dec.Aid                                                  0.10     0.10

Note 1: ** indicates significance at the 0.05 level; * indicates significance at the 0.10 level.
Note 2: "S" means a hypothesis is supported; "N" means a hypothesis is not supported.
Note 3: "(M)" means that a hypothesis was not supported due to a mediating effect (see Table 12).

Table 12: Tests for Mediating Effects

Note: Mediating effects were tested using the three-step method suggested by Baron and Kenny (1986).
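The Baron and Kenny (1986) three-step procedure can be sketched on simulated data. Everything below is illustrative: the variable names are hypothetical stand-ins for the thesis constructs, the data are synthetic, and plain least squares via numpy is used instead of the PLS estimates reported here. The point is to show why a fully mediated structure makes the IV coefficient vanish in Step 3.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Synthetic fully mediated structure: IV affects DV only through M
iv = rng.normal(size=n)                        # e.g., internalization
m = 0.8 * iv + rng.normal(scale=0.5, size=n)   # e.g., emotional trust
dv = 0.7 * m + rng.normal(scale=0.5, size=n)   # e.g., intention to delegate

def ols(y, *xs):
    """Ordinary least squares; returns [intercept, b1, b2, ...]."""
    X = np.column_stack((np.ones(len(y)),) + xs)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

b_step1 = ols(dv, iv)     # Step 1: IV -> DV (significant, near 0.8 * 0.7)
b_step2 = ols(m, iv)      # Step 2: IV -> M (significant, near 0.8)
b_step3 = ols(dv, iv, m)  # Step 3: IV + M -> DV

print(round(b_step1[1], 2))  # near 0.56
print(round(b_step3[1], 2))  # near 0: full mediation, IV drops out
print(round(b_step3[2], 2))  # near 0.7: M carries the effect
```

Under partial mediation, by contrast, the IV coefficient in Step 3 shrinks but stays significant, which is the pattern reported below for the CT-integ paths.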
Step 1: Prove that the independent variable (IV) significantly affects the dependent variable (DV).
Step 2: Prove that the mediator variable (M) significantly affects the dependent variable.
Step 3: In the regression IV + M -> DV, if M is significant and IV is not significant, then M fully mediates the impact of IV on DV. If both M and IV are significant, then M partially mediates the impact of IV on DV.

CT -> ET -> Int.Delegate: full mediation

1. CT-comp -> ET -> Int.Delegate: full mediation
Step 1   CT-comp -> Int.Delegate           CT-comp: 0.57**
Step 2   CT-comp -> ET                     CT-comp: 0.78**
Step 3   CT-comp + ET -> Int.Delegate      CT-comp: -0.03    ET: 0.77**

2. CT-bene -> ET -> Int.Delegate: full mediation
Step 1   CT-bene -> Int.Delegate           CT-bene: 0.56**
Step 2   CT-bene -> ET                     CT-bene: 0.75**
Step 3   CT-bene + ET -> Int.Delegate      CT-bene: 0.01     ET: 0.74**

3. CT-integ -> ET -> Int.Delegate: full mediation
Step 1   CT-integ -> Int.Delegate          CT-integ: 0.47**
Step 2   CT-integ -> ET                    CT-integ: 0.61**
Step 3   CT-integ + ET -> Int.Delegate     CT-integ: 0.01    ET: 0.74**

CT -> ET -> Int.Dec.Aid: full mediation

1. CT-comp -> ET -> Int.Dec.Aid: full mediation
Step 1   CT-comp -> Int.Dec.Aid            CT-comp: 0.65**
Step 2   CT-comp -> ET                     CT-comp: 0.78**
Step 3   CT-comp + ET -> Int.Dec.Aid       CT-comp: 0.20     ET: 0.58**

2. CT-bene -> ET -> Int.Dec.Aid: full mediation
Step 1   CT-bene -> Int.Dec.Aid            CT-bene: 0.64**
Step 2   CT-bene -> ET                     CT-bene: 0.75**
Step 3   CT-bene + ET -> Int.Dec.Aid       CT-bene: 0.22     ET: 0.57**

3. CT-integ -> ET -> Int.Dec.Aid: partial mediation
Step 1   CT-integ -> Int.Dec.Aid           CT-integ: 0.56**
Step 2   CT-integ -> ET                    CT-integ: 0.61**
Step 3   CT-integ + ET -> Int.Dec.Aid      CT-integ: 0.16**  ET: 0.64**

RA Internalization -> CT -> ET: full mediation

1. RA Internalization -> CT-comp -> ET: partial mediation
Step 1   Internalization -> ET             Internal.: 0.62**
Step 2   Internalization -> CT-comp        Internal.: 0.64**
Step 3   Internalization + CT-comp -> ET   Internal.: 0.26**  CT-comp: 0.62**

2.
RA Internalization -> CT-bene -> ET: full mediation
Step 1   Internalization -> ET             Internal.: 0.74**
Step 2   Internalization -> CT-bene        Internal.: 0.64**
Step 3   Internalization + CT-bene -> ET   Internal.: 0.19    CT-bene: 0.61**

3. RA Internalization -> CT-integ -> ET: partial mediation
Step 1   Internalization -> ET             Internal.: 0.52**
Step 2   Internalization -> CT-integ       Internal.: 0.64**
Step 3   Internalization + CT-integ -> ET  Internal.: 0.44**  CT-integ: 0.39**

Familiarity -> CT -> ET: full mediation

1. Familiarity -> CT-comp -> ET: full mediation
Step 1   Familiarity -> ET                 Familiarity: 0.33**
Step 2   Familiarity -> CT-comp            Familiarity: 0.37**
Step 3   Familiarity + CT-comp -> ET       Familiarity: 0.05   CT-comp: 0.76**

2. Familiarity -> CT-bene -> ET: full mediation
Step 1   Familiarity -> ET                 Familiarity: 0.33**
Step 2   Familiarity -> CT-bene            Familiarity: 0.42**
Step 3   Familiarity + CT-bene -> ET       Familiarity: 0.02   CT-bene: 0.74**

3. Familiarity -> CT-integ -> ET: full mediation
Step 1   Familiarity -> ET                 Familiarity: 0.33**
Step 2   Familiarity -> CT-integ           Familiarity: 0.30**
Step 3   Familiarity + CT-integ -> ET      Familiarity: 0.11   CT-integ: 0.57**

Note: ** indicates significance at the 0.05 level.

Table 13: Protocol Analysis: Overview

Table 13.a
                     Analyzed   Agreed   Coded         Agreed
Verbal protocols      6,987     6,012    3,141         2,166
  V-trust                                1,652 (53%)
  V-distrust                             1,489 (47%)
Written protocols       717       574      556           413
  W-trust                                  380 (68%)
  W-distrust                               176 (32%)
Total                 7,704              3,697         2,579

Table 13.b
               Analyzed              Coded
Familiarity    initial   repeated    initial   repeated
N                26        23          26        23
Mean             99       192          50        91
SD               42        74          21        31
Min              49        52          22        33
Max             230       439         111       152

Note 1: "Analyzed" - All the transcribed protocols were analyzed.
Note 2: "Coded" - A coded line refers to a line containing at least one process in Coding Scheme 1 or in Coding Scheme 2.

Table 14: The Results of Written Protocol Analysis

Cognitive Trust in an RA's Competence
compet.
expectation  control  unknown  integrity  info  verification  interface  benev.  11  12  13  14  15  16  17  18  19  11  17  67  26  5  0  4  31  51  4  2  188  35%  14%  2%  0%  2%  16%  27%  2%  1%  100%  Competence Assessment The RA is knowledgeable about products  21  The criteria represent all the important features of the product  15  The RA is able to retrieve products based on my search criteria  6  The criteria are specific  5  It gives good advice for me  5  It asks professional questions  2  The RA asked me a lot of questions  2  It is competent in understanding customer's needs  1  The 'get advice' option was very helpful  1  It can do its job properly w/the given resources  1  It helps me understand a lot of technical terms  1  It gives me recommendations  1  It gives a good picture for the recommended product  1  I trust the technology features  1  RA finds something I don't know before  1  It provides more attributes than I expected  1  Verification  51  27% 35  I can tell by the features on the products that...  9  Previous experience with the recommended product  7  Information Sharing  31  16%  The RA provides all the info I would need to know; detailed info  12  67  35%  Word of mouth, brand names of the recommended products  16  Total .  26  The RA gives clear explanations that are useful  2  RA have more information than I have  1  I think those information should be quite accurate  1  It specifies something that the recommended products have or don't have  1  Expectation Satisfaction  26  14%  The RA recommends the products that fit my expectations/requirements The RA provides more/better attributes than I expected  23 3  152  I  1  I  I  I  I  I  I  Coj»nitive Distrust in an RA's Competence compet.  expectation  control  unknown  integrity  info  verification  interface  benev.  
11  12  13  14  15  16  17  18  19  -25 35% 11  -18  -2  25%  2%  -11  -2  -6  -6  -2  0  -70  15%  3%  9%  9%  2%  0%  100%  Competence Assessment  -11  The RA does not included a criteria that I am interested  14  16  -25  35%  The RA gives me "no results found" message  12  Total  -3  Too few products were found  -3  It lists everything regardless of my preferences  -3  RA has some incompetent functions  -2  The search was way too specific  -1  RA can't offer advice in subjective issues(that are more bias)  -1  My requirements are not accurate  -1  Expectation Satisfaction  -18  25%  The RA did not conform to the features I selected  -7  It would've been better if I could specify the attributes of...  -5  Results were not what I expected  -2  The RA should describe some terms in more detail.  -2  It should have a checkbox to show the items in red  -1  I wish I could find out more about the manufacturer  -1  Awareness of the Unknown  -11  15%  Not really because I'm not sure what I'm ordering exactly  -2  Not sure how long the product will last  -2  I know little about the products so I don't know if the RA is competent  -1  There's some criteria that I don't understand so I just leave it blank  -1  I don't know how the recommended products are ranked  -1  I can't understand the advice because he uses technical words  -1  I won't know for sure  -1  It is quite hard to know what is wrong as no other message is displayed  -1  The RA may not be able to determine the priorities of my needs  -1  Info Sharing  -6  9%  It didn't have full complete details  -4  I felt it is giving me a bit more information than I need  -1  I don't get much information from the RA  -1  153  17  Verification | 9% | | Doubtful because it is hard to believe that there are only 2 results from the specifications I provided  -6 -2  I can't test the product  -2  I don't trust any products recommended.  
-1  I do not recognize the brand so I'd want to ask friends  -1  Cognitive Trust in an RA's Benevolence Note: include the lines in which C2 and one trust/distrust formation 1 process were coded compet.  expectation  control  unknown  integrity  info  verification  interface  benev.  11  12  13  14  15  16  17  18  19  0.5  5.5  1% 19  9  0%  21%  13%  1  0  2%  0.5  8%  1%  22  43  51%  100% 22  The RA tries to search the products that fits my needs  6  It does not try to push you to buy a particular product  6  I think that the RA keeps my interests in mind.  2  The RA even highlight the area that don't fit my needs  2  The RA does not appear to be trying to earn a commission  2  The RA helps me to know more about the topic  1  The RA doesn't try to overwhelm me with a lot of questions  1  The RA looks willing to help me buy products  1  The RA tries to understand fully what I want  1 9  21%  Control  12  1 2%  51%  Benevolence Assessment  13  3.5  Total  The RA gave me a lot of choice.  7  The RA lets me specify what I want  1  Yes because it is my personal sales agent  1  Yes, because I am in control of where I go/what I look at.  1 6  13%  Expectation Satisfaction Every criteria matches my expectations  4  Yes, because they would select all the necessary suggestions for me  1  Yes, no unanted suggestions  1 Cognitive Distrust in an RA's Benevolence  compet.  expectation  control  unknown  integrity  info  verification  interface  benev.  11  12  13  14  15  16  17  18  19  Total  -1.5  -3  -1  -0.5  0  0  -0.5  -1.5  -6  11%  21%  7%  4%  0%  0%  4%  11%  43%  154  -14 100%  16  Benevolence Assessment  12  43% |  |  |  -6  No the RA did not listen to me  -2  Even the 'info' option can be confusing  -2  The RA tends to show highm prices models  -1  Not really, because it didn't let me know what requirements to modify  -1  Expectation Satisfaction  -3  21% -3  Did not meet my needs w/features Cognitive Trust in an RA's Integrity compet.  
expectation  11  12  unknown  control 13  integrity  14  15  info  verification  16  17  interface  benev.  18  19  Total  0  1  3  0  27  1.5  2  0.5  1  36  0%  3%  8%  0%  75%  4%  6%  1%  3%  100%  15  Integrity Assessment  27  75% 12  The RA seemed to provide an unbiased selection I don't see any bias. It asks me what I prefer and gives what I want  6  RA isn't biased, it won't lure me to buy a specific product  3  The RA's questions are all objective  2  The RA asks me about all the features  1  False info won't be shown otherwise no one will buy from the RA  1  The RA is not linked to a specific company so it's unbiased  1  The RA was quite honest ingiving me the various prices of the product  1  The RA gives unbiased description of each features.  1  The information provided is comprehensive  3  The actual salesperson won't tell me the price sold to other customers  1  I am free to pick any selection I like. RA should have no hidden motive  1  The RA seems unbiased and fair. I have a lot of control  1  Others  Cognitive Distrust in an RA's Integrity compet.  expectation  control  unknown  11  12  13  14  15  integrity  info  verification  interface  benev.  15  16  17  18  19  Total  0  -0.5  0  -2.5  -13  -2  -0.5  -0.5  0  -19  0%  3%  0%  13%  68%  11%  3%  3%  0%  100%  Integrity Assessment  -13  68%  It gives very few brands, so I wonder if it is pushing me some products A bit doubt. The RA tends to recommend products with high prices  -10 -2  155  -1  There's uncertainty about the existence of bias  -6  Others I'm not sure about whether it's biased although it claims it's unbiased.  -2  Somewhat trust, the rationale of ranking is not clear  -2  I can only get information from RA for what I choosed for my needs  -1  I'm not sure if it is biased because I can't see familiar brands like IBM  -1  £2: Knowledge-based Emotional Trust in an RA compet.  expectation  control  unknown  integrity  info  verification  interface  benev.  
11  12  13  14  15  16  17  18  19  17  16  17  14  11  16%  14%  11%  0  3  0%  3%  Verification  20  25  4  19%  24%  3%  12  10  25  Fairly comfortable, I can verify with the information provided  10  I feel confident because I trust the brand of the recommended product  10  Feel comfortable, I have some experience on buying digital camera  2  I have heard satisfactory previous customer experiences with it  1  I am happy with the recommended product.  1  good because I was able to compare the product with other products  1  Info Sharing  20  19% 18  I learn a lot of new information in a short period. This is efficient  1  The RA tells me much more than I knew about  1  Competence Assessment  17  16%  The RA lists all the featurs that I would consider when buying a laptop  4  It helped me narrow my search to the most relevant products fairly well  4  The RA recommends some product which I need but I did not know  1  I can trust the RA. It is knowledgeable,  1  The RA is useful  1  The RA is easy to use  1  The RA tells me much more than I knew about  1  good because I was able to compare the product with other products  1  Had to rely on the 'get advice' option a lot.  1  I feel comfortable and happy because I don't need to talk to others  1  I don't need to shop around to collect the information for the product  1  Expectation Satisfication  102  10% 100%  24%  I can trust the RA to provide me with enough information  11  Total  14  14%  156  12  The recommended product has almost all the features I am looking for  13  The RA came up with product recommendations that nicely surprised me  1  I think the recommendation should be the best one for my needs  1  Control  Good, easier to narrow down my decision than last time  2  Pretty good, because it let me select the requirements that I need.  
2  I can know every detail and then I chose the best one  2  I knew enough info about the laptops/companies to make my decision  2  Generally I feel good because it is my software  1  It helps to narrow down my needs, I gain more control  1  I feel I had a lot of control in the decision  1  Benevolence Assessment  19  11  11%  10  10%  It helps me to ...  7  The recommendation is based on my criteria not others' preferences  2  The RA is looking for my best interest  1  £ 2 : Knowledge-based Emotional Distrust in an RA compet.  expectation  control  unknown  integrity  info  verification  interface  benev.  11  12  13  14  15  16  17  18  19  12  14  11  Total  -16  -19  -5  -17  -4  -6  -15  -2  -4  -87  18%  22%  6%  19%  4%  7%  17%  2%  5%  100%  Expectation Satisfaction  -19  22%  Not confident. My needs not fulfilled with the requirements portion  9  The RA should output more search results so I can have more chooses  4  I prefer the RA to ask me what I need to use the computer for  1  It should point out what requirements to modify when it finds nothing  1  If search was faster, I would rely on it more  1  If the recommendations were narrower, I would rely on it more  1  I would like to see more support for other brand's models.  1  The questions should have been more specialized to me & my lifestyle  1  Awareness of the Unknown  -17  19%  I don't know about...  7  Not sure ...  5  Might be, may be ...  3  I am wondering if...  2  Competence  -16  18%  157  No enough suggestions or no suggestion given  17  10  It gives info that it finds, without verifying that is accurate or complete  2  I feel its asking general questions only  1  A little slow  1  The suggestions are not narrow enough.  1  The RA has not done anything for me to gain my trust  1  Verification  -15  17%  I want to try another RA for second opinion  7  I still want to see the real product before I decided to buy it.  
5  I will print out the recommendation and go to my friends for more advice  3  158  Table 15: Verbal Protocols: the Structures of Trust and Distrust Cognitive Trust  Trust Distrust  Total  CTcomp C1 1053 57% -713 43% 1766 50%  Emotional Trust  CTCTbenev integ C2 C3 114 58 6% 3% -45 -31 2% 3% 159 5%  89 3%  Total CT vs. ET  E1 12 1% -5 0%  E2 585 31% -778 47%  CT E3 C1+C2+C3 1224 38 2% 66% -72 -789 4% 48%  17 0%  1362 39%  110 3%  2013 57%  Total  ET E1+E2+E3 635 34% -854 52%  CT+ET  1488 43%  3501 100%  1859 100% -1642 100%  Regard ing CT vs. ET, Pearson's chi-square = 427.9 (DF = 1, p < 0.01)  Note: CT-comp. - Cognitive Trust in Competence CT-benev. - Cognitive Trust n Benevolence CT-integ. - Cognitive Trust in Integrity CT - Cognitive Trust C1 - Knowledge-based CT-Competence Process C2 - Knowledge-based CT-Benevolence Process C3 - Knowledge-based CT-lntegrity Process ET - Emotional Trust E1 - CT Causing ET Process E2 - Knowledge-based ET Process E3 - Customer-characteristic-based ET Process  159  Table 16: Verbal Protocols: Trust and Distrust Formation Processes 11 503 33%  12 161 11%  13 58 4%  14 62 4%  15 29 2%  16 320 21%  17 209 14%  18 97 6%  19 78 5%  Total 1516 100%  Distrust  -296 22%  -275 20%  -89 6%  -379 28%  -19 1%  -112 8%  -78 6%  -102 7%  -24 2%  -1373 100%  Total  799 28%  436 15%  147 441 5%. 15%  48 2%  432 15%  287 10%  199 7%  102 4%  2888 100%  Trust  Pearson's chi-square = 503.5 (DF = 8, p < 0.01) Note:  11: Competence Assessment Process 12: Expectation Satisfaction Process 13: Control Process 14: Awareness of the Unknown Process 15: Integrity Assessment Process 16: Information Sharing Process 17: Verification Process 18: Interface Process 19: Benevolence Assessment Process  Table 17: Verbal Protocols: the Structure of Trust in Repeated Interactions Cognitive Trust CTCTcomp. benev. Interaction C1 C2 1 14.2 1.7 56% 7%  Total  Emotional Trust  CTinteg. C3 0.9 4%  E1 0.2 1%  E2 7.7 30%  Total  Total CT vs. 
ET ET E1+E2+E3 8.3 33%  CT+ET  E3 0.5 2%  CT C1+C2+C3 16.8 67%  25.1 100%  2  7.2 57%  0.7 6%  0.2 1%  0.1 1%  4.3 34%  0.2 1%  8.1 64%  4.5 36%  12.7 100%  3  8.3 57%  0.7 4%  0.4 3%  4.8 0.1 0% 33%  0.3 2%  9.4 64%  5.3 36%  14.7 100%  29.7 57%  3.1 6%  1.5 3%  0.3 1%  1.1 2%  34.3 65%  18.2 35%  52.5 100%  16.8 32%  Regarding CT vs. ET, Pearson's chi-square = 0.05 (DF = 2, p > 0.10) Note 1: The numbers are averages of coded-processes per subject. Note 2: 49 subjects were involved in interaction 1, including all the subjects. The quantity of subjects in interaction 2 was 23, including the subjects in repeated-interaction groups (group 2 and group 4 in Table 3). The quantity of subjects in interaction 3 was 23. Note 3: CT-comp. - Cognitive Trust in Competence CT-benev. - Cognitive Trust n Benevolence CT-integ. - Cognitive Trust in Integrity CT - Cognitive Trust C1 - Knowledge-based CT-Competence Process C2 - Knowledge-based CT-Benevolence Process C3 - Knowledge-based CT-lntegrity Process ET - Emotional Trust E1 - CT Causing ET Process E2 - Knowledge-based ET Process E3 - Customer-characteristic-based ET Process  161  Table 18: Verbal Protocols: the Structure of Distrust in Repeated Interactions  Interaction  Total  Cognitive Trust CTCTCTcomp. benev. integ. C2 C1 C3  Emo tional 1["rust  E1  E2  E3  Total  Total CT vs. ET CT C1+C2+C3  ET E1+E2+E3  CT+ET  1  -9.7 44%  -0.4 2%  -0.6 3%  0.0 0%  10.6 48%  -0.7 3%  -10.8 49%  -11.3 51%  -22.0 100%  2  -6.0 45%  -0.8 6%  -0.1 1%  -0.1 0%  -5.9 44%  -0.6 4%  -6.8 51%  -6.5 49%  -13.3 100%  3  -4.3 38%  -0.2 2%  0.0 0%  0.0 0%  -5.5 49%  -1.1 10%  -4.5 41%  -6.6 59%  -11.1 100%  -20.0 43%  -1.4 3%  -0.7 2%  -0.1 0%  21.9 47%  -2.3 5%  -22.1 48%  -24.4 52%  -46.5 100%  Regard ing CT vs. ET, Pearson's chi-square = 0.30 (DF = 2, p > 0.10)  Note 1: The numbers are averages of coded-processes per subject. Note 2: The quantity of subjects in interaction 1 was 49, including all subjects. 
23 subjects were involved in interaction 2, including the subjects in repeatedinteraction groups (group 2 and group 4 in Table 3), and the quantity of subjects in interaction 3 was 23. Note 3: CT-comp. - Cognitive Trust in Competence CT-benev. - Cognitive Trust n Benevolence CT-integ. - Cognitive Trust in Integrity CT - Cognitive Trust C1 - Knowledge-based CT-Competence Process C2 - Knowledge-based CT-Benevolence Process C3 - Knowledge-based CT-lntegrity Process ET - Emotional Trust E1 - CT Causing ET Process E2 - Knowledge-based ET Process E3 - Customer-characteristic-based ET Process  162  Table 19: Verbal Protocols: Trust Formation Processes in Repeated Interactions Interaction 1  11 6.5 32%  12 2.1 10%  13 0.9 4%  14 0.8 4%  15 0.5 3%  16 4.3 21%  17 2.8 13%  18 1.4 7%  19 1.2 6%  total(l) 20.5 100%  2  3.8 35%  1.6 15%  0.4 3%  0.5 5%  0.1 1%  1.9 17%  1.5 14%  0.6 5%  0.5 5%  10.8 100%  3  4.1 36%  0.9 8%  0.2 2%  0.5 5%  0.1 1%  2.9 25%  1.7 15%  0.5 5%  0.3 3%  11.4 100%  14.5 34%  4.6 11%  1.5 4%  1.8 4%  0.7 2%  9.0 21%  6.0 14%  2.6 6%  2.0 5%  42.7 100%  Total  Pearson's chi-square = 8.2 (DF = 16, p > 0.10) Note 1: The numbers are averages of coded-processes per subject. Note 2: The quantity of subjects in interaction 1 was 49, including all subjects. 23 subjects were involved in interaction 2, including the subjects in repeatedinteraction groups (group 2 and group 4 in Table 3), and the quantity of subjects in interaction 3 was 23. 
Note 3: I1: Competence Assessment Process; I2: Expectation Satisfaction Process; I3: Control Process; I4: Awareness of the Unknown Process; I5: Integrity Assessment Process; I6: Information Sharing Process; I7: Verification Process; I8: Interface Process; I9: Benevolence Assessment Process

Table 20: Verbal Protocols: Distrust Formation Processes in Repeated Interactions

Interaction   I1           I2           I3          I4           I5          I6           I7          I8          I9          Total
1             -4.2 (22%)   -3.8 (20%)   -1.2 (6%)   -5.3 (28%)   -0.4 (2%)   -1.2 (7%)    -1.0 (5%)   -1.4 (7%)   -0.3 (1%)   -18.7 (100%)
2             -2.1 (18%)   -2.3 (21%)   -1.0 (9%)   -2.9 (26%)   0.0 (0%)    -1.2 (10%)   -0.8 (7%)   -0.7 (6%)   -0.3 (3%)   -11.2 (100%)
3             -1.9 (22%)   -1.5 (17%)   -0.3 (4%)   -2.4 (28%)   0.0 (1%)    -1.1 (12%)   -0.5 (5%)   -0.8 (9%)   -0.2 (2%)   -8.6 (100%)
Total         -8.2 (21%)   -7.6 (20%)   -2.5 (7%)   -10.5 (27%)  -0.4 (1%)   -3.5 (9%)    -2.2 (6%)   -2.8 (7%)   -0.7 (2%)   -38.5 (100%)

Pearson's chi-square = 8.4 (DF = 16, p > 0.10)
Note 1: The numbers are averages of coded processes per subject.
Note 2: All 49 subjects took part in interaction 1. Interactions 2 and 3 each involved the 23 subjects in the repeated-interaction groups (groups 2 and 4 in Table 3).
Note 3: I1: Competence Assessment Process; I2: Expectation Satisfaction Process; I3: Control Process; I4: Awareness of the Unknown Process; I5: Integrity Assessment Process; I6: Information Sharing Process; I7: Verification Process; I8: Interface Process; I9: Benevolence Assessment Process

Table 21: The Effects of RA Internalization on the Structures of Trust and Distrust

Forming trust components:

Internalization   C1 (CT-comp.)   C2 (CT-benev.)   C3 (CT-integ.)   CT (C1+C2+C3)   E1          E2           E3          ET (E1+E2+E3)   Total (CT+ET)
low               19.3 (57%)      2.0 (6%)         0.8 (2%)         22.1 (65%)      0.3 (1%)    10.8 (32%)   0.7 (2%)    11.8 (35%)      33.8 (100%)
high              23.9 (56%)      2.7 (6%)         1.5 (4%)         28.0 (66%)      0.2 (1%)    13.3 (31%)   0.9 (2%)    14.4 (34%)      42.4 (100%)

Regarding CT vs. ET, Pearson's chi-square = 0.01 (DF = 1, p > 0.10)

Forming distrust components:

Internalization   C1            C2           C3           CT (C1+C2+C3)   E1           E2            E3           ET (E1+E2+E3)   Total (CT+ET)
low               -17.4 (45%)   -1.0 (3%)    -0.8 (2%)    -19.3 (50%)     -0.1 (0%)    -17.6 (45%)   -1.9 (5%)    -19.5 (50%)     -38.8 (100%)
high              -12.6 (41%)   -0.9 (3%)    -0.5 (2%)    -14.0 (46%)     -0.1 (0%)    -15.2 (50%)   -1.2 (4%)    -16.5 (54%)     -30.5 (100%)

Regarding CT vs. ET, Pearson's chi-square = 0.10 (DF = 1, p > 0.10)
Note 1: The numbers are averages of coded processes per subject.
Note 2: The low-internalization RA groups comprised 22 subjects (groups 1 and 2 in Table 3); the high-internalization RA groups comprised 27 subjects (groups 3 and 4 in Table 3).

Table 22: The Effects of RA Internalization on Trust/Distrust Formation Processes

Trust formation processes:

Internalization   I1           I2           I3          I4          I5          I6           I7           I8          I9          Total
low               8.0 (29%)    2.7 (10%)    1.1 (4%)    0.9 (3%)    0.3 (1%)    7.3 (27%)    3.6 (13%)    2.4 (9%)    1.0 (4%)    27.0 (100%)
high              12.6 (36%)   4.0 (11%)    1.2 (3%)    1.7 (5%)    0.9 (2%)    6.0 (17%)    5.0 (14%)    1.7 (5%)    2.1 (6%)    35.1 (100%)

Pearson's chi-square = 5.63 (DF = 8, p > 0.10)

Distrust formation processes:

Internalization   I1            I2            I3           I4            I5           I6            I7           I8           I9           Total
low               -8.4 (26%)    -5.9 (18%)    -2.0 (6%)    -8.2 (26%)    -0.5 (2%)    -3.2 (10%)    -1.5 (5%)    -1.8 (6%)    -0.5 (1%)    -31.9 (100%)
high              -4.2 (16%)    -5.6 (22%)    -1.8 (7%)    -7.7 (30%)    -0.3 (1%)    -1.6 (6%)     -1.8 (7%)    -2.4 (9%)    -0.5 (2%)    -25.8 (100%)

Pearson's chi-square = 5.74 (DF = 8, p > 0.10)
Note 1: The numbers are averages of the quantity of coded processes per subject.
Note 2: The low-internalization RA groups comprised 22 subjects (groups 1 and 2 in Table 3); the high-internalization RA groups comprised 27 subjects (groups 3 and 4 in Table 3).
Note 3: I1: Competence Assessment Process; I2: Expectation Satisfaction Process; I3: Control Process; I4: Awareness of the Unknown Process; I5: Integrity Assessment Process; I6: Information Sharing Process; I7: Verification Process; I8: Interface Process; I9: Benevolence Assessment Process

Table 23: Regression Analysis Coefficients

Significant standardized coefficients (predictors I1-I9), by regression:

C1-trust:      0.75, 0.24, 0.34, 0.11                Adjusted R^2 = 0.91
E2-trust:      0.21, 0.31, 0.44, 0.49                Adjusted R^2 = 0.86
C1-distrust:   0.61, 0.49, -0.20, 0.18               Adjusted R^2 = 0.90
E2-distrust:   0.18, 0.24, 0.49, 0.19, 0.17, 0.26    Adjusted R^2 = 0.93

Note 1: This table includes four regressions: C1-trust = f(I1, I2, ..., I9), E2-trust = f(I1, I2, ..., I9), C1-distrust = f(I1, I2, ..., I9), and E2-distrust = f(I1, I2, ..., I9).
Note 2: The regression method is Stepwise. The Backward and Forward methods were also run; each identified the same significant independent variables and yielded the same standardized coefficients.
Note 3: All coefficients in the table are standardized coefficients.
Note 4: Only significant coefficients are shown.
Note 5: The high R^2 values can be explained by the definitions of C1 and E2: C1 is knowledge-based cognitive trust in an RA's competence, and E2 is knowledge-based emotional trust in an RA. I1-I9 cover the most common ways customers process their knowledge about RAs, so C1 and E2 are well explained by I1-I9.
Note 6: I1: Competence Assessment Process; I2: Expectation Satisfaction Process; I3: Control Process; I4: Awareness of the Unknown Process; I5: Integrity Assessment Process; I6: Information Sharing Process; I7: Verification Process; I8: Interface Process; I9: Benevolence Assessment Process
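Tables 19-22 each report a Pearson chi-square statistic comparing row distributions of coded processes; for the 3 x 9 layouts of Tables 19 and 20 the degrees of freedom are (3-1)(9-1) = 16, as reported. The following sketch illustrates only the mechanics of the statistic; it is not the author's analysis code, and the 2 x 2 counts are hypothetical, not the thesis data.

```python
def chi_square(table):
    """Pearson chi-square statistic and degrees of freedom for an
    r x c table of observed frequencies."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # Expected count under independence of rows and columns.
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    df = (len(table) - 1) * (len(table[0]) - 1)
    return stat, df

# Hypothetical 2 x 2 table of counts.
stat, df = chi_square([[10, 20], [20, 10]])
print(round(stat, 2), df)  # -> 6.67 1
```

A 3 x 9 table passed to the same function yields DF = 16, matching the degrees of freedom reported under Tables 19 and 20.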
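Note 3 of Table 23 states that the reported coefficients are standardized. A minimal sketch of what that means, not code from the thesis and using hypothetical data: each variable is converted to z-scores before fitting, so slopes are expressed in standard-deviation units; for a single predictor the standardized slope equals the Pearson correlation.

```python
def mean(xs):
    return sum(xs) / len(xs)

def sample_sd(xs):
    m = mean(xs)
    return (sum((x - m) ** 2 for x in xs) / (len(xs) - 1)) ** 0.5

def standardize(xs):
    """Convert raw scores to z-scores (mean 0, sample SD 1)."""
    m, s = mean(xs), sample_sd(xs)
    return [(x - m) / s for x in xs]

def ols_slope(x, y):
    """Least-squares slope for a single predictor."""
    mx, my = mean(x), mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

# Hypothetical scores: e.g. I1 process counts vs. C1 trust ratings.
i1 = [2.0, 4.0, 6.0, 8.0]
c1 = [3.0, 4.0, 6.0, 7.0]
beta_std = ols_slope(standardize(i1), standardize(c1))
print(round(beta_std, 2))  # -> 0.99
```

With several predictors, stepwise selection (as in Note 2 of Table 23) would repeatedly add or drop the z-scored predictor that most changes model fit, retaining only significant coefficients.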
