UBC Theses and Dissertations

Impact of group support systems on judgment biases: an experimental investigation. Lim, Lai-Huat (1995)

Full Text

IMPACT OF GROUP SUPPORT SYSTEMS ON JUDGMENT BIASES: AN EXPERIMENTAL INVESTIGATION

by

LAI-HUAT LIM
B.Eng., National University of Singapore, 1988
M.Sc., National University of Singapore, 1990

A THESIS SUBMITTED IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in THE FACULTY OF GRADUATE STUDIES, Faculty of Commerce and Business Administration

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
April 1995
© Lai-Huat Lim, 1995

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

ACKNOWLEDGMENT

I would like to express my sincere appreciation to my advisor, Professor Izak Benbasat, for his guidance and support throughout the process of this research. Professor Benbasat has been of great assistance to me throughout my doctoral studies, and I have learned immensely from him.

I thank Professor Gerald Gorn and Professor Carson Woo for their kindness and interest in becoming members of my dissertation committee. I am grateful for their constructive comments on my research ideas and for provoking me to think carefully about what I was doing. Professor Chino Rao provided helpful suggestions during the early part of this research effort, and I thank him for his support.

I am also grateful to others who have helped in the process. The MIS Division created the GSS computing environment without which this research would not have been possible.
Robbie Nakatsu took time to assist in the recruitment of subjects. Gary Schwartz provided audio equipment and helped in its setting up. Professor Kelly Booth made available the use of a video camera.

I am thankful to my parents for the love of education they instilled in me. A special thanks to my wife, Serena, for her complete support in this adventure, including her every effort in taking good care of our daughter, Sophia.

ABSTRACT

Past research has demonstrated that individual and group judgments are subject to systematic biases. Although much effort has been devoted to the debiasing of individual judgments, no corresponding work to date has been found on the debiasing of group judgments. Complicating this research gap is the fact that group and team work is gaining increasing importance in organizational settings. The current study examines the usefulness of group support systems (GSS) in addressing two important judgment biases, namely, representativeness bias and availability bias. Representativeness bias refers to the bias incurred in posterior-probability estimation by not properly utilizing information sources such as base rate. Availability bias occurs when events of higher availability to the memory are correspondingly judged as occurring more frequently.

The formation of a judgment is seen from the perspective of an information integration process. Two orthogonal dimensions of information integration, interpersonal and intrapersonal, are involved in group judgments. Interpersonal information integration concerns the aspect of information sharing among group members, and can be supported with the electronic communication channel of GSS. Intrapersonal information integration deals with the information processing capacities and capabilities of individuals, and is supportable using cognitive-support tools of GSS.

A laboratory experiment with a 2x2 factorial design was conducted. One hundred and twenty subjects took part in the experiment.
They were randomly allocated to 40 groups. Two experimental tasks, designed to examine the two judgment biases of interest, were solved by each group. Data pertaining to both processes and outcomes were collected and analyzed.

Representativeness bias was reduced by the use of cognitive support, in the form of a problem representation tool. Groups with the problem representation tool made fewer references to diagnostic information versus base rate, leading to the use of more correct strategies which combined these two information sources. The use of the problem representation tool was found to be responsible for causing this chain of events. On the other hand, electronic communication did not lead to a similar change in the pattern of group processes, and, correspondingly, did not reduce the representativeness bias. Although electronic communication is capable of improving the interpersonal aspect of information integration, the representativeness bias is primarily a result of cognitive limitations, and benefits little from improved communication among group members.

Availability bias was reduced by both cognitive support and communication support. Cognitive support, in the form of electronic brainstorming, increased the information search scope of issues, especially those issues of relatively low availability to the memory. Electronic communication allows parallel input and has a lower social presence than verbal communication. These features helped to reduce the extent of groupthink and widened the range of alternative solutions proposed.

Some interaction effects were observed on group members' perceptions of the group process. For example, communication medium had an effect on group members' satisfaction in groups without cognitive support, but not those with cognitive support. Correspondingly, cognitive support affected some perceptual variables in verbally-communicating groups, but not electronically-communicating groups.
Examples of such effects include an increase in perceived socio-emotional behavior and perceived informal leadership.

TABLE OF CONTENTS

Abstract
Acknowledgment
List of Tables
List of Figures
Chapter 1: Introduction
  1.1 Background
  1.2 Research Questions and Importance of Topic
  1.3 Conduct of Research
  1.4 Organization of the Dissertation
Chapter 2: Literature Review
  2.1 Group Support Systems
    2.1.1 Empirical Research
    2.1.2 Components of GSS
    2.1.3 Brainstorming Research
  2.2 Judgment Biases
    2.2.1 Representativeness Bias
    2.2.2 Availability Bias
    2.2.3 Group Judgment
    2.2.4 Debiasing
  2.3 Summary
Chapter 3: Theoretical Models and Hypotheses
  3.1 Two-Dimensional Information Integration Perspective
  3.2 A Match of GSS Tools and Judgment Tasks
    3.2.1 On-line vs. Memory-based Judgment Tasks
    3.2.2 Problem Representation
    3.2.3 Electronic Brainstorming
    3.2.4 Electronic Communication
  3.3 Research Models
    3.3.1 Representativeness Bias
    3.3.2 Availability Bias
  3.4 Hypotheses: Representativeness Bias
    3.4.1 Effect of Cognitive Support
    3.4.2 Effect of Communication Medium
  3.5 Hypotheses: Availability Bias
    3.5.1 Effect of Cognitive Support
    3.5.2 Effect of Communication Medium
Chapter 4: Research Method
  4.1 Research Design
  4.2 Subjects
  4.3 Experimental Tasks
    4.3.1 Probability Estimation Task
    4.3.2 Job Evaluation Task
  4.4 Group Support System
    4.4.1 Cognitive Support
    4.4.2 Electronic Communication
  4.5 Dependent Measures
    4.5.1 Probability Estimation Task
    4.5.2 Job Evaluation Task
  4.6 Procedure
Chapter 5: Analysis of Experimental Results
  5.1 Preliminary Analysis of Data
    5.1.1 Homogeneity of Variance
    5.1.2 Independent Samples
    5.1.3 Normality of Error Terms
    5.1.4 Assessment of the Reliability of the Perceptual Variables
    5.1.5 Homogeneity of Group Composition
  5.2 Probability Estimation Task
    5.2.1 Cognitive Support
    5.2.2 Communication Medium
    5.2.3 Interaction Effects of Cognitive Support and Communication Medium
    5.2.4 Correlational Analysis
  5.3 Job Evaluation Task
    5.3.1 Cognitive Support
    5.3.2 Communication Medium
    5.3.3 Interaction Effects of Cognitive Support and Communication Medium
    5.3.4 Correlational Analysis
  5.4 Summary
Chapter 6: Discussion of the Findings
  6.1 Probability Estimation Task
    6.1.1 Representativeness Bias
    6.1.2 User Perceptions
  6.2 Job Evaluation Task
    6.2.1 Availability Bias
    6.2.2 User Perceptions
  6.3 Limitations of the Study
  6.4 Future Research Directions
  6.5 Conclusion
References
Appendix A: Interaction Coding Schemes
Appendix B: Experimental Materials
Appendix C: Detailed Statistical Results

LIST OF TABLES

3.1 Differences between electronic brainstorming and electronic communication
3.2 Summary of hypotheses for representativeness bias
3.3 Summary of hypotheses for availability bias
5.1 Hartley's test for homogeneity of variance (Probability estimation task)
5.2 Hartley's test for homogeneity of variance (Job evaluation task)
5.3 Lilliefors test for normality of the residuals (Probability estimation task)
5.4 Lilliefors test for normality of the residuals (Job evaluation task)
5.5 Effect of cognitive support on outcomes and perceptual variables (Probability estimation task)
5.6 Effect of cognitive support on process variables (Probability estimation task)
5.7 Effect of communication medium on outcomes and perceptual variables (Probability estimation task)
5.8 Effect of communication medium on process variables (Probability estimation task)
5.9 Interaction effect of cognitive support and communication medium on outcomes and perceptual variables (Probability estimation task)
5.10 Interaction effect of cognitive support and communication medium on process variables (Probability estimation task)
5.11 Correlations between process variables and outcomes and perceptual variables (Probability estimation task)
5.12 Effect of cognitive support on outcomes and perceptual variables (Job evaluation task)
5.13 Effect of cognitive support on process variables (Job evaluation task)
5.14 Effect of communication medium on outcomes and perceptual variables (Job evaluation task)
5.15 Effect of communication medium on process variables (Job evaluation task)
5.16 Interaction effect of cognitive support and communication medium on outcomes and perceptual variables (Job evaluation task)
5.17 Interaction effect of cognitive support and communication medium on process variables (Job evaluation task)
5.18 Correlations between process variables and outcomes (Job evaluation task)
A.1 Coding scheme used for substantive communication for probability estimation task
A.2 Coding scheme used for substantive communication for job evaluation task
A.3 Coding scheme used for procedural communication
C.1 Scale reliability coefficients and factor analysis results
C.2 Item reliability statistics and factor loadings: Perceived task participation
C.3 Item reliability statistics and factor loadings: Perceived socio-emotional behavior
C.4 Item reliability statistics and factor loadings: Satisfaction with outcome
C.5 Item reliability statistics and factor loadings: Satisfaction with process
C.6 Item reliability statistics and factor loadings: Perceived informal leadership

LIST OF FIGURES

3.1 The two-dimensional information integration perspective
3.2 GSS Components and the two dimensions of information integration
3.3 Research model (Representativeness bias)
3.4 Research model (Availability bias)
4.1 Experimental design
4.2 Problem representation tool
5.1 Interaction effect on perceived socio-emotional behavior (Probability estimation task)
5.2 Interaction effect on satisfaction with process (Probability estimation task)
5.3 Interaction effect on perceived informal leadership (Probability estimation task)
5.4 Interaction effect on references to the diagnostic information (Probability estimation task)
5.5 Interaction effect on references to the base rate (Probability estimation task)
5.6 Interaction effect on time to completion (Job evaluation task)
5.7 Interaction effect on perceived socio-emotional behavior (Job evaluation task)
5.8 Interaction effect on satisfaction with process (Job evaluation task)
5.9 Interaction effect on perceived informal leadership (Job evaluation task)

CHAPTER 1
INTRODUCTION

1.1 Background

Judgment and choice are pervasive activities in both business and everyday contexts.
For example, judging the risk of an investment project, estimating the likelihood that a rival company will introduce a certain product within the next month, judging the probability that a particular loan applicant will be able to pay back, and so on, are situations that require either individuals or groups of managers to arrive at appropriate judgments. The use of groups and committees to make judgments or decisions (the terms "judgment" and "decision" are used interchangeably here [Slovic and Lichtenstein, 1973]) is an important subset of such activities. Nevertheless, as shown by the stream of research started by Kahneman and Tversky (e.g., [Kahneman, Slovic, and Tversky, 1982; Kahneman and Tversky, 1972, 1973; Tversky and Kahneman, 1973, 1974]), human judgments are subject to systematic biases. Important examples include the base-rate fallacy, insensitivity to sample size, overconfidence, and availability bias. Both individual and group judgments have been found to be associated with one or more of these biases. Although debiasing methods aimed at eliminating or reducing biases have been examined in contexts involving individuals making judgments (e.g., [Elsaesser, 1989; Wright, 1983]), no parallel effort seems to exist for group judgments.

Group judgment and decision making is a more complex process than individual judgment; consequently, there are many more ways things could go wrong within a group. Specifically, groups often suffer from process losses [Steiner, 1972] such as groupthink [Janis, 1982], free-riding, and conformance pressure. To address these difficulties, group support systems (GSS), a combination of communication, computer, and decision technologies for supporting problem formulation and solution in group meetings [DeSanctis and Gallupe, 1987], have been designed and found to be effective in improving communication among group members. Although GSS has been evaluated using various criteria, if and how well it serves as a debiasing tool remains unexplored.
Possibly owing to this neglect, both the design and evaluation of GSS have been characterized by a tilted emphasis toward the communication perspective, and fall relatively short in terms of supporting the cognitive aspect of judgment and decision-making processes. For example, the meta-analysis conducted by Benbasat and Lim [1993] shows that much empirical work in GSS has been more concerned with the collective and group communication phenomena than with the cognitive and information processing aspects.

1.2 Research Questions and Importance of Topic

Our research objectives are to identify, from a theoretical perspective, the match between (new or existing) GSS group debiasing tools and judgment tasks, and to empirically determine the effectiveness of these tools. This research is aimed at addressing three basic questions:

(1) Does the use of GSS reduce judgment biases?
(2) What is the impact of GSS on user perceptions in a judgment context?
(3) How do judgment processes intervene between GSS use and judgment outcomes?

The GSS technology has been proposed to support group decision-making. Much empirical work has also been conducted which shows the usefulness of the technology under some realistic conditions. Nevertheless, an important and related area which has been researched intensely in its own right, judgment biases, has not been investigated in GSS research. There is no understanding of what role GSS plays in group judgment. At the same time, the lack of understanding of group debiasing also calls for the undertaking of this study. Mapping back to the first research question posed above, this research is aimed at assessing the usefulness of GSS, as a group debiasing tool, in reducing judgment biases.

Other than solution quality, it is also within the scope of this research to understand the impact of GSS use on user perceptions in a judgment context.
User perceptions are important indicators of how the technology might be accepted, or even whether or not it might be adopted, in organizational settings. Although a moderate amount of research has been conducted on the effect of GSS use on perceptual variables, it is observed that the task factor can explain a sizable variance in group interactions [Poole et al., 1985]; correspondingly, it is important to determine to what extent users' perceptions are affected in judgment situations. Perceptions examined in this study include satisfaction, task participation, and leadership behavior. Although related to the last perceptual variable, influence itself is not within the scope of this study (e.g., [Clapper, McLean, and Watson, 1991]).

There are two dimensions to the third research question. First, it is interesting to compare the judgment processes with and without GSS, to understand in what ways the overall pattern of the judgment processes has been altered by the use of GSS. Second, insights are needed on why the outcomes are achieved. Correspondingly, it is also important to investigate the linkages between the various aspects of judgment processes and the different facets of judgment outcomes.

We believe that this research will fill an important gap in both the GSS literature and the literature on judgment heuristics and biases. More importantly, it will serve to open up a new direction for GSS research which is more closely associated with the investigation of traditional decision theories.

1.3 Conduct of the Research

This research uses a laboratory experiment to examine the effect of GSS use on judgment processes and outcomes. Multiple methods of measurement are employed to capture the various constructs involved. These include objective measures of solution quality, questionnaire instruments for assessing user perceptions, and micro-level interaction analysis to determine the various aspects of the judgment process.
In sum, a multi-faceted analysis is employed to match the complexity of the behaviors under investigation.

1.4 Organization of the Dissertation

This dissertation is organized into six chapters. Chapter 2, the literature review, surveys literature in two areas: group support systems and judgment biases. The chapter includes issues of direct or indirect relevance to the current research. Chapter 3 describes the theoretical perspective underlying this research, presents the research models guiding this research, and derives the hypotheses for addressing the research questions. Chapter 4 describes the research method used in this research, including the experimental design, research tasks, dependent measures, and experimental procedures. Chapter 5 presents the statistical analysis of the experimental results, including primarily the analysis of variance and correlational analysis. Chapter 6 contains a discussion of the research findings and their interpretation, addresses the limitations of the research, and discusses future research directions.

CHAPTER 2
LITERATURE REVIEW

This study is informed by two broad fields, Group Support Systems (GSS) and Judgment Biases. Correspondingly, this chapter reviews relevant issues under these two areas. In particular, empirical evaluations of GSS and their underlying conceptual foundation are discussed. For judgment biases, this chapter focuses on two important biases of central interest to this research: representativeness bias and availability bias. Research on group judgment is also reviewed. Finally, the major debiasing approaches are described.

2.1 Group Support Systems

2.1.1 Empirical Research

A group support system (GSS) is a combination of communication, computer, and decision technologies for supporting problem formulation and solution in group meetings [DeSanctis and Gallupe, 1987].
A predominant theoretical approach, based on which much empirical work on GSS has been conducted, is the communication-process perspective of GSS use [DeSanctis and Gallupe, 1987]. According to this view, the decision process is revealed in the production and reproduction of positions regarding group action, which are directed toward the convergence of members on a final choice. Supporting group decision making primarily involves changing, in a positive direction, the interpersonal exchange that occurs as a group proceeds with the problem solving process.

Early empirical work explored various ways of implementing the desired support and testing its impact on group productivity (e.g., [Bui and Jarke, 1984; Gray, 1983; Lewis, 1982; Steeb and Johnston, 1981; Turoff and Hiltz, 1982]). While this phase of the research concluded that GSS has the potential to improve group processes and outcomes, it also highlighted the need for better GSS designs and more rigorous research. The subsequent phase saw laboratory experiments that compared GSS support to no support using factorial designs that often included an additional independent variable within the framework of small group research (e.g., [Gallupe, DeSanctis, and Dickson, 1988; George, Easton, Nunamaker, and Northcraft, 1990; Ho, Raman, and Watson, 1989; Lim, Raman, and Wei, 1994; Watson, DeSanctis, and Poole, 1988; Zigurs, Poole, and DeSanctis, 1988]). A wide range of dependent variables has been studied, including performance, satisfaction, consensus, and equality of influence [Benbasat, DeSanctis, and Nault, 1993]. Mixed results emerged from these studies. For example, whereas some experiments showed improved decision quality or satisfaction, others showed no effect or reversed effects. A meta-analysis of 31 studies found the effect of GSS use to be contingent on task characteristics, group characteristics, contextual factors, and technological factors [Benbasat and Lim, 1993].
The conclusions of the meta-analysis are substantiated by a separate, qualitative review [Dennis et al., 1991]. More recently, researchers have started to focus on examining the effects of the specific features or tools of GSS (e.g., [Gallupe et al., 1994; Valacich, Dennis, and Connolly, 1994]).

A main focus of the GSS research has to do with the communication perspective. Data from organizational case studies (e.g., [Sproull and Kiesler, 1986]), field experiments (e.g., [Eveland and Bikson, 1988]), and laboratory experiments (e.g., [McGuire, Kiesler, and Siegel, 1987; Siegel et al., 1986]) indicate that computer-mediated communication technologies do reduce barriers to communication. Comprehensive reviews of GSS research can be found in [Benbasat, DeSanctis, and Nault, 1993] and [Dennis and Gallupe, 1993].

2.1.2 Components of GSS

Although different implementations of GSS may contain different decision and communication aiding tools, some generic GSS tools may be defined conceptually. Nunamaker et al. [1991] have identified four categories of tools that support the different group activities: (1) exploration and idea generation, which includes electronic discussion besides brainstorming; (2) idea organization, for organizing ideas into specific categories; (3) prioritizing, which supports the evaluation of multiple items; and (4) tools that provide formal methodologies to support policy development and evaluation (e.g., stakeholder analysis). Whereas this categorization is useful for many decision making situations, we propose that for judgment situations, which conceptually precede the choice phase (the decision-theory model recognizes two distinct components: judgment and risky choice; the latter is where judgments are combined with utilities for outcomes to determine action choices [Libby, 1981]), a more specific set of tools can be defined. Building on Nunamaker et al.'s taxonomy, we propose that to support judgment activities, the following categories of GSS tools are needed: (1) information exchange, which primarily refers to electronic discussion; (2) idea generation, which specifically refers to electronic brainstorming; and (3) problem-representation tools, aimed specifically at debiasing a certain class of judgment tasks. The separation of idea generation from information exchange is also supported by Nagasundaram and Dennis [1993], who contend that:

"a large part of idea-generation behavior in electronic brainstorming (EBS) can be explained by viewing EBS as an individual, cognitive (rather than a social) phenomenon from the human information processing (IPS) perspective. EBS incorporates a set of structuring mechanisms meant to overcome the limitations of the human IPS. Consequently, a group using an EBS outperforms both verbal brainstorming and nominal groups by operating not as a group but as a collection of individuals who interact with an evolving set of ideas rather than with individuals." (p. 463)

Among the GSS components we proposed for supporting judgment, electronic communication is perhaps the most familiar concept in GSS research, and has correspondingly received the most attention. Problem representation is a relatively new concept in GSS research, although it has been studied in the context of individual debiasing, and shall be reviewed in the corresponding section. Next, we provide an overview of brainstorming research.

2.1.3 Brainstorming Research

Determining and improving the effectiveness of the brainstorming process in groups has been a major concern in this body of research.
Using a prescriptive approach, Osborn [1957] introduced a systematic technique, in the form of a set of rules, to help increase the effectiveness of the brainstorming process in groups. For example, one of the rules prohibits criticism of ideas during the brainstorming process. The rules are intended to overcome major social and motivational factors that could inhibit the generation of ideas. The technique was found to increase the number of ideas produced [Lamm and Trommsdorff, 1973]. The performance of nominal groups (i.e., non-interacting individuals) has also been compared to, and found to be superior to, that of verbally brainstorming groups [Dunnette, 1964; Lamm and Trommsdorff, 1973; Mullen, Johnson, and Salas, 1991; Taylor, Berry, and Block, 1958].

Diehl and Stroebe [1987] tested three major interpretations accounting for brainstorming productivity losses: production blocking, evaluation apprehension, and free riding [Lamm and Trommsdorff, 1973; Collaros and Anderson, 1969; Williams, Harkins, and Latane, 1981], and concluded that production blocking explained an overwhelmingly large proportion of the variance in performance between verbally brainstorming and nominal groups. In a follow-up study to determine the causes of production blocking, Diehl and Stroebe [1991] found that the longer a brainstorming participant had to wait to verbalize an idea, the greater was the productivity loss; the need to listen to others' ideas while rehearsing one's own ideas further impaired the productivity of verbally interacting groups. Diehl and Stroebe [1991] concluded that production blocking occurs for one or more of three reasons, which coincide with the three forms of production blocking described in [Nunamaker et al., 1991].
"Attenuation blocking" occurs when group members who are prevented from contributing comments as they occur forget or suppress them later in the meeting, because they seem less original, relevant, or important. "Concentration blocking" refers to the phenomenon that fewer comments are made because members concentrate on remembering comments, rather than thinking of new ones, until they can contribute them, as a result of short-term memory limitations. "Attention blocking" occurs when new comments are not generated because group members must constantly listen to others speak and cannot pause to think; in other words, exposure to others' ideas is distracting or interferes with their thinking.

There is evidence that electronic brainstorming can effectively address the process losses mentioned above. In several recent studies, groups which brainstormed electronically have been found to outperform verbally brainstorming and non-electronic nominal groups in the number of ideas generated [Dennis and Valacich, 1993; Gallupe, Bastianutti, and Cooper, 1991; Gallupe et al., 1992; Valacich, Dennis, and Connolly, 1994].

2.2 Judgment Biases

Many decisions are based on beliefs concerning the likelihood of uncertain events such as the future value of the dollar. The judgments thus conceived, represented in the form of odds or subjective probabilities, provide a basis for making choices between alternatives. For example, judgments on the future values of different currencies may lead to an investment decision focusing on certain currencies and disregarding the others. Economists and statisticians (e.g., [von Neumann and Morgenstern, 1947]) have constructed a theory of rational choice for these types of situations. Though the theory is inherently normative or prescriptive, it provided psychologists (e.g., [Edwards, 1961]) with a rigorous framework and a point of departure for developing a descriptive model of decision making under uncertainty.
Bayes' theorem and the concept of subjective probabilities have received the greatest attention in descriptive research. The resulting literature is often categorized under the label "probability judgment".

Most recent theoretical and empirical analyses of probability judgment (particularly the inference-process aspect) are based on the seminal work of Amos Tversky and Daniel Kahneman [1974], who suggest that observed biases in judgments are caused by the use of heuristics. Decision makers employ heuristics so that complex inference tasks can be reduced to more manageable proportions. Among others, two important heuristics have been proposed by Tversky and Kahneman which account for many of the findings in probability judgment research: representativeness and availability [Kahneman and Tversky, 1972, 1973; Tversky and Kahneman, 1973, 1974].

2.2.1 Representativeness Bias

When the representativeness heuristic is used, the probability that X came from Y is evaluated based on how much X resembles Y. In other words, class membership of an object is judged by its similarity to the stereotypical class member. The use of this heuristic leads to several systematic biases in probability estimation. These include insensitivity to prior probabilities (i.e., the base-rate fallacy); insensitivity to sample size; misconceptions of chance; insensitivity to predictability; illusion of validity; and misconceptions of regression [Tversky and Kahneman, 1974]. The base-rate fallacy, of particular concern to this research, refers to people's tendency to ignore base rates in favor of individuating or specific information, rather than integrate the two [Bar-Hillel, 1980]. Among the biases resulting from the use of the representativeness heuristic, the base-rate fallacy has probably been the most researched empirically (see [Bar-Hillel, 1983] for a review).
Among the biases resulting from the use of the representativenessheuristic, the base-rate fallacy has probably been most researched empirically (see [Bar-Hillel,1983] for a review).Normative rules prescribe that a prior probability associated with a belief in a hypothesis remainsrelevant when some new or more specific evidence is received. Bayes’ model dictates that p(HI B), the probability of a hypothesis H, given the information about the base rate or prior p(H),evidence B, and the diagnosticity of the evidence p(E H), be estimated with:12p(H I E) = [p(E I H) x p(H)] I [p(E I H) x p(H) + p(E I not H) x p(not H)]When asked for p(H I E), the probability of the hypothesis given the base rate and the evidence,subjects often give p(E I H) as their answer. That is, they favor the diagnostic information atthe expense of the base rate. To illustrate, let us consider the following “cab problem” [Tverskyand Kahneman, 1982], one of the most well-known problems in base-rate fallacy research:“A cab was involved in a hit-and-run accident at night. Two cab companies, the Greenand the Blue, operate in the city. You are given the following data:(i) 85% of the cabs in the city are Blue and 15% are Green.(ii) A witness identified the cab as Green. The court tested the reliability of the witnessunder nighttime visibility conditions and concluded that the witness correctly identifiedeach one of the two colors 80% of the time and erred 20% of the time.What is the probability that the cab involved in the accident was Green rather thanBlue?”The correct answer is 41%, but the most commonly given answer is 80% due to subjects’ over-reliance on the diagnostic information.2.2.2 Availability BiasWhen the decision maker uses the availabifity heuristic, the frequency or probability ofoccurrence of an event is judged by the ease with which similar events are brought to mind. 
For example, one may assess the risk of heart attack among middle-aged people by recalling such occurrences among one’s acquaintances. This results in biases due to retrievability of instances. Similarly, one may evaluate the probability that a given business venture will succeed or fail by imagining various difficulties it might encounter. This results in biases due to imaginability. Biases due to retrievability of instances and biases due to imaginability have important potential consequences for business judgments. When we judge the probability of events we have previously experienced, sensational and vivid events are more easily remembered. Overestimation of the probability of these events and underestimation of less spectacular events often result. In fact, selective bias due to the use of the availability heuristic has been specifically pointed out by Janis [1982] as a major defect in decision-making.

An example of the problems employed in availability research is to ask subjects to estimate the relative frequencies of appearance of a certain letter (e.g., R) in the first and third positions in English words [Tversky and Kahneman, 1973]. Despite the fact that the tested letters are more frequently found in the third position, a large majority of subjects judged the first position to be more likely, as a result of the corresponding words being more available to recall. Much of the availability research, however, does not involve a direct measurement of the degree of availability. Forbringer’s [1991] study is a useful exception. Forbringer [1991] examined whether job perceptions as measured by the Position Analysis Questionnaire (PAQ) [Mecham, McCormick, and Jeanneret, 1977] were affected by the availability heuristic. The availability of PAQ items was evaluated by 202 subjects using four measures of availability: familiarity, meaningfulness, vividness, and ease of example generation.
Based on these availability ratings, items were placed in one of three categories: low, medium, or high. Independent samples of subjects analyzed four jobs (insurance underwriter; building maintenance man; insurance claims superintendent; and secretary) using the PAQ. It was hypothesized and found that PAQ items which were rated as being more available produced significantly different ratings than those rated as being less available, irrespective of jobs. The effect of the availability heuristic on job perceptions was thus strongly established.

2.2.3 Group Judgment

Of particular interest to this research is group judgment. Whereas research on the biasing heuristics has mostly addressed individual decision makers, several empirical studies on group judgment exist. Argote, Seabright, and Dyer [1986] found the base-rate fallacy to be more severe in group judgments than in individual judgments, suggesting an obvious need for some type of support. Argote, Devadas, and Melone [1990] found a tendency for groups to be more ignorant of base rates than individuals when the individuating or diagnostic information is informative rather than vague. Consistent with these findings, Brightman et al. [1983] reported that nominal groups behave more like Bayesian information-processors than interacting groups. Interested in the group consensus process, Stasson et al. [1988] used the social decision scheme approach to study two representativeness tasks. For one task, the group consensus process can be described with a model suggesting groups to be only slightly better than individuals; for the other task, the best-fitting models correspond to indications that groups have a higher proportion of biased decisions than people working individually.
Whereas studies on the base-rate fallacy have generally found groups to show greater bias than individuals, studies on availability bias paint a somewhat opposite picture.

Following Lichtenstein, Slovic, Fischhoff, Layman, and Combs [1978], who studied availability bias in an individual context, Sniezek and Henry [1989] asked their subjects, in individual and group settings, to estimate the frequencies of some causes of death. The researchers found group judgments to be more accurate than individual judgments, although bias was still present in group judgments. These results were replicated by Sniezek and Henry [1990] with a comparable task. In the same study described earlier, Stasson et al. [1988] also examined the group consensus process in an availability task. The researchers employed a classical task in availability research [Tversky and Kahneman, 1973] by asking subjects to estimate the relative frequencies of appearance of the letter R in the first position of a word versus the third. The best-fitting model indicates groups to outperform individuals working alone, although, again, both display judgment bias. The findings suggest that availability bias observed in individuals can be only partially corrected using groups.

Although group judgment research illuminates the difference between group and individual judgment outcomes, the underlying processes of how group members combine individual judgments are not described in this literature.

2.2.4 Debiasing

The discovery of judgment biases has naturally inspired research efforts in the area of debiasing. Several methods of debiasing have been advocated.
These methods vary in their degree of sophistication as well as intervention into the judgment process, and include the use of an “efficient” group (i.e., one that is devoid of “groupthink” and other deficiencies; e.g., see [Janis and Mann, 1977; Libby, 1981]), finding alternate means of representing the problem, structuring the judgment process, providing automated normative models, and training.

(a) Using an Efficient Group

In a practical sense, to have an efficient group is the same as to improve the group process. In this respect, GSS tools which are targeted at improving the group communication process should be candidates for (at least partially) debiasing group judgment biases, although this conjecture has neither been proposed by advocates of the use of groups in judgment, nor tested by GSS researchers so far.

(b) Problem Representations

Various problem representations have been suggested and examined. These include “focusing techniques” [Fischhoff et al., 1979], witnessing of the random sampling process [Gigerenzer et al., 1988], presenting slides representing the population cases [Christensen-Szalanski and Beach, 1982], and so on. The earlier attempts have their limitations, in that some of these methods cannot be directly applied outside the laboratory context, and some others have undesirable “side effects”. For example, Fischhoff and Bar-Hillel [1984] showed that the focusing techniques also induced subjects to pay more attention to and use the base rates in problems where such information was not relevant. Despite these limitations, problem representation is a promising approach to debiasing, since the underlying objective here is to facilitate one’s understanding of the situation. Consequently, it is more likely than the other approaches to induce a longer-lasting effect as well as greater acceptance by the people involved.
A recent study by Roy and Lerch [1993] explored the judgment process itself, and concluded that judgments are more accurate when no causal links are present in the problem statement. Consequently, they made use of a graphical aid to represent the judgment problem, and found a significant improvement in judgment quality. Roy and Lerch’s approach is very similar to the concept of “probability map” proposed by Cole [1988], who tested the effectiveness of this and three other representations of Bayesian problems, and found them to lead to improved performance over the control condition [Cole, 1989].

(c) Process Structuring

Structuring the judgment process itself has also been proposed as a method of debiasing. Kahneman and Tversky [1979a] outlined a set of procedural steps for forecasting purposes. A similar approach has also been recommended by Hogarth [1980] for combining information for prediction. The general purpose of following through the structure is to highlight and bring to the decision makers’ attention the salient statistical principles that should not be ignored.

(d) Automated Models

Another popular debiasing approach is to provide decision makers with some kind of automated aid which incorporates the normative model. Adelman et al. [1982] developed an iterative Bayesian decision aid which permits users to modify likelihood ratios, and automatically calculates the posterior probabilities and odds. The decision aid was tested in an experiment involving an intelligence task in which subjects were to estimate the posterior probabilities of attacks from some opposing forces. Analysts equipped with the aid were found to produce judgments more consistent with predictions of the Bayesian model than those who were not aided. Using a similar approach, Wright [1983] provided computer-supported Bayesian aggregation to his subjects in a financial task, and reported on the usefulness of the decision support.
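The core computation such aids automate is the odds form of Bayes’ rule: posterior odds equal prior odds multiplied by the likelihood ratio of the evidence. The sketch below (our own illustration, not the interface of any of the aids cited above) applies this to the cab problem’s numbers:

```python
def update_odds(prior_odds, likelihood_ratio):
    """Posterior odds = prior odds x likelihood ratio (odds form of Bayes' rule)."""
    return prior_odds * likelihood_ratio

def odds_to_probability(odds):
    """Convert odds o to the probability o / (1 + o)."""
    return odds / (1.0 + odds)

# Cab-problem numbers: prior odds of Green = 0.15 / 0.85,
# likelihood ratio of the witness's "Green" report = 0.80 / 0.20 = 4.
posterior_odds = update_odds(0.15 / 0.85, 4.0)
print(round(odds_to_probability(posterior_odds), 2))  # 0.41
```

Presenting the likelihood ratio as a single editable quantity, as Adelman et al.’s aid does, lets the user adjust the diagnosticity of the evidence while the aid keeps the base rate in the calculation.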
One impediment with this approach, however, is the lack of acceptance due to the difficulty users have in understanding the use of Bayes’ rule. Elsaesser [1989] attempted to address this problem by incorporating explanations, provided using Artificial Intelligence techniques, into such decision aids. Indications of an increase in user acceptance, in the form of subjective confidence, were reported.

(e) Training

Finally, it has also been proposed that the deficiencies of human reasoning and choice could be corrected by training and learning. However, the limitation of this approach has been raised by Kahneman [1991], who points out that there is much evidence that even experts are not immune to the cognitive illusions that affect other people.

2.3 Summary

In summary, this literature review shows the following points:

(1) Much of past GSS research has treated GSS as an entity, thus preventing understanding of the effects due to its individual components, which are aimed at supporting different activities.

(2) Group judgment biases exist and, in some cases, are more severe than individual biases.

(3) Problem representation is a promising approach for debiasing.

(4) Cognitive constraints of the human information processing system limit the productivity of the unaided brainstorming technique. Electronic brainstorming has been effective in reducing these limitations.

CHAPTER 3
THEORETICAL MODEL AND HYPOTHESES

This chapter presents the theoretical arguments for proposing the use of GSS in addressing judgment biases. An information-integration perspective is used in this deliberation. Based on the theoretical model thus constructed, specific hypotheses are formalized for empirical testing.

3.1 Two-Dimensional Information Integration Perspective

This research employs a top-down approach in attempting to theorize the effects of GSS on judgment biases. This research uses a two-dimensional information integration perspective (TIIP) depicted in Figure 3.1.
According to this perspective, the effect of GSS use on group judgments and choices is produced via two integration processes. Along the interpersonal dimension, GSS brings together the views, opinions, judgments, and choices of individual members. The means by which the interpersonal aspect of information integration is accomplished is communication, which has been the traditional focus of GSS design and evaluation. Left unsupported, group processes are often plagued by “process losses”, which refer to aspects of the meeting process that impair outcomes relative to the efforts of the same individuals working by themselves or those of groups that do not experience them [Steiner, 1972]. An example of process loss is “air time fragmentation”, which refers to the need to partition speaking time among speakers during a verbal meeting [Nunamaker et al., 1991]. By incorporating electronic communication, GSS helps to reduce process losses such as air-time fragmentation through providing features such as parallel inputting, and forges a group judgment that takes into account a broader range of possibilities and considerations.

[Figure 3.1 The two-dimensional information integration perspective]

Although much of the GSS research has concentrated on the communication perspective, an equally important mechanism through which GSS may be useful in judgment and decision-making situations has to do with the more direct linkage between GSS and judgment tasks. This aspect involves cognitive support, dealing with the intrapersonal dimension of information integration. It is a well-known and undisputed fact that humans are limited in terms of cognitive processing capabilities. Correspondingly, decision aids should always be welcomed [Hogarth, 1980].
This aspect of GSS support has been relatively unexplored, although exceptions can be found (e.g., [Sengupta and Te’eni, 1993]).

For the sake of clarity in this discussion, the two dimensions have been conceptualized as though they were independent and orthogonal; in practice, however, these dimensions may be intertwined: one activity may trigger the other, and vice versa. For example, through sharing of information and views, individuals may obtain new information, thus triggering a new round of intrapersonal information integration. Similarly, after an individual has combined different information sources and arrived at a judgment, he might share it with the group, thus affecting the interpersonal information integration process.

In sum, two types of support can be provided by GSS to address the two aspects of information integration encountered in group judgment situations (see Figure 3.2). On the one hand, GSS provides features that improve communication among group members, thus making the group more efficient. The use of an efficient group (i.e., one that is devoid of “groupthink” and other deficiencies) has been proposed as a method in and by itself for improving judgments (e.g., [Janis and Mann, 1977; Libby, 1981]).
GSS addresses communication problems by providing parallel communication and a communication channel that carries fewer social and political cues. On the other hand, GSS may also provide features that aim directly at augmenting the information processing capacity and capability of human decision makers, thus replacing the less reliable heuristics.

[Figure 3.2 GSS components and the two dimensions of information integration]

The emphasis of our approach (i.e., the integration process) is comparable to that of “information integration theory” (IIT) [Anderson, 1974], one of the major decision and judgment theories [Hammond et al., 1980]. Whereas IIT proposes the concept of “cognitive algebra” (e.g., add, subtract, multiply, etc.) to describe the human cognitive activity invoked in combining information, TIIP suggests that intrapersonal information integration with GSS support may be achieved through viewing the problem or information search set differently. This point will be elaborated later. A second dimension included in TIIP has been developed to account for the group process. In contrast to the intrapersonal information integration process, the way different judgments and opinions are stimulated and combined is via a reduction of group process losses [Nunamaker et al., 1991].

3.2 A Match of GSS Tools and Judgment Tasks

3.2.1 On-line vs. Memory-based Judgment Tasks

From the perspective of the relationship between memory and judgment, Hastie and Park [1986] propose the taxonomy of “on-line” vs. “memory-based” judgment tasks. According to them, the key distinction concerns the source of inputs to the judgment operators. In on-line tasks, information for the operator follows a path from the stimulus environment external to the subject into working memory and directly to the judgment operator.
An important example in this group is the representativeness task (associated with the representativeness bias). On the other hand, in memory-based tasks the judgment is necessarily memory-based, and the subject must rely on the retrieval of relatively concrete evidence from memory in order to reach a judgment. Therefore, the availability task (associated with the availability bias) belongs to this category. The pattern of findings that emerges from the overview of research in group judgment lends some support to this taxonomy, by showing clearly different results between the representativeness and availability tasks (i.e., groups showed greater representativeness bias and slightly lower availability bias than individuals). Although simple, Hastie and Park’s taxonomy is seen as adequate for the purpose of an initial effort in identifying types of GSS tools for debiasing judgments.

It would be superficial to assume that only two types of “generic” GSS tools will be needed to deal with the biases associated with the two categories of judgment tasks, primarily because each category (especially on-line) may not be totally homogeneous. Consequently, instead of trying to find a generic set of tools for each category of judgment tasks, we shall attempt to identify the appropriate tools for the most important member task in each category, i.e., the representativeness task and the availability task.

3.2.2 Problem Representation

Previous research has pointed out the effect of problem representation on the neglect of base rates [Gigerenzer, Hell, and Blank, 1988; Ginosar and Trope, 1980; Wells and Harvey, 1978]. Roy and Lerch [1993] went a step further in identifying the underlying cognitive processes that may explain this effect. Based on empirical data, they concluded that causal representations dominate information processing in base-rate problems.
In such representations, for example, a causal link is formed between the event “the cab was green” and the event “the witness says the cab was green”, such that the former causes the latter. Related to this internal representation, semantic confusion of different types of conditional probabilities has been observed [Dawes, 1986; Eddy, 1982]. For example, when questions are posed in an inverse-to-causal order (e.g., If the witness says the cab was green, what is the probability that it was in fact green?), people tend to reinterpret the question to fit the causal order (i.e., If the cab was green, what is the probability that the witness says it was green?). The fact that people tend to represent concepts in terms of causal relationships in base-rate problems is not an isolated incident; recent developments in cognitive psychology seem to indicate that people naturally organize narrative information in terms of the causal relations contained in the text (e.g., see [Murphy and Medin, 1985]). However, when no causal links are present in the representation, judgments are more accurate, or closer to the normative answer [Bar-Hillel, 1980; Roy and Lerch, 1993]. Because no causal theory can be derived, people resort to feature similarity for categorization. Using the cab problem [Tversky and Kahneman, 1982], for example, the representation would involve the following categories: green cab identified (by witness) as green; green cab identified as blue; blue cab identified as blue; blue cab identified as green.

Following from the above, a promising approach to improving judgments in base-rate-fallacy-generating contexts would be to help people build non-causal representations. Previous results have shown that representing the problem situation in terms of the population distribution, and visualizing the characteristics of the various components, facilitates judgments of probabilities [Christensen-Szalanski and Beach, 1982; Pollatsek et al., 1987].
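The four-category, non-causal representation makes the normative answer nearly transparent once joint frequencies are written down for a hypothetical fleet. A small illustration of this arithmetic (the fleet size of 100 is our own convenience, not part of the original problem):

```python
# Joint frequencies for a hypothetical fleet of 100 cabs, following the
# four-category (non-causal) representation of the cab problem.
green, blue = 15, 85            # base rates
hit_rate, error_rate = 0.80, 0.20  # witness reliability

green_id_green = green * hit_rate    # green cab identified as green: 12
green_id_blue = green * error_rate   # green cab identified as blue:   3
blue_id_blue = blue * hit_rate       # blue cab identified as blue:   68
blue_id_green = blue * error_rate    # blue cab identified as green:  17

# Among cabs identified as green, what fraction is actually green?
p = green_id_green / (green_id_green + blue_id_green)
print(round(p, 2))  # 0.41
```

Of the 29 cabs the witness would call green, only 12 really are, which is the population-distribution intuition behind representational aids such as the probability map.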
In particular, tables and graphics have been tested as representational aids (e.g., [Cole, 1989]). Roy and Lerch [1993] conclude that a table is not sufficient to change the initial causal representation of the problem, and that graphs present the interaction of object features more clearly. They further recommend graphical representational aids as critical elements in decision support systems (DSS) design for probabilistic reasoning.

Part of the current research represents a further extension of the study of problem representation as a debiasing tool, by incorporating into GSS a graphical problem representation tool known as a “probability map” [Cole, 1988, 1989; Roy and Lerch, 1993] to help improve group judgments.

3.2.3 Electronic Brainstorming

In the availability task (and indeed, most memory-based tasks), judgment biases are caused by, among other things, limited information search sets. Correspondingly, the GSS tool necessary to provide cognitive support is the idea generation, or electronic brainstorming, tool.

All methods of idea generation seek to create contexts for individuals that stimulate the generation of ideas. For example, in verbal brainstorming groups, group members are allowed to interact and freely generate ideas, so that each participant can be made a stimulus source for the others. The general principle underlying many idea generation methods is to create sources of variety in a participant’s environment: the greater the variety in the sources (or stimuli) of ideas, the greater the potential variety of ideas generated [Hoffman, 1959]. However, in unaided brainstorming situations, the cognitive limitations of the human information processing system (IPS) [Newell and Simon, 1972] severely restrict the productivity of the technique.
By providing cognitive assistance to the IPS, electronic brainstorming has been found to be more effective than either verbal brainstorming [Gallupe et al., 1992] or nominal group idea generation, where individuals work separately without communicating [Dennis and Valacich, 1993; Valacich, Dennis, and Connolly, 1994].

An electronic brainstorming system implements the brainstorming technique on a network of computers, and includes three structuring mechanisms - parallel input, collective memory, and serially retrievable output - that together work around the limitations of the IPS [Nagasundaram and Dennis, 1993]. The parallel input mechanism allows participants to contribute ideas as soon as they are generated. Thus, participants need not rehearse their ideas in short-term memory indefinitely, and their short-term memory can be freed up for processing additional ideas. Nonetheless, ideas generated simultaneously by multiple participants cannot all be attended to and committed to memory by participants because of the serial nature of the IPS. To address this limitation, the collective memory mechanism allows these ideas which have been generated in parallel to be stored and later retrieved by a participant at will from this external memory. The serially retrievable output mechanism permits ideas to be accessed one at a time in any sequence unrelated to the sequence in which they were generated. Furthermore, the collective memory helps decouple the parallel input from the serial output, and introduces at least one level of indirection of communication among group members. These three mechanisms together create the necessary and sufficient conditions for production “unblocking” to occur [Nagasundaram and Dennis, 1993]. “Production blocking” refers to the phenomenon that ideas and comments are suppressed due to cognitive limitations, and may be further differentiated as attenuation blocking, concentration blocking, and attention blocking [Nunamaker et al., 1991].
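The three structuring mechanisms can be made concrete with a minimal sketch of an idea pool: parallel input (any participant submits at any time), a shared collective memory, and serially retrievable output (ideas browsed at a participant’s own pace, decoupled from input order). The class and method names here are our own illustration, not those of any actual GSS product:

```python
import threading
from datetime import datetime

class CollectiveMemory:
    """Minimal sketch of an electronic brainstorming idea pool."""

    def __init__(self):
        self._lock = threading.Lock()  # makes concurrent (parallel) input safe
        self._ideas = []

    def submit(self, author, text):
        # Parallel input: any participant may add an idea as soon as it occurs.
        with self._lock:
            self._ideas.append({"author": author, "text": text,
                                "time": datetime.now()})

    def retrieve(self, start=0, count=5):
        # Serially retrievable output: ideas are read one small batch at a
        # time, at the reader's own pace, decoupled from how they arrived.
        with self._lock:
            return list(self._ideas[start:start + count])

pool = CollectiveMemory()
pool.submit("A", "offer a referral bonus")
pool.submit("B", "survey recent customers")
print([idea["text"] for idea in pool.retrieve()])
# ['offer a referral bonus', 'survey recent customers']
```

Because every idea persists in the pool, a participant who is mid-thought can ignore incoming ideas and return to them later, which is precisely how the collective memory works around the serial nature of the IPS.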
Attenuation blocking occurs when group members who are prevented from contributing comments as they occur forget or suppress them later in the meeting, because they seem less original, relevant, or important. Concentration blocking refers to the phenomenon that fewer comments are made because group members concentrate on remembering comments (rather than thinking of new ones) until they can contribute them. Attention blocking is encountered when new comments are not generated because group members must constantly listen to others speak and cannot pause to think. Each of these aspects can be effectively dealt with using the structuring mechanisms of the electronic brainstorming system.

3.2.4 Electronic Communication

It should be obvious that the GSS tools identified above are meant to support the intrapersonal aspect of the information integration process. A key difference separating the group decision and judgment process from its individual counterpart is the fact that group decision and judgment occur as the result of interpersonal communication - the exchange of information among members [DeSanctis and Gallupe, 1987]. The group decision process is revealed in the production and reproduction of positions regarding group action, which are directed toward the convergence of members on a final judgment or choice [DeSanctis and Gallupe, 1987]. The communication activities exhibited in a decision- or judgment-related meeting include expressions of preference, argumentation, socializing, information seeking, information giving, and solution development [Bedau, 1984; Poole, 1983a]. With GSS, these activities may be supported with electronic communication, referring to the ability to communicate with one or more group members using electronic mail.

The difference between electronic communication and verbal communication has been viewed from different perspectives [Lim and Benbasat, 1993]. For example, 
Lim and Benbasat [1991] point out that the two communication channels differ in terms of “paths” and “contents”. The electronic communication channel takes a somewhat indirect and longer path than the verbal channel, since any group member’s input has to go through the computer system before reaching the target recipient. However, given the very high transmission rate of electronic communication, this difference is transparent at the user level. A second difference has to do with the fact that electronic communication contains fewer social and political cues, and is therefore relatively task-oriented. In comparison, the verbal communication channel supports socio-oriented communication better by accommodating verbal cues such as timing, pauses, accentuations, tonal inflections, etc. This distinction is supported by the findings of empirical research [Hiltz and Turoff, 1978; Rice, 1984] and consistent with the concepts of media richness theory [Daft and Lengel, 1986; Daft, Lengel, and Trevino, 1987; Trevino, Lengel, and Daft, 1987].

From another perspective, Zigurs [1988] defines the difference between the two communication channels in terms of: (1) the skill it requires of the communicator, i.e., verbalization vs. keyboarding; (2) the capacity of information transmitted (i.e., the amount of information communicated), which is higher for electronic communication since it allows both textual and graphical information; and (3) the inputting feature of the channel, which highlights the parallel-inputting procedure of electronic communication.

Two distinctions between electronic communication and verbal communication which are of particular relevance to this study can be extracted from the two perspectives presented above. First, the “social presence” of the electronic communication channel is lower than that of the verbal communication channel [Rice, 1992].
“Social presence” is the degree to which a medium is perceived as conveying the actual physical presence of the communicating participants [Short et al., 1976]. This social presence depends not only on the words conveyed during communication, but also upon a range of nonverbal cues, including facial expression, direction of gaze, posture, attire, and physical distance, and many verbal cues (timing, pauses, accentuations, tonal inflections, etc.) [Argyle, 1969; Birdwhistle, 1970]. Because of the lower social presence in electronically-communicating groups, group members need not be as sensitive or wary in making suggestions that seem to be in opposite directions to those made by others. Indeed, electronic communication is capable of facilitating free expression of views and opinions, a desirable element of group process [DeSanctis and Gallupe, 1985; Harmon and Rohrbaugh, 1990; Hoffman, 1965; Shepherd, 1964].

Second, the parallel communication made possible by the electronic communication channel allows group members to raise their thoughts and comments simultaneously, without having to wait for another person to finish speaking. Consequently, process losses such as air-time fragmentation are reduced, and interpersonal information integration is improved.

Nonetheless, the usefulness of the electronic communication channel (versus the verbal communication channel) depends on the type of judgment task. Specifically, electronic communication is useful when the judgment task is constrained by both intrapersonal and interpersonal aspects of information integration, and can be helped by improving either of the aspects. The availability task, a memory-based task, falls in this category, because availability bias is primarily associated with limitations in the information search scope, which can be expanded either through providing cognitive support to individual group members or via information sharing among group members. Electronic communication aids the latter.
On the other hand, when the judgment task is restricted mainly by the information processing capacity of group members and benefits little from information exchange among them, the usefulness of electronic communication is limited. This is the case with the representativeness task, an on-line task. In fact, the representativeness bias has been found to be greater in group judgments than in individual judgments [Argote et al., 1986, 1990; Brightman et al., 1983; Stasson et al., 1988].

Several important differences between electronic communication and electronic brainstorming ought to be noted (see Table 3.1). First, the intent and design of electronic brainstorming has a distinct cognitive focus [Nagasundaram and Dennis, 1993]; in other words, individuals are constantly interacting with an evolving list of ideas rather than with other group members. On the other hand, the focus of electronic communication is mainly interpersonal. Second, the electronic brainstorming tool is designed to encourage divergence of ideas, but the primary purpose of the electronic communication tool is to help the convergence of views and opinions. Third, whereas in electronic brainstorming the parallel input and serial output are decoupled by the collective memory mechanism, which introduces a level of indirection of communication among group members, communication using the electronic communication tool has to be synchronized. Indeed, Easton, George, Nunamaker, and Pendergast [1990] found, when comparing an electronic discussion tool to an electronic brainstorming tool, that the number of ideas generated is significantly lower with electronic discussion.
They concluded that each tool worked best on the task for which it was designed, thus supporting the premise that there should be a match between the GSS tool and the aspect of the information integration process to be supported.

The concept of viewing the effects of GSS use in terms of cognitive and information-exchange aspects parallels that of viewing the effects of negotiation support systems in terms of decision and communication aids [Lim and Benbasat, 1993].

Table 3.1 Differences between electronic brainstorming and electronic communication

                   Electronic Brainstorming        Electronic Communication
  Focus            Cognitive                       Interpersonal
  Purpose          Divergence of ideas             Convergence of views
  Synchronization  Indirection of communication    Communication has to be
                                                   synchronized

3.3 Research Models

In this research, the impact of GSS is examined in terms of both judgment processes and judgment outcomes. The exploration of the judgment processes provides insight into why the outcomes occur as they do. The detailed variables vary according to the judgment tasks so as to cater to the specific aspects of processes and outcomes pertaining to each task. In the following we discuss the specific research models for investigating the impact of GSS on the representativeness bias and the availability bias.

3.3.1 Representativeness Bias

The research model used for investigating the representativeness bias is shown in Figure 3.3. The cognitive support in this case is realized in the form of the problem representation tool. We propose that the impact of GSS on the judgment process can be measured by six variables, aimed basically at understanding the process of integrating the two relevant pieces of information, i.e., the diagnostic information and the base rate. “References to the diagnostic information” indicates how frequently the group refers to the diagnostic information. Similarly, “references to the base rate” provides information on the group’s access of the base rate.
“References to the use of single information source” reflects the group’s resolution involving only one piece of information, which is obviously biased. On the other hand, “references to the use of dual information sources” refers to the group’s use of two pieces of information, which may or may not be properly combined. “References to the use of the problem representation tool” is a measure of how often the group refers to the support tool, and is thus only meaningful within groups provided with the tool. Finally, “procedural comments” refers to the amount of procedural communication.

[Figure 3.3 Research model (Representativeness bias): GSS tools (cognitive support in the form of the problem representation tool; electronic communication) affect the judgment process (references to diagnostic info; references to base rate; references to use of single info source; references to use of dual info sources; references to use of problem representation tool; procedural comments) and judgment outcomes (normalized error; time to completion; perceived task participation; perceived socio-emotional behavior; satisfaction with outcome; satisfaction with process; perceived informal leadership).]

Whereas the previous variables are components of what is called substantive or topical focus communication [Poole, 1983b], procedural statements differ from substantive, or information-processing, statements by commenting upon ongoing interaction; i.e., they are statements about what the group is doing, where it is going, and what it should do [Putnam, 1981]. They are indeed meta-messages that direct the mechanics of group activity by reflecting upon, integrating, and coordinating group talk with past behaviors and future contingencies. Procedural maneuvers also function as means of controlling unwieldy or deviant members [Penland and Fine, 1974], and as strategies used during struggles for leadership emergence [Schultz, 1974, 1978; Bormann, 1975].

Three types of judgment outcomes are investigated.
The key measure of judgment outcomes is the judgment bias, which, in the case of the probability estimation task, can be assessed using a “normalized error” measure. “Normalized error” measures the difference between the group’s response and the correct solution given by the Bayes’ model, normalized using the value of the correct solution. The second measure of judgment outcomes is “time to completion”, measured as the total time taken for the group to complete the task (although there is a maximum time imposed). The third type of judgment outcomes investigated is user perceptions. Five perceptions are measured. “Perceived task participation” captures the extent to which group members perceive themselves as participating in the decision process. “Perceived socio-emotional behavior” measures the extent to which group members perceive negative socio-emotional behavior to have taken place in the meeting; for example, how much they feel their opinions have been rejected or they have rejected others’ opinions. “Satisfaction with outcome” and “satisfaction with process” measure group members’ satisfaction with the solution and the process respectively. Finally, “perceived informal leadership” measures the extent to which group members perceive informal leaders to have emerged during the judgment process.

3.3.2 Availability Bias

The research model used for investigating the availability bias is shown in Figure 3.4. The cognitive support in this case is realized in the form of the electronic brainstorming tool. Six process variables, aimed basically at understanding how the information search scope is affected, are included.

[Figure 3.4 Research model (Availability bias): GSS tools (cognitive support in the form of the electronic brainstorming tool; electronic communication) affect the judgment process (numbers of positive and negative ideas mentioned during discussion; numbers of ideational connections with ideas generated during electronic brainstorming and with ideas mentioned during earlier discussion; number of specific proposals of estimates; procedural comments) and judgment outcomes (availability bias; time to completion; total scores of high- and low-availability items; perceived task participation; perceived socio-emotional behavior; satisfaction with outcome; satisfaction with process; perceived informal leadership).]

“Number of positive ideas mentioned during discussion” refers to the total count of ideas mentioned during discussion in support of the issue at hand. In contrast, “number of negative ideas mentioned during discussion” refers to the total count of ideas mentioned during discussion in refutation of the importance of the issue at hand. “Number of ideational connections with ideas generated during electronic brainstorming” measures the degree to which ideas generated during electronic brainstorming are used during subsequent discussion. “Number of ideational connections with ideas mentioned during earlier discussion” measures the degree to which ideas mentioned during earlier discussion are used in later deliberations. “Number of specific proposals of estimates” measures the total number of times an estimate of the frequency of occurrence or importance of the issue at hand is proposed; this variable is useful for the availability task since the availability bias is usually encountered in situations requiring the estimation of the frequency of occurrence or importance of events. Finally, “procedural comments” measures the amount of procedural communication.

Three types of judgment outcomes are studied. The first type addresses the judgment bias. “Availability bias” is a direct measure of the judgment bias itself, operationalized as the difference between scores (i.e., estimates) assigned to a set of high-availability items and those assigned to a comparable set of low-availability items.
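The quantitative outcome measures defined above can be sketched numerically. The sketch below is illustrative only and is not part of the thesis: the hit-rate/false-alarm parameterization of the diagnostic information and every number used are assumptions chosen for exposition.

```python
# Illustrative sketch of the outcome measures; all numbers are hypothetical.

def bayes_posterior(base_rate, hit_rate, false_alarm_rate):
    """Correct solution from Bayes' rule, combining the base rate with the
    diagnostic information (assumed here to be a hit/false-alarm pair)."""
    numerator = hit_rate * base_rate
    return numerator / (numerator + false_alarm_rate * (1.0 - base_rate))

def normalized_error(group_response, correct_solution):
    """Difference between the group's response and the correct (Bayes)
    solution, normalized using the value of the correct solution."""
    return abs(group_response - correct_solution) / correct_solution

def availability_bias(high_availability_scores, low_availability_scores):
    """Difference between the total scores assigned to a set of
    high-availability items and those assigned to a comparable set of
    low-availability items."""
    return sum(high_availability_scores) - sum(low_availability_scores)

# Hypothetical task: base rate of 15%, diagnostic source with an 80% hit
# rate and a 20% false-alarm rate.
correct = bayes_posterior(0.15, 0.80, 0.20)    # about 0.414
# A group that relies only on the diagnostic information answers 0.80:
error = normalized_error(0.80, correct)        # large normalized error
# Hypothetical per-item estimates on a 1-9 scale:
bias = availability_bias([9, 8, 8], [3, 2, 4]) # positive difference
```

Under these assumed numbers, a group exhibiting the base-rate fallacy would answer close to 0.80 and incur a large normalized error, while a positive availability-bias difference indicates that high-availability items were scored higher than comparable low-availability items.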
“Total scores of high-availability items” and “total scores of low-availability items” are thus components of the previous measure. The second type of judgment outcomes investigated is speed. As the name suggests, “time to completion” measures the total time taken for the group to complete the task. The third type of judgment outcomes investigated is user perceptions. The same five perceptions measured in the study of the representativeness bias are included here. “Perceived task participation” captures the extent to which group members perceive themselves as participating in the decision process. “Perceived socio-emotional behavior” measures the extent to which group members perceive negative socio-emotional behavior to have taken place in the meeting; for example, how much they feel their opinions have been rejected or they have rejected others’ opinions. “Satisfaction with outcome” and “satisfaction with process” measure group members’ satisfaction with the solution and the process respectively. Finally, “perceived informal leadership” measures the extent to which group members perceive informal leaders to have emerged during the judgment process.

3.4 Hypotheses: Representativeness Bias

This section presents hypotheses on the effect of cognitive support and communication medium on judgment processes and outcomes pertaining to the representativeness bias.

3.4.1 Effect of Cognitive Support

Normalized Error

Groups supported with the problem representation tool, compared to groups without the tool, produce more accurate solutions.
The problem representation tool improves group members’ understanding of the problem, increases their awareness of the saliency of the information sources, and provides new and additional approaches for combining these information sources.

Hypothesis 1a: Normalized error
Normalized error will be smaller with cognitive support than without cognitive support.

Time to Completion

By representing the problem in a way that group members have not usually encountered before, the problem representation tool introduces new dimensions for them to understand and appreciate the information sources of the problem and the problem itself. The comparison of the new and old approaches and the selection of one approach increase the problem-solving time.

Hypothesis 1b: Time to completion
Groups provided with cognitive support will take longer to complete the task than groups not provided with cognitive support.

Perceived Task Participation

The presence of the problem representation tool increases the possible number of approaches in the group interaction by introducing new ways of combining information sources, thus providing group members with more opportunities, as well as placing a greater demand on them, to participate in the consideration and selection of the potential strategies. Correspondingly, group members perceive a higher degree of task participation.

Hypothesis 2a: Perceived task participation
A greater degree of task participation will be perceived in groups provided with cognitive support than in groups not provided with cognitive support.

Perceived Socio-Emotional Behavior

Groups provided with the problem representation tool have a higher degree of task participation than those that are not. However, a higher degree of participation, coupled with a greater number of issues to explore and a wider range of choices from which to choose, also increases the opportunity for conflict.
There is now a greater chance for suggestions or opinions to be rejected, and for group members to feel tense or frustrated about each other.

Hypothesis 2b: Perceived socio-emotional behavior
More negative socio-emotional behavior will be perceived in groups provided with cognitive support than in groups not provided with cognitive support.

Satisfaction with Outcome

To the extent that groups provided with the problem representation tool experience a more thorough information integration process than groups not provided with the tool, which leads to a more accurate solution, we expect satisfaction with outcome to be higher in the supported groups.

Hypothesis 2c: Satisfaction with outcome
Groups with cognitive support will be more satisfied with the judgment outcome than groups without cognitive support.

Satisfaction with Process

Despite its usefulness in reducing the representativeness bias, the problem representation tool is expected to cause lower satisfaction with the interaction process. The presence of the tool increases the complexity of the group interaction, causing a greater degree of task participation but also a higher degree of negative socio-emotional behavior to be perceived by group members. Correspondingly, group members are less satisfied with the process.

Hypothesis 2d: Satisfaction with process
Groups with cognitive support will be less satisfied with the process than groups without cognitive support.

Perceived Informal Leadership

A greater degree of leadership behavior can be anticipated in a more complex situation than in a less complex one [Stogdill, 1974]. The presence of the problem representation tool adds new ways of viewing the problem; the rise of new dimensions brings about a need not only to interpret their meaning, but also to ponder how they can be utilized.
These new activities call for a greater need of coordination among group members, thus creating the conditions for leadership behavior to happen. [Footnote 33: Although not tested, group size is a potential variable capable of influencing leadership behavior. For example, there exists evidence that the opportunity for leadership tends to decrease as the size of the group increases [Stogdill, 1974].]

Hypothesis 2e: Perceived informal leadership
Perceived informal leadership will be higher in groups with cognitive support than in groups without cognitive support.

References to the Diagnostic Information

The base-rate fallacy has been attributed to an over-reliance on the diagnostic information (versus the base rate) in both individual and group judgment situations [Argote et al., 1990; Bar-Hillel, 1980]. The problem representation tool offers a radically different way of viewing the information sources, thus highlighting the saliency of the base rate to group members. Consequently, we expect groups provided with the tool to make fewer references to the diagnostic information than groups not provided with the tool.

Hypothesis 3a: References to the diagnostic information
Fewer references will be made about the diagnostic information by groups with cognitive support than by groups without cognitive support.

References to the Base Rate

Consistent with the expectation that the diagnostic information is referred to less in groups with the problem representation tool, it is expected that more references are made to the base rate in groups provided with the tool than in those that are not.

Hypothesis 3b: References to the base rate
More references will be made about the base rate information by groups with cognitive support than by groups without cognitive support.

References to the Use of Single Information Source

From an information integration perspective, the base-rate fallacy is a direct consequence of groups not properly combining the relevant information sources.
In the extreme situations (inclusive of the base-rate fallacy), only one piece of information is utilized for generating the posterior probability. Because the problem representation tool produces a representation that takes into account both the base rate and the diagnostic information, groups provided with the tool are expected to make fewer references to the use of a single information source than those not provided with it.

Hypothesis 3c: References to the use of single information source
Fewer references will be made about the use of single information source by groups with cognitive support than by groups without cognitive support.

References to the Use of Dual Information Sources

Consistent with the expectation that the use of a single information source is referred to less in groups with the problem representation tool, it is expected that more references are made to the use of dual information sources in groups provided with the tool than in those that are not.

Hypothesis 3d: References to the use of dual information sources
More references will be made about the use of dual information sources by groups with cognitive support than by groups without cognitive support.

Procedural Comments

The presence of the problem representation tool induces a greater degree of task participation. Given a finite length of meeting, the increase in task-related deliberations necessarily implies a reduction in the communication devoted toward procedurally-related matters.
Therefore, groups provided with the tool are expected to make fewer procedural comments than those not provided with it.

Hypothesis 3e: Procedural comments
Groups provided with cognitive support will make fewer procedural comments than groups not provided with cognitive support.

3.4.2 Effect of Communication Medium

Normalized Error

Although electronic communication, as compared to verbal communication, is expected to improve interpersonal information integration by reducing process losses, information sharing among members does not seem to be the key to reducing the representativeness bias. The real bottleneck in the probability estimation task, an on-line task [Hastie and Park, 1986], appears to be associated with the intrapersonal, rather than interpersonal, dimension of information integration. This argument is consistent with the fact that the use of groups has not been found to reduce the bias [Argote et al., 1986, 1990; Brightman et al., 1983; Stasson et al., 1988]. Consequently, we expect the communication medium to have no effect on the representativeness bias.

Hypothesis 4a: Normalized error
There will be no difference in normalized error between groups using electronic communication and groups using verbal communication.

Time to Completion

Owing to the longer time it takes to type using the keyboard, as compared to speaking, groups communicating electronically are expected to take longer to complete the task than those communicating verbally.

Hypothesis 4b: Time to completion
Groups communicating electronically will take longer to complete the task than groups communicating verbally.

Perceived Task Participation

The electronic communication channel, being a “leaner” communication medium than the verbal communication channel [Rice, 1992], correspondingly contains fewer social and political cues [DeSanctis and Gallupe, 1987]. A greater portion of the communication is task-related, as opposed to socio-emotionally related [Bales, 1950].
As a result, groups communicating electronically perceive a greater degree of task participation than those communicating verbally.

Hypothesis 5a: Perceived task participation
A greater degree of task participation will be perceived in groups with electronic communication than in groups with verbal communication.

Perceived Socio-Emotional Behavior

Because of the inherent nature of the electronic communication channel, which supports relatively little socio-emotionally-related communication, the social and emotional needs of group members cannot be adequately met. In comparison to verbal communication, diplomatic handling of the rejection of ideas, opinions, and suggestions within electronic communication is less frequently accomplished. This is because, in electronic communication, it is less likely that such rejections can be followed by thorough explanations without taking an unduly long time and, by definition, it is not possible for the rejections to be smoothed by accompanying, adequate non-verbal behavior. Related to the last point, agreements in electronically-communicating groups cannot be as easily conveyed using nods.
Correspondingly, we expect groups communicating electronically to perceive a greater degree of negative socio-emotional behavior than those communicating verbally.

Hypothesis 5b: Perceived socio-emotional behavior
More negative socio-emotional behavior will be perceived in groups with electronic communication than in groups with verbal communication.

Satisfaction with Outcome

Consistent with the expectation that communication medium does not have an effect on the representativeness bias, it is expected that the degree of satisfaction with the solution is comparable between groups communicating electronically and those communicating verbally.

Hypothesis 5c: Satisfaction with outcome
There will be no difference in satisfaction with outcome between groups with electronic communication and groups with verbal communication.

Satisfaction with Process

Electronic communication, as compared to verbal communication, represents a leaner communication environment with fewer non-task cues and behaviors. Such an environment causes a greater degree of task participation but also a higher degree of negative socio-emotional behavior perceived by group members.
Correspondingly, group members are less satisfied with the process.

Hypothesis 5d: Satisfaction with process
Groups with electronic communication will be less satisfied with the process than groups with verbal communication.

Perceived Informal Leadership

Based on the GSS literature, because electronic communication is characterized by a parallel inputting mechanism, more egalitarian participation and less leadership behavior should be anticipated.

Hypothesis 5e: Perceived informal leadership
Perceived informal leadership will be lower in groups with electronic communication than in groups with verbal communication.

References to the Diagnostic Information

Within a given length of time, the amount of interaction occurring within verbal communication is greater than that within electronic communication, owing to the low speed of typing compared to speaking. Correspondingly, the numbers of references made about the different types of information sources, rules of information integration, the support tool, and meeting procedures are smaller in groups communicating electronically than in those communicating verbally.

Hypothesis 6a: References to the diagnostic information
Fewer references will be made about the diagnostic information by groups with electronic communication than by groups with verbal communication.

References to the Base Rate

Hypothesis 6b: References to the base rate
Fewer references will be made about the base rate information by groups with electronic communication than by groups with verbal communication.

References to the Use of Single Information Source

Hypothesis 6c: References to the use of single information source
Fewer references will be made about the use of single information source by groups with electronic communication than by groups with verbal communication.

References to the Use of Dual Information Sources

Hypothesis 6d: References to the use of dual information sources
Fewer references will be made about the use of dual information sources by groups with electronic
communication than by groups with verbal communication.

References to the Problem Representation Tool

Hypothesis 6e: References to the problem representation tool
Groups communicating electronically will make fewer references to the problem representation tool than groups communicating verbally.

Procedural Comments

Hypothesis 6f: Procedural comments
Groups communicating electronically will make fewer procedural comments than groups communicating verbally.

Table 3.2 summarizes the above hypotheses dealing with representativeness bias.

Table 3.2 Summary of hypotheses for representativeness bias

Dependent Variable                   Cognitive Support (CS)   Electronic Comm. (EC)
                                                              vs. Verbal Comm. (VC)
Normalized error                     H1a: CS < no CS          H4a: EC = VC
Time to completion                   H1b: CS > no CS          H4b: EC > VC
Perceived task participation         H2a: CS > no CS          H5a: EC > VC
Socio-emotional behavior             H2b: CS > no CS          H5b: EC > VC
Satisfaction with outcome            H2c: CS > no CS          H5c: EC = VC
Satisfaction with process            H2d: CS < no CS          H5d: EC < VC
Perceived informal leadership        H2e: CS > no CS          H5e: EC < VC
References to diagnostic info        H3a: CS < no CS          H6a: EC < VC
References to base rate              H3b: CS > no CS          H6b: EC < VC
References to the use of single
  information source                 H3c: CS < no CS          H6c: EC < VC
References to the use of dual
  information sources                H3d: CS > no CS          H6d: EC < VC
References to the problem
  representation tool                N/A                      H6e: EC < VC
Procedural comments                  H3e: CS < no CS          H6f: EC < VC

3.5 Hypotheses: Availability Bias

This section presents hypotheses on the effect of cognitive support and communication medium on judgment processes and outcomes pertaining to the availability bias.

3.5.1 Effect of Cognitive Support

Availability Bias

Availability bias is a consequence of the limitations of the human information processing system. Events which are more available to memory are also judged as more important or as occurring more frequently. The primary function served by the electronic brainstorming tool in this situation is to enlarge the information search scope of group members and the group.
The tool includes structuring mechanisms that work around the limitations of the human information processing system [Nagasundaram and Dennis, 1993]. The net result is a better and more thorough information search and integration, thus leading to reduced availability bias.

Hypothesis 7a: Availability bias
Availability bias will be lower in groups provided with cognitive support than in groups not provided with cognitive support.

Time to Completion

Since electronic brainstorming forms an integral part of a supported group’s meeting, groups provided with the tool are expected to take longer to finish the task than those not provided with it.

Hypothesis 7b: Time to completion
Groups provided with cognitive support will take longer to complete the task than groups not provided with cognitive support.

Total Scores of High-Availability Items

The increase in information search scope is marginal for those issues or items which are highly available to memory. With or without the tool, these items are judged as important or as occurring frequently in a relative sense. Therefore, we expect no difference in the total scores of high-availability items between groups provided with the electronic brainstorming tool and those not provided with it.

Hypothesis 7c: Total scores of high-availability items
There will be no difference in the total scores of high-availability items between groups provided with cognitive support and groups not provided with cognitive support.

Total Scores of Low-Availability Items

The impact of the electronic brainstorming tool is most felt in items of low availability to memory. Information related to these items is usually not well searched or integrated.
The increase in information search scope due to the use of the tool leads to better understanding and judgment concerning these items.

Hypothesis 7d: Total scores of low-availability items
Total scores of low-availability items will be higher in groups provided with cognitive support than in groups not provided with cognitive support.

Perceived Task Participation

The use of the electronic brainstorming tool causes a larger number of ideas to be generated about the issues at hand. This cognitive process, experienced by group members individually, increases their engagement with the issues at hand and the task itself. Consequently, groups provided with the tool perceive a higher degree of task participation than those not provided with it.

Hypothesis 8a: Perceived task participation
A greater degree of task participation will be perceived in groups provided with cognitive support than in groups not provided with cognitive support.

Perceived Socio-Emotional Behavior

The greater awareness caused by the increased set of ideas generated also creates more opportunity for potential conflicts to occur, since the brainstorming process allows individual group members to input ideas without the consent or agreement of others, and the discussion of such ideas is delayed until after the brainstorming session is complete.
Consequently, we expect the perception of negative socio-emotional behavior to be higher in groups provided with the tool than in those not provided with it.

Hypothesis 8b: Perceived socio-emotional behavior
More negative socio-emotional behavior will be perceived in groups provided with cognitive support than in groups not provided with cognitive support.

Satisfaction with Outcome

Consistent with the expectation that the electronic brainstorming tool leads to a reduction in the availability bias, it is expected that groups provided with the tool are more satisfied with the outcome than those not provided with it.

Hypothesis 8c: Satisfaction with outcome
Groups with cognitive support will be more satisfied with the judgment outcome than groups without cognitive support.

Satisfaction with Process

The electronic brainstorming tool leads to a deeper exploration of the issues at hand. The cognitive support causes not only a greater degree of task participation, but also a higher degree of negative socio-emotional behavior perceived by group members. Correspondingly, group members are less satisfied with the process.

Hypothesis 8d: Satisfaction with process
Groups with cognitive support will be less satisfied with the process than groups without cognitive support.

Perceived Informal Leadership

A greater degree of leadership behavior can be anticipated in a more complex situation than in a less complex one [Stogdill, 1974]. The electronic brainstorming tool brings about a surge in ideas, whose meanings need to be interpreted and whose significance needs to be examined.
These new activities call for a greater need of coordination among group members, thus creating the conditions for leadership behavior to happen.

Hypothesis 8e: Perceived informal leadership
Perceived informal leadership will be higher in groups with cognitive support than in groups without cognitive support.

Total Number of Positive Ideas Mentioned During Discussion

Because of the design of the electronic brainstorming tool, groups provided with the tool undergo a dedicated, rigorous brainstorming session, during which group members produce ideas relating to the issues at hand. Subsequently, during the group discussion, these groups are expected to produce fewer ideas than those not provided with the tool.

Hypothesis 9a: Total number of positive ideas mentioned during discussion
Fewer positive ideas will be mentioned during discussion by groups provided with cognitive support than by groups not provided with cognitive support.

Total Number of Negative Ideas Mentioned During Discussion

The brainstorming session facilitated by the electronic brainstorming tool solicits primarily positive ideas, i.e., ideas contributing toward the importance of the issues at hand.
Therefore, the tool is expected to have no effect on group members’ mentioning of negative ideas (i.e., ideas refuting the importance of the issues) during the group discussion subsequent to the brainstorming session.

Hypothesis 9b: Total number of negative ideas mentioned during discussion
There will be no difference in the total number of negative ideas mentioned during discussion between groups provided with cognitive support and groups not provided with cognitive support.

Number of Ideational Connections with Ideas Mentioned During Earlier Discussion

Because groups provided with the electronic brainstorming tool mention fewer ideas during discussion than those not provided with it, the number of ideational connections arising in later deliberations is also smaller in the supported groups.

Hypothesis 9c: Number of ideational connections with ideas mentioned during earlier discussion
Fewer ideational connections will be made with ideas mentioned during earlier discussion by groups provided with cognitive support than by groups not provided with cognitive support.

Total Number of Specific Proposals of Estimates

Because the electronic brainstorming tool provides the group with a dedicated and uniquely structured brainstorming session, group members will have explored the issues to some extent by the time they enter the group discussion, and are thus in a much better position to generate and consider specific proposals of estimates. In addition, supported groups need not spend as much time as unsupported groups on producing ideas during the group discussion. Therefore, we expect the total number of specific proposals of estimates to be higher in groups provided with the tool than in those not provided with it.

Hypothesis 9d: Total number of specific proposals of estimates
More specific proposals of estimates will be found in groups with cognitive support than in groups without cognitive support.

Procedural Comments

The electronic brainstorming tool induces a greater degree of task participation.
Given a finite length of meeting, the increase in task-related deliberations necessarily implies a reduction in the communication devoted toward procedurally-related matters. Therefore, groups provided with the tool are expected to make fewer procedural comments than those not provided with it.

Hypothesis 9e: Procedural comments
Fewer procedural comments will be made by groups provided with cognitive support than by groups not provided with cognitive support.

3.5.2 Effect of Communication Medium

Availability Bias

Past studies have shown that when individuals get together and form a group, the availability bias can be somewhat reduced [Sniezek and Henry, 1989; Stasson et al., 1988], suggesting that the bias can be dealt with by addressing the interpersonal aspect of information integration. It is well understood that group interactions are characterized by process losses [Nunamaker et al., 1991]. The GSS literature has provided evidence that electronic communication is capable of addressing some major process losses, and of offering process gains.
By improving the interpersonal information integration process, electronic communication, compared to verbal communication, is expected to lead to a reduction in the availability bias.

Hypothesis 10a: Availability bias
Availability bias will be lower in groups with electronic communication than in groups with verbal communication.

Time to Completion

Owing to the longer time it takes to type using the keyboard, as compared to speaking, groups communicating electronically are expected to take longer to complete the task than those communicating verbally.

Hypothesis 10b: Time to completion
Groups with electronic communication will take longer to complete the task than groups with verbal communication.

Total Scores of High-Availability Items

Because of their already high availability to memory even with verbal communication, high-availability items are expected to display no significant change in their corresponding judgments with electronic communication. In other words, with either communication medium, these items are judged as important or as occurring frequently in a relative sense. Therefore, we expect no difference in the total scores of high-availability items between groups communicating electronically and those communicating verbally.

Hypothesis 10c: Total scores of high-availability items
There will be no difference in the total scores of high-availability items between groups with electronic communication and groups with verbal communication.

Total Scores of Low-Availability Items

The impact of electronic communication (versus verbal communication) is most felt in items of low availability to memory. Information related to these items is usually not well searched or integrated.
The improvement in interpersonal information integration due to electronic communication leads to better understanding and judgment concerning these items.

Hypothesis 10d: Total scores of low-availability items
Total scores of low-availability items will be higher in groups with electronic communication than in groups with verbal communication.

Perceived Task Participation
The electronic communication channel, being a "leaner" communication medium than the verbal channel [Rice, 1992], correspondingly contains fewer social and political cues [DeSanctis and Gallupe, 1987]. A greater portion of the communication is task-related, as opposed to socio-emotional [Bales, 1950]. As a result, groups communicating electronically perceive a greater degree of task participation than those communicating verbally.

Hypothesis 11a: Perceived task participation
A greater degree of task participation will be perceived in groups with electronic communication than in groups with verbal communication.

Perceived Socio-Emotional Behavior
Because of the inherent nature of the electronic communication channel, which supports relatively little socio-emotional communication, the social and emotional needs of group members cannot be adequately met. In comparison to verbal communication, diplomatic handling of the rejection of ideas, opinions, and suggestions is less frequently accomplished within electronic communication. This is because, in electronic communication, it is less likely that such rejections can be followed by thorough explanations without taking an unduly long time and, by definition, it is not possible for the rejections to be smoothed by accompanying, adequate non-verbal behavior. Related to the last point, agreements in electronically communicating groups cannot be as easily conveyed using nods.
Correspondingly, we expect groups communicating electronically to perceive a greater degree of negative socio-emotional behavior than those communicating verbally.

Hypothesis 11b: Perceived socio-emotional behavior
More negative socio-emotional behavior will be perceived in groups with electronic communication than in groups with verbal communication.

Satisfaction with Outcome
Consistent with the expectation that electronic communication leads to a reduction in the availability bias, we expect groups communicating electronically to be more satisfied with the outcome than those communicating verbally.

Hypothesis 11c: Satisfaction with outcome
Groups with electronic communication will be more satisfied with the outcome than groups with verbal communication.

Satisfaction with Process
Electronic communication, as compared to verbal communication, represents a leaner communication environment with fewer non-task cues and behavior. Such an environment produces a greater degree of task participation but also a higher degree of negative socio-emotional behavior perceived by group members. Correspondingly, group members are less satisfied with the process.

Hypothesis 11d: Satisfaction with process
Groups with electronic communication will be less satisfied with the process than groups with verbal communication.

Perceived Informal Leadership
Based on the GSS literature, because electronic communication is characterized by a parallel-inputting mechanism, more egalitarian participation and less leadership behavior should be anticipated.

Hypothesis 11e: Perceived informal leadership
Perceived informal leadership will be lower in groups with electronic communication than in groups with verbal communication.

Total Number of Positive Ideas Mentioned during Discussion
Within a given length of time, the amount of interaction occurring within verbal communication is greater than that within electronic communication, owing to the low speed of typing compared to speech.
Correspondingly, the numbers of ideas produced during discussion, ideational connections made during discussion, and references made to meeting procedures are smaller in groups communicating electronically than in those communicating verbally.

Hypothesis 12a: Total Number of Positive Ideas Mentioned during Discussion
Fewer positive ideas will be mentioned during discussion by groups with electronic communication than by groups with verbal communication.

Total Number of Negative Ideas Mentioned during Discussion
Hypothesis 12b: Total Number of Negative Ideas Mentioned during Discussion
Fewer negative ideas will be mentioned during discussion by groups with electronic communication than by groups with verbal communication.

Number of Ideational Connections with Ideas Generated during Electronic Brainstorming
Hypothesis 12c: Number of ideational connections with ideas generated during electronic brainstorming
Fewer ideational connections will be made with ideas generated during electronic brainstorming by groups with electronic communication than by groups with verbal communication.

Number of Ideational Connections with Ideas Mentioned during Earlier Discussion
Hypothesis 12d: Number of ideational connections with ideas mentioned during earlier discussion
Fewer ideational connections will be made with ideas mentioned during earlier discussion by groups with electronic communication than by groups with verbal communication.

Total Number of Specific Proposals of Estimates
Notwithstanding the argument presented for Hypothesis 12a, we expect groups communicating electronically to generate more specific proposals of estimates than those communicating verbally. The exchange of specific proposals of estimates mimics a negotiation dance.
Because of the lower social presence of electronic communication as compared to verbal communication [Rice, 1992], the former provides an environment which is less inhibitory and more conducive for group members to make specific suggestions about estimates in the form of exchanges.

Hypothesis 12e: Total number of specific proposals of estimates
More specific proposals of estimates will be found in groups with electronic communication than in groups with verbal communication.

Procedural Comments
Following the argument presented for Hypothesis 12a, we expect groups communicating electronically to make fewer procedural comments than those communicating verbally.

Hypothesis 12f: Procedural comments
Fewer procedural comments will be made by groups with electronic communication than by groups with verbal communication.

Table 3.3 summarizes the above hypotheses dealing with availability bias.

Table 3.3 Summary of hypotheses for availability bias

Dependent Variable                        | Cognitive Support (CS) | Electronic Comm. (EC) vs. Verbal Comm. (VC)
Availability Bias                         | H7a: CS < no CS        | H10a: EC < VC
Time to Completion                        | H7b: CS > no CS        | H10b: EC > VC
Scores of high-avail. items               | H7c: CS = no CS        | H10c: EC = VC
Scores of low-avail. items                | H7d: CS > no CS        | H10d: EC > VC
Perceived Task Participation              | H8a: CS > no CS        | H11a: EC > VC
Socio-emotional behavior                  | H8b: CS > no CS        | H11b: EC > VC
Satisfaction with Outcome                 | H8c: CS > no CS        | H11c: EC > VC
Satisfaction with Process                 | H8d: CS < no CS        | H11d: EC < VC
Perceived informal leadership             | H8e: CS > no CS        | H11e: EC < VC
No. of positive ideas                     | H9a: CS < no CS        | H12a: EC < VC
No. of negative ideas                     | H9b: CS = no CS        | H12b: EC < VC
Ideational connections w/ brainstorming   | N/A                    | H12c: EC < VC
Ideational connections w/ discussion      | H9c: CS < no CS        | H12d: EC < VC
No. of times a rating suggested           | H9d: CS > no CS        | H12e: EC > VC
Procedural comments                       | H9e: CS < no CS        | H12f: EC < VC

CHAPTER 4
RESEARCH METHOD

This chapter describes a laboratory experiment that was conducted to test the hypotheses proposed in the previous chapter.
The research is presented in terms of the experimental design, subjects, research tasks, group support system, variables, and procedures.

The laboratory experiment setting, which is widely used in group research [Foschi and Lawler, 1994], has some particular strengths [Benbasat, 1990; McGrath, 1984]. The effects of independent variables can be precisely measured, presumed causes can be manipulated and controlled, and strong inferences can be drawn about cause-and-effect relationships. This investigation takes advantage of these strengths by controlling and manipulating two variables that are assumed to significantly influence group judgment processes and outcomes, and by assessing several dependent variables that measure such effects.

4.1 Research Design

A two-by-two factorial design was employed. Cognitive Support (present vs. absent) and Communication Medium (electronic vs. verbal) were the independent variables in this design (see Figure 4.1).

Groups assigned to cell 1 in Figure 4.1 were provided with both the cognitive support tool and the electronic-communication facility of GSS. Groups assigned to cell 2 were provided with the cognitive support tool, but were not allowed to use GSS for communication purposes. Groups assigned to cell 3 were provided with the electronic-communication facility but not the cognitive support tool. Finally, in the baseline condition, i.e., cell 4, groups were not supported with GSS.

                                 COMMUNICATION MEDIUM
                         Electronic                    Verbal
COGNITIVE   Present      Cell 1:                       Cell 2:
SUPPORT                  Cognitive Support +           Cognitive Support +
                         Electronic Communication      Verbal Communication
            Absent       Cell 3:                       Cell 4:
                         No Cognitive Support +        No Cognitive Support +
                         Electronic Communication      Verbal Communication

Figure 4.1 Experimental design

All meetings were conducted face-to-face. Each GSS meeting was coordinated/assisted by a technical facilitator. Each group was asked to work on two tasks, the Probability Estimation Task and the Job Evaluation Task.
The order of the tasks was counterbalanced [Keppel, 1982]. Each group consisted of three members. Two reasons contributed to the use of three-person groups. First, it increases the operational feasibility of the study. Second, Watson's [1987] study shows that varying group size between three and four did not influence the effect of GSS use.

4.2 Subjects

Subjects for the experiment, who volunteered for their participation, were graduate and undergraduate students of the University of British Columbia (with the exception of three who came from two other major universities in North America). They were paid $30 upon the completion of their participation. Incentives to achieve good judgments were provided by making rewards contingent upon performance [Tripathi, 1991]. The best-performing groups in each experimental condition received additional cash rewards of $50, $30, or $20, depending on group performance.

The advantages of using groups based on student subjects are twofold. First, they are available in sufficient numbers to provide adequate statistical power. This dissertation is based upon the performance of 40 groups involving 120 students. Rarely are sufficient numbers of real problem-solving groups available to conduct research with this level of statistical power. Second, students are relatively homogeneous in terms of their age, business experience, intelligence, and knowledge. This homogeneity reduces variance in group performance and subsequently increases the statistical power of the results.

Gender composition was not a controlled variable in this experiment. Correspondingly, mixed groups as well as same-gender groups were included.

4.3 Experimental Tasks

To examine the two important judgment biases of interest to this research, each group was asked to perform two tasks: the probability estimation task (for the representativeness bias) and the job evaluation task (for the availability bias).
Details of the tasks are documented together with other experimental materials in Appendix B. A pilot test was conducted before the actual experiment to ensure that these tasks were engaging to subjects and could be completed within a reasonable period of time.

4.3.1 Probability Estimation Task

There are two phases to this task. In the first phase, group members were presented individually with ten personality profiles of a sample of engineers and lawyers, and asked to identify individually the profession in each case. The results were manipulated to show that the members, as a group, were correct 80% of the time in identifying an engineer or lawyer, i.e., when the individual presented was an engineer, the group correctly identified the person as an engineer 80% of the time; when the individual presented was a lawyer, the group also correctly identified the person as a lawyer 80% of the time.[4] Subsequently, the group was presented with an eleventh personality profile belonging to "Chris", said to be chosen at random from the sample. This description of Chris, adapted from [Kahneman and Tversky, 1973] and [Tversky and Kahneman, 1980], is clearly biased toward that of an engineer rather than a lawyer. Group members were then asked to discuss and decide as a group whether Chris is an engineer or a lawyer. As expected, Chris was identified as a lawyer by all participating groups except one. The exception has been discarded from the data analysis.

[4] To reduce the risk of subjects finding out about this manipulation prior to their participation in the experiment, the accuracy figure was alternated between 80% and 75%, i.e., half of the groups received the accuracy figure of 80%, while the other half received 75%.

In the second phase, subjects were further told that the sample from which the descriptions were randomly chosen consists of the following composition: 15 engineers and 85 lawyers. This constitutes the base-rate information.
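Given the stated base rate (15 engineers, 85 lawyers) and the group's 80% identification accuracy, the normatively correct answer follows from Bayes' rule. The following minimal sketch is our own illustration, not code from the thesis:

```python
# Posterior probability that the group's identification is correct,
# given the base rate of the identified profession and the group's
# diagnostic accuracy (80% correct, 20% misidentifications).

def posterior(choice_base_rate, accuracy):
    """P(identified profession is correct | group identified it)."""
    hit = choice_base_rate * accuracy                      # true identifications
    false_alarm = (1 - choice_base_rate) * (1 - accuracy)  # misidentifications
    return hit / (hit + false_alarm)

# If the group identified Chris as an engineer (base rate 0.15):
print(round(posterior(0.15, 0.80), 3))  # 0.414
# If the group identified Chris as a lawyer (base rate 0.85):
print(round(posterior(0.85, 0.80), 3))  # 0.958
```

Subjects who neglect the base rate tend to answer with the diagnostic accuracy (0.80) alone, which is the representativeness bias the task is designed to detect.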
Following this, the group was asked to estimate the probability that their earlier choice (about Chris' profession) is correct.

The structure of this problem is similar to the "cab problem" used by Tversky and Kahneman [1982]. In short, the cab problem, which was administered to individuals rather than groups, involves a witness of a car accident testifying to a court in its attempt to decide whether the cab causing the accident was blue or green. In relation to the current task, the issue of the group's accuracy in identifying professions parallels that of the court witness' accuracy in identifying cabs. In fact, both the base rate and the diagnostic probabilities were held constant across the two problems (with the exception pointed out in the previous footnote). On the other hand, the present task also encompasses personality profiles, which help to increase the apparent "richness" of the task and have been used in many base-rate studies (e.g., [Argote et al., 1986, 1990; Kahneman and Tversky, 1973; Tversky and Kahneman, 1980]).

4.3.2 Job Evaluation Task

Background
The Position Analysis Questionnaire (PAQ) is a rigorously researched and the most popular job analysis instrument (for the development and history of the PAQ, see [McCormick, 1976; McCormick, Jeanneret, and Mecham, 1972]). It is used to make discriminations between and among jobs, identify aptitude and ability requirements, identify job-related experience requirements, identify job interest requirements, develop job applicant interview criteria, develop performance appraisal standards, conduct job evaluation studies, and create career path models [Mecham, McCormick, and Jeanneret, 1977]. The PAQ consists of 187 "worker-oriented" items and 7 items concerning compensation.
In analyzing a job with the PAQ, the analyst judges the relevance to the job of each job element using an appropriate rating scale (such as importance to the job or frequency of occurrence).

Forbringer [1991] examined whether job perceptions as measured by the PAQ were affected by the availability heuristic. The availability of PAQ items was evaluated by 202 subjects using four measures of availability: familiarity, meaningfulness, vividness, and ease of example generation. This evaluation was done in the absence of a specific job context provided to the subjects. Based on these availability ratings, items were placed in one of three categories: low, medium, or high. Independent samples of subjects, separate from the previous group, analyzed four jobs (insurance underwriter, building maintenance man, insurance claims superintendent, and secretary) using the PAQ. It was hypothesized and found that PAQ items which were rated as being more available produced significantly different ratings than those rated as being less available, irrespective of jobs. The effect of the availability heuristic on job perceptions was thus strongly established.

The Task
For the current study, subjects were asked to estimate (by rating) the frequency or importance of performing eight job-related activities, described in eight PAQ items, by a secretary. Four of these items are "high availability" items, and the others "low availability" items, adapted from Forbringer's [1991] measure.[5] Forbringer's research shows that, owing to availability bias, activities which are highly available to memory are rated to be significantly more frequently performed by a secretary (and three other jobs) than those that have low availability.
By including both high- and low-availability items in our task, we sought to show that the use of GSS tools will, through boosting the availability of (originally) low-availability items, reduce the erroneously large difference in ratings between high- and low-availability items.

Subjects were asked to perform the task as a group. Agreement on ratings by group members was reached through group discussions.

For each of the eight PAQ items, groups assigned to the "cognitive support" condition spent the first three minutes using the electronic brainstorming tool for idea generation, and the next three minutes (maximum) on discussion to reach a group agreement on the rating. On the other hand, for each item, groups assigned to the "no cognitive support" condition could spend the entire six minutes (maximum) on discussion. After finishing one item, the group moved to the next until all eight were completed. The discussion was conducted either electronically or verbally, depending on the communication condition the group was assigned to.

[5] The availability scores of the two groups of items were found to be significantly different (t = 8.38; p < .001), in the expected direction, in a manipulation check involving some 20 subjects who did not participate in the actual experiment. Items 1, 4, 5, and 7 are low-availability items, while items 2, 3, 6, and 8 are high-availability items. The detailed task is contained in Appendix B.

4.4 Group Support System

The group support system (GSS) used in this study was based on the Meeting Place system developed by the MIS Division of the University of British Columbia. In particular, the electronic brainstorming facility and the electronic communication facility were directly adapted from the system.

4.4.1 Cognitive Support

Two different types of cognitive support were provided for the two tasks.
The problem representation tool was used in the case of the probability estimation task, while the electronic brainstorming tool was used in the job evaluation task.

Problem Representation Tool
The problem representation tool (PRT), incorporated into the GSS in this research, was adapted from Cole [1988, 1989] and Roy and Lerch [1993]. The tool consists of a simple graphical display, known as a "probability map", representing a Bayesian problem in a non-causal fashion (see Figure 4.2).

The map consists of a total of 100 boxes, which are differentiated both in terms of base rate and diagnostic information. For example, Figure 4.2 shows a situation where the base rate is 85-15 (i.e., 85% lawyers vs. 15% engineers) and the diagnostic information is such that correct identifications based on personality profiles are made 80% of the time (thus errors occur 20% of the time). Therefore, 85 of the 100 boxes are colored white and the other 15 boxes are shaded. Out of the 85 boxes representing lawyers, 17 are crossed, denoting the fact that 20% of the 85 are (erroneously) identified as engineers. Also, out of the 15 boxes representing engineers, 12 are crossed, since 80% of the 15 are (correctly) identified as engineers.

[Figure 4.2 Problem representation tool. Legend: white box = lawyer; shaded box = engineer; crossed box = identified as engineer.]

Consequently, if one is interested in the probability that a certain personality profile has been correctly identified as an engineer, such a probability can be easily derived by differentiating among the different types of boxes.

In the current implementation, the colors of the graph were kept at black and white in order to be consistent with previous studies which made use of the probability map. This prevents the effect of color from becoming a potential confounding factor [Benbasat and Dexter, 1985].

Electronic Brainstorming Tool
The electronic brainstorming tool (BBS) allows group members to generate ideas simultaneously about a given topic. It works as follows.
First, the facilitator initiates a session by typing in the topic and sending out electronic "forms" to group members. Each group member receives one form at a time, on which he or she can enter one idea.[6] The participant then sends back the form once it is completed, receives another form from the system, and so on. Typically, the form contains a sample of the ideas already generated by the group. These seed ideas may serve to prompt further ideas. Ideas are not identified by their originator; this anonymous procedure reduces process losses due to "conformance pressure" and "evaluation apprehension", although it may contribute to "free riding" [Nunamaker et al., 1991]. During this process, no explicit coordination is required among group members. Each works at his or her own pace. The process can be terminated by the facilitator after a specified time period. The facilitator then broadcasts the results (i.e., the complete list of ideas generated) to the participants.

[6] Subjects were told to generate ideas on instances where the concerned item is encountered or relevant in the secretary's job.

At the implementation level, the paradigm implemented for brainstorming involves creating N + 1 "electronic index cards", where N is the number of participants in the session. Conceptually, the session initiator (i.e., the facilitator) holds onto the extra card. Any time a participant returns a card with another idea, that card is exchanged with the extra one held by the facilitator. This process continues until the allotted time is up.

4.4.2 Electronic Communication

The communication facility provided by the group support system is similar to an ordinary e-mail facility. Each group member may compose and send a message to the rest of the group to discuss the task. The parallel communication feature allows group members to send a message at any time during the discussion; messages are broadcast to the entire group.
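The N + 1 "electronic index card" paradigm described for the brainstorming tool in Section 4.4.1 can be sketched as a small simulation. This is our own illustration with hypothetical names; it is not the Meeting Place implementation:

```python
# N participants each hold one card; one spare card sits in the pool.
# Returning a card with a new idea swaps it for another card from the
# pool, so a received card may already carry other members' seed ideas.
from collections import deque

def run_session(n_participants, contributions):
    pool = deque([[] for _ in range(n_participants + 1)])   # N+1 empty cards
    held = [pool.popleft() for _ in range(n_participants)]  # one card each
    for member, idea in contributions:       # e.g., (0, "sorts the mail")
        held[member].append(idea)            # write one idea on the held card
        pool.append(held[member])            # return the card to the pool...
        held[member] = pool.popleft()        # ...and receive another in exchange
    cards = list(pool) + held                # facilitator collects all cards
    return [idea for card in cards for idea in card]

ideas = run_session(3, [(0, "a"), (1, "b"), (0, "c")])
print(sorted(ideas))  # ['a', 'b', 'c']
```

Note that after member 0's first exchange, member 1's returned card (carrying idea "b") can circulate back to member 0, which is how seed ideas propagate without any explicit coordination.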
Verbal communication was not allowed for groups assigned to the electronic-communication condition, though the group members could see each other.

4.5 Dependent Measures

For each experimental task, the dependent variables investigated can be grouped into three categories. The first category contains objective measures of judgment outcomes. The second category consists of perceptual measures of judgment outcomes, which were obtained with the use of a questionnaire instrument. The third category concerns process measures, which involved protocol analysis of video tapes of meetings (for verbal-communication groups) and computer log files (for electronic-communication groups). The video tapes were transcribed; the video tapes and log files were subsequently coded and then analyzed.

4.5.1 Probability Estimation Task

Objective Measures of Outcomes
The quality of the decision or judgment is the key outcome measure. For the specific context of this task, it was assessed using a "normalized error" measure. "Normalized error" is defined as the absolute difference between the group's answer and the correct answer provided by the Bayesian model, normalized by the magnitude of the correct answer. "Time to completion" was also measured to provide an understanding of the speed or efficiency of the group in performing the task.

Perceptual Measures of Outcomes
Green and Taber's [1980] instrument, employed previously in GSS research (e.g., [Watson, DeSanctis, and Poole, 1988]), was used to measure five perceptual constructs. The five measures are: "perceived task participation"; "perceived socio-emotional behavior"; "satisfaction with outcome"; "satisfaction with process"; and "perceived informal leadership".
Each variable was measured using multiple questionnaire items with five-point Likert scales.

Process Measures
Whereas it is not possible to directly detect the information source that an individual employs at a particular instant of time, outward references made, either verbally or electronically (through the use of GSS), provide satisfactory surrogates [Argote et al., 1990; Ericsson and Simon, 1980; Payne, 1976]. Correspondingly, content analysis was used to measure the process variables [Todd and Benbasat, 1987].

The following outlines the conceptual method used in analyzing the process data. A framework is first described, involving the concepts of procedural vs. substantive group communication. From this framework two coding schemes are derived.

Even at the highest levels of abstraction, group communication has been differentiated using different classifications. The usefulness of each classification depends on the purpose of analysis and the target of investigation. A popular differentiation by Bales [1950] sees group communication as basically consisting of task-related and socio-emotionally related communication. Another classification, particularly apt for task-oriented groups (i.e., groups formed for the purpose of solving a single task or multiple tasks), distinguishes between procedural and substantive communication (e.g., [Putnam, 1981]).[7] The latter classification is useful for the purpose of the current study, since it not only provides an understanding of the information-processing aspect, but also allows the capturing of procedural comments made by group members.

Procedural statements differ from substantive, or information-processing, statements by commenting upon ongoing interaction, i.e., these are statements about what the group is doing, where it is going, and what it should do.

[7] This distinction corresponds to the "activity tracks" identified by Poole [1983b, p. 326] as task process activities (procedural) vs. topical focus (substantive).
They are indeed meta-messages that direct the mechanics of group activity through reflecting upon, integrating, and coordinating group talk with past behaviors and future contingencies. Procedural maneuvers also function as means of controlling unwieldy or deviant members [Penland and Fine, 1974], and as strategies used during struggles for leadership emergence [Schultz, 1974, 1978; Bormann, 1975]. Thus, procedural messages have also been used for understanding influence attempts in small groups [Zigurs, 1988].

The approach adopted in this study for understanding the group interaction process consists of a coding scheme with two generic components, the procedural component and the substantive component. Putnam's [1981] coding scheme was adapted here for examining procedural communication. Out of its original ten categories, five focus on procedural matters (refer to Appendix A for the detailed categories). From an analysis perspective, the basic role served by the procedural interaction is to provide a comparison against components of the substantive interaction. Consequently, Putnam's five categories were coded separately but eventually combined to produce the variable "procedural comments".

For understanding substantive communication, the coding scheme used by Argote et al. [1990] in their base-rate research was adapted here (refer to Appendix A for the detailed categories). This coding system allows an understanding of how the support provided affected the type of information sources group members referred to, as well as the information integration rules they used, thus providing explanations for the relationship between support provided and outcome variables.
Five variables were produced from this coding scheme: "references to the diagnostic information"; "references to the base rate"; "references to the use of a single information source"; "references to the use of dual information sources"; and "references to the problem representation tool".

Two coders coded the interaction processes based on the coding schemes discussed. The inter-rater reliabilities for the substantive communication and procedural communication are 87% and 81% respectively, calculated via Scott's pi [Scott, 1955], which takes into account the possibility of chance agreement.

4.5.2 Job Evaluation Task

Objective Measures of Outcomes
"Availability bias" was measured as the difference in frequency or importance scores between the four high-availability items and the four low-availability items. "Total scores of high-availability items" was obtained by summing the scores of the four high-availability items. Similarly, "total scores of low-availability items" was obtained by summing the scores of the four low-availability items. "Time to completion" was measured as the total time taken for the group to complete the task.

Perceptual Measures of Outcomes
As in the probability estimation task, Green and Taber's [1980] instrument was also used here to measure perceptions of the group. The five measures are: "perceived task participation"; "perceived socio-emotional behavior"; "satisfaction with outcome"; "satisfaction with process"; and "perceived informal leadership". Each variable was measured using multiple questionnaire items with five-point Likert scales.

Process Measures
The approach used for the interaction analysis here parallels that used for the probability estimation task.
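The chance-corrected agreement statistic behind the reliability figures reported above, Scott's pi [Scott, 1955], compares observed agreement with the agreement expected from the coders' pooled category proportions. A minimal sketch of our own (the category labels are hypothetical, not the actual coding categories):

```python
# Scott's pi for two coders over the same sequence of coded units:
# pi = (P_o - P_e) / (1 - P_e), where P_e pools both coders' labels.
from collections import Counter

def scotts_pi(coder_a, coder_b):
    n = len(coder_a)
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n  # observed agreement
    pooled = Counter(coder_a) + Counter(coder_b)             # both coders' labels
    p_e = sum((c / (2 * n)) ** 2 for c in pooled.values())   # chance agreement
    return (p_o - p_e) / (1 - p_e)

a = ["procedural", "substantive", "procedural", "substantive"]
b = ["procedural", "substantive", "substantive", "substantive"]
print(round(scotts_pi(a, b), 3))  # 0.467
```

Unlike raw percent agreement, pi discounts the agreement two coders would reach by chance alone, which is why it is the appropriate reliability statistic here.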
Because the goal in the job evaluation task is to produce a rating of importance for each of eight items with respect to the job of secretary, substantive communication here basically concerns four aspects: (1) positive ideas to corroborate the importance of the item with respect to the job; (2) negative ideas to refute the importance of the item with respect to the job; (3) ideational connection [Sambamurthy et al., 1993], where an opinion is expressed based on, or associated with, one or more ideas generated earlier; and (4) stating a rating. Correspondingly, the coding scheme used encompasses these aspects (see Appendix A for details of the categories). This scheme allows an understanding of how the provided support affected the scope of the information search during the discussion, as well as a sense of how preoccupied the group members were with the exchange of ratings. Five variables were produced from this coding scheme: "number of ideas mentioned during discussion in support of importance of items" (operationalizing "number of positive ideas mentioned during discussion" from Chapter 3); "number of ideas mentioned during discussion in refutation of importance of items" (operationalizing "number of negative ideas mentioned during discussion" from Chapter 3); "number of ideational connections with ideas generated during electronic brainstorming"; "number of ideational connections with ideas generated during earlier discussion"; and "total number of times a specific rating was suggested" (operationalizing "total number of specific proposals of estimates" from Chapter 3).

Two coders coded the interaction processes based on the coding schemes discussed. The inter-rater reliabilities for the substantive communication and procedural communication are 91% and 84% respectively, calculated via Scott's pi [Scott, 1955], which takes into account the possibility of chance agreement.

4.6 Procedure

A pilot test was conducted to ensure adequacy of the procedure.
The basic steps encompassed in this experimental procedure are: (1) training, (2) task performance, and (3) response to a questionnaire. The exact procedure used for any group depends on the experimental condition the group belonged to. For example, "cognitive support" groups received training on the use of the problem representation tool and the electronic brainstorming tool, whereas "no cognitive support" groups did not. Also, "electronic communication" groups went through a hands-on session on the use of the e-mail facility, while "verbal communication" groups did not. The remaining difference lies largely in the fact that "cognitive support" groups were provided with the support during their solving of the tasks (vs. no support provided for the "no cognitive support" groups), and "electronic communication" groups communicated electronically during discussions (vs. verbally for "verbal communication" groups). Consequently, we shall present in the following the generic procedural components adopted by one experimental condition; components applicable in the other conditions can be easily derived.

The following procedural components, in the sequence presented, were used for those groups that had cognitive support, used electronic communication, and performed the probability estimation task before the job evaluation task:

• Subjects listened to a standard introductory script read out aloud by the administrator of the experiment.
• Subjects were asked to fill in a background questionnaire to solicit information such as level of experience in group work, age, etc.
• Training was provided on the use of the electronic communication facility.
• Training was provided on the use of the problem representation tool.
• The "cab problem" [Tversky and Kahneman, 1982] was administered as a training task for the probability estimation task (maximum 10 minutes).[8]
• Training was provided on the use of the electronic brainstorming tool.
• A training task for the job evaluation task, consisting of two items, was administered
with the job target as “insurance underwriter”. For each item, the group spent half of the time allocated (i.e., 3 minutes) on brainstorming with the use of the electronic brainstorming tool, and the other half of the time (3 minutes) on discussion electronically. (“No cognitive support” groups spent the entire allocated time, 6 minutes, on discussion.)

[8] Although training for the use of the problem representation tool also made use of the “cab problem”, for those groups involved, the training for the task and that for the tool were essentially integrated.

• To establish the group’s accuracy rate, subjects were given ten personality profiles to identify individually. The group was subsequently informed of their performance.
• Subjects were given an eleventh personality profile to identify as a group (5 minutes maximum).
• The group was asked to perform the probability estimation task with the information previously established as well as some additional information (15 minutes maximum).
• Subjects were asked to fill in a questionnaire assessing their perceptions of the preceding meeting.
• The group was asked to perform the job evaluation task, consisting of eight items, with the job target as “secretary”. For each item, the group spent half of the time allocated (i.e., 3 minutes) on brainstorming with the use of the electronic brainstorming tool, and the other half of the time (3 minutes) on discussion electronically. (“No cognitive support” groups spent the entire allocated time, 6 minutes, on discussion.)
• Subjects were asked to fill in a questionnaire assessing their perceptions of the preceding meeting.

In solving the job evaluation task, the “cognitive support” groups went through two distinct phases: an electronic brainstorming phase and a discussion phase, whereas the “no cognitive support” groups only had one phase, i.e., discussion.
This design is consistent with brainstorming research where the goal is to compare the effect of brainstorming with “natural groups”, in which no explicit brainstorming procedure is followed [Lamm and Trommsdorff, 1973; Brilhart and Jochem, 1964; Weisskopf-Joelson and Eliseo, 1961]. In such situations, the baseline is a no-brainstorming group.

Each “verbal communication” group was video-taped for the two tasks, while computer log files were kept for the “electronic communication” groups. To eliminate effects due to the mere presence of the video camera, a video camera was made present in all cases. Appendix B contains the experimental materials.

CHAPTER 5
ANALYSIS OF EXPERIMENTAL RESULTS

This chapter presents the results of the statistical analysis of the experimental data. The overall analysis consists of three steps. The first step involves a preliminary analysis of the data to check that the assumptions of ANOVA are satisfied and, where necessary, to perform transformations to ensure that they are. The preliminary analysis also includes an assessment of the reliability and validity of the scales used to measure the perceptual constructs. In the second step, the investigation of the relationship between the independent and dependent variables is carried out using multivariate analysis of variance and analysis of variance. In the third step, the relationship of process variables with outcomes and perceptual variables is examined using correlational analysis.

The organization of this chapter is based on the basic steps outlined above. First, the preliminary analysis of the data is presented. Next, results of the ANOVA and correlational analysis are reported for the probability estimation task, followed by those for the job evaluation task.
Within each task, the findings of ANOVA are organized according to the two independent variables as well as their interaction.

5.1 Preliminary Analysis of Data

The dependent variables were examined to see if there was any interaction between the order in which the two tasks were performed and cognitive support or communication medium [Keppel, 1982]. No variable exhibited order effects in this regard. Consequently, task order has been ignored throughout the subsequent analysis.

The validity of the ANOVA model requires the following conditions: (1) homogeneity of variances, (2) independent samples, and (3) normality of error terms. These requirements are discussed in the following sections, together with the statistical techniques used to test that these conditions were satisfied. Where a necessary condition did not hold, the data were transformed to meet the requirement.

5.1.1 Homogeneity of Variance

The error term should have constant variance for all factor levels. Hartley’s test [Neter et al., 1985] was used to test for homogeneity of variance. The test statistic, H, is based solely on the largest sample variance and the smallest sample variance. The H statistic distribution, tabulated in Neter et al. [1985], depends on the number of populations and the common degrees of freedom. The number of populations in this research is four. The Hartley test requires equal sample sizes. However, if the sample sizes do not differ greatly, it may be used as an approximate test in which the average number of degrees of freedom of the samples is used. The average degrees of freedom, the critical H values, and the actual H values are shown in Table 5.1 for dependent variables under the probability estimation task, and in Table 5.2 for dependent variables under the job evaluation task.

Referring to Table 5.1, Hartley’s test, at the 5% level, indicated that the sample variances for four dependent variables were not equal.
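The H statistic and the effect of a variance-stabilizing transformation can be sketched as follows; the per-condition data are hypothetical, for illustration only:

```python
import numpy as np

def hartley_h(samples):
    """Hartley's test statistic: the ratio of the largest sample variance
    to the smallest sample variance across the factor levels."""
    variances = [np.var(s, ddof=1) for s in samples]
    return max(variances) / min(variances)

# Hypothetical per-condition observations (not the experimental data).
groups = [
    [2.0, 4.0, 6.0, 8.0],   # higher-variance condition
    [3.0, 4.0, 5.0, 6.0],   # lower-variance condition
]
h_raw = hartley_h(groups)                         # compare to the critical H
h_sqrt = hartley_h([np.sqrt(g) for g in groups])  # after a square-root transform
```

The observed H is then compared with the tabulated critical value for the given number of populations and degrees of freedom; a square-root transform of the kind applied to the count-like variables tends to pull the variance ratio down.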
Since all these variables contain some zeros, they could not be subjected to a log transformation. “References to the diagnostic information”, “references to the base rate”, and “references to the problem representation tool” were transformed using the square-root function [Neter et al., 1985] and passed Hartley’s test (with H values of 6.42, 7.12, and 4.97 respectively). “References to the use of single information source” was transformed using the function 1/(Y + 1) [Weisberg, 1980] to obtain an H value of 3.79.

Table 5.1 Hartley’s test for homogeneity of variance (Probability estimation task)

Dependent variable: Average df; Critical H value; Actual H value
Normalized error: 9; 6.31; 2.60
Time to completion: 9; 6.31; 2.20
Perceived task participation: 9; 6.31; 2.23
Perceived socio-emotional behavior: 9; 6.31; 5.23
Satisfaction with outcome: 9; 6.31; 3.50
Satisfaction with process: 9; 6.31; 1.92
Perceived informal leadership: 9; 6.31; 3.22
References to the diagnostic information: 7; 8.44; 97.68*
References to the base rate: 7; 8.44; 12.18*
References to the use of single information source for solution: 7; 8.44; 317.18*
References to the use of dual information sources for solution: 7; 8.44; 4.79
References to the problem representation tool: 6; 5.82†; 18.95*
Procedural comments: 7; 8.44; 2.53
† The number of populations being compared here is 2 instead of 4.

Table 5.2 Hartley’s test for homogeneity of variance (Job evaluation task)

Dependent variable: Average df; Critical H value; Actual H value
Availability bias: 9; 6.31; 4.63
Time to completion: 9; 6.31; 2.76
Total scores of high-availability items: 9; 6.31; 4.92
Time to completion of high-availability items: 9; 6.31; 5.25
Total scores of low-availability items: 9; 6.31; 4.25
Time to completion of low-availability items: 9; 6.31; 4.01
Perceived task participation: 9; 6.31; 2.04
Perceived socio-emotional behavior: 9; 6.31; 6.48*
Satisfaction with outcome: 9; 6.31; 1.32
Satisfaction with process: 9; 6.31; 1.75
Perceived informal leadership: 9; 6.31; 1.83
No. of ideas mentioned during discussion in support of importance of items: 7; 8.44; 5.53
No.
of ideas mentioned during discussion in refutation of importance of items: 7; 8.44; 4.54
No. of ideational connections with ideas generated during electronic brainstorming: 6; 5.82†; 5.01
No. of ideational connections with ideas mentioned during earlier discussion: 7; 8.44; 27.45*
No. of times a specific rating suggested: 7; 8.44; 2.29
Procedural comments: 7; 8.44; 3.11
† The number of populations being compared here is 2 instead of 4.

Referring to Table 5.2, Hartley’s test, at the 5% level, indicated that the sample variances for two dependent variables were not equal. “Perceived socio-emotional behavior” and “number of ideational connections with ideas mentioned during earlier discussion” were subsequently transformed using the square-root function, and both passed Hartley’s test with H values of 5.41 and 2.29 respectively.

5.1.2 Independent Samples

Samples should be drawn from independent populations. This was achieved by having each group (i.e., the experimental unit) randomly assigned to a treatment.

5.1.3 Normality of Error Terms

The residuals should be normally distributed for each factor level. When the factor-level sample sizes are not large, and provided there are no major differences in the error term variances of the factors, the residuals for each factor level can be combined. This approach was adopted in the analysis of error terms.

Lilliefors’ test on the two-sided Kolmogorov D statistic was used to test for normality of the residuals [Conover, 1980]. The hypothesis that residuals are normally distributed is rejected if the D statistic is greater than the critical value.
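The D statistic underlying this test can be sketched as follows (stdlib-only Python; the residuals shown are hypothetical). The distinguishing feature of Lilliefors’ variant is that the normal distribution’s mean and standard deviation are estimated from the same data, which is why the ordinary Kolmogorov-Smirnov critical values do not apply:

```python
import math

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def lilliefors_d(data):
    """Two-sided Kolmogorov D statistic against a normal distribution
    whose mean and s.d. are estimated from the data itself."""
    n = len(data)
    mean = sum(data) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))
    d = 0.0
    for i, x in enumerate(sorted(data), start=1):
        f = normal_cdf((x - mean) / sd)
        # largest gap between the empirical step function and the fitted CDF
        d = max(d, i / n - f, f - (i - 1) / n)
    return d

# Hypothetical residuals (illustration only).
d_stat = lilliefors_d([-2.0, -1.0, 0.0, 1.0, 2.0])
```

The resulting D is then compared against Lilliefors’ tabulated critical value for the given sample size.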
Table 5.3 shows the Lilliefors probability for the residuals of each model under the probability estimation task, and Table 5.4 shows the corresponding items under the job evaluation task.

Table 5.3 Lilliefors’ test for normality of the residuals (Probability estimation task)

Dependent variable: Lilliefors probability
Normalized error: .168
Time to completion: .131
Perceived task participation: .216
Perceived socio-emotional behavior: .477
Satisfaction with outcome: .658
Satisfaction with process: .126
Perceived informal leadership: .394
References to the diagnostic information†: .272
References to the base rate†: .350
References to the use of single information source for solution†: .032*
References to the use of dual information sources for solution: .130
References to the problem representation tool†: .529
Procedural comments: .229
† Transformed data.

A transformation of the form 1/(Y + 1) applied to “references to the use of single information source for solution” satisfied the homogeneity of variance assumption, but the residuals did not conform to the normality requirement (see Table 5.3). Exploration of other transformations did not provide a means of satisfying both the equal variance and normality of residuals requirements. It is sometimes the case that a transformation to achieve constant variance sacrifices normality. When this is the case, normality is the least important of the requirements
Consequently, the dependent variable was left in the 1/(Y + 1) form.Table 5.4 Lilliefor’s test for normality of the residuals (Job evaluation task)Dependent variable Lilliefor’s ProbabilityAvailability Bias .000*Time to completion .807Total scores of high-availability items .066Total scores of low-availability items .007*Perceived task participation .565Perceived Socio-Emotional Behavior’ .526Satisfaction with outcome 1.000Satisfaction with process .668Perceived informal leadership .331Total number of ideas mentioned during discussion in .027*support of importance of itemsTotal number of ideas mentioned during discussion in .241refutation of importance of itemsNumber of ideational connections with ideas generated .312during electronic brainstormingNumber of ideational connections with ideas mentioned .519during earlier discussion’Total number of times a specific rating suggested .767Procedural comments .353Transformed data.Table 5.4 shows that the residuals of three dependent variables did not meet the normality93requirement. “Availability bias” was transformed using a square-root function and passed boththe Hartley’s test for homogeneity of variance (H = 4.77) and the normality test (with Lilliefor’sprobability = .173). Similarly, “total number of ideas mentioned during discussion in supportof importance of items” was also transformed using a square-root function and passed both theHartley’s test for homogeneity of variance (H = 1.27) and the normality test (with Liffiefor’ sprobability = .302). As for “total scores of low-availability items”, exploration of varioustransformations did not provide a means of satisfying both the equal variance and normality ofresiduals requirements. As mentioned earlier, it is sometimes the case that a transformation toachieve constant variance sacrifices normality. When this is the case, normality is the leastimportant of the requirements [Weisberg, 1980]. 
Consequently, the dependent variable was left untransformed.

5.1.4 Assessment of the Reliability of the Perceptual Variables

For the constructs relating to user perceptions of the judgment process, a multi-item instrument in the form of the post-meeting questionnaire was used for measurement. The instrument was previously developed and validated by Green and Taber [1980]. Prior to the use of these perceptual measures for the purpose of evaluating the hypotheses, the validity and reliability of the scales comprising this instrument were assessed.

The instrument comprised five multi-item scales for measuring five perceptual constructs: “perceived task participation”, “perceived socio-emotional behavior”, “satisfaction with outcome”, “satisfaction with process”, and “perceived informal leadership”. The reliability and validity of the instrument were assessed at two different levels. At the scale level, indicators of reliability and validity were obtained for each scale pertaining to each task. The overall reliability of the five scales for each experimental task was assessed using Cronbach’s alpha [Cronbach, 1970]. The construct validity of the scales was assessed by performing principal components factor analysis utilizing the Varimax rotation to obtain the eigenvalues and the percentage of variance explained. At the item level, the items comprising each scale were evaluated using various item reliability statistics, including the item standard deviation score, the effect on Cronbach’s alpha if an item was deleted, the item-to-total scale correlation, and the component loading [Dhaliwal, 1993]. Examining item reliability statistics helps to identify items that reduce either the reliability or the construct validity of the scales. An item reduces the scale reliability if (1) its deletion helps to improve Cronbach’s alpha, (2) it has a low correlation to the total scale, and (3) it has a low standard deviation score.
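The scale-level reliability computation can be sketched as follows; the response matrix is hypothetical, not data from the Green and Taber instrument:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a respondents-by-items score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of scale totals)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)

# Hypothetical five-point Likert responses to a three-item scale.
responses = [
    [4, 5, 4],
    [3, 3, 3],
    [5, 5, 4],
    [2, 3, 2],
    [4, 4, 5],
]
alpha = cronbach_alpha(responses)
```

The “alpha if item deleted” diagnostic mentioned above amounts to re-running the same computation with one column removed at a time.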
On the other hand, an item reduces the construct validity if it does not load strongly on any factor (loadings of less than .5). The detailed results of the reliability and validity check are contained in Appendix C. In sum, the five scales comprising the instrument were found to be sound from the perspective of reliability of measurement.

5.1.5 Homogeneity of Group Composition

Subjects were asked to fill in a background questionnaire (in Appendix B) with items including age, gender, experience in group work, experience in using computers, and so on. These variables were tested against the two independent variables (i.e., cognitive support and communication medium), and showed no differences across conditions. Correspondingly, homogeneity of the population with regard to these variables may be inferred.

5.2 Probability Estimation Task

5.2.1 Cognitive Support

Outcomes and Perceptual Variables

Multivariate analysis of variance (MANOVA) was performed separately for the outcomes variables and perceptual variables. Since the independent variable (i.e., cognitive support) contains only two conditions, the F approximation is exact; therefore, all the multivariate statistics tested (i.e., Wilks’ Lambda, Pillai Trace, Hotelling-Lawley Trace, and Theta) yield the same F approximation [SYSTAT, 1992]. For the outcomes variables, the F approximation is 4.28, and the corresponding p-value is .022. For the perceptual variables, the F approximation is 3.06, and the corresponding p-value is .023.

ANOVA results for the outcomes and perceptual variables are summarized in Table 5.5. For each of the five perceptual variables, a group score was obtained for each experimental group by averaging its three group members’ scores. The detailed ANOVA results are included in Appendix C. Two variables from the above category have been found to exhibit main effects of cognitive support. As expected, “normalized error” was significantly reduced by cognitive support (F = 7.76, p = .009).
“Perceived informal leadership” was increased by cognitive support (F = 15.80, p = .000). However, it should be noted that this variable also exhibited an interaction effect (to be presented later).

For “satisfaction with process”, although the difference between the means was insufficient to produce a significant effect (F = 2.48, p = .124), the relative sizes of the means were consistent with the hypothesis. The sample size may have been too small to detect a significant effect, as the power of the test is only .48 for an alpha level of .10 [Cohen, 1988].

Process Variables

MANOVA was performed for the process variables. The F approximation is 24.79, and the corresponding p-value is .000.

ANOVA results for the process variables are summarized in Table 5.6. As expected, “references to the use of single information source” was significantly lower with cognitive support than without cognitive support (F = 6.27, p = .018).

For “references to the diagnostic information”, although the relative sizes of the means were consistent with the hypothesis, the difference between the means was insufficient to produce a significant effect (F = 1.88, p = .181).
The sample size may have been too small to detect a significant effect, as the power of the test is only .34 for an alpha level of .10.

Table 5.5 Effect of cognitive support on outcomes and perceptual variables (Probability estimation task)

Dependent variable: CS mean (s.d., n); NC mean (s.d., n); F; p; related hypothesis; supported?
Normalized error: .33 (.34, 19); .64 (.33, 20); 7.76; .009***; H1a: CS < NC; supported
Time to completion (in seconds): 521.00 (296.70, 19); 479.50 (238.99, 20); .24; .627; H1b: CS > NC; not supported
Perceived task participation: 17.67 (1.90, 20); 17.67 (1.33, 19); .00; .985; H2a: CS > NC; not supported
Perceived socio-emotional behavior: 10.22 (3.08, 20); 9.57 (2.64, 20); .67; .418; H2b: CS > NC; not supported
Satisfaction with outcome: 18.18 (2.07, 20); 18.55 (1.38, 20); .45; .505; H2c: CS > NC; not supported
Satisfaction with process: 19.23 (1.72, 20); 20.17 (2.18, 20); 2.48; .124; H2d: CS < NC; not supported
Perceived informal leadership: 10.23 (1.88, 20); 7.95 (2.15, 20); 15.80; .000***; H2e: CS > NC; supported
* p < .10, ** p < .05, *** p < .01. CS = cognitive support; NC = no cognitive support.

Table 5.6 Effect of cognitive support on process variables (Probability estimation task)

Dependent variable: CS mean (s.d., n); NC mean (s.d., n); F; p; related hypothesis; supported?
References to the diagnostic information: 8.21 (11.56, 14); 9.72 (10.91, 18); 1.88; .181; H3a: CS < NC; not supported
References to the base rate: 9.07 (11.11, 14); 7.61 (4.25, 18); .39; .537; H3b: CS > NC; not supported
References to the use of single information source: 2.64 (5.23, 14); 6.50 (10.05, 18); 6.27; .018**; H3c: CS < NC; supported
References to the use of dual information sources: 5.00 (4.37, 14); 6.39 (7.13, 18); .38; .541; H3d: CS > NC; not supported
Procedural comments: 17.29 (11.93, 14); 21.06 (13.77, 18); .65; .427; H3e: CS < NC; not supported
* p < .10, ** p < .05, *** p < .01. CS = cognitive support; NC = no cognitive support.

5.2.2 Communication Medium

Outcomes and Perceptual Variables

MANOVA was performed separately for the outcomes variables and perceptual variables. For the outcomes variables, the F approximation is .36, and the corresponding p-value is .703.
For the perceptual variables, the F approximation is 2.77, and the corresponding p-value is .035.

ANOVA results for the outcomes and perceptual variables are summarized in Table 5.7. As expected, “perceived task participation” (F = 3.73, p = .062), “perceived socio-emotional behavior” (F = 6.25, p = .017), and “perceived informal leadership” (F = 3.56, p = .067) were higher with electronic communication than with verbal communication. However, it should be noted that the last two variables also exhibited interaction effects (to be presented later).

Process Variables

MANOVA was performed for the process variables. The F approximation is 4.41, and the corresponding p-value is .004.

ANOVA results for the process variables are summarized in Table 5.8. As expected, “references to the diagnostic information” (F = 6.50, p = .017), “references to the base rate” (F = 6.2, p = .019), and “references to the problem representation tool” (F = 11.7, p = .005) were significantly lower with electronic communication than with verbal communication.
However, it should be noted that the first two variables also exhibited interaction effects (to be presented later).

Table 5.7 Effect of communication medium on outcomes and perceptual variables (Probability estimation task)

Dependent variable: EC mean (s.d., n); VC mean (s.d., n); F; p; related hypothesis; supported?
Normalized error: .49 (.41, 20); .49 (.33, 19); .01; .937; H4a: EC = VC; supported
Time to completion (in seconds): 537.20 (248.64, 20); 460.26 (284.23, 19); .73; .400; H4b: EC > VC; not supported
Perceived task participation: 18.15 (1.58, 20); 17.16 (1.55, 19); 3.73; .062*; H5a: EC > VC; supported
Perceived socio-emotional behavior: 10.88 (2.79, 20); 8.90 (2.61, 20); 6.25; .017**; H5b: EC > VC; supported
Satisfaction with outcome: 18.58 (1.70, 20); 18.15 (1.80, 20); .63; .431; H5c: EC = VC; supported
Satisfaction with process: 19.47 (1.68, 20); 19.93 (2.28, 20); .62; .436; H5d: EC < VC; not supported
Perceived informal leadership: 9.63 (2.11, 20); 8.55 (2.41, 20); 3.56; .067*; H5e: EC < VC; not supported
* p < .10, ** p < .05, *** p < .01. EC = electronic communication; VC = verbal communication.

Table 5.8 Effect of communication medium on process variables (Probability estimation task)

Dependent variable: EC mean (s.d., n); VC mean (s.d., n); F; p; related hypothesis; supported?
References to the diagnostic information: 6.25 (11.75, 16); 11.88 (9.86, 16); 6.50; .017**; H6a: EC < VC; supported
References to the base rate: 5.75 (5.25, 16); 10.75 (9.36, 16); 6.20; .019**; H6b: EC < VC; supported
References to the use of single information source: 5.13 (10.78, 16); 4.50 (5.43, 16); .53; .473; H6c: EC < VC; not supported
References to the use of dual information sources: 6.00 (5.51, 16); 5.56 (6.68, 16); .04; .842; H6d: EC < VC; not supported
References to the problem representation tool: 5.00 (3.42, 7); 23.86 (14.87, 7); 11.70; .005***; H6e: EC < VC; supported
Procedural comments: 17.63 (12.23, 16); 21.19 (13.77, 16); .76; .390; H6f: EC < VC; not supported
* p < .10, ** p < .05, *** p < .01. EC = electronic communication; VC = verbal communication.

5.2.3 Interaction Effects of Cognitive Support and Communication Medium

Outcomes and Perceptual Variables

MANOVA was performed separately for the outcomes variables and perceptual variables. For the outcomes variables, the F approximation is .68, and the corresponding p-value is .514.
For the perceptual variables, the F approximation is 3.65, and the corresponding p-value is .010.

The univariate interaction effects between cognitive support and communication medium on the outcomes and perceptual variables are summarized in Table 5.9. Significant interaction effects were found on four perceptual variables: “perceived socio-emotional behavior” (F = 7.35, p = .010), “satisfaction with outcome” (F = 2.94, p = .095), “satisfaction with process” (F = 5.06, p = .031), and “perceived informal leadership” (F = 7.28, p = .011). For each significant interaction effect found, pairwise comparisons utilizing the Bonferroni method were performed on the cell means. The Bonferroni method is probably the best one to apply in cases involving only a few pairwise comparisons [Neter et al., 1985]. Interaction effects with at least one significant pairwise comparison are graphically depicted in Figures 5.1 to 5.3. Due to the five-point Likert scale used in the questionnaire items and the fact that each perceptual variable was derived from five questionnaire items (except “perceived informal leadership”, which was derived from three), each variable may range from 5 to 25 (except “perceived informal leadership”, which may range from 3 to 15).
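The Bonferroni adjustment used for these pairwise comparisons can be sketched as follows; the raw p-values are hypothetical:

```python
def bonferroni_adjust(p_values):
    """Bonferroni correction: each raw p-value is multiplied by the number
    of comparisons (capped at 1), so that rejecting any adjusted p below
    alpha keeps the familywise error rate at or below alpha."""
    m = len(p_values)
    return [min(1.0, p * m) for p in p_values]

# Hypothetical raw p-values from three pairwise cell-mean comparisons.
raw = [0.010, 0.030, 0.200]
adjusted = bonferroni_adjust(raw)
```

Equivalently, each comparison can be tested at alpha/m; with only a few comparisons, as noted above, this conservatism costs little power.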
To facilitate comparison of the graphs, the related perceptual variables were transformed into an index in the range 0 to 100.

Table 5.9 Interaction effect of cognitive support and communication medium on outcomes and perceptual variables (Probability estimation task)

Dependent variable: CS/EC mean (s.d., n); CS/VC mean (s.d., n); NC/EC mean (s.d., n); NC/VC mean (s.d., n); F; p
Normalized error: .31 (.33, 10); .36 (.37, 9); .67 (.41, 10); .60 (.25, 10); .30; .585
Time to completion (in seconds): 509.60 (281.86, 10); 533.67 (329.17, 9); 564.80 (222.22, 10); 394.20 (234.57, 10); 1.28; .266
Perceived task participation: 18.47 (1.87, 10); 16.87 (1.65, 10); 17.83 (1.25, 10); 17.48 (1.46, 9); 1.52; .225
Perceived socio-emotional behavior: 10.13 (3.57, 10); 10.30 (2.69, 10); 11.63 (1.56, 10); 7.50 (1.67, 10); 7.35; .010**
Satisfaction with outcome: 18.87 (2.15, 10); 17.50 (1.83, 10); 18.30 (1.15, 10); 18.80 (1.60, 10); 2.94; .095*
Satisfaction with process: 19.67 (1.74, 10); 18.80 (1.67, 10); 19.27 (1.69, 10); 21.07 (2.31, 10); 5.06; .031**
Perceived informal leadership: 10.00 (2.36, 10); 10.47 (1.32, 10); 9.27 (1.88, 10); 6.63 (1.54, 10); 7.28; .011**
* p < .10, ** p < .05, *** p < .01. CS = cognitive support; NC = no cognitive support; EC = electronic communication; VC = verbal communication.

[Figure 5.1 Interaction effect on perceived socio-emotional behavior (Probability estimation task): perceived socio-emotional behavior index plotted against no cognitive support vs. cognitive support, with separate series for electronic and verbal communication.]

It can be seen from Figure 5.1 that when group members communicated verbally, cognitive support, compared to no cognitive support, resulted in marginally higher “perceived socio-emotional behavior” (p = .104 from Bonferroni pairwise comparison); when group members communicated electronically, cognitive support had no effect. On the other hand, electronic communication caused significantly higher “perceived socio-emotional behavior” than verbal communication in groups receiving no cognitive support (p = .004); but communication medium had no effect in groups receiving cognitive support.
None of the pairwise comparisons involving “satisfaction with outcome” was statistically significant.

[Figure 5.2 Interaction effect on satisfaction with process (Probability estimation task): satisfaction-with-process index plotted against no cognitive support vs. cognitive support, with separate series for electronic and verbal communication.]

Referring to Figure 5.2, “satisfaction with process” was lower with cognitive support in verbally-communicating groups (p = .062), but cognitive support had no effect in electronically-communicating groups. The effect of communication medium was non-significant both in groups receiving cognitive support and in groups receiving no cognitive support. It is interesting to note that the use of the “full” GSS (i.e., both cognitive support and electronic communication) did not reduce “satisfaction with process” as compared to the baseline (i.e., no cognitive support and verbal communication) (p = .621).

[Figure 5.3 Interaction effect on perceived informal leadership (Probability estimation task): perceived-informal-leadership index plotted against no cognitive support vs. cognitive support, with separate series for electronic and verbal communication.]

Referring to Figure 5.3, cognitive support led to a significant increase in “perceived informal leadership” in groups communicating verbally (p = .000), but it had no effect in groups communicating electronically. On the other hand, groups using electronic communication reported higher “perceived informal leadership” than those using verbal communication when there was no cognitive support (p = .015), but communication medium had no effect in groups with cognitive support.

Process Variables

MANOVA was performed for the process variables. The F approximation is 4.61, and the corresponding p-value is .003.

The univariate interaction effects between cognitive support and communication medium on the process variables are summarized in Table 5.10.
Significant effects were found on “references to the diagnostic information” (F = 5.01, p = .033) and “references to the base rate” (F = 6.63, p = .016). For each significant interaction effect found, pairwise comparisons utilizing the Bonferroni method were performed on the cell means. The interaction effects are graphically depicted in Figures 5.4 and 5.5.

Table 5.10 Interaction effect of cognitive support and communication medium on process variables (Probability estimation task)

Dependent variable: CS/EC mean (s.d., n); CS/VC mean (s.d., n); NC/EC mean (s.d., n); NC/VC mean (s.d., n); F; p
References to the diagnostic information: 1.29 (1.50, 7); 15.14 (13.25, 7); 10.11 (14.79, 9); 9.33 (5.83, 9); 5.01; .033**
References to the base rate: 2.71 (4.07, 7); 15.43 (12.52, 7); 8.11 (4.99, 9); 7.11 (3.59, 9); 6.63; .016**
References to the use of single information source: .29 (.76, 7); 5.00 (6.76, 7); 8.89 (13.46, 9); 4.11 (4.54, 9); 1.86; .183
References to the use of dual information sources: 5.29 (5.12, 7); 4.71 (3.86, 7); 6.56 (6.04, 9); 6.22 (8.45, 9); .003; .958
Procedural comments: 13.14 (9.32, 7); 21.43 (13.46, 7); 21.11 (13.56, 9); 21.00 (14.81, 9); .80; .378
* p < .10, ** p < .05, *** p < .01. CS = cognitive support; NC = no cognitive support; EC = electronic communication; VC = verbal communication.

[Figure 5.4 Interaction effect on references to the diagnostic information (Probability estimation task): number of references plotted against no cognitive support vs. cognitive support, with separate series for electronic and verbal communication.]

Referring to Figure 5.4, cognitive support led to a reduction in “references to the diagnostic information” in electronically-communicating groups (p = .099), but had no effect in verbally-communicating groups.
On the other hand, electronic communication resulted in lower “references to the diagnostic information” than verbal communication in groups receiving cognitive support (p = .021), but communication medium had no effect in groups receiving no cognitive support.

Referring to Figure 5.5, cognitive support had no significant effect on “references to the base rate” in groups communicating electronically or in those communicating verbally. On the other hand, electronic communication resulted in lower “references to the base rate” than verbal communication in groups receiving cognitive support (p = .013), but communication medium had no effect in groups receiving no cognitive support.

[Figure 5.5 Interaction effect on references to the base rate (Probability estimation task): number of references plotted against no cognitive support vs. cognitive support, with separate series for electronic and verbal communication.]

5.2.4 Correlational Analysis

To explore the relationship of the process variables with the outcomes and perceptual variables, a Pearson correlational analysis was performed. The results are summarized in Table 5.11.

Table 5.11 Correlations between process variables and outcomes and perceptual variables (Probability estimation task)

Process variable: Normalized error; Time to completion; Perceived task participation; Perceived socio-emotional behavior; Satisfaction with outcome; Satisfaction with process; Perceived informal leadership
References to the diagnostic information: .41**; .65**; -.08; .18; -.13; -.19; .19
References to the base rate: .19; .44**; -.11; .29; .02; .04; .18
References to the use of single information source: .58**; .44; .06; .16; .21; -.17; .07
References to the use of dual information sources: -.14; .13; .36**; .30*; -.15; -.34*; .09
References to the problem representation tool: -.38**; .32*; -.15; .11; -.28; -.33*; .40**
Procedural comments: .48**; .79**; .08; .47**; -.06; -.29; .20
* p < .10, ** p < .05, *** p < .01.
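The coefficient reported in Table 5.11 can be sketched as follows; the group-level values are made up for illustration, not the experimental data:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length series."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    ss_x = sum((a - mean_x) ** 2 for a in x)
    ss_y = sum((b - mean_y) ** 2 for b in y)
    return cov / math.sqrt(ss_x * ss_y)

# Hypothetical group-level counts of procedural comments and meeting
# durations in minutes (illustration only).
comments = [5, 12, 20, 33, 41]
minutes = [7, 9, 12, 16, 19]
r = pearson_r(comments, minutes)
```

A strongly positive r here would mirror the reported pattern that groups making more comments also took longer to complete the task.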
This is consistent with the base-rate fallacy phenomenon [Bar-Hillel, 1980], which suggests that human judges have a tendencyto favor diagnostic information over base rate information, thus causing systematic judgmentbiases. There was also a positive correlation between “normalized error” and “references to theuse of single information source” (r=.58). Therefore, more references made about using onlya single piece of information (i.e., diagnostic information or base rate) in deriving the solutionresulted in higher error.As expected, a negative relationship was found between “normalized error” and “references tothe problem representation tool” (r=-.38). This is equivalent to saying that more referencesmade about the problem representation tool contributed toward better solutions. The result thusprovides evidence that it was the use of the cognitive support tool, in one way or another, thatwas (at least partly) responsible for the improvements in solutions. “Normalized error” was alsofound to be positively correlated with “procedural comments” (r= .48). Apparently, groupswhich were preoccupied by procedural matters were less capable in generating correct solutions.Interestingly, groups which had made procedural comments also made more references todiagnostic information (r= .72) as well as the use of single information source (r= .56).Preoccupation with procedural matters thus seemed to have distracted group members fromdevoting their energy and resources toward discovering a correct solution. Nevertheless, itshould be pointed out that procedural comments were not made at the expense of time for114substantive communication (which includes all process categories except “proceduralcomments”), suggested by the positive correlation between “procedural comments” and the totalsubstantive communication (r= .72).“Time to completion” was found to be positively correlated with several process variables. 
These include “references to the diagnostic information” (r=.65), “references to the base rate” (r=.44), “references to the use of single information source” (r=.44), “references to the problem representation tool” (r=.32), and “procedural comments” (r=.79). These findings are expected, since making more comments in a meeting inevitably contributes to an increase in the total time spent.

“Perceived task participation” was positively correlated with “references to the use of dual information sources” (r=.36). Therefore, a higher degree of task participation was perceived by group members when the group was able to fathom the use of a more complex, as well as more correct, strategy in solving the problem.

“Perceived socio-emotional behavior” was positively correlated with both “references to the use of dual information sources” (r=.30) and “procedural comments” (r=.47). Higher instances of references to more correct problem-solving strategies were associated with a higher degree of perceived negative socio-emotional behavior, suggesting that there were probably more conflicting situations and a greater degree of ambiguity arising out of the new complexities presented by the judgment strategies. Increasing procedural comments also led to an increase in group members' perceptions of negative socio-emotional behavior.

“Satisfaction with process” was found to have negative correlations with “references to the use of dual information sources” (r=-.34) and “references to the problem representation tool” (r=-.33). Higher instances of references to more correct problem-solving strategies and the problem representation tool perhaps contributed toward an environment that was not only richer in terms of issues to explore, but also presented more opportunities for conflicts to arise, as indicated by the positive correlation between “references to the problem representation tool” and total substantive communication (r=.63).
As a result, there was lower satisfaction with the process. An alternate, and somewhat opposite, interpretation ought to be examined. It is possible that the use of the problem representation tool and the discussion of more correct strategies led to a more analytical and direct approach in the group, which brought about a quicker closure (i.e., less discussion) based on facts; such a quick closure was, in turn, responsible for the dissatisfaction with the process. However, this interpretation is not supported by the positive correlation found between “references to the problem representation tool” and “time to completion” (r=.32), as mentioned earlier. In sum, the problem representation tool influenced the judgment outcome by increasing the number of approaches for the group to consider (highlighting the saliency of the base rate and providing new ways of combining the information sources), rather than by directly replacing the predominant, biased approach.

“Perceived informal leadership” was positively correlated with “references to the problem representation tool” (r=.40). A greater involvement with the provided support tool thus appeared to be associated with a higher degree of informal leadership. This is understandable, since some coordination among the group members was necessary in order to use the tool.

A further correlational analysis was conducted on four process variables (“references to the diagnostic information”, “references to the base rate”, “references to the use of single information source”, and “references to the use of dual information sources”) to identify possible conceptual overlaps among them. All correlations were non-significant except that between “references to the diagnostic information” and “references to the use of single information source” (rA=.66) and that between “references to the diagnostic information” and “references to the base rate” (rB=.38).
The first pair of relationships (i.e., rA) is easily interpretable, as it is in line with the base-rate fallacy [Bar-Hillel, 1980], which suggests that decision makers often rely solely on the diagnostic information and ignore the base rate in deriving posterior probabilities. The smaller correlation found between base-rate references and diagnostic-information references, rB, can be better understood by looking at how it changes according to whether or not there was cognitive support. Among groups provided with cognitive support, rB=.58 (up from .38); on the other hand, among groups not provided with cognitive support, rB is non-significant. Cognitive support, which led to better solutions, caused a stronger association between the base rate and the diagnostic information by group members (since both pieces of information were combined in producing solutions), while such an association was absent without cognitive support. These results suggest that the process constructs are indeed measuring different aspects of the judgment process, and their relationships revealed in the analysis are in agreement with the general pattern of results displayed.

Correlations between “references to the problem representation tool” and the above process variables were also examined to find out how references made about the information-source types and information-integration rules may be related to the use of the provided tool. A positive correlation was found between “references to the problem representation tool” and “references to the base rate” (r=.34). Thus, greater awareness of the base rate was associated with higher usage of the tool.

5.3 Job Evaluation Task

5.3.1 Cognitive Support

Outcomes and Perceptual Variables

MANOVA was performed separately for the outcomes variables and perceptual variables.
For the outcomes variables, the F approximation is 15.61, and the corresponding p-value is .000. For the perceptual variables, the F approximation is 1.06, and the corresponding p-value is .404.

ANOVA results for the outcomes and perceptual variables are summarized in Table 5.12. For each of the five perceptual variables, a group score was obtained for each experimental group by averaging its three group members' scores. The detailed ANOVA results are included in Appendix C. As expected, “availability bias” was reduced by cognitive support (F=3.58, p=.067). “Time to completion” was higher with cognitive support than without cognitive support (F=45.81, p=.000). However, it should be noted that the variable also exhibited an interaction effect (to be presented later). “Total scores of low-availability items” was higher with cognitive support than without cognitive support (F=7.57, p=.009), thus providing evidence that the electronic brainstorming had caused the group to increase its information search scope, which was particularly salient on the low-availability Position Analysis Questionnaire (PAQ) items. As Table 5.12 indicates, no significant main effect was found on the perceptual variables.

Process Variables

MANOVA was performed for the process variables. The F approximation is 5.33, and the corresponding p-value is .002.

ANOVA results for the process variables are summarized in Table 5.13. “Total number of ideas mentioned during discussion in support of importance of items” was significantly lower with cognitive support than without cognitive support (F=10.49, p=.003). “Number of ideational connections with ideas mentioned during earlier discussion” was also lower with cognitive support than without cognitive support (F=7.15, p=.012).
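Main-effect F-ratios of the kind reported here come from a one-way analysis of variance. As a hedged illustration (the function name and the two groups of scores below are made-up stand-ins, not the study's raw data), the F statistic for a two-group comparison can be computed as:

```python
# One-way ANOVA F statistic, computed from scratch.
# The two groups are hypothetical per-group process scores under the
# two cognitive-support conditions; they are not the study's data.
def one_way_anova_f(*groups):
    all_vals = [x for g in groups for x in g]
    grand_mean = sum(all_vals) / len(all_vals)
    # Between-groups sum of squares: weighted squared deviations of group means.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-groups sum of squares: squared deviations from each group's own mean.
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

with_support = [24, 31, 18, 40, 25, 29]     # hypothetical idea counts (CS groups)
without_support = [48, 52, 39, 61, 44, 57]  # hypothetical idea counts (NC groups)
print(round(one_way_anova_f(with_support, without_support), 2))
```

The p-value is then the upper-tail probability of this F under the F distribution with (df_between, df_within) degrees of freedom.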
Cognitive support also showed a main effect on “procedural comments”, in the same direction as the previous variables (F=9.32, p=.018).

Table 5.12 Effect of cognitive support on outcomes and perceptual variables (Job evaluation task)

Dependent Variable | CS Mean (s.d., n) | NC Mean (s.d., n) | F | p | Hypothesis | Supported?
Availability Bias | 4.35 (2.25, 20) | 6.25 (3.18, 20) | 3.58 | .067* | H7a: CS<NC | Supported
Time to Completion (in seconds) | 2344.25 (204.35, 20) | 1562.40 (622.04, 20) | 45.81 | .000*** | H7b: CS>NC | Supported
Total scores of high-availability items | 16.45 (1.96, 20) | 16.20 (1.85, 20) | .17 | .679 | H7c: CS=NC | Supported
Total scores of low-availability items | 12.10 (2.08, 20) | 9.95 (2.24, 20) | 7.57 | .009*** | H7d: CS>NC | Supported
Perceived Task Participation | 18.03 (1.21, 20) | 18.27 (1.51, 20) | .33 | .571 | H8a: CS>NC | Not supported
Perceived Socio-Emotional Behavior | 10.20 (2.17, 20) | 9.87 (2.59, 20) | .51 | .480 | H8b: CS>NC | Not supported
Satisfaction with Outcome | 19.85 (1.40, 20) | 19.40 (1.65, 19) | .75 | .391 | H8c: CS>NC | Not supported
Satisfaction with Process | 20.70 (1.47, 20) | 20.62 (2.09, 20) | .11 | .744 | H8d: CS<NC | Not supported
Perceived Informal Leadership | 8.28 (1.48, 20) | 7.45 (2.11, 20) | 2.28 | .140 | H8e: CS>NC | Not supported
* p<.10, ** p<.05, *** p<.01.

Table 5.13 Effect of cognitive support on process variables (Job evaluation task)

Dependent Variable | CS Mean (s.d., n) | NC Mean (s.d., n) | F | p | Hypothesis | Supported?
Total number of ideas mentioned during discussion in support of importance of items | 29.00 (22.25, 13) | 46.90 (25.90, 19) | 10.49 | .003*** | H9a: CS<NC | Supported
Total number of ideas mentioned during discussion in refutation of importance of items | 7.00 (5.85, 13) | 7.47 (5.17, 19) | .14 | .711 | H9b: CS=NC | Supported
Number of ideational connections with ideas mentioned during earlier discussion | 11.77 (10.65, 13) | 17.05 (12.64, 19) | 7.15 | .012** | H9c: CS<NC | Supported
Total number of times a specific rating suggested | 59.69 (19.12, 13) | 65.47 (20.95, 19) | .50 | .486 | H9d: CS>NC | Not supported
Procedural comments | 64.92 (22.39, 13) | 89.37 (28.15, 19) | 6.32 | .018** | H9e: CS<NC | Supported
* p<.10, ** p<.05, *** p<.01.

5.3.2 Communication Medium

Outcomes and Perceptual Variables

MANOVA was performed separately for the outcomes
variables and perceptual variables. For the outcomes variables, the F approximation is 8.76, and the corresponding p-value is .000. For the perceptual variables, the F approximation is 4.79, and the corresponding p-value is .002.

ANOVA results for the outcomes and perceptual variables are summarized in Table 5.14. “Availability bias” was significantly smaller with electronic communication than with verbal communication (F=4.70, p=.037). “Time to completion” was higher in groups which communicated electronically than in those which communicated verbally (F=16.83, p=.000). However, it should be noted that the variable also exhibited an interaction effect (to be presented later). “Total scores of low-availability items” was higher with electronic communication than with verbal communication (F=9.04, p=.005). “Perceived socio-emotional behavior” was higher with electronic communication than with verbal communication (F=21.38, p=.000). Similarly, “perceived informal leadership” was higher in electronically-communicating groups than in verbally-communicating groups (F=13.44, p=.001). However, it should be noted that the last two variables also exhibited interaction effects (to be presented later).

For “satisfaction with process”, although the relative sizes of the means were congruent with the corresponding hypothesis, the difference was not sufficient to produce a significant effect (F=1.93, p=.174). The sample size may have been too small to detect a significant effect, as the power of the test is only .36 for an alpha level of .10.

Process Variables

MANOVA was performed for the process variables. The F approximation is 11.55, and the corresponding p-value is .000.

ANOVA results for the process variables are summarized in Table 5.15. Five out of the six variables showed significant main effects.
The corresponding means were smaller with electronic communication than with verbal communication for the following variables: “total number of ideas mentioned during discussion in support of importance of items” (F=21.68, p=.000), “total number of ideas mentioned during discussion in refutation of importance of items” (F=2.96, p=.097), “number of ideational connections with ideas generated during electronic brainstorming” (F=11.87, p=.005), and “number of ideational connections with ideas mentioned during earlier discussion” (F=34.00, p=.000). “Total number of times a specific rating suggested” was higher with electronic communication than with verbal communication (F=4.72, p=.038).

Table 5.14 Effect of communication medium on outcomes and perceptual variables (Job evaluation task)

Dependent Variable | EC Mean (s.d., n) | VC Mean (s.d., n) | F | p | Hypothesis | Supported?
Availability Bias | 4.50 (1.96, 20) | 6.10 (3.45, 20) | 4.70 | .037** | H10a: EC<VC | Supported
Time to Completion (in seconds) | 2190.25 (425.24, 20) | 1716.40 (671.87, 20) | 16.83 | .000*** | H10b: EC>VC | Supported
Total scores of high-availability items | 16.70 (1.59, 20) | 15.95 (2.11, 20) | 1.56 | .219 | H10c: EC=VC | Supported
Total scores of low-availability items | 12.20 (1.96, 20) | 9.85 (3.23, 20) | 9.04 | .005*** | H10d: EC>VC | Supported
Perceived Task Participation | 18.05 (1.23, 20) | 18.25 (1.49, 20) | .21 | .653 | H11a: EC>VC | Not supported
Perceived Socio-Emotional Behavior | 11.37 (1.78, 20) | 8.70 (2.14, 20) | 21.38 | .000*** | H11b: EC>VC | Supported
Satisfaction with Outcome | 19.33 (1.56, 20) | 19.95 (1.46, 19) | 1.81 | .188 | H11c: EC>VC | Not supported
Satisfaction with Process | 20.30 (1.84, 20) | 21.02 (1.70, 20) | 1.93 | .174 | H11d: EC<VC | Not supported
Perceived Informal Leadership | 8.77 (1.42, 20) | 6.97 (1.81, 20) | 13.44 | .001*** | H11e: EC<VC | Not supported
* p<.10, ** p<.05, *** p<.01.

Table 5.15 Effect of communication medium on process variables (Job evaluation task)

Dependent Variable | EC Mean (s.d., n) | VC Mean (s.d., n) | F | p | Hypothesis | Supported?
Total number of ideas mentioned during discussion in support of importance of items | 26.00 (20.36, 16) | 53.25 (23.56, 16) | 21.68 | .000*** | H12a: EC<VC | Supported
Total number of ideas mentioned during discussion in refutation of importance of items | 5.88 (4.91, 16) | 8.69 (5.58, 16) | 2.96 | .097* | H12b: EC<VC | Supported
Number of ideational connections with ideas generated during electronic brainstorming | 7.17 (5.42, 6) | 25.71 (12.13, 7) | 11.87 | .005*** | H12c: EC<VC | Supported
Number of ideational connections with ideas mentioned during earlier discussion | 7.50 (7.39, 16) | 22.31 (11.18, 16) | 34.00 | .000*** | H12d: EC<VC | Supported
Total number of times a specific rating suggested | 71.13 (18.36, 16) | 55.13 (19.04, 16) | 4.72 | .038** | H12e: EC>VC | Supported
Procedural comments | 82.00 (27.45, 16) | 76.88 (29.86, 16) | .10 | .758 | H12f: EC<VC | Not supported
* p<.10, ** p<.05, *** p<.01.

5.3.3 Interaction Effects of Cognitive Support and Communication Medium

Outcomes and Perceptual Variables

MANOVA was performed separately for the outcomes variables and perceptual variables. For the outcomes variables, the F approximation is 2.73, and the corresponding p-value is .046. For the perceptual variables, the F approximation is 2.11, and the corresponding p-value is .090.

The univariate interaction effects between cognitive support and communication medium on the outcomes and perceptual variables are summarized in Table 5.16. Significant effects were found for the following variables: “time to completion” (F=8.21, p=.007), “perceived socio-emotional behavior” (F=6.39, p=.016), “satisfaction with outcome” (F=3.94, p=.055), “satisfaction with process” (F=8.64, p=.006), and “perceived informal leadership” (F=2.88, p=.098). For each significant interaction effect found, pairwise comparisons utilizing the Bonferroni method were performed on the cell means. The interaction effects are graphically depicted in Figures 5.6 to 5.9.
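The Bonferroni method controls the familywise error rate by dividing the nominal alpha by the number of pairwise comparisons performed. A minimal sketch of the procedure (the cell labels and raw p-values below are hypothetical placeholders, not the study's values):

```python
# Bonferroni correction across the pairwise comparisons of the four
# design cells (cognitive support x communication medium).
# The raw p-values are hypothetical, for illustration only.
from itertools import combinations

cells = ["CS/EC", "CS/VC", "NC/EC", "NC/VC"]
pairs = list(combinations(cells, 2))   # 6 possible pairwise comparisons
raw_p = [0.001, 0.300, 0.040, 0.250, 0.005, 0.700]

alpha = 0.05
adjusted_alpha = alpha / len(pairs)    # Bonferroni: 0.05 / 6, about .0083

for pair, p in zip(pairs, raw_p):
    verdict = "significant" if p < adjusted_alpha else "not significant"
    print(pair, p, verdict)
```

Equivalently, each raw p-value can be multiplied by the number of comparisons and then compared against the unadjusted alpha.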
Due to the five-point Likert scale used in the questionnaire items and the fact that each perceptual variable was derived from five questionnaire items (except “perceived informal leadership”, which was derived from three), each variable may range from 5 to 25 (except “perceived informal leadership”, which may range from 3 to 15). To facilitate comparison of the graphs, the related perceptual variables were transformed into an index in the range 0 to 100.

Table 5.16 Interaction effect of cognitive support and communication medium on outcomes and perceptual variables (Job evaluation task)

Dependent Variable | CS & EC Mean (s.d., n) | CS & VC Mean (s.d., n) | NC & EC Mean (s.d., n) | NC & VC Mean (s.d., n) | F | p
Availability Bias | 4.20 (1.55, 10) | 4.50 (2.88, 10) | 4.80 (2.35, 10) | 7.70 (3.34, 10) | 1.65 | .208
Time to Completion | 2415.70 (195.76, 10) | 2272.80 (196.18, 10) | 1964.80 (480.08, 10) | 1160.00 (475.96, 10) | 8.21 | .007***
Total scores of high-availability items | 17.10 (1.10, 10) | 15.80 (2.44, 10) | 16.30 (1.95, 10) | 16.10 (1.85, 10) | .84 | .365
Total scores of low-availability items | 12.90 (1.73, 10) | 11.30 (2.16, 10) | 11.50 (2.01, 10) | 8.40 (3.57, 10) | .92 | .344
Perceived Task Participation | 18.03 (1.20, 10) | 18.03 (1.28, 10) | 18.07 (1.33, 10) | 18.47 (1.72, 10) | .21 | .653
Perceived Socio-Emotional Behavior | 10.99 (1.59, 10) | 9.63 (2.59, 10) | 11.97 (1.83, 10) | 7.77 (1.02, 10) | 6.39 | .016**
Satisfaction with Outcome | 20.00 (1.49, 10) | 19.70 (1.37, 10) | 18.67 (1.39, 10) | 20.22 (1.58, 9) | 3.94 | .055*
Satisfaction with Process | 21.10 (1.44, 10) | 20.30 (1.46, 10) | 19.50 (1.91, 10) | 21.73 (1.68, 10) | 8.64 | .006***
Perceived Informal Leadership | 8.77 (1.48, 10) | 7.80 (1.38, 10) | 8.77 (1.43, 10) | 6.13 (1.87, 10) | 2.88 | .098*
* p<.10, ** p<.05, *** p<.01.

Referring to Figure 5.6, cognitive support led to a longer meeting in groups communicating electronically (p=.054) as well as in groups communicating verbally (p=.000), although the effect was more pronounced in the latter.
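The 0-to-100 index transformation applied to the perceptual variables is a simple linear rescaling of each summed Likert score; a sketch (the function name `to_index` is illustrative, not the study's):

```python
# Rescale a summed Likert score to a 0-100 index.
# A five-item scale of five-point items ranges from 5 to 25;
# a three-item scale ranges from 3 to 15.
def to_index(score, n_items, points_per_item=5):
    low = n_items * 1                    # minimum possible raw sum
    high = n_items * points_per_item     # maximum possible raw sum
    return (score - low) / (high - low) * 100

print(to_index(20, n_items=5))  # (20 - 5) / 20 * 100 = 75.0
print(to_index(9, n_items=3))   # (9 - 3) / 12 * 100 = 50.0
```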
On the other hand, electronic communication, when compared to verbal communication, prolonged the meeting in groups receiving no cognitive support (p=.000), but communication medium had no impact in groups with cognitive support.

[Bar chart: time to completion in seconds (0 to 2500), by cognitive support (no/yes) and communication medium (electronic/verbal)]
Figure 5.6 Interaction effect on time to completion (Job evaluation task)

[Bar chart: perceived socio-emotional behavior index (0 to 80), by cognitive support (no/yes) and communication medium (electronic/verbal)]
Figure 5.7 Interaction effect on perceived socio-emotional behavior (Job evaluation task)

Referring to Figure 5.7, cognitive support had no effect on “perceived socio-emotional behavior” in groups with electronic communication or in groups with verbal communication. On the other hand, the perception was increased by electronic communication, as compared to verbal communication, in groups with no cognitive support (p=.000); but communication medium had no effect in groups with cognitive support. None of the pairwise comparisons involving “satisfaction with outcome” was significant.

[Bar chart: satisfaction with process index (0 to 60), by cognitive support (no/yes) and communication medium (electronic/verbal)]
Figure 5.8 Interaction effect on satisfaction with process (Job evaluation task)

Referring to Figure 5.8, the effect of cognitive support on “satisfaction with process” was non-significant in electronically-communicating groups as well as in verbally-communicating groups. On the other hand, electronic communication, as compared to verbal communication, reduced “satisfaction with process” in groups with no cognitive support (p=.025), but communication medium had no impact in groups with cognitive support.
It is interesting to note that the use of the “full” GSS (i.e., both cognitive support and electronic communication) did not reduce “satisfaction with process” as compared to the baseline (i.e., no cognitive support and verbal communication).

[Bar chart: perceived informal leadership index (0 to 80), by cognitive support (no/yes) and communication medium (electronic/verbal)]
Figure 5.9 Interaction effect on perceived informal leadership (Job evaluation task)

Referring to Figure 5.9, the effect of cognitive support on “perceived informal leadership” was non-significant in electronically-communicating groups as well as in verbally-communicating groups. On the other hand, electronic communication, as compared to verbal communication, increased “perceived informal leadership” in groups with no cognitive support (p=.003), but communication medium had no impact in groups with cognitive support.

Process Variables

MANOVA was performed for the process variables. The F approximation is .96, and the corresponding p-value is .460.

The univariate interaction effects between cognitive support and communication medium on the process variables are summarized in Table 5.17.
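In a 2x2 design such as this one, the interaction is the difference between the simple effects of one factor across the levels of the other. A sketch using the time-to-completion cell means from Table 5.16 (this arithmetic only describes the pattern; the significance of the contrast still comes from the ANOVA):

```python
# Interaction as a difference of differences of cell means
# (time-to-completion means taken from Table 5.16).
cell_means = {
    ("CS", "EC"): 2415.70, ("CS", "VC"): 2272.80,  # with cognitive support
    ("NC", "EC"): 1964.80, ("NC", "VC"): 1160.00,  # without cognitive support
}

# Simple effect of communication medium at each level of cognitive support.
medium_effect_cs = cell_means[("CS", "EC")] - cell_means[("CS", "VC")]  # 142.90
medium_effect_nc = cell_means[("NC", "EC")] - cell_means[("NC", "VC")]  # 804.80

interaction = medium_effect_cs - medium_effect_nc
print(round(interaction, 2))  # -661.9: the medium effect depends on support
```

The large negative contrast mirrors the reported pattern: electronic communication prolonged meetings mainly in groups without cognitive support.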
As Table 5.17 indicates, no interaction effect was found on the process variables.

Table 5.17 Interaction effect of cognitive support and communication medium on process variables (Job evaluation task)

Dependent Variable | CS & EC Mean (s.d., n) | CS & VC Mean (s.d., n) | NC & EC Mean (s.d., n) | NC & VC Mean (s.d., n) | F | p
Total number of ideas mentioned during discussion in support of importance of items | 12.33 (10.41, 6) | 43.29 (19.60, 7) | 34.20 (20.77, 10) | 61.00 (24.46, 9) | 1.15 | .293
Total number of ideas mentioned during discussion in refutation of importance of items | 4.00 (3.10, 6) | 9.57 (6.60, 7) | 7.00 (5.58, 10) | 8.00 (4.95, 9) | 1.43 | .242
Number of ideational connections with ideas mentioned during earlier discussion | 2.67 (2.50, 6) | 19.57 (8.22, 7) | 10.40 (7.92, 10) | 24.44 (13.12, 9) | 2.62 | .117
Total number of times a specific rating suggested | 65.67 (14.60, 6) | 54.57 (22.07, 7) | 74.40 (20.09, 10) | 55.56 (17.73, 9) | .32 | .578
Procedural comments | 64.83 (16.79, 6) | 65.00 (27.70, 7) | 92.30 (28.02, 10) | 86.11 (29.61, 9) | .11 | .745
* p<.10, ** p<.05, *** p<.01.

5.3.4 Correlational Analysis

To investigate the relationship of the process variables with the outcomes and perceptual variables, a Pearson correlational analysis was conducted. The results are summarized in Table 5.18.
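The Pearson coefficients reported in these correlational analyses measure the linear association between two per-group measurements. A self-contained sketch (the function name and the two lists of values are illustrative stand-ins, not the study's data):

```python
# Pearson product-moment correlation, computed from scratch.
from math import sqrt

def pearson_r(x, y):
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

# Hypothetical per-group values: ideas mentioned vs. availability bias.
ideas = [12, 30, 45, 22, 51, 18]
bias = [3.1, 4.9, 6.4, 4.0, 7.2, 3.5]
print(round(pearson_r(ideas, bias), 2))
```

A positive r means the two measurements rise together across groups; the significance stars in the tables come from testing r against zero given the number of groups.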
Several significant correlations were revealed.

Table 5.18 Correlations of process variables with outcomes and perceptual variables (Job evaluation task)

Process variable | Avail. Bias | Total time | Scores of high-avail. items | Scores of low-avail. items | Perceived Task Participation | Socio-Emotional Behavior | Satisfaction with Outcome | Satisfaction with Process | Perceived Informal Leadership
Total number of ideas mentioned during discussion in support of importance of items | .38** | -.13 | -.15 | -.45** | .05 | -.29 | .13 | -.06 | -.33*
Total number of ideas mentioned during discussion in refutation of importance of items | .04 | .12 | -.38** | -.27 | .13 | .03 | -.00 | -.17 | -.29
Number of ideational connections with ideas generated during electronic brainstorming | -.29 | .48** | .02 | .28 | .19 | -.14 | .26 | .09 | -.06
Number of ideational connections with ideas mentioned during earlier discussion | .37** | -.09 | -.21 | -.48** | .13 | -.30* | .20 | .01 | -.44**
Total number of times a specific rating suggested | -.07 | .36** | -.18 | -.04 | -.02 | .46** | -.30 | -.34* | .08
Procedural comments | .13 | .07 | -.24 | -.27 | .08 | .34* | -.27 | -.32* | .09
* p<.10, ** p<.05, *** p<.01.

“Availability bias” was positively correlated with “total number of ideas mentioned during discussion in support of importance of items” (r=.38) and “number of ideational connections with ideas mentioned during earlier discussion” (r=.37). These correlations echo the findings that both of these process variables were reduced by cognitive support and by electronic communication (versus verbal communication), and that both independent variables led to lower “availability bias”. One possible explanation for this set of consistent findings is that these ideas and their connections (identified mainly in groups which did not have electronic brainstorming and in verbally-communicating groups) were associated with the high-availability items. Further exploration of the data showed the total number of ideas mentioned to be positively correlated with the number of ideas mentioned for the high-availability items (r=.93); also, the total number of ideational connections had a positive correlation with those pertaining to the high-availability items (r=.74).
Whereas the no-cognitive-support and verbally-communicating groups mentioned more ideas during discussion and made more ideational connections, they did not explore the low-availability items any more deeply than the high-availability items. Consequently, those ideas indirectly contributed toward an already-obvious understanding of the group which, in turn, led to increased biases. The finding thus reinforces the usefulness of the cognitive support (i.e., the electronic brainstorming tool) over and above an ordinary session of verbal idea generation and discussion without the aid, as well as the greater effectiveness of electronic communication over verbal communication.

Positive correlations were found between “time to completion” and “number of ideational connections with ideas generated during electronic brainstorming” (r=.48), and between “time to completion” and “total number of times a specific rating suggested” (r=.36). Logically, more comments made in a meeting increased the total time taken. “Total scores of high-availability items” was negatively correlated with “total number of ideas mentioned during discussion in refutation of importance of items” (r=-.38). In other words, more “negative” ideas led to lower scores being assigned to the importance of the high-availability items. At the same time, “total scores of low-availability items” exhibited a negative correlation with “total number of ideas mentioned during discussion in support of importance of items” (r=-.45). The latter was found to be associated with no-cognitive-support groups and verbally-communicating groups. It has been demonstrated earlier that the ideas generated during discussion did not help to explore the low-availability items any more than the high-availability items; correspondingly, mentioning more of them only served to reinforce the gap between the two categories of items.
Consistent with this explanation, “total scores of low-availability items” was also found to have a negative correlation with “number of ideational connections with ideas mentioned during earlier discussion” (r=-.48).

“Perceived socio-emotional behavior” was negatively correlated with “number of ideational connections with ideas mentioned during earlier discussion” (r=-.30), and positively correlated with “total number of times a specific rating suggested” (r=.46) and “procedural comments” (r=.34). “Satisfaction with process” had negative correlations with “total number of times a specific rating suggested” (r=-.34) and “procedural comments” (r=-.32). The exchange of specific ratings, reminiscent of a “negotiation dance”, caused a higher perception of negative socio-emotional behavior and lower satisfaction with the process. Preoccupation with procedural matters also appeared to have similar effects. “Perceived informal leadership” was found to be negatively correlated with “total number of ideas mentioned during discussion in support of importance of items” (r=-.33) and “number of ideational connections with ideas mentioned during earlier discussion” (r=-.44). The mentioning of ideas during discussion, much like the brainstorming process, benefits from the participation and input of all members. Consequently, leadership behavior would not support, and at the same time could not be supported by, such a participatory and democratic process.

5.3.5 Exploratory Analysis

Although the number of ideas mentioned during discussion was smaller in groups with electronic brainstorming than in groups without it, the total number of ideas generated (i.e., including those generated during electronic brainstorming, where applicable) was far greater in supported groups (mean=151.83, s.d.=44.70) than in non-supported groups (mean=54.37, s.d.=28.68) (F=78.41, p=.000).
This vast difference in information search scope helps to explain why bias reduction was better achieved with the electronic brainstorming tool.

To find out more about how electronic communication, as compared to verbal communication, led to lower availability bias, the range of ratings suggested was analyzed, with the underlying premise being that electronically-communicating groups were more divergent in their consideration of possible solutions and less indulgent in groupthink. Supporting evidence was found in this regard. The range of ratings suggested was significantly higher in electronically-communicating groups (mean=13.53, s.d.=3.69) than in verbally-communicating groups (mean=6.81, s.d.=2.46) (F=33.67, p=.000). Furthermore, a significant correlation was found between the range variable and availability bias (r=-.37). In other words, the greater the range explored, the lower the availability bias.

5.4 Summary

This chapter has presented the results of the data analysis, both in relation to the hypotheses and using an exploratory approach. The major findings are summarized as follows:

(1) Cognitive support was effective in reducing both the representativeness bias and the availability bias.

(2) Electronic communication (versus verbal communication) helped to reduce the availability bias but was ineffective against the representativeness bias.

(3) Cognitive support and communication medium have different impacts on the group interaction process. Overall, electronic communication (versus verbal communication) shrank the interaction process, so that numerically there were fewer interactions. However, at least in the case of the job evaluation task, activities relating to the proposal of solutions increased substantially, and the range of ratings proposed was significantly wider in electronically-communicating groups than in verbally-communicating groups.
On the other hand, cognitive support induced a different combination of the types of references, which was instrumental in reducing biases. For example, groups with cognitive support referred less to the use of only one information source.

(4) There was a relationship between the process variables and both the outcomes and perceptual variables, as expected. The representativeness bias was reduced as a result of groups paying more attention to the base rate and employing more correct strategies. The role of the problem representation tool in affecting the bias was also indicated. The availability bias was reduced as a result of an expanded information search scope and a reduction in groupthink, which led to increased awareness of issues whose availability was otherwise low and not obvious. The tendency of no-cognitive-support groups and verbally-communicating groups to ignore low-availability issues in favor of further exploring what was already obvious was seen as contributing toward increased availability bias.

CHAPTER 6
DISCUSSION OF THE FINDINGS

This chapter discusses the implications of the findings reported in the previous chapter. First, for each task (and judgment bias), the “how” and “why” questions related to the influence of group support system (GSS) components are addressed. Second, limitations of the research are discussed. Third, directions for future research are proposed.

6.1 Probability Estimation Task

6.1.1 Representativeness Bias

It has been found in previous research that individuals provided with the probability map, a problem-representation debiasing tool, achieved probability estimates with less systematic bias [Cole, 1989; Roy and Lerch, 1993]. The current study provides clear evidence that the problem representation tool (PRT) is also capable of reducing the representativeness bias in group judgment situations; groups provided with PRT reported significantly lower representativeness bias than groups not provided with PRT.
An exploration of the judgment processes reveals the mechanism through which the bias reduction was achieved.

Base-rate fallacy research has suggested that decision makers have a tendency to ignore base rates in favor of diagnostic information, thus causing the systematic judgment bias known as the “representativeness bias” [Bar-Hillel, 1980]. The current research provides evidence for this phenomenon. Process analysis shows that groups which erred more not only made more references to diagnostic information, but also referred more to the use of only one information source. Therefore, more references to using only a single piece of information (i.e., diagnostic information or base rate) in deriving the solution resulted in higher error. Results also indicate that more references to the problem representation tool contributed toward better solutions, suggesting that the tool was (at least partly) responsible for the improvements in solutions. On the other hand, groups which were preoccupied with procedural matters were less capable of generating correct solutions, as this distracted group members from devoting their attention toward discovering a correct solution.

The provision of PRT affects the relative references to the base rate versus the diagnostic information, and to dual-information-source integration versus single-information-source utilization, in the direction favorable to a less-biased solution. The initial imbalance between group members' (lower) attention paid to the base rate and their (higher) attention paid to the diagnostic information became more balanced. As a consequence, the relatively high frequency of using only a single information source (versus two information sources) in producing solutions gave way to a greater occurrence of utilizing both pieces of information (i.e., base rate and diagnostic information).

Within groups provided with PRT, the use of PRT itself also helped to reshape the information usage pattern.
Greater use of PRT causes group members to make more references to the base rate. A more balanced focus on the two pieces of information leads to more correct information integration strategies that accommodate both information sources rather than just one of them.

The reduction in the representativeness bias is seen as a result of changes in multiple aspects of the judgment process. In particular, it is accompanied by an increase in the use of PRT, a decrease in diagnostic-information references, and a decrease in single-information-source references. In sum, the provision and use of PRT helps group members to become more aware of the saliency of the base rate (versus diagnostic information), and thus to take both pieces of information into account in generating a more accurate (i.e., less biased) solution.

As expected, electronic communication, as compared to verbal communication, has not led to an improvement in probability estimation. Although electronic communication addresses the dimension of interpersonal information integration, information sharing among members is not the key to reducing the representativeness bias. Building on previous findings that the use of groups has not reduced the representativeness bias [Argote et al., 1986, 1990; Brightman et al., 1983; Stasson et al., 1988], the real bottleneck in the probability estimation task, an on-line task [Hastie and Park, 1986], appears to be associated with the intrapersonal, rather than interpersonal, aspect of information integration.

An exploration of the judgment processes shows that the influence of electronic communication produces a change in the pattern of the group process that is radically different from the change exhibited under the influence of PRT. In general, the amounts of the various types of references made are reduced as a result of the slower typing speed associated with electronic communication.
Consequently, the conditions necessary for the reduction of judgment bias, i.e., an increase in awareness of the saliency of the base rate and the use of dual information sources, are not realized, thus preventing any improvement in judgment accuracy from occurring.

6.1.2 User Perceptions

Some interaction effects on user perceptions have been observed. The hypothesized increase in "perceived socio-emotional behavior" and decrease in "satisfaction with process" due to PRT are found only in verbally-communicating groups. The impact of PRT on user perceptions is largely achieved through its expansion of problem-solving approaches (i.e., information integration methods involving dual information sources), thus creating a more complex group interaction that involves more interpretation of the new dimensions as well as more choices to be made. Process analysis shows that increased references to more correct problem-solving strategies were associated with a higher degree of perceived negative socio-emotional behavior. A similar association is also found between more frequent references to more correct problem-solving strategies or the problem representation tool and a lower degree of satisfaction with the process. Consequently, in verbally-communicating groups, we see the hypothesized effects on "perceived socio-emotional behavior" and "satisfaction with process".

Contrary to our expectation, electronic communication causes an increase in the perception of informal leadership behavior in groups without cognitive support. This result is almost the opposite of what the GSS literature suggests. A key difference between this study and past studies, however, is that an effective and realistic reward mechanism is in place here. Groups with better performance are better rewarded. Consequently, rather than lying back and participating in a "democratic" discussion, group members who think they know better take charge of the situation, causing a rise in leadership behavior.
Another difference is that this study utilizes a non-anonymous procedure in the electronic communication. Therefore, the suggested dissociation between ideas and their proposers did not occur in this case. Although some have suggested that such a practice may intimidate group members, who fear being criticized for poor ideas, such worry should become insignificant when there is a common and engaging goal for the group to achieve. Moreover, this procedure has the advantage of preventing free-riding by some group members [Nunamaker et al., 1991]. Combining the above features with the parallel-input procedure made possible by electronic communication, group members who were more inclined to exercise influence could do so at any time they wished, without having to wait for others to stop speaking, as in the case of verbal communication. Consequently, we see an increase in perceived leadership behavior with electronic communication, as compared to verbal communication, in groups without cognitive support.

A higher degree of task participation was perceived by group members when the group was able to fathom the use of a more complex, as well as more correct, strategy in solving the problem. This finding is encouraging, as it provides an indirect indication of decision makers' ability to implicitly recognize the superiority of better judgment strategies when the latter exist.

6.2 Job Evaluation Task

6.2.1 Availability Bias

The current study shows that the electronic brainstorming tool is helpful in reducing the availability bias.
The structuring mechanisms provided by an electronic brainstorming system (EBS) -- parallel input, collective memory, and serially retrievable output -- work around the limitations of the human information processing system [Nagasundaram and Dennis, 1993]. Since the early days of GSS development, EBS has more often than not formed an integral part of GSS design [DeSanctis, Snyder, and Poole, 1994], and has been found to increase the number of ideas generated. The current research has demonstrated that EBS, through widening the information search scope, helps to reduce availability bias. This finding both reaffirms the importance of EBS as a component of GSS and identifies a new function for the tool.

Results indicate that GSS reduces availability bias by widening the information search scope, particularly with respect to issues with low availability. Whereas the no-cognitive-support groups and the verbally-communicating groups mentioned more ideas during discussion and made more ideational connections, they did not explore the low-availability items any more deeply; consequently, those ideas indirectly contributed toward an already-obvious understanding of the group which, in turn, led to increased biases. On the other hand, the electronic brainstorming process, with its unique structuring mechanisms, forces group members to explore each issue in depth, regardless of its degree of availability. The finding thus reinforces the usefulness of the cognitive support (i.e., the electronic brainstorming tool) over and above an ordinary session of idea generation and discussion without the aid.

The current study has shown that electronic communication, which addresses the interpersonal aspect of information integration, helps to reduce availability bias when compared with verbal communication. This study has also provided evidence that the bias reduction was achieved through a reduction of groupthink.
Groupthink has been defined by Janis [1982] as a mode of thinking that people engage in when they are involved in a cohesive group and the members place greater emphasis on unanimity and consensus than on the need to realistically explore and appraise alternative solutions. It is associated with, among other things, poor information search, an incomplete survey of alternatives, and selective bias in processing the information at hand [Janis, 1982]. In the context of this study, a greater extent of groupthink is exhibited by a smaller range of potential solutions explored, such that group members did not propose or suggest solutions that differed much from each other's. By enabling the group to suggest and explore a wider range of possible solutions before reaching a final agreement, electronic communication greatly reduces the chance for the group to fall into groupthink. Two properties of the electronic communication channel are identified as being particularly helpful in causing this to happen.

First, the "social presence" of the electronic communication channel is lower than that of the verbal communication channel [Rice, 1992]. "Social presence" is the degree to which a medium is perceived as conveying the actual physical presence of the communicating participants [Short et al., 1976]. This social presence depends not only on the words conveyed during communication, but also upon a range of nonverbal cues (facial expression, direction of gaze, posture, attire, and physical distance) and many verbal cues (timing, pauses, accentuations, tonal inflections, etc.) [Argyle, 1969; Birdwhistell, 1970].
Because of the lower social presence in electronically-communicating groups, group members need not be as sensitive or wary in making suggestions that run counter to those made by others.

Second, the parallel communication made possible by the electronic communication channel frees group members from having to follow a strictly serial sequence in the communication process. In general, the serial nature of verbal communication requires that each comment follow logically from the preceding one, thus making it easier for the group to indulge in groupthink [Janis, 1982].

The above findings on the reduction of availability bias have broad implications from an organizational perspective since, unlike the representativeness bias, which is pertinent to probability estimation problems, the availability bias is embedded in many task contexts. For example, in reviewing job applications, employers may be more influenced by an applicant's experiences that are more familiar to them (e.g., if the applicant attended the same college as they did) than by events or experiences that are less familiar. While these biases may occur subconsciously, this research shows they can be addressed with conscious effort.

However, an undesirable consequence of using EBS and electronic communication is an increase in meeting time. This is a cost that potential users should take into consideration.

6.2.2 User Perceptions

Among other things, the use of electronic communication led to an increase in the number of times specific ratings were exchanged among group members; this exchange of specific ratings, reminiscent of a "negotiation dance", lowered satisfaction with the process. Being a leaner communication medium than verbal communication, electronic communication conveys fewer social and political cues. With relatively little socialization in the meeting, members of groups without a long working history may feel their social needs unmet.
Preoccupation with procedural matters also led to reduced satisfaction. This is probably due to the task being relatively well structured and its goals relatively clear. The mentioning of ideas during discussion was found to be negatively correlated with perceived leadership. The former, much like the brainstorming process, benefits from the participation and input of all members. Consequently, leadership behavior would not support, and at the same time could not be supported by, such a participatory and democratic process. This finding suggests that when idea solicitation is the goal, attention should be paid to group composition, such that groups containing an obvious hierarchy are avoided.

Some interaction effects have been found on user perceptions. Electronic communication, as compared to verbal communication, has increased group members' perception of negative socio-emotional behavior, reduced their satisfaction with the decision process, and increased their perception of informal leadership -- in groups without cognitive support. On the other hand, the communication medium has been found to have no effect in groups with cognitive support. The electronic communication channel, while useful in reducing certain process losses and bringing about other process gains [Nunamaker et al., 1991], has apparently also triggered some negative perceptions from users when used alone, perhaps because of its "unnaturalness" relative to verbal communication. However, it is interesting to note that when electronic communication was combined with cognitive support, thus producing the "full" GSS, users did not report a different level of satisfaction from those in the baseline situation.
This suggests that when user perceptions are of concern, there are certain merits in using a "full" GSS versus its individual components.

While the increase in the perception of informal leadership behavior due to electronic communication (in groups without cognitive support) is contrary to our expectation, the finding is consistent across tasks. This cross-task consistency strengthens the explanation that we have provided for the finding, in terms of the reward mechanism, the non-anonymous procedure, and parallel communication.

6.3 Limitations of the Study

Like all social science studies, this research is subject to a universal dilemma: it must try to reconcile three mutually conflicting objectives [McGrath, 1982]. It is always desirable to: (1) generalize the findings to the respective populations, the external validity goal; (2) control the independent variables that may influence the outcome, the internal validity goal; and (3) study the phenomenon of concern in a realistic setting, the realism goal. However, there is no way to meet all these objectives simultaneously [Benbasat, 1990; McGrath, 1982].

The choice of a laboratory experimental approach necessarily means that priority in this research is accorded to internal validity over external validity and high realism. The following reasons account for the preference for control over generalizability and high realism. First, the general topic under investigation, i.e., the effect of GSS on judgment biases, is relatively novel. Very little is known about the impact of GSS on judgment processes and outcomes. A laboratory is an ideal place for establishing the fundamental features and effects of the technology before using it in practice. Second, there is a need to control the group task, as it can explain as much as 50% of the variation in the group interaction process [Poole et al., 1985].
A lack of control over the group task would make it seriously difficult to discern the effect of GSS on group judgment processes and outcomes. Third, a controlled setting with relatively homogeneous subjects helps to increase the statistical power of the findings. Variation in the sample introduces possible confounds, thus reducing statistical power. Relatively speaking, students are a fairly homogeneous group and are available in sufficient numbers to give adequate statistical power for the study.

Nevertheless, the choice of the laboratory experimental approach, and of control over generalizability, leads to the following limitations. First, there is limited generalizability. A laboratory experiment with student subjects may not be typical of organizational life. An experimental group lacks the realism, ongoing nature, and interpersonal involvement of a decision-making team in an enterprise. Furthermore, only three-member groups were used in this study. Second, it is difficult to create a high sense of the real world in a laboratory setting. Student subjects may not have the same level of motivation, commitment, and involvement as an actual group faced with a real problem. To compensate for this limitation, monetary rewards contingent upon performance were used as incentives. In general, subjects treated the tasks seriously and appeared to act in a responsible fashion. Third, there is the limited sample size. This study, with 40 groups (120 subjects), has a reasonably large sample size for GSS research. However, this is still a small sample, and the results are subject to the limitations that come with small sample sizes. Fourth, the tasks used were a probability estimation problem concerning personality profile identification and a job evaluation problem involving the Position Analysis Questionnaire (PAQ). Although such task types are frequently embedded within organizational tasks, the actual tasks used may not occur frequently in organizations.
Further research with more complex task contexts, but falling within the same task types, should be conducted to test the robustness of the results with respect to the task variable. A plausible example is a financial task involving the estimation of multiple, or even multistage [Cole, 1988], posterior probabilities. Nevertheless, the novelty of the research topic requires the use of "minimal tasks" [Weick, 1965].

6.4 Future Research Directions

Several extensions are possible from this research. In terms of empirical extensions, the effect of meeting structure in the availability task can be further examined by including a verbal brainstorming condition for comparison. In the current implementation, we compared electronic brainstorming with no explicit brainstorming, since the goal in this pioneering study is to establish the usefulness of GSS for debiasing, with a "natural" group as the baseline. Future research may further distinguish between how much debiasing is achievable through the structure element and how much through the electronic element. Notwithstanding this, however, it should be pointed out that empirical evidence exists on the superiority of electronic brainstorming over verbal brainstorming.

The decision-theory model recognizes two distinct components: judgment and risky choice; the latter is where judgments are combined with utilities for outcomes to determine action choices [Libby, 1981]. A natural extension of the current research is to look at how GSS may affect choices. A more specific topic has to do with the relationship between the use of GSS and the framing effect. The framing effect refers to the differential evaluation of objectively equivalent information given different frames of reference. Thus, framing effects are observed when decision makers alter their choice among decision alternatives in response to changes in the frame of reference of these alternatives [Diamond and Lerch, 1992].
Kahneman and Tversky [1979b] attributed the cause of framing effects to the editing process: the transformation of objective information into the utility values used in selecting the best alternative. How GSS may be used to frame decision sets, as well as how GSS may affect the framing effect, are both interesting issues to examine.

Another possible extension has to do with the further development of the problem representation approach within the context of a decision support system. Utilizing the multimedia capabilities of newer computer systems, the pertinence of parameters such as the base rate may be brought to attention even more effectively. Other than the probability map, representations that may be built upon include the "signal detection curve" and the "disease detection bar" [Cole, 1988]. How different forms of presentation may impact judgment bias is an empirical question that ought to be answered.

Besides the two types of biases studied in this research, several others have also been introduced in the judgment bias literature [Kahneman, Slovic, and Tversky, 1982]. For example, anchoring and adjustment refers to the phenomenon in which judgments are adjusted from a certain anchor point. Correspondingly, different anchor points yield different estimates, which are biased toward the initial values. This bias is inherently associated with the sequential nature in which information is presented and received. GSS can be useful in such a situation by providing an effective means of representing and presenting more information at a time. The debiasing role of GSS in this and other judgment biases is worth exploring.

GSS research has so far adopted the contingency model [Benbasat and Lim, 1993]. Correspondingly, much effort has been devoted to understanding how the impact of GSS on group processes and outcomes may vary given a different task type, group size, group composition, etc.
Four categories of such moderating factors have been identified: task characteristics, group characteristics, contextual factors, and technological factors [Benbasat and Lim, 1993]. In a similar vein, the effect of GSS on judgment biases may also be moderated by these categories of variables, although, in this case, the task category is basically a reflection of the types of judgment biases themselves. Future research should therefore also be extended to investigating these moderating effects.

This research has opened the door to the investigation of a new class of GSS tools, namely, cognitive support. For the purposes of the tasks we studied, two types of cognitive support, the problem representation tool and electronic brainstorming, have been identified and researched. As the body of research expands and takes into account other judgment task types, a conceptual framework needs to be developed that would more precisely characterize the critical conceptual elements defining these tools. In the long run, this development should also help to further refine the definition of GSS, with the aim of providing more integrative support to judgment and decision-making groups.

This study has provided a good foundation, as well as a starting point, for the study of the use of GSS in judgment situations. The finding that GSS tools are effective at reducing judgment biases should be of significance both to the continuing efforts at developing a stronger theoretical basis for the impact of GSS and to the better design of group debiasing tools.

6.5 Conclusion

The current study provides clear evidence that GSS is capable of debiasing both the representativeness bias and the availability bias, two major judgment biases that have been identified. Through the provision of cognitive support and an electronic communication medium, GSS augments group members' ability to integrate information not only in the interpersonal dimension, but also in the intrapersonal dimension.
Cognitive support has been seen to be effective in dealing with both types of biases, whereas electronic communication is mainly useful for the availability bias, primarily owing to the underlying nature of that bias, which allows debiasing to work relatively better with interpersonal integration of information.

The above findings on bias reduction are important both from the perspective of the debiasing effort and from the perspective of GSS research. While group judgment has been demonstrated to differ from individual judgment (e.g., [Argote et al., 1986, 1990; Stasson et al., 1988]), nothing to date has been done on group debiasing. This gap is all the more significant given that teamwork is becoming increasingly important in organizations, which calls for the use of "improved decision-group technologies" [Huber, 1984a, 1984b]. The current study provides a clear indication, both theoretically and empirically, of how group biases may be reduced.

The current study has also provided an important extension of GSS research by establishing a linkage between the GSS literature and the judgment bias literature, which is novel and important. Furthermore, in the process of doing so, the current study makes use of a measure of decision quality that can be objectively and systematically assessed, either based on a normative mathematical model (i.e., the Bayesian model) or derived directly from subjects' responses. Owing to the nature of the tasks employed, decision quality in some GSS studies has been measured either through group members' own perceptions or by third-party judges. By employing objective measures of decision quality, the current study serves to reaffirm the usefulness of GSS and to strengthen previous findings in the GSS literature.

REFERENCES

Adelman, L.; Donnell, M.L.; Phelps, R.H.; and Patterson, J.F. An iterative Bayesian decision aid: toward improving the user-aid and user-organization interfaces.
IEEE Transactions on Systems, Man, and Cybernetics, SMC-12, 6, 1982, 733-743.

Anderson, N. Cognitive algebra: integration theory applied to social attribution. In L. Berkowitz (ed.), Advances in Experimental Social Psychology, Vol. 7, New York: Academic Press, 1974.

Argote, L.; Seabright, M.A.; and Dyer, L. Individual versus group use of base-rate and individuating information. Organizational Behavior and Human Decision Processes, 38, 1986, 65-75.

Argote, L.; Devadas, R.; and Melone, N. The base-rate fallacy: contrasting processes and outcomes of group and individual judgment. Organizational Behavior and Human Decision Processes, 46, 1990, 296-310.

Argyle, M. Social interaction. London: Methuen, 1969.

Bales, R.F. A set of categories for the analysis of small group interaction. American Sociological Review, 15, 1950, 257-263.

Bar-Hillel, M. The base-rate fallacy in probability judgments. Acta Psychologica, 44, 1980, 211-233.

Bar-Hillel, M. The base-rate fallacy controversy. In R.W. Scholz (ed.), Decision making under uncertainty, Elsevier Science Publishers B.V., North Holland, 1983, 39-61.

Bedau, H. Ethical aspects of group decision making. In W.C. Swap and Associates (eds.), Group decision making, Beverly Hills: Sage Publications, 1984, 115-150.

Benbasat, I. Laboratory experiments in information systems with a focus on individuals: a critical appraisal. In I. Benbasat (ed.), The information systems research challenge: experimental research methods, Harvard Business School, 1990, 33-47.

Benbasat, I.; DeSanctis, G.; and Nault, B.R. Empirical research in management support systems: a review and assessment. In Holsapple, C. and Whinston, A. (eds.), Recent developments in decision support systems, NATO ASI Series Vol. 101, Springer-Verlag, 1993, 383-437.

Benbasat, I. and Dexter, A.S. An experimental evaluation of graphical and color-enhanced information presentation. Management Science, 31, 11, 1985, 1348-1363.

Benbasat, I. and Lim, L.H.
The effects of group, task, context, and technology variables on the usefulness of group support systems: a meta-analysis of experimental studies. Small Group Research, 24, 4, 1993, 430-462.

Birdwhistell, R.L. Kinesics and context. Philadelphia: University of Pennsylvania Press, 1970.

Bormann, E.G. Discussion and group methods: theory and practice. New York: Harper & Row, 1975.

Brightman, H.J.; Lewis, D.J.; and Verhoeven, P. Nominal and interacting groups as Bayesian information processors. Psychological Reports, 53, 1983, 101-102.

Brilhart, J.K. and Jochem, L.N. Effects of different patterns on outcomes of problem-solving discussions. Journal of Applied Psychology, 48, 1964, 175-179.

Bui, T.X. and Jarke, M. A DSS for cooperative multiple criteria group decision making. Proceedings of the 5th International Conference on Information Systems, 1984, 101-113.

Christensen-Szalanski, J.J.J. and Beach, L.R. Experience and the base-rate fallacy. Organizational Behavior and Human Performance, 29, 1982, 270-278.

Clapper, D.L.; McLean, E.R.; and Watson, R.T. An experimental investigation of the effect of a group decision support system on normative influence in small groups. Proceedings of the 12th International Conference on Information Systems, 1991, 273-282.

Cole, W.G. Three graphic representations to aid Bayesian inference. Methods of Information in Medicine, 27, 1988, 125-132.

Cole, W.G. Understanding Bayesian reasoning via graphical displays. CHI'89 Proceedings, 1989, 381-386.

Collaros, P.A. and Anderson, L.R. Effect of perceived expertness upon creativity of members of brainstorming groups. Journal of Applied Psychology, 53, 1969, 159-163.

Conover, W.J. Practical nonparametric statistics. New York, NY: John Wiley & Sons, 1971.

Daft, R.L. and Lengel, R.H. Organizational information requirements, media richness and structural design. Management Science, 32, 5, 1986, 554-571.

Daft, R.L.; Lengel, R.H.; and Trevino, L.K.
Message equivocality, media selection and manager performance: implications for information systems. MIS Quarterly, 11, 1987, 355-366.

Dawes, R.M. Representative thinking in clinical judgment. Clinical Psychological Review, 6, 1986, 425-441.

Dennis, A.R. and Gallupe, R.B. A history of group support systems empirical research: lessons learned and future directions. In L.M. Jessup and J.S. Valacich (eds.), Group Support Systems. New York: Macmillan, 1993, 59-77.

Dennis, A.R.; Nunamaker, J.F., Jr.; and Vogel, D.R. A comparison of laboratory and field research in the study of electronic meeting systems. Journal of Management Information Systems, 7, 3, 1991, 107-135.

Dennis, A.R. and Valacich, J.S. Computer brainstorms: more heads are better than one. Journal of Applied Psychology, 78, 4, 1993, 531-537.

DeSanctis, G. and Gallupe, R.B. Group decision support systems: a new frontier. Data Base, Winter 1985, 3-10.

DeSanctis, G. and Gallupe, R.B. A foundation for the study of group decision support systems. Management Science, 33, 5, 1987, 589-609.

DeSanctis, G.; Snyder, J.R.; and Poole, M.S. The meaning of the interface: a functional and holistic evaluation of a meeting software system. Decision Support Systems, 11, 1994, 319-335.

Diamond, L. and Lerch, F.J. Fading frames: data presentation and framing effects. Decision Sciences, 23, 1992, 1050-1071.

Diehl, M. and Stroebe, W. Productivity loss in brainstorming groups: toward the solution of a riddle. Journal of Personality and Social Psychology, 53, 3, 1987, 497-509.

Diehl, M. and Stroebe, W. Productivity loss in idea-generating groups: tracking down the blocking effect. Journal of Personality and Social Psychology, 61, 3, 1991, 392-403.

Dunnette, M.D. Are meetings any good for solving problems? Personnel Administration, March-April, 1964, 12-29.

Easton, G.K.; George, J.F.; Nunamaker, J.F., Jr.; and Pendergast, M.O. Using two different electronic meeting system tools for the same task: an experimental comparison.
Journal of Management Information Systems, 7, 1, 1990, 85-100.

Eddy, D.M. Probabilistic reasoning in clinical medicine: problems and opportunities. In D. Kahneman, P. Slovic, and A. Tversky (eds.), Judgment under uncertainty: heuristics and biases, New York: Cambridge University Press, 1982, 249-267.

Edwards, W. Behavioral decision theory. Annual Review of Psychology, 12, 1961, 473-498.

Einhorn, H.J.; Hogarth, R.M.; and Klempner, E. Quality of group judgment. Psychological Bulletin, 84, 1, 1977, 158-172.

Elsaesser, C. Explanation of Bayesian conditioning for decision support systems. Unpublished doctoral dissertation, Carnegie Mellon University, 1989.

Ericsson, K.A. and Simon, H.A. Verbal reports as data. Psychological Review, 87, 1980, 215-253.

Eveland, J.D. and Bikson, T.K. Work group structures and computer support: a field experiment. Transactions on Office Information Systems, 5, 1988, 354-379.

Fischhoff, B. and Bar-Hillel, M. Focusing techniques: a shortcut to improving probability judgments? Organizational Behavior and Human Performance, 34, 1984, 175-194.

Fischhoff, B.; Slovic, P.; and Lichtenstein, S. Subjective sensitivity analysis. Organizational Behavior and Human Performance, 23, 1979, 339-359.

Forbringer, L.R. The role of the availability heuristic in biasing task description ratings of the Position Analysis Questionnaire. Unpublished doctoral dissertation, University of Akron, 1991.

Foschi, M. and Lawler, E.J. (eds.) Group processes: sociological analyses. Chicago: Nelson-Hall, 1994.

Gallupe, R.B.; Bastianutti, L.M.; and Cooper, W.H. Unblocking brainstorms. Journal of Applied Psychology, 76, 1, 1991, 137-142.

Gallupe, R.B.; Cooper, W.H.; Grise, M.; and Bastianutti, L.M. Blocking electronic brainstorms. Journal of Applied Psychology, 79, 1, 1994, 77-86.

Gallupe, R.B.; Dennis, A.R.; Cooper, W.H.; Valacich, J.S.; Bastianutti, L.M.; and Nunamaker, J.F. Electronic brainstorming and group size.
Academy of Management Journal, 35, 2, 1992, 350-369.

Gallupe, R.B.; DeSanctis, G.; and Dickson, G.W. Computer-based support for group problem finding: an experimental investigation. MIS Quarterly, 12, 2, 1988, 277-296.

George, J.F.; Easton, G.K.; Nunamaker, J.F., Jr.; and Northcraft, G.B. A study of collaborative group work with and without computer based support. Information Systems Research, 1, 4, 1990, 394-415.

Gigerenzer, G.; Hell, W.; and Blank, H. Presentation and content: the use of base rates as a continuous variable. Journal of Experimental Psychology, 1, 3, 1988, 513-525.

Ginosar, Z. and Trope, Y. The effects of base rates and individuating information on judgments about another person. Journal of Experimental Social Psychology, 16, 1980, 228-242.

Gray, P. Initial observations from the decision room project. Proceedings of the Third International Conference on Decision Support Systems, 1983, 135-138.

Green, S.G. and Taber, T.D. The effects of three social decision schemes on decision group processes. Organizational Behavior and Human Performance, 25, 1980, 97-106.

Hammond, K.R.; McClelland, G.H.; and Mumpower, J. Human judgment and decision making: theories, methods, and procedures. Praeger, 1980.

Harmon, J. and Rohrbaugh, J. Social judgment analysis and small group decision making: cognitive feedback effects on individual and collective performance. Organizational Behavior and Human Decision Processes, 46, 1990, 34-54.

Hastie, R. and Park, B. The relationship between memory and judgment depends on whether the judgment task is memory-based or on-line. Psychological Review, 93, 3, 1986, 258-268.

Hiltz, S.R. and Turoff, M. The network nation. Menlo Park, CA: Addison-Wesley, 1978.

Ho, T.H.; Raman, K.S.; and Watson, R.T. Group decision support systems: the cultural factor. Proceedings of the 10th International Conference on Information Systems, 1989, 119-129.

Hoffman, L.R.
Homogeneity of member personality and its effect on group problem solving. Journal of Abnormal and Social Psychology, 58, 1959, 27-32.

Hoffman, L.R. Group problem solving. In L. Berkowitz (ed.), Advances in experimental psychology, Vol. 2, New York: Academic Press, 1965.

Hogarth, R.M. Judgement and choice: the psychology of decision. New York: John Wiley & Sons, 1980.

Huber, G.P. The nature and design of post-industrial organizations. Management Science, 30, 8, 1984a, 928-951.

Huber, G.P. Issues in the design of group decision support systems. MIS Quarterly, 8, 3, 1984b, 195-204.

Janis, I. Groupthink: psychological studies of policy decisions and fiascoes. Boston: Houghton Mifflin, 1982.

Janis, I. and Mann, L. Decision making: a psychological analysis of conflict, choice, and commitment. New York: The Free Press, 1977.

Joyce, E.J. and Biddle, G.C. Are auditors' judgments sufficiently regressive? Journal of Accounting Research, 19, 2, 1981, 323-349.

Kahneman, D. Judgment and decision making: a personal view. Psychological Science, 2, 3, 1991, 142-145.

Kahneman, D.; Slovic, P.; and Tversky, A. (eds.) Judgment under uncertainty: heuristics and biases. New York: Cambridge University Press, 1982.

Kahneman, D. and Tversky, A. Subjective probability: a judgment of representativeness. Cognitive Psychology, 3, 1972, 430-454.

Kahneman, D. and Tversky, A. On the psychology of prediction. Psychological Review, 80, 1973, 237-251.

Kahneman, D. and Tversky, A. Intuitive predictions: biases and corrective procedures. In S.G. Makridakis (ed.), TIMS Studies in the Management Sciences, Vol. 12, 1979a, 313-327.

Kahneman, D. and Tversky, A. Prospect theory: an analysis of decisions under risk. Econometrica, 47, 1979b, 263-291.

Keppel, G. Design and analysis: a researcher's handbook (2nd ed.). Englewood Cliffs, NJ: Prentice-Hall, 1982.

Lamm, H. and Trommsdorff, G. Group versus individual performance on tasks requiring ideational proficiency (brainstorming): a review.
European Journal of Social Psychology, 3, 4, 1973, 361-388.

Lewis, F. Facilitator: a microcomputer decision support system for small groups. Unpublished doctoral dissertation, University of Louisville, 1982.

Libby, R. Accounting and human information processing: theory and applications. Englewood Cliffs, NJ: Prentice-Hall, 1981.

Lichtenstein, S.; Slovic, P.; Fischhoff, B.; Layman, M.; and Combs, B. Judged frequency of lethal events. Journal of Experimental Psychology: Human Learning and Memory, 4, 1978, 551-578.

Lim, F.J. and Benbasat, I. A communication-based framework for group interfaces in computer-supported collaboration. Proceedings of the 24th Annual Hawaii International Conference on System Sciences, Vol. 3, 1991.

Lim, L.H. and Benbasat, I. A theoretical perspective of negotiation support systems. Journal of Management Information Systems, 9, 3, 1993, 27-44.

Lim, L.H.; Raman, K.S.; and Wei, K.K. Interacting effects of GDSS and leadership. Decision Support Systems, 12, 1994, 199-211.

McCormick, E.J. Job and task analysis. In M.D. Dunnette (ed.), Handbook of industrial-organizational psychology. Chicago, IL: Rand McNally, 1976, 651-696.

McCormick, E.J.; Jeanneret, P.R.; and Mecham, R.C. A study of job characteristics and job dimensions as based on the Position Analysis Questionnaire. Journal of Applied Psychology, 56, 1972, 347-368.

McGrath, J.E. Dilemmatics: the study of research choices and dilemmas. In McGrath, J.E.; Martin, J.; and Kulka, R.A. (eds.), Judgment calls in research. Beverly Hills, CA: Sage, 1982, 69-102.

McGrath, J.E. Groups: interaction and performance. Englewood Cliffs, NJ: Prentice-Hall, 1984.

McGrath, J.E. and Hollingshead, A.B. Putting the "group" back in group support systems: some theoretical issues about dynamic processes in groups with technological enhancements. In Jessup, L.M. and Valacich, J.S. (eds.), Group support systems: new perspectives. New York: Macmillan, 1993, 78-96.

McGuire, T.W.; Kiesler, S.; and Siegel, J.
Group and computer-mediated discussion effects in risk decision making. Journal of Personality and Social Psychology, 52, 1987, 917-930.

Mecham, R.C.; McCormick, E.J.; and Jeanneret, P.R. Position analysis questionnaire user's manual. Logan, UT: PAQ Services, Inc., 1977.

Mullen, B.; Johnson, C.; and Salas, E. Productivity loss in brainstorming groups: a meta-analytic integration. Basic and Applied Social Psychology, 12, 1, 1991, 3-23.

Murphy, G.L. and Medin, D.L. The role of theories in conceptual coherence. Psychological Review, 92, 1985, 289-316.

Nagasundaram, M. and Dennis, A. When a group is not a group: the cognitive foundation of group idea generation. Small Group Research, 24, 4, 1993, 463-489.

Neter, J.; Wasserman, W.; and Kutner, M.H. Applied linear statistical models: regression, analysis of variance, and experimental design (2nd ed.). Homewood, IL: Irwin, 1985.

Newell, A. and Simon, H.A. Human problem solving. Englewood Cliffs, NJ: Prentice-Hall, 1972.

Nunamaker, J.F., Jr.; Dennis, A.R.; Valacich, J.S.; Vogel, D.R.; and George, J.F. Electronic meeting systems to support group work. Communications of the ACM, 34, 7, 1991, 40-61.

Osborn, A.F. Applied imagination: principles and procedures of creative thinking (2nd ed.). New York: Scribner, 1957.

Payne, J.W. Task complexity and contingent processing in decision making: an information search and protocol analysis. Organizational Behavior and Human Performance, 16, 1976, 366-387.

Penland, P. and Fine, S. Group dynamics and individual development. New York: Marcel Dekker, 1974.

Pollatsek, A.; Well, A.D.; Konold, C.; Hardiman, P.; and Cobb, G. Understanding conditional probabilities. Organizational Behavior and Human Decision Processes, 40, 1987, 255-269.

Poole, M.S. Decision development in small groups, II: A study of multiple sequences in decision making. Communication Monographs, 50, 3, 1983a, 206-232.

Poole, M.S. Decision development in small groups, III: A multiple sequence model of group decision development.
Communication Monographs, 50, 4, 1983b, 321-341.

Poole, M.S.; Siebold, D.R.; and McPhee, R.D. Group decision-making as a structurational process. Quarterly Journal of Speech, 71, 1985, 74-102.

Putnam, L. Procedural messages and small group work climates: a lag sequential analysis. In Burgoon (ed.), Communication Yearbook 5. New Brunswick, NJ: Transaction Books, 1981, 331-350.

Rice, R.E. Mediated group communication. In Rice, R.E. and associates, The new media: communication, research and technology. Beverly Hills, CA: Sage, 1984, 129-154.

Rice, R.E. Task analyzability, use of new media, and effectiveness: a multi-site exploration of media richness. Organization Science, 3, 4, 1992, 475-500.

Roy, M.C. and Lerch, F.J. Overcoming ineffective representations in base-rate problems. Working paper, 1993.

Sambamurthy, V.; Poole, M.S.; and Kelly, J. The effects of variations in GDSS capabilities on decision-making processes in groups. Small Group Research, 24, 4, 1993, 523-546.

Schultz, B. Characteristics of emergent leaders of continuing problem-solving groups. Journal of Psychology, 88, 1974, 167-173.

Schultz, B. Predicting emergent leaders: An exploratory study of the salience of communicative functions. Small Group Behavior, 9, 1978, 9-14.

Scott, W.A. Reliability of content analysis: the case of nominal scale coding. Public Opinion Quarterly, 19, 1955, 321-325.

Sengupta, K. and Te'eni, D. Cognitive feedback in GDSS: improving control and convergence. MIS Quarterly, March 1993, 87-113.

Shepherd, C.R. Small groups: some sociological perspectives. San Francisco: Chandler Publishing Co., 1964.

Short, J.; Williams, E.; and Christie, B. The social psychology of telecommunications. London: John Wiley & Sons, 1976.

Siegel, J.; Dubrovsky, V.; Kiesler, S.; and McGuire, T.W. Group processes in computer-mediated communication. Organizational Behavior and Human Decision Processes, 37, 1986, 157-187.

Slovic, P. and Lichtenstein, S.
Comparison of Bayesian and regression approaches to the study of information processing in judgment. In L. Rappoport and D. Summers (eds.), Human judgment and social interaction. New York: Holt, Rinehart, and Winston, 1973.

Sniezek, J.A. and Henry, R.A. Accuracy and confidence in group judgment. Organizational Behavior and Human Decision Processes, 43, 1989, 1-28.

Sniezek, J.A. and Henry, R.A. Revision, weighting, and commitment in consensus group judgment. Organizational Behavior and Human Decision Processes, 45, 1990, 66-84.

Sproull, L. and Kiesler, S. Reducing social context cues: electronic mail in organizational communication. Management Science, 32, 1986, 1492-1512.

Stasson, M.F.; Ono, K.; Zimmerman, S.K.; and Davis, J.H. Group consensus processes on cognitive bias tasks: a social decision scheme approach. Japanese Psychological Research, 30, 2, 1988, 68-77.

Steeb, R. and Johnston, S.C. A computer-based interactive system for group decision making. IEEE Transactions on Systems, Man, and Cybernetics, SMC-11, 1981, 544-552.

Steiner, I.D. Group process and productivity. New York: Academic Press, 1972.

Stogdill, R.M. Handbook of leadership: a survey of theory and research. New York: Free Press, 1974.

Taylor, D.W.; Berry, P.C.; and Block, C.H. Does group participation when using brainstorming facilitate or inhibit creative thinking? Administrative Science Quarterly, 3, 1958, 23-47.

Todd, P. and Benbasat, I. Process tracing methods in decision support systems research: exploring the black box. MIS Quarterly, 11, 4, 1987, 493-512.

Trevino, L.K.; Lengel, R.H.; and Daft, R.L. Media symbolism, media richness, and media choice in organizations. Communication Research, 14, 5, 1987, 553-574.

Tripathi, K.N. Effect of contingency and timing of reward on intrinsic motivation. The Journal of General Psychology, 118, 1991, 97-105.

Turoff, M. and Hiltz, S.R. Computer support for group versus individual decisions. IEEE Transactions on Communications, 30, 1, 1982, 82-91.

Tversky, A. and Kahneman, D.
Availability: a heuristic for judging frequency and probability. Cognitive Psychology, 5, 1973, 207-232.

Tversky, A. and Kahneman, D. Judgment under uncertainty: heuristics and biases. Science, 185, 1974, 1124-1131.

Tversky, A. and Kahneman, D. Causal schemas in judgments under uncertainty. In M. Fishbein (ed.), Progress in social psychology, 1. Hillsdale, NJ: Erlbaum, 1980.

Tversky, A. and Kahneman, D. Evidential impact of base rates. In Kahneman, D.; Slovic, P.; and Tversky, A. (eds.), Judgment under uncertainty: heuristics and biases. New York: Cambridge University Press, 1982, 153-160.

Valacich, J.S.; Dennis, A.R.; and Connolly, T. Idea generation in computer-based groups: a new ending to an old story. Organizational Behavior and Human Decision Processes, 57, 1994, 448-467.

von Neumann, J. and Morgenstern, O. Theory of games and economic behavior (2nd ed.). Princeton, NJ: Princeton University, 1947.

Watson, R.T. A study of group decision support system use in three- and four-person groups for a preference allocation decision. Unpublished doctoral dissertation, University of Minnesota, 1987.

Watson, R.T.; DeSanctis, G.; and Poole, M.S. Using a GDSS to facilitate group consensus: some intended and unintended consequences. MIS Quarterly, 12, 3, 1988, 463-478.

Weick, K.E. Laboratory experimentation with organizations. In March, J.G. (ed.), Handbook of organizations. Chicago: Rand McNally, 1965, 194-260.

Weisberg, S. Applied linear regression. New York, NY: John Wiley, 1980.

Weisskopf-Joelson, E. and Eliseo, T.S. An experimental study of the effectiveness of brainstorming. Journal of Applied Psychology, 45, 1961, 45-49.

Wells, G.L. and Harvey, J.H. Naive attributors' attributions and predictions: what is informative and when is an effect an effect? Journal of Personality and Social Psychology, 36, 1978, 483-490.

Williams, K.; Harkins, S.; and Latane, B. Identifiability as a deterrent to social loafing: two cheering experiments.
Journal of Personality and Social Psychology, 40, 1981, 303-311.

Wright, W. An empirical test of a Bayesian decision support procedure in a financial context. Proceedings of the International Conference on Information Systems, 1983, 241-249.

Zigurs, I. Effect of computer support on influence attempts. Unpublished doctoral dissertation, University of Minnesota, 1988.

Zigurs, I.; Poole, M.S.; and DeSanctis, G. A study of influences in computer-mediated communication. MIS Quarterly, 12, 4, 1988, 625-644.

APPENDIX A
INTERACTION CODING SCHEMES

This appendix contains the coding schemes adopted in this research. The scheme used for coding the substantive communication for the probability estimation task, adapted from [Argote et al., 1990], is shown in Table A.1. It consists of five categories indicating the types of information source referred to by group members, their use of single or dual pieces of information in producing outputs, and their references about the provided tool.

Table A.1 Coding scheme used for substantive communication for probability estimation task

1. References to the diagnostic/individuating information
   a. Problem-stated diagnostic information
   b. Subject-generated diagnostic information

2. References to the base rate
   a. Problem-stated base rate
   b. Subject-generated base rate

3. References to integrating rules involving the use of a single information source (with no mention of tool)
   a. Single piece of information - relevant
   b. Single piece of information - irrelevant

4. References to integrating rules involving the use of dual information sources (with no mention of tool)
   a. Dual pieces of information, partially correct
   b. Dual pieces of information, incorrect
   c. Dual pieces of information, correct

5. References to Problem-Representation Tool
   a. Specific parts (i.e., grey or white squares, crosses)
   b. Ratio between parts - incorrect
   c. Ratio between parts - correct
   d.
General references

The scheme used for coding the substantive communication for the job evaluation task, also consisting of five categories, is shown in Table A.2. Ideas mentioned during discussion are distinguished between those intended to support the importance of the issue at hand and those intended to refute its importance. Also, ideational connections and proposals of estimates are coded.

Table A.2 Coding scheme used for substantive communication for job evaluation task

1. References to specific instances where the item-in-question is encountered in the Secretary's job
   a. Problem-stated instances (from job description)
   b. Subject-generated instances based on empirical observations
   c. Subject-generated instances based on deductive reasoning

2. References to specific instances where the item-in-question is not encountered in the Secretary's job
   a. Problem-stated instances (from job description)
   b. Subject-generated instances based on empirical observations
   c. Subject-generated instances based on deductive reasoning

3. Expressing an opinion based on ideas generated during electronic brainstorming

4. Expressing an opinion based on ideas mentioned during earlier discussion

5. Stating a rating
   a. Higher than the final rating
   b. Lower than the final rating
   c. Equal to the final rating

Table A.3 shows the scheme used for coding the procedural communication for both tasks. This system is adapted from Putnam [1981]. Although comments are coded into five categories, it is their sum, indicating the total amount of procedural messages, that is of interest [Zigurs, 1988].

Table A.3 Coding scheme used for procedural communication

1. Initiation messages - Requesting or suggesting deadlines, agendas, or lists of activities

2. Goal-oriented messages - Requesting or making statements about group goals or group jurisdiction

3. Integrative messages - Summarizing and integrating contributions

4.
Implementation messages - Suggesting or requesting division of labor or implementation of a course of action in getting the task done

5. Process messages - Requesting or suggesting procedural direction

APPENDIX B
EXPERIMENTAL MATERIALS

This appendix contains the experimental materials used in the study, organized as follows:

Part I - Introduction
   Research Participation Form
   Consent Form
   Background information questionnaire

Part II - Training
   Problem Representation Tool
   Electronic Brainstorming Tool
   Information Exchange Tool
   Practice Problem - Probability Estimation: The Cab Problem
   Practice Problem - Job Evaluation: Insurance Underwriter

Part III - Probability Estimation Task
   Profile Tests
   Chris' Profession
   Probability Estimation
   Questionnaire I

Part IV - Job Evaluation Task
   Job Evaluation: Secretary
   Questionnaire II

Part I
Introduction

PARTICIPATION IN THE STUDY

For approximately the next three hours you will be participating in a study. We will ask you to write your name on all materials so that you may be paid for your participation. However, all information given as part of this study is strictly confidential and will not be seen by anyone other than the researchers.

We ask that you answer the questions and perform the task as conscientiously and as best you can. You can make a contribution to knowledge about decision making and information systems by undertaking your role in this study earnestly. Participate to the best of your ability. If you have any questions at any time, please feel free to ask.

Your participation consists of four parts. You will do one section at a time. When you have completed a section, please wait until the Facilitator gives you instructions for the next section.

YOUR NAME: ________________    DATE: ________________

<Faculty Letterhead>

CONSENT FORM

This form is to be completed after you have read and understood the contents of the Information Sheet that is attached.
You will be offered a signed copy of this form for your own record.

Agreement to Participate:
I have read and understood the contents of the Information Sheet provided and have decided to participate in the study.

Agreement to Confidentiality:
I understand that some of my classmates and friends may also be participating in this study. I realize that my discussion of the details of this study with them may distort the results. Therefore, I agree not to discuss any aspect of the study with any other participant prior to their participation.

Signature of Participant ____________________________________
Name ____________________________________
Date ____________________________________
Telephone Number ____________________________________

Thank you very much for your assistance in this study.
Please return the completed form to the Facilitator.

BACKGROUND INFORMATION

Please answer the following questions about yourself.

1. Age ________ years

2. Gender   F   M   (circle one)

3. Number of years in full-time employment ________ years

4. Your level of experience in working in groups (circle one)
   very high   high   medium   low   very low

5. Your level of experience in making actual business decisions (circle one)
   very high   high   medium   low   very low

6. Your level of experience in using computers (circle one)
   very high   high   medium   low   very low

7. Have you been involved in some kind of group work (e.g., class projects, etc.) with any of the current group members?
   No ( )
   Yes ( )
   If your answer is "yes", please identify member(s) involved and briefly explain nature of group work:

Part II
Training

PROBLEM REPRESENTATION TOOL

A computer system for producing a graphical representation of probability problem statements

Introduction

This guide will enable you to quickly learn about PRT (problem representation tool). To get familiar with PRT, follow the instructions given by the Facilitator. The Facilitator will tell you when to start and when to proceed to the next learning stage. Please follow the instructions precisely.
Do not be concerned about slow typing or typing mistakes.

A Probability Problem

A cab was involved in a hit-and-run accident at night. Two cab companies, the Green (with its cabs colored green) and the Blue (with its cabs colored blue), operate in the city. You are given the following data:

(i) 70% of the cabs in the city are Blue and 30% are Green.

(ii) A witness identified the cab as Green. The court tested the reliability of the witness under nighttime visibility conditions and concluded that the witness correctly identified each one of the two colors 75% of the time and erred 25% of the time.

Question: What is the probability that the cab involved in the accident was Green rather than Blue?

PRT represents the above problem by graphing 100 squares to symbolize the cab population. The squares are differentiated using color and pattern to show whether a particular cab is identified as green or identified as blue, and also whether it is actually green or actually blue.

Hands-On

To use the PRT, click on the "PRT Trainer" icon. (Note: Do NOT click on the "PRT" icon. It has to be enabled by the Facilitator, who will do so at the appropriate moment.)

Click on "View Chart". The graph should appear.

ELECTRONIC BRAINSTORMING TOOL

A computer system for assisting groups to brainstorm ideas

Introduction

This guide will enable you to quickly learn to use EBT (electronic brainstorming tool). To get familiar with EBT, follow the instructions given by the Facilitator. The Facilitator will tell you when to start and when to proceed to the next learning stage. Please follow the instructions given precisely. Do not be concerned about slow typing or typing mistakes.

EBT

Brainstorming within groups is a commonly used technique. Its primary purpose is to facilitate the generation of ideas from the group members.
EBT serves the same purpose by providing an electronic means to accomplish that.

EBT can only be initiated by the Facilitator because the brainstorming activity is normally performed by the group as a whole and thus requires coordination by the Facilitator.

Hands-On

When the group is ready to perform electronic brainstorming (this will be indicated by the Facilitator), the Facilitator will send you a "form" to enter your ideas.

Enter one idea using the EBT form, then click on "Done". You will then be given another blank form for you to enter another idea, and so on. You may find at the top of your form some ideas previously entered by you or other group members; these ideas are meant to serve as prompts for more ideas. Generate as many ideas as you possibly can.

When enough time has been spent on the brainstorming session, the Facilitator will terminate the session and send you the results. You will then notice that the "View Results" icon is surrounded by a blue square, indicating that there are results awaiting your attention.

To retrieve results, click on the "View Results" icon. There may be more than one set of results, due to more than one brainstorming session, perhaps on different topics. Results not yet read will be indicated by dots. Click on a result set.

When reading the brainstorming results, pay attention to all the ideas that have been generated, since the set essentially represents what your group collectively thinks about a particular issue.

INFORMATION EXCHANGE TOOL

A computer system for facilitating information exchange among group members

Introduction

This guide will enable you to quickly learn to use IET (information exchange tool). To get familiar with IET, follow the instructions given by the Facilitator. The Facilitator will tell you when to start and when to proceed to the next learning stage. Please follow the instructions given precisely.
Do not be concerned about slow typing or typing mistakes.

Electronic Mailing (Email)

The email facility is used for electronic communication with the rest of the group or any of the individuals within the group. It includes several functions such as "Compose Message", "Reply to a Message", "Delete Message", and so on. The most important function, which you will use very frequently, is the "Compose Message" function.

Hands-On

To use the email facility, click on the "Message" icon.

To compose a message, click on the "Compose" button.

Do not type in anything at the "Subject" line, i.e., leave it blank.

Type your message. When ready, click on "Send Message".

You should notice that the "Message" icon is now surrounded by a blue square. This indicates that someone has sent you a message you haven't read yet. (If you have not already done so, click on the "Message" icon to retrieve the message(s).) In this case, you have just sent a message to everybody else. A message not yet read is indicated by a dot next to it.

To read a message, simply click on it. When finished, you may either delete the message by clicking on the "Delete Message" button, or leave it where it is. If you wish to reply to a message, we recommend that you use the "Compose Message" function, rather than the "Reply Message" button, which will send your reply to only one person (the originator), rather than the group.

If a graph facility is provided in this study (check with your Facilitator about this), you may include the graph in a message that you compose by clicking on the graph while holding down the left mouse button, until an icon for the graph is generated, and then dragging the icon to your message space.

PRACTICE PROBLEM

PROBABILITY ESTIMATION: THE CAB PROBLEM

A cab was involved in a hit-and-run accident at night. Two cab companies, the Green (with its cabs colored green) and the Blue (with its cabs colored blue), operate in the city.
You are given the following data:

(i) 70% of the cabs in the city are Blue and 30% are Green.

(ii) A witness identified the cab as Green. The court tested the reliability of the witness under nighttime visibility conditions and concluded that the witness correctly identified each one of the two colors 75% of the time and erred 25% of the time.

Question: What is the probability that the cab involved in the accident was Green rather than Blue?

Your group should discuss and agree on a solution within 10 minutes (maximum).

Start Time: __________

Record your group solution here:
Probability that the cab involved in the accident was Green rather than Blue is __________

Stop Time: __________

PRACTICE PROBLEM

JOB EVALUATION: INSURANCE UNDERWRITER

Included in this task is a list of items relevant to different facets of a job. As a group you are asked to evaluate each item with respect to the job of Insurance Underwriter, whose job description is included below. Use the response scale indicated below each item. For each item, discuss among yourselves, then decide on the appropriate rating. Record your group rating on the form by circling the appropriate number.

Job Description of Insurance Underwriter

Reviews individual applications for insurance to evaluate degree of risk involved and accepts applications, following company's underwriting policies: Examines such documents as application form, inspection report, insurance maps, and medical reports to determine degree of risk from such factors as applicant's financial standing, age, occupation, accident experience, and value and condition of real property. Reviews company records to ascertain amount of insurance in force on single risk or group of closely related risks, and evaluates possibility of losses due to catastrophe or excessive insurance. Declines risks which are too excessive to obligate company.
Dictates correspondence for field representatives, medical personnel, and other insurance or inspection companies to obtain further information, quote rates, or explain company's underwriting policies. When risk is excessive, authorizes reinsurance, or when risk is substandard, limits company's obligation by decreasing value of policy, specifying applicable endorsements, or applying rating to insure safe and profitable distribution of risks, using rate books, tables, code books, and other reference material.

Directions: Using the response scale below, rate the importance of each item to the job of Insurance Underwriter.

Importance to This Job
0 Does not apply
1 Very minor
2 Low
3 Intermediate
4 High
5 Extreme

1. Judging condition or quality
Estimating the condition, quality, and/or value of object.
(0) (1) (2) (3) (4) (5)

2. Negotiating
Dealing with others to reach an agreement or solution.
(0) (1) (2) (3) (4) (5)

Part III
Probability Estimation Task

PROFILE TESTS

Profile Tests

Indicate whether you think the personality profile below belongs to an engineer or a lawyer. Click on the appropriate line and press Done to make your choice.

Steve is 37 years old and unmarried. He is very shy and withdrawn, invariably helpful, but with little interest in people, or in the world of reality. A meek and tidy soul, he has a need for order and structure and a passion for detail.

I think this person is a(n):   ( ) Engineer   ( ) Lawyer   [ Done ]

Figure B.1 Profile tests: Screen 1

Profile Tests

Indicate whether you think the personality profile below belongs to an engineer or a lawyer. Click on the appropriate line and press Done to make your choice.

Nancy is a 30 year old woman. She is married with no children.
A woman of high ability and high motivation, she promises to be quite successful in her field. She is well liked by her colleagues.

I think this person is a(n):   ( ) Engineer   ( ) Lawyer   [ Done ]

Figure B.2 Profile tests: Screen 2

Profile Tests

Indicate whether you think the personality profile below belongs to an engineer or a lawyer. Click on the appropriate line and press Done to make your choice.

Jack is a 45 year old man. He is married and has four children. He is generally conservative, careful and ambitious. He shows no interest in political and social issues, and spends most of his time on his many hobbies which include home carpentry, sailing and mathematical puzzles.

I think this person is a(n):   ( ) Engineer   ( ) Lawyer   [ Done ]

Figure B.3 Profile tests: Screen 3

Profile Tests

Indicate whether you think the personality profile below belongs to an engineer or a lawyer. Click on the appropriate line and press Done to make your choice.

Susan is 50 years old. She has been married for 27 years, but has no children. She is an introverted woman who devotes much time to her hobbies, including chess, mathematical puzzles, and model trains.

I think this person is a(n):   ( ) Engineer   ( ) Lawyer   [ Done ]

Figure B.4 Profile tests: Screen 4

Profile Tests

Indicate whether you think the personality profile below belongs to an engineer or a lawyer. Click on the appropriate line and press Done to make your choice.

Jonathan is 35 years old, married, with two children. He is very dedicated and extremely well thought of by his colleagues. He has a good sense of humour. He recently moved to the east coast from Vancouver.

I think this person is a(n):   ( ) Engineer   ( ) Lawyer   [ Done ]

Figure B.5 Profile tests: Screen 5

Profile Tests

Indicate whether you think the personality profile below belongs to an engineer or a lawyer. Click on the appropriate line and press Done to make your choice.

Roger is 34 years old. Now that he has finished his education, he and his wife are planning on starting a family. Roger appears to be a very
Roger appears to be a very confident and somewhat domineering individual. This may be a spillover from his work, where he is used to giving orders and to making decisions that affect people's lives.

o Engineer
I think this person is a(n)
o Lawyer        [ Done ]

Figure B.6 Profile tests: Screen 6

Profile Tests
Indicate whether you think the personality profile below belongs to an engineer or a lawyer. Click on the appropriate line and press Done to make your choice.

Rick is a 41 year old man. He is married with three children. He has a genuine interest in issues that affect other people's lives. He constantly pays attention to political events, and is seldom hesitant in making his views known. He is active at church, and does voluntary work whenever he can afford the time.

o Engineer
I think this person is a(n)
o Lawyer        [ Done ]

Figure B.7 Profile tests: Screen 7

Profile Tests
Indicate whether you think the personality profile below belongs to an engineer or a lawyer. Click on the appropriate line and press Done to make your choice.

Tom is 48 years old. He has been married for 20 years and has four children. He has been called a "wonder dad" by his colleagues, mainly because what he likes most is to spend time with his family and children. He has a good sense of humour. Also, he has great admiration for Trudeau.

o Engineer
I think this person is a(n)
o Lawyer        [ Done ]

Figure B.8 Profile tests: Screen 8

Profile Tests
Indicate whether you think the personality profile below belongs to an engineer or a lawyer. Click on the appropriate line and press Done to make your choice.

Eddy is 39 years old, unmarried. He is socially active, and very well liked by friends and colleagues alike. He is intelligent and creative. He is willing and able to offer sound advice to friends who ask for it.

o Engineer
I think this person is a(n)
o Lawyer        [ Done ]

Figure B.9 Profile tests: Screen 9

Profile Tests
Indicate whether you think the personality profile below belongs to an engineer or a lawyer.
Click on the appropriate line and press Done to make your choice.

Ted is a 41 year old man. He has not been married. He is very dedicated to his work, and is very well-known in his profession. More than once, and for good causes, his picture has appeared in local newspapers. He does not shy away from publicity, and has been invited to speak at public functions. He is a critical person, both in his private life and work place.

o Engineer
I think this person is a(n)
o Lawyer        [ Done ]

Figure B.10 Profile tests: Screen 10

CHRIS' PROFESSION

You have just finished a personality-profile test individually. The following presents another profile which you are supposed to identify as a group:

"C. is 32 years old and unmarried. C. is of high intelligence, although lacking in true creativity. C. has a need for order and cleanliness, and for neat and tidy systems in which everything finds its appropriate place. C.'s writing is rather dry and mechanical, occasionally enlivened by somewhat corny puns and by flashes of imagination of the sci-fi type. C. has a strong drive for competition. C. seems to have little feel and little sympathy for other people and does not enjoy interacting with others. Self-centered, C. nonetheless has a strong moral sense."

You should discuss and come to an agreement on whether Chris is an engineer or a lawyer within 5 minutes (maximum). In solving this task, do not refer to any of the previous profiles.

PROBABILITY ESTIMATION

The set of profiles from which Chris' profile was drawn consists of those of engineers and those of lawyers. You are given the following data:

(i) 85% of the profiles in the set belong to lawyers and 15% belong to engineers.
(ii) Your group has identified Chris as an engineer / a lawyer.
Earlier, the group has been tested for its reliability in identifying professions given personality profiles, and found to have correctly identified each of the two professions __________ % of the time and erred __________ % of the time.

Question: What is the probability that Chris is an engineer / a lawyer rather than a lawyer / an engineer?

Your group should discuss and agree on a solution within 15 minutes (maximum).

Start Time: _________

Record your group solution here:
The probability that Chris is an engineer / a lawyer rather than a lawyer / an engineer is _________

Stop Time: _________

QUESTIONNAIRE I

Directions: We are interested in how your group approached the task. This questionnaire is composed of 23 statements. Please indicate in the space provided the degree to which each statement applies to you or your group. Indicate your choice by circling the appropriate marker. There are no right or wrong answers. Many of the statements are similar to other statements. Do not be concerned about this. Work quickly; just record your first impression.

(1) Not at all
(2) To a little extent
(3) To some extent
(4) To a great extent
(5) To a very great extent

1. To what extent do you feel personally responsible for the correctness of the group solution? (1) (2) (3) (4) (5)
2. To what extent did others express a negative opinion about your behavior? (1) (2) (3) (4) (5)
3. To what extent does the final solution reflect your inputs? (1) (2) (3) (4) (5)
4. To what extent did you make suggestions about doing the task? (1) (2) (3) (4) (5)
5. To what extent did you feel frustrated and tense about others' behavior? (1) (2) (3) (4) (5)
6. To what extent are you confident that the group solution is correct? (1) (2) (3) (4) (5)
7. To what extent did one member strongly influence the group decision? (1) (2) (3) (4) (5)
8. To what extent do you feel committed to the group's solution? (1) (2) (3) (4) (5)
9. To what extent did you express negative opinions about someone's behavior? (1) (2) (3) (4) (5)
10.
To what extent did you reject others' opinions or suggestions? (1) (2) (3) (4) (5)
11. To what extent did you show attention and interest in the group's activities? (1) (2) (3) (4) (5)
12. To what extent were your opinions or suggestions rejected? (1) (2) (3) (4) (5)
13. To what extent did you give information about the problem? (1) (2) (3) (4) (5)
14. To what extent did you ask others for their thoughts and opinions? (1) (2) (3) (4) (5)
15. To what extent did you ask for suggestions from others in the group? (1) (2) (3) (4) (5)
16. To what extent did you feel one person influenced the final solution more than the rest of the group? (1) (2) (3) (4) (5)
17. To what extent did you feel someone emerged as an informal leader? (1) (2) (3) (4) (5)

For item 18: (1) Very dissatisfied ... (3) Neither satisfied nor dissatisfied ... (5) Very satisfied

18. How satisfied or dissatisfied are you with the quality of your group's solution? (1) (2) (3) (4) (5)

19. How would you describe your group's problem solving process?
a. inefficient (1) (2) (3) (4) (5) efficient
b. uncoordinated (1) (2) (3) (4) (5) coordinated
c. unfair (1) (2) (3) (4) (5) fair
d. confusing (1) (2) (3) (4) (5) understandable
e. dissatisfying (1) (2) (3) (4) (5) satisfying

Part IV
Job Evaluation Task

JOB EVALUATION: SECRETARY

Included in this task is a list of items relevant to different facets of a job. As a group you are asked to evaluate each item with respect to the job of Secretary, whose job description is included below. Use the response scale indicated below each item. For each item, discuss among yourselves, then decide on the appropriate rating. Record your group rating on the form by circling the appropriate number.

Job Description of Secretary
Schedules appointments, gives information to callers, takes dictation, and otherwise relieves officials of clerical work and minor administrative and business details: Reads and routes incoming mail.
Locates and attaches appropriate file to correspondence to be answered by employer. Takes dictation in shorthand or by machine and transcribes notes on typewriter, or transcribes from voice recording. Composes and types routine correspondence. Files correspondence and other records. Answers telephone and gives information to callers or routes call to appropriate official and places outgoing calls. Schedules appointments for employer. Greets visitors, ascertains nature of business, and conducts visitors to employer or appropriate person. May arrange travel schedule and reservations. May compile and type statistical reports. May record minutes of staff meetings. May make copies of correspondence or other printed matter, using copying or duplicating machine. May prepare outgoing mail, using postage metering machine.

Directions for Items 1 to 7: Using the response scale below, rate the importance of each item to the job of Secretary.

Importance to This Job
0 Does not apply
1 Very minor
2 Low
3 Intermediate
4 High
5 Extreme

1. Estimating speed of processes
Estimating the speed of ongoing processes or a series of events while they are taking place.
(0) (1) (2) (3) (4) (5)

2. Use of job-related knowledge
The importance to job performance of specific job-related knowledge or information gained through education, experience, or training, as contrasted with any related physical skills.
(0) (1) (2) (3) (4) (5)

3. Short-term memory
Learning and retaining job-related information and recalling that information after a brief time.
(0) (1) (2) (3) (4) (5)

4. Hand-operated controls
Controls operated by hand or arm for making frequent but not continuous adjustments.
(0) (1) (2) (3) (4) (5)

5. Personal contact with buyers (both inside and outside the organization)
I.e., purchasing agents, not public customers.
(0) (1) (2) (3) (4) (5)

Importance to This Job
0 Does not apply
1 Very minor
2 Low
3 Intermediate
4 High
5 Extreme

6.
Attention to detail
A need to be thorough and attentive to various details of one's work, being sure that nothing is left undone.
(0) (1) (2) (3) (4) (5)

7. Vigilance: Infrequent events
A need to continually search for infrequently occurring but relevant events in the job situation.
(0) (1) (2) (3) (4) (5)

8. Written materials
Using the response scale below, indicate the extent of use of written materials (e.g., books, reports, office notes, articles, job instructions, or signs) in the job of Secretary.

Extent of Use
0 Does not apply
1 Very infrequent
2 Occasional
3 Moderate
4 Considerable
5 Very substantial
(0) (1) (2) (3) (4) (5)

QUESTIONNAIRE II

Directions: We are interested in how your group approached the task. This questionnaire is composed of 23 statements. Please indicate in the space provided the degree to which each statement applies to you or your group. Indicate your choice by circling the appropriate marker. There are no right or wrong answers. Many of the statements are similar to other statements. Do not be concerned about this. Work quickly; just record your first impression.

(1) Not at all
(2) To a little extent
(3) To some extent
(4) To a great extent
(5) To a very great extent

1. To what extent do you feel personally responsible for the correctness of the group solution? (1) (2) (3) (4) (5)
2. To what extent did others express a negative opinion about your behavior? (1) (2) (3) (4) (5)
3. To what extent does the final solution reflect your inputs? (1) (2) (3) (4) (5)
4. To what extent did you make suggestions about doing the task? (1) (2) (3) (4) (5)
5. To what extent did you feel frustrated and tense about others' behavior? (1) (2) (3) (4) (5)
6. To what extent are you confident that the group solution is correct? (1) (2) (3) (4) (5)
7. To what extent did one member strongly influence the group decision? (1) (2) (3) (4) (5)
8. To what extent do you feel committed to the group's solution? (1) (2) (3) (4) (5)
9. To what extent did you express negative opinions about someone's behavior?
(1) (2) (3) (4) (5)
10. To what extent did you reject others' opinions or suggestions? (1) (2) (3) (4) (5)
11. To what extent did you show attention and interest in the group's activities? (1) (2) (3) (4) (5)
12. To what extent were your opinions or suggestions rejected? (1) (2) (3) (4) (5)
13. To what extent did you give information about the problem? (1) (2) (3) (4) (5)
14. To what extent did you ask others for their thoughts and opinions? (1) (2) (3) (4) (5)
15. To what extent did you ask for suggestions from others in the group? (1) (2) (3) (4) (5)
16. To what extent did you feel one person influenced the final solution more than the rest of the group? (1) (2) (3) (4) (5)
17. To what extent did you feel someone emerged as an informal leader? (1) (2) (3) (4) (5)

For item 18: (1) Very dissatisfied ... (3) Neither satisfied nor dissatisfied ... (5) Very satisfied

18. How satisfied or dissatisfied are you with the quality of your group's solution? (1) (2) (3) (4) (5)

19. How would you describe your group's problem solving process?
a. inefficient (1) (2) (3) (4) (5) efficient
b. uncoordinated (1) (2) (3) (4) (5) coordinated
c. unfair (1) (2) (3) (4) (5) fair
d. confusing (1) (2) (3) (4) (5) understandable
e. dissatisfying (1) (2) (3) (4) (5) satisfying

APPENDIX C
DETAILED STATISTICAL RESULTS

This appendix contains detailed results of (1) the assessment of the reliability and validity of scales for measuring perceptions and (2) the analysis of variance models examined. The latter is organized according to the experimental tasks.
Within each task, the ANOVA outputs are presented for the following groups of variables of interest: (a) outcomes and perceptual variables and (b) process variables.

C.1 Assessment of the Reliability and Validity of the Perceptual Variables

The instrument comprised five multi-item scales for measuring five perceptual constructs: "perceived task participation", "perceived socio-emotional behavior", "satisfaction with outcome", "satisfaction with process", and "perceived informal leadership".

Table C.1 presents the Cronbach's ALPHA, eigenvalues, and the percentage of variance explained for each of the five scales; separate statistics are reported for the two tasks. The values of ALPHA for the five scales are all within or above the .60 to .80 range, which Nunnally [1967, p. 226] argues is a sufficient reliability level for "basic research". Besides reliability, the construct validity was also examined. For each of the five scales pertaining to each task, the items comprising the scale were factor analyzed to assess the number of factors that emerged. In each of these cases, only the first eigenvalue was found to be significantly greater than 1.0, with the accompanying scree plot showing a break after the first factor. The eigenvalues of these first factors range from 2.11 ("perceived task participation" for the job evaluation task) to 3.04 ("satisfaction with process" for the probability estimation task). The percentage of variance explained by each of these first factors ranges from 42.25% ("perceived task participation" for the job evaluation task) to 78.90% ("perceived informal leadership" for the probability estimation task). Further confirmatory factor analysis was performed by conducting Principal Component analysis using the VARIMAX rotation on all 23 items that measured the five constructs. For the probability estimation task, all five of the expected factors emerged with a reasonable degree of clarity, accounting for 63.44% of the total variance in the data set.
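The ALPHA values discussed above follow from Cronbach's standard formula, ALPHA = k/(k-1) * (1 - sum of item variances / variance of the total score). A minimal sketch in pure Python (the item scores below are made up for illustration; the thesis computed its values with a statistics package):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's ALPHA for a multi-item scale.

    items: list of k sequences, one per item, each holding the scores of
    the same respondents in the same order.
    """
    k = len(items)
    n = len(items[0])
    # Each respondent's total scale score.
    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1 - sum(pvariance(item) for item in items)
                            / pvariance(totals))

# Made-up scores for a 3-item scale answered by five respondents:
scores = [
    [4, 3, 5, 2, 4],
    [4, 2, 5, 3, 4],
    [5, 3, 4, 2, 5],
]
print(round(cronbach_alpha(scores), 2))   # -> 0.89
```

Population variances (`pvariance`) are used consistently for items and totals; using sample variances throughout gives the same ALPHA, since the n/(n-1) correction cancels.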
A scree plot revealed a clear break after the first five eigenvalues, which were significantly greater than the 1.0 cut-off: 5.25, 3.66, 2.20, 1.88, and 1.61. For the job evaluation task, all five of the expected factors also emerged with a reasonable degree of clarity, accounting for 59.09% of the total variance in the data set. A scree plot revealed a clear break after the first five eigenvalues, which were significantly greater than the 1.0 cut-off: 5.81, 2.95, 1.83, 1.53, and 1.47.

Tables C.2 to C.6 present the item reliability statistics (i.e., deletion effect on Cronbach's ALPHA, item-to-total scale correlation, standard deviation, and component loading) for the items comprising each of the five scales pertaining to each task. Examination of the deletion effect on Cronbach's ALPHA reveals only one item (Q1, in Table C.4) whose deletion would improve the overall reliability of "satisfaction with outcome" for the job evaluation task. However, the item-to-total scale correlation and standard deviation of this item provide some justification for its retention. Although the item-to-total scale correlation is the lowest in this scale, it is substantially greater than the .40 cut-off that is generally used [Moore and Benbasat, 1991]. Also, the standard deviation is no less than those of two other items. As the increase in ALPHA from deleting this item would have been marginal (from .68 to .69), the item was retained in the scale. Referring to the last column of Tables C.2 to C.6, all the items in the five scales loaded strongly on their respective factors for each of the two tasks, indicating a high level of convergent validity. The factor loadings range from .50 (Q11 of "perceived task participation" for the job evaluation task, Table C.2) to .91 (Q16 of "perceived informal leadership" for the probability estimation task, Table C.6), which are above the .45 cut-off that is generally considered [Comrey, 1973].
Furthermore, the majority of the loadings are in excess of .63, which Comrey [1973] considers to be the minimal cut-off level for very good loadings.

In sum, the five scales comprising the instrument were found to be sound from the perspective of reliability of measurement.

Table C.1 Scale reliability coefficients and factor analysis results

Probability estimation task:
Scale                                 No. of items   Cronbach's ALPHA   Eigenvalue (factor analysis)   % variance explained by factor
Perceived task participation          5              .71                2.34                           46.74
Perceived socio-emotional behavior    5              .82                2.90                           57.96
Satisfaction with outcome             5              .74                2.48                           49.68
Satisfaction with process             5              .84                3.04                           60.87
Perceived informal leadership         3              .87                2.37                           78.90

Job evaluation task:
Scale                                 No. of items   Cronbach's ALPHA   Eigenvalue (factor analysis)   % variance explained by factor
Perceived task participation          5              .66                2.11                           42.25
Perceived socio-emotional behavior    5              .78                2.69                           53.81
Satisfaction with outcome             5              .68                2.23                           44.55
Satisfaction with process             5              .83                3.02                           60.44
Perceived informal leadership         3              .83                2.22                           74.12

Table C.2 Item reliability statistics and factor loadings: Perceived task participation
(Columns per task: deletion effect on ALPHA, item-to-total correlation, s.d., component loading.)

Probability estimation task (base ALPHA = .71):    Job evaluation task (base ALPHA = .66):
Q4    .68  .65   .96  .64                          .62  .59   .76  .58
Q11   .71  .59   .77  .61                          .66  .50   .74  .50
Q13   .67  .69   .95  .70                          .61  .62   .74  .63
Q14   .67  .74  1.06  .73                          .56  .73   .93  .75
Q15   .63  .75  1.08  .73                          .53  .77   .98  .79

Table C.3 Item reliability statistics and factor loadings: Perceived socio-emotional behavior

Probability estimation task (base ALPHA = .82):    Job evaluation task (base ALPHA = .78):
Q2    .77  .79  1.02  .79                          .74  .75   .94  .74
Q5    .79  .75   .97  .75                          .73  .76   .89  .75
Q9    .78  .78   .94  .78                          .72  .79   .90  .78
Q10   .78  .77  1.01  .77                          .76  .70   .83  .70
Q12   .80  .71   .89  .72                          .76  .67   .70  .69

Table C.4 Item reliability statistics and factor loadings: Satisfaction with outcome

Probability estimation task (base ALPHA = .74):    Job evaluation task (base ALPHA = .68):
Q1    .74  .63   .94  .60                          .69  .61   .66  .59
Q3    .68  .74   .97  .73                          .67  .69   .85  .65
Q6    .72  .66   .93  .67                          .64  .67   .62  .72
Q8    .67  .77   .94  .78                          .66  .72   .83  .71
Q18   .69  .72   .94  .73                          .63  .63   .66  .66

Table C.5 Item reliability statistics and factor loadings: Satisfaction with process

Probability estimation task (base ALPHA = .84):    Job evaluation task (base ALPHA = .83):
Q19a  .81  .78   .88  .78                          .81  .74   .71  .75
Q19b  .80  .81   .88  .81                          .80  .79   .79  .79
Q19c  .84  .66   .74  .66                          .80  .77   .67  .78
Q19d  .80  .80   .88  .80                          .79  .82   .84  .81
Q19e  .78  .84   .84  .85                          .81  .77   .80  .76

Table C.6 Item reliability statistics and factor loadings: Perceived informal leadership

Probability estimation task (base ALPHA = .87):    Job evaluation task (base ALPHA = .83):
Q7    .80  .89  1.14  .89                          .78  .85   .99  .85
Q16   .77  .91  1.18  .91                          .74  .88  1.05  .88
Q17   .86  .87  1.21  .86                          .76  .86  1.01  .86

C.2 Analysis of Variance

C.2.1 Probability Estimation Task

C.2.1.1 Outcomes and Perceptual Variables

(a) Normalized error
This variable is a measure of the degree of departure from the correct solution provided by the Bayesian model.
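The Bayesian model referred to here is a direct application of Bayes' rule to the task's base rate (15% engineers, 85% lawyers) and the group's identification reliability. A minimal sketch in pure Python (the 0.8 hit rate is an illustrative assumption; each group was supplied with its own measured reliability figure on the task form):

```python
def p_engineer_given_id(prior_eng=0.15, hit_rate=0.8):
    """Posterior probability that the profile belongs to an engineer,
    given that the group identified it as an engineer.

    prior_eng: base rate of engineers in the profile set (15% in the task).
    hit_rate:  the group's reliability, i.e. the probability that it
               correctly identifies either profession (0.8 is a made-up
               illustrative value).
    """
    prior_law = 1.0 - prior_eng
    # P(id = eng) = P(id = eng | eng) P(eng) + P(id = eng | lawyer) P(lawyer)
    evidence = hit_rate * prior_eng + (1.0 - hit_rate) * prior_law
    return hit_rate * prior_eng / evidence

# Even with an 80%-reliable group, the low base rate keeps the
# posterior well below the hit rate:
print(round(p_engineer_given_id(), 3))   # -> 0.414
```

The "normalized error" then measures how far a group's stated probability departs from this posterior; groups that neglect the base rate and answer near the hit rate itself show the classic base-rate fallacy.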
One group has been discarded from this analysis because it did not identify the personality profile in the way expected (i.e., the profile was identified as lawyer rather than engineer).

DEP VAR: NOR_ERR   N: 39   MULTIPLE R: 0.435   SQUARED MULTIPLE R: 0.189

ANALYSIS OF VARIANCE
SOURCE   SUM-OF-SQUARES   DF   MEAN-SQUARE   F-RATIO   P
CS       0.912            1    0.912         7.755     0.009
IE       0.001            1    0.001         0.006     0.937
CS*IE    0.036            1    0.036         0.304     0.585
ERROR    4.118            35   0.118

Note: (1) "CS" denotes the "cognitive support" variable; (2) "IE" denotes the "communication medium" variable.

(b) Time to completion
Similar to the previous analysis, one group has been discarded because it did not identify the personality profile in the way expected.
variable has undergone a transformation to meet the requirements of ANOVA.DEP VAR:COSAT_ME N: 40 MULTIPLE R: 0.317 SQUARED MULTIPLE R: 0.101ANALYSIS OF VARIANCESUM-OF-SQUARES DF MEAN-SQUARE F-RATIO1.344 1 1.344 0.4541.878 1 1.878 0.6348.711 1 8.711 2.939106.689 36 2.964DEP VAR:CPSAT_MESOURCECSIESQUARED MULTIPLE R: 0.1858.7112.17817. 778126. 400F-RATIO2.4810.6205.063P0.1240.4360.031DEP VAR: CLEADMESOURCECSIECS*IEERRORSQUARED MULTIPLE R: 0.425F-RATIO15.8033.5577.282P0.0000.0670.011218DEP VAR:DIAGINF9 N: 32 MULTIPLE R: 0.550 SQUARED MULTIPLE R: 0.302ANALYSIS OF VARIANCESOURCE SUM-OF-SQUARES DF MEAN-SQUARE F-RATIO PCS 4.155 1 4.155 1.881 0.181IE 14.352 1 14.352 6.499 0.017CS*IE 11.057 1 11.057 5.007 0.033ERROR 61.839 28 2.209(b) References to the base rateThis variable has undergone a transformation to meet the requirements of ANOVA.DEP VAR:BASERAT9 N: 32 MULTIPLE R: 0.544 SQUARED MULTIPLE R: 0.296ANALYSIS OF VARIANCESOURCE SUM-OF-SQUARES DF MEAN-SQUARE F-RATIO PCS 0.678 1 0.678 0.391 0.537IE 10.744 1 10.744 6.197 0.019CS*IE 11.497 1 11.497 6.632 0.016ERROR 48.542 28 1.734(c) References to the use of single information sourceThis variable has undergone a transformation to meet the requirements of ANOVA.DEP VAR:SINGRUL9 N: 32 MULTIPLE R: 0.481 SQUARED MULTIPLE R: 0.232ANALYSIS OF VARIANCESOURCE SUM-OF-SQUARES DF MEAN-SQUARE F-RATIO PCS 0.989 1 0.989 6.268 0.018IE 0.083 1 0.083 0.528 0.473CS*IE 0.294 1 0.294 1.863 0.183ERROR 4.417 28 0.158219(d) References to the use of dual information sourcesDEP VAR:DUALRULE N: 32 MULTIPLE R: 0.122 SQUARED MULTIPLE R: 0.015ANALYSIS OF VARIANCESOURCE SUM-OF-SQUARES DF MEAN-SQUARE F-RATIO PCS 15.191 1 15.191 0.383 0.541IE 1.612 1 1.612 0.041 0.842CS*IE 0.112 1 0.112 0.003 0.958ERROR 1110.635 28 39.666(e) References to the Problem Representation ToolThis variable concerns references made about the problem reptesentation tool (i.e., cognitivesupport) provided. 
Thus, only groups in the Cognitive Support condition were included in thisanalysis. This variable has undergone a transformation to meet the requirements of ANOVA.DEP VAR:PRTTOOL9 N: 14 MULTIPLE R: 0.703 SQUARED MULTIPLE R: 0.494ANALYSIS OF VARIANCESOURCE SUM-OF-SQUARES DF MEAN-SQUARE F-RATIO PIE 21.730 1 21.730 11.702 0.005ERROR 22.282 12 1.857(t) Procedural commentsDEP VAR: PROCEDP N: 32 MULTIPLE R: 0.261 SQUARED MULTIPLE R: 0.068ANALYSIS OF VARIANCESOURCE SUM-OF-SQUARES DF MEAN-SQUARE F-RATIO PCS 111.917 1 111.917 0.648 0.427IE 131.560 1 131.560 0.762 0.390CS*IE 138.810 1 138.810 0.804 0.378ERROR 4833.460 28 172.624220C.2.2 Job Evaluation TaskC. 2.2.1 Outcomes and Perceptual Measuresía) Availability biasThis variable is defined as the difference between scores assigned to high-availability items andthose assigned to low-availability items. It has undergone a transformation to meet therequirements of ANOVA. An outlier, identified using a residual plot, has been removed fromthis analysis.DEP VAR: DIFF9 N: 39 MULTIPLE R: 0.475 SQUARED MULTIPLE R: 0.225ANALYSIS OF VARIANCESOURCE SUM-OF-SQUARES DF MEAN-SQUARE F-RATIO PCs 1.053 1 1.053 3.580 0.067IE 1.382 1 1.382 4.699 0.037CS*IE 0.484 1 0.484 1.646 0.208ERROR 10.296 35 0.294(b) Time to completionDEP VAR: TM_TOT N: 40 MULTIPLE R: 0.814 SQUARED MULTIPLE R: 0.663ANALYSIS OF VARIANCESOURCE SUM-OF-SQUARES DF MEAN-SQUARE F-RATIO PCS 6112894.225 1 6112894.225 45.805 0.000IE 2245338.225 1 2245338.225 16.825, 0.000CS*IE 1095279.025 1 1095279.025 8.207 0.007ERROR 4804393.300 36 133455.369t’c) Total score of high-availability itemsDEP VAR: HI N: 40 MULTIPLE R: 0.259 SQUARED MULTIPLE R: 0.067ANALYSIS OF VARIANCESOURCE SUM-OF-SQUARES DF MEAN-SQUARE F-RATIO PCS 0.625 1 0.625 0.174 0.679IE 5.625 1 5.625 1.564 0.219CS*IE 3.025 1 3.025 0.841 0.365221ERROR 129.500 36 3.597(d) Total score of low-availability itemsDEP VAR: LO N: 40 MULTIPLE R: 0.572ANALYSIS OF VARIANCESOURCE SUM-OF-SQUARES DF MEAN-SQUARECS 46.225 1 46.225IE 55.225 1 
55.225CS*IE 5.625 1 5.625ERROR 219.900 36 6.108(e) Perceived task participationSQUARED MULTIPLE R: 0.327F-RATIO P7.568 0.0099.041 0.0050.921 0.344DEP VAR:DTASK_ME N: 40 MULTIPLE R: 0.137 SQUAREDANALYSIS OF VARIANCESOURCE SUM-OF-SQUARES DF MEAN-SQUARE F-RATIOCS 0.544 1 0.544 0.279IE 0.400 1 0.400 0.205CS*IE 0.400 1 0.400 0.205ERROR 70.200 36 1.9500’) Perceived socio-emotional behaviorThis variable has undergone a transformation to meet the requirements of ANOVA.DEP VAR:DSOCME9 N: 40 MULTIPLE R: 0.663 SQUARED MULTIPLE R: 0.439ANALYSIS OF VARIANCESOURCE SUM-OF-SQUARES DF MEAN-SQUARE F-RATIO PCS 0.035 1 0.035 0.402 0.530IE 1.868 1 1.868 21.376 0.000CS*IE 0.558 1 0.558 6.386 0.016ERROR 3.147 36 0.087MULTIPLE R: 0.019P0.6000.6530.653222fg Satisfaction with outcomeSOURCECsIECS * IEERRORth) Satisfaction with vrocessSOURCECSIECS*IEERRORN: 40 MULTIPLE R: 0.477ANALYSIS OF VARIANCESUM-OF-SQUARES DF MEAN-SQUARE0.069 1 0.0695.136 1 5.13623.003 1 23.00395.900 36 2.664(1) Perceived iifonnal leadershipP0.3910.1880.055C. 2.2.2 Process Variables(a) Total nwnber of ideas mentioned during discussion in support of importance of itemsDEP VAR:DOSAT_ME N: 39 MULTIPLE R: 0.396 SQUARED MULTIPLE R: 0.157ANALYSIS OF VARIANCESUM-OF-SQUARES DF MEAN-SQUARE F-RATIO1.600 1 1.600 0.7543.835 1 3.835 1.8068.375 1 8.375 3.94474.322 35 2.123DEP VAR:DPSAT_14E SQUARED MULTIPLE R: 0.227F-RATIO0.0261.9288.635P0.8730.1740.006DEP VAR:DLEADME N: 40 MULTIPLE R: 0.590 SQUARED MULTIPLE R: 0.348ANALYSIS OF VARIANCESOURCE SUM-OF-SQUARES DF MEAN-SQUARECS 6.944 1 6.944IE 32.400 1 32.400CS*IE 6.944 1 6.944ERROR 86.778 36 2.410F-RATIO2.88113. 
4412.881P0.0980.0010.098The variable has undergone a transformation to meet the requirements of ANOVA.223DEP VAR:IDEASSU9 N: 32 MULTIPLE R: 0.721 SQUARED MULTIPLE R: 0.520ANALYSIS OF VARIANCESOURCE SUM-OF-SQUARES DF MEAN-SQUARE F-RATIO pCS 25.708 1 25.708 10.486 0.003IE 53.162 1 53.162 21.684 0.000CS*IE 2.813 1 2.813 1.147 0.293ERROR 68.646 28 2.452(b) Total number of ideas mentioned during discussion in refutation qf importance of itemsDEP VAR:IDEASREF N: 32 MULTIPLE R: 0.346 SQUARED MULTIPLE R: 0.120ANALYSIS OF VARIANCESOURCE SUM-OF-SQUARES DF MEAN-SQUARE F-RATIO PCS 3.920 1 3.920 0.140 0.711IE 82.944 1 82.944 2.956 0.097CS*IE 40.139 1 40.139 1.430 0.242ERROR 785.714 28 28.061(c) Nwnber of ideational connections with ideas generated during electronic brainstormingThis variable deals with ideational connections involving ideas generated earlier during electronicbrainstorming; correspondingly, this analysis involves only groups within the “cognitive support”condition.DEP VAR:IDCONEBS N: 13 MULTIPLE R: 0.720 SQUARED MULTIPLE R: 0.519ANALYSIS OF VARIANCESOURCE SUM-OF-SQUARES DF MEAN-SQUARE F-RATIO PIE 1111.430 1 1111.430 11.867 0.005ERROR 1030.262 11 93.660(d) Nwnber of ideational connections with ideas mentioned during earlier discussionThis variable has undergone a transformation to meet the requirements of ANOVA.DEP VAR:IDCONDI9 N: 32 MULTIPLE R: 0.765 SQUARED MULTIPLE R: 0.585224ANALYSIS OF VARIANCESOURCE SUM-OF-SQUARES DF MEAN-SQUARE F-RATIO PCs 9.147 1 9.147 7.148 0.012IE 43.511 1 43.511 34.002 0.000CS*IE 3.351 1 3.351 2.619 0.117ERROR 35.830 28 1.280(e) Total number of times a specfIc rating suggestedDEP VAR: RATGALL N: 32 MULTIPLE R: 0.432 SQUARED MULTIPLE R: 0.186ANALYSIS OF VARIANCESOURCE SUM-OF-SQUARES DF MEAN-SQUARE F-RATIO PCS 181.373 1 181.373 0.498 0.486IE 1721.714 1 1721.714 4.723 0.038CS*IE 115.340 1 115.340 0.316 0.578ERROR 10207.670 28 364.560(t) Procedural commentsDEP VAR: PROCEDJ N: 32 MULTIPLE R: 0.439 SQUARED MULTIPLE R: 0.193ANALYSIS OF 
VARIANCESOURCE SUM-OF-SQUARES DF MEAN-SQUARE F-RATIO PCS 4532.544 1 4532.544 6.317 0.018IE 69.659 1 69.659 0.097 0.758CS*IE 77.584 1 77.584 0.108 0.745ERROR 20091.822 28 717.565225
