UBC Theses and Dissertations


A Commitment Reasoning Support System Lau, Lik Alaric 1993

Full Text


A Commitment Reasoning Support System: CRSS

By

Lik Alaric Lau

B.S., The Pennsylvania State University, University Park, 1990

A THESIS SUBMITTED IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE in THE FACULTY OF GRADUATE STUDIES, MANAGEMENT INFORMATION SYSTEMS, FACULTY OF COMMERCE AND BUSINESS ADMINISTRATION

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA

December, 1993

© Lik Alaric Lau, 1993

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Abstract

The concept of commitment has recently been adopted by researchers to model cooperative organizational activities. A commitment is an agreement by an agent to a requester to carry out a requested action. A Commitment Reasoning Support System (CRSS) is proposed in an attempt to support the reasoning process in deciding whether or not to make a commitment. An implementation of CRSS is constructed on the NeXT Computer System. An empirical study was conducted on the CRSS implementation. The findings show that the system is especially useful for commitment requests with high complexity and/or with important implications.
The study also provides important conceptual insights in commitment reasoning for future research.

Contents

Abstract
Table of Contents
List of Figures
1 Introduction
2 Concepts of Commitment
   2.1 The Literal Meaning
   2.2 Commitment from Sociology and Social Psychology
      2.2.1 The Side Bet/'Have to' Notion
      2.2.2 The Sentiment/Flow/'Want to' Notion
      2.2.3 The Connection between the Two Notions
      2.2.4 The Positive Elements, Negative Elements, and the Links
      2.2.5 Value Systems and Beliefs
   2.3 The Relationships of Decisions and Commitments
   2.4 Related Work
      2.4.1 Distributed Artificial Intelligence
      2.4.2 Multi-Criteria Decision Making Support Systems
      2.4.3 Indented Text Issue-Based Information Systems
      2.4.4 Contract Theory and Reasoned-Action Theory
      2.4.5 The Request-Commitment Support Model
   2.5 Definition of Commitment in this Thesis
3 Factors Related to Commitment Reasoning
   3.1 Resources
   3.2 Utility
   3.3 Constraints
   3.4 Sentiments and Internal Values
   3.5 Connections between Commitments
4 The Reasoning Process for Commitment
   4.1 Reasoning and Logic
   4.2 Commitment Reasoning as a Default Reasoning Process
   4.3 Theorist
   4.4 Preferred Subtheories
      4.4.1 Concepts
      4.4.2 The Formal Framework
5 The Commitment Reasoning Support System
   5.1 Theoretical Foundation
      5.1.1 Side Bet Analysis
      5.1.2 Sentimental Factors Analysis
   5.2 System Design
      5.2.1 The Main Menu
      5.2.2 The Side Bet Analysis
      5.2.3 Sentimental Factors Analysis
      5.2.4 Casebase
   5.3 Implementation
6 Two CRSS Examples
   6.1 Presentation in a Faculty Meeting
   6.2 Purchasing Car Stereo
7 Empirical Study of CRSS
   7.1 Objective
   7.2 Testing Procedure
   7.3 Testing Results
8 Conclusion and Future Work
Bibliography
Appendices:
A Domain-Independent Predicates in CRSS Rulebase
B CRSS Core Rulebase
C Empirical Testing Procedure and Data
   C.1 Testing Procedure
   C.2 Testing Data

List of Figures

5.1 Brickman's Model of Commitment
5.2 CRSS System Design
5.3 CRSS Main Menu
5.4 CRSS Side Bet Analysis
5.5 CRSS Rule Inspector
5.6 The Conceptual Model of the Core Rulebase Used in the Empirical Study
5.7 CRSS Sentimental Factor Analysis
5.8 CRSS Casebase
6.1 Flowchart of the Firing of Rules
6.2 CRSS Final Decision Screen
6.3 Purchasing Request
6.4 Side Bet Analysis of the Purchasing Request
6.5 Final Decision of the Purchasing Request

Chapter 1

Introduction

The concept of commitment was first widely studied and developed in the social psychology and sociology fields. The essential idea from these domains is that a commitment is a bond or a constraint on a person's courses of action. Recently, the social approach to modelling organizational activities has been studied by distributed artificial intelligence (DAI) researchers [Gasser91] [Hewitt91] and organizational information systems (OIS) researchers [Bond90] [Hewitt86]. In particular, the concept of commitment has been incorporated by researchers to model the cooperative activities in organizations. For example, it has been suggested that an organization can be modelled as a network of intelligent agents with mutual commitments [Bond90].

In this thesis, a specific form of cooperative activity is modelled by the concept of commitment: an agreement by an agent (an organization worker, in this context) to the requester to carry out the requested action [LuWoo92].
In order to support the reasoning process for deciding whether or not to make a commitment, relevant concepts of commitment from different research areas (artificial intelligence, social psychology, and sociology) are incorporated to build a reasoning support system: the Commitment Reasoning Support System (CRSS).

The research reported in this thesis attempts to (1) show the possibility and the applicability of providing computer support in the reasoning process of making commitments, (2) gain insights on the use of such support, and (3) find out the applicability of default reasoning to support this type of reasoning process. To attain these objectives, an implementation of CRSS is proposed and an empirical study has been conducted on the implementation.

In the next chapter, the concepts of commitment in different areas of study are explored. The factors related to the commitment reasoning process are discussed in Chapter 3. Chapter 4 illustrates how the commitment decision making reasoning process can be modelled by a default reasoning framework. The theoretical foundation, the system design, and the implementation of the CRSS are discussed in Chapter 5. Chapter 6 shows two examples to illustrate the functions of the CRSS implementation, followed by a discussion of the empirical study of CRSS in Chapter 7. Finally, this thesis ends with 'Conclusion and Future Work' in Chapter 8.

Chapter 2

Concepts of Commitment

2.1 The Literal Meaning

In the Webster Dictionary, a commitment means 'an agreement or pledge to do something in the future'. According to [Bond90], it originates from the Latin word committere, which conveys the idea of an action to unite individual entities into one whole.

2.2 Commitment from Sociology and Social Psychology

2.2.1 The Side Bet/'Have to' Notion

The concept of commitment was first studied in the field of sociology by Howard Becker [Becker60] [Bond90].
He suggested that commitments were determined by side bets. That is, when an individual participates in multiple settings, his commitment to a course of action in a particular setting is determined by the side bets resulting from his simultaneous participation in other settings (or activities) [Becker60]. (For ease of reading, the masculine form of pronoun is used to refer to both males and females.)

A side bet could be any item 'that the person has linked to a course of action that was initially related to that item; that the person will lose if he does not follow the course of action in the future; that he has chosen by prior actions to make the linkage (intentionally or unintentionally); and that the person is aware of having made the linkage (when he is considering derailing from the course of action)' (p. 33, [Brickman77]). For instance, a person is considering switching from his current cellular phone company to a new one (for which his phone number must be changed). Even though the new company provides better services and greater convenience, he might decide not to switch because of the side bets he has made with his current phone number, namely having the number listed in the Yellow Pages and on all his business cards. This person is said to be committed to staying with his current phone company.

Other researchers have also defined 'commitment' in similar ways. In [Gerard67], a commitment is defined as 'any constraints that operate against changing behavior', while in [Kiesler71], a commitment means 'the pledging or binding of the individual to behavioral acts'. [Brickman77] points out that the term 'commitment' is sometimes used by people to refer to a course of action they 'have to' follow, or else they stand to lose (or not to gain) something valuable to them.

To summarize, the Side Bet or 'Have To' notion of commitment is determined purely by utilitarian considerations [JanMan77].
Utilitarian considerations can be of two types: (1) utilitarian considerations for self, and (2) utilitarian considerations for significant others (for instance, family members, spouse, friends, co-workers, etc.) [JanMan77].

2.2.2 The Sentiment/Flow/'Want to' Notion

Another notion of commitment is described as a course of action that a person 'wants to' follow [Brickman77]. This type of commitment is purely determined by the internal values (or sentimental factors) assigned to it. [Csikszen75] describes this kind of commitment as a flow: an activity that has a person's total attention and is self-motivating, appearing to need no external rewards. For example, there are people who commit themselves to volunteer work on weekends, which has no apparent utilitarian reward. It is their conscience or other internal values that motivate them to make such commitments.

[JanMan77] explains that the actual driving forces behind commitments of this type are: (1) self-approval considerations (e.g., personal ideals, ethical and moral considerations, and self-image), and (2) social approval considerations (e.g., image to and opinion from peers, close friends, and family). These considerations may not take forms as tangible as the utilitarian considerations, but, depending on the nature of the commitment, they sometimes have a more dominant influence than the utilitarian ones [JanMan77].

2.2.3 The Connection between the Two Notions

These two notions seem to suggest contradictory meanings for the term 'commitment'. The Side Bet (or 'have to') notion has a utilitarian perspective on the concept of commitment, while the Flow (or 'want to') notion has an intrinsic perspective. However, they both suggest a common underlying characteristic of commitment: a binding force which resists temptations and negative factors and binds a person to a course of action. From this perspective, the two notions differ only in their binding forces.
The Side Bet notion of commitment [Becker60] is shaped by external values: money, time, material, skill, etc., while the Flow notion is motivated by internal values: personal ideals, self-image, moral considerations, etc.

Furthermore, [Brickman77] points out that in reality commitments usually do not come in such extreme forms as the above examples; they usually lie somewhere on the continuum between the 'want to' (the Flow notion) and the 'have to' (the Side Bet notion). That is, most instances of commitment are driven by both external forces (utilitarian considerations for self and significant others) and internal forces (self- and social approval).

2.2.4 The Positive Elements, Negative Elements, and the Links

In [Brickman77], a commitment is said to contain three sets of elements: positive elements, negative elements, and the links between the two. In other words, for every commitment, there are some elements which attract the agent to make the commitment and some elements that repel him from making it; the agent's final decision depends on how he evaluates the pros and cons of the commitment (i.e., how he links up the relative importance of the two sets of elements).

This view is also endorsed by many other social scientists [Lewin51] [JanMan77]. Basically, they suggest that a person will only commit to a course of action if he links more positive factors than negative factors to it ('more' can be in qualitative terms as well as quantitative terms).

2.2.5 Value Systems and Beliefs

In the process of making a decision to commit to some course of action, we implicitly assign intrinsic values to that action. The value system of a person can be viewed as a set of beliefs about the world. The same action may have entirely different meanings to two people who hold different beliefs about it.
For example, consider an invitation to attend a social function which conflicts with one's class schedule: one person might decide to accept the invitation because he believes that socializing opportunities and interpersonal skills are more important than academic knowledge, while another person might refuse to go because he believes that it is 'wrong' to skip class for any reason other than an emergency. [Becker60] states that the value system of a person is largely determined by his family, educational, and social backgrounds. Hence, how an agent makes commitments is affected by his belief or value system [Becker60] [Bond90] [Brickman77]. From this viewpoint, a commitment reflects how an agent's value system links external actions with his internal values.

2.3 The Relationships of Decisions and Commitments

The process of decision making is the selection of an act or a course of action from among alternative acts or courses of action such that it will produce optimal results under some criteria of optimization [JanMan77].

There are three characteristics that distinguish a system as a DSS [BoncHolWhin81]:

1. a model is incorporated in the system;
2. it provides useful information to higher-level management, aiming at supporting relatively unstructured decision making situations;
3. it provides powerful and user-friendly interfacing for problem solving.

In summary, a DSS is a system that provides a user-friendly interface to support the semi-structured decision making processes carried out by higher-level management.

Recall the meaning of commitment from Webster's Dictionary: 'an agreement or pledge to do something in the future'. There are very intimate relationships between the concept of commitment and most psychological formulations of the decision-making process [JanMan77].
[Lewin51] suggests that a commitment has the effect of 'freezing' a decision (i.e., commitment is what makes the decision maker stick with his decision). The concept of commitment affects a decision maker both before and after the decision is made [JanMan77]: before making the decision, the decision maker often considers the consequences of being bound by each alternative act, while after the decision is made, the decision maker feels obligated to carry out his decision.

In essence, a commitment is a concept that binds a decision maker to his decision. From the definition of a decision making process, a commitment reasoning process can be defined as a decision making process in which the agent selects either to commit or not to commit to a requested action such that it will produce optimal results. The Commitment Reasoning Support System can, therefore, be viewed as a specialization of a decision support system with the following characteristics:

• It focuses on handling decision making situations at a personal level, namely situations where the user needs to decide whether or not to make the requested commitment, while much of the DSS research effort is devoted to situations at the corporate level.

• It aims at supporting the making of decisions that are meant to be carried out and followed through. That is, the emphasis is on the accountability of decision making (see Section 5.2.4). The system enhances such accountability by: (1) alerting the agent to the consequences of his decision before he actually decides, (2) providing support for the user to understand his rationale for making the decision, and (3) keeping records of his decisions and decision rationale in case they are questioned in the future.

• It attempts to address the sentimental aspect of personal decision making.
These sentimental factors (e.g., considerations of self- and social approval) play important roles in the decision making process [JanMan77] but are not emphasized by decision making models.

Based on the characteristics mentioned above, the system proposed here is called a Commitment Reasoning Support System (CRSS) rather than a decision support system (DSS).

2.4 Related Work

2.4.1 Distributed Artificial Intelligence

In the current state of Distributed Artificial Intelligence (DAI) research, researchers are striving to adopt a social approach as the foundation of DAI [Gasser91]. It is believed that properties in a multiple agent system cannot be characterized only by the properties of individual agents, but must be characterized through the social interactions of these agents, such as trials of strength, commitments, negotiation, and cooperation [Hewitt91]. [Bond90] views the interactions between agents as a network of commitments: agents make mutual commitments to establish the functionality of the settings they are in. For instance, in order for an agent to calculate the annual budget correctly, he has to assume that other agents have made the commitment to supply accurate information; in turn, when these agents receive the budget report later, they will assume that the agent who calculated the budget has used their information accordingly to produce the budget, and that they can rely on it to make further commitments. [Bond90] also suggests that agents can be characterized by the types of commitments made, the kinds of resources used, the set of agents committed to, and the set of beliefs held.

These research projects provide valuable insights on how to model organizational activities with the concept of commitment, and these insights are incorporated into our work (e.g., the utility of resources, the inter-connection between commitments, the interaction between value systems and commitments). However, little work has been done on supporting the decision making process for making a commitment within organizations.

2.4.2 Multi-Criteria Decision Making Support Systems

A Multi-Criteria Decision Making (MCDM) problem is a decision making problem with at least two conflicting criteria and at least two alternative solutions [Tabucanon88].

Criteria are measures, rules, and standards that guide decision making. All attributes, objectives, or goals which have been judged relevant in a given decision making situation are criteria. Criteria are said to be in conflict if the full satisfaction of one will result in impairing or precluding the full satisfaction of the others [Tabucanon88]. Attributes are characteristics used to describe a thing: (1) objective traits such as profit, loss, production volume, etc., and (2) subjective traits such as prestige, goodwill, beauty, etc. Objectives are aspirations that also indicate directions of improvement of selected attributes, such as maximizing profit or minimizing losses. Goals are aspirations with given levels of attributes, such as increasing profit by 5% [Tabucanon88].

The commitment reasoning process can be considered a Multi-Criteria Decision Making process because:

• The positive and negative factors present conflicting criteria of the process. If we view criteria as goals to be met, the positive factors are goals projected to be met if committed, and negative factors are goals projected not to be met if committed.
These two sets of criteria are in conflict since, presumably, it is impossible for the agent to benefit from the positive factors while avoiding the negative factors if he commits to the requested action.

• To commit and not to commit are the two alternative solutions.

However, current MCDM research differs from our research on commitment reasoning in the following respects:

• Current MCDM research focuses on solving structured decision making situations such as production planning, allocation decisions, project selections, etc. Commitment reasoning emphasizes unstructured decision making situations (i.e., situations in which an agent needs to decide whether or not to commit to a requested action). The unstructuredness comes from the sentimental considerations (such as friendship, reputation, and other self-image factors) in these decision making situations. It is believed that sentimental factors play a crucial part in commitment reasoning [Brickman77] [Becker60] [JanMan77] [Lu92].

• Situations considered by MCDM researchers usually have a large set of alternative solutions with objective criteria [Tabucanon88], while situations in commitment reasoning have a mixture of objective and subjective criteria and a set of two alternative solutions: commit or not commit.

• MCDM research emphasizes building mathematical models to support decision making at a corporate level. Commitment reasoning research, on the other hand, focuses on supporting the decision making process at an individual level.
The subjectivity and the unstructuredness of the factors involved in the process at this level render the use of those mathematical models impractical.

Despite the different focuses of MCDM and commitment reasoning research, the following insights for commitment reasoning have been gained from the MCDM research:

• It can be viewed as an MCDM process.

• There are conflicts between criteria in a commitment reasoning process.

• Ranking the importance of the criteria is an important first step in solving an MCDM problem.

In response to these findings, the CRSS rulebase design includes a built-in ranking feature to handle the conflicting criteria (see Section 5.1.1).

2.4.3 Indented Text Issue-Based Information Systems

A different but related piece of work was done in [Conklin91], which describes a system called the 'indented text Issue-Based Information System' (itIBIS). The idea behind this project is to formalize the documentation of design rationale (DR), which is defined as 'the information that explains why an artifact is structured the way that it is and has the behavior that it has' (p. 359, [Conklin91]). Essentially, the system formalizes the documentation by allowing the users to state the design problem, all possible solutions, the final decision, and the arguments for adopting the final decision in a specified text notation.

The field trial of itIBIS shows that it enhances the quality of design decisions by:

• allowing designers to easily refer to the DR of previous related design problems recorded in itIBIS;

• allowing team members to clearly understand other members' positions and arguments, which are recorded in itIBIS during the group meeting;

• allowing the designers to more thoroughly consider the factors that are relevant to the design problems. This benefit is achieved by requiring users to explicitly state the design problem, all possible solutions, the final decision, and the arguments for adopting the final decision.
By doing so, a designer can see a clearer picture of the entire design problem, and thus can search his memory for salient information which can lead to a high-quality design decision.

Similar decision support facilities are featured in the Decisional Balance Sheet in [JanMan77] and in the Decision Construction Set in [HelMarWoo]. This piece of research is discussed here because we adapted the design rationale idea from itIBIS in our Commitment Reasoning Support System. Detailed usage of this work and how it benefits commitment reasoning are discussed in Sections 5.1.2, 5.2.3, and 5.2.4.

2.4.4 Contract Theory and Reasoned-Action Theory

The concept of contract is widely studied in law, economics, and sociology [Macneil80] [Lu92]. [Lu92] states that the essential difference between a contract and a commitment is that a contract is usually perceived as having an emphasis on economic exchanges, while a commitment emphasizes more sentimental factors.

Reasoned-Action Theory [FisAjz75] claims that a person's intention toward a behavior is determined by his:

1. attitude to the behavior (i.e., the expected behavioral consequences and the evaluations of these consequences);
2. subjective norm (i.e., the person's perception of what others think of the behavior).

[Lu92] provides a detailed analysis of the relationship between Reasoned-Action Theory and commitment. Conceptually, if we view making (or not making) a commitment as a behavior, Reasoned-Action Theory does provide a model for commitment reasoning. However, Reasoned-Action Theory does not provide enough detail about commitment for it to be useful for our work.

2.4.5 The Request-Commitment Support Model

In [Lu92], a commitment reasoning support system called RECOS (REquest-COmmitment Support model) was proposed. Many of the concepts in our model are adopted from and inspired by [Lu92]. Both RECOS and the CRSS aim at achieving one goal: to support the reasoning process conducted by an agent in order to decide whether or not to make a commitment.
However, the CRSS differs from the RECOS model in the following main ways:

• The RECOS project emphasizes surveying the various studies on the concept of commitment. The CRSS project utilizes the knowledge gained from RECOS to construct an implementation to support the decision making process of commitment making.

• RECOS provides invaluable insights and analysis of the decision making process of commitment making at a conceptual level. CRSS aims at acquiring insights into this decision making process at an implementation level (insights which may not be found at the conceptual level).

• RECOS provides a general framework for supporting the decision making process of commitment making. The CRSS project used the empirical study results from RECOS and additional research findings to construct a more precise commitment reasoning framework (based on Brickman's commitment model [Brickman77]).

• Even though the RECOS project suggests a framework which provides precious ideas on the design of such a system, it merely suggests 'what to build'. The CRSS project addresses many of the implementation issues by providing solutions for 'how to build'. In particular, much research went into coming up with a rulebase design (which adopts features from a default reasoning framework) and a set of domain-independent rules for the system (a core rulebase).

2.5 Definition of Commitment in this Thesis

Recall that the main purpose of this project is to model organizational activities in a way that effective computer support can be provided for them. In this section, a precise definition of commitment is given to further clarify the focus of this project:

Definition 2.5.1 A commitment is an agreement by an agent to a requester to carry out a requested action.

In an organization, workers very often agree or promise to carry out tasks that are requested by other workers.
For instance, a programmer promises to write a program requested by his supervisor, the purchasing department agrees to order new software requested by other departments, a worker agrees to carry out a sick colleague's work at his request, etc. All these agreements, promises, assurances, and guarantees are, according to the above definition, considered commitments.

Anyone who has worked in an organization knows that this type of commitment request occurs regularly, at different levels of complexity and importance. Every time an agent receives a request (which may appear in hand-written memo form, e-mail form, verbal form, or any other form), he has to go through a reasoning process in order to decide whether to commit to the request. The reasoning process is defined as follows:

Definition 2.5.2 A commitment reasoning process is the reasoning process conducted by an agent in order to decide whether or not to make a commitment.

The vision of this project is to have a computer system in the future which can support the commitment reasoning process to the extent that (1) some routine requests, such as requests for a weekly meeting or requests issued by certain people, are handled by the system automatically based on knowledge in the rulebase, and (2) reasoning support is provided for the agent when uncommon or important requests are received. A Commitment Reasoning Support System is proposed to realize this vision.

Chapter 3

Factors Related to Commitment Reasoning

After the discussion of the different concepts of commitment in Chapter 2, one can perceive that a commitment reasoning process is a process which takes various types of factors into consideration. In this chapter, an effort is made to identify these factors.

3.1 Resources

The relationship between resources and commitments has been discussed in various works [Becker60] [Bond90] [Gerson76] [Gasser91] [Hewitt91]. In particular, [Gerson76] expresses commitments as constraints on the use of resources.
Similarly, in [Gasser91], commitments are treated as the use of resources. In some literature, these resources include not only time, money, CPU time, etc., but also intangibles such as sentiments, knowledge, opportunities, etc. [Bond90].

In [Gerson76] [Hewitt91] [Lu92], resources are money, time/space/material, skill/technology, authority, and sentiment. Money, time/space/material, and skill/technology are more intuitively perceived as resources in organizations because:

1. they are directly involved in commitments;
2. they are quantifiable and/or measurable entities.

Authority, in this thesis, refers to the position an agent holds in the organization (e.g., general manager, administrator, etc.). It can be viewed as a resource because when an agent issues a commitment request, the requester's authority (relative to the requestee's authority) can be used as a resource that influences the outcome of the request. [JanMan77] refers to this type of resource factor as utilitarian factors.

Sentimental factors such as favors, competence, friendship, reputation, and other self-image factors are treated under a separate heading, 'Sentiments and Internal Values' (Section 3.4), for the following reasons:

1. they are not as suitable for quantification as the tangible factors (e.g., it is hard or even impractical to quantify how much friendship is needed to ask an agent to commit);
2. they play a distinct and important part in the commitment reasoning process [Brickman77] [JanMan77] [Lu92] (also refer to Section 2.2).

3.2 Utility

In an economics context, utility stands for the subjective pleasure, usefulness, or satisfaction derived from consuming goods [Samuel88]. In making a commitment, there is always an exchange of utility during the process; namely, an agent decides to make a commitment because he believes that he will gain more than he will lose after satisfying the commitment [ThibKell59] [LuWoo91].
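As a minimal illustration of this utilitarian exchange, the gain/loss comparison can be sketched as a simple aggregation. Note that all factor names and numbers below are invented for this sketch; CRSS itself encodes such trade-offs as rules in its rulebase rather than as a single numeric sum, and sentimental factors are deliberately excluded here.

```python
# Toy sketch of the utilitarian side of commitment reasoning: tally the
# expected gains and losses of resources for a request and commit only
# when the net utility is positive. Factor names and weights are
# illustrative assumptions, not part of the actual CRSS rulebase.

def net_utility(gains, losses):
    """Aggregate expected resource gains minus expected losses."""
    return sum(gains.values()) - sum(losses.values())

# A hypothetical request: review a colleague's report.
gains = {"goodwill_of_requester": 3, "future_reciprocity": 2}
losses = {"hours_of_work_time": 4}

decision = "commit" if net_utility(gains, losses) > 0 else "do not commit"
print(decision)  # net utility is 5 - 4 = +1, so: commit
```

Such a flat sum is only a caricature: as discussed next, utility is an aggregate of many dimensions of worth, not merely benefit minus cost.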
In [Brickman77], it is claimed that it is how the agent links the positive factors and the negative factors of a commitment that determines whether he will take on or reject the commitment.

The factor of utility is closely related to resources, in that it deals with the agent's beliefs about the gains and losses of resources in making a certain commitment. However, utility is more than merely the difference between the benefit and the cost of an action: the utility of an action is an aggregate measure of all dimensions of worth, such as cost, benefit, risk, and other properties related to the agent's situation [Doyle92] [ThibKell59]. In our Commitment Reasoning Support System, resource utility (e.g., gain/loss of time/space/material or skill/technology) is dealt with by means of rules in the CRSS rulebase. Agents can set up different rules for utility functions for different commitment types. As for the utility of sentimental and internal values (i.e., utility related to self-images of helpfulness, loyalty, competence, etc.), users are presented with all the relevant factors in a table form, so that they can analyze the factors which may be affected if they were to make or not to make a certain commitment.

3.3 Constraints

Constraints are rules and regulations that an agent is bound by in making commitments. Hence, constraints include corporate regulations and other written laws. [JanMan77] explains that constraints can also be social (e.g., approval from peers and family).
In Section 3.5, we will see how some connections between commitments can be viewed as constraints as well.

3.4 Sentiments and Internal Values

According to [Hewitt91], 'sentiment deals with how various participants feel about each other: goodwill, reputation, obligation, etc.' In other words, sentiment deals with an agent's beliefs about things such as self-images, images viewed by other agents, images of commitment, and the intrinsic values of behaviors.

According to [Brickman77], a commitment is a means to link people's internal beliefs to external actions. That is, when people carry out certain actions, they must have sufficient positive internal values attached to those actions to justify the negative features of the actions. Although CRSS has a more rigid definition of commitment, it is certain that our value systems play an important role in commitment reasoning in our context. For instance, if Agent A receives a request to do a certain task for Agent B, then when Agent A is deciding whether to commit, he will consider his relationship with Agent B, the implications of this commitment for his career, his self-image, and his perceived image as seen by other agents. All of these are linked to Agent A's value system in one way or another. If Agent A believes that helping a friend is very important, and he thinks Agent B is a good friend, then he is likely to commit. If, instead, he believes that his career is most important, then he will only commit when he thinks it will not interfere with his career pursuit.

[JanMan77] interprets sentimental factors as self-approval and social approval. Self-approval is how a person interprets the effects of his actions on his morality, personal ideals, and other self-image issues. Social approval is how a person interprets the effects of his actions on how the people around him view him.
[JanMan77] also agrees that these sentimental considerations have significant impacts on a person's commitment reasoning process. Section 2.2 provides a detailed discussion of the sentimental aspect of commitment.

3.5 Connections between Commitments

In [Becker60], an agent's commitments are constrained and determined by side bets (i.e., the agent's activities or commitments in other settings). In [Gasser91], commitment is also seen as a result of an agent's participation in multiple settings. [Bond90] states that an organization can be viewed as a 'web of commitments'. Despite the different contexts of these citations, they all agree that commitments are interconnected and closely related to each other. [Lu92] states that there are direct connections (a natural and predefined relationship between commitments) and indirect connections (interlinked by resources). Regardless of the type of connection, it is intuitive that when an agent is considering a commitment request, its connections with his other commitments are a factor that he will not ignore during his commitment reasoning process. For instance, when an organizational worker is asked to relocate to a foreign branch, it is highly unlikely that he does not consider the request's implications for (1) his commitments in current social settings (such as taking care of his parents) and (2) his future commitments (such as selling part of his properties, not seeing his friends and relatives for some time, adapting to a new culture, etc.).

At this stage, the CRSS does not handle commitment connections directly. We think that handling commitment connections requires extensive research in areas like conflict resolution and consistency maintenance, which are beyond the scope of the current project.

Nevertheless, connections between commitments can be handled indirectly by providing rules on resource constraints and regulation constraints in the CRSS rulebase.
For example, the user can specify constraint rules such as: if commitment type A is made, then commitment type B cannot be made. Indirect connections can be handled partially through resource management rules (e.g., if available resources are less than required resources, then the agent cannot commit).

Chapter 4
The Reasoning Process for Commitment

In order to incorporate the concept of commitment into organizational information systems, the representation of commitment must be dealt with in a formalized manner. [Bond90] points out three representational issues of commitments: (1) the expression of commitments (how to represent an entity called a commitment), (2) the generation of commitments (i.e., how one decides to make a commitment), and (3) the simple negotiation of commitments (how to decide a reasonable set of commitments for agents to make under given circumstances). In this project, the focus is on the first and second representational issues: the expression and the generation of commitments.

4.1 Reasoning and Logic

The most common way to model human beings' reasoning processes is through the use of logic. There is more than one way to characterize what logic is. From the artificial intelligence point of view, logic is the study of knowledge representation languages in which the notion of entailment can be captured [Ramsay88]. In other words, it is the study of languages such that when facts and rules are stated, there is a means to infer new facts.

The problem with classical First Order Logic (FOL) for modelling the human reasoning process is its monotonicity. That is, as the number of premises increases, the number of conclusions drawn cannot decrease. For every set of premises A and formulas p, q:

    A ⊢ q  implies  A ∪ {p} ⊢ q

This means new additional (and possibly more reliable) information can never invalidate old conclusions [Brewka91].
This poses a serious problem, since [Brewka91] points out that the two most crucial reasoning capabilities every intelligent agent should have are:

• the ability to handle incomplete information in a reasonable way (i.e., to draw sensible conclusions in the absence of complete information).
• the ability to handle inconsistent information.

For these reasons, we must rely on the features of a logic that is nonmonotonic to model the process of commitment reasoning.

4.2 Commitment Reasoning as a Default Reasoning Process

A reasoning process is said to be nonmonotonic when we draw conclusions based on incomplete information about the world and, upon the arrival of more information, we are ready to retract some of the conclusions made previously. In the context of commitment reasoning, when an agent conducts his reasoning process in an open system environment, where information is incomplete and new information may enter the system at any time, the agent must: (1) make default conclusions when the information needed to draw definite conclusions is incomplete, and (2) be able to retract some conclusions when more accurate information has arrived. For example, an agent may receive a request from a co-worker, who is an old friend of the agent, to do a certain task for him. The agent may at first decide to help him based on their friendship and his availability of resources, but the agent may later retract this decision after realizing that he will be tied up with other, more important commitments.

Default reasoning is a particular type of nonmonotonic reasoning [Brewka90] in that default assumptions are made in the presence of incomplete information. Using the popular example of flying birds [Reiter87], we want to represent that:

    'birds normally fly' or
    'if x is a bird then by default assume x flies'

so that we can use the rule to conclude Tweety flies when we only know Tweety is a bird. In other words, we would like to jump to conclusions in the absence of information to the contrary.
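The flying-birds default can be sketched as a query that jumps to the default conclusion unless contrary information is known. This is a minimal illustration of the behavior of default reasoning, not an implementation of any particular formal system:

```python
# Minimal default-reasoning sketch for 'birds normally fly': conclude
# fly(x) for any known bird unless x is a known exception. Adding a new
# fact (that Tweety cannot fly) retracts the earlier conclusion, which
# is exactly what makes the inference nonmonotonic.

birds = {"tweety"}
non_flying = set()  # known exceptions to the default

def flies(x):
    return x in birds and x not in non_flying

print(flies("tweety"))  # True: the default applies in the absence of contrary info

non_flying.add("tweety")  # new information arrives: Tweety cannot fly
print(flies("tweety"))  # False: the earlier conclusion is retracted
```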
Default reasoning is nonmonotonic, because 'inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations' (p. 153, [Reiter80]).

Commitment reasoning can be viewed as a type of default reasoning process in that an agent makes certain default assumptions in the absence of complete information about a request. In open system environments such as organizations, complete and accurate information cannot be obtained at any particular point in time. Therefore, organizational workers must make their decisions based on certain assumptions. For instance, they may have a rule like:

    If I gain more resources than I lose by making the commitment, then typically I commit.

Hence, knowing that a request will provide more gains than losses, the agent will make a default conclusion to commit, based on the assumption that he has the required resources to do so. However, this conclusion is retractable (or conditional); that is, if the agent discovers that he actually does not have the required resources to commit, he will retract the conclusion and decide not to commit.

Two formal frameworks for default reasoning, Poole's Theorist framework [Poole88] [Poole89] and Brewka's Preferred Subtheories framework [Brewka90] [Brewka91], will be examined. Part of the CRSS proposed in this thesis is based on the second framework, by Brewka. A description of Poole's Theorist framework (from which Preferred Subtheories is extended) is included here to assist the reader in understanding the rationale of the CRSS design.

4.3 Theorist

Theorist is actually the programming language in an implementation of a logical framework for default reasoning. For the sake of simplicity, we shall refer to this logical framework itself as Theorist from now on. The basic idea is that the system has a set of facts known to be true, and a set of possible hypotheses (or defaults).
We want the system to be able to draw correct conclusions, and not to imply things that are known to be false, based on the set of facts and some subset of possible hypotheses that are consistent with the facts [Poole88].

F denotes a set of closed formulas which we call 'facts'. Because facts are known to be true, they should be consistent with each other. For the set of defaults (or possible hypotheses), which is denoted by Δ (Δ is a set of formulas, possibly inconsistent), the definitions of scenario and extension are given as follows [Poole88]:

Definition 4.3.1 A scenario of F, Δ is a set D ∪ F where D is a set of ground instances of elements of Δ such that D ∪ F is consistent.

Definition 4.3.2 An extension of F, Δ is a set of logical consequences of a maximal (with respect to set inclusion) scenario of F, Δ.

In other words, a scenario is the union of the set of facts with a subset of instantiated defaults that are consistent with the facts. An extension is the set of all the logical consequences that can be derived from the largest such union found for given sets of facts and defaults.

This framework captures the semantics of default reasoning by providing a set of defaults. They are assumptions about the world which we are prepared to accept as being true as long as they do not conflict with the set of known facts. And we are ready to give up some defaults as soon as they become inconsistent with the current set of facts. This can be illustrated by the following example:

Example 4.3.1 Assume we have the following facts and defaults concerning flying animals:

    F = { ∀x bat(x) → mammal(x), ∀x bat(x) → fly(x), mammal(wayne) }
    Δ = { mammal(x) → ¬fly(x) }

With the given facts we can apply the default (in this case we have only one), and conclude that {¬fly(wayne)}.
However, if we later add bat(wayne) to the set of facts, then the default is inconsistent with the facts, and the system will now conclude {fly(wayne)}.

In [Poole88], an implementation called Theorist is proposed to capture this framework. It is a programming language that allows users to program the facts and defaults into the system, and it will generate the logical consequences entailed by the facts and defaults according to the user's instructions.

The following is a description of the main Theorist syntax [Poole88]:

    fact w, where w is a formula, means "∀w" ∈ F.¹

    default n, where n is a name (a predicate with only free variables as arguments), means n ∈ Δ.

    default n: w, where w is a formula, and n is a name with the same free variables as w, means that w is a default, with name n. Formally this means that n ∈ Δ and "∀(n → w)" ∈ F.

    explain g, where g is a formula, asks whether we can explain g from F and Δ. A consistent explanation is returned.

Definition 4.3.3 If g is a closed formula, then an explanation of g from F, Δ is a scenario of F, Δ which implies g.

Hence, when a user types in explain g, he is asking the system whether g is a logical consequence of any of the scenarios of F, Δ, and, if so, to show him the defaults used in coming up with such a consequence. The details of this system are not within the scope of our current interest. More explanation is provided in [Poole88] and [Poole89].

¹∀w is the universal closure of w. That is, if w is a formula with free variables v, then ∀w means ∀v w. Similarly, ∃w is the existential closure of w, that is, ∃v w.

4.4 Preferred Subtheories

4.4.1 Concepts

The Preferred Subtheories framework is proposed by Brewka [Brewka90]. It is considered to be an extension of Poole's framework for default reasoning. In Poole's framework, there is a set of facts (F), which are closed formulas that are consistent and known to be true, and a set of defaults (Δ), which are possible hypotheses.
The way the system handles inconsistencies is to use the maximal consistent subsets of the formulas. More specifically, the framework will use a maximal subset of Δ ∪ F which is consistent to entail logical consequences (i.e., an extension). However, there could be multiple maximal subsets which include different sets of formulas and which entail inconsistent logical consequences. The following example illustrates the problem:

Example 4.4.1 Consider the following facts and defaults about 'birds fly':

    Fact:    penguin(tweety)
    Default: penguin(X) → bird(X)
    Default: penguin(X) → ¬flies(X)
    Default: bird(X) → flies(X)

We have two maximal subsets of consistent formulas, namely:

    { penguin(tweety), penguin(X) → bird(X), bird(X) → flies(X) }
    { penguin(tweety), penguin(X) → bird(X), penguin(X) → ¬flies(X) }

The problem with Poole's framework is that it does not provide a preference for which maximal subset of formulas to use. In the above example, an intelligent system should choose the set of formulas from which it can conclude that Tweety doesn't fly.

The Preferred Subtheories framework solves this problem by claiming that, instead of two levels of reliability (the facts and the defaults), we could have multiple levels of reliability. Therefore, when inconsistencies arise, the more reliable defaults are preferred over the less reliable ones. Using this method, Example 4.4.1 can be expressed as follows:

    T1: penguin(tweety)
    T2: penguin(X) → bird(X)
    T2: penguin(X) → ¬flies(X)
    T3: bird(X) → flies(X)

With T1 being the most reliable level, this theory concludes bird(tweety) and ¬flies(tweety).

Consider the above theory without:

• penguin(X) → ¬flies(X)

The theory would conclude bird(tweety) and flies(tweety).
By inserting a more reliable rule, penguin(X) → ¬flies(X), we see that the system is able to retract the former conclusion flies(tweety) and draw a new conclusion: ¬flies(tweety).

4.4.2 The Formal Framework

Here are some formal definitions for the Preferred Subtheories framework from [Brewka90]:

Definition 4.4.1 A default theory T is a tuple (T1, ..., Tn), where each Ti is a set of classical first order formulas.

For any two Ti and Tj such that i < j, we say that the information in Ti is more reliable than that in Tj. Although the definition of the default theory per se does not indicate any sort of leveling in reliability, the leveling feature becomes clear as soon as the preferred subtheory is defined:

Definition 4.4.2 Let T = (T1, ..., Tn) be a default theory. S = S1 ∪ ... ∪ Sn is a preferred subtheory of T if for all k (1 ≤ k ≤ n), S1 ∪ ... ∪ Sk is a maximal consistent subset of T1 ∪ ... ∪ Tk.

This definition says that to obtain a preferred subtheory, we start with T1 and get its maximal consistent subset; then we go to T2 and obtain the maximal number of formulas from T2 which are consistent with the ones acquired from T1. Then we go on to the next level, and so on, until we arrive at Tn.

There are two notions of provability:

Definition 4.4.3 A formula p is weakly provable from T if there is a preferred subtheory S of T such that S ⊢ p.

Definition 4.4.4 A formula p is strongly provable from T if for all preferred subtheories S of T, S ⊢ p.

The next chapter will describe how some of the features from the Preferred Subtheories framework are adopted in the CRSS rulebase.

Chapter 5
The Commitment Reasoning Support System

5.1 Theoretical Foundation

The Commitment Reasoning Support System (CRSS) proposed here is based on the concept of commitment described in [Brickman77] (Sections 2.2.1, 2.2.2, 2.2.3, and 2.2.4).
Essentially, [Brickman77] says that a commitment consists of a 'have to' (utilitarian or side bet) aspect and a 'want to' (sentimental or flow) aspect, and each aspect has some positive factors and negative factors. The linkage between the positive and negative factors is made according to the value system of the person who makes the commitment. Our diagrammatic interpretation of this concept is given in Figure 5.1.

Brickman's model is chosen because it points to the importance of the sentimental aspect of a commitment, which is not emphasized by many researchers [JanMan77] [Brickman77] [Lu92]. Furthermore, Brickman's model is the result of a thorough analysis of the different studies conducted on the concept of commitment. By consolidating relevant findings from other studies with Brickman's own insights, a more complete and precise model of commitment results.

To operationalize Brickman's model, CRSS consists of two subsystems to handle the 'have to' and the 'want to' components of a commitment.

Figure 5.1: Brickman's Model of Commitment (panels: Have To/Side Bet; Want To/Flow)

Basically, CRSS assists users in determining the positive and negative aspects of both components of the commitment. The linkage of the 'have to' is determined by the rulebase reasoning in the Side Bet Analysis, while the linkage of the 'want to' is left for the user to determine by showing a table of sentimental factors. Upon making the final decision on whether or not to make the commitment, the user is required to write a short paragraph to explain and clarify his decision rationale (which corresponds to the linkage between the 'have to' and 'want to' parts of the commitment).

5.1.1 Side Bet Analysis

A Side Bet Analysis is provided in the CRSS to support the reasoning process concerning the utilitarian factors and rules [JanMan77] (i.e., the 'have to' or side bet portion of a commitment).
This subsystem uses rulebase reasoning to handle the utilitarian factors such as resource allocation (Section 3.1), utility (Section 3.2), constraints (Section 3.3), and connections between commitments (Section 3.5). The Analysis generates a recommendation on whether or not to commit according to the rules on utilitarian factors (i.e., the recommendation is the operationalization of the linkage of the 'have to' portion).

The rulebase in the Side Bet Analysis adopts features from the default reasoning framework called Preferred Subtheories described in Chapter 4. The basic idea is to group the reasoning rules, such as if [not enough resources to commit] then [do not commit], into levels of priority, with the more important rules in the higher levels of priority. Conclusions drawn from rules at higher priority override contradictory conclusions drawn from lower priority rules. This feature allows some former conclusions to be retracted if new rules, which refute those former conclusions, are inserted at higher levels.

The reasons for adopting such a framework for our CRSS rulebase are:

• In a monotonic rulebase system, when new rules are entered, the system's integrity must be maintained manually or by some dedicated consistency maintenance procedures [Buchanan83]. As the number of rules increases in a monotonic system, maintaining consistency can become very complicated (e.g., checking and modifying the existing rules in order to accommodate a new rule).

The nonmonotonicity of the proposed framework effectively decreases the chances of inconsistency (i.e., two or more rules drawing conflicting conclusions) by allowing users to put rules into multiple levels. By running the rulebase through the inference engine, users can find out which rules are fired and the levels they reside in. If conflicting rules need to be added, they can simply be put into a separate level without any modifications to the existing rules.
In other words, the task of maintaining the consistency of the rulebase is much simpler. This is highly suitable for an open system environment where new and possibly conflicting information and rules may emerge at any time.

• It more naturally resembles the reasoning capabilities of an intelligent agent than a monotonic system does. According to [Brewka91], an intelligent agent uses subsets of an entire set of possibly inconsistent knowledge to draw sensible conclusions in different situations. The prioritization of rules in this framework provides a natural mechanism for choosing relevant subsets of rules to draw sensible conclusions, while in a monotonic system such a mechanism is embedded in the rules themselves (in the condition clauses of the rules). This means that when new rules are required to reflect emerging new information in a monotonic system, some existing rules may need to be modified in order to generate the new subsets needed for sensible conclusions, whereas in the proposed framework, the new subsets are created by inserting new rules at the appropriate level.

• It allows users to prioritize their reasoning rules explicitly, so that they can have a clearer idea of how certain conclusions are reached, which in turn will enhance the decision making process in the future.

• It is pointed out in Multi-Criteria Decision Making (MCDM) research that ranking the importance of the criteria is an important first step in solving a MCDM problem [Tabucanon88]. Since a commitment reasoning process is also a MCDM process (refer to Section 2.4.2), this framework can provide support for ranking the criteria.

The rulebase language used in CRSS will be discussed in Section 5.2.2.

5.1.2 Sentimental Factors Analysis

The Sentimental Factors Analysis supports the aspect of commitment reasoning that deals with all the sentimental factors (as discussed in Section 3.4). As pointed out in [Hewitt91], sentiment is how people feel about each other's behavior.
In other words, people have different interpretations (according to their value systems) of different behaviors. As also pointed out in [JanMan77], sentimental considerations such as self-approval and social approval play important roles in the kinds of commitments a person makes.

In CRSS, we feel that it is impractical and ineffective to quantify sentimental factors and put them in rule form. For example, it would not be sensible to quantify how much the agent thinks of himself as being helpful by committing to help a friend. A list of sentimental factors is extracted from various literature in commitment studies [Gasser91] [Hewitt91] [Bond90] [Brickman77] (e.g., guilt, helpfulness, willingness to learn, superiority, etc.) with the intent of bringing these factors to the agent's awareness, and letting the agent interpret the 'meaning' of the commitment according to his value system (i.e., the linkage of the 'want to' portion). Bringing the relevant factors to the user's awareness is crucial to good decision making [Conklin91] [JanMan77] (Section 2.4.3); research has shown that people tend to make only satisfactory decisions, not optimal decisions, due to the overlooking of some important factors [Brickman77].

At the end of the Analysis, the user has to enter his final decision on whether or not to make the commitment, and to write a short paragraph to explain and clarify his decision rationale (the linkage between the 'have to' and 'want to' parts of the commitment). Research has shown that explicitly stating the decision rationale can improve the quality of the decisions made [Conklin91] [JanMan77] [HelMarWoo] (Section 2.4.3).

5.2 System Design

There are three components in the CRSS (indicated as ovals in Figure 5.2): (1) Main Menu, (2) Side Bet Analysis (which interacts with a CRSS Rulebase), and (3) Sentimental Factors Analysis (which interacts with a CRSS Casebase).

The three components of CRSS are displayed sequentially.
In other words, users will first see the Main Menu, where the request is displayed; pressing [Process Request] will lead the user to the Side Bet Analysis, where the information in the request is converted to facts in the rulebase. Pressing the [Sentiment] button takes the user to the Sentimental Factors Analysis, where relevant sentimental factors can be examined, and the final decision and the decision rationale are recorded. After the decision is made, the request and all results from CRSS are saved in the Casebase, and the entire reasoning support process is terminated.

Figure 5.2: CRSS System Design (legend: ovals denote processes, brackets denote repositories, arrows denote process flow)

Figure 5.3: CRSS Main Menu

The sequential ordering of the three subsystems is strictly an implementation limitation (the screen cannot fit all 3 subsystems simultaneously); no precedence is assigned to them in our design. The user can access any of the subsystems by clicking the corresponding buttons whenever he finds it necessary. The functions of each component are described below.

5.2.1 The Main Menu

In this component, a commitment request is presented in a special electronic mail message form called the Commitment Request Form (see Figure 5.3). The first three fields on the Form indicate the sender, the receiver, and the received date of the request, respectively. The Task Type field indicates the type of commitment (used by the CRSS Casebase for case retrieval). The Task Description is a detailed description of the requested task.
The Task Requirements/Rewards field is where the rewards and resource requirements (including the date and period of the commitment) are indicated.

At the current stage of implementation, this subsystem is responsible for the following functions:

• Allow users to browse new requests.
• Give users the option to go to the other two parts of the system.
• Process a request, i.e., convert the information of the current commitment request into facts in the rulebase so that the system can perform the Side Bet Analysis.

5.2.2 The Side Bet Analysis

This part of CRSS is based on the Preferred Subtheories framework. The Side Bet Analysis has the following capabilities (Figure 5.4):

1. Allow the user to alter (add, delete, and edit) rules and facts by pressing the [Edit Rulebase] button in the top right corner of the window in Figure 5.4. As indicated in Figure 5.5, a rule-construction panel is used to assist users in constructing rules.
2. Allow the user to re-process the request with the modified rulebase if necessary.
3. Display the result of the process (i.e., a recommendation and the rules that were fired). This is the bottom window in Figure 5.4.
4. Allow users to go to the Sentimental Factors Analysis (the icon with an eye on the right of the window in Figure 5.4) when they are satisfied with the rulebase recommendation.

Figure 5.4: CRSS Side Bet Analysis

Figure 5.5: CRSS Rule Inspector

The following is the rulebase language adopted for the Side Bet Analysis:

• Predicates are the main form of representation (they always start with lowercase letters). They describe the relationships between entities. For example, the predicate:

    authority(robert, advisor)

means that Robert's authority is advisor. In other words, authority is the relationship between the entities robert and advisor.

In the CRSS, a set of Domain-Independent Predicates and a core rulebase are provided (see Appendices A and B).
Users are allowed to add, edit, or delete any domain-dependent predicates according to their specifications.

• Variable names always start with uppercase letters (e.g., V1, Agent, TASK), while constants start with lowercase letters (e.g., robert, advisor).

• A fact is stated as a predicate followed by a period (e.g., cmt_type(design_project).).

• A rule is stated as follows:

    R##.##: IF condition THEN action

Each rule begins with a rule number: R##.##. The ## before the period is the number of the level where the rule resides; the ## after the period is a number that is unique to that rule within that level (the convention is to use the number of rules). For example, R2.12 means it is the twelfth rule at level 2.

A condition can be:

- conjunctives (AND) and/or disjunctives (OR) of predicates
- conjunctives and/or disjunctives of comparisons of variables or mathematical expressions of variables. For example:

    Var1 ≥ Var2 OR Var3 ≥ Var2/Var1

An action can be a conjunctive of the following:

- COMMIT or NOT COMMIT
- NOT R##.##. This action is for blocking the application of a rule.
- a predicate. This is used to create new predicates.
- REPLACE predicate WITH predicate. This is for changing the values of the arguments in a predicate. For example, REPLACE time_avail(40) WITH time_avail(30) changes the argument from 40 to 30.

An example of a legitimate rule:

    R1.1: IF constraint(T1) AND cmt_type(T2) AND T1=T2 THEN NOT COMMIT

This rule says that if there is a constraint on committing to the commitment type T2, then conclude NOT COMMIT.

There are three possible recommendations from the Side Bet Analysis: COMMIT, NOT COMMIT, or UNDECIDED. UNDECIDED means that none of the rules that conclude COMMIT or NOT COMMIT were fired.
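The R##.## naming convention can be parsed mechanically. The sketch below is our illustration of that convention (not CRSS's actual LEX/YACC parser); it extracts the priority level and the rule's index within that level, and the rule body shown is invented for the example.

```python
import re

# Illustrative parser for CRSS-style rule numbers 'R<level>.<index>':
# the digits before the period give the priority level, and the digits
# after it identify the rule within that level.

RULE_NUMBER = re.compile(r"^R(\d+)\.(\d+):")

def parse_rule_number(rule_text):
    m = RULE_NUMBER.match(rule_text)
    if m is None:
        raise ValueError("not a CRSS rule: " + rule_text)
    return int(m.group(1)), int(m.group(2))  # (level, index within level)

rule = "R2.12: IF time_avail(T) AND T >= 10 THEN COMMIT"  # hypothetical body
print(parse_rule_number(rule))  # (2, 12): the twelfth rule at level 2
```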
Regardless of the recommendation, this subsystem will show all the rules that were fired in reaching a particular recommendation (see the bottom window of Figure 5.4).

A Core Rulebase

A core rulebase is included in the Side Bet Analysis to provide some basic knowledge for determining the availability of required resources, the importance of the task and the requestor, the utilitarian gain/loss of the requested commitment, etc. The provision of a core rulebase is an attempt to allow the Side Bet Analysis to draw sensible conclusions based on utilitarian information without the users entering any rules at the initial stage of using the system. The conceptual model of the core rulebase used in the empirical study is shown in Figure 5.6. The core rulebase is constructed based on the research on commitment detailed in Chapters 2 and 3.

Figure 5.6: The Conceptual Model of the Core Rulebase Used in the Empirical Study (inputs: Agent's Information, Request's Information)

It is well known that two different experts in the same domain may have entirely different conceptual models for an expert system, and that getting experts and users to agree on one conceptual model is a vast research problem by itself [TricDavi93]. Our assumption is that the conceptual model of the CRSS core rulebase is a commonly understandable, agreeable model in the commitment reasoning context, upon which most users can build their domain knowledge without major difficulties in understanding and agreeing with the model. For example, the core rulebase in Figure 5.6 contains rules which subtract the number of occupied hours from the total number of working hours to determine the time available for a commitment. It is unlikely that there are many very different ways to determine the availability of time.

In other words, we assume that the core rulebase is readily agreeable and understandable enough that, in most cases, it does not suffer from the consensus problems of domain-dependent expert systems mentioned in [TricDavi93]. Hence, the usefulness of this core rulebase is limited in domains where meaningful conclusions cannot be drawn because (1) one common conceptual model cannot be extracted, or (2) the conceptual model of the user is incompatible with the core rulebase's. One viable solution to the latter limitation is to provide users with a choice of multiple core rulebases with different conceptual models (see Chapter 8).

5.2.3 Sentimental Factors Analysis

The Sentimental Factors Analysis provides a table that shows the sentimental factors that could be affected if the user commits, and if he does not commit, to a request. The user can indicate whether any of the sentiments would be increased, decreased, or unchanged, and he can also alter the list of sentiments according to his context (Figure 5.7). By listing the relevant sentimental factors in a table format, the agent can easily see how some factors could be affected if he were to make a certain decision.

The Sentimental Factors Analysis provides the following functions:

1. display the table of sentimental factors, allow the user to choose whether a factor would be increased, decreased, or unchanged, and let him put in short comments to explain his choice.
Hence, the limitation of the usefulness of thiscore rulebase exists when meaningful conclusions cannot be drawn in domains where (1) onecommon conceptual model cannot be extracted or (2) where the conceptual model of the userbeing incompatible with the core rulebase’s. One viable solution to the latter limitation is toprovide users with a choice of multiple core rulebases with different conceptual models (seeChapter 8).5.2.3 Sentimental Factors AnalysisThe Sentimental Factors Analysis provides a table that shows sentimental factors that couldbe affected if committed and if not committed to request. The user can choose whether anyof the sentiments are increased, decreased, or unchanged, and he can also alter the list ofsentiments according to his context (Figure 5.7). By listing the relevant sentimental factorsin a table format, the agent can easily see how some factors could be affected if he was tomake a certain decision.The Sentimental Factors Analysis provides the following functions:1. display the table of sentimental factors, allow the user to choose whether a factor wouldbe increased, decreased, or unchanged, and to put in short comments to explain hischoice.40Figure 5.7: CRSS Sentimental Factor Analysis2. allow users to add/delete sentimental factors according to their needs (i.e., the table ofsentimental factors are changable).3. provides access to casebase which allows the retrieval of previous cases ofthe samecommitment type. The retrieved information included (Figure 5.8):• the rules fired and recommendation generated.• the sentimental factors in the Sentimental Factor Analysis.• the final decision and the justification for making the decision. It is in a format of41a small paragraph where the agent describes in his own words about why he madethat decision. This paragraph is called the Linkage, since the agent makes hisfinal decision according to how his value system links up the pros and cons of allfactors related to the request. 
In other words, the agent makes his final decision by weighing the importance of the positive and negative effects of the utilitarian and the sentimental aspects, as well as the relative importance between the two aspects.

4. allow users to save the current request into the casebase.

5. allow users to enter the final decision of the commitment request (which is to be saved into the casebase). As mentioned above, the user needs to enter a short paragraph (called the Linkage) of his justification for making the decision.

5.2.4 Casebase

In the CRSS, a simple casebase is implemented. The main purposes of the casebase are:

• provide decision support by allowing users to refer to analyses of previous commitment requests of a similar type.

• keep track of all requests processed by the system in case decisions are questioned in the future (this is to emphasize the accountability of commitment making mentioned in Section 2.3).

• make future debugging and upgrading of CRSS easier (since previous cases can be used as a means to detect errors).

For the above purposes, the simple casebase in CRSS is implemented as a database with records of a predefined format. Each record represents a case and contains the following information:

Figure 5.8: CRSS Casebase

• the original Commitment Request Form, with Commitment Type as the key for each record.

• the recommendation and the rules fired from the Side Bet Analysis.

• the table of sentimental factors in the Sentimental Factor Analysis.

• the final decision made by the user, and a short explanation of why the decision was made.

The current version of the casebase only allows the stored cases to be retrieved for display in text form on the Casebase screen of CRSS (see Figure 5.8). This means that no information from the current version of the casebase can be retrieved and used by other parts of CRSS. Furthermore, the only matching mechanism is the manual selection of Commitment Type from the list of cases shown in the Casebase menu.
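The predefined record format and the matching-by-Commitment-Type mechanism described above can be sketched as a simple data structure. The field names here are illustrative assumptions, not the actual fixed text-file format used by CRSS.

```python
# Illustrative sketch of a CRSS casebase record and its Commitment Type
# matching; field names are assumptions for illustration, not CRSS's
# actual fixed text-file format.
from dataclasses import dataclass

@dataclass
class Case:
    commitment_type: str       # key used for matching records
    request_form: dict         # the original Commitment Request Form
    recommendation: str        # Side Bet Analysis: COMMIT / NOT COMMIT / UNDECIDED
    rules_fired: list          # all fired rules, including overridden ones
    sentimental_factors: dict  # factor -> "increased" / "decreased" / "unchanged"
    final_decision: str        # the user's final decision
    linkage: str               # the short written justification (the "Linkage")

casebase: list = []  # one record per processed request

def retrieve(commitment_type: str) -> list:
    """Manual matching: return all stored cases of the given Commitment Type."""
    return [case for case in casebase if case.commitment_type == commitment_type]
```

Retrieval is deliberately a flat filter on the key field, matching the current version's single, manual matching mechanism.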
Future work can be done to expand the capability of the casebase in these two aspects (see Chapter 8).

5.3 Implementation

The implementation of CRSS is done on the NeXT computer system. This platform was chosen for its support for building mouse-driven user interfaces. The Interface Builder makes programming the user interface a relatively simple task. Most of the programming is done in Objective-C, the NeXT version of object-oriented C, while part of the rulebase interpreter is written in LEX and YACC.

Both of the CRSS repositories, the casebase and the rulebase, are kept in text files. Both have fixed internal formats which are recognized by corresponding read/write subroutines.

The rules in the CRSS rulebase are translated into Prolog statements and then executed by a Prolog interpreter. The sophisticated inferencing capability of the Prolog interpreter justifies the extra work of converting the rulebase into Prolog statements; it would be much more complicated to program the entire inference engine from scratch. Outputs from the Prolog interpreter are interpreted by CRSS's own interpreter (i.e., the CRSS rulebase interpreter handles things such as the correct leveling of rules, the blocking of rules, the conflict of conclusions, etc.) in order to provide the correct outputs to the user. The part of CRSS's own interpreter that is responsible for translating the rulebase language into Prolog statements is written in LEX and YACC. LEX and YACC are tools specially designed for writing lexical analyzers and parsers, respectively. Their output can easily be translated into C statements (C is a subset of Objective-C) which can be integrated with the rest of the code. This approach is less complicated than writing the interpreter in Objective-C.

In principle, the CRSS interpreter goes from the rules of the highest priority to the lowest.
It records the level of the first rule which concludes COMMIT or NOT COMMIT and overrides all the contradictory conclusions made by lower-level rules, but all fired rules (including the overridden ones) are recorded and shown to the user. A list of blocked rules is kept in a file, and procedures are called to check that blocked rules will not be fired and that higher-priority rules cannot be blocked by lower-priority rules. The end product is a conclusion (COMMIT, NOT COMMIT, or UNDECIDED) and a list of all rules that are fired.

Chapter 6

Two CRSS Examples

The following are two examples that illustrate the features of CRSS. The first example is one of the scenarios used in the CRSS empirical study. The second example is a real-life purchasing commitment problem with a substantial financial implication encountered by the author.

6.1 Presentation in a Faculty Meeting

Example 6.1.1 The agent is a master's student in an academic institution. He receives a request from Bob Smith, the Associate Dean of his Faculty, asking him to represent the Graduate Students' Association (GSA) in the bi-monthly Faculty Meeting. The meeting will last 2 hours and the agent needs to conduct a 30-minute presentation in front of the Faculty members (Figure 5.3). The presentation is to summarize the activities and the financial status of the GSA.

1. The request comes in on the form in the Main Menu of CRSS in Figure 5.3. When the user presses [Process Request] at this point, the Side Bet Analysis screen will show up as in Figure 5.4. All information from the request form is already converted to facts in the rulebase (in the current version of CRSS, the request information is converted to predicates in the rulebase manually), and the request has already been processed at this point.

Figure 6.1: Flowchart of the Firing of Rules

2. As indicated in Figure 5.4, CRSS suggests COMMIT.
The user can go through the information provided in the bottom window of Figure 5.4 to find out how CRSS arrived at this recommendation:

• the requestor is an important person,

• the task type is public in nature, and thus is an important task type,

• and the user will gain presentation skill by committing to the task.

The rulebase recommends COMMIT despite the fact that there is a time conflict, and that the commitment does not provide any monetary compensation for the time the agent will spend. Although not implemented in the current CRSS, this window could potentially display the use of rules in a graphical form as shown in Figure 6.1.

3. The user has the option to edit the rulebase and then re-process the request. A Rule Inspector is provided to support rule editing (Figure 5.5). After editing the rulebase, one can re-process the request by clicking the [Process Request] button on the Side Bet screen.

The nonmonotonicity of this rulebase effectively decreases the chances of inconsistency (i.e., two or more rules drawing conflicting conclusions) by allowing users to put rules into multiple levels. By running the rulebase through the inference engine, the user will find out the fired rules and the levels they reside in; hence he can decide what level to put new rules in to obtain the result he desires. The process can be repeated until the desired result is reached. This nonmonotonic feature effectively simplifies the task of maintaining consistency of the rulebase by reducing the need for tampering with the rest of the rulebase.

4. After the user has obtained the result in the Side Bet Analysis, he can click on [Sentiment] to proceed to the Sentimental Factor Analysis screen (Figure 5.7).
On this screen, the agent can indicate the effects he thinks the current commitment request would have if he decided to commit or not to commit. He can retrieve cases with the same commitment type from the casebase (Figure 5.8) for reference.

5. At the end, the user is to input his final decision as to whether to commit or not (in the Sentimental Factor Analysis section), and he is to write down a short explanation of the rationale of his decision (Figure 6.2). This explanation has the following two purposes:

• to ensure that the user understands his decision rationale (it is the linkage between positive and negative factors, and between the 'have to' and 'want to' aspects).

• to make the casebase more informative for future use (i.e., future users will obtain more information from this paragraph on how the final decision was reached).

Figure 6.2: CRSS Final Decision Screen

6. Finally, the user can click on the [End CRSS] button to end the entire CRSS session.

6.2 Purchasing a Car Stereo

Example 6.2.1 The agent wants to decide whether he should purchase an automobile compact disc/radio stereo unit. It costs about $500-$600, which is a large amount of money given the agent's financial status. On one hand, the agent does not want to spend that kind of money on a luxury item; on the other hand, he has a strong desire to acquire the unit as a means to improve the sound quality of his car stereo. Therefore, he issues himself a commitment request in CRSS, hoping that CRSS can provide him some reasoning support.

1. The agent sends a request to himself as shown in Figure 6.3.

2. After the user has pressed [Process Request], the Side Bet Analysis screen will show up as in Figure 6.4. The CRSS rulebase suggests NOT COMMIT for the following reasons:

• the agent does not have enough monetary resources for the commitment;

• the overall value of the commitment is a loss, meaning that besides a loss of $500, the rulebase shows no other gain for committing.

3.
Since this request concerns few sentimental factors, the agent skips the Sentimental Factor Analysis and goes on to make his final decision (Figure 6.5). He decides to commit to buying the unit only if it costs less than $550.

Figure 6.3: Purchasing Request

Figure 6.4: Side Bet Analysis of the Purchasing Request

What makes this example interesting is that it was actually a real-life problem faced by the author himself. Before using the CRSS, the author could not make up his mind on whether or not he should buy it. In fact, he had changed his mind several times before he decided to seek help from the CRSS. After using the CRSS, the author not only felt that the CRSS clarified his reasoning process by making him understand that he was basically spending $550 in return for the satisfaction of owning and listening to a stereo unit, he also felt that by recording his final decision and justification ('linkage') in CRSS, his decision was made 'official' and he no longer had the intention to change his mind. In the end, the author did not buy the unit, since he could not get a deal below $550.

This example illustrates (1) that CRSS does help the user to clarify his thoughts (sometimes by explicitly stating obvious but overlooked facts) in his reasoning process, and (2) that it helps to bind the user to his final decision by requiring him to put his decision and decision rationale in writing. The second point is an especially important contribution because it is the binding effect that characterizes the concept of commitment which inspired the CRSS research. As pointed out in Section 2.3, this binding effect augments the accountability of decision making, which is not emphasized in current DSS research. After all, a DSS is of little use if the user never implements his decisions.

Figure 6.5: Final Decision of the Purchasing Request

Chapter 7

Empirical Study of CRSS

A series of semi-structured empirical testings was conducted on the CRSS implementation.
Three subjects were used in the testings. All subjects were master's students of the Management Information Systems Division of the Faculty of Commerce. One of them is a second-year student, and the others are third-year students. Each testing lasted approximately 2.5 to 3 hours, and the testings were all conducted on separate days.

Before conducting the actual testing with the subjects, a pilot study was conducted with a full-time research assistant in the Faculty of Commerce. The intention was to detect any implementation problems as well as testing procedure problems.

The basic structure of the study is to let subjects use CRSS on a set of provided commitment request scenarios, and then collect their feedback (details of the testing procedure are provided in Section 7.2 and Appendix C).

7.1 Objective

The objective of the testing is twofold:

1. Gain implementation insights: the usefulness of the CRSS as a system to support commitment reasoning. In particular, we hope to find out:

• situations in which CRSS is most practical, and situations in which it is not as practical.

• the limitations of CRSS; for instance, the limitations on the expressiveness of the rulebase.

• the possibilities for implementation improvements (that is, how information can be presented on screen to the user to provide better support).

• the possibility of providing a core rulebase in the Side Bet Analysis. The intention is to make CRSS a functional system without the painstaking effort of entering the rules on the user side.

2. Gain conceptual insights: in particular, the commitment reasoning process of the subjects vs. the commitment reasoning model of CRSS, which can:

• provide valuable knowledge for the advance of commitment reasoning research in the future.

• provide ideas for conceptual improvements of CRSS.
For instance, there might have been factors in the reasoning process which the conceptual model of CRSS does not cover.

• act as an indicator of the usefulness and the limitations of CRSS.

• indicate the usefulness and problems of the application of default reasoning.

7.2 Testing Procedure

The assumption for the testing is that subjects are rational (to be rational means to have well-understood and well-supported reasons for every decision made), or at least attempting to use CRSS as a tool to enhance their reasoning rationally. A person making a decision based on what he 'feels like' is an example of irrational reasoning.

Since CRSS is designed to be used in organizational environments, we decided to choose our subjects for the testing from the organization to which we are most closely related: our Faculty of Commerce. A series of academic commitment request scenarios to which subjects could easily relate were used:

1. Request by an associate dean to present in a Faculty meeting (Example 6.1.1).

2. Request by a staff member of the Job Placement Office for an interview on how to improve the Office's service.

3. Request by a master's student to help out in a new student orientation.

4. Request by the thesis supervisor to become a marker.

5. Request by another master's student to be a subject for an empirical study.

6. Request by yet another master's student to comment on his research paper. This request is carried out three times in order to study the change in the reasoning process when the subject repeatedly receives similar requests from the same requestor.

A core rulebase, 6 sets of sentimental factors, and a hypothetical weekly class schedule are provided for the testing.
The core rulebase consists of rules for resource allocation, scheduling, and various cost/benefit analyses (see Section 5.2.2, and Appendices A and B). The sentimental factors are based on the pilot study's results and research results (refer to Chapters 2 and 3).

At the beginning of the testing, the structure of the CRSS and the purpose of the testing are briefly explained to the subject. During each testing, the subject is to process all the request scenarios consecutively with the CRSS.

Subjects are allowed to alter the core rulebase and the sentimental factors to fit their needs. Assistance is provided when called upon by the subject. During and after the processing of each request, the subject's comments are recorded. Questions are put to the subject whenever necessary to elicit feedback. A discussion at the end of each testing session is carried out to summarize the subject's overall comments concerning both the implementation and conceptual aspects of CRSS. For a detailed description of the testing procedure, see Appendix C.

7.3 Testing Results

The results are categorized into implementation and conceptual:

1. Implementation Results:

• All subjects found the core rulebase of the Side Bet Analysis useful in providing resource analyses such as locating conflicts in the schedule, calculating the availability of required resources, and calculating the utilitarian gain/loss of the commitment. From the practicality perspective, this feature of CRSS is highly desirable because very few users are willing to spend a long period of time entering their rules before using a new system.

This result confirms the assumption that the core rulebase normally can provide a reasonable amount of support without major customization.
However, we do not eliminate the possibility that there may be domains where such a core rulebase is not applicable.

• No major difficulties were encountered when subjects needed to put their own rules into the rulebase (with the researcher's assistance), although some rules needed to be expressed slightly differently in the rulebase. Subjects found the leveling feature of the rulebase very practical for prioritizing their rules.

The integrity of the rulebase was unaffected by the entering of new rules, meaning that the rest of the rulebase did not need to be changed when new rules were entered.

• The rulebase language was perceived as 'programming language-like' by some subjects. It was suggested that it be translated into a more 'natural language' format, or presented in a form from which users can easily understand the reasoning of the rulebase.

We adopted the suggestion from the first subject to present the reasoning process of the rulebase in flowchart form (Figure 6.1). The subjects who followed found the flowcharts easier to understand than a list of rules.

• All subjects found the Sentimental Factors Analysis useful in reminding them of the sentimental factors they need to consider. However, all subjects were unsure of the significance of the Sentimental Factors Analysis in relation to the Side Bet Analysis. One subject perceived the Sentimental Factors Analysis as more important, while the other two thought the Sentimental Factors Analysis was a supplement to the Side Bet Analysis.

The confusion was caused by the order in which the two parts were presented to the subjects: the Side Bet Analysis came before the Sentimental Factors Analysis.
The design of CRSS does not assign precedence to them; however, the screen was not large enough to hold both parts side by side simultaneously. We compensated for the problem by stating to the subjects in the subsequent sessions that the two parts had no precedence over each other and should be considered as two attributes of a commitment.

• In terms of the overall performance of CRSS, subjects in general felt that the system was still a research version, but they expressed that they would use CRSS when the request was 'difficult' to decide. Summarizing the feedback from all subjects, the difficulty of a request lies in its complexity and importance. The complexity of a request is proportional to the number of factors that need to be considered, while the importance of a request is proportional to the magnitude of its implications for the agent. Therefore, a 'difficult' request is one with high complexity and/or important implications.

• Since the scenarios used in the empirical study are hypothetical, the system's effect on the accountability of decision making could not be tested directly. However, the fact that the subjects expressed their willingness to use CRSS on 'important' commitment requests gives some indication that they felt CRSS could provide a finalizing effect on their decisions. That is, they felt that they would be more likely to stick with their decisions after using CRSS.

2. Conceptual Results:

• All subjects agreed that the CRSS rulebase was an appropriate model for their reasoning processes concerning resource allocation and scheduling under most situations. However, it was noticed that subjects did not always think like the CRSS model of reasoning. Although in most cases it was possible to convert the way the subjects reasoned into the model, CRSS was not able to provide effective support in the following situations:

— When all factors in CRSS suggested commit, yet the subjects still insisted on 'not commit', and could not provide explanations for doing so.
This could be caused by the subjects' reluctance to reveal the real reasons, or by the fact that the subjects simply did not understand their own reasoning process at the moment. One possible explanation of why the subjects were unaware of the important factors that shaped their decision is 'the operation of repression and other psychological defense mechanisms' (p. 140, [JanMan77]).

— When subjects let psychological factors, such as a bad mood, laziness, etc., decide whether or not to commit.

Under these situations, the subjects were considered to be irrational and therefore outside the boundary of the application of CRSS. However, these phenomena should inspire interesting research ideas in the future.

• One of the subjects expressed concern that the CRSS rulebase result might have a leading effect on her reasoning process, and that its conceptual model might not be as commonly understandable and agreeable as it was assumed to be. To address the concern, we decided to ask subjects to go through certain requests with their own reasoning process prior to the use of CRSS, so that they could compare their conceptual model with the rulebase's conceptual model. At the end, one subject agreed that the rulebase's conceptual model usually coincided with her own model, and the other two subjects expressed that its model could easily be understood although they might not necessarily always think that way. The two subjects also stated that they sometimes used one single most important factor (or rule) to make decisions. For example, one subject decided she would not commit to any request as long as there was a time conflict with a class, regardless of the requestor or the nature of the task. CRSS provides support in such a reasoning process in the following ways:

— The single most important factor can be entered as a rule (at a high level) in the rulebase. This way, all other less important rules will be overridden.

— CRSS can help identify that determining factor.
On several occasions, subjects indicated that the determining factors would not have been identified if they had not used CRSS. CRSS supported the reasoning process by reminding the user of relevant factors and rules.

• The Sentimental Factors Analysis was found especially useful when:

— the requestor and requestee had a close organizational and/or social relationship;

— the commitment was a public one (that is, the carrying out of the commitment would be seen by the public).

• As pointed out in the implementation results, the CRSS rulebase was easy to modify and provided sensible conclusions. However, both the subjects and the author felt that some of the sentimental factors, such as self-image and social approval factors, could not be sensibly represented in rule form. Both the subjects and the author indicated that the 'table' format used in the Sentimental Factor Analysis was a more effective representation. This observation reflects a possible limitation of applying default reasoning in DSS that needs to be compensated for by other means.

Chapter 8

Conclusion and Future Work

In this thesis, the concept of commitment is studied. Our focus is on studying the characteristics of commitment so as to develop a system that can support the process of commitment reasoning. We have proposed a system called the Commitment Reasoning Support System (CRSS), and have developed an implementation of it on the NeXT Computer System. The system is based on Brickman's concept of commitment, which is characterized by the interactions between the utilitarian and sentimental factors.

The semi-formal empirical study conducted on CRSS shows encouraging results and interesting findings.
In the practicality aspect, results show that CRSS can provide reasoning support by:

• providing a core rulebase to handle scheduling, resource allocation, and other quantifiable factors in commitment reasoning; the system also allows users to modify the rulebase to fit their needs.

• reminding the user to consider relevant sentimental factors in the process.

• finalizing the reasoning process and the final decision by requiring the user to record his final decision and his reasoning process. That is, it increases the accountability of decision making.

CRSS is most useful for requests with high complexity and/or important implications, since it helps to clarify and reveal the relevant factors that need to be considered, and to finalize the decision made (i.e., to make the user feel committed to the decision by providing him with a thorough analysis of the request).

In terms of the conceptual aspects, we observed that psychological factors played an important role in the reasoning process, that subjects sometimes used one single determining factor to make their decisions, and that resistance occurred when subjects were asked to explain their reasoning processes.

Combining the findings of the two aspects, the following are possible extensions of CRSS such that it may become a useful component in organizational environments (one which supports complicated and important requests as well as routine requests):

• have a more natural interface for the rulebase, so that non-technical users can easily understand the inputs and the outputs of the rulebase. One possible approach is to use a diagrammatic interface, as suggested by one of the subjects (see Figure 6.1).

• conduct further research to incorporate the psychological aspects of the human reasoning process into CRSS.

• provide a core rulebase which contains organizational knowledge.
Although CRSS is intended to support commitment reasoning at an individual level, organizational-level knowledge such as codes of conduct, administrative procedures, conventional ways of doing things, etc., needs to be included in the rulebase in order to provide sensible support. One possible way to obtain organizational knowledge is to connect CRSS with various knowledge bases and databases in the organization.

• provide multiple core rulebases (with different conceptual models) to choose from (1) when one common conceptual model cannot be agreed on within a domain, or (2) when the CRSS is used in different domains within the same organization (e.g., the finance department, the engineering department, etc.). The provision of multiple core rulebases requires different ways of interaction between the rulebases and organizational knowledge bases. A possible way to select the appropriate core rulebase is through the use of a casebase; that is, users can retrieve the core rulebase used in a matched case.

• expand the capability of the casebase (1) by providing a more sophisticated case matching mechanism so that matching can be achieved in a variety of ways, and (2) by enabling core rulebases and sentimental factors used in previous cases to be applied to the current case.

• incorporate CRSS into the organizational electronic mail network so that commitment requests can be sent and received by workers via electronic mail.

OASIS [MartWoo92] [WooLoc92] could be a possible implementation platform for these extensions.

In summary, this project has not only shown the possibility of providing computer support for commitment reasoning, it has also shown the applicability of default reasoning to commitment reasoning (via the CRSS rulebase).
The leveling feature is not only an intuitive way for the user to represent his reasoning process, it also simplifies the task of consistency maintenance, which can be very complicated in conventional rulebase systems.

Furthermore, the empirical study results demonstrate the usefulness of CRSS in handling highly complicated and important commitment requests, as well as in providing new insights into commitment reasoning. Although the current version of CRSS is still far from being a system that can be as practical as spreadsheets and database systems in organizations, it nevertheless has provided a building block for a promising research area which may benefit many organizational workers in the future.

Bibliography

[BarOuc86] Barney, J. B., Ouchi, W. G. (eds.), Organizational Economics, Jossey-Bass, San Francisco, 1986.

[Becker60] Becker, H. S., Notes on the Concept of Commitment, The American Journal of Sociology, Vol. 66, No. 1, 32-40, 1960.

[BoncHolWhin81] Bonczek, R. H., Holsapple, C. W., Whinston, A. B., Foundations of Decision Support Systems, Academic Press Inc., New York, 1981.

[Bond90] Bond, A. H., Commitment: A Computational Model for Organizations of Cooperating Intelligent Agents, Proceedings of the Conference on Office Information Systems, Cambridge, Massachusetts, April 25-27, 1990.

[Brewka90] Brewka, G., Preferred Subtheories: An Extended Logical Framework for Default Reasoning, pp 1043-1047, 1991.

[Brewka91] Brewka, G., Handling Incomplete Knowledge in Artificial Intelligence, Lecture Notes in Computer Science, Vol. 474: Information Systems and Artificial Intelligence: Integration Aspects, pp 11-29, 1991.

[Brickman77] Brickman, P., Commitment, Conflict, and Caring, Prentice-Hall Inc., New Jersey, 1977.

[Buchanan83] Buchanan, B. G., Barstow, D., Bechtel, R., Bennett, J., Clancey, W., Kulikowski, C., Mitchell, T., Waterman, D. A., Constructing an Expert System, in Hayes-Roth, F., Waterman, D. A., Lenat, D. B. (eds.),
Building Expert Systems, Addison-Wesley, 1983.

[Conklin91] Conklin, E. J., Yakemovic, K. C. B., A Process-Oriented Approach to Design Rationale, Human-Computer Interaction, Vol. 6, pp 357-391, 1991.

[Csikszen75] Csikszentmihalyi, M., Beyond Boredom and Anxiety: The Experience of Play in Work and Games, Jossey-Bass, San Francisco, 1975.

[Doyle92] Doyle, J., Rationality and Its Roles in Reasoning, Computational Intelligence, Vol. 8, No. 2, 376-409, 1992.

[FisAjz75] Fishbein, M., Ajzen, I., Belief, Attitude, Intention and Behavior: An Introduction to Theory and Research, Addison-Wesley Publishing Company, 1975.

[Gasser91] Gasser, L., Social Conceptions of Knowledge and Action: DAI Foundations and Open Systems Semantics, Artificial Intelligence, Vol. 47, 107-138, 1991.

[Gerard67] Gerard, H. B., Jones, E. B., Foundations of Social Psychology, John Wiley & Sons, Inc., 1967.

[Gerson76] Gerson, E. M., On 'Quality of Life', American Sociological Review, Vol. 41, 793-806, 1976.

[HelMarWoo] Helms, R. M., Marom, R., Woo, C. C., Decision Construction Set: A Personal Decision Support System, Draft Version 2.111, IBM Canada Lab, Centre for Advanced Studies, 1992.

[Hewitt86] Hewitt, C., Offices are Open Systems, ACM Transactions on Office Information Systems, Vol. 4, No. 3, 271-287, 1986.

[Hewitt91] Hewitt, C., Open Information Systems Semantics for Distributed Artificial Intelligence, Artificial Intelligence, Vol. 47, 76-106, 1991.

[JanMan77] Janis, I., Mann, L., Decision Making: A Psychological Analysis of Conflict, Choice, and Commitment, The Free Press, New York, 1977.

[Kiesle71] Kiesler, C. A., The Psychology of Commitment: Experiments Linking Behaviour to Belief, Academic Press, New York and London, 1971.

[Lewis51] Lewin, K., (Collected Writings) in D. Cartwright (ed.), Field Theory in Social Science: Selected Theoretical Papers, Harper, New York, 1951.

[Lu92] Lu, F., RECOS: A Request-Commitment Support Model, M.Sc.
Thesis, MIS Division, Faculty of Commerce, The University of British Columbia, Vancouver, 1992.

[LuWoo91] Lu, F., Woo, C. C., A Decision-Making Model for Commitments, Working Paper 91-MIS-019, Faculty of Commerce, University of British Columbia, 1991.

[LuWoo92] Lu, F., Woo, C. C., RECOS: A Request-Commitment Support Model, Proceedings of the Second International Society for Decision Support Systems (ISDSS) Conference, Ulm, Germany, June 22-25, 1992.

[Macneil80] Macneil, I. R., The New Social Contract: An Inquiry into Modern Contractual Relations, Yale University Press, New Haven and London, 1980.

[MartWoo92] Martens, C., Woo, C. C., OASIS2: An Environment for Developing Organizational Computing Systems, in Proceedings of the International Computer Science Conference, pp 460-466, Hong Kong, December 1992.

[Poole88] Poole, D. L., A Logical Framework for Default Reasoning, Artificial Intelligence, Vol. 36, pp 27-47, 1988.

[Poole89] Poole, D. L., An Architecture for Default and Abductive Reasoning, Computational Intelligence, Vol. 5, pp 97-110, 1989.

[Ramsay88] Ramsay, A., Formal Methods in Artificial Intelligence, Cambridge Tracts in Theoretical Computer Science 6, Cambridge University Press, 1988.

[Reiter80] Reiter, R., A Logic for Default Reasoning, Artificial Intelligence, Vol. 13, pp 81-132, 1980.

[Reiter87] Reiter, R., Nonmonotonic Reasoning, Annual Review of Computer Science, Vol. 2, 147-186, 1987.

[Samuel88] Samuelson, P. A., Nordhaus, W. D., McCallum, J., Economics, Sixth Canadian Edition, McGraw-Hill Ryerson Ltd., 1988.

[Tabucanon88] Tabucanon, M. T., Multiple Criteria Decision Making in Industry, Studies in Production and Engineering Economics, Vol. 8, Elsevier, New York, 1988.

[ThibKell59] Thibaut, J. W., Kelley, H. H., The Social Psychology of Groups, Wiley, New York, 1959.

[TricDavi93] Trice, A., Davis, R., Heuristics for Reconciling Independent Knowledge Bases, Information Systems Research, Vol. 4, No. 3, pp 262-288, 1993.

[WooLoc92] Woo, C. C., Lochovsky, F.
H., Knowledge Communication in Intelligent Information International Journal on Intelligent and Cooperative Information Systems, Vol. 1, No. 1, 1992.66Appendix ADomain-Independent Predicates inCRSS RulebaseTangible Resources Predicatesresource(TYPE, NAME, QUANTITY)TYPE : money, space, material, skill/technologyNAME : specific types of money, space, material, skill, and technologyQUANTITY: the available quantity of the resource—money — in dollars and cents—space — in square meters—material — in appropriate units (e.g. paper in number of sheets, gasolinein liters, etc.)—skill — in skill levels (depends on the domain of application, for thepurpose of the empirical study, +1 means the agent has the skill, —1means he does not have the skill).— technology — the availability of software and hardware. A particulartechnology can have two status: +1 (available), -1 (not available).resource..value(TYPE, NAME, QUANTITY) — the value (in dollars) perunit of a kind of resource (defined by the agent; same format as the resourcepredicate).67occupied(DAY,BEGIN,END,EVENT) — on DAY of each week from BEGINto END hours is occupied by EVENT.time..avail (X) — the amount of hours available for each week.• Authority Predicatesauthority(X,pos) — Agent X’s position is pos in the organizationimportant(requestor) — the requestor is an important person• Commitment Type Predicatescmt_type(TYPE) — the commitment is of type TYPE (e.g., presentation,meeting, commenting, etc.).cmt_ri.ature(NATURE) — NATURE of a commitment can be private, public,dangerous, or other adjectives to describe the nature of the commitment.important(task) — the commitment request is important• Requestor Predicatesrequestor(X) Agent X is the requestor of the commitment• Constraints Predicateconstraint(T) — commitment type T is prohibited to be committed to• Commitment Requirements Predicatesrequired...resource(TYPE, NAME, QUANTITY) — same format as the resourcepredicate. 
A positive QUANTITY means the amount is rewarded to the requestee; a negative QUANTITY means the amount is required from the requestee when carrying out the commitment.

cmt_date(DAY, BEGIN, END, DATE) — the commitment needs to be carried out on DAY of the week from BEGIN to END hours, on exactly DATE.

cmt_time(Y) — Y hours are needed for carrying out the commitment.

• Resource Utility Predicates

no_resource(TYPE, NAME, QUANTITY) — indicates that there is a shortage of a certain required resource (short by QUANTITY). This predicate is usually generated by a rule that calculates the difference between the required resource and the available resource, while resource(TYPE, NAME, QUANTITY) records the amount of the resource available to the agent.

cmt_value(Z) — the calculated value of the commitment in dollars.

time_conflict(DAY, CB, CE) — there is a time conflict with the commitment on DAY of the week between CB and CE hours.

will_gain_resource(TYPE, NAME, QUANTITY) — the agent will gain the resource if committed (same format as the resource predicate).

should_not_cmt(REASON) — the agent should consider not committing due to REASON.

Appendix B

CRSS Core Rulebase

authority(self, msc).
authority(smith, asso_dean).
authority(malcolm, msc).
authority(fu, msc).
authority(wong, placement).
authority(edwards, advisor).
instructor(comm525, smith).

resource(money, grant, 200).
resource_value(material, coffee_cookies, 3).
resource_value(material, t_shirt, 20).
resource_value(material, luncheon, 20).
resource_value(time, cmt_time, 11).
resource_value(money, _, 1).
time_avail(40).
cmt_value(0).
cmt_time(0).

occupied(fri, 1200, 1400, gsa_meeting).
occupied(mon, 900, 1100, comm522).
occupied(wed, 900, 1100, comm522).
occupied(mon, 1400, 1600, comm580).
occupied(wed, 1400, 1600, comm580).
occupied(mon, 1100, 1300, mis_workshop).
exam(comm525, 1200, 1400, 9/5/93).
occupied(tues, 1200, 1400, comm525).
occupied(thur, 1200, 1400, comm525).
occupied(tues, 1400, 1600, comm635).
occupied(thur, 1400, 1600, comm635).
occupied(sun, 1200, 2200, study).

R1.0: IF required_resource(TYPE,NAME,VALUE1) AND resource(TYPE,NAME,VALUE2) AND VALUE3 = VALUE1+VALUE2 AND VALUE3 < 0 THEN no_resource(TYPE,NAME,VALUE3)
R1.11: IF required_resource(A,B,C) AND C < 0 AND NOT resource(A,B,_) THEN no_resource(A,B,C)
R1.12: IF required_resource(A,B,C) AND C > 0 THEN will_gain_resource(A,B,C)
R1.2: IF cmt_type(presentation) THEN cmt_nature(public)
R1.3: IF cmt_type(meeting) THEN cmt_nature(public)
R1.4: IF requestor(X) AND authority(X,asso_dean) THEN important(requestor)
R1.41: IF requestor(X) AND authority(X,advisor) THEN important(requestor)
R1.5: IF occupied(DAY,B,E,_) AND cmt_date(DAY,CB,CE,_) AND CB > B AND CB < E THEN time_conflict(DAY,CB,CE)
R1.51: IF exam(COURSE,B,E,DATE) AND cmt_date(_,CB,CE,DATE) AND CB > B AND CB < E THEN exam_conflict(COURSE,CB,CE,DATE)
R1.6: IF occupied(DAY,B,E,_) AND cmt_date(DAY,CB,CE,_) AND CE > B AND CE <= E THEN time_conflict(DAY,CB,CE)
R1.7: IF requestor(X) AND instructor(COURSE,X) AND exam_conflict(COURSE,CB,CE,_) THEN NOT R2.11
R1.8: IF cmt_date(_,B,E,_) AND X = (E-B)/100 AND cmt_time(Y) AND Z = X+Y THEN REPLACE cmt_time(Y) cmt_time(Z)
R1.9: IF occupied(_,B,E,_) AND X = (E-B)/100 AND time_avail(Y) AND Z = Y-X THEN REPLACE time_avail(Y) time_avail(Z)
R1.91: IF required_resource(X,K,Y) AND resource_value(X,K,Z) AND cmt_value(A) AND T = Z*Y AND T2 = A+T THEN REPLACE cmt_value(A) cmt_value(T2)
R1.92: IF cmt_time(X) AND resource_value(time,_,Z) AND cmt_value(A) AND T = Z*X AND T2 = A-T THEN REPLACE cmt_value(A) cmt_value(T2)
R2.0: IF cmt_nature(public) THEN important(task)
R2.1: IF time_conflict(DAY,CB,CE) THEN NOT R8.2
R2.11: IF exam_conflict(_,_,_,_) THEN NOT COMMIT
R2.12: IF no_resource(skill,B,C) THEN will_gain_resource(skill,B,-C)
R2.2: IF time_conflict(DAY,CB,CE) THEN should_not_cmt(time_conflict)
R2.3: IF time_avail(X) AND cmt_time(Y) AND X < Y THEN should_not_cmt(time_shortage)
R2.4: IF cmt_value(X) AND X < 0 THEN should_not_cmt(loss)
R6.0: IF important(task) AND important(requestor) THEN COMMIT
R6.1: IF will_gain_resource(_,_,_) AND important(task) THEN COMMIT
R7.0: IF no_resource(TYPE,NAME,VALUE) AND TYPE <> skill THEN NOT COMMIT
R8.1: IF important(task) OR important(requestor) THEN COMMIT
R8.2: IF will_gain_resource(_,_,_) THEN COMMIT
R9.0: IF should_not_cmt(X) THEN NOT COMMIT
R10.0: IF NOT should_not_cmt(X) THEN COMMIT

Appendix C

Empirical Testing Procedure and Data

C.1 Testing Procedure

Before the testing begins:

1. Explain to the subject the purpose of the empirical study.
2. Briefly describe the components of CRSS.
3. Explain to the subject that in the empirical study his role is a master's student at the Faculty of Commerce of UBC. Given a hypothetical weekly class schedule, he needs to decide whether to commit to six commitment request scenarios (see Section C.2).
4. Ask the subject to raise any questions before the testing begins.

The following procedure is carried out for each scenario:

1. Show the subject the commitment request in the Commitment Request Form of the Main Menu. Ask the subject to raise any questions if he does not understand any part of the request.
2. When the subject indicates complete understanding of the request, he is told to analyze the request by himself before proceeding to use CRSS.
3. When the subject is ready, he is told to first process the request with the Side Bet Analysis by pressing the [Process Request] button.
4. At the Side Bet Analysis, the subject is directed to read the lower window, which contains the result generated from the rulebase. The subject is encouraged to ask questions about the rulebase and the results generated until he is clear about how the rulebase functions.
5.
Ask the subject to comment on the following concerning the Side Bet Analysis:
(a) The ease of understanding the rulebase language. If the subject expresses difficulty in understanding the language, he is asked to suggest possible amendments.
(b) The ease of understanding the reasoning process of the rulebase (i.e., the conceptual model of the rulebase).
(c) The reasoning process of the rulebase in comparison with his own reasoning process. If they differ, the subject is asked to identify the difference as precisely as possible. The facilitator will try to resolve the difference by modifying the rulebase.
6. If the rulebase is modified, the request will be re-processed to make sure the rulebase can generate the desired result.
7. The subject is asked to address any other concerns about the Side Bet Analysis before moving on to the Sentimental Factors Analysis.
8. At the Sentimental Factors Analysis, a list of sentimental factors (per scenario) is loaded onto the screen. The subject is asked to think of sentimental factors as 'things concerning how one expects oneself and others to perceive oneself if a commitment is made or not made.' The facilitator will then describe how the Analysis works.
9. The subject is asked if further explanation is needed on how the Sentimental Factors Analysis functions.
10. The subject is allowed to alter the list of sentimental factors to suit his needs, and the alterations made are recorded.
11. The subject is then instructed to indicate the effect of the commitment on the sentimental factors: increased, unchanged, or decreased.
12. Upon completing the Sentimental Factors Analysis, the subject is required to comment on the usefulness of this analysis in supporting the reasoning process:
(a) How it is useful.
(b) When it is useful.
(c) Its usefulness in comparison with the Side Bet Analysis.
13. The subject is then told to combine the results from the two analyses and make a decision on whether or not to commit to the request.
When the decision is made, the subject is instructed to press the [Final Decision] button to record his final decision and his decision rationale (in the Linkage field).
14. When finished, the entire analysis is saved into a casebase. The subject is then introduced to the Casebase component of CRSS, and he is reminded that the casebase can be used to support the analytical process in future use of CRSS.
15. At the end of each session, the following questions are asked regarding the scenario just processed with CRSS:
(a) Do you think the system is useful in this case?
(b) How do you think it could be more useful in this case?
(c) Do you think CRSS helps you to solidify your decision? That is, do you think it helps you to feel more committed to your decision?
(d) Which part(s) of the system do you like most in this case?
(e) Which part(s) of the system do you dislike most in this case?
(f) Which part(s) confuse you the most?
16. The subject is instructed to ask any questions before proceeding to the next scenario.

When all scenarios are completed, these questions are asked before ending the session to get the subject's overall impression of CRSS:

1. How useful do you think CRSS is in supporting commitment reasoning?
2. In what ways does CRSS coincide with and complement your commitment reasoning process?
3. What kind of commitment requests do you think CRSS would be most useful for?
4. Would you use it in real-life situations? If so, when?
5. Which part(s) of the system do you like most, based on the six scenarios you have processed with CRSS?
6. Which part(s) of the system do you dislike most, based on the six scenarios you have processed with CRSS?
7. What improvement(s) would you like to see in CRSS?
8. Please give a summary of your overall impression of CRSS.

C.2 Testing Data

Six request scenarios:

1.
Requestee: Subject's name
Requestor: Bob Smith
Received Date: 21st August, 1993
Task Type: Presentation
Task Description: You are invited to be the representative of the Graduate Student Association (GSA) in the bi-monthly Commerce Faculty meeting. You would be asked to do a 30-minute presentation in front of the Faculty during the meeting. The presentation will be a summary of the activities of the GSA for the previous 2-month period.
Task Requirements:
skill required: presentation skill
knowledge required: GSA activities
meeting date: 9/15/93, 14:00-16:00

2. Requestee: Subject's name
Requestor: John Lee
Received Date: 27th August, 1993
Task Type: Presentation
Task Description: You have been chosen to be one of the 'Orientation Leaders' for 'UBC Orientation 94'. You would be assigned a group of newly enrolled graduate students for a 3-day period. Your responsibilities include holding a campus tour, helping the students to register, coordinating various social events for the orientation, and helping the new students get acquainted with the new environment.
Task Requirements:
skill required: presentation skill
skill required: leadership
reward: T-shirt, luncheon with the Faculty Dean at the Pan Pacific.
meeting date: 8-30-93 to 9-1-93, 7am to 6pm.

3. Requestee: Subject's name
Requestor: Grace Wong
Received Date: 27th August, 1993
Task Type: Interview
Task Description: The Faculty of Commerce's Placement Office is conducting a series of interviews with Commerce students. The purpose of the interviews is to collect students' comments on our placement services in order to improve them in the future. You have been chosen to be one of the interviewees. You would be interviewed by one of our staff members for a 3-hour period. You would be asked about your family, social, and academic backgrounds, your impression of the Faculty of Commerce and the Placement Office, and things to improve for the Office.
Your participation would be greatly appreciated by the Placement Office.
Task Requirements:
reward: free coffee and cookies
meeting date: 9-8-93, 1pm - 4pm.

4. Requestee: Subject's name
Requestor: Steve Edwards
Received Date: 30th August, 1993
Task Type: Marker
Task Description: UBC needs YOU as a marker for COMM 336. The responsibilities include marking 8 homework assignments, of which 1-2 will be group projects. Also, among the individual assignments, 1-2 will be spreadsheet/database/expert system assignments, while the others are case studies. The marker will be paid at a 50 hrs subjective / 50 hrs objective rate, which turns out to be $1500/term. The marker is expected to receive a marking assignment every 2 weeks, and each assignment will take about 12-16 hrs to finish.
Task Requirements:
time required: 12-16 hrs every 2 weeks throughout the term.
skill required: MIS knowledge (with respect to COMM 336's material).
skill required: unbiased judgement.

5. Requestee: Subject's name
Requestor: Philip Fu
Received Date: 30th August, 1993
Task Type: Commenting
Task Description: You are asked by one of your fellow PhD students to comment on his paper. Your comments should emphasize both the strengths and the weaknesses of the paper. This job would probably provide you with interesting research ideas as well as broaden your knowledge of MIS.
Task Requirements:
time required: 2 hours
reward: research ideas
reward: MIS knowledge

6. Requestee: Subject's name
Requestor: Nigel Malcolm
Received Date: 30th August, 1993
Task Type: Testing subject
Task Description: You are asked by another MSc student to be a subject for the empirical study of his thesis project. As a subject, you need to work on his 'Group Support System' with 3 other subjects from the Faculty of Liberal Arts. The testing will require much interaction (directly as well as via computers) amongst the subjects. Then you need to comment on your experience with the system. The entire study will take 2 to 3 hrs.
You will be paid $15 for this task.
Task Requirements:
reward: $15
testing time: Tuesday, 1:00pm - 4:00pm, 9/15/93
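Read together, Appendices A and B describe a small forward-chaining production system: fact predicates such as resource(...) and required_resource(...) are matched against rule antecedents to derive new facts such as no_resource(...). As an illustration only — CRSS itself was implemented on the NeXT Computer System, and this is not its code; the tuple encoding and the derive_shortages helper are assumptions of this sketch — the shortage rules R1.0 and R1.11 could be evaluated in Python as follows:

```python
# Hypothetical sketch of rules R1.0 and R1.11 from Appendix B.
# Facts are encoded as (predicate, type, name, quantity) tuples; a negative
# required_resource quantity means the amount is required from the requestee.

def derive_shortages(facts):
    """R1.0: required + available < 0  =>  no_resource(type, name, shortfall).
    R1.11: a requirement with no matching resource fact also derives
    no_resource (treated here as an available quantity of 0)."""
    available = {(t, n): q for (p, t, n, q) in facts if p == "resource"}
    derived = set()
    for (p, t, n, q) in facts:
        if p == "required_resource" and q < 0:   # negative = required amount
            shortfall = available.get((t, n), 0) + q
            if shortfall < 0:
                derived.add(("no_resource", t, n, shortfall))
    return derived

facts = {
    ("resource", "money", "grant", 200),             # from the core rulebase
    ("required_resource", "money", "grant", -350),   # hypothetical request
    ("required_resource", "material", "paper", -100),
}
for fact in sorted(derive_shortages(facts)):
    print(fact)
```

On these sample facts the sketch derives a $150 money shortfall (R1.0) and a paper shortage of 100 sheets (R1.11); rule R7.0 would then turn either non-skill shortage into a NOT COMMIT recommendation.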

