UBC Theses and Dissertations
Enhancing the explanatory power of intelligent, model-based interfaces
Nakatsu, Robbie T. (2001)


ENHANCING THE EXPLANATORY POWER OF INTELLIGENT, MODEL-BASED INTERFACES

by

ROBBIE T. NAKATSU
B.A., Yale University, 1986

A THESIS SUBMITTED IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in THE FACULTY OF GRADUATE STUDIES (Faculty of Commerce and Business Administration, Division of Management Information Systems)

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
July 2001
© Robbie T. Nakatsu, 2001

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Abstract

This thesis considers intelligent, model-based systems and the system design choices that will result in enhanced explanatory power (i.e., more transparency and flexibility in the system). Intelligent systems provide advice to the end-user to assist in decision-making and problem-solving, and model-based interfaces make use of a graphical representation of a model, which an end-user is free to interact with and explore to better understand the system's advice. Having a model-based interface is believed to facilitate the development of an appropriate mental model of the system, so that a user can become a more proficient problem-solver when interacting with an intelligent system.

A primary objective of this dissertation is to offer empirically-based guidelines on how specific enhancements to explanatory power will improve system effectiveness. Two separate experiments were conducted to explore explanatory power by manipulating its three components: 1) content-based enhancements, 2) interface-based enhancements, and 3) the selection of an appropriate advisory strategy. Experiment I explored whether providing end-users with graphical-based hierarchies representing the structure of expert system rule-bases (interface-based enhancement), as well as with deep explanations, or underlying domain principles explaining system actions (content-based enhancement), actually improved problem-solving ability. Experiment II was an investigation that demonstrated how manipulations of advisory strategy affected explanatory power. For this experiment, two versions of an intelligent system were created: a high-restrictive system, in which the system prescribed for the end-user the order and manner in which the procedures of an intelligent system were to be used; and a low-restrictive system, in which the end-user was free to use an intelligent system's procedures in any sequence and in any manner he/she chose.

Multiple methods of measurement were employed to understand the effects of the different treatments in the study. Problem-solving performance, as assessed by a series of tasks, and execution time were the main dependent variables of interest. Questionnaires and essay questions were administered to all subjects to gain a deeper understanding of user preferences.
Finally, to gain a richer understanding of the problem-solving process, all of the subjects' actions were captured in a computer log, so that problem-solving strategies could be reconstructed and analyzed.

Table of Contents

Abstract
Table of Contents
List of Tables
List of Figures
Acknowledgements

Chapter 1 Introduction
  1.1 Research Objectives
    1.1.1 Defining Explanatory Power
    1.1.2 How Explanatory Power Can Influence System Effectiveness
  1.2 Background
    1.2.1 Expert Systems: Current Trends and the State of the Field
    1.2.2 Defining Expert Support Systems
  1.3 The Intelligent Systems Under Investigation: Expert Strategy and LogNet
  1.4 Research Contributions
  1.5 Organization of Thesis

Chapter 2 Model-Based Reasoning
  2.1 Background and Motivations
  2.2 Defining Model-Based Reasoning Systems
  2.3 Examples of Model-Based Reasoning Systems
  2.4 Toward More Flexibility in the User Interface

Chapter 3 Explanatory Power of a User Interface
  3.1 Criteria for Evaluating Explanation Quality
  3.2 Content-Based Enhancements
    3.2.1 Explanation Use in Traditional Rule-Based Expert Systems
    3.2.2 Limitations of Rule Traces
    3.2.3 More Sophisticated Explanations
    3.2.4 A Classification of Explanation Types: A Summary of Content-Based Enhancements
    3.2.5 Empirical Studies on Explanations Usage
  3.3 Interface-Based Enhancements
    3.3.1 Characteristics of the User Interface
    3.3.2 Research Examples
    3.3.3 Empirical Studies on Expert System Interfaces
  3.4 Advisory Strategies
    3.4.1 Types of Strategies
    3.4.2 Enhancing Explanatory Power via the Use of Advisory Strategies: A Summary
    3.4.3 System Restrictiveness
    3.4.4 Relationship of System Restrictiveness to Advisory Strategies
    3.4.5 Empirical Studies on System Restrictiveness
  3.5 A Framework of the Explanatory Power of a User Interface: A Summary of Chapter 3

Chapter 4 Theoretical Foundations
  4.1 Theories of Learning
    4.1.1 Theory of Assimilative Learning
    4.1.2 Mental Models Theory
  4.2 Diagrammatic Reasoning
    4.2.1 Larkin and Simon's Theory of Diagrammatic Reasoning
    4.2.2 Organizational Factors of Memory: Hierarchic Structures and Other Organizations
  4.3 A Theory of Failure-Driven Learning: Dealing with Exceptional Situations
  4.4 The Production Paradox
  4.5 A Research Framework for Experiment I and Experiment II
    4.5.1 Experiment I
    4.5.2 Experiment II
  4.6 A Summary of Chapter 4

Chapter 5 Expert Strategy: Hierarchic Models of an Expert System Rule Base
  5.1 Motivation and Background
  5.2 A Transportation Mode Selection Expert System
  5.3 The Model-Based Reasoning Mechanism Driving Expert Strategy
  5.4 Object-Oriented Design of Rule-Bases
  5.5 Problem-Solving Using Expert Strategy
  5.6 Deep Explanations Support
  5.7 Variations of Data Structures: Trees, Networks, Bills of Materials, and Breadth Hierarchies
  5.8 A Summary of Chapter 5

Chapter 6 LogNet: Building Logistics Networks Using Model-Based Reasoning Techniques
  6.1 The Domain Object Model
  6.2 An Illustration of Model-Based Reasoning on a Network Model
  6.3 Assumptions and Benchmarks of LogNet
  6.4 Model-Based Heuristics
  6.5 Free Form and Guided Versions of LogNet
    6.5.1 Free Form LogNet
    6.5.2 Guided LogNet
  6.6 Deep Explanations Support
  6.7 Problem-Solving Using LogNet
    6.7.1 A Consolidation Problem
    6.7.2 A Decentralization Problem

Chapter 7 Experiment I: An Investigation of the Effectiveness of Hierarchic Structure and Deep Explanations Support
  7.1 The Independent Variables: Between Subjects
  7.2 Subjects
  7.3 The Experimental Procedures
  7.4 Dependent Variables
  7.5 Logging
  7.6 A Framework for the Problem-Solving Tasks and Research Hypotheses
    7.6.1 Problem 1: Simple What-if Analysis
    7.6.2 Problem 2: Interaction Effects Between Two Variables
    7.6.3 Problem 3: Why-not? Problem or Counterfactual Reasoning
    7.6.4 Problem 4: Identification of Faulty Rules
  7.7 Other Research Hypotheses

Chapter 8 Experiment I Results, Part I
  8.1 Problem-Solving Performance
    8.1.1 Problem 1: Simple What-if Analysis
    8.1.2 Problem 2: Interaction Effects Between Two Variables
    8.1.3 Problem 3: Why-not? Problem
    8.1.4 Problem 4: Identification of Faulty Rules
    8.1.5 A Summary of Problem-Solving Performance
  8.2 Problem-Solving Efficiency
    8.2.1 Execution Time
    8.2.2 Number of Actions
  8.3 Usage of Explanations
    8.3.1 Usage of Deep Explanations for the Four Problems
    8.3.2 Compensatory Usage of Deep Explanations

Chapter 9 Experiment I Results, Part II
  9.1 Problem-Solving Strategies
    9.1.1 Strategies used to solve Problem 1
    9.1.2 Strategies used to solve Problem 2
    9.1.3 Strategies used to solve Problem 3
    9.1.4 Strategies used to solve Problem 4
  9.2 User Perceptions
    9.2.1 Questionnaires
    9.2.2 Essay Questions

Chapter 10 Experiment II: An Investigation of How System Restrictiveness Can Enhance the Explanatory Power of an Intelligent User Interface
  10.1 Overview of the Research Design
    10.1.1 System Restrictiveness
    10.1.2 Priming Condition
    10.1.3 Task Type
  10.2 Subjects
  10.3 The Experimental Procedures
  10.4 Dependent Variables
  10.5 Research Hypotheses

Chapter 11 Experiment II Results
  11.1 Scoring the Structured and Unstructured Tasks
  11.2 Effects of System Restrictiveness on Problem-Solving Performance
  11.3 Deep Explanations Usage
  11.4 Determinants of Learning
  11.5 Effects of Learning on Problem-Solving Performance
  11.6 User Perceptions
    11.6.1 User Confidence
    11.6.2 Perceived Usefulness
  11.7 Problem-Solving Efficiency
  11.8 Essay Questions

Chapter 12 Discussion and Conclusions
  12.1 Discussion of Experiment I
  12.2 Discussion of Experiment II
  12.3 Summary Discussion and Conclusions
  12.4 Problems and Limitations
  12.5 Implications for Designers of Intelligent Systems
    12.5.1 Features of Expert Strategy and LogNet
    12.5.2 Recommendations to Designers of Intelligent Interfaces

Bibliography
Appendix A: Sample Deep Explanation Fragments
Appendix B: Experiment I Materials
Appendix C: Multivariate Analysis of Variance on Experiment I Data
Appendix D: Experiment II Materials

List of Tables

Table 1.1: Differences Between DSS and ES
Table 2.1: Terminology Used in Model-Based Reasoning
Table 3.1: Factors For and Against System Restrictiveness, adapted from Silver (1991)
Table 6.1: Road Mileage Between Selected U.S. Cities
Table 6.2: Input Parameters of LogNet
Table 6.3: A Summary of the Seven Model-Based Heuristics of LogNet
Table 8.1: Descriptive Statistics for Problem 1 Score
Table 8.2: ANOVA Table of the Effects of Hierarchic Support and Deep Explanations Support on Problem 1 Score
Table 8.3: Treatment vs. Problem 1 Categories Crosstabulation
Table 8.4: Chi-Square Test of Problem 1 Score
Table 8.5: Descriptive Statistics for Problem 2 Score
Table 8.6: ANOVA Table of the Effects of Hierarchic Support and Deep Explanations Support on Problem 2 Score
Table 8.7: Treatment vs. Problem 2 Categories Crosstabulation
Table 8.8: Chi-Square Test of Problem 2 Score
Table 8.9: Descriptive Statistics for Problem 3 Score
Table 8.10: ANOVA Table of the Effects of Hierarchic Support and Deep Explanations Support on Problem 3 Score
Table 8.11a: Mean Ranks: Deep vs. No Deep
Table 8.11b: Kruskal-Wallis Test
Table 8.12: Descriptive Statistics for Problem 4 Score
Table 8.13: ANOVA Table of the Effects of Hierarchic Support and Deep Explanations Support on Problem 4 Score
Table 8.14: Treatment vs. Problem 4 Categories Crosstabulation
Table 8.15: Chi-Square Test of Problem 4 Score
Table 8.16: New Classification Scheme for Problem 4
Table 8.17: Chi-Square Test of Problem 4 Score
Table 8.18: A Summary of the Research Hypotheses
Table 8.19: Mean Execution Times and their Standard Deviations of the Four Problems (in minutes)
Table 8.20: ANOVA Results of the Effects of Hierarchic Support and Deep Explanations on Execution Time (Problems 1-4)
Table 8.21: Mean Number of Actions and their Standard Deviations of the Four Problems
Table 8.22: ANOVA Results of the Effects of Hierarchic Support and Deep Explanations on Number of Actions (Problems 1-4)
Table 8.23: Average Number of Deep Explanation Requests
Table 8.24: Multivariate Tests of Within-Subjects Effects
Table 8.25: Repeated Measures ANOVA Table of Between-Subject Effects (Hierarchic vs. Flat) on Deep Explanations Requests
Table 8.26: T-Test Comparing the Mean Deep Explanations Requested Between Hierarchic and Flat Treatments on Problem 4
Table 9.1: Number of Subjects Adopting Bottom-Up and Top-Down Strategies for Problem 2
Table 9.2: Chi-Square Test of the Relationship Between Interface Type and Problem 2 Strategy
Table 9.3: Average Problem 2 Scores for Bottom-Up vs. Top-Down Problem Solvers
Table 9.4: Regression Coefficients Testing Relationship Between Problem 2 Strategy and Problem 2 Score
Table 9.5: Descriptive Statistics for Percent-Upper
Table 9.6: ANOVA Table of the Effects of Hierarchic Support and Deep Explanation Support on Percent-Upper
Table 9.7: Regression Coefficients Testing Relationship Between Percent-Upper and Problem 4 Score
Table 9.8: Descriptive Statistics for Number of Rule Traces Requested
Table 9.9: ANOVA Table of the Effects of Hierarchic Support and Deep Explanation Support on the Number of Rule Traces Requested for Problem 4
Table 9.10: Regression Coefficients Testing Relationship Between Number of Rule Traces and Problem 4 Score
Table 9.11: Regression Coefficients Testing Relationship Between Number of Deep Explanations Requested and Problem 4 Score
Table 9.12: Items to Measure User Trust in the System
Table 9.13: Items to Measure Perceived Usefulness of the System
Table 9.14: Descriptive Statistics for User Trust Instrument
Table 9.15: ANOVA Table of the Effects of Hierarchic Support and Deep Explanations Support on User Trust
Table 9.16: Descriptive Statistics for Perceived Usefulness Instrument
Table 9.17: ANOVA Table of the Effects of Hierarchic Support and Deep Explanations Support on Perceived Usefulness
Table 9.18: Hierarchic Interface Strengths
Table 9.19: Hierarchic Interface Weaknesses
Table 9.20: Flat Interface Strengths
Table 9.21: Flat Interface Weaknesses
Table 9.22: Comments on Deep Explanations
Table 11.1: Average Scores of Four Repeated Measures
Table 11.2: Multivariate Tests of Repeated Measures Analysis
Table 11.3: Average Scores Broken Out By Treatment Order
Table 11.4: Descriptive Statistics for Treatment Order * System
Table 11.5: Average Number of Deep Explanations Requested
Table 11.6: ANOVA Table of the Effects of System Restrictiveness and Priming Condition on the Total Number of Deep Explanations Requested
Table 11.7: Average Learning Test Score
Table 11.8: ANOVA Table of the Effects of System Restrictiveness on the Learning Test Score
Table 11.9: Regression Coefficients Testing Relationship Between Number of Deep Explanations Requested and Learning Test Score
Table 11.10: Regression Coefficients Testing Relationship Between Learning Test Score and Structured Task Score
Table 11.11: Regression Coefficients Testing Relationship Between Learning Test Score and Unstructured Task Score
Table 11.12: Average Confidence Levels on the Four Tasks
Table 11.13: Multivariate Tests of Within-Subjects Confidence Level
Table 11.14: Average Confidence Levels
Table 11.15: Repeated Measures Analysis on Confidence Level
Table 11.16: Items to Measure User Trust in the System
Table 11.17: Average Scores of User Trust
Table 11.18: Multivariate Tests of Within-Subjects Effects
Table 11.19: Items to Measure Perceived Usefulness
Table 11.20: Average Scores on Perceived Usefulness Questionnaire
Table 11.21: Multivariate Tests of Within-Subjects Effects
Table 11.22: Average Execution Times
Table 11.23: T-Tests Comparing Free Form and Guided Groups on Execution Times
Table 11.24: Average Number of Actions Executed
Table 11.25: T-Tests Comparing Free Form to Guided Groups on Number of Actions
Table 11.26: Average Execution Times
Table 11.27: T-Tests Comparing Primed vs. Non-Primed Groups on Execution Time
Table 11.28: Average Number of Actions Executed
Table 11.29: T-Tests Comparing Primed vs. Non-Primed Groups on Number of Actions
Table 12.1: Results of Experiment I
Table 12.2: Results of Experiment II
Table C.1: Multivariate Tests of the Experiment I Data
Table C.2: Univariate Tests of the Experiment I Data

List of Figures

Figure 2.1: A Pressure-Regulator, adapted from de Kleer and Brown (1985)
Figure 2.2: A Simple Device, from Davis and Hamscher (1988)
Figure 3.1: Graphic Inference Network, from Rook and Donnell (1993)
Figure 3.2: A Comparison Between Graphic-based and Text-based Explanation Formats, from Rook and Donnell (1993)
Figure 3.3: System Restrictiveness as a Subset of All Possible Processes, adapted from Silver (1991)
Figure 3.4: A Framework of the Explanatory Power of a User Interface
Figure 4.1: A General Framework for Learning, adapted from Mayer (1985)
Figure 4.2: Assimilation Encoding Theory, adapted from Mayer (1979)
Figure 4.3: Assimilative Learning, as it Relates to Explanatory Power
Figure 4.4: Supply and Demand Curves from Economics, adapted from Larkin and Simon (1987)
Figure 4.5: Research Framework
Figure 5.1: Directed Hypergraph of Six Interrelated Rules, adapted from Ramaswamy, Sarkar, and Chen (1998)
Figure 5.2: The Hierarchic Model of the Transportation Mode Selection Problem
Figure 5.3a: Breadth Hierarchy: First Screen
Figure 5.3b: Breadth Hierarchy: Second Screen
Figure 6.1: A Network Configuration Model of a Logistics Environment Using LogNet
Figure 6.2: Attributes of Each Object Type of a Logistics Network Model
Figure 6.3: Free Form vs. Guided Panels
Figure 6.4: Initial Screen of Guided LogNet
Figure 6.5a: Guided LogNet Flowchart
Figure 6.5b: Guided LogNet Flowchart, continued
Figure 6.5c: Guided LogNet Flowchart, continued
Figure 6.6a: Deep Explanation Example: A Description of the Decentralization 1 Algorithm
Figure 6.6b: Deep Explanation Example: Limitations of the Decentralization 1 Algorithm
Figure 6.7: A Consolidation Problem
Figure 6.8: Network Benchmarks for the Consolidation Problem
Figure 6.9a: Consolidation 1 Recommendation
Figure 6.9b: Consolidation 2 Recommendation
Figure 6.10: Solution to the Consolidation Problem
Figure 6.11: A Decentralization Problem
Figure 6.12: Network Benchmarks for the Decentralization Problem
Figure 6.13: Decentralization 1 Recommendation
Figure 6.14: Solution to the Decentralization Problem
Figure 7.1: Between-Subjects Treatment Groups
Figure 7.2a: Model-Based Interface
Figure 7.2b: Equivalent Flat Interface
Figure 7.3: A Classification of Problem Types
Figure 7.4: Strategy for Solving Problem 3
Figure 7.5: Strategy for Solving Problem 4
Figure 8.1: Average Scores of the Four Treatment Groups for Problem 1
Figure 8.2: Histogram of Problem 1 Scores
Figure 8.3: Average Scores of the Four Treatment Groups for Problem 2
Figure 8.4: Histogram of Problem 2 Scores
Figure 8.5: Average Scores of the Four Treatment Groups for Problem 3
Figure 8.6: Histogram of Problem 3 Scores
Figure 8.7: Average Scores of the Four Treatment Groups for Problem 4
Figure 8.8: Histogram of Problem 4 Scores
Figure 8.9: Number of Rules Found for Each Treatment Group
Figure 8.10: Bar Chart of Average Number of Deep Explanation Requests
Figure 9.1: Relationships of Variables for Problem 2
Figure 9.2: Strategy as an Intervening Variable
Figure 10.1: Unstructured Task Example
Figure 10.2: Solution that LogNet does not find
Figure 10.3: Sample Question in the LogNet Test
Figure 10.4: Research Model of Experiment II
Figure 11.1: Solution to the first structured task
Figure 11.2: Solution to the first unstructured task
Figure 11.3: Interaction Effect Between Structured Task Score and System Restrictiveness
Figure 11.4: Interaction Effect Between Unstructured Task Score and System Restrictiveness
Figure 12.1: The Four Problems of Experiment I
Figure 12.2: The Task-Technology Fit Perspective

Acknowledgements

This thesis required the support of a number of individuals, whom I would like to recognize. I am most grateful to my advisor, Professor Izak Benbasat, for supporting me through all these years. Admittedly, I faced many moments during the course of putting this dissertation together when I had few deliverables to show, when things were not working out as expected, when obstacles seemed insurmountable; I was fortunate, therefore, to have the continued support from Izak from beginning to end. He guided me through tricky research design issues and his insights and advice proved invaluable. He will always serve as a model of professionalism for me in my academic and research career.

I was also fortunate to draw on the expertise of my supervisory committee: Professor Kelly Booth from Computer Science, and Professor Garland Chow, from Transportation and Logistics at Commerce. I enjoyed working with them very much. They provided insightful and helpful comments throughout. I am deeply grateful for the time they expended, and the patience they showed, in seeing this thesis through to its completion. I was privileged to be given the opportunity to work with them.

A number of other individuals were instrumental in moving this thesis forward. Gordon Weir of Gensym Canada provided me with the software used to develop the systems. His help was crucial in getting me started at a time when I was uncertain what was even possible as far as developing intelligent interfaces was concerned. I also want to thank the hard-working graduate students who provided research assistance: Ankur Marwah and Jennifer Pan conducted many of the experiments; Sherrie Xiao assisted with some of the programming tasks; Dongmin Kim, Max Sequeira, and Wilson Lee assisted with the data analysis. I would have accomplished far less had it not been for their conscientious efforts.

Chapter 1 Introduction

In recent years, many critics of intelligent systems, and expert systems in particular, have argued for more flexibility in the user interface if further progress in the field is to take place (see e.g., Hayes-Roth and Jacobstein, 1994; and Franklin, 1997). While there have been several notable and successful commercial applications, practitioners and researchers alike have recognized that traditional expert systems interfaces are largely rigid and intolerant of an end-user's need to understand the assumptions and reasoning underlying the rule-base.
The end-user, in effect, is forced to view the rule-base as a "black box": while some limited forms of explanation may be available, there is no easy and intuitive way to understand why and how a given set of inputs produced a final recommendation. Especially problematic for the end-user is gaining a deeper understanding of the meta-level, problem-solving strategies employed by the expert system to make decisions.

The classic example is MYCIN (Buchanan and Shortliffe, 1985), an expert system that was developed at Stanford University in the 1970's to aid physicians in diagnosing bacterial infections. In a typical consultation with MYCIN, an end-user (typically a physician) would answer questions about patient data such as symptoms, test results, and patient characteristics. Based on these responses, MYCIN would reach a diagnosis on what the infection might be. However, from the end-user's perspective, the system seemed to ask a series of unrelated questions and there was no sense of coherence to what was being asked. In short, an end-user was unable to form an appropriate mental model of how the system was working.

By contrast, in the newer generation of expert systems, the end-user remains very much in the decision-making loop throughout. Acting more in an advisory capacity, the system aids the user in reaching a decision, and the user may request more help and information from the system at any point during the consultation. As a result, the user may iteratively refine his or her solution, or may question the recommendation reached by the expert system. Such systems have been called Expert Support Systems (ESS), a cross between the more rigid, traditional rule-based expert systems and the more interactive and flexible decision support systems (see e.g., Luconi, Malone, and Scott Morton, 1986; and Turban and Watkins, 1986).

This dissertation investigates some of the techniques that may be employed to increase the transparency and flexibility of intelligent systems so that they are more in line with the spirit of expert support systems serving in an advisory capacity to human decision-making. Transparency refers to an end-user's ability to see the underlying mechanism of the intelligent system so that it is not merely a black box; and flexibility refers to the ability of the system to adapt to a wide variety of end-user interactions, so that it is not merely a rigid dialogue. These two system characteristics are considered in more detail in Chapter 3, which provides a framework for understanding the explanatory power of a user interface.

This thesis considers intelligent, model-based systems and the system design choices that will result in enhanced explanatory power (i.e., more transparency and flexibility in the system). Intelligent systems provide advice to the end-user to assist in decision-making and problem-solving (see e.g., Carroll and McKendree, 1987), and model-based interfaces make use of a graphical representation of a model, which an end-user is free to interact with and explore to better understand the system's advice. Throughout this thesis, the term "graphical" is used to signify a graphical representation of a model. Having a model-based interface is believed to facilitate the development of an appropriate mental model of the system, so that a user can become a more proficient problem-solver when interacting with an intelligent system.
It is useful at this point to make the distinction between intelligent systems and intelligent online tutorials (or intelligent online help systems). This thesis is not about intelligent online tutorials that monitor and guide an end-user in learning about a system's features; it focuses instead on the actual interaction between the system and the end-user while problem-solving. Two intelligent, model-based systems have been developed to study the end-user interaction while engaged in problem-solving tasks. While there may be pedagogical effects to using such systems, this is not the primary focus of this dissertation.

1.1 Research Objectives

This research study addresses two primary objectives. The first is to define and conceptualize the "explanatory power" of intelligent systems. The second is to offer empirically-based guidelines on how specific enhancements to explanatory power will improve system effectiveness.

1.1.1 Defining Explanatory Power

One of the primary objectives of this dissertation is to define the construct "explanatory power". This thesis defines and identifies three components of explanatory power: 1) content-based explanations, 2) characteristics of the user interface, and 3) advisory strategies. The three components interact with one another to create intelligent interfaces endowed with greater (or lesser) degrees of explanatory power. The three components are summarized here.

First, this research looks at content-based enhancements. It has long been recognized that computer-generated rule traces, as provided by traditional rule-based expert systems, have been inadequate in providing a user with an explanation that is instructive, inclusive, flexible, and easy to understand. Research in this area has suggested that other types of explanation facilities are needed for more effective and flexible problem-solving. One type of explanation would provide a user with an understanding of the overall problem-solving strategy that an expert system employs to reach conclusions (e.g., Clancey, 1983). Another type of explanation would provide deeper explanations based on underlying first principles as well as deep causal models of the domain (e.g., Chandrasekaran et al., 1988). These enhanced explanations may give the end-user a clearer view of how a rule-base is reaching a recommendation, and hence, may enable him or her to be a more participative user of an intelligent system. This thesis considers both these methods of enhancing the content-based dimension of explanatory power, which has already been explored, to some extent, by other MIS researchers (e.g., Dhaliwal (1993), Mao (1995), Nah (1997), Ye (1990)).

Second, this study will explore the effectiveness of graphical, model-based interfaces, and the interface design choices that can enhance explanatory power. Many researchers have suggested that a highly visual, object-oriented modeling environment would allow for more effective and interactive user interfaces (see e.g., Angehrn and Luthi, 1990; Hollan, Hutchins and Weitzman, 1984; and Stelzner and Williams, 1988). This thesis studies intelligent systems in which the end-user is able to interact with and inspect components of a model-based organizational structure (hierarchic or network-based) through the use of a direct manipulation, graphical user interface. Such graphical models may render an intelligent system more transparent and open to inspection for its end-users.
These graphical models may represent either domain knowledge (structural knowledge about the domain) or strategic knowledge (a model depicting the underlying control structure of how an expert system rule-base is reaching conclusions). Because end-users are able to inspect aspects of the model (for example, by "clicking" on a model component to inspect its status), they are expected to better understand why an expert system reached a given conclusion or recommendation. Hence, they are expected to have more confidence in the system's conclusions as well as understand when an intelligent system has reached a faulty conclusion—that is, has made a faulty decision outside its delimited range of expertise.

Complementary to these graphical models is the use of model-based reasoning techniques to enhance these interfaces. Model-based reasoning attempts to solve problems by analyzing the structure and function of a system, for example, how components of a system are connected and interrelated to one another, as described by a model (Davis and Hamscher, 1988; Kunz, 1987). These model-based reasoning techniques can contribute to the explanatory power of an intelligent system in several ways: they allow for more coherent explanations as opposed to, say, individual rules in traditional expert systems (as given by a rule-trace explanation); they tie in more closely to the graphical user interface, since the system designer is forced to structure the knowledge base around the graphical model, making the system more transparent and "visualizable"; and they allow for the specification of powerful rules, such as spatial reasoning on a network model, or the specification of generic rules that hold for large classes of problems.

Third, this research looks at the advisory strategies of intelligent systems. There are many system design choices that fall under this rubric. In general, this category of enhancements entails understanding the ways that the advice is actually delivered to the end-user. This dissertation will explore, in particular, a construct called system restrictiveness, one of the ways in which an advisory strategy can be manipulated and varied. Silver (1991) defines system restrictiveness as "the degree to which, and the manner in which, a DSS [decision support system] limits its users' decision-making processes to a subset of all possible processes" (p. 115). Highly restrictive systems may limit an end-user's interactions by prescribing a specified sequence of actions, or by limiting the set of operations that may be performed on a system. Less restrictive systems, on the other hand, are more open-ended, and may allow for a more free-form interaction. For example, the user may be permitted to use the operators (or procedures) of the system in any order, or a larger set of operators is made available. A system having lower restrictiveness, it is believed, will be one endowed with a greater degree of explanatory power.

1.1.2 How Explanatory Power Can Influence System Effectiveness

A second primary objective of this dissertation is to look at the behavioral aspects of information systems use. To this end, this thesis is an empirical investigation of how effective and tractable these enhanced interfaces actually are for human end-users. It accomplishes this goal by utilizing the experimental method to understand and determine how interface design choices will either enhance or detract from an end-user's ability to effectively use these systems.
As Carroll and McKendree (1987) remark, far too little behavioral work has been conducted on design research in advice-giving systems. Another significant contribution of this research study, then, is that it generates empirically-based design guidelines on how to create more effective and more usable intelligent interfaces that support the needs of human end-users.

A central proposition of this dissertation is that interfaces having greater explanatory power will enable end-users to more readily acquire a good conceptual model of how the system works (i.e., the user's mental model). As a result, end-users of these systems will be more effective problem-solvers, and will be able to use these systems in more flexible and intelligent ways.

To test this proposition, two separate experiments were conducted to explore explanatory power by manipulating its three components: content, interface characteristics, and advisory strategies. The first experiment investigated how both interface-based enhancements and content-based enhancements contributed to explanatory power. This experiment explored whether providing end-users with graphical-based hierarchies representing the structure of expert system rule-bases (interface-based support), as well as with deep explanations, or underlying domain principles explaining system actions (content-based support), actually improved problem-solving ability. The second experiment was an investigation that demonstrated how manipulations of advisory strategy affected explanatory power. For this experiment, two versions of an intelligent system were created: a high-restrictive system, in which the system prescribed for the end-user the order and manner in which the procedures of an intelligent system were to be used; and a low-restrictive system, in which the end-user was free to use an intelligent system's procedures in any sequence and in any manner he/she chose.

Multiple methods of measurement were employed to understand the effects of different treatments in the study. Problem-solving performance, as assessed by a series of problem-solving tasks, and execution time were the main dependent variables of interest. Questionnaires assessing user confidence and user satisfaction, and essay questions to gain a deeper understanding of user preferences, were administered to all subjects. In addition, interviews were conducted with selected subjects. Finally, to gain a richer understanding of the problem-solving process, all of the subjects' actions were captured in a computer log. Problem-solving strategies were reconstructed and analyzed based on the data obtained from the computer logging.

1.2 Background

The background discussion begins by describing the current state of expert systems technology, both its failures and its successes. This section also looks to its future: an important trend under way in the field since the 1980s is the movement away from rigid expert systems dialogues towards expert support systems that are meant not to replace human decision makers, but to aid them in problem-solving. Such expert support systems accomplish this by providing the human end-user with more flexibility in the user interface. A definition of expert support systems is given in this section, which attempts to make more definite and concrete the notion of a flexible expert systems user interface.
1.2.1 Expert Systems: Current Trends and the State of the Field

Researchers and practitioners in the field have long recognized that expert systems are very limited in terms of the type of problems they are capable of handling. Consider, for example, what Neches, Swartout, and Moore (1984) pointed out as major weaknesses of expert systems, over a decade ago:

• They have limited (or no) explanatory capabilities.
• They are difficult to modify.
• They cannot be easily adapted to new domains, even to closely-related domains.

Although some progress has been made in the field, what Neches et al. stated as major weaknesses of expert systems technology in 1984 still applies today. In short, expert systems remain narrowly focused and extremely brittle: "AI systems typically confine themselves to a narrow domain; for example, chess-playing or diagnosing an illness. They tend to be brittle, and thus break easily near the edges of their domain, and to be utterly ignorant outside it" (Franklin, 1997, p. 11).

There also appears to be a widely held perception that much of the well-publicized success of expert systems technology has been more hype than actual gains in workplace performance, and that the great potential of expert systems has just not panned out as anticipated. In line with this perception, Gill (1995) observes that several AI vendors have failed, and major companies have become disillusioned with the technology, many reducing or even eliminating their commitment to expert systems altogether. Gill investigated how the first wave of commercial expert systems, built during the early 1980s, fared over time, and found that most had fallen into disuse during a five-year period from 1987-1992; only about a third continued to thrive. He identified several significant factors for their failure: lack of system acceptance by end-users, problems in maintaining the systems, shifts in organizational priorities, and inability to retain developers.

Despite the widespread recognition of the technology's limitations and weaknesses, there have been many notable successes of expert systems use in commercial applications. Before the 1980's, expert systems were largely confined to the research laboratory, with landmark systems such as MYCIN and XCON (Barker and O'Connor, 1989) demonstrating the commercial potential of expert systems technology. The rate of application to the larger commercial world was dramatic in the 1980's: during this time period, Durkin (1996) estimates that over two-thirds of the Fortune 1000 companies applied expert systems technology to daily business applications. This was due, in large part, to the introduction of easy-to-use expert system shells, which made developing complex rule-bases less cumbersome. Moreover, companies took the time to learn how to embed the technology in current business processes. Eom (1996) surveyed publications on operational expert systems from 1980-1993, and found that many expert systems have had a profound impact on organizations, in some cases shrinking the time for tasks from days to hours, minutes, or seconds. In addition, he found that there were many non-quantifiable benefits such as improved customer satisfaction, improved quality of products and service, and more consistent decision making.

A recent survey of expert system shells turned up over 60 different products (see Commercial Expert System Shells, 1996).
These products ranged from very inexpensive shells running on PCs and costing less than $100, to very powerful, industrial-strength shells running on powerful workstations and mainframes and costing several thousands of dollars. There are several noteworthy trends regarding these products, which can serve to illustrate the direction the industry has been moving in:

• The shells incorporate several newer AI techniques, other than rule-based reasoning. For example, case-based reasoning (e.g., CBR Express, CPR, The Easy Reasoner), fuzzy logic (e.g., FuzzyCLIPS), neural networks, genetic algorithms, model-based reasoning (e.g., Gensym's G2), frame-based reasoning (e.g., FLEX), and other techniques.

• The newer and more powerful shells include support for the development of powerful graphical user interfaces. RTWorks enables graphical objects that can be tied to variables that dynamically control attributes such as color, scale, rotation, motion, animation, etc. Gensym's G2 enables the development of graphical object models to capture structural information about the application domain (e.g., a hydraulic system, a business process, a nuclear power plant).

• Many of the expert system shells operate not necessarily as stand-alone products, but integrate with other development environments (e.g., with Rete++, the programmer can develop object hierarchies and integrate these into C++ programs; likewise, RAL was designed to permit seamless integration of rules and objects into C programs; several shells enable access to external databases, programmable logic controllers, and other external systems).

• Many of these shells are very application-specific, rather than general-purpose shells. A few examples: COMDALE/C is a shell used for industrial process monitoring and control; Techmate is designed specifically for capturing the expertise of maintenance engineers and technicians; Angoss Knowledge Seeker is used for data mining.

In summary, expert systems technology, as evidenced by the shells on the market today, is developed, more and more frequently, as embedded technology within larger applications, rather than as standalone expert systems. This observation is in line with many recent survey articles on the current state of expert systems technology (Durkin, 1996; Liebowitz, 1997; Hayes-Roth and Jacobstein, 1994). In part, this means that these shells must include capabilities other than rule-based reasoning. Indeed, as the above survey shows, many of these shells come equipped with powerful graphical modeling environments, provide support for integration into other environments, and utilize a variety of AI techniques other than just rule-based reasoning. Some shell vendors have adopted a different strategy to respond to this new marketplace—they have become "niche" providers, serving only a particular class of applications.

To reflect this new emphasis on embedded intelligence, this dissertation studies advice-giving, intelligent systems, rather than standalone expert systems. The enhancements to explanatory power described in this research reflect techniques and interface design choices that may be embedded within larger systems to improve their effectiveness. These enhancements are not limited to expert system technologies, but reflect design techniques that may be applied to many types of intelligent systems.
1.2.2 Defining Expert Support Systems

Successful implementations of expert systems applications are found, for the most part, in very narrow and well-defined domains. For example, Durkin (1996) points out that the predominant role of expert systems has been related to diagnosis applications. One reason for their predominance is their relative ease of development. As Durkin notes, most diagnostic problems have a finite list of possible solutions and a limited amount of information required to solve a problem. However, unlike diagnosis problems, tasks such as design and planning are more difficult to develop since their steps vary greatly, and there is a great deal of "soft" or subjective knowledge required to solve such problems.

A newer generation of expert systems, called expert support systems (ESS), attempts to provide more flexibility and interactivity for the end-user to solve a broader range of problems. These systems are meant not to replace the human end-user, but to support the end-user in problem-solving tasks. [1] Because of their increased flexibility, these systems are believed to be more robust for situations in which the end-user may need to take more control in the task domain. In addition, as Hayes-Roth and Jacobstein (1994) point out, conventional systems typically have limited flexibility in dealing with "incomplete, uncertain, errorful, or disorderly data" (p. 30). The end-user is required to enter input in a particular order, and the input is expected to be complete and consistent. Frequently, what is needed instead is a system that is able to deal more gracefully with such data. (Or, at the very least, the user should be able to backtrack, and modify and test out the impact of different inputs.)

[1] This does not suggest that traditional rule-based expert systems will become obsolete. For certain well-structured situations, traditional rule-based expert systems technology is quite adequate. In fact, there may be some problem-solving situations in which you would not want to provide the end-user with too much flexibility.

What qualities might such flexible systems possess? Hayes-Roth and Jacobstein (1994) suggest that such systems might generate candidate solutions for difficult design, synthesis, or analysis problems. Typically, these systems might suggest (rather than dictate) such candidate plans or designs for human decision makers to approve. In a similar vein, Angehrn and Luthi (1990) suggest that "a knowledge-based component interacts with a user by exchanging 'examples', and stimulates them in generating and exploring alternatives during a decision-making process" (p. 17). Angehrn and Luthi also suggest that such a system, in order to enhance its interactivity, be a highly visual (graphical) environment that enables an end-user to incrementally describe problems—that is, the end-user is able to build, complete, and test different models and scenarios.

At this point, a more precise definition of expert support systems is useful. What is meant by expert support systems (ESS), and how are they related to decision support systems (DSS) and expert systems (ES)? What are the distinguishing characteristics of ESS? What makes each of these types of systems flexible (or inflexible)?

First, a comparison to DSS and ES will help to illuminate the nature of ESS. Turban and Watkins (1986) define a DSS as "an interactive, computer-based information system that utilizes decision rules and models, coupled with a comprehensive database" (p. 122).
By comparison, an expert system is "a computer program that includes a knowledge base containing an expert's knowledge for a particular problem domain" (p. 122). These definitions are not really comparable, so Turban and Watkins also provide some comparisons between the two systems based on different attributes. See Table 1.1 for these differences.

From this attribute-by-attribute comparison, a few generalizations about the differences between DSS and ES can be made. DSS, on the one hand, are more interactive systems that handle less structured and broader problem-solving domains. These systems are meant to support the human decision maker, not replace him or her. ES, on the other hand, are more rigid systems in which the system makes the recommendation, as determined by the consultative session with the end-user. Because of this, ES problem domains tend to be more structured and narrower than DSS problem domains. This explains why they are so brittle. ESS are a hybrid solution to problem domains that require both interactivity and flexibility as well as more structured expertise. They inherit characteristics from both DSS and ES:

Objectives: Assist the decision maker and replicate a human advisor. Often the ESS recommendations are made as suggestions, rather than as hard-and-fast givens.
Who makes the recommendations?: Both the human end-user and the system (cooperative decision-making).
Major orientation: Both decision making and the transfer of expertise.
Major query direction: Bi-directional—human to machine, and machine to human.
Characteristics of the problem domain: Complex and broad, although portions of the system must be well-structured to offer the expert advice.
Reasoning capabilities: Yes, and sometimes more powerful than traditional rule-based systems (e.g., model-based reasoning, case-based reasoning, frame-based reasoning).
Explanation capabilities: Yes.

Attributes                             | DSS                          | ES
Objectives                             | Assist the decision maker    | Replicate a human advisor
Who makes the recommendations?         | The human and/or the system  | The system
Major orientation                      | Decision making              | Transfer of expertise
Major query direction                  | Human queries the machine    | Machine queries the human
Characteristics of the problem domain  | Complex, broad               | Narrow, well-structured domain
Reasoning capability                   | None                         | Limited
Explanation capability                 | Limited                      | Limited

Table 1.1: Differences Between DSS and ES (adapted from Turban and Watkins, 1986)

Luconi, Malone, and Scott Morton (1986, p. 4) offer this definition of ESS: "expert support systems help people (the emphasis is on people) solve a much wider class of problems". Their view focuses on the cooperative nature of ESS—these systems are meant to work so that the human user works together with the expert system. The expert system provides some of the knowledge and reasoning steps, while the human end-user actively directs the problem-solving process and fills in the gaps left by the expert system knowledge base.

Luconi, Malone, and Scott Morton further provide a framework for comparing traditional data processing systems, DSS, ES, and ESS. They identify four components of problem solving to make their comparisons:

Data: the dimensions and values that are relevant to the problem.
Procedures: the sequence of steps (also called operators) used to solve a problem.
Goals and constraints: the desired results (goals) and what can and cannot be done (constraints).
Strategies: the way that a decision maker decides which procedures to apply to achieve his or her goals.

In a pure ES, the computer system performs all four components by itself. By contrast, in an ESS, both the computer system and the human end-user perform the four components. In such a way, the expert support system can be viewed as a system designed to support cooperation between human and computer. DSS are like expert support systems, except that strategies are performed solely by the human end-user—in this regard, they are less cooperative than expert support systems.

1.3 The Intelligent Systems Under Investigation: Expert Strategy and LogNet

To demonstrate and assess the three components of explanatory power, two intelligent, model-based systems were developed. The first system, which is called Expert Strategy, is a graphic-based system that organizes an expert system's rule-base into a hierarchy of object nodes. This system was specifically developed by the author to study the research questions of this dissertation. Chapter 5 describes in more detail the mechanism of Expert Strategy, as well as the application used in this research study. In this system, each object node represents a distinct and definable conceptual entity of the rule-base. Typically, the top-most node, or root node of the hierarchy, represents the recommendation itself, and the second-level nodes represent the variables needed to make the recommendation. These second-level variables may, in turn, depend on a third level of variables (and so on for the subsequent levels, until we are able to create a multi-leveled representation of the underlying rule-base). These hierarchies are transparent because the end-user can inspect any node (by point-clicking on it) to find out why certain decisions were made or were not made, or simply to obtain an explanation of a node itself.

Furthermore, the hierarchic object models are not merely static diagrams, but rather dynamic models of the underlying rule-base that change according to the way that the end-user interacts with them. By modifying a lower-level node on the hierarchy, the end-user can watch Expert Strategy automatically modify higher-level nodes through a forward-chaining mechanism. Hence, the end-user can visually follow a "chain" of reasoning as the rule-base attempts to make a recommendation. In addition, the end-user can modify any object node in the hierarchy to test assumptions of the rule-base and to watch how the modification affects the other nodes of the hierarchy, as well as the final recommendation itself.
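To make the idea of a dynamic, inspectable rule-base hierarchy concrete, the sketch below shows one minimal way such a structure could be represented. It is an illustration only, not the actual Expert Strategy implementation (which was built in Gensym's G2); the class, node names, rules, and units are hypothetical.

```python
# Illustrative sketch of a hierarchic rule-base model (not the actual G2 implementation).
# Each node is a conceptual entity of the rule-base; leaf nodes hold user-supplied
# values, and higher-level nodes derive their values from their children via a rule.

class Node:
    def __init__(self, name, children=None, rule=None, value=None):
        self.name = name
        self.children = children or []   # lower-level nodes this node depends on
        self.rule = rule                 # function mapping child values -> this node's value
        self.value = value
        self.parent = None
        for child in self.children:
            child.parent = self

    def set_value(self, value):
        """Modify a node; forward-chain the change up through its ancestors."""
        self.value = value
        node = self.parent
        while node is not None:          # re-derive each higher-level node in turn
            node.value = node.rule({c.name: c.value for c in node.children})
            node = node.parent

    def explain(self):
        """Inspect a node: report its value and the child values it was derived from."""
        if not self.children:
            return f"{self.name} = {self.value} (supplied by the user)"
        inputs = ", ".join(f"{c.name}={c.value}" for c in self.children)
        return f"{self.name} = {self.value}, derived from: {inputs}"


# A toy transportation-mode fragment: the root recommendation depends on two
# intermediate variables, each of which depends on user-supplied leaf values.
weight = Node("shipment_weight", value=400)          # hypothetical units (pounds)
urgency = Node("delivery_urgency", value="high")
cost_need = Node("cost_sensitivity", value="low")

speed_req = Node("speed_requirement", [weight, urgency],
                 rule=lambda v: "fast" if v["delivery_urgency"] == "high"
                                       or v["shipment_weight"] < 100 else "normal")
budget = Node("budget_class", [cost_need],
              rule=lambda v: "premium" if v["cost_sensitivity"] == "low" else "economy")
recommendation = Node("transport_mode", [speed_req, budget],
                      rule=lambda v: "air" if v["speed_requirement"] == "fast"
                                           and v["budget_class"] == "premium" else "truck")

# Initialize the derived nodes by asserting the leaf values, then run a what-if change.
cost_need.set_value("low")
urgency.set_value("high")
print(recommendation.explain())   # transport_mode = air, derived from its child nodes
urgency.set_value("low")          # what-if analysis: relax the urgency
print(recommendation.explain())   # transport_mode = truck, derived from its child nodes
```

The design point this sketch tries to capture is that every node serves double duty: changing any node re-derives its ancestors (the forward-chaining behavior described above), and inspecting any node reveals the child values from which it was derived.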
The second system, LogNet, is also an expert support system that employs model-based reasoning techniques. This system was also developed to study the research questions of this dissertation. Chapter 6 discusses in more detail how LogNet works. Unlike Expert Strategy, LogNet does not model the rule-base; rather, it models the actual domain of discourse—the objects of the domain and how they are interconnected to one another. LogNet enables end-users to modify and inspect a business logistics network model through a graphical, direct manipulation interface. A logistics network can be envisioned as a system of nodes and their interconnections used for the distribution and movement of some goods. With LogNet, end-users can build logistics networks by adding nodes (i.e., plants, warehouses, and customer zones), adding connections (i.e., transportation links) between these nodes, or deleting nodes and connections. In doing so, this system captures "structural" domain knowledge of a logistics network.

Different network configurations can be incrementally created and tested by the end-user, and such performance measures as costs (inventory, transportation, and warehousing) and customer service levels (e.g., average distance between customer and assigned warehouse) can be calculated by the system. LogNet attempts to design the most cost-effective network satisfying a certain customer service level. It addresses the facility location problem: How many warehouses are needed? Where should these warehouses be located, and to which customer market(s) should they be assigned? Also, which plant(s) should supply each warehouse? LogNet is an expert support system because it offers design recommendations to guide an end-user towards a better network design. These design recommendations are meant to serve as suggestions, rather than as hard-and-fast directives.
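The sketch below illustrates, under simplifying assumptions, the kind of domain object model LogNet works over: facilities and transportation links as objects, with benchmarks such as average customer-to-warehouse distance and total cost computed directly from the model's structure. The class names, cost formula, and parameters are hypothetical and far simpler than the measures described in Chapter 6; this is not the actual LogNet implementation.

```python
# Illustrative sketch of a logistics network object model and two of its benchmarks
# (hypothetical names and cost formulas; not the actual LogNet implementation).

from dataclasses import dataclass, field
from math import dist

@dataclass
class Facility:
    name: str
    kind: str           # "plant", "warehouse", or "customer_zone"
    location: tuple     # (x, y) map coordinates
    demand: float = 0.0 # units per period (customer zones only)

@dataclass
class Network:
    facilities: dict = field(default_factory=dict)
    links: list = field(default_factory=list)   # (from_name, to_name) transportation links

    def add_facility(self, facility):
        self.facilities[facility.name] = facility

    def connect(self, source, target):
        self.links.append((source, target))

    def assigned_warehouse(self, zone_name):
        """Warehouse serving a customer zone, found via its incoming link."""
        for source, target in self.links:
            if target == zone_name:
                return self.facilities[source]
        return None

    def service_level(self):
        """Average distance between each customer zone and its assigned warehouse."""
        zones = [f for f in self.facilities.values() if f.kind == "customer_zone"]
        distances = [dist(z.location, self.assigned_warehouse(z.name).location) for z in zones]
        return sum(distances) / len(distances)

    def total_cost(self, fixed_warehouse_cost=100_000, cost_per_unit_mile=0.05):
        """Warehousing cost plus a simple distance-based outbound transportation cost."""
        warehouses = [f for f in self.facilities.values() if f.kind == "warehouse"]
        warehousing = fixed_warehouse_cost * len(warehouses)
        transport = sum(
            z.demand * cost_per_unit_mile * dist(z.location, self.assigned_warehouse(z.name).location)
            for z in self.facilities.values() if z.kind == "customer_zone")
        return warehousing + transport

# Build a small network, as an end-user of a model-based interface might do interactively.
net = Network()
net.add_facility(Facility("plant_A", "plant", (0, 0)))
net.add_facility(Facility("wh_East", "warehouse", (50, 10)))
net.add_facility(Facility("zone_1", "customer_zone", (60, 20), demand=8_000))
net.add_facility(Facility("zone_2", "customer_zone", (40, 5), demand=5_000))
net.connect("plant_A", "wh_East")
net.connect("wh_East", "zone_1")
net.connect("wh_East", "zone_2")

print(f"service level: {net.service_level():.1f} miles, cost: {net.total_cost():,.0f}")
```

An end-user (or a model-based heuristic acting in an advisory role) could then add or remove a warehouse, reassign customer zones, and recompute the benchmarks to compare alternative configurations.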
From this analysis, a framework for understanding these behaviors is described, which in turn provides a theoretically-based conceptual foundation for understanding user behaviors of information systems.

Finally, this dissertation also makes a number of novel technical contributions. It utilizes model-based reasoning techniques to create dynamic hierarchic models of an expert system's rule-base, and it uses model-based reasoning techniques to solve problems in business logistics network design. There are no known previous studies that have employed model-based reasoning techniques in such a manner.

1.6 Organization of Thesis

This dissertation is organized into twelve chapters. Chapter 2 reviews the literature in model-based reasoning and especially focuses on how this AI (artificial intelligence) technique can enhance the flexibility of a traditional rule-based expert system. Chapter 3 develops a framework for understanding the explanatory power of a user interface, reviewing the literature on visual interactive interfaces, explanations, and system restrictiveness. Chapter 4 covers the theoretical foundations used in this research, reviewing the literature in cognitive psychology and educational psychology, which serve as an underlying foundation for the derivation of the hypotheses in this study. Chapter 5 describes an intelligent system called Expert Strategy, which provides an end-user with a graphical visualization of an expert system's rule base. Chapter 6 describes an intelligent system called LogNet, which gives the end-user advice on how to design cost-effective business logistics networks. Chapter 7, Chapter 8, and Chapter 9 discuss the first experimental study (Experiment I), which is an investigation of the explanatory power of hierarchic object models of Expert Strategy. Chapter 7 presents the research design and research methodology of Experiment I, while Chapters 8 and 9 present the quantitative analysis of the data of Experiment I. Chapter 10 and Chapter 11 discuss the second experimental study (Experiment II), which is an investigation of the relationship between system restrictiveness and explanatory power when using LogNet. Chapter 10 presents the research design and research methodology of Experiment II, and Chapter 11 presents the quantitative analysis of the data of Experiment II. Chapter 12 concludes the dissertation with a summary discussion and conclusions. It addresses implications for designers of intelligent systems as well as some limitations of this research study.

Chapter 2
Model-Based Reasoning

Model-based reasoning is one class of techniques used to extend and enhance the capabilities of expert systems. This chapter will focus on how model-based reasoning is especially able to enhance the flexibility of a traditional rule-based expert system. There are other techniques as well, such as case-based reasoning and frame-based reasoning, which can also be used to create more flexible expert systems. This research does not consider these additional techniques.

Two intelligent systems, Expert Strategy and LogNet, which employ the model-based reasoning techniques discussed in this chapter, were designed to study the research questions pertinent to this dissertation. This chapter will discuss some of the motivations underlying the need for model-based reasoning systems, and define some of the terminology used in this area. It will also present some examples of how model-based reasoning can be used.
Finally, a discussion of how model-based reasoning techniques can contribute to the flexibility of an intelligent interface, as well as a discussion on some of its limitations concludes this chapter.  The discussion of model-based reasoning in this chapter may easily be skipped, without loss of continuity in understanding the rest of this dissertation. This chapter was included because model-based reasoning serves as the inspiration for the two intelligent systems, Expert Strategy and LogNet, that were created to illustrate enhancements to explanatory power. Moreover, model-based reasoning represents one way to build more flexibility into the user interface—that is, to transform rigid advice-giving systems into more  24  flexible and intelligent systems. Chapter 5 and Chapter 6 discuss in further detail how the model-based reasoning techniques are employed to create intelligent systems: in one system (Expert Strategy), to create dynamic hierarchic structures that represent the structure of an expert system rule-base; and in the other system (LogNet), to create a system capable of making design recommendation on whether a logistics network should become more decentralized or more consolidated.  2.1 Background and Motivations Object models are an important form of knowledge representation because expertise often lies in one's ability to reason about objects and how they are interconnected—be it causally, relationally, spatially, or otherwise—in a domain of discourse. To limit the design of expert systems to logic if-then rules might be to seriously ignore a powerful way of capturing knowledge: such an expert system would be far less intelligent than it may otherwise be capable of being.  Model-based reasoning,systems, in large part, came into being to address the void created by systems employing only rule-based reasoning and other limited forms of reasoning (e.g., flowcharts, fault dictionaries, decision trees, bayesian probability theory). Much of the work done in this area has focused on fault diagnosis and troubleshooting, in particular, for devices such as electronic circuits. Model-based diagnosis, as Davis and Hamscher (1988) note, starts with an observed malfunction of a device, and works backward to determine which underlying component(s) might be the cause. In this area, Davis and Hamscher (1988) discuss some of the advantages of model-based reasoning  25  systems when compared to traditional rule-based approaches. For troubleshooting tasks, rule-based systems would employ rules that associate symptoms to their underlying faults. A model-based system, by contrast, would reason not based on experiential association, but rather on the underlying structure of the device. While rule-based systems can only account for empirical associations that have been acquired to date (often in a haphazard or happenstance way), model-based systems may provide for more systematic and comprehensive coverage of a domain. This means that such model-based systems will be easier to understand and maintain, for both developers and users alike. Another important advantage of model-based systems is that they tend to be more deviceindependent. The model may be specified in such a generic way that it is supposed to work for many different device configurations. Rule-based approaches are not as likely to be so flexible, but highly device-specific, meaning that a different set of rules is required to specify the behavior of each new device.  
Research in model-based reasoning systems has not been limited to physical devices. Patil et al. (1988) explore the artificial intelligence techniques used for diagnosing diseases in medicine. Of particular relevance to model-based reasoning is their experimental program called ABEL (Patil, 1981). ABEL's knowledge base attempts to capture the underlying causal mechanisms of disease processes. ABEL views diagnosis as a process of building detailed causal models that explain a patient's illness. Such models "allow the program to reason with the details of the disease process, to recognize how one disease can alter the presentation of another, and to sort out component elements due to each disease" (Patil, 1988, p. 373). Patil discusses many types of traditional approaches, as opposed to model-based reasoning, used to diagnose diseases: the use of clinical algorithms, flowcharts, pattern recognition, probability theory, and decision analysis. Each of these approaches, as Patil notes, suffers from serious weaknesses when applied to the broad and complicated domain of medical diagnosis. For one, these approaches cannot account for new situations—for instance, the outbreak of a new or rare disease. For another, these approaches are difficult to update and maintain. For example, a flowchart can become so large as to become unmanageable with the specification of each new disease.

In a very complex environment, possibly consisting of thousands of components and many more interactions among these components, a model-based system provides a way to cope with cognitive complexity. The user of a very complex information system may have to contend with hundreds of operating procedures. As Williams, Hollan, and Stevens (1981) observe, such an individual is unlikely to memorize all these procedures, and hence, model-based reasoning may provide the user with "a means of generating the complex procedures from a relatively compact set of models" (p. 89). Embeddedness is one way to provide users with cognitively manageable models of the domain. This means that models are specified at different levels of detail (i.e., multi-level representations). One component of a top-level model, for example, may, in turn, be specified by a sub-level (also called an embedded model) composed of many other subcomponents and their interconnections. Embeddedness allows us to define hierarchic organizations of a complex system.

Model-based reasoning is known by other names, the most notable of which are reasoning from first principles, a term used to signify that reasoning is based on basic principles of causality, and deep reasoning, a term intended to distinguish model-based reasoning from more surface-oriented forms such as rule-based, associational reasoning (Davis and Hamscher, 1988). Other related terms include qualitative reasoning and commonsense reasoning, terms which have been closely associated in the artificial intelligence literature with qualitative physics, or reasoning about the physical world (Forbus, 1988).

2.2 Defining Model-Based Reasoning Systems

Kunz (1988) defines model-based reasoning in the following manner: "Model-based reasoning involves the analysis of the structure and function [or behavior] of a formal symbolic model of a system" (p. 94). This definition provides a good starting point towards defining what model-based reasoning is, and illustrating how it may be employed to create more flexible systems.

First, what is meant by "a formal symbolic model of a system"?
A formal symbolic model, as described by Kunz, is one that explicitly represents both the structure and functional behavior of a system. Kunz compares and contrasts other types of models to these. Simple diagrams are models, graphical in nature. While they represent model structure, they do not capture functional behavior (not explicitly, at least). Heuristic models describe relationships between inputs and outputs, based on the way that experts describe the behavior of systems. Rule-based expert systems typically rely on this type of knowledge. Such heuristic knowledge, as Kunz points out, emphasizes the behavioral aspects of a system, relating inputs (observations) to outputs (data interpretation), such as in the diagnosis of malfunctioning systems. These heuristic models, however, do not attempt to create an explicit representation of the structure of a system. In summary, three types of models can be distinguished based on whether they describe structure or behavior:

                      Describes Structure?   Describes Behavior?
  Symbolic models     Yes                    Yes
  Diagrams            Yes                    No
  Heuristics          No                     Yes

How do we describe formal symbolic models? A discussion of the components of model-based reasoning is useful at this point. It has already been stated that formal symbolic models require an explicit description of both structure and behavior. Structure is the way that the individual components of a system are interconnected (as given by a system's topology); behavior refers to what each of these components is supposed to do (Davis and Hamscher, 1988).

To describe object models, De Kleer and Brown (1985) distinguish three kinds of constituents used to model a system: materials, components, and conduits (or connections). Materials are what pass through a system. In a physical system this may be an actual physical element such as water, air, electricity, etc. In a manufacturing domain, this may be a product (e.g., an automobile), which moves through the different processes of a manufacturing plant. In a service-oriented system, this may be an abstract thing such as a purchase order, or a customer service request, which is moved through the different entities of an organization. Components are the constituents that can act on and change the form of the materials. Components typically have inputs and outputs. Conduits, or connections, are the simple constituents that transport materials from one component to another. Unlike components, they do no actual processing on the materials. The topological model is most typically conveyed to the user through the use of a graphical model, which depicts how the components (represented by nodes) of a system are interconnected (represented by links between the nodes). See Table 2.1 for a summary of the terminology used in model-based reasoning systems.

  Describing Object Models
    Structure                  How the components of a system are interconnected;
                               typically specified by the topological model.
    Behavior                   What the components of the system do; how they
                               interact (e.g., may be specified by a causal model).

  Constituents of Object Models
    Materials                  What passes through the system.
    Components                 Nodes; they can process and change the materials,
                               or they transform inputs into outputs.
    Conduits (or Connections)  Links between nodes; they transport materials from
                               one node to the next, but perform no actual processing.
Table 2.1: Terminology Used in Model-Based Reasoning

2.3 Examples of Model-Based Reasoning Systems

To clarify some of the concepts presented in the preceding section, this section will present some examples of how model-based reasoning techniques can be used to describe the structure and behavior of actual physical systems. Again, the emphasis in such systems is to understand and reason about the structure of the system.

Example 1: A Pressure Regulator. An example of an actual physical device will help to illustrate the concepts of model-based reasoning. This example is of a device called the pressure-regulator, and a textual description of its properly functioning behavior follows (de Kleer and Brown, 1985):

    An increase in source (A) pressure increases the pressure drop across the valve (B). Since the flow through the valve is proportional to the pressure across it, the flow through the valve also increases. This increased flow will increase the pressure at the load (C). However, this increased pressure is sensed (D) causing the diaphragm (E) to move downward against the spring pressure. The diaphragm is mechanically connected to the valve, so the downward movement of the diaphragm will tend to close the valve thereby pinching off the flow. Because the flow is now restricted the output pressure will rise much less than it otherwise would have. (p. 111)

See Figure 2.1 below for a graphical depiction of the pressure-regulator.

Figure 2.1: A Pressure-Regulator, from de Kleer and Brown (1985)

In model-based reasoning systems, behavior is determined from structure. One way to characterize behavior is to specify how individual components of a system causally interact to produce an observable behavior. Device behavior of the pressure-regulator described above is characterized by a cascading series of events in which one component of the device affects its neighboring component, which in turn affects another neighboring component, and so on, until the final behavior of the device—that is, how an input relates to an observable system behavior—can be explained in a coherent way. The above description provides a kind of causal mapping of how the components of the pressure-regulator interact to maintain and regulate pressure. We are given an explanatory account of how the components of the device, which to an outside observer is a black box, produce outward behaviors. Hence, explanatory power arises very naturally in model-based reasoning systems.

The principle of locality specifies that components can act on other components only if they are directly connected to them (De Kleer and Brown, 1985). In the description above, only neighboring components act on each other. (For example, A acts directly on B, but A cannot act directly on, say, D.) Also important to causality is the notion of directionality: information (or materials) in a system may move in only a specified direction, as it does in the pressure regulator example (in general, however, this may not be the case). Several formalisms have been developed on how to produce causal models to explain the behaviors of devices and systems (see e.g., de Kleer and Brown, 1983; de Kleer, 1984; and Forbus, 1988).

Example 2: Diagnosis of a Faulty System Component. In this second example, adapted from Davis and Hamscher (1988), a model of a simple device is given. See Figure 2.2.
It contains five inputs, labeled A through E, five components interconnected by a specified topology, and two outputs, labeled F and G. The multiply components (mult-1, mult-2, and mult-3) receive two inputs and multiply them; the add components (add-1 and add-2) receive two of the outputs from the multiply components and add them together. Finally, these components produce the final (observable) outputs, F and G.

Figure 2.2: A Simple Device, from Davis and Hamscher (1988). Inputs: A=3, B=3, C=2, D=2, E=3. Predicted outputs: F=12, G=12. Observed outputs: F=10, G=12.

From the diagram, we can predict from the device's structure that the output F should be 12 and that the output from G should also be 12. However, suppose that we actually observe that F is 10, rather than 12. Assuming that there is only one faulty component at a time (note: this may not necessarily be a correct assumption, because two or more components may interact to produce a faulty behavior; to simplify matters we assume single-point failure), we can eliminate mult-3 and add-2 as faulty components since output F is not connected to these components on the upstream side. We might also observe that G is 12, and infer from this that mult-2, mult-3, and add-2 are all working properly. Hence, by subtracting out the known good components from the candidate bad components, we conclude that the faulty component is either mult-1 or add-1.

The important point to note about this reasoning process is that there is a model that predicts what the device is supposed to do. When there is a discrepancy between this prediction and what the device actually does, the model-based reasoning technique works backwards to find the faulty component. Much of the previous work on model-based reasoning systems, in fact, operates on this basis.

In the above example, since we are reasoning from the structure of the device to determine which component is malfunctioning, we are utilizing model-based reasoning. We could have used a heuristic-based approach instead, as a traditional rule-based expert system would, and encode our diagnostic procedures as a series of IF-THEN associational rules, in which observable behaviors (F and G) are related to system states (e.g., mult-1 is malfunctioning).
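The candidate-elimination reasoning just described can be captured in a few lines of code. The following Python sketch is an illustration of the idea, not code from any of the cited systems; in particular, the exact wiring of inputs to multipliers (mult-1 taking A and C, mult-2 taking B and D, mult-3 taking C and E) follows the standard version of this example and is an assumption here.

```python
# Sketch of single-fault, model-based diagnosis for the device of Figure 2.2.
# Structure: each component's function, its input signals, and its output signal.
from operator import add, mul

COMPONENTS = {
    "mult-1": (mul, ("A", "C"), "x"),
    "mult-2": (mul, ("B", "D"), "y"),
    "mult-3": (mul, ("C", "E"), "z"),
    "add-1":  (add, ("x", "y"), "F"),
    "add-2":  (add, ("y", "z"), "G"),
}

def predict(inputs):
    """Propagate the inputs through the structure to predict every signal."""
    values = dict(inputs)
    for fn, (in1, in2), out in COMPONENTS.values():
        values[out] = fn(values[in1], values[in2])
    return values

def upstream(signal):
    """All components that lie upstream of (and so can influence) a signal."""
    suspects = set()
    for name, (_, ins, out) in COMPONENTS.items():
        if out == signal:
            suspects.add(name)
            for i in ins:
                suspects |= upstream(i)
    return suspects

def diagnose(inputs, observations):
    """Suspects are components upstream of any discrepant output, minus those
    exonerated by outputs that agree with the prediction (single-fault case)."""
    predicted = predict(inputs)
    suspects, exonerated = set(), set()
    for out, observed in observations.items():
        if observed != predicted[out]:
            suspects |= upstream(out)
        else:
            exonerated |= upstream(out)
    return suspects - exonerated

inputs = {"A": 3, "B": 3, "C": 2, "D": 2, "E": 3}
print(sorted(diagnose(inputs, {"F": 10, "G": 12})))   # -> ['add-1', 'mult-1']
```

Note that the same generic procedure works for any device description supplied in COMPONENTS, which is the device-independence advantage of model-based reasoning noted above.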
2.4 Toward More Flexibility in the User Interface

Finally, this chapter concludes with a discussion of how model-based reasoning techniques can add more flexibility to traditional rule-based systems. Note that systems can be created that integrate model-based reasoning with rule-based reasoning—the development of one type need not preclude the other type. Since this dissertation is concerned with enhancing the explanatory power of intelligent interfaces, the following discussion will suggest ways that model-based reasoning may be added to an intelligent interface to enhance its overall power and flexibility.

More transparency in the system. Rule-based systems, which employ simple input-output associational heuristics, do not allow for the inspection of components of a system since the structure of the system is not made explicit. As Kunz points out, these systems are black box models: "they emphasize the relation between input and output of a system component without special effort to assure that intermediate results relate to any state in the modeled system" (p. 76).

More coherence in the system. Model-based reasoning may serve as a unifying foundation for a complex expert support system. By structuring an expert support system around an underlying domain model, a system can be developed with more organization and coherence. Such coherence can be used to integrate several related problems into one common framework (Kunz, 1988; Nardi and Simons, 1986). This may, in part, address the problem of brittle expert systems that break down for problems even closely related to the original problem.

Generic power of model-based reasoning. Model-based reasoning can apply to an entire class of structures and behaviors (e.g., Kunz, 1988). This means that separate rules do not need to be specified for each specific case in the problem-solving domain. Rather, they can be specified for large classes of structures and behaviors. Gensym's G2 software (Gensym Corporation, 1997) is a good example of a development environment that allows for the specification of general rules to describe the behavior of entire classes of model-based systems.

Support for graphical model-based interfaces. Model-based reasoning systems typically exploit the use of powerful graphical user interfaces that allow for multi-leveled descriptions of complex domains. Users, for example, may be able to click on components of a symbolic model to obtain a more fine-grained description of a component. Moreover, other capabilities, such as the use of multimedia and animation, may be developed that capitalize on a human end-user's inherent capabilities to process graphical information more effectively and efficiently than, say, information presented through a more limited text-based interface.

2.5 Limitations of Model-Based Reasoning

One might easily be tempted to conclude that model-based reasoning is always a superior form of reasoning and represents an evolutionary step forward from its predecessor, rule-based reasoning. However, such an easy conclusion belies the fact that model-based reasoning is not suitable for all types of problems and domains, and may actually be detrimental in certain cases. It is certainly not a panacea that can solve all classes of problems requiring expertise.

Davis and Hamscher (1988) point out that certain classes of problems will not work well with model-based reasoning techniques. First of all, the system may simply be too difficult to model: there may be very subtle interaction effects among a variety of components, and predicting system behavior from a model structure would be impossible to do. In such situations, it would be much easier to capture the knowledge in the form of IF-THEN rules. On the other end of the spectrum, the problem-solving domain may be so simple that the use of a model may be overkill. It would be much easier to simply enumerate the rules, which can much more easily capture the expertise required to problem-solve. On the other hand, the designer of an intelligent system may utilize model-based reasoning for base cases in a problem-solving domain, and fine-tune these cases with the addition of rules. In this case, it is possible to integrate model-based reasoning with rule-based reasoning, which would result in a more powerful system than a system developed using only one of the techniques alone.

Chapter 3
Explanatory Power of a User Interface

This chapter will formalize and define more precisely the explanatory power of a user interface. For intelligent systems, explanatory power indicates the ability or strength of an interface to explain a system's underlying reasoning (the "understandability" of the system).
Two related characteristics are relevant to understanding explanatory power: transparency, or the ability to see the underlying mechanism of the system so that it is not merely a black box; and flexibility, or the ability of the interface to adapt to a wide variety of end-user interactions, so it is not merely a rigid dialogue, but an open-ended interaction that allows the end-user to explore and understand the system more fully. While transparency of the system is a quality related to the informational content of the system itself, flexibility is more related to the nature of the end-user interaction with the system. The distinction is a subtle one, for it is easy to confuse the two qualities. More flexibility in the user interface can lead to more transparency—having a more open-ended interaction can enable an end-user to seek out more ways to better understand the system; by the same token, having a more restrictive interface can impair the end-user's ability to seek out more transparency in the system, even if it does already exist. Hence, this research views flexibility as a separate quality of a user interface that exists independently of interface transparency. The explanatory power construct, as utilized in this dissertation, integrates and captures both transparency of the user interface and flexibility of the end-user interaction.

Making the distinction between informational qualities vs. interaction-related qualities of user interfaces is an important first step to developing a framework for explanatory power. The goal of this chapter is to describe and better understand the multi-dimensional nature of explanatory power, specifically, to investigate some of the determining factors of explanatory power. By doing so, we will be in a better position to offer design guidelines, which can be used by system designers to enhance an intelligent system's effectiveness.

The research framework looks at three types of enhancements to explanatory power. Content-based enhancements focus on augmenting the actual informational content of an intelligent system. In particular, what types of explanations will enhance explanatory power? In this chapter, a review of explanation types based on content is offered to suggest ways that different types of explanations may increase explanatory power. However, these content-based enhancements may not offer any gains in explanatory power, since the end-user may not bother to utilize them. Hence, this research also looks at ways to enhance the interactive experience of the end-user.

Two additional determinants of explanatory power, related to enhancing the human-computer interaction, are considered. Interface-based enhancements relate to the interface design choices that systems designers make to increase the effectiveness of intelligent systems. A number of possibilities are suggested that may increase the flexibility of the user interface, and as a result, its explanatory power. Still, the interface characteristics, in themselves, may not result in improved problem-solving performance. Hence, the third determinant of explanatory power considered is advisory strategy, or the manner in which the explanation is delivered to the end-user. A number of types of strategies are considered, but the focus in this research study will be on one particular method of manipulating an advisory strategy: controlling system restrictiveness.

The remainder of this chapter proceeds in the following manner.
Section 3.1 looks at the psychological basis of explanation, and suggests criteria for the evaluation of good explanations. Section 3.2 explores content-based enhancements, describing both the current state of expert systems explanation capabilities, as well as suggestions for improvements. Section 3.3 explores interface-based enhancements, focusing specifically on characteristics of an interface that lead to enhanced explanatory power. Section 3.4 explores advisory strategies, the system strategies used to deliver the explanations, which one might employ to further enhance a system's explanatory power. Finally, a summary of the framework is given in Section 3.5.  3.1 Criteria for Evaluating Explanation Quality Explanation is fundamental to what people do to understand and make sense of the world they live in: "People want to understand the world—personally, socially, and physically. They do this by constantly creating and modifying explanations and indexing memories by the explanations they caused to be formed." (Schank, 1986)  The process of explanation can be viewed as being similar to the process of assimilative learning. In seeking out explanations, a person is seeking to develop more powerful and  40  integrated models of the world. New knowledge must be integrated with pre-existing knowledge in order for assimilative learning to occur.  Schank (1986, p. 143) describes explanation as a learning process which, roughly, contains the following steps, which begins when we perceive an anomaly in a situation, and ends with a more powerful mental model of the world:  •  Find an anomaly  • • • •  Establish what kind of explanation will make it less anomalous Formulate the explanation pattern that will suffice Explain Establish whether the explanation: —makes clear the anomaly but does no more than that, or^ —must be generalized beyond the current case If generalization is necessary, then find the right level of generalization Write the rule that has just been formulated Find the breadth of its application Verify Reorganize at a greater level of generality  • • • •  •  We see that this description of the explanation process is really a description of assimilative learning—it is not merely enough to add new knowledge to a learner's memory if effective learning is to occur. Rather, the learner is engaged in an assimilation process, a process that involves reorganizing his or her prior knowledge structures to account for the anomaly just experienced. One purpose of explanation, then, is to develop more powerful models of the world, models that are generalizable to many different kinds of situations.  What, then, makes for a good explanation? Schank suggests that explanations must often take a broader view of a situation, that they must include more behavior than was just  41  observed, and that they must instruct us on how to behave in situations of a similar kind in the future. In short, they must be both inclusive and instructive. Ellis (1989) also observes that good explanations must be flexible, taking into account the needs of the individual user: they must not only be detailed enough so that they are comprehensible to the learner, but they must also be pitched at a level high enough so that unnecessary detail is suppressed. This suggests that an explanation will not necessarily be good r  across all levels of user expertise and user styles. Obviously a user interface endowed with greater explanatory power will be one in which these criteria for good explanations are met.  
Another view of explanation quality is offered by Lester and Porter (1996), who suggest five criteria for the evaluation of explanations:  •  Coherence: The extent to which the system is able to generate a global assessment  •  Content: The extent to which the explanation's information is adequate and focused.  •  Organization: The extent to which the information is well-organized.  •  Writing style: The quality of the prose (if it is textual).  •  Correctness: For scientific explanations, the extent to which the explanations agree with the established scientific record.  Obviously, explanatory power is inextricably tied to explanation quality, for no matter how enhanced and sophisticated an explanation facility a system designer creates, an  42  explanation is only as good as the quality of the explanation itself. Hence, in the discussion of the three types of enhancements to explanatory power that follows, one must keep in mind that the quality of the explanations must be good enough so that an end-user will find them useful. Otherwise, no matter what types of enhancements are finally implemented, they will not be used if the end-user does not find them tractable enough to use.  3.2 Content-Based Enhancements How can we improve the informational content of intelligent systems so that their internal mechanism is more visible for inspection (i.e., the transparency of the system is enhanced)? This section will answer this question by addressing content-based enhancements to explanatory power. The research involved in content-based enhancements has a well-documented history in the AI and Information Systems (IS) literatures, which this section will review. From this overview, a classification of explanation types by content is provided. Finally, this section reviews empirical studies in explanation usage that are relevant to this thesis.  3.2.1 Explanation Use in Traditional Rule-Based Expert Systems Computer-generated explanations have long been associated with expert systems use. Explanations use in expert systems begins with MYCIN, developed by Edward Shortcliffe at Stanford Medical School in the 1970's for the diagnosis and treatment of bacterial infections. M Y C I N is considered to be one of the classic expert systems,  43  certainly the most widely-cited, and it has introduced several features that have become standards in expert systems technology: rule-based knowledge representation, probabilistic rules to capture uncertainty, the backward chaining method, explanation, and a user-friendly interface (Turban, 1995). To aid with system debugging, Shortcliffe added a R U L E command that, when requested, asked MYCIN which rule was currently being used. The rule was displayed in LISP, but later was displayed in English to make this simple explanation more user-friendly (Buchanan and Shortcliffe, 1986). Later, this R U L E command was changed to W H Y to enable "the user to examine the entire reasoning chain upward to the topmost goal by asking W H Y several times in succession." (p. 333). The HOW explanation was also developed that enabled the user to descend the branches of the reasoning network. Today, W H Y is commonly used by the user to ask why a certain type of input is needed by the system (typically the rule requiring the input is displayed). The typical HOW question is posed by users when they would like to know how a certain recommendation or conclusion was reached (typically the entire rule trace of the reasoning process is given).  
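To make the WHY/HOW mechanism concrete, here is a toy backward-chaining sketch in Python. It is not MYCIN's implementation, and the rules, facts, and question names are invented for illustration; the point is only that both kinds of explanation fall directly out of recording which rule needed an answer and which rules fired.

```python
# Toy backward-chaining engine that records enough state to answer WHY a
# question was asked and HOW a conclusion was reached. Rules are invented.
RULES = [
    {"id": "R1", "if": ["gram_negative", "rod_shaped"], "then": "organism_class_x"},
    {"id": "R2", "if": ["organism_class_x", "hospital_acquired"], "then": "recommend_drug_y"},
]

def ask_user(condition, rule, facts, asked):
    asked[condition] = rule      # remember WHY this question was asked
    facts.add(condition)         # stub: assume the user answers "yes"
    return True

def establish(goal, facts, asked, trace):
    """Try to establish a goal, appending each rule that fires to the trace."""
    if goal in facts:
        return True
    for rule in RULES:
        if rule["then"] == goal:
            if all(establish(c, facts, asked, trace) or
                   ask_user(c, rule, facts, asked) for c in rule["if"]):
                trace.append(rule)
                return True
    return False

facts, asked, trace = set(), {}, []
establish("recommend_drug_y", facts, asked, trace)

# WHY was I asked about "hospital_acquired"? -> show the rule that needed it.
print("WHY:", asked["hospital_acquired"]["id"])
# HOW was "recommend_drug_y" concluded? -> show the chain of rules that fired.
print("HOW:", [r["id"] for r in trace])
```

In this sketch a WHY request simply displays the rule that required the answer, and a HOW request displays the chain of fired rules, which is exactly the rule-trace style of explanation discussed next.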
Why does the user need an explanations facility? Buchanan and Shortcliffe (1986) offer several reasons. First, both the system builder and the end-user need to understand the knowledge base in order to maintain it and use it effectively. Second, systems builders can use explanations for debugging purposes. Third, explanations serve an educational function. Users who feel they learn something about the knowledge base are more likely to feel more comfortable with such a system. Fourth, explanations can help to convince users that the conclusions the expert system reaches are reasonable, and lead to their  44  acceptance.  In addition, Turban (1995) suggests a number of ways that explanations  can help end-users understand the limits of the system's expertise: 1) explanations can uncover shortcomings of the rules and the knowledge base; 2) explanations can bring to light situations that were unanticipated by the user; 3) explanations can clarify assumptions underlying the system's operations; and 4) explanations can be used to conduct sensitivity analysis, e.g., how sensitive a given set of inputs are to an expert system's conclusions.  3.2.2 Limitations of Rule Traces Although current figures are not available on expert systems in use today, it has been suggested that the explanation facilities in a large number of expert systems, perhaps even in a majority of them, do not offer explanations based on anything more sophisticated than a rule trace (Ellis, 1989). Yet, explanations based on rule traces are clearly limited in terms of providing explanations that are instructive, inclusive, and easy to use and understand. This subsection and the next will report on the limitations of ruletrace explanations and some of the ways researchers have suggested for improving the capabilities of traditional explanation facilities.  The How-Why paradigm of explanations use described in the previous section obviously offers only a limited form of explanation. For one, other types of questions might need to be asked. For example, "what-if questions might enable users to explore the effect of changing assumptions in the rule-base, or to perform sensitivity analysis on input variables. "Why-not?" questions or "Why did you not conclude that <such and such> is  45  true?" are frequently asked when probing real human experts (Ellis, 1989), yet this capability is a relatively difficult one to develop in an expert system.  Probably more serious is that the "why" and "how" questions are based on rule traces, which an end-user is likely to have difficulty in comprehending. One solution to this is to replace rule-traces produced directly from computer code, with canned text that is easier for the end-user to understand. However, the replacement of computer-generated explanations with canned text explanations comes at a stiff price: user questions must be anticipated in advance and it is unlikely that all such questions will be thought of ahead of time (Moffit, 1994). Maintenance of such canned text explanations (keeping them in sync with an ever changing rule-base) could also create problems further down the line. Moreover, by using canned text explanations, the system has no conceptual model of what it is saying so that it is not possible to develop more advanced types of explanations, such as providing explanations at different levels of abstraction to the end-user (Swartout, 1983).  
Another problem with rule traces is that they sometimes provide too much detail, which an end-user is simply not interested in seeing. As Swartout (1983) astutely observes in discussing the MYCIN rule traces: "Parts of the program [i.e., rule traces] appear mainly because we are implementing an algorithm on a computer. If these steps are described by physicians, they are likely to be uninteresting and potentially confusing" (p. 312). There is often too much algorithmic detail in a rule trace that an end-user cannot understand, or may not care to understand. An effective explanation must be pitched, to such an end-user, at a higher level.

Several researchers have also commented on the opacity of rules, or a rule's inability to make visible to the end-user the underlying reasoning process. Clancey (1983), for one, has noted that rules typically do not contain justifications or fail to shed light on underlying causal processes. This may be due, in part, to the way that expert knowledge is compiled: "rules are 'compiled' in the sense that they are optimizations that leave out unnecessary steps—evolved patterns of reasoning that cope with the demands of ordinary problems" (Clancey, 1983, p. 225). However, these intermediate steps frequently need to be explained to end-users who may not understand how an expert system reached a conclusion.

Clancey (1983) has also pointed out how strategic knowledge can be hidden in the premises of rules. He defines strategic knowledge as "an approach for solving a problem, a plan for ordering methods so that a goal is reached" (p. 233). For example, rules may, by the simple ordering of their premises, dictate to the inference engine which conditions should be checked before the others. Such strategic knowledge is effectively lost to the end-user requesting a rule trace.

3.2.3 More Sophisticated Explanations

The preceding subsection suggests a number of improvements to traditional rule traces, some of which have already been explored by a number of researchers. Obviously, an explanation facility cannot do everything, and part of the problem of good explanation design lies in understanding what types of explanatory capabilities can be feasibly developed, within a reasonable amount of time, for a given task domain and for a given class of users. The following discussion considers some extensions to traditional rule traces.

In NEOMYCIN, an offspring of MYCIN, Clancey and Letsinger (1981) (see also Clancey, 1983; and Hasling et al., 1984) consider providing explanations that capture the overall approach used by the system to solve a problem—that is, the underlying strategic knowledge. One suggestion made to this effect is to capture strategic knowledge in meta-rules. These meta-rules provide the high-level knowledge for controlling the use of rules. An example of a meta-rule from NEOMYCIN is given below.

    IF   (1) the infection is pelvic-abscess, and
         (2) there are rules which mention in their premise enterobacteriaceae, and
         (3) there are rules which mention in their premise gram-pos-rods,
    THEN there is suggestive evidence (.4) that the former should be done before the latter

The firing of the above rule will cause one goal (2) to be pursued before another goal (3). Such meta-rules make the expert system's problem-solving strategy more explicit and therefore potentially more visible to the end-user requesting an explanation.
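Sketched in code, a meta-rule of this kind is simply a piece of knowledge that reorders which object-level rules the inference engine pursues first. The structures below are invented for illustration and are not NEOMYCIN's internal representation.

```python
# Sketch: a meta-rule that prefers rules mentioning enterobacteriaceae over
# rules mentioning gram-pos-rods when the infection is pelvic-abscess.
candidate_rules = [
    {"id": "RULE050", "premise_mentions": {"gram-pos-rods"}},
    {"id": "RULE095", "premise_mentions": {"enterobacteriaceae"}},
]

meta_rules = [
    {
        "applies_when": lambda ctx: ctx.get("infection") == "pelvic-abscess",
        "prefer": "enterobacteriaceae",
        "over": "gram-pos-rods",
        "cf": 0.4,      # "suggestive evidence" strength
    },
]

def order_rules(rules, context):
    """Return candidate rules in the order suggested by applicable meta-rules."""
    def priority(rule):
        score = 0.0
        for m in meta_rules:
            if m["applies_when"](context):
                if m["prefer"] in rule["premise_mentions"]:
                    score += m["cf"]     # pull preferred rules forward
                if m["over"] in rule["premise_mentions"]:
                    score -= m["cf"]     # push the others back
        return -score                    # higher score sorts earlier
    return sorted(rules, key=priority)

ordered = order_rules(candidate_rules, {"infection": "pelvic-abscess"})
print([r["id"] for r in ordered])        # -> ['RULE095', 'RULE050']
```

Because the ordering decision is represented explicitly rather than buried in the ordering of premises, it can itself be displayed to the end-user as part of a strategic explanation.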
In XPLAIN, Swartout (1983) suggests capturing strategic knowledge as a tree of goals, called a refinement structure. "Refining" a goal means turning it into more specific subgoals. Hence, the top of the tree is a very abstract, high-level goal, and the lower levels of the tree represent less abstract steps needed to implement this goal. Eventually, the level of the system primitives (i.e., built-in system operations) is reached.

Aikens (1983) identifies the problem of the user being unable to follow a line of reasoning under traditional explanation facilities. For example, MYCIN is unable to deal with more than one rule at a time, so that a line of reasoning is provided as a result of a user requesting a succession of "WHY" explanations. To make the line of reasoning more visible, Southwick (1988), for one, has suggested the development of a hierarchy of landmarks or topics to guide the end-user. He defines a topic as "a logical and conceptual entity in the knowledge base of an expert system." Such topics serve as landmarks or anchoring points in the knowledge base. These topics, ideally, should have some intuitional appeal for an end-user so that a system can use them as convenient explanatory segments. Along a similar vein, Mockler (1989) suggests the development of dependency diagrams to graphically model a knowledge-based system. These diagrams show knowledge segments and their interrelationships, in a hierarchical fashion. They provide an overall summary view of the knowledge base. Such a tree structure would enable end-users to better comprehend a knowledge base, unlike a flat set of rules.

Another criticism leveled against traditional explanation facilities is that they are incapable of justifying their actions, or providing an underlying reason for a system action. One solution to this lack of justification would be to provide end-users with model-based explanations that justify system actions and results by linking them to a deep causal model (Southwick, 1991). Such deep explanations are believed to give end-users an understanding of the underlying reasons for a recommendation. Swartout (1983) identifies two types of deep knowledge that might be provided as explanations: the domain model is descriptive knowledge about the domain and consists of such things as taxonomic knowledge and causal relationships; the domain principles are prescriptive knowledge and consist of such things as methods and heuristics used for problem solving. More powerful explanations might be generated by an expert system that ties in to domain models (structural, descriptive knowledge) as well as to domain principles (procedural, prescriptive knowledge).

Wallis and Shortliffe (1982) advocate the development of causal networks in order to create better explanations for medical consultation systems. The causal network, they contend, can serve as an integral part of the reasoning system, and can be used to guide the generation of customized deep explanations. For example, we might assume the causal chain in a system to be of the form t1 -> t2 -> t3, with each element assigned a measure of complexity. Suppose further that t2 is deemed to be too complex for a novice user. The system can tailor such explanations so that more fine-grained (and more complex) elements in the chain of causality are hidden from such users; in this case, only t1 -> t3 is revealed to the user.
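A small sketch shows how such tailoring might work; the element names, complexity ratings, and thresholds below are illustrative assumptions rather than details of Wallis and Shortliffe's system.

```python
# Sketch of complexity-based tailoring of a causal chain: intermediate elements
# that exceed the user's complexity threshold are suppressed in the explanation.
causal_chain = [
    ("t1", 1),   # (element, complexity rating)
    ("t2", 7),   # an intermediate mechanism too complex for a novice
    ("t3", 2),
]

def tailor(chain, user_threshold):
    """Keep the endpoints plus any intermediate element simple enough to show."""
    kept = ([chain[0]] +
            [step for step in chain[1:-1] if step[1] <= user_threshold] +
            [chain[-1]])
    return " -> ".join(name for name, _ in kept)

print(tailor(causal_chain, user_threshold=5))   # novice view:  t1 -> t3
print(tailor(causal_chain, user_threshold=9))   # expert view:  t1 -> t2 -> t3
```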
3.2.4 A Classification of Explanation Types: A Summary of Content-Based Enhancements This section summarizes the three types of explanations that contribute to the overall explanatory power of an ESS user interface: rule traces, strategic knowledge, and deep i  justifications. This classification is similar to taxonomies developed by other researchers  50  (see e.g., Gregor and Benbasat, 1999; Chandrasekeran, Tanner and Josephson, 1989; and Southwick, 1991). Under each type, points about how the explanatory power of the user interface may be increased are provided.  Rule Traces:  '  •  Different types of questions are allowed such as "why not?" and "what if?".  •  Rules are displayed in a natural language as opposed to computer-generated code.  •  Rules traces are displayed at the right level of detail.  Strategic Knowledge: •  The problem-solving strategies are made explicit to the end-user (or are available upon request)  •  The overall line-of-reasoning is made visible to the end-user.  •  The strategic knowledge is appropriately structured for the end-user (e.g., a tree of goals, or a tree of topics).  Deep Justifications: •  An explanation is tied to an underlying domain model that provides structural knowledge (e.g., causal relationships about the domain) and taxonomic knowledge about the domain.  •  An explanation is tied to underlying domain principles that provide knowledge about methods and heuristics used to problem-solve.  51  The two intelligent systems employed in this research study illustrate all three types of content-based enhancements. Expert Strategy provides structured rule traces, a hierarchic visualization of strategic knowledge, and deep explanations that contain underlying domain principles (see Chapter 5 for more information). LogNet provides deep explanations support on how to design logistics networks (see Chapter 6 for more information).  3.2.5 Empirical Studies on Explanations Usage A number of researchers have conducted empirical studies on explanations usage in expert systems. This section will review some of the studies that have been conducted to ' date. This discussion will focus on whether or not explanations usage led to enhanced performance. In addition, we would like to understand what types of explanations (rule trace, strategic, deep, others) were responsible for these performance gains. Recent overviews of empirical investigations on explanations use can be found in Gregor and Benbasat (1999) and Nah (1998).  Dhaliwal (1993) found that the use of explanations in a financial expert system increased both the accuracy of individual decision-making and user perceptions of system usefulness. He investigated three explanation types: how, why, and strategic. Some specific findings related to content types were found: 1) the three types of explanations (why, how, and strategic) were used in different proportions; 2) WHY and HOW explanation types were used more than strategic explanations; and 3) the use of Why explanations provided as feedback improved the accuracy of judgmental decision  52  making. Dhaliwal also found that explanations were used more frequently when there. was a moderate level of disagreement with an ESS's recommendations. 
At high levels of disagreement with the system conclusions, users perceived their differences with the system to be too large so that they did not request further explanations; on the other hand, at high levels of agreement with the system, users also did not seek out explanations since there was no perceived conflict with the system conclusions.  Ye (1990; Ye and Johnson, 1995) utilized an expert system in the auditing domain and provided three types of explanations (identical to the classification given in the preceding section): trace, justification, and strategy. His results indicated that usage of these explanations could make a system's advice more acceptable to auditors. He also found that justification was the most effective type of explanation to effect changes in user attitudes toward the system, as evidenced by the frequency with which this type of explanation was requested, as well as the large amount of time that users spent analyzing f these explanations. One of the distinguishing features of this study was the use of Toulmin's model of argumentation (Toulmin, 1958), a model that provides a basis for examining the elements of argumentation. Ye contends that arguments that conform to Toulmin's model will be more effective, and lead to better and more persuasive arguments.  Did use of explanations improve problem-solving performance? The empirical studies give mixed results. As mentioned above, Dhaliwal (1993) found that the accuracy of decision-making was increased with the use of WHY (rule-trace) explanations. Gregor  53  (1996), in her experimental study, found that the use of explanations of all types increased problem-solving performance. Mao and Benbasat (2000) looked at both deep explanations usage as well as reasoning trace explanations. In their research study, they looked at three types of deep explanations: terminological knowledge (knowledge about concepts and relationships in a domain), domain descriptive knowledge (abstract factual knowledge about a domain, or "textbook rudiments"), and problem-solving knowledge (knowledge about how tasks can be accomplished). They measured a construct called judgment congruence, which refers to the degree to which users of a knowledge-based system were brought closer to the underlying expert judgments of the KBS. Such convergence, they contend, could mean that the knowledge of the experts was transferred to the users through the KBS explanations. They found that the use of deep explanations was far more effective than reasoning-trace explanations in effecting judgment congruence.  However, as Gregor and Benbasat (1999) maintain, the mere availability of explanations does not necessarily lead to improved problem-solving performance. In one study conducted by Gault (1994), no difference in performance was found between treatment groups with and without explanations. However, Gregor and Benbasat observe that no measurements of actual explanations usage were taken by Gault, which might explain why no statistically significant differences were found.  A few studies actually indicate that enhanced explanations will not improve learning and problem-solving performance. Steinbart and Accola (1994) randomly assigned subjects  54  to either rule-trace or justification (deep) explanations. Their results showed that explanation type did not affect either learning or user satisfaction. They speculate on why this might be the case. One possibility is that users may not attend to the explanations. 
(However, Steinbart and Accola discount this possibility in their experiment since one of the researchers was in the room at all times to observe whether subjects were skipping over the explanations). Another possibility is that the task may be so structured and the relationships between inputs and outputs (or causes and effects) so straightforward that the explanations are adequately expressed in the rule traces. Still a third possibility is that the time frame for the experiment was too short, so that it was not possible to demonstrate any effects of the alternate treatments. Other researchers (e.g., Moffit, 1989; and Odom and Dorr, 1995) also failed to find any benefits from more complex and detailed justification explanations than less detailed and more shallow explanations.  The literature in human factors and ergonomics also offers some empirical evidence that enhancing the content of explanations can improve problem-solving performance. While many of these studies were conducted earlier (in the 1970's) than those found in the expert systems explanations literature, they are precursors to the empirical work conducted more recently (in the 1980's and 1990's), and their recommendations remain germane today.  Brigham and Laios (1975) conducted experiments in which subjects were asked to control a relatively complex and slow-response laboratory process plant for a total of .  55  three hours. They investigated whether the provision of four types of information would enhance performance. The first type was the availability of intermediate information from the plant—that is, information about the state of intermediate plant variables other than the control input and plant output variables. The second type was information about the structure and dynamics of the plant—that is, information about how the components of the plant were connected, as well as relationships of the intermediate variables. The third type was a time-history record of the control inputs and plant outputs. The fourth type was whether a subject was able to observe the performance of an automatic controller (and actually see the process). The experimental task involved changing the output of the plant from one value to another as quickly and precisely as possible. Statistical results revealed that it was the combination of intermediate information and , instructions about the plant structure that led to the most rapid learning and the best overall performance. Some of the subjects with intermediate information only eventually developed the same "mental model" and achieved equal performance, but they took a longer time to do so. Note how the information about intermediate variables and plant structure and behavior, in this study, can be likened to the deep explanations support found more recently in the research on explanations use.  In another experimental study in the ergonomics literature, Shepherd et al. (1977) investigated three different training methods and its effect on the diagnosis of plant failures in a simulated task environment. The experiment consisted of two manipulations of the training method and a control group. The "theory" group received a conventional description of the plant—e.g., the flow of material using a plant diagram was  56  demonstrated. The "rules" group were taught a set of rules for diagnosis. The "no story" group, the control group, received no prior instruction except for a brief description of the instruments to be used. 
The results of the study showed that all three groups performed equally well in diagnosing the eight familiar faults. However, the rules group was consistently the most proficient in diagnosing the unfamiliar faults. Shepherd etal. conclude that the teaching of conventional theory (or underlying domain principles) is of limited value, and so training should also provide generalizable rules that users can apply directly to the diagnostic tasks.  More recently, Morris and Rouse (1985) conducted an experimental study to determine what kind of knowledge the operator of a dynamic system needs to know in order to be an effective problem-solver. Two different kinds of knowledge were explored: knowledge of how to control the system (procedures), and knowledge of how the system works  (principles or deep knowledge). For the experimental task, they used PLANT  (Morris, Rouse, and Fath, 1985), a computer-based system that simulates a hydraulic system of tanks interconnected by valves so as to produce an unspecified product. The P L A N T operator's task was to supervise the flow of fluid so as to maximize production, given the physical limitations of the system.  Four groups of subjects were distinguished based on the type of instructions they received: 1) procedures, 2) principles,.3) neither procedures nor principles, and 4) both procedures and principles. Each subject was given both familiar and unfamiliar problemsolving tasks. For example, pump and valve failures were familiar, in that they occurred  57  frequently (two to five times within a one-hour session), and the manner in which these problems should be solved was given in the procedures. On the other hand, tank ruptures were unfamiliar since they occurred only once during an experimental session,(near the end of the experiment), and there were no explicit procedures given on how to handle these problems.  The results of the experiment showed that procedures were beneficial, but principles were not. Subjects receiving procedures controlled P L A N T more effectively; subjects receiving P L A N T principles, however, did not perform the tasks any better. Surprisingly, ^ instruction of deep principles also had no observable effect on the subject's diagnosis of even the unfamiliar problems. This led Morris and Rouse (1985) to conclude that principles, in and of themselves, would not lead to enhanced performance: "The impact of principles may have been minimal because the information was not in a form that was directly usable by subjects—i.e., was not directly related to what they should be able to do" (p. 705). The implication of this finding is an important one to consider in this research: the mere presence of deep explanations (or any type of content-based enhancement) will not necessarily lead to increased explanatory power if the end-user does not know how to use these explanations, or if they are not directly related to the task at hand.  3.3 Interface-Based Enhancements Whereas content-based enhancements are concerned with enhancing the actual explanations themselves, interface-based enhancements focus on designing the user  58  interface to foster transparency and flexibility in the system. The trend toward highpowered PC's and workstations during the 1980's and 1990's has given rise to the user interface taking on a more central role, in the development of expert support systems. 
In addition, the increased need to support the user's cognitive task so that the human user remains an active user, working in a cooperative manner with the system, has also necessitated that interface design considerations take on a more prominent role (Hayes-Roth and Jacobstein, 1994; Stelzner and Williams, 1988). Indeed, most of the current and most powerful shells on the marketplace contain powerful tools to design object-oriented, graphical user interfaces (e.g., Gensym's G2, Gensym Corporation, 1997).

Still another reason for the importance of the user interface is the increasing size and complexity of systems. Large-scale, industrial-strength systems in organizations require that the user interface manage complexity well. This means that such interfaces must enable end-users to browse quickly through large amounts of information and to obtain multiple views of the same knowledge, in order to support the varying task requirements of the different users of the organization.

3.3.1 Characteristics of the User Interface

In this section, several characteristics of the user interface are considered that may enhance the explanatory power of a system. The central focus in this discussion will be on supporting the end-user—providing him or her with the capabilities to interact with the system in a wide variety of ways, as well as supporting the management of cognitive complexity.

Stelzner and Williams (1988) identify five major requirements of the ESS user interface, which will provide a framework for the discussion: 1) the natural idiom; 2) immediate feedback; 3) recoverability; 4) granularity; and 5) multiple interfaces to the same knowledge.

The natural idiom. This refers to the ability of the interface to represent the end-user's domain so that it maps more closely to an end-user's mental model. Stelzner and Williams argue, therefore, that the central metaphor of the interface should be that of the modeled world itself: instead of describing the domain (using a text-based conversational dialogue, for example), the end-user should be performing actions on a graphical model of the domain (using a direct-manipulation graphical user interface). Hence, the end-user is in direct contact with the domain model, and the interaction is more natural and easier to learn and to use.

Immediate feedback. This refers to the ability of the interface to allow the user to receive immediate feedback on the effects of his or her actions. This quality supports the feeling of acting directly on the objects of the domain model, and removes the perception of the computer acting as an intermediary (Hutchins, Hollan, & Norman, 1985, as reported in Stelzner and Williams, 1988). Animation is one technique that might be used to endow the system with immediate feedback. Stelzner and Williams provide the example of a knowledge-based simulation of a factory, in which the graphical user interface contains a detailed layout of the factory to aid engineers in determining which operating strategy is best. Animation of basket movement through the factory allows the engineer to quickly identify bottlenecks and underutilized resources in the factory.

Recoverability. This refers to the ability of the interface to allow the end-user to back out of changes made to the system. End-users may wish to test out the effects of different changes on a system, and may desire an easy way to back out of these changes.
Such a capability will encourage an end-user to explore and experiment with a system's capabilities. There are many possible ways of building recoverability into an interface. An interface endowed with a high degree of recoverability may include undo facilities, history lists of the most recent actions performed, hypertext links that enable the end-user to jump back to the relevant portion of the problem-solving situation, and direct-manipulation interfaces that enable end-users to select objects of the domain and easily modify their attribute values.

Granularity. This refers to the ability of the interface to allow the user to request different levels of detail, depending on the situation the user is currently in. Especially in very complex systems composed of multiple components, this capability is crucial to supporting the cognitive limitations of an end-user. Such an end-user can become overwhelmed by the enormous amount of detail required to understand the system. An interface that supports granularity may be one that enables the creation of multi-leveled, hierarchic descriptions of a domain. An end-user can drill down the branches of the hierarchy to obtain more detail, or go up the branches to obtain a higher-level view of the domain. An interface endowed with granularity would enable such a user to change the level of granularity frequently and with ease during a user session.

Multiple interfaces to the same knowledge. Given that different tasks and different users may have different and specific requirements for utilizing knowledge, a system that enables the development of multiple interfaces to the same knowledge would permit greater flexibility. For example, in a real-time process control system, it might be critical for an operator to receive alerts whenever one of the components of the system malfunctions. One interface might employ multimedia, say the use of sound, to alert the user to a potentially dangerous situation. Another user of the system (say the supervisor) may not need these alerts, but would require hourly summaries of the outputs of the system, and the detection of underutilized resources in the system. A different interface and view of the system is obviously required.

3.3.2 Research Examples

Examples of powerful user interfaces are provided in this section to illustrate some of the ways that explanatory power can be increased through interface design choices. Descriptions of such systems abound in the AI/expert systems literature. This section will merely offer a glimpse into some of the ways that researchers have used the interface as a means of enhancing the explanatory power of a system.

An early example of the use of powerful graphical user interfaces in the AI literature is the system called STEAMER (Williams, Hollan, and Stevens, 1981; Hollan, Hutchins, and Weitzman, 1984). STEAMER is a computer-based instructional system used to teach propulsion engineering. Propulsion plants are physically large (approximately one third of the space on a ship) and typically are made up of thousands of complex components (e.g., valves, pumps, tanks, etc.). Operators are needed to control the plant, monitor its operations, and recover from any casualty conditions that might arise. The primary thrust behind STEAMER is to provide a graphics interface to allow students to inspect and manipulate the steam plant, and to teach them how to operate the plant.
STEAMER is an AI application since it is an intelligent tutor that supervises the student's interaction and provides explanations.

A color graphics interface is the focal point of STEAMER. Through this interface, an end-user can view and manipulate the plant at a number of different hierarchical levels and from a number of different perspectives. The views range from high-level, fairly abstract representations of the plant to views of individual subsystems. There are also diagrams specifically constructed to reveal aspects of plant operation not normally available in an actual plant, but which might have explanatory power for understanding some specific aspect of plant operation. Other views display gauge panels, which show sets of gauges as one might actually see on a ship.

There are many ways that the interface of STEAMER contributes to explanatory power. Students using STEAMER are able to see simulated system fluid flows. Fluid levels are shown dynamically and system components are color-coded to indicate state: "The combined use of color and animation allows the student to easily see and understand the state of the simulated plant at any instant" (Williams, Hollan, and Stevens, 1981, p. 86). In addition, the system enables students to inspect components of the steam plant. Students can look inside a control valve or a boiler to observe how fluids and mechanical components are interacting. This quality better enables students to understand the internal mechanisms of STEAMER. Finally, a replay facility permits students to return to critical points in the decision-making process and follow the new consequences, enabling a user to recover from situations he does not want to be in. This encourages the user to explore the plant and test out different scenarios.

OPAL (Musen, Fagan, and Shortliffe, 1988) is a knowledge entry program that was developed to allow oncologists to specify new cancer treatment plans to be executed by an expert system called ONCOCIN. OPAL utilizes an intuitive visual programming environment that permits complex therapy plans to be drawn graphically as a flow chart.

It has long been recognized that knowledge acquisition, or the process by which expertise is transformed into formalisms that can be used by an expert system's inference engine, is problematic and difficult to accomplish—known in the AI literature as the "knowledge acquisition bottleneck" (see, e.g., Zhu and Stillman, 1995). One of the primary causes of the knowledge acquisition bottleneck is the human factor: there are difficulties in expressing expertise verbally as it becomes more and more domain-specific, and many areas of expertise themselves remain poorly understood.

The underlying motivation of OPAL, then, is to alleviate some of the problems associated with knowledge acquisition as it relates to the domain of oncology. By employing an interface endowed with explanatory power, oncologists can encode their knowledge directly without having to rely on knowledge engineers as intermediaries. Under OPAL, the oncologist can draw flow charts, or schemas, which provide high-level graphical descriptions of the complex protocols they use. The oncologist does this by adding icons to the graph, positioning icons appropriately, and drawing connections between them. OPAL then converts these visual representations into a form usable by the ONCOCIN knowledge base.
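To make the general idea of translating a drawn schema into knowledge-base formalisms concrete, the following Python sketch converts a small flow-chart-like protocol graph into rule-like statements. The node names, graph structure, and output format are hypothetical illustrations; they are not the actual representations used by OPAL or ONCOCIN.

    # Hypothetical protocol schema: nodes are treatment steps; edges define ordering.
    schema = {
        "nodes": {
            "start":    {"type": "entry"},
            "chemo_A":  {"type": "drug_block", "cycles": 4},
            "evaluate": {"type": "decision", "test": "tumor responding?"},
            "chemo_B":  {"type": "drug_block", "cycles": 2},
            "stop":     {"type": "exit"},
        },
        "edges": [
            ("start", "chemo_A", None),
            ("chemo_A", "evaluate", None),
            ("evaluate", "chemo_A", "responding"),      # repeat the block if responding
            ("evaluate", "chemo_B", "not responding"),  # otherwise switch blocks
            ("chemo_B", "stop", None),
        ],
    }

    def schema_to_rules(schema):
        """Translate each drawn connection into a simple IF-THEN statement."""
        rules = []
        for source, target, condition in schema["edges"]:
            clause = f" AND {condition}" if condition else ""
            rules.append(f"IF completed({source}){clause} THEN begin({target})")
        return rules

    for rule in schema_to_rules(schema):
        print(rule)

The point of the sketch is simply that a direct-manipulation drawing (icons and connections) carries enough structure to be compiled mechanically into the kind of formalism an inference engine can use, which is precisely what relieves the domain expert of working through a knowledge engineer.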
More recently, Gaines and Shaw (1999) have argued for embedding symbolic models in documents to annotate and explain them. As in OPAL, they hope that the development of such visual models is simple enough so that there is no longer as great a need for knowledge engineers to encode expertise into a knowledge base. Moreover, the visual presentation of such knowledge is easier for the nonprogrammer to understand than the formalisms used by expert systems.

GUIDON-WATCH (Richer and Clancey, 1985) is a graphic interface to NEOMYCIN (Clancey, 1983) that uses multiple windows, which allow an end-user to browse through the NEOMYCIN knowledge base and view the reasoning process during a consultation. Different windows in GUIDON-WATCH display different aspects of the knowledge base of NEOMYCIN. For example, in the taxonomy window, a user can obtain a hierarchy of disorders. The causal relations window is a lattice with causal and subtype links between findings and hypotheses.

What is especially interesting about this interface is that a user can also view more dynamic aspects of the reasoning process. Whereas static knowledge includes facts about findings, domain rules, or simple taxonomic knowledge (e.g., meningitis is a subtype of infection), dynamic knowledge is case-specific and refers to information that becomes known only during run-time. In GUIDON-WATCH, dynamic task windows provide users with views of the diagnostic strategy as it is being implemented by the system. A task stack window displays the current stack of task calls in the order in which they were called. Every rule executed, finding, and hypothesis can be selected so that the user can easily get more detailed information on an item of interest. Through this window, a user can obtain the line of reasoning, or path, through which the expert system traverses while making a system determination. A dynamic task tree window displays a graph that depicts the dynamic history of task calls. This allows the user to obtain a structural view of the diagnostic strategy that NEOMYCIN is using.

The vast literature in information visualization (e.g., Gershon and Eick, 1997) can also be used as a source of information about ways to design user interfaces so that they exploit human perceptual systems. Robertson, Mackinlay, and Card (1991, 1993) describe the use of cone trees to visualize hierarchical structures in 3-D space. 3-D hierarchies, they contend, can maximize the effective use of available screen space, and therefore enable visualization of the entire structure. Other researchers have investigated the use of interactive animations that can reduce the cognitive load of users. Such interfaces have been argued to be beneficial in complex domains since "they engage special inherently graphical reasoning processes that humans have" (Furnas, 1991). One interesting twist to interface design recognizes a human's peripheral or background processing capabilities. Ishii et al. (1998) of the MIT Media Lab looked at what they refer to as "ambient media", or the various subtle displays of light, sound, and movement that can be monitored in the background, rather than consuming our full attentional cognitive resources.
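The dynamic task windows of GUIDON-WATCH described above suggest a simple implementation idea: record every task call as it starts and finishes, and let the interface render both the current stack and the accumulated history. The following Python sketch illustrates the idea only; the task names are hypothetical and this is not NEOMYCIN's actual task machinery.

    class TaskTracker:
        """Records task calls so an interface can display both the current
        task stack and the full dynamic history of calls."""
        def __init__(self):
            self.stack = []    # tasks currently in progress, innermost last
            self.history = []  # every (event, task) pair, in order

        def push(self, task):
            self.stack.append(task)
            self.history.append(("start", task))

        def pop(self):
            task = self.stack.pop()
            self.history.append(("finish", task))
            return task

    tracker = TaskTracker()
    tracker.push("establish-hypothesis-space")
    tracker.push("group-and-differentiate")
    tracker.push("test-hypothesis: meningitis")
    tracker.pop()

    print("Task stack window:", tracker.stack)            # the current line of reasoning
    print("Dynamic task tree window:", tracker.history)   # the history of task calls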
3.3.3 Empirical Studies on Expert System Interfaces

While there are many examples in the AI/expert systems literature of prototype systems and discussions about their interface features, there is a dearth of literature on actual empirical studies that investigate the effects of interface characteristics and presentation formats of explanation content on problem-solving performance. One experimental study conducted by Rook (1990), however, falls squarely into this category of empirical studies.

Rook (1990; Rook and Donnell, 1993) investigated the presentation format of inference explanations used in an expert system. He hypothesized that graphic-based explanations, as opposed to textual rule traces, would increase performance in understanding a rule-based expert system. The experiment employed a 3 x 2 factorial design. One factor was the user mental model, purportedly developed as a result of training prior to the problem-solving tasks: Group 1 developed a mental model that was formed by a graphic-based conceptual description of the expert system's inference processes. Group 2 developed a mental model based on a text-based description of the inference processes. Group 3 was a control group that received no prior training on the expert system. The second factor was the manipulation of the way that the inference explanation was presented. Level 1 was a simple textual rule trace. Level 2 was a graphic inference net that depicted relationships between expert system nodes. The user mental model factor was between-subjects, while the inference explanation format was a within-subjects factor, with all subjects receiving both types of explanations.

The graphic inference net provides a structural view of the knowledge base of the expert system. In the particular application used in this experiment, it consists of 45 nodes and 84 links between the nodes. In Figure 3.1, the structure of the knowledge base is depicted graphically. Each of the circles in the figure represents a state in the network. The links between nodes represent the rules that combine several nodes into one. A solid circle indicates that the rule is disjunctive (an OR node), while an open circle represents a conjunctive rule (an AND node). When a node is selected, information about the node is provided (both its children and its nearest sibling nodes are formatted into a graphical explanation). See Figure 3.2 for a sample explanation of a node, and compare this to the textual rule trace, also given in the figure. Note that the text-based and graphic-based explanations are equivalent in terms of the informational content that they provide.

Each subject interacted with the expert system on a total of eight problems; for four problems they received the graphic-based explanation, and for four problems they received the text-based explanation. Each problem-solving task required a subject to identify how the expert system reached certain conclusions. The primary task was to isolate and identify nodes or states that contributed most significantly to the expert system's conclusions.
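Before turning to the figures and results, it is worth seeing how little machinery separates the two formats. The sketch below uses the node names and certainty values from the example in Figure 3.2; the data structure and the two rendering functions are illustrative only, not Rook and Donnell's implementation. Both renderings are produced from the same underlying rule, which is what makes the two formats informationally equivalent.

    # Node names and certainty factors follow the example in Figure 3.2.
    rule = {
        "conclusion": ("NEUTRON RODS DEFECTIVE", 0.32),
        "type": "AND",       # conjunctive rule; a disjunctive rule would be an OR node
        "strength": 0.8,
        "antecedents": [
            ("BORON INSERTION DEFECTIVE", 0.4),
            ("CONDENSER VACUUM FAULTY", 0.7),
            ("TURBINE CONTROL INADEQUATE", 0.6),
        ],
    }

    def text_rule_trace(rule):
        """Render the rule as a textual trace (the Level 1 format)."""
        name, cf = rule["conclusion"]
        body = f" {rule['type']} ".join(f"{n} ({c})" for n, c in rule["antecedents"])
        return f"IF {body} THEN ({rule['strength']}) {name} ({cf})"

    def inference_net_links(rule):
        """Return the child-to-parent links a graphic inference net would draw."""
        parent, _ = rule["conclusion"]
        return [(child, parent) for child, _ in rule["antecedents"]]

    print(text_rule_trace(rule))
    print(inference_net_links(rule))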
[Figure 3.1 Graphic Inference Network, from Rook and Donnell (1993). Open circles denote conjunctive (AND) nodes; solid circles denote disjunctive (OR) nodes.]

[Figure 3.2 A Comparison Between Graphic-based and Text-based Explanation Formats, from Rook and Donnell (1993). The text-based format reads: IF BORON INSERTION DEFECTIVE (.4) AND CONDENSER VACUUM FAULTY (.7) AND TURBINE CONTROL INADEQUATE (.6) THEN (.8) NEUTRON RODS DEFECTIVE (.32).]

The hypothesis that the graphic explanation displays would increase performance was not supported. However, Rook did find a significant interaction effect between user mental model and inference explanation format. In particular, graphic displays led to high performance for graphic-based mental model subjects, but poor performance for both text-based and control group subjects.

Another study, conducted by Mao (1995; Mao, Benbasat, and Dhaliwal, 1996), investigated the effects of hypertext-accessible explanations in an ESS. Having enhanced accessibility to deep explanations via the use of hypertext significantly increased the number of deep explanations requested, which led to higher knowledge transfer from the system to the user, especially for novice users. Hypertext reduced cognitive effort, and hence, subjects were better able to access and assimilate the explanations.

3.4 Advisory Strategies

The third type of enhancement to explanatory power considered here is the set of strategies employed to deliver the advice to the end-user. Once explanatory content has been created, and the appropriate interface type and interface features selected, a designer of an intelligent interface may also utilize an appropriate advisory strategy to enhance the system's effectiveness. Selection of an inappropriate strategy may result in sub-optimal usage of the explanations that the system offers: end-users may not bother to request the explanations at all, or the explanations, when requested, may not be usable or useful at a given point during a user's interaction with the system. This section will consider different types of advisory strategies, and when and how they may be employed to enhance a system's effectiveness. Finally, system restrictiveness, a quality of the system related to the advisory strategy, is defined and examined in more detail.

3.4.1 Types of Strategies

What are the different ways in which advice can be delivered to the end-user? This section considers a variety of strategies that an intelligent system may employ. Specifically, a system may consider a variety of strategies that address the following questions: At what point during a consultation should advice be presented? Should advice be user-invoked or automatically provided by the system? How much advice should be given? Should the system allow for the use of "special" modes of operation that protect the system from unintended consequences?

The timing of the advice is the first type of strategy that will be considered. Under this category of advisory strategies, one must consider timing issues of advice delivery. Dhaliwal and Benbasat (1996) make a distinction between feedforward advice and feedback advice, using the cognitive feedback paradigm (Todd and Hammond, 1956). Under this paradigm, there are three differences between feedforward and feedback: temporal order, cues focused upon, and case specificity.
In terms of temporal order, feedforward is always presented prior to task completion, while feedback is presented after task completion. In terms of cues focused upon, feedforward focuses on input cues, whereas feedback is advice related to outcomes. Finally, in terms of case specificity, feedback is case-specific since it provides specific advice regarding the outcome (or recommendations) that the expert system makes, while feedforward tends to be more generic to the task at hand. Some researchers have chosen to think of feedforward as non-case-specific training provided prior to task performance. Case specificity is a content issue more than an issue of timing, but it is useful to note how the timing of the advice can (and should) affect its content.

As Dhaliwal and Benbasat (1996) note, there is evidence in the literature that both feedforward and feedback help in fostering learning. Feedforward can help the user better understand the task requirements during problem solving, while feedback can clarify the conclusions a system reaches, and therefore can lead to improvements in judgmental accuracy. A study by Lewis and Anderson (1985) demonstrated the efficacy of feedback advice. They looked at people learning to play a computer game and found that feedback advice delivered immediately after the user error occurred was the most effective type of advice. In addition, they found that such users should also "see" the consequences of the errors they would have made before being allowed to go on. Dhaliwal (1993) also found that the use of Why explanations provided as feedback improved the accuracy of judgmental decision-making in his study on explanations use in a financial analysis task using an expert system.

The provision mechanism by which the advice is delivered is another type of strategy that can affect explanations usage. Two types of provision mechanism are considered. User-invoked explanations are explicitly requested by the user, whereas automatic explanations are not under user control, but are automatically provided as determined by the system (Gregor and Benbasat, 1999). Gregor and Benbasat remark that automatic explanations, as opposed to user-invoked explanations, will be attended to more since the cognitive effort to use them is low. This distinction corresponds to the active vs. passive distinction, which Fischer, Lemke, and Schwab (1985) employ in their research on help systems: active help systems interrupt the user's actions, while passive help systems wait until the user explicitly requests advice.

Moffit (1989, 1994) conducted an experiment on the effectiveness of user-invoked vs. automatic explanations provision. She called the automatic explanations "embedded-text" explanations, since these explanations were embedded within the interface dialogue, and hence, the end-user would automatically see them. Her experimental evaluation sought to discover which explanations provision mechanism could enhance learning the most when using a production-oriented scheduling expert system. Subjects were randomly assigned to one of four treatments: no explanation; user-invoked, rule-trace facility; user-invoked, canned-text facility; and embedded text (automatic provision).

Both declarative and procedural knowledge were tested and measured. The embedded-text treatment appeared to offer the greatest advantage in terms of these measurements of learning.
This result led Moffit to conclude that the more difficult it was to access the explanations (i.e., user-invoked), the more the subjects perceived the expert system as a separate computerized tool—as opposed to a natural part of the human-computer interaction—and they, therefore, became less aware of its informational value.

Controlling the amount of advice is another way that a system designer may wish to affect explanations use. Having enhanced explanatory content is generally considered a good thing, and the more the better. However, there is a body of research that suggests just the opposite (see Carroll and McKendree, 1988, for a summary): having advice could actually distract a user whose goal is something other than learning. Proponents of user discovery of the system, for one, argue that advice should be provided only when the user explicitly requests it, countering the design recommendation that Moffit's study seems to suggest—that explanations should be embedded within the dialogue. As Carroll and McKendree (1988) observe, "the discovery approach takes advantage of opportunistic learning, that is, making the most of each unique personal experience" (p. 23). Many researchers seem to be in agreement with the discovery learning approach. Brown, Burton, and de Kleer (1982) argue that providing large amounts of advice is often quite deleterious, and that it is often better to leave the user alone, especially if the problem-solving task is small. They also suggest that no advice be provided at all if the user gets too far off track, the implication being that it is unclear what sort of advice can help a user in such a situation.

One empirical investigation shows that providing too much advice can be harmful. Carroll and Kay (1985) tested four versions of a system designed to teach the basics of word processing. The control version gave no advice at all. The prompting advice version told the user exactly what to do in the current system state to avoid making errors. The feedback advice version told the user how to recover when an error had been committed. The fourth and final version gave both prompting advice and feedback advice. The subjects who were given the final version, containing the most explanatory advice, performed the most poorly in the transfer of learning tasks.

Finally, the use of special modes of system operation is considered as another class of advisory strategies. Carroll and McKendree (1988) distinguish between two special modes of operation. Control blocking means that a portion, or subset, of the system's functions is made inaccessible to the user to prevent their accidental usage. One example of this type of strategy is the training wheels approach, in which a portion of the system is rendered inaccessible to novice users, who often access advanced functions by mistake, and then become distracted and confused by the consequences (Carroll and McKendree, 1988). Carroll and Carrithers (1984) showed that such an approach could lead to more efficient learning of a word-processing application. Another type of special mode is a protected mode in which a user action is protected from harmful consequences. One type of protected mode is a reconnoiter mode in which the actions of system commands are simulated without actually affecting the system's data (Jagodzinsky, 1983). A user of this system can switch to reconnoiter mode and try different things out, without fear of destroying or damaging system data, and then return to normal mode.
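As an illustration of how a reconnoiter mode might be implemented, the following Python sketch stages command effects against a scratch copy of the data and only commits them when the user explicitly chooses to. The command and data names are hypothetical; this is a sketch of the general idea, not Jagodzinsky's actual design.

    import copy

    class ReconnoiterSession:
        """Simulates commands against a scratch copy of the data; nothing
        touches the real data unless the user explicitly commits."""
        def __init__(self, data):
            self.committed = data               # the system's real data
            self.scratch = copy.deepcopy(data)  # protected working copy

        def execute(self, command):
            command(self.scratch)               # effects are visible to the user

        def commit(self):                       # return to normal mode, keep changes
            self.committed.clear()
            self.committed.update(copy.deepcopy(self.scratch))

        def discard(self):                      # back out of everything tried so far
            self.scratch = copy.deepcopy(self.committed)

    # Hypothetical usage: try out a destructive change without risk.
    session = ReconnoiterSession({"budget": 1000, "staff": 12})
    session.execute(lambda d: d.update(staff=0))  # "what if we cut all staff?"
    print(session.scratch["staff"], session.committed["staff"])  # 0 12
    session.discard()                             # the real data was never at risk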
Such a mode of system operation can encourage the user to seek out exploratory and creative ways of using the system.

3.4.2 Enhancing Explanatory Power via the Use of Advisory Strategies: A Summary

The preceding section considers a number of ways that explanatory power may be enhanced through the appropriate selection of an advisory strategy. Following is a summary of advisory strategy choices that a system designer may employ to increase system effectiveness. This framework is not meant to be an exhaustive listing of possibilities in this area.

Timing of the advice:
• Appropriate use of feedforward and feedback should be made in the delivery of explanatory content
• Feedback should be provided immediately after a system error has occurred (not delayed)

Provision mechanism:
• Automatic explanations should be provided if the system designer wishes the end-user to use them, since the cognitive effort required to use them is low
• User-invoked explanations should be provided if the system designer does not want to force explanations on the end-user; rather, they are requested at the end-user's discretion

Amount of advice:
• The appropriate amount of advice should be delivered to the end-user: too little advice means the interface lacks explanatory content; too much advice may be confusing and distracting for the end-user
• Excessive amounts of advice may hamper discovery learning

Special modes:
• Control blocking may be used to limit an end-user's access to system functions that may be confusing and distracting to use
• Protected modes may be implemented to promote user discovery and creative uses of information systems

The selection of appropriate advisory strategies to enhance explanatory power is not an exact science but more an art of good systems design. Tradeoffs need to be considered and the consequences of advisory strategies should be determined before an appropriate strategy is selected.

3.4.3 System Restrictiveness

Four types of advisory strategies were considered in the preceding sections. Now, attention will be drawn to a feature known as system restrictiveness. This section will define what system restrictiveness is and how it relates to the four types of advisory strategies. System restrictiveness is a construct that will be used in the second experimental study (see Chapters 10 and 11), so it is important that this construct be elaborated on and developed in this section.

[Figure 3.3 System Restrictiveness as a Subset of All Possible Processes, adapted from Silver (1991)]

Silver (1986, 1991) provides the most comprehensive framework on system restrictiveness, especially as it relates to the use of decision support systems (DSS). He defines system restrictiveness as "the degree to which, and the manner in which, a DSS limits its users' decision-making processes to a subset of all possible processes" (1991, p. 115). If we view system restrictiveness as a continuum, more restrictiveness means that this subset is reduced and less restrictiveness means that this subset is enlarged. See Figure 3.3.

To understand this in more concrete terms, Silver (1991) looks at the components of a DSS and how they are affected by system restrictiveness. An end-user view of a DSS consists of four principal components:
• operators
• navigational aids
• adaptors
• sequencing rules

Users invoke operators to access the system's functional capabilities.
For example, in a spreadsheet application, commands such as "sort", "sum", "multiply", "filter", and "move" are all operators. To perform their functions, operators require inputs such as data, parameter values, and control values. Navigational aids serve the purpose of steering a course of action through the DSS. A complex DSS may present the user with many operators from which to choose, so determining what to do next may be very difficult. Some navigational aids produce suggested courses of action, while others may offer relevant information without actually making a recommendation. Adaptors are defined as "the components users invoke to modify existing operators and navigational aids or to create new ones" (1991, p. 103). These components contribute to a system's adaptability or customizability. Silver provides a few examples: a user modifies an operator to measure angles in radians rather than in degrees; a user modifies the calculation of standard deviation so that it changes from "population" standard deviation (n in the denominator) to "sample" standard deviation (n-1 in the denominator). Finally, sequencing rules determine which operators, navigational aids, and adaptors are allowed to be invoked at any point during a user session. These can be thought of as the rules that govern which sequences of actions the user is allowed and not allowed to take. Unlike operators, navigational aids, and adaptors, sequencing rules are not user-invoked but operate on the system automatically and in the background.

How these four components appear to a user in a DSS can determine the degree of system restrictiveness. In particular, a DSS can constrict decision-making processes by restricting (Silver, 1991, p. 126):
• the set of operators
• the inputs to the operators (data, parameters, control values)
• the outputs from the operators (e.g., representations)
• the sequencing of operators
• the set of adaptors

For example, by limiting the set of operators, a user may be compelled to perform a particular problem-solving technique. By limiting the sequencing of operators (via sequencing rules), the operators can only be invoked in a pre-determined order. By limiting the set of adaptors, the system loses customizability. In all these cases, the end-user interaction loses flexibility and develops more rigidity. Interestingly enough, navigational aids, Silver contends, do not play a role in determining system restrictiveness. This is a subtle point. Since navigational aids are user-invoked, they influence decision-making processes by informing and suggesting, not by dictating. Hence, they guide; they do not restrict.
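To make Silver's components concrete, the following Python sketch shows how sequencing rules, operating automatically in the background, can limit which operators a user may invoke at any point in a session. The operator names and rules are hypothetical; the sketch illustrates the general distinction between a more restrictive and a less restrictive configuration, not any particular DSS (and not the systems built for the experiments reported later in this thesis).

    class DSSSession:
        """Operators are user-invoked; sequencing rules run automatically in the
        background and determine which operators may be invoked next."""
        def __init__(self, operators, sequencing_rules):
            self.operators = operators     # name -> function implementing the operator
            self.rules = sequencing_rules  # name -> list of prerequisite operator names
            self.invoked = []              # history of operator calls this session

        def allowed(self, name):
            return all(p in self.invoked for p in self.rules.get(name, []))

        def invoke(self, name):
            if not self.allowed(name):
                raise PermissionError(f"'{name}' is not allowed yet (sequencing rule)")
            self.invoked.append(name)
            return self.operators[name]()

    # Hypothetical operators and two configurations of sequencing rules.
    operators = {
        "load_data": lambda: "data loaded",
        "sort":      lambda: "data sorted",
        "forecast":  lambda: "forecast produced",
    }
    restrictive_rules = {"sort": ["load_data"], "forecast": ["load_data", "sort"]}
    unrestrictive_rules = {}   # any operator, in any order, at any time

    session = DSSSession(operators, restrictive_rules)
    session.invoke("load_data")
    session.invoke("sort")     # permitted only because load_data was invoked first

Limiting the set of operators themselves would correspond to simply omitting entries from the operators dictionary; navigational aids, by contrast, would only suggest a next operator without ever raising an error.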
At first glance, it may appear that having more system restrictiveness is not a good characteristic for a user interface to possess. A "restrictive" system connotes rigidity and lack of flexibility. Indeed, the trend toward expert support systems that support an end-user, rather than replace him, suggests that highly restrictive systems are to be eschewed in favor of more open-ended and flexible interactions. However, Silver takes the correct and more balanced viewpoint that restrictiveness is a relative quality, and that there are factors favoring greater restrictiveness as well as factors favoring lesser restrictiveness. On the one hand, Silver points out that a system with too little restrictiveness may simply overwhelm its users, providing them with so many different functionalities that they are unable to use the features effectively. Building more restrictiveness into such systems can lead to more structured learning, and result in a system that is easier and more tractable to use. On the other hand, a system with too much restrictiveness may result in users who feel too constrained and uncomfortable with using the prescribed approach to problem-solving. As a result, such users may avoid using the system altogether. Moreover, such systems discourage exploratory learning and creative problem-solving. Silver's arguments are summarized in Table 3.1.

Table 3.1: Factors For and Against System Restrictiveness, adapted from Silver (1991)

FACTORS FAVORING GREATER RESTRICTIVENESS:
• Prescribes the use of normative decision-making techniques
• Proscribes users from following bad decision-making processes
• Lends structure to a decision-making process
• Promotes ease of use and learning
• Fosters structured learning

FACTORS FAVORING LESSER RESTRICTIVENESS:
• Meets unspecified needs: provides a collection of capabilities sufficiently broad that decision makers can choose for themselves which functions to employ in any given situation
• Supports changing decision-making environments: the people, tasks, and settings that constitute a decision-making environment all change with time
• Allows users discretion
• Fosters creativity
• Fosters exploratory learning

3.4.4 Relationship of System Restrictiveness to Advisory Strategies

How does system restrictiveness relate to advisory strategies? System restrictiveness can be used to characterize a particular advisory strategy. Many of the advisory strategies presented in Section 3.4.1 either increase or decrease system restrictiveness. It is worthwhile to look at each of the advisory strategy types and to determine their effect on system restrictiveness.

Timing of the advice. This strategy type is related to the way that the advice is sequenced. More restrictive systems will enact stringent rules on when a particular type of advice is to be delivered. Less restrictive systems will enable end-users to seek out advice at any point during a consultation.

Provision mechanism. User-invoked systems, which allow end-users discretion over when advice is accessed, are a less restrictive way of delivering advice. Automatic systems that force users to look at advice, even if they would choose not to, will result in more system restrictiveness.

Amount of advice. There is no one clear-cut way that the amount of advice will affect system restrictiveness. In some cases, more advice will decrease system restrictiveness, since it may encourage end-users to see many different ways of using the system. Having more advice can increase transparency of the system, which will tend to promote a more open-ended dialogue. However, excessive amounts of advice, as Carroll and McKendree (1988) note, can also have the opposite effect, since it may actually discourage user discovery of the system. Such users may become distracted by the advice.

Special modes. Control blocking, which attempts to limit an end-user's access to system functions, will increase system restrictiveness. On the other hand, protected modes, which attempt to protect end-users from harmful system consequences, will decrease system restrictiveness, since this type of protection is meant to encourage exploratory behavior.

3.4.5 Empirical Studies on System Restrictiveness

Wheeler, Mennecke, and Scudder (1993) conducted a laboratory experiment which manipulated the restrictiveness of a group support system (GSS).
The study utilized a 2X2 factorial design. One factor was the group composition: subjects were categorized as either preferring high procedural order (HPO) or low procedural order (LPO). This determination was based on a questionnaire that assessed a person's preference for small group work climate. The second factor was system restrictiveness: groups in the restrictive condition received a GSS that provided a detailed, multilevel agenda on their screens. Each group was restricted to follow the sequence of activities under the agenda.  82  In addition, a facilitator provided restrictive comments when the group stepped outside of this agenda. Groups in the non-restrictive condition did not receive such an agenda on their screens, but were allowed to request any of the tools (or none at all), and to proceed in any manner they chose to. Experimental results revealed that non-restrictive groups performed better in terms of higher decision quality for solving problems in the case. In addition, LPO groups also performed significantly better than HPO groups. In terms of satisfaction with the group process, the hypothesis that HPO group members would be more satisfied with the decision process in the restrictive environment and that LPO group members would be more satisfied with the decision process in the nonrestrictive environment was not supported.  The key finding in this study, somewhat surprising, is that increased restrictiveness of the GSS did not lead to improved decision quality, a finding that contradicts the underlying assumption made in many GSS that structuring group interaction enhances performance. In this study, groups left to their own devices yielded superior decisions.  Wheeler and Valacich (1996) conducted another study on restrictiveness of a GSS. This experimental study employed a full 2 X 2 X 2 factorial design, the three factors being facilitation, GSS configuration, and training. Each factor was operationalized at two levels. The facilitation factor was either facilitated or unfacilitated, with facilitated groups receiving restrictive verbal comments that steered a group to use the GSS in a faithful and prescriptive manner. The GSS configuration was provided at two levels. Level 1 received a meeting agenda, and Level 2 did not. Training was manipulated at  83  two levels, with one condition receiving training on how and why to use the activities, sequences, and philosophy of the GSS heuristics, while the other condition did not receive such training.  Results of the data analysis revealed that presence of facilitation, the use of Level 1 GSS, and additional training led to more faithful use of the GSS heuristics. In addition, the presence of more than one of the above also led to more faithful use than any one of the three alone. In terms of decision quality, the statistical analysis revealed that when partitioning the groups based on their degree of faithful or unfaithful appropriation, the more faithful groups demonstrated higher decision quality. Hence, an opposite conclusion was reached in this study from the one conducted by Wheeler, Menecke, and Scudder (1993): more restrictive systems led to more faithful use, which in turn, led to higher decision quality.  Murphy (1990) explored expert system use on the development of expertise among novices in the domain of auditing. While the experimental study he conducted was not about system restrictiveness per se (or was not labeled as such), the results are relevant to this discussion since he investigated unaided vs. 
aided (expert systems) use and its effects on learning. The main independent variable of interest was the decision aid type. Three treatment groups were studied: (1) expert system with explanations, (2) expert system without explanations, and (3) non-automated practice aid. The main proposition tested in this study is that novices who perform decision-making tasks using an expert system will not have the same level of expertise as novices who perform the same task without the use of an expert system. He hypothesized that first-hand knowledge gained through unaided experience would result in better learning than second-hand knowledge obtained through the use of an expert system. In particular, use of the non-automated practice aid would force subjects to search for information on their own, while subjects using expert systems would learn in a more structured environment, in which they are not permitted to direct their own learning. That expert systems lead to more structured problem-solving is a phenomenon explored by Sviokla (1986), who observed that expert systems increase the efficiency of decision-making at the expense of increased rigidity in the task.

Murphy looked at both semantic and episodic memory in assessing learning. The main finding in his study was that novice subjects in the non-automated practice condition appear to have developed a more complete semantic understanding of the domain than the subjects using the expert systems. In a second problem-solving task, he found significant differences between subjects using the non-automated practice aid and subjects using the expert system with explanations, on measurements of both semantic and episodic memory. Both these results suggest that expert system use inhibited the development of expertise and that the provision of explanations, in particular, was not efficacious. A more restrictive environment, then, hampered the development of expertise.

Silver (1988) conducted an experiment in which he showed that system restrictiveness is not an absolute quality, but rather lies "in the eyes of the beholder" (Silver, 1991). Hence, one individual may perceive a system to be excessively restrictive, while another individual, perhaps a novice user, may feel more comfortable having a more restrictive interaction. Silver's (1988) experimental results, in fact, indicate that users' subjective judgments of system restrictiveness often differ from objective restrictiveness.

3.5 A Framework of the Explanatory Power of a User Interface: A Summary of Chapter 3

This section provides an overview of the framework on explanatory power described in Chapter 3. The emphasis in this chapter has been on the three types of enhancements to explanatory power: content, characteristics of the user interface, and advisory strategies. See Figure 3.4 for the framework, which includes both the enhancements and outcomes of explanatory power.

[Figure 3.4: A Framework of the Explanatory Power of a User Interface. Enhancements (Content, User Interface Characteristics, Advisory Strategies) lead to Explanatory Power (Informational Power, Computational Power), which in turn leads to Performance (Learning, Problem-solving).]

The focus of the discussion, up to this point, has been on the left side of the diagram in Figure 3.4—that is, how content, the user interface, and advisory strategies can lead to enhanced explanatory power.
The next chapter, Chapter 4, will discuss some of the theories in the cognitive science and educational psychology literatures to explain how explanatory power can affect learning and problem-solving ability.

Two different aspects of explanatory power are included in Figure 3.4: the informational power and computational power of a user interface. These concepts can be used to compare and contrast two different user interfaces. Larkin and Simon (1987), in their discussion on comparing representations, define two representations as informationally equivalent "if all the information in the one is also inferable from the other, and vice versa" (p. 70). Two representations are computationally equivalent "if they are informationally equivalent and, in addition, any inference that can be drawn easily and quickly from the information given explicitly in the one can also be drawn easily and quickly from the information given explicitly in the other, and vice versa" (p. 70).

From these definitions, we derive the notion of informational power, which is strictly related to content alone, while computational power evaluates efficiency and usability as well. Interfaces having greater informational power contain superior content, and interfaces having greater computational power have superior content that can be used and accessed more efficiently and effectively. Two user interfaces are said to be equivalent, in terms of explanatory power, if they are computationally equivalent (which, by Larkin and Simon's definition, means they are informationally equivalent as well).

Let us consider how each of the three determinants of explanatory power can affect informational power and computational power. Content-based enhancements, of the type discussed in Section 3.2.4, strictly speaking, can only increase the informational power of the user interface. Interface-based enhancements of the type discussed in Section 3.3.1, and selection of an appropriate advisory strategy of the type discussed in Section 3.4.1, can also increase the computational power of a user interface. In summary, the explanatory power of a user interface is a more comprehensive construct than explanations content alone, since it also considers gains to be achieved in computational power.

Finally, it is worthwhile to point out that the three determinants of explanatory power—content, user interface, and advisory strategy—often interact with one another in interesting ways to increase system effectiveness. While it is useful to separate out the three determinants individually, in actual practice it is often difficult to speak of one determinant, in isolation, creating more explanatory power. In fact, two determinants frequently work in tandem to create more explanatory power, so that it is often difficult to determine where one determinant ends and the other begins. For example, support for graphical representations in the user interface may allow for the development of hierarchic strategic models that may be more powerful than the specification of strategic knowledge through a text-based interface. In this particular instance, both the interface and the actual content of the system are enhanced. Similarly, the ability to inspect the components of a system may result in an interface that allows for more powerful deep justifications, which would not be possible in a more static user interface.
Still another example would provide for appropriate feedback advice in a timely manner, in which content is dynamically generated and case-specific (an interaction of advisory strategy and content).

Chapter 4 Theoretical Foundations

The theoretical foundations of this research study are covered in this chapter. The goal is to discuss theories that predict how interfaces endowed with greater explanatory power will affect outcomes such as learning and problem-solving performance. The chapter will proceed with this discussion as follows: Section 4.1 will discuss theories of learning, including Mayer's theory of assimilative learning and mental models theories in cognitive psychology. Section 4.2 will discuss theories of diagrammatic reasoning, or how the provision of graphical representations may enhance performance. Section 4.3 will look at Kolodner's theory of expertise, which explores how novices evolve into experts by looking at the development of their episodic memory. In this discussion, the focus will be on how experts learn to deal with exceptional situations. Section 4.4 will shed light on why explanations are requested at all by looking at the production paradox. After having explicated these theories of learning and problem-solving, we will derive the hypotheses to be tested in the experimental studies.

4.1 Theories of Learning

How does the enhanced explanatory power of a user interface affect learning? Schank's (1986) view of explanation as a process of integrating and assimilating new information (see Section 3.1) suggests that explanations serve the role of developing more powerful mental models of the world. In a similar fashion, user interfaces having explanatory power will foster an assimilative learning process in which the user is actively engaged in developing powerful mental models of the system. This section will draw on two bodies of literature to explicate this learning process: assimilative encoding theory and mental models theory.

4.1.1 Theory of Assimilative Learning

Mayer (1979, 1985, 1989) has developed a framework for research on learning from explanative material. Explanative material refers to material that explains how some system works. For example, Mayer (1989) focuses on how learners build models of systems, and how these models help learners to think more systematically when they solve problems. Typically, these models involve an understanding of the structure of the system as well as the underlying mechanism operating on the structure. Figure 4.1 provides the general framework for discussing research on learning and problem-solving performance of explanative material.

[Figure 4.1: A General Framework for Learning, adapted from Mayer (1985). Characteristics of the learning material and learner characteristics feed into the encoding process, which produces a learning outcome; the retrieval process then operates on that outcome to yield recall performance and problem-solving performance.]

Mayer's theory of assimilative learning relates the learning material and learner characteristics (independent variables) to recall performance and problem-solving performance (dependent variables). Three intervening constructs are used to build his theory: encoding process, learning outcome, and retrieval process.

The focus of this section is on the encoding process. This refers to the learning processes, or the ways that learners encode to-be-learned information. Mayer (1975, 1979) discusses three learning theories that describe this process: a one-stage model, a two-stage model, and a three-stage model.
Theory 1 (Reception Theory) is a one-stage model, which posits that performance is a function only of the amount of information that is received by the learner. This theory predicts that the mere presence of more information will improve learning performance. Theory 2 (Addition Theory) is a two-stage model, which predicts that more will be learned if the learner possesses "prerequisite anchoring concepts," or pre-existing knowledge. As in Theory 1, the learner receives the information, and in stage 2, his pre-existing knowledge can influence how much is learned. Finally, Theory 3 (Assimilation Encoding Theory) is the most complete learning theory of the three, since in addition to the two stages of Theory 2, a third stage is added in which new information is actively integrated with pre-existing knowledge. This stage involves integrating anchoring knowledge and new information, and then transferring the new cognitive structure to long-term memory.

Figure 4.2 provides a graphical illustration of the three-stage Assimilation Encoding Theory. The three stages are labeled (a), (b), and (c), with (c) broken down into (c1) and (c2). The three stages are summarized below:

[Figure 4.2: Assimilation Encoding Theory, adapted from Mayer (1979). Information flows from Short-Term Memory (Input) into Working Memory (a); Working Memory exchanges information with Long-Term Memory via (b), (c1), and (c2).]

Reception: Information is received into Working Memory (WM) (a).

Availability: Information is checked against prerequisite anchoring concepts in Long-Term Memory (LTM) (b).

Activation and Assimilation: Anchoring knowledge is transferred from LTM to WM (c1) so that it can be actively integrated with the new information. Once integrated, the new cognitive structure is transferred to LTM (c2).

The addition encoding vs. assimilation encoding distinction made by Mayer is much like the distinction made between rote and meaningful learning by Ausubel (1968), or the differentiation between deep vs. shallow knowledge reasoning proposed by Hollnagel (1987). In addition encoding, individuals merely select factual details and relate them only in arbitrary ways, whereas in assimilation encoding, individuals are actively engaged in integrating and assimilating new knowledge to create deeper models of the domain.

For example, a novice computer operator may issue a simple set of commands (acquired through rote memorization) on a computer terminal to perform a given task, but have no understanding of how to modify these commands under a different set of circumstances. A more experienced operator may have a more integrated model of the computer operating language, which enables him or her to modify these commands to perform under novel conditions.

The learning outcome in Figure 4.1 refers to the content and the structure of the newly acquired information. On the one hand, the learning outcome for addition encoding will be a set of unrelated facts. On the other hand, the learning outcome for assimilation encoding will be an "integrated explanative representation that consists of all the key components and the causal relations among them" (Mayer, 1985, p. 69). The learning outcome for assimilation encoding can be viewed as the user's mental model of the domain. The next section will elaborate on the concept of a learner's mental model and how knowledge is organized and structured as a result of assimilative learning.

The retrieval process in Figure 4.1 refers to searching through memory and generating an answer to a problem.
Numerous experimental manipulations have shown that material that is better assimilated will lead to more efficient retrieval (Anderson, 1990). Hierarchic organizations, for example, allow a person to structure the search of memory and retrieve information more efficiently. The experiments conducted by Bower et al. (1969) on the free recall of word lists show that subjects were much better at retrieval when they were given a complete presentation of conceptual hierarchies as opposed to a random ordering of the same words.

Assimilative vs. arbitrary learning may result in different kinds of test performance (Mayer, 1985). For recall, subjects who have integrated learning outcomes should perform better on recall of explanative information, whereas subjects who have arbitrary learning outcomes should perform best on recall of isolated facts. For problem-solving tests, subjects who have assimilated learning outcomes should perform better on creative, or far transfer, problem solving, while those with arbitrary learning outcomes should excel in problem-solving tasks that are nearly identical to those for which they were trained.

4.1.2 Mental Models Theory

As previously noted, the learning outcome of the assimilation encoding process is the learner's mental model, or internal representation of the system. Individuals who engage in assimilation encoding are more likely to build more powerful mental models, which enable them to generate solutions to far transfer problem-solving tasks.

The mental model is a concept that has been studied extensively in the cognitive sciences. Johnson-Laird (1983, 1989) wrote the seminal work on mental models, in which he laid the groundwork for much of the work, both theoretical and empirical, on mental models research to date. In this work, he says that mental models "enable individuals to make inferences and predictions, to understand phenomena, to decide what action to take and to control its execution, and above all to experience events by proxy." Another important work in the literature is a collection of papers edited by Gentner and Stevens (1983), which focuses on a user's understanding of dynamic, physical systems. Later survey works in the mental models literature include papers in the human-computer interaction area (Carroll and Olson, 1988) and in man-machine studies (Rouse and Morris, 1986), as well as a paper by Wilson and Rutherford (1989) that reviews and critiques the work on mental models from the dual viewpoints of psychology and human factors.

In the information systems field, mental models will usually be used in the more limited sense of an internal representation of knowledge or expertise about a system. In this sense, there is a surprising similarity and overlap in the definitions that various researchers offer. Carroll and Olson (1988) define a mental model as "knowledge of how the system works, what its components are, how they are related, what the internal processes are, and how they affect the components" (p. 47). They differentiate a user possessing a mental model from one following a simple sequence of actions based on rote memorization but without a deep understanding of a system. Williams, Hollan, and Stevens (1983) offer a similar definition: "A mental model is a collection of 'connected' autonomous objects. Running a mental model corresponds to modifying the parameters of the model by propagating information using the internal rules and specified topology."
Both definitions utilize the notion of how the components of a complex system are interconnected, or how they are related to one another—i.e., the topological or structural information.

Important in the Williams, Hollan, and Stevens definition is the notion of the ability to run a mental model. A mental model is not a static structure, but a generalizable one that is modified and used adaptively for each specific situation. Parameter passing and propagation are the mechanisms that allow a mental model to be used in an adaptable manner. The prediction of future states, or the prediction of how a system will respond under novel circumstances, may be implemented through these mechanisms. The retrieval process is the step in Mayer's Theory of Assimilative Learning where a user searches through long-term memory for the appropriate mental model, and generates a solution to a problem by running the mental model.

Norman (1983) makes an important distinction between mental models and conceptual models. A mental model is the user's model of a target system. It is a naturally evolving model that develops through interaction with the system. As a person develops more expertise with the system, the model becomes more refined over time. Hence, at any given point in time, the mental model of a system is a dynamic, usually incomplete specification of the target system—unless the user is an expert and already has a complete understanding of the system. A conceptual model, on the other hand, is typically the designer's conceptualization of the target system. It is invented to provide an accurate, consistent, and complete representation of the target system. Ideally, the user's mental model of a system should be in accord with the designer's conceptual model, otherwise the potential for user error and misuse of the system is great.

How does a mental model promote understanding and problem-solving capability? It does so by organizing knowledge in some meaningful way. Without a meaningful representation of knowledge, the human information processing system would be incapable of making sense of a complex system. An effective organization allows one to comprehend complexity and utilize the knowledge, in particular, in far transfer problem-solving tasks.

The concept of interconnectedness is important to understanding how knowledge is organized. Mayer (1975) makes a useful distinction between internal connections and external connections. Internal connections are the links between one component of a system and another component (i.e., an understanding of the structure of a system). External connections refer to the links between new, to-be-learned material and pre-existing knowledge. Both types of connections are important to the way we assimilate new information. The mental models literature contains a variety of studies showing how learners who are able to make both types of assimilative connections are able to become more effective problem-solvers.

The most concrete examples of mental models that provide a specification of the internal connections of a complex system are the mechanistic mental models, also known as device models. These are models of actual physical devices. Such models can explain the intrinsic mechanism of a complex device, such as a piece of machinery, which is built up from combinations of simpler components (de Kleer and Brown, 1983).
It is through these models that we are able to understand the behavior of devices and predict their behaviors under a novel set of circumstances.

Several experimental manipulations have shown that the provision of a device model can facilitate learning and problem-solving. The Kieras and Bovair studies (1984) empirically tested the hypothesis that being provided with a device model of a system would facilitate problem solving. The device itself was presented on a computer screen. Two groups of participants were tested on procedures for operating the control panel: the device model group was provided with knowledge of how the components of the device were connected to one another (both graphical and textual descriptions) and what processes acted on the components; the control group received no model training. Both groups were trained on two kinds of procedures for operating the device, normal and malfunctioning procedures. Four of the procedures were designed to be inefficient.

As predicted by the theory, the device model group outperformed the control group on a variety of performance metrics. First, participants in the model group learned the procedures faster. Second, long-term retention of the material (participants were given a memory test one week later) was superior for the device model group. Third, participants in the device model group short-cut the inefficient procedures four times more often than their control group counterparts.

A device more familiar to us, the pocket calculator, has also undergone a series of studies to investigate the role of mental models in its use (Young, 1981; Halasz and Moran, 1983; Bayman and Mayer, 1984). For example, problem solving using the Reverse Polish Notation, stack-based calculator was studied by Halasz and Moran (1983). The stack in one version of this calculator consists of four registers, X, Y, Z, and T. When a user hits ENTER, the prior contents of X are copied into Y, Y is copied into Z, Z is copied into T, and the contents of T are lost. Two groups of participants were studied: the model group was provided with the stack model mechanism together with a set of step-by-step procedures for solving typical problems; the control group was provided with the same step-by-step procedures given to the model group, but was not given a description of the stack mechanism. Halasz and Moran found that the model had little effect in solving routine problems for which both groups were given explicit training. However, in far-transfer problem-solving situations, the model group decisively outperformed the control group.

Whereas internal connections focus on the interconnections among the components of a complex system, retaining the original structure of the material, external connections attempt to draw on a person's prior knowledge to understand a new target system. For example, learners often try to assimilate new and difficult-to-understand material by drawing connections to a model of something they already know, or a model that is more familiar and tangible for them. Such external connections are referred to as reasoning by analogical representations.

When we use analogical representations to understand, we make inferences about how properties of something known can be applied to something unknown (Carroll and Mack, 1985). Hence, we make use of two domains of knowledge, a base domain (the domain already known to the learner) and a target domain (the to-be-learned domain).
In a classic experiment in the mental models literature, Gentner and Gentner (1983) discuss electricity in analogical terms and compare it to a hydraulic (water) system, which is a domain more concrete and familiar to learners. Gentner and Gentner (1983) empirically investigate how the water model can be used to make electricity concepts more understandable for learners.

The water system analogy is a useful one to make since several mappings (i.e., external connections) can be made between the two domains: water flows through the pipes of a hydraulic system just as electricity flows through the wires of an electrical system; volts is the term for electrical "pressure"; milliamperes is the term for electrical "volume." One insight derivable from this analogy, which might be hard for a novice to understand without the analogy, is the distinction between electrical current and electrical pressure. Electrical rate of flow (current) is analogous to the amount of water that passes a point per unit of time; electrical pressure (voltage) is analogous to the force exerted by the water per unit area. As Gentner and Gentner point out, novices in electricity often fail to differentiate between the two concepts—they seem to merge the two concepts into a generalized strength concept. Hence, an analogy can serve to make more salient the differences between concepts that may seem similar to people unfamiliar with a domain.

In short, the important point to remember in drawing external connections is that it may be more effective to present a learner with material that he or she is more familiar with, and with material that is conveyed in an idiom more familiar to the learner. By doing so, the learner will have an easier time assimilating the new information. In terms of interface design and the understanding of computer systems, this means that familiar and concrete objects might be used to foster the development of an appropriate mental model of the interface. Hence, a direct manipulation interface that requires a learner to drag a document to a trash can icon may be more effective than an arcane text-based command that has no natural mapping to a learner's prior knowledge structures. Interfaces that are consistent with these natural mappings are more likely to lead to assimilative encoding and learning.

4.1.3 Relationship of Assimilative Learning Theories to Explanatory Power of a User Interface
The preceding discussion has drawn from both theories of assimilative learning and mental models. This section will attempt to bring these areas together to develop a more integrative research framework for studying the explanatory power of user interfaces.

We return, once again, to Mayer's Framework for Assimilative Learning (see Figure 4.1), which will serve as a foundation for the development of the research framework. The Characteristics of Learning Material correspond to the explanatory power of the user interface itself or, in terms of the framework developed in Chapter 3 (see Figure 3.4), to the content, user interface characteristics, and advisory strategies that a system designer employs to enhance explanatory power.

Two types of encoding processes were discussed: a two-stage addition encoding process and a three-stage assimilative encoding process. Our framework predicts that user interfaces having a higher degree of explanatory power will foster an assimilative encoding process for problem-solving tasks in which a deep understanding of the system
This prediction follows naturally from Schank's view of explanation as a process of assimilating and reorganizing knowledge so that it makes to-be-learned material more meaningful. Users interacting with interfaces having.less explanatory power will merely engage in addition encoding..  Because the interfaces endowed with greater explanatory power are believed to foster assimilative learning, the learning outcome will be a user's mental model that is well integrated and better organized. This means, for example, that a user will understand how components of a system are interconnected to one another. Moreover, causal relationships within the system, as well as processes acting on the components of the system can be better understood, since an interface with a high degree of explanatory power can make these interconnections and relationships more transparent and explicit to the end-user.  Finally, Mayer's framework suggests that a more assimilated learning outcome or mental model, will lead to a more efficient retrieval process, which in turn will result in enhanced recall and problem-solving performance. Mental models theory suggests, in particular, that end-users who possess well-developed mental models will be superior problem-solvers especially in creative, or far transfer tasks. Such end-users will be able to perform novel tasks for which they have not been given explicit training. They may "run" their mental models so that they are generalizable to the new context or task. In addition to far transfer tasks, Mayer (1985) also looks at recall of conceptual information,  102  and retention of the information in nonverbatim format as two other performance indicators of assimilative learning.  Explanatory Power  Assimilation - Encoding  w  Mental Model  w  Efficient Retrieval  w  Performance Outcomes  w  Figure 4.3: Assimilative Learning, as it Relates to Explanatory Power  A summary of the framework for understanding how explanatory power affects learning, and in turn performance, is provided in Figure 4.3. Note that this framework is an elaboration of the framework illustrated in Figure 3.4 in the previous chapter.  4.2 Diagrammatic Reasoning The model-based interfaces utilized in this research make use of graphical object models, which an end-user is expected to explore to better understand the system. This section reports on the theoretical foundations of diagrammatic reasoning, and the advantages that are to be gained by interacting with a graphical representation of a system, as opposed to a textual representation of the same system.  These theories are particularly relevant for  Expert Strategy (see Chapter 5), in which hierarchic models have been created for endusers to better understand expert system rule bases.  103  4.2.1 Larkin and Simon's Theory of Diagrammatic Reasoning In section 3.4.6, a distinction was made between informational power and computational power of user interfaces. Informational power concerns the actual content that a user interface offers. Computational power, in addition to informational content, is about how quickly and easily inferences can be drawn from the information provided in a user interface. Thus, computational power is a quality related to efficiency and the ease with which pertinent information can be accessed and applied.  Larkin and Simon (1987) define a representation as consisting of both and the  data structures  programs operating on them to make new inferences. 
The data structures are the node-link structures, e.g., schemas employing attribute-value pairs. They note: "Such structures have been called variously list structures, colored directed graphs, scripts, and frames. The differences, when there are any, are inconsequential for our purposes" (p. 71). Programs are the instructions that act on a data structure to perform problem-solving tasks, and may be represented as production systems (see, e.g., Newell and Simon, 1972; Anderson, 1987), in which each instruction is represented as an IF-THEN rule, C -> A, where C are the conditions and A are the associated actions that are performed when C is true. The conditions are tests on portions of the data structure—when they are met, the actions are taken.

Two types of representations are defined and differentiated by Larkin and Simon. A sentential representation is "a data structure in which elements appear in a single sequence" (p. 72). In contrast, a diagrammatic representation is "a data structure in which information is indexed by two-dimensional location" (p. 72). Hence, a sentential representation is a list-like structure (a one-dimensional structure), whereas a diagrammatic representation is indexed by location in a plane (a two-dimensional structure). The chief difference between the two representations is that a diagrammatic representation can preserve the spatial and geometric relations among the components of a problem, while a sentential representation cannot.

In Larkin and Simon's view, the advantage of a diagrammatic representation over a sentential representation is not that it contains more information, but that it offers superior computational advantages. To understand how and why diagrams are computationally more powerful, they look at three processes: (1) search; (2) recognition; and (3) inference.

Search operates on the data structure by locating the elements that match the conditions of one or more productions. Larkin and Simon consider how a diagrammatic representation is computationally more efficient in searching. On the one hand, search in a sentential representation requires looking linearly down the list until an element is found. In addition, search may be time-consuming since the several elements needed to match the conditions of the productions may be widely separated. On the other hand, search in a diagrammatic representation can be more efficient since a diagram can group information that is used together in a two-dimensional plane. A human user of a diagram need only focus on that one location, and no searching is required through the remaining data.

Recognition involves "a match between the elements in the data structure and the conditions of the productions in the program" (p. 73). As Larkin and Simon remark, ease of recognition is strongly affected by what is explicit in a representation, and what is only implicit. One of the examples they employ to illustrate this point is a familiar one to any student of an introductory course in economics: the graphical depiction of supply and demand curves (see Figure 4.4). The line D is the demand curve, indicating the quantity that would be purchased at each price. It slopes downward to show that the higher the price, the smaller the quantity that will be demanded. The lines S and S' are supply curves that slope upward to show that the higher the price, the larger the quantity that will be supplied.
From the diagram, we can easily obtain the equilibria of the two supply curves, E and E', from the intersection of the supply and demand curves. We can see how the equilibrium moves from E to E' when the supply curve shifts upward.

Finally, inference is the process by which new, or inferred, elements are added to the data structure. Larkin and Simon point out that inference capability is largely independent of the representation—that the advantages of having one representation over the other are not that great if the two representations are informationally equivalent. Yet, they also note that diagrammatic representations can support a large number of perceptual inferences that are easy for humans, but which are difficult using a sentential representation.

In Figure 4.4, notice how the diagram permits the usage of perceptually obvious features (e.g., slopes to indicate how price and quantity are related, and intersections to determine the equilibrium), which may be employed to support a large number of inferences. For example, we may make inferences regarding how the equilibrium is affected if the demand is more elastic (i.e., the flatter the demand curve). Such inferences would be very difficult to make without the perceptual enhancements that a diagram offers.

Figure 4.4: Supply and Demand Curves from Economics, adapted from Larkin and Simon (1987)

In summary, diagrammatic representations are advantageous over sentential representations in that they offer gains in computational power. These computational gains can be viewed in terms of three processes:

• Search is quicker since related pieces of information can be grouped together in a diagram.
• Recognition is quicker because diagrams make more pieces of information easily identifiable; this same information must be explicitly constructed at a high cognitive cost in a sentential representation.
• Diagrams support a large number of perceptual inferences that are easy for humans to make.

4.2.2 Organizational Factors of Memory: Hierarchic Structures and Other Organizations
One of the purposes of a diagram is to organize information in some meaningful way. The literature on organizational factors in memory is discussed in this section, since the organization of information in a diagram influences how it is learned and recalled by a person. The experimental study on Expert Strategy (see Chapter 7) explores whether the provision of hierarchic structure can enhance the explanatory power of the expert system, so we would like to understand how such an organizational structure could enhance performance.

Bower (1970) reviews some of the research conducted on organizational factors of memory. He reviews an experimental study conducted by Bower, Clark, Lesgold, and Winzenz (1969), in which the influence of structural organization upon the free recall of conceptual word hierarchies was demonstrated. In this experiment, one group of subjects was given a set of hierarchic trees, in which 112 words were conceptually organized. A control group was given the same 112 words, except that the words were randomly presented. The subject's task was to recall as many words as possible, in any order. Over three trials, the subjects having the organized representation recalled about two to three times as many words as did the subjects in the randomized condition.

The explanation for this result is that the organized presentation gave the subjects a systematic retrieval plan.
Such a plan tells an individual where to start, and how to proceed systematically from one unit (of the hierarchy) to the next. In addition, the plan enables an individual to monitor the adequacy of his recall, telling him which parts of the hierarchy have been left out, and which parts of the hierarchy remain.

Another way to view the benefits of hierarchic organizations is to view them as a preferred human learning strategy. As Bower (1969) remarks: "a hierarchy is an extremely familiar and efficient organizational scaffold, encountered throughout life" (p. 41). A human learner, upon being presented with a large amount of material, must find some strategy to learn and recall it effectively. One strategy, which may arise naturally, is to "divide and conquer": to subdivide the material into smaller groups, and then learn these parts as integrated packets of information (Bower, 1969). Such a strategy can be applied recursively, so that information is grouped at many descending levels, thus creating a multi-tiered hierarchic organization. Hence, it may be very natural for a learner to create hierarchies in order to make sense of large amounts of information.

The advantages of hierarchic organizations can also be explained in terms of the inherent limitations of the human information processing system. Simon (1996) and others have described the cognitive limitations of human problem solvers in numerous experimental studies. In particular, the human information processing system operates with a well-known short-term memory bottleneck, which can only hold a few chunks of information at a time. Human subjects, while engaged in problem-solving tasks in complex domains, have been found to get into trouble because they forget where they are, what assignments of variables in the problem have already been made, and "what assumptions are implicit in the assignments they have made conditionally" (Simon, 1996, p. 63). These difficulties can be ascribed to limitations of short-term memory, as well as the relatively long time (8 seconds) required to move items from the limited short-term memory to the large-scale long-term memory. Moreover, Simon (1996) has also argued that the human information processor is largely serial in nature—it is only capable of processing a few symbols at a time. Although this claim has been challenged in recent years by the parallel, connectionist models of cognition (see, e.g., Rumelhart and McClelland, 1986), Simon argues that parallel processing occurs primarily in the sensory organs (especially eyes and ears); however, after the stimuli have been recognized, the limitations of short-term memory impose seriality during the subsequent stages of information processing. Given these limitations on short-term memory and the serial processing of information, hierarchic organizations may be employed by the human information processing system in order to retrieve large amounts of material more systematically. Since information cannot be received en masse, but rather must be funneled through the short-term memory bottleneck, a hierarchic structure can be used to process this information, one unit at a time. Hence, the hierarchic structuring of material can be viewed as a kind of human adaptation to the problem of the short-term memory bottleneck.

Hierarchies are one form of organizational structure that has been studied extensively, and shown to be of benefit to the processing of new material. Other researchers have looked at other types of organizations.
Graesser and Goodman (1985) look at conceptual graph structures. In their proposed theory, knowledge is represented as a network of labeled statement nodes that are interrelated by directed arcs. These graph structures represent specific schemas and text passages. Trabasso and van den Broek (1985) explore the use of causal network representations of narrative events. They report on a study in which events in a causal chain were recalled more often than dead-end events in both immediate and delayed recall.

4.3 A Theory of Failure-Driven Learning: Dealing with Exceptional Situations
Occasionally, an intelligent system will break down, and the end-user must find a creative solution to a problem-solving task without the aid of the system. What kind of system will promote an interaction with an end-user such that he/she is better able to recognize a situation where the system can no longer provide good advice? In addition, how should an end-user, once the situation is recognized as a system breakdown, proceed with the problem-solving task?

The second experimental study (see Chapters 10 and 11) addresses these issues by investigating how system restrictiveness can either enhance or impair an end-user's ability to deal with exceptional, or novel, cases. This section will describe a theory of dealing with novel cases; in particular, it will describe how novices evolve into experts through the development of their episodic memory.

Kolodner (1983) describes a theory of developing expertise. She looks at a person's memory as he evolves from novice to expert. She notes that two things happen during that evolution. First, knowledge is built up incrementally on the basis of experience. Facts, once unrelated, become more integrated, based on an individual's experiences. Second, reasoning processes are refined, and more powerful rules are developed over time. Specifically, an individual, over the course of many experiences, notices failures and successes, as well as the differences and similarities between cases encountered in the past. As a result, such an individual is able to derive more generalizable knowledge, as well as refined reasoning processes to recognize unusual or novel circumstances.

Kolodner starts by considering two types of memory. Semantic memory refers to the facts that we know, arranged in a hierarchical network. Episodic memory, on the other hand, encodes experience. It records and organizes episodes or events in a person's life. The distinction between semantic and episodic memory can be viewed in terms of acquired book knowledge vs. the actual practice and experience of using the knowledge: "When a person has only gone to school and acquired book knowledge, he is considered a novice. After he has experience using the knowledge he has learned, and when he knows how it applies both to common and exceptional cases, he is called an expert" (Kolodner, 1983, p. 498).

The evolution from novice to expert is a process that requires introspection and examination of knowledge while engaged in problem-solving experiences. During problem-solving, the individual is evaluating and understanding his experiences in terms of previous ones (either a previous case, or generalized knowledge). In the process, the individual may integrate the new experience into his long-term memory so that it can be retrieved for understanding a later case. During this process, Kolodner considers two triggers that affect learning: similarity and failure.
Similarity means that a case in the past can be matched to the current one being examined. When one is reminded of a similar case in the past, the similarities between the two cases can be extracted to form a new generalized case. This is similar to Schank's theory of scripts (Schank, 1980); e.g., the well-known restaurant script, in which an individual forms a generalized script of going to a restaurant through the many different experiences of restaurant-going the individual has had in the past.

Kolodner observes that the generalization of episodes is important for a number of reasons. First, it is economical in terms of storage. Once a generalized episode has been created, the details of the individual episodes that support it do not have to be stored. Second, it serves an organizing function. The generalized episode can function as an organizing point through which similar episodes can be aggregated together. If memory is structured in such a way, memory search and retrieval can be quicker and more efficient. Third, a generalized episode can make reasoning more efficient and automatic. When a new episode is reminiscent of a generalized episode, an individual need not reason through this particular case, but instead can draw on the already worked-out reasoning of the generalized episode.

Failure, on the other hand, signifies that a generalized episode will not work for the new case. Hence, failure triggers additional analysis if effective learning is to occur. Kolodner has identified six processes included in failure-driven incremental learning:

1. initial decision
2. noticing the failure
3. assigning blame
4. correcting the failure
5. explaining the failure
6. memory update

The first step is the initial decision, which is arrived at using the normal procedures of the generalized episode. The individual realizes that an incorrect or sub-optimal decision has been made; hence the second step is noticing the failure. In certain cases, it may not be obvious to the individual that failure has occurred. Sometimes, reminding the individual of previous cases similar to the one currently being worked on may be beneficial, so that knowledge from those cases can be used to understand the new case. Once the failure is recognized, the individual is in a position to find the cause of the failure, or assign blame. After the causes have been found, the individual can correct the failure. The next step involves explaining the failure so that the individual understands why he made the mistake. Finally, as a result of this experience, long-term memory must be changed, so that the individual can draw on this new episode in the future.

Kolodner's theory of failure-driven learning is a useful way of describing how experts learn to handle novel situations. As an individual encounters exceptional cases, they are indexed according to their distinguishing features. An individual may possess a generalized episode that differs from the exceptional case in a few ways. If an explanation has been made, it is stored in long-term memory with the exceptional case. When a new case similar to the exceptional case is encountered, knowledge about the exceptional case can be brought to bear on the current case.

Kolodner's theory of failure-driven learning is also an adaptive theory of learning. In this view, new information is not to be believed immediately; rather, it should be possible to assign a credibility rating to a piece of information.
For example, new knowledge based on just a few episodes should be assigned a lower credibility rating than knowledge that has been acquired over a longer period of time, and over many more episodes.

Other researchers have looked at novel events and exceptional circumstances in the context of expert systems, which typically break down if they are stretched beyond the limits of their expertise. Coombs and Hartley (1987), for instance, present an algorithm for reasoning about novel events.

4.4 The Production Paradox
Will explanatory advice be requested at all, even if good explanation facilities exist? Carroll and Rosson (1987) address this issue in their discussion of the conflict between a user's goal of learning vs. working, which they refer to as the Production Paradox. In using a computer system, end-users would like to accomplish as much work as possible; however, such a goal reduces their motivation to spend any time learning about the system. Carroll and Rosson observe that, as a result of having such a focus on work throughput, such users are likely to stick with less effective procedures that they already know, rather than expend the effort to learn more effective and efficient ways of using the system.

Carroll and Rosson observe many instances of learners, at every level of expertise, trying to avoid learning a system. For example, they observe that learners try to avoid reading altogether: they would rather do real work than read lengthy descriptions and instructions of how to use a system. As a result, in structured practice sessions, they often deliberately get off track and end up in tangled paths of self-exploration through a system. New users often want to jump right into a system without first consulting help systems and engaging in practice sessions. They become restless and frustrated when training introduces them to a new function, but expects them to refrain from using the system. For more experienced and savvy users of computer systems, the Production Paradox takes on a more subtle quality. These users find that they must assess the trade-off involved in spending time to learn a system: in the long term, it may result in learning more efficient methods of performing tasks, but in the short term it will take them away from performing their work.

One way to tackle the Production Paradox is to look at the cognitive effort perspective, or cost-benefit principle (Gregor and Benbasat, 1999), which argues that users will not expend effort to access and read explanations unless the expected benefit of doing so is perceived to exceed the cost of the cognitive effort. By this principle, learning will be more likely to occur if 1) the benefits of learning the system are known and made more salient; or 2) the costs of accessing the explanations are reduced.

A number of ways of tackling the Production Paradox are suggested by Carroll and Rosson. One approach is to take the first course of action suggested above: make the benefits of learning the system more salient for the user. Along these lines, Carroll and Rosson suggest incorporating aspects of a computer game in the interfaces of applications so that they stimulate playfulness and learning. For example, a user interface could incorporate many abstract elements of a game environment such as risk, discovery, and uncertainty. By doing so, a user interface can sustain interest in the system and motivate users to seek out more instructional help.
A second approach is to reduce the costs of accessing and using the explanations, or help facility. Many design suggestions fall under this category: designing a system so that it is safer and more risk-free—e.g., incorporating recoverability in the user interface, so that end-users can backtrack out of undesirable actions; making information about a new function easier to access and understand; devising context-sensitive explanation facilities; embedding explanations within the interface dialogue, so that an end-user does not have to go through the extra effort of requesting them; and devising a "training wheels" interface in which advanced functions are disabled during the early stages of a user's learning. In short, the important point to remember is that a user interface can be designed so that the cognitive effort associated with accessing and using the explanations is reduced.

4.5 A Research Framework for Experiment I and Experiment II
This section will integrate the theoretical frameworks presented in Chapters 3 and 4. For a review, refer to the model of explanatory power depicted by Figure 3.4, and the model of assimilative learning given in Figure 4.3. The integrated model is given in Figure 4.5.

[Figure 4.5: Research Framework. Enhancements to Explanatory Power (content, interface, advisory strategy) and Task Type (requires visualization? requires deep knowledge? structured vs. unstructured) jointly influence Assimilative Learning, which in turn leads to Task Performance.]

Task type was also included in the research framework. The task-technology fit perspective has been studied by other researchers (see, e.g., Benbasat, Dexter, and Todd, 1986; Goodhue and Thompson, 1995; Goodhue, 1995; and Vessey and Galletta, 1991). They maintain that the nature of the task and the type of decision support provided (i.e., in this dissertation, the enhancements to explanatory power) jointly influence decision performance.

Enhanced explanatory power and task type are believed to jointly influence the degree of assimilative learning, as discussed in Section 4.1, which in turn will lead to better performance outcomes. Although there is only one box in Figure 4.5 to represent assimilative learning, this model could easily be expanded to include the assimilation encoding process, as depicted in Figure 4.1 and Figure 4.2. The assimilative learning box is dotted to indicate that assimilative learning was not measured in every case (it was measured in Experiment II, but not in Experiment I). Explanatory power and task type were manipulated in both Experiment I and Experiment II.

4.5.1 Experiment I
Experiment I was primarily an investigation of the effectiveness of graphical hierarchies that depict the structure of expert system rule bases. These graphical hierarchies represent an interface-based enhancement to explanatory power over a flat interface, which simply lists the variables of an expert system rule base in alphabetical order. In addition, deep explanations support, which provided the underlying domain principles associated with a system recommendation, represents a content-based enhancement to explanatory power. To manipulate task type, two task characteristics were considered: (1) tasks that do or do not require visualization of the structural information of expert system rule bases; and (2) tasks that do or do not require a deeper understanding of expert system rule bases.
Tasks that do require visualization of the expert system rule base were predicted to benefit from the graphical hierarchies; tasks that require a deeper understanding of the expert system were expected to benefit from deep explanations support.

Although assimilative learning was not measured in this experiment, it was assumed that more assimilative learning would occur for tasks that required a deeper understanding of the expert system, and when subjects were provided with the appropriate enhancement to explanatory power. In particular, more assimilative learning was predicted to occur when 1) subjects were engaged in tasks requiring visualization of the expert system rule base and were provided with the graphical hierarchies; and 2) subjects were engaged in tasks requiring a deeper understanding of the expert system rule base and were provided with deep explanations support. More assimilative learning, in turn, was predicted to enhance task performance outcomes.

4.5.2 Experiment II
Experiment II investigated manipulations of advisory strategy. Two versions of an intelligent system were created to study how advisory strategy would influence task performance: a low-restrictive system, in which a user could request the advice-giving procedures in any order, and a high-restrictive system, in which the system prescribed to the user the order and manner in which the advice was used. The low-restrictive system contained more explanatory power because the user could more directly manipulate the procedures; hence more understanding of the system was possible. In addition, all subjects were provided with deep explanations support that described the model-based reasoning algorithm the system used to reach a recommendation. Experiment II also looked at two types of tasks: (1) structured tasks, or tasks for which an intelligent system is capable of finding the optimal solution; and (2) unstructured tasks, or tasks for which an intelligent system's advice-giving capabilities break down, leaving it unable to find the optimal solution.

Enhanced explanatory power (i.e., the low-restrictive system and deep explanations support) was predicted to foster a higher degree of assimilative learning, depending on task type. In this experiment, assimilative learning was measured with a learning test. A higher degree of assimilative learning was predicted for unstructured problem-solving tasks using the low-restrictive version of the system. In addition, more deep explanations usage was predicted to lead to more assimilative learning on the unstructured tasks. Subjects who engaged in more assimilative learning were predicted to be better problem solvers on the unstructured tasks only. For the structured tasks, enhanced explanatory power was predicted not to make a difference.

4.6 A Summary of Chapter 4
This chapter has discussed four theories that are relevant to the explanatory power of a user interface. In Chapter 7 (Experiment I) and Chapter 10 (Experiment II), the discussion related to the research hypotheses will elaborate on these theories. A summary of the four theories and how they are related to this research study follows:

• The theory of assimilative learning describes the process of integrating new information, resulting in more powerful mental models of systems. This theory is relevant especially to the processing of deep explanations, which are provided in both Experiment I and Experiment II.
• Theories of diagrammatic reasoning were discussed in terms of how they facilitate search, recognition, and perceptual inferences in complex problem-solving domains. In addition, diagrammatic representations serve the purpose of organizing large amounts of information, so that it can be more efficiently accessed. These theories are particularly germane to Experiment I, in which hierarchic structure is believed to facilitate the understanding of expert system rule bases.

• A theory of how an expert learns to deal with exceptional or novel cases was introduced in Section 4.3. This theory will be useful in the discussion of Experiment II, in which a system's (i.e., LogNet's) advice will break down. It is hypothesized, in particular, that a low-restrictive system (i.e., more open-ended interaction) will facilitate failure-driven learning and lead to more effective problem-solving in exceptional cases.

• The Production Paradox is especially relevant to Experiment II. One of the treatments in this experiment will assess whether giving a subject an explicit incentive to learn about the system's underlying rationale will, in fact, lead to better understanding of the system.

Chapter 5
Expert Strategy: Hierarchic Models of an Expert System Rule Base

The need for more powerful explanations that enable a user to follow or trace a line of reasoning more easily and more interactively was suggested previously in Chapter 3. Expert Strategy, a graphic-based system that organizes an expert system's rule base into a hierarchy of nodes, was developed with this goal in mind. This chapter will describe the underlying motivations of Expert Strategy, the model-based reasoning mechanism driving the dynamic aspects of the system, an example of its application in a transportation mode selection problem, and some of the ways that Expert Strategy can aid problem-solving and understanding of expert system rule bases. The transportation mode application is used in Experiment I, which is the subject of Chapters 7, 8, and 9.

5.1 Motivation and Background
Rule-based reasoning systems have become one of the most popular techniques for capturing and manipulating domain knowledge in expert systems. However, several researchers have commented that rule-based representations leave the reasoning process invisible to both end-users and developers alike (e.g., Ramaswamy, Sarkar, and Chen, 1997; Southwick, 1988; Chander and Radhakrishnan, 1997; and Higa and Lee, 1998). This may result in end-users who are unable to understand the reasoning process of the expert system, as well as developers who are unable to effectively maintain and update the rule bases.

In part, the problem lies in the difficulty of understanding the structural relationships among the rules in the knowledge base. Kidd and Cooper (1985) point out that "people may find it very difficult to grasp the structure of a pure rule based representation because of the modularity and uniformity of the production rules and also because the relationships between the various rules are not made explicit" (pp. 97-98). In a similar vein, Lee and Gray (1998) argue that the rules of a knowledge base have no salient features or landmarks—they are largely "flat" rules having the same IF...THEN... structure. To address this concern, Southwick (1988) argues that a developer should explicitly split the knowledge base into topics, a topic being "a logical or conceptual entity in a knowledge base of an expert system" (p. 49).
By doing so, he contends that the rule base will be easier to understand since it now contains discrete, easy-to-understand chunks. Lee and Gray (1998) adopt a different approach with a similar outcome: they propose the use of a neural network algorithm based on a Hopfield net, which structures an unstructured rule base by clustering rules based on lexical similarity. Through use of this algorithm, the rule base is easier to comprehend since the algorithm creates higher-level abstract structures from the lower-level rules.

Other researchers have addressed the issue by devising graph-based approaches in order to better "visualize" the rule bases. Ramaswamy, Sarkar, and Chen (1998) use directed hypergraphs to capture the complex interdependencies across clauses in a rule base. Their goal is to be able to automatically detect structural errors in a rule base, which they classify as being one of the following: redundancy (two or more rules lead to the same conclusion), conflict (the same premise leads to mutually exclusive conclusions), circularity (a premise leads back to itself), and incompleteness (dead ends and unreachable goals). See Figure 5.1 for an example of a directed hypergraph representing the following six rules (these six rules are revisited in a short executable sketch later in this section):

Rule 1: a1 -> d1
Rule 2: b1 + c1 -> e1
Rule 3: b1 + c1 -> f1
Rule 4: d1 + e1 -> g1
Rule 5: e1 + f1 -> g1
Rule 6: f1 -> h1

The above notation represents an attribute-value pair as a lowercase letter, indicating an attribute, followed by a numeric digit, which indicates the value for that attribute. Hence, Rule 4 above, which is represented as d1 + e1 -> g1, can be translated to IF (D = d1 and E = e1) THEN (G = g1). In Figure 5.1, rules are represented as directed arcs in the diagram, and compound clauses are boxed in.

Figure 5.1: Directed Hypergraph of Six Interrelated Rules, adapted from Ramaswamy, Sarkar, and Chen (1998)

Various other graphical approaches to representing rule bases have been undertaken by other researchers: Chander and Radhakrishnan (1997) propose the use of a goal graph; Agarwal and Tanniru (1992) and Nazareth (1993) employ Petri nets to verify the integrity of rule-based systems; Mockler (1988) draws dependency diagrams to model the relationships among the conceptual entities of a rule-based system.

Expert Strategy is the first system (the second being LogNet, which is the subject of Chapter 6) developed for the purpose of studying enhancements to explanatory power. It is a graphic-based system that organizes an expert system rule base into a hierarchy of object nodes, so that end-users will have an easier time "visualizing" the expert system rule base. The hierarchic models are intended both to segment the knowledge base into more manageable chunks and to create a graphical representation of the rule base, so that the interrelationships among these chunks are made explicit. The following section, Section 5.2, describes the transportation mode selection application used in the first experimental study, which investigated whether the provision of hierarchic structure (i.e., an interface-based enhancement to explanatory power) helped end-users to better understand an expert system rule base.

What benefits are to be gained through an end-user's interaction with the hierarchic models? There are various ways that these models can enhance the explanatory power of the expert system interface.
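As a concrete point of reference, the six example rules above can be written down directly as condition-action pairs, in the spirit of the production systems discussed in Section 4.2.1, and executed by a simple forward chainer. The sketch below is purely illustrative: it is written in Python, the RULES encoding mirrors the six rules listed above, and forward_chain is a generic textbook routine rather than an algorithm from Ramaswamy, Sarkar, and Chen. Its point is that although such a flat encoding is easy to execute, nothing in it makes the dependency structure among the rules visible, which is precisely the gap that graph-based representations, and the hierarchic models of Expert Strategy, are meant to fill.

    # Illustrative sketch: the six example rules encoded as condition-action pairs
    # (a simple production system) and forward-chained from a set of facts.
    RULES = [
        ({"a1"}, "d1"),           # Rule 1: a1 -> d1
        ({"b1", "c1"}, "e1"),     # Rule 2: b1 + c1 -> e1
        ({"b1", "c1"}, "f1"),     # Rule 3: b1 + c1 -> f1
        ({"d1", "e1"}, "g1"),     # Rule 4: d1 + e1 -> g1
        ({"e1", "f1"}, "g1"),     # Rule 5: e1 + f1 -> g1
        ({"f1"}, "h1"),           # Rule 6: f1 -> h1
    ]

    def forward_chain(facts):
        """Repeatedly fire any rule whose conditions are all satisfied by the facts."""
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for conditions, action in RULES:
                if conditions <= facts and action not in facts:
                    facts.add(action)
                    changed = True
        return facts

    # Starting from the raw inputs a1, b1, and c1, every other value is derivable.
    print(sorted(forward_chain({"a1", "b1", "c1"})))
    # ['a1', 'b1', 'c1', 'd1', 'e1', 'f1', 'g1', 'h1']

Running the sketch from the raw inputs a1, b1, and c1 derives every other value, including g1 by two different routes (Rules 4 and 5), the kind of structural property, such as redundancy, that a directed hypergraph is designed to expose but that the flat listing leaves hidden.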
Chandrasekaran, Tanner, and Josephson (1988) state that the explanation of expert system problem-solving itself has three components:

• Explaining why certain decisions were made or were not made.
• Explaining the elements of the knowledge base itself.
• Explaining the problem-solving strategy and the control behavior of the problem solver.

With the hierarchic models, end-users can engage in a more flexible problem-solving process, which can provide support for all three of these components. Section 5.5 considers in more detail how an end-user might interact with hierarchic models.

Unlike a traditional expert system dialogue in which the system prompts the end-user for data inputs in a prescribed sequence, the hierarchic models are intended to be as interactive for the end-user as possible. The motivation behind this free-form interaction is to allow, and indeed to actively encourage, the end-user to inspect parts of the object model to better understand why an expert system reached a certain conclusion. Such an end-user, for example, may disagree with an expert system's conclusions and may want to understand which input variable was the cause of the erroneous recommendation. Through an exploration of the components of the hierarchy, the end-user should be able to track how the value of the input variable propagated through the rule base to reach the given conclusion. The hierarchy of object nodes provides a structure to make this line of reasoning more visible for the end-user, and therefore easier to track.

Previous work on capturing strategic knowledge was done by Dhaliwal (1993), Mao (1995), and Nah (1997). In the experimental studies that they conducted, subjects did not use strategic explanations extensively. The work on Expert Strategy, in part, was intended to create strategic explanations that were more powerful and easier to apply to problem-solving situations.

5.2 A Transportation Mode Selection Expert System
An application will help to illuminate how Expert Strategy works. In this application, the end-user is a transportation broker who makes decisions concerning what type of transportation mode (air, trucking, rail, small package service) to use in order to transport a client's shipment. The broker makes this determination based on the following factors:

• Shipment Weight: how heavy is the shipment?
• Weight of one item: what is the weight of one item to be shipped?
• Value of one item: what is the dollar value of one item to be shipped?
• Fragility Rating: how fragile is the item? (high, average, low)
• Shipping Distance: how far is the item to be shipped?
• Transit Time: in how many days must the shipment reach its destination?
• Special Products: are special products to be shipped? (minerals, agricultural products, automobiles, none of the above)
• Perishable: are the items to be shipped perishable?
• Door-to-Door Service Required: is door-to-door service required?
• Frequency of Service Required: how frequently is the shipment made? (daily, weekly, monthly, infrequent)

This information is obtained from the client, and these inputs are entered into the Expert Strategy system. Based on these inputs, Expert Strategy reaches its recommendation.

Figure 5.2 provides a screen shot of the hierarchic model representing this problem-solving case. This model is composed of nodes (denoted by squares in Figure 5.2) and links among these nodes. There are three types of nodes.
The root node is the topmost node, which represents the topmost goal of the decision-making problem, in this case the determination of a transportation mode. At the bottom, the leaf nodes are the end nodes of the hierarchy, and they represent the raw, or unprocessed, input variables. In Figure 5.2 the leaf nodes can also be distinguished from the other nodes by the question mark (?) appended to each of their labels, to denote that the system is asking for user input. For example, Shipment Weight?, Weight of one item?, and Value of one item? all represent leaf nodes. In between the root node and the leaf nodes are the intermediate nodes. These nodes read in values from the node(s) attached directly below them (i.e., their child node(s)) and process this input to produce an output, which, in turn, is passed to the node attached directly above. For instance, as illustrated in Figure 5.2, the Load Type node reads in the value passed to it by the Shipment Weight? node to produce the output large-shipment; likewise, the Loss and Damage Rating node reads in the Product Value Type and the Fragility Rating, its two directly attached child nodes, to produce the output unimportant. These two outputs, in turn, are passed to the Transportation Mode node, which uses these values (and others) to make a final determination of the transportation mode to be suggested to the end-user. Figure 5.2 is an example of a simple hierarchy, or tree. In Section 5.7, we consider other possibilities, and why some applications may not fit into this simple definition of hierarchy.

[Figure 5.2: Screen shot of the hierarchic model for the transportation mode selection problem]

In Figure 5.2, some of the values are null, signifying that this information is unknown. However, the expert system will reach a conclusion as soon as there is sufficient information to make a determination. Hence, not all user inputs are required. This may result at times in a sparse hierarchy in which only a small portion of the hierarchy is actually involved in making a recommendation.

An end-user can visually track what effect a modification on one node will have on the rest of the hierarchy. For example, he/she may want to change shipping distance from 2000 miles to 3000 miles and observe how this change will affect, in sequence (from bottom to top), the Distance Range, the Haul Type, and ultimately the Transportation Mode itself. The upward-pointing links, representing the hierarchy links, indicate that he/she may change the value on lower-level nodes and watch how the information is propagated upwards. In other words, forward chaining takes place, in which changing values on a lower level can subsequently change the values of the higher levels of the tree structure, level by level.

The end-user is not limited to visually tracking how the variables are changed. He or she may also inspect the individual nodes to view what rule was fired to produce the given output (or conclusion) for that node.
For example, by point-clicking on the Load Type node in Figure 5.2, the end-user is able to obtain the following rule trace:

If Shipment Weight > 5,000 lbs. and <= 15,000 lbs.
THEN Load Type = large-shipment

In this instance, an input of 15,000 on the Shipment Weight node resulted in an output of "large-shipment" on the Load Type node. Similarly, the user may click on the Transportation Mode node to obtain this rule trace for the final recommendation:

If Load Type = Large-Shipment and
Haul Type = (long-slow or medium-slow) and
Loss and Damage Rating = (unimportant) and
Flexibility Rating = null
THEN Transportation Mode = RAIL

5.3 The Model-Based Reasoning Mechanism Driving Expert Strategy
How does Expert Strategy dynamically propagate the information throughout the hierarchy to reach a recommendation? It does so by utilizing model-based techniques, the subject of Chapter 2. Model-based reasoning attempts to solve problems by analyzing the structure and function of a system, which are described by a symbolic model (Kunz, 1987). Most of the problems studied in the literature associated with model-based reasoning concern the diagnosis of faulty devices.

Although Expert Strategy is not dealing with physical devices, the concepts of model-based reasoning used to model an expert system's rule base are much the same: in both cases, we are looking at structure and determining behavior from this structure. In Expert Strategy, we are structuring the set of rules that make up the rule base into a hierarchy of nodes, each node connected by links. In a traditional expert system, by contrast, these IF-THEN rules are merely flat rules, in which the relationships among the rules are not made explicit.

The algorithm that implements the forward-chaining mechanism on the hierarchic models of Expert Strategy utilizes model-based reasoning techniques. It begins by detecting where on the hierarchy the end-user has entered an input, or changed a node. It accomplishes this via event-driven processing; that is, it detects the event whenever a node is modified by an end-user. It then finds the node connected directly above it (i.e., the parent node), if any such node exists. The rule base will invoke only those rules associated with this parent node and determine whether this parent node needs to be modified based on the new value passed to it by its child node. This process is repeated, level by level, until the system reaches the topmost node, or root node. The principle of locality, which states that components can only act on neighboring components, is
In this manner, it is utilizing its hierarchic structure to fire the relevant rules in the rule base. While this is happening behind the scenes, the end-user can observe the hierarchy updating itself, via the graphical user interface, and can follow a line of reasoning through the hierarchy. A variety of animation effects may be employed to make this line of reasoning more salient.

The pseudo-code for this model-based reasoning technique is given by the following:

Repeat Until Node N is the topmost (root) node
    If Node N receives a value, then find Node P (parent node) directly connected to Node N on the output side of Node N
    Fire all rules in the rule base that assign a value to Node P
    Node N <- Node P
End Loop

5.4 Object-Oriented Design of Rule-Bases

Expert system rule bases developed using the hierarchic models of Expert Strategy reap many of the benefits associated with object-oriented design. Essentially, the nodes of the hierarchies represent self-contained objects, and as such, can be viewed as possessing many of the same beneficial properties typically associated with object-orientation (see e.g., Coad and Yourdon, 1990; and Wirfs-Brock and Johnson, 1990): abstraction, encapsulation, and inheritance.

Abstraction. An object node in Expert Strategy embodies an abstraction that is meaningful to end-users. Instead of referring to lower-level details of the domain of discourse, such as the specific data and individual rules, the object nodes that make up the hierarchies are higher-level abstractions that end-users can be immediately familiar with. In the transportation mode selection example given in Figure 5.2, nodes such as load type, loss-and-damage rating, and flexibility rating are used to identify some of the entities in the domain involved in making a transportation mode selection. Similarly, if one were to develop an expert system whose job it is to make decisions regarding the approval of commercial loans, nodes such as financial strength of the company, management character, collateral, and competitive environment of the industry might be included in the hierarchy. These nodes, in turn, might be further decomposed. The important point to remember here is that the system is being developed for an end-user, one who may not be interested in seeing all the implementation details of the system, i.e., the specific rules used to derive a recommendation. The hierarchic model provides a convenient higher-level view of the rule base, one that is more manageable and considerably less complex for the end-user to deal with. Of course, the end-user is free to explore the components of the hierarchy if a finer granularity of detail is sought.

Encapsulation. The implementation details of an individual object are hidden from the other objects. Each node on the hierarchy need not know about the internal structure of any other node. In addition, a node receives inputs only from directly attached nodes (thereby further limiting interaction among object nodes). For both the end-user and the developer of the expert system, this modularity in design is a great benefit in terms of understanding the system. For the developer concerned with modifying and updating an expert system rule base, this means that modifying the rules of one node will not have unintended repercussions throughout the other nodes of the system, since the rules are confined to just that one node.

In Expert Strategy, rules are encapsulated within individual nodes.
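To make this mechanism concrete, the following is a minimal sketch, in Python, of how an event-driven, upward-propagating engine with rules encapsulated in node objects might be implemented. The class, method, and rule names here are hypothetical illustrations, not the actual implementation of Expert Strategy.

class Node:
    """A node in the hierarchy. Its rules are encapsulated in a single
    function that maps the values of its child nodes to an output value."""

    def __init__(self, name, rules=None):
        self.name = name
        self.rules = rules      # function(child_values: dict) -> value; None for leaf nodes
        self.parent = None      # at most one parent, since the model is a tree
        self.children = []
        self.value = None       # null until enough information is available

    def add_child(self, child):
        child.parent = self
        self.children.append(child)

    def set_value(self, value):
        """Event handler: called whenever the end-user modifies this node."""
        self.value = value
        self._propagate_upwards()

    def _propagate_upwards(self):
        """Forward chaining: recalculate ancestors level by level, respecting the
        principle of locality (only a node's direct parent is recalculated)."""
        node = self
        while node.parent is not None:
            parent = node.parent
            child_values = {c.name: c.value for c in parent.children}
            new_value = parent.rules(child_values)  # fire only the parent's own rules
            if new_value == parent.value:
                break               # no change, so nothing further propagates
            parent.value = new_value
            node = parent


# Illustrative rules for the Load Type node (thresholds follow the rule trace above;
# the category for lighter shipments is a guess for illustration only).
def load_type_rules(inputs):
    weight = inputs.get("Shipment Weight?")
    if weight is None:
        return None
    if weight > 15000:
        return "very-large-shipment"
    if weight > 5000:
        return "large-shipment"
    return "small-shipment"

shipment_weight = Node("Shipment Weight?")             # leaf node (user input)
load_type = Node("Load Type", rules=load_type_rules)   # intermediate node
load_type.add_child(shipment_weight)

shipment_weight.set_value(15000)
print(load_type.value)   # -> large-shipment

Because each node owns its own rule function, a change to the Load Type rules cannot ripple into the logic of any other node, which is precisely the encapsulation benefit described above.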
In the transportation selection problem of Figure 5.2, for example, the following nodes have the specified rule assignments:

Distance Range (inputs Distance):
If Distance <= 50 miles then Distance-Range = local
If Distance > 50 miles and <= 400 miles then Distance-Range = short
If Distance > 400 miles and <= 1000 miles then Distance-Range = medium
If Distance > 1000 miles then Distance-Range = long

Speed Rating (inputs Transit-Time):
If Transit-Time <= 2 days then Speed-Rating = express
If Transit-Time > 2 days and <= 5 days then Speed-Rating = quick
If Transit-Time > 5 days and <= 10 days then Speed-Rating = average
If Transit-Time > 10 days then Speed-Rating = slow

Haul Type (inputs Distance-Range and Speed-Rating):
If Distance-Range = long and Speed-Rating = (express or quick) then Haul-Type = long-fast
If Distance-Range = long and Speed-Rating = (average or slow) then Haul-Type = long-slow
If Distance-Range = medium and Speed-Rating = (express) then Haul-Type = medium-fast
If Distance-Range = medium and Speed-Rating = (quick or average or slow) then Haul-Type = medium-slow
If Distance-Range = (local or short) then Haul-Type = short

Encapsulation means that Haul Type need not be concerned with how the Distance Range and Speed Rating nodes are assigned. The Haul Type node simply reads in, as input, the values assigned from the Distance Range and Speed Rating nodes, without any knowledge about how these values were assigned. Therefore, one node's interface to other nodes is clearly defined, and due to encapsulation, can never be corrupted.

Inheritance. Each node in the hierarchy can be considered to be an instance of a larger class. Hence, it is possible to create node instances that inherit their properties from class definitions. We may define a class of nodes that assign a rating to a node based on a certain set of rules. We may then create an instance of this class, which will inherit these rules, and fine-tune the rules so that they are tailored for that individual node. Inheritance, of course, makes possible the re-use of nodes so that they need not be designed from scratch.

In addition, an entire hierarchic model, or a large portion thereof, may be re-used to create variations of problem-solving situations. For example, after developing a hierarchic model for commercial loan approval decisions, to create a modified version for personal loan approval decisions, the original hierarchy can be re-used by modifying the nodes to reflect the new situation.

5.5 Problem-Solving Using Expert Strategy

Expert Strategy empowers end-users with the ability to actively question and understand the underlying rule base. How an end-user utilizes the hierarchic models will depend, of course, on the extent to which he or she agrees with the system's recommendations and also on the level of understanding desired. Some problems are fairly routine and require no further understanding. For other situations, further probing and exploration of the components of the expert system rule base are necessary if the end-user is to trust and accept a system's recommendations. Additionally, expert systems are always constrained in terms of the types of problems that can be solved—they typically solve problems only for narrowly-defined domains. An end-user may need to know when a system's expertise has been pushed to its limits, and if possible, how to modify a recommendation given this delimited expertise.
It is useful to anticipate the types of situations in which a deeper understanding of the expert system rule base is required, as this will help us to judge whether hierarchic models will actually help end-users in their problem-solving tasks. In terms of problem-solving and understanding, we may ask: What types of questions will the end-user need to ask the system to gain a deeper understanding of the expert system? To tackle this question, Ellis' breakdown of question types typically asked of experts is used (Ellis, 1989): (1) how?, (2) why?, (3) what-if?, and (4) why-not?

The How? and Why? question types should be familiar to those knowledge engineers who use even the most rudimentary expert system shells to develop their rule bases. Most typically, the How? question is posed by users when they would like to know how a certain recommendation or conclusion was reached (typically a rule trace of the reasoning process is given), and the Why? question is commonly requested when the user would like to know why a certain type of input is needed by the system.

In Expert Strategy, How? and Why? questions are not explicitly requested, but are easily obtained through an exploration of the components of the hierarchic model. In a traditional expert system dialogue, the How? question will result in a listing of all the rules that were fired, in succession, to reach the final recommendation. Under Expert Strategy, the end-user can exert more control over this question. Such an end-user can obtain the individual rules by clicking on the appropriate nodes, and can gain a more fine-grained understanding by descending the branches of the hierarchy and by clicking on lower-level nodes to obtain the rules that were fired at these lower levels. The difference between the two interactions is significant: it means that on the one hand (under a traditional expert system dialogue) the end-user is bombarded with quite a lot of information, given by the rule-trace dump, whereas the end-user of Expert Strategy is able to filter out unnecessary rules by focusing on one node at a time. Moreover, under Expert Strategy, the end-user can view the line of reasoning in the context of the hierarchic structure of the problem domain.

For the Why? question, a different problem under traditional expert system dialogues is encountered: too little information. Returning to the problem given in Figure 5.2, under a traditional expert system dialogue, the system might request that the end-user input the shipment weight (in lbs.). The user may ask Why? at this point (or "Why do you need to know this?"), and the system might respond: "I need to know this so that I can determine what the Load Type is." The problem with this interaction is that the end-user is unable to see the larger context of the problem-solving situation. By contrast, an end-user interacting with a hierarchic model is able to see that shipment weight determines load type, which, in turn, determines (along with other variables) the transportation mode. In short, the hierarchic object models provide a higher-level context (i.e., a meta-level view) that is not possible by asking a single, isolated Why? question.

What-if? type questions, or hypothetical reasoning, enable the end-user to test the effect that changing one variable will have on the final system recommendation. For example, what impact will changing the shipping distance from 1000 to 2000 miles have on the transportation mode selection?
Many expert system environments include some form of this capability. However, hierarchic models are especially well-suited to this kind of questioning. All the nodes on the hierarchies, leaf nodes as well as intermediate nodes, can be modified to observe what effect such a change would have on the system recommendation. In addition, the end-user can observe changes to intermediate variables, or view how the modification "cascades" through the hierarchy. Finally, end-users are not limited to changing one variable only. They may also modify two or more variables all at once, and observe their possible interaction effects. So while traditional expert system dialogues may allow for some limited forms of what-if testing, hierarchic models are superior because they allow end-users more flexibility and power in modifying and visualizing the assumptions of the rule base.

Why-not? type questions, or counterfactual reasoning, take the form "Why did you not conclude <such and such>?" For example, in the transportation mode selection problem, an end-user might ask the system, "Why did you not conclude that the recommendation is Trucking?" This type of question is usually quite difficult to answer, and yet it frequently arises in expert-client dialogues. (Consider for a moment the type of interactions you engage in with real-life experts. For example, you may ask your tax accountant why you can't take a certain deduction, or you may ask your doctor why you do not have a certain disease given a set of symptoms.) Frequently, there is not a single answer to this kind of question (it is less structured than the previous three question types); however, it is an important one to consider since it may naturally arise in an end-user's mind, and answering it may certainly deepen one's understanding of the reasoning the expert system uses to reach conclusions.

A satisfactory answer to this question may require a meta-level or higher-level understanding of the expert system rule base. Individual rule traces, of the type provided by traditional expert system dialogues, may be inadequate in conveying to the end-user a deeper and higher-level understanding of the rule base. The hierarchic models may provide one tool to aid the end-user in gaining this higher-level understanding, because they provide a top-down view of the rule base. Further, they encourage end-users to explore and seek out what is driving the rule base due to their transparent and flexible graphical interface. However, it should be recognized that this is not the only way to gain a deeper and higher-level understanding. Others, for example, have proposed that meta-level problem-solving strategies be made explicit (Clancey, 1983) or that deeper explanations based on deep causal models of the domain and underlying first principles be provided to the end-user (Chandrasekeran, Tanner, and Josephson, 1988).

5.6 Deep Explanations Support

Expert Strategy also provides deep explanations that explain the underlying domain principles associated with a given recommendation. In Chapter 3 (see Section 3.2.3), it was suggested that deep justifications, which can explain system actions at a deeper level than would a shallow rule trace, might help end-users to feel more comfortable with a system recommendation. In the transportation mode selection problem, the deep explanations offered by Expert Strategy reflect the underlying domain principles in transportation and logistics that are associated with the rules.
In Expert Strategy, deep explanations are generated dynamically whenever the system reaches a transportation mode recommendation: the explanation is constructed from textual fragments and tailored to the specific recommendation at hand. The example below demonstrates their usage.

In this example the rule trace and its associated deep explanation are given. The system makes the determination that TRUCKING is the transportation mode that should be selected.

IF load type = (very-large-shipment or large-shipment)
AND haul type = medium-fast
THEN transportation mode = TRUCKING

The deep explanation associated with this rule trace is as follows:

DEEP EXPLANATION OF THE SYSTEM RECOMMENDATION

Load Type: For large to very large shipment sizes, RAIL and TRUCKING are the preferred transport modes. RAIL tends to move large and very large shipment sizes of at least a full carload (greater than 15,000 lbs.), due to the heavy load capability of railroads, as well as the cost structure associated with railroads (i.e., high fixed costs and low variable costs). TRUCKING is the most versatile of the shipment modes, handling small, average, large, and very large loads. AIR service, on the other hand, is generally restricted to average and small shipments since it is more constrained by the physical dimensions of a cargo plane.

Haul Type: In choosing between RAIL and TRUCKING, if haul type = medium-fast, then the transport mode of choice is TRUCKING. This is because, between the two choices, TRUCKING can move cargo faster than RAIL over medium-distance hauls.

The first fragment of the deep explanation is based on the value given by the load type. Based on the load type alone, Expert Strategy narrows the candidate choices to RAIL and TRUCKING. A domain principle is then provided explaining why the system has narrowed the decision to two options. The second fragment is based on the value given by haul type. Expert Strategy further narrows the decision by selecting TRUCKING, and gives its associated explanation. Hence, the deep explanation is given in fragments, explaining each of the clauses in the premise of the rule, one by one, until the final recommendation can be explained. Appendix A provides more examples of deep explanation fragments of the transportation mode selection problem.

A few observations about this deep explanations facility are in order at this point. First, the facility will create textual fragments for each clause in an IF-THEN rule. The decision to utilize this approach was made so that these fragments could be reused for other rules. For example, the following rule makes the recommendation that TRUCKING should be used when the loss and damage rating is considered instead:

IF load type = (very-large-shipment or large-shipment)
AND loss and damage rating = (very-important or important)
THEN transportation mode = TRUCKING

Rather than create a deep explanation from scratch, we are able to reuse the fragment from the previous example, which provides a textual explanation for the load type clause, when it is very-large-shipment or large-shipment. However, now Expert Strategy uses another fragment for the loss and damage rating, tailored to this specific case:

In choosing between RAIL and TRUCKING, if the loss-and-damage-rating = very-important or important, then the transport mode of choice is TRUCKING. This is because trucking is a more secure way of transporting goods than rail is, due in part to the longer transit times involved in shipping goods by rail, as well as the increased handling associated with transport by RAIL.
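The assembly of these explanations can be pictured with a small sketch. The following Python fragment, with a hypothetical fragment store and function names (it is not the actual Expert Strategy code), shows one way an explanation might be composed clause by clause, with reusable fragments keyed by the variable and value appearing in each clause.

# Fragments are keyed by (variable, value) so that they can be reused across rules.
# Texts are abbreviated here; the full wording appears in the examples above.
FRAGMENTS = {
    ("load type", "very-large-shipment or large-shipment"):
        "For large to very large shipment sizes, RAIL and TRUCKING are the preferred transport modes. ...",
    ("haul type", "medium-fast"):
        "In choosing between RAIL and TRUCKING, if haul type = medium-fast, "
        "then the transport mode of choice is TRUCKING. ...",
    ("loss and damage rating", "very-important or important"):
        "In choosing between RAIL and TRUCKING, if the loss-and-damage-rating = very-important "
        "or important, then the transport mode of choice is TRUCKING. ...",
}

# A designer may override the clause-by-clause default with a single, hand-written
# explanation when the clauses of a rule interact.
OVERRIDES = {}


def deep_explanation(premise_clauses):
    """Assemble a deep explanation, one fragment per clause in the rule's premise."""
    key = tuple(premise_clauses)
    if key in OVERRIDES:
        return OVERRIDES[key]
    parts = []
    for variable, value in premise_clauses:
        fragment = FRAGMENTS.get((variable, value))
        if fragment:
            parts.append(f"{variable.title()}: {fragment}")
    return "\n\n".join(parts)


# The first TRUCKING rule shown above, explained clause by clause:
premise = [("load type", "very-large-shipment or large-shipment"),
           ("haul type", "medium-fast")]
print(deep_explanation(premise))

Reusing the load-type fragment for the second TRUCKING rule then amounts to calling the same routine with a different premise list, which is what keeps explanations consistent across similar rules.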
In addition to offering the possibility of reusing fragments, this method of explanation creation also provides consistent explanations for similar situations. On the other hand, this method is clearly not suitable for all situations. In particular, there may be interactions among the clauses in a rule, so the fragment-by-fragment approach may not work well for these cases. To remedy this problem, Expert Strategy allows the designer of deep explanations to override the (default) fragments and create a more coherent explanation that can explain across the clauses.

5.7 Variations of Data Structures: Trees, Networks, Bills of Materials, and Breadth Hierarchies

The transportation mode selection problem described in Section 5.2 considers a simple hierarchy in which a single root node (the system recommendation) is decomposed into sub-problems, each of which is further decomposed into more components, until a multi-leveled structure of the problem-solving domain is constructed. In this section, other possibilities are described so that graphical structures may be developed for a wider class of problems. Indeed, many problem-solving domains do not fit into neat and tidy hierarchic descriptions. Furthermore, there is the problem of whether these hierarchic models will scale up to more complex domains.

It is useful, at this point, to define more precisely what a hierarchy is. Simon (1996) defines it as "a system that is composed of interrelated subsystems, each of the latter being in turn hierarchic in structure until we reach some lowest level of elementary subsystem" (p. 184). In addition, Simon notes that the strict definition of hierarchy assumes that "each of the subsystems is subordinated by an authority relation to the system it belongs to" (p. 185). In other words, a formal hierarchy consists of a "boss" and a set of subordinate subsystems.

The transportation mode selection problem fits this formal definition since the root node (the system recommendation) is composed of five subsystems—load type, loss and damage rating, haul type, modes capable, and flexibility rating—and these five subsystems are subordinated to the root node. The five subsystems, in turn, are decomposed further into their own subsystems until the lowest level of elementary subsystem is reached—i.e., the input variables themselves.

Kroenke (2000) distinguishes among trees, networks, and bills of materials. Trees are data structures that fit the formal definition of a simple hierarchy: the elements of the structure have only one-to-many relationships with one another. That is to say, each element, with the exception of the root node, has at most one parent, but a parent can have multiple child nodes attached to it. These are equivalent to the simple hierarchies defined earlier. Networks are data structures in which the elements can have more than one parent (i.e., data structures allowing for multiple inheritance). Bills of materials are hierarchies in which a node can appear more than once in the data structure. This type of data structure is common especially in manufacturing applications in which you wish to depict the relationships between products and the parts that they constitute.
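As a rough illustration of how these three structures differ (a sketch only; the type and field names are hypothetical, not drawn from Kroenke or from Expert Strategy), the essential difference lies in how parent links are represented:

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class TreeNode:
    """Simple hierarchy (tree): every element except the root has exactly one parent."""
    name: str
    parent: Optional["TreeNode"] = None
    children: List["TreeNode"] = field(default_factory=list)

@dataclass
class NetworkNode:
    """Network: an element may have more than one parent."""
    name: str
    parents: List["NetworkNode"] = field(default_factory=list)
    children: List["NetworkNode"] = field(default_factory=list)

@dataclass
class BomEntry:
    """Bill of materials: the same part may appear under many assemblies,
    each appearance carrying its own quantity."""
    part: str
    quantity: int = 1
    components: List["BomEntry"] = field(default_factory=list)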
Clearly, not all problem-solving situations can be described using simple hierarchies or trees, as in the transportation mode selection problem described in the previous section. This section will employ a Help Desk application to illustrate the need for more flexibility in creating graphical descriptions of rule bases. Help Desk systems are a fairly common application of expert systems technology (see e.g., Zhao, Leckie, and Rowles, 1996; Acorn and Walden, 1992; and Logan and Kenyon, 1992). These expert systems are used to assist customer support technicians in resolving a customer's problems. As Logan and Kenyon (1992) observe, these application domains are characterized by great problem variety: first, because help desks support multiple domains (oftentimes, difficult problems must be referred to a particular expert in the company); and second, because help desks are increasingly being asked to support a number of products and services. The trees described previously cannot account for this wide problem variety.

One solution to this problem is to develop hierarchic models that depict greater breadth in the problem-solving situation. Such hierarchic models are referred to as breadth hierarchies. These breadth hierarchies tend to be wide, as opposed to deep (i.e., multi-leveled). Furthermore, these breadth hierarchies may contain a number of root nodes.1 That is, rather than reaching a single conclusion or recommendation, they contain several topmost nodes that reach different types of conclusions, depending on the type of problem solved.

1 Strictly speaking, these breadth hierarchies are not hierarchies, or not as defined previously. They are more like an amalgam of hierarchies, as they can be subdivided into several hierarchic structures.

A simple example was developed to demonstrate some of the issues involved in modeling a problem-solving domain characterized by great problem variety. The discussion that follows describes a fictional help desk application for a technical support staff, which must respond to customer problems with a software product that they support. See Figures 5.3a and 5.3b for the graphical model developed for this system.

With the help of the expert system, the technical support personnel must determine the cause of the problem. First, they enter inputs into the system:

1. Error code (if any) the customer sees on the screen
2. Operating system (Windows 3.1, Windows 98, Windows NT, Unix, Other)
3. Can you print? (yes, no)
4. What is the processor type of your computer? (486, Pentium, Pentium II, Other)

From these initial inputs, the system attempts to determine what type of problem is involved (printer problem, memory problem, incorrect installation, other problem). Based on this identification, the system requires the technician to enter more information. For example, if the system has identified the problem as a memory problem, it needs information about 1) hard disk capacity and 2) RAM capacity, as well as the operating system (already entered from the initial inputs node). From this information, the system makes a memory problem determination. (See the branch of the hierarchic model in Figure 5.3a which determines the memory problem recommendation.)

From this example, we notice a few differences between the breadth hierarchy and the simple hierarchy (or tree). First, not all the nodes of the breadth hierarchy feed upwards to a single root node.
In fact, most of the nodes in the breadth hierarchy are irrelevant to the specific problem being addressed by the system, leading to sparsely populated structures, in which only a few branches are actually utilized for a given problem at hand. Second, nodes are repeated in the breadth hierarchy for greater clarity to the end-user. For example, the operating system node is displayed multiple times (as an input, as a variable for printer problem, and as a variable for operating system). In a formal hierarchy, this cannot be the case.

Notice also the addition of the other node (see Figure 5.3a) in this hierarchic model. There is a practical reason as well as an application-oriented reason for the inclusion of this node. The practical reason is that the entire hierarchy will not fit on one screen (due to its great width). The end-user may click on the other node, and as a result, the system will present the user with a second screen of information, as given by Figure 5.3b. The more application-oriented reason for this is that printer problems, memory problems, and incorrect installation may be the most common problems of the system, and so all other problems are relegated to a second screen. Hence, the technician using this system is looking at a hierarchic model depicting only the most common problems. To get to the less common problems, the user must click on "other" to retrieve other problem areas.

[Figure 5.3a: Breadth hierarchy for the help desk application (most common problems)]
[Figure 5.3b: Breadth hierarchy for the help desk application (second screen: other problems)]

The problem of fitting a large number of nodes on a single screen raises the question of whether hierarchic descriptions will scale up to more complex and larger problem-solving domains. A designer who must create graphical models of large domains must consider other alternatives to the problem of displaying large hierarchies that won't fit on a single screen. One alternative is to create the hierarchy on one screen and have the user scroll (either up/down or left/right) to get to the relevant portion of the hierarchy. Another option is to let the user zoom in/out on a single screen full of information: zooming out to get a larger perspective, and zooming in to focus in on the relevant portion of the model. However, scrolling and zooming are clearly limited in terms of promoting an easy-to-use interface—one can easily imagine how frustrating it might become for an end-user who must scroll through pages of a hierarchy to get to the relevant information.

A more effective approach would be to subdivide a large hierarchy into separate screens of information. For example, one could put only the top level of the hierarchy on one screen (in this example, this would mean the nodes for printer problem, memory problem, incorrect installation, and other) and have the user click on these nodes to obtain more detail. Such a solution would allow for embedded descriptions of hierarchic models. The choice of how to display large hierarchies, of course, will be based on the task (how the hierarchy will be used), and also the capabilities of the software itself (e.g., does the software support scrolling, zooming in/out, windowing capabilities, etc.).
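A minimal sketch of this embedded-screens idea follows, in Python; the names and the click-driven interaction are hypothetical illustrations rather than a description of any particular implementation.

# Only the top level of a large hierarchy is shown at first; a sub-hierarchy is
# revealed only when the end-user clicks on its top-level node.

class Screen:
    def __init__(self, title, nodes):
        self.title = title
        self.nodes = nodes            # node label -> Screen with more detail (or None)

    def show(self):
        print(f"== {self.title} ==")
        for label in self.nodes:
            print(f"  [{label}]")

    def click(self, label):
        detail = self.nodes.get(label)
        if detail is None:
            print(f"{label}: no further detail")
        else:
            detail.show()             # descend one level of the embedded hierarchy


memory_detail = Screen("Memory problem", {"Hard disk capacity": None,
                                          "RAM capacity": None,
                                          "Operating system": None})
top_level = Screen("Help desk problems", {"Printer problem": None,
                                          "Memory problem": memory_detail,
                                          "Incorrect installation": None,
                                          "Other": None})

top_level.show()
top_level.click("Memory problem")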
Current technologies on the horizon offer even more tantalizing possibilities for displaying complex domains. Animation effects may be employed to highlight which branches of a hierarchic structure are relevant to the current problem. In this manner, such animation effects may be used to deal with the problem of getting lost in a large and sparsely populated hierarchy. 3-D Cone Trees (Robertson, Mackinlay, and Card, 1991) may be used for visualizing large hierarchies. The use of three dimensions can make better use of available screen space, and enable visualization of the whole structure. Furthermore, rotation of three-dimensional treelike structures allows further flexibility, in that end-users can view a hierarchy from a number of different perspectives.

5.8 A Summary of Chapter 5

At its core, the system described in this chapter, Expert Strategy, was developed to build in more transparency and flexibility in the expert system user interface. Expert Strategy was developed in the spirit of expert support systems, in which an intelligent system serves in an advisory capacity to human decision-making, rather than simply relegating all decision-making to the system itself. A number of benefits of Expert Strategy were suggested:

• The hierarchic models provide an explicit structure to an expert system rule base. Not only do they provide convenient high-level segments (or landmarks), but they depict how the segments are interrelated in a hierarchic structure.

• Because they are a graphical representation of an expert system rule base, the hierarchies enable end-users to more easily visualize a line of reasoning; this is easier to understand than a simple listing of rules that were executed (i.e., the rule trace).

• Designers of rule-based expert systems can reap the benefits of object-orientation by creating objects that represent the conceptual entities of a rule base.

• End-users or problem-solvers using Expert Strategy can explore the components of the hierarchy and perform What-if? analysis on any entity of the knowledge base. This allows for a high degree of transparency in the expert system rule base, as well as enabling a more open-ended and flexible interaction with the end-user.

• Deep explanations support, an auxiliary capability of Expert Strategy, enables end-users to request the underlying domain principles behind the system recommendation. Such explanations support can be valuable to the end-user who wants to feel more comfortable with the system recommendations.

The efficacy of interfaces developed using Expert Strategy is the subject of Experiment I, discussed in Chapters 7, 8, and 9. This research study boils down to an empirical question: Will these enhanced interfaces result in improved problem-solving performance? It is believed that the provision of hierarchic structure and deep explanation support will enable end-users to more easily acquire a good mental model of how the rule base is structured. As a result, such end-users will be more effective problem-solvers, able to solve more difficult problems requiring a deeper understanding of the rule base.

Chapter 6
LogNet: Building Logistics Networks Using Model-Based Reasoning Techniques

LogNet is a different type of expert support system than Expert Strategy. Unlike Expert Strategy, it does not model the rule base; rather, it models the actual domain of discourse—the objects of the domain and how they are interrelated to one another.
Also, the models of LogNet are network-based, whereas the models of Expert Strategy are hierarchic. However, both LogNet and Expert Strategy employ graphical models that can be directly manipulated by the end-user, and as a result, furnish them with the explanatory power and transparency of expert support systems. In addition, both systems employ model-based reasoning techniques to implement their capabilities.

This chapter will discuss the implementation details and features of LogNet. Its intent is to illustrate how model-based reasoning techniques can be used to create intelligent interfaces for a logistics network design problem. In addition, this chapter will describe the two versions of LogNet that are the subject of Experiment II: the low-restrictive version of LogNet (Free Form LogNet), in which the user is given the freedom to request the advice-giving procedures in any order; and the high-restrictive version of LogNet (Guided LogNet), in which the user must use the procedures in a prescribed, pre-specified order. Chapter 9 will discuss, in more detail, the research design of Experiment II, a study of how system restrictiveness can enhance the explanatory power of a user interface.

LogNet is a system that aids in the design of business logistics networks. The definition of Strategic Logistics Management, as provided by the Council of Logistics Management (1986), is:

the process of planning, implementing and controlling the efficient, cost-effective flow and storage of raw materials, in-process inventory, finished goods, and related information from point-of-origin to point-of-consumption for the purpose of conforming to customer requirements.

The coverage of this area is broad, encompassing business problems that concern customer service, traffic and transportation, warehousing and storage, plant and warehouse site selection, inventory control, order processing, procurement, material handling, and demand forecasting, to name some of the major areas of study within the field (Lambert and Stock, 1993). The selection of business logistics network design as the application domain was intentional, since there are many problems in logistics strategy planning that could benefit from a model-based graphical visualization. Moreover, there is a paucity of research in business decision-making problems utilizing model-based reasoning techniques, since most of the research is focused on the diagnosis of malfunctioning physical devices.

6.1 The Domain Object Model

At the heart of LogNet is the domain object model, or the network configuration model. One way to view and model a logistics environment is as a network of nodes interconnected by transportation links. The problem of specifying the model would be one of specifying the network structure through which manufactured goods flow. To model the logistics environment, three general types of nodes are considered:1 first, the
1  155  factories, where the products are manufactured; second, the warehouses, which receive the finished products from the factories for storage and possibly for further processing; and third, the customer zones (or markets), which place orders and receive the desired products from the assigned warehouse(s). Product moves through the logistics network via different transportation options (e.g., rail, trucking, shipping, air), which are represented by the connections or links between the nodes. There are two types of transporation links: inbound links move products from factories to warehouses, and outbound links move products from warehouses to customers. See Figure 6.1 for an example of a logistics network model created using LogNet. Squares represent factories, circles represent warehouses, and triangles represent customer zones.  Figure 6.1:  A Network Configuration Model of a Logistics Environment Using LogNet  156  LogNet enables end-users to build, directly manipulate, and inspect components of the network model. For instance, different network configurations can be created, and such performance measures as cost and customer service levels can be obtained. In addition, end-users may click on components of the network to modify their attribute value(s), and test these modified values against the network benchmarks. Some of the managerial decisions that might be considered include the following:  •  Facility Location: How many warehouses are needed? Where should these warehouses be located and to which customer market(s) should they be assigned to? Also, which factory(s) should source each warehouse?  •  Warehouse Sizing: What size should the warehouses be (i.e., how much inventory , should each warehouse be capable of holding)? Also, should the firm rent public warehousing space to provide for inventory requirements during peak season periods?  •  Transportation Mode Selection: What type of transportation mode (e.g., rail, truck, and air) should be selected to move goods from one node to the next? How do cost structures and quantity discounts offered by the transportation carriers affect the network configuration?  •  Dynamic Planning: Given that demand and cost patterns shift over time, how should future needs be accounted for? How can a company make its logistics decisions based on longer time horizons?  LogNet was developed as a constrained system that solves an artificial facility location problem; specifically, LogNet concentrates on the warehousing and distribution aspects of logistics planning. For this decision-making problem, LogNet will aid the user in . deciding how to create an appropriate network model, one that satisfies a certain customer service level, at the lowest possible total cost (more about these benchmarks later in this chapter).  157  Attributes of each component and transportation link. Each component on the network configuration model represents a selectable object (factory, warehouse, or customer zone). Point-clicking on any component of the object model enables the user to modify any attributes for the given object. In addition, users may click on transportation links to display and modify attribute values. See Figure 6.2 for attributes of each object type.  o  Attributes for Factory:  Attributes for Warehouse:  Attributes for Customer Zone:  Monthly-demand (no. uni s) Capacity (no.units) Utilization (%)  Monthly-demand (no. units) Capacity (no. 
units) Montly-fixed-costs ($) Monthly-handling-costs ($) Inventory-carrying-cost ($)  Monthly-demand (no.units) Yearly-growth-rate (%)  Attributes for Transportation Link: Transport-mode (air, shipping, rail, truck) Quantity-transported (no. units) Transport-mode-type (inbound, outbound) Distance-between-nodes (in miles) Origin-node (city) Destination-node (city)  Figure 6.2:  Attributes of Each Object Type of a Logistics Network Model  158  6.2 An Illustration of Model-based Reasoning on a Network Model Spatial reasoning is possible to the extent that LogNet contains information concerning distances between all nodes. See Table 6.1 for a sample of a seven-node system of selected U.S. cities. Distances are road miles between two cities. Whenever two nodes are connected by the user, the system automatically updates the distance-between-nodes attribute for that transportation link.  A limited form of spatial, model-based reasoning is possible through this table mechanism. Consider the following situation: A user adds a warehouse node to the network configuration. The system can automatically detect which customer zone(s) should be assigned to this new warehouse through the invocation of this multi-step procedure:  (1)  a new warehouse i s added t o t h e network model T H E N t h e network model and F O R E A C H customer-zone do the following: (3a) I F t h e d i s t a n c e between t h e customer-zone and t h e warehouse c u r r e n t l y connected t o t h i s customer-zone i s g r e a t e r than t h e . d i s t a n c e between the customer-zone and t h e new warehouse T H E N (3b) d e l e t e the c o n n e c t i o n between t h e customer-zone and the warehouse c u r r e n t l y connected and (3c) connect the customer-zone and t h e new warehouse.  WHENEVER  (2)  SCAN  Atlanta Atlanta Chicago Dallas Denver Los Angeles New York Seattle  0 674 795 1398 2182 841 2618  Chicago  674 0 917 996 2054 802 2013  Dallas  795 917 0 781 1387 1552 2078  Denver  1398 996 781 0 1059 1771 1307  Los Angeles  New York  2182 2054 1387 1059 0 2786 1131  Table 6.1: Road Mileage Between Selected U.S. Cities  159  841 802 1552 1771 2786 0 2815  Seattle  2618 2013 2078 1307 1131 2815 0  Analysis of the procedure illustrating model-based reasoning: The above procedure consists of three parts—labeled (1), (2), and (3). Part (1), the Whenever part of the procedure, detects when a specified event has occurred, in this case, whenever the user has added a new warehouse node to the network model. When this has occurred, part (2) executes, which scans the current network model and checks each customer zone node, one at a time. For each customer zone, the if-then rule executes—part (3) of the procedure. The antecedent (part 3a) checks if the customer-zone is currently assigned to a warehouse that is farther away than the new warehouse site (as obtained through the table mechanism described above). If this is true, the old connection is deleted (par