THE AETIOLOGY OF ERROR: COGNITIVE PROFILING EVENTS WITHIN THE MINING INDUSTRY

by

Douglas E Sweeney, P. Geol., P. Eng.

M.Sc. Mining Engineering, University of Alberta, 2004
B.Sc. Mining Engineering, University of Alberta, 1994
B.Sc. Geology, McMaster University, 1983

A DISSERTATION SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY

in

THE FACULTY OF GRADUATE STUDIES (Mining Engineering)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

December 2009

© Douglas E Sweeney, 2009

ABSTRACT

The investigation of accidents, incidents and other unintended events in the workplace continues to evolve in the mining industry, as it has in other heavy industries. Traditional investigation approaches are grounded in causation: the determination of cause-and-effect relationships manifested by the evidentiary record. This approach, while intuitive and widely accepted, does not encompass the more distal elements of causality, such as the influence of cognitive error and the perception of risk. This research examines the role of cognitive error in the decisions that contribute to events, the nature of these errors, and how they are indicative of organizational culture. The main objective of this research is to develop and evaluate a cognitive error tool that can be used in the analysis of events within the mining industry. The existing taxonomies are few, and they are not available in a robust and structured model easily applied by accident investigators. This research seeks to address this gap by offering a theory of event causality based upon decision errors (Decision Error); a taxonomy of decision errors (Lost Error); and a model for profiling cognitive error (Cognitive Profiling). Further, through cognitive profiling, it will be shown that there is a collective, or distributed, cognition that exists as a precursor to an event and that has not heretofore been addressed by conventional causation modelling of events in the mine workplace.

This research contributes to the field of human error analysis by proposing a taxonomy based upon decision errors, and to the field of cognitive science by examining the role that risk perception plays in cognition within the workplace. It provides a lexicon and an exploratory methodology for determining which events in the mine enterprise are prone to escalation toward disaster, and by what errors in management such outcomes can be triggered. This research also contributes to the field of accident theory and investigation by expanding the notion of causation to include causality, and by defining accidents, incidents and other events as systems. It is shown that when events and their investigation are considered as systems with inputs, outputs and processes, another system comes into play: the human error system that is antecedent to events. This research challenges the way that events are seen in the mine workplace.

TABLE OF CONTENTS

Abstract
Table of Contents
List of Tables
List of Figures
Glossary
Acknowledgements
Dedication

1 INTRODUCTION
1.1 A Statement of the Problem
1.2 The Question That This Research Will Address
1.3 Scope of Application
1.4 Objectives of the Research
1.5 Significance of this Research
1.6 Motivation for this Research
1.6.1 The Actuarial Toll - OECD
1.6.2 The Actuarial Toll - Canada
1.7 Contributions of this Research
1.8 Innovation
1.9 Quality
1.10 Limitations of this Research
1.11 Originality of Work
1.12 Organization of Work
1.13 Genesis of Concept
1.14 Literature Sources
1.14.1 Texts
1.14.2 Internet Web Searches
1.14.3 Periodical Journals
1.14.4 Digitized Databases
1.14.5 Academic Theses
1.14.6 On-line Discussion Forums
1.15 How to Read This Dissertation

2 Accident Theory
2.1 Causation
2.2 Attributed Causes
2.3 System Decomposition
2.4 Causality
2.4.1 Representativeness Heuristics
2.4.2 Availability Heuristics
2.5 Social Context
2.6 History of Accident Theory
2.6.1 Unequal Initial Liability Theory
2.7 The Sequence-of-Event Model of Accidents
2.7.1 Advantages
2.7.2 Disadvantages
2.8 The Work of Heinrich
2.9 The Work of Bird and Germain
2.10 The Epidemiological Model of Accidents
2.10.1 Advantages
2.10.2 Disadvantages
2.11 The Work of Haddon
2.11.1 Hosts
2.11.2 Agents
2.11.3 The Environment
2.12 Perspective
2.13 Systems Model for Accidents
2.14 The Work of Reason
2.14.1 Coupling
2.14.2 Interaction
2.14.3 Self-organized Criticality
2.15 Injury Compensation Models
2.15.1 Emerging Workers' Compensation in Canada
2.16 Why Model?
2.16.1 Attribution Theory
2.17 Causation and Perspective
2.17.2 Events as Systems
2.17.3 Error Analysis as a System
2.18 Conclusions

3 Cognitive Science
3.1 The Cognitive Mill
3.1.1 Cognitive Dissonance
3.1.2 Self-Justification
3.1.3 Cognitive Consonance
3.1.4 Risk Polarization
3.2 Group Cognition
3.2.1 Distributed Cognition
3.3 Discussion
3.4 Conclusions

4 Human Error Theory
4.1 Introduction
4.2 The Nature of Human Error
4.2.1 The Tenet of Intent
4.2.2 The Tenet of Tolerability
4.2.3 The Tenet of Consequence
4.2.4 The Notion of Failure
4.3 Human Reliability Assessment
4.4 Taxonomies
4.4.1 Error Genotypes
4.4.2 Error Phenotype
4.4.3 Exogenous and Endogenous Errors
4.5 An Age of Error
4.6 Existing Taxonomies
4.6.1 Cognitive Reliability and Error Analysis Method
4.6.2 Skill – Rule – Knowledge Model
4.6.3 Seven Stages of Action Model
4.6.4 The Generic Error Modelling System
4.7 Knowledge-based Error Mechanisms of Failure
4.7.1 Selectivity Bias
4.7.2 Workspace Limitations
4.7.3 Out-of-sight, Out-of-mind
4.7.4 Confirmation Bias
4.7.5 Over-confidence
4.7.6 Biased Reviewing
4.7.7 Illusionary Correlation
4.7.8 Halo Effects
4.7.9 Problems with Causality
4.7.10 Problems with Complexity
4.7.11 Problems in Diagnosis
4.8 Discussion
4.8.1 Decision Error
4.9 Conclusion

5 Decision Error Theory
5.1 Decision Errors Defined
5.1.1 Nomenclature
5.1.2 Errors of Commission
5.1.3 Errors of Omission
5.1.4 Error of Mistaken Belief
5.1.5 System Error
5.1.6 Unintended Consequences
5.1.7 Decision Error Logic
5.1.8 Standards of Care
5.1.9 Duty of Care
5.2 Lost Error Taxonomy
5.2.1 Situational Awareness
5.3 Decision Error Analysis
5.3.1 Radar Diagram Structure
5.3.2 Characterization of Actors
5.3.3 Radar Diagram Principles
5.4 Decision Error Analysis Tutorial
5.4.1 A Hypothetical Event Scenario
5.4.2 Compromised Standards of Care
5.4.3 Observations: Decision Error Analysis
5.4.4 Discussion
5.4.5 Decision Error Analysis Critique
5.5 The Cognitive Profiling Methodology
5.5.1 Cognitive Deficit
5.5.2 Cognitive Dissent
5.5.3 Cognitive Deferral
5.5.4 Cognitive Profiling Tutorial
5.5.5 Observations: Cognitive Profiling
5.5.6 Interpretation
5.5.7 Significance
5.5.8 Characteristic Cognitive Profiles
5.6 Conclusions

6 Historical Case Studies
6.1 Introduction
6.2 Methodology
6.2.2 Scope of Analysis
6.2.3 Nomenclature
6.2.4 Limitations and Bias
6.3 Westray Mine Disaster
6.3.1 Nature of the Enterprise
6.3.2 Summary of Events
6.3.3 Parties to the Enterprise
6.3.4 Consolidated Findings of the Commission
6.3.5 Decision Error Analysis of the Westray Disaster
6.3.6 Cognitive Profiling of the Westray Disaster
6.3.7 Mission Criticality
6.3.8 Significance and Outcomes
6.4 Piper Alpha Production Platform Disaster
6.4.1 Chronology of Events
6.4.2 Parties within the Enterprise
6.4.3 Consolidated Findings
6.4.4 Decision Error Analysis of the Piper Alpha Disaster
6.4.5 Cognitive Profiling of the Piper Alpha Disaster
6.5 The Ocean Ranger Platform Disaster
6.5.1 Parties within the Enterprise
6.5.2 Chronology of Events
6.5.3 Consolidated Findings
6.5.4 Decision Error Analysis of the Ocean Ranger Disaster
6.5.5 Interpretation
6.5.6 Cognitive Profiling of the Ocean Ranger Disaster
6.5.7 Significance and Outcomes
6.6 Sunshine Mine
6.6.1 Nature of the Enterprise
6.6.2 Chronology of Events
6.6.3 Parties within the Enterprise
6.6.4 Consolidated Findings
6.6.5 Decision Error Analysis of the Sunshine Mine Disaster
6.6.6 Cognitive Profiling of the Sunshine Mine Disaster
6.6.7 Mission Criticality
6.6.8 Significance and Outcomes
6.7 The Balmoral Mine Disaster
6.7.1 Nature of the Enterprise
6.7.2 The Event
6.8 Mine Workings
6.9 Chronology of Events
6.9.1 Consolidated Findings
6.9.2 Decision Error Analysis of the Balmoral Mine Disaster
6.9.3 Cognitive Profiling the Balmoral Mine Disaster
6.9.4 Mission Criticality
6.9.5 Significance and Outcomes
6.9.6 Interpretation
6.10 Mission Criticality
6.10.2 Quantification of Mission Criticality
6.10.3 Mission Criticality as a Predictive Tool
6.11 Case Synopses
6.12 Significance
6.13 Conclusion

7 Contemporary Field Research
7.1 Methodology
7.1.1 Data Collection
7.1.2 Qualitative Risk Assessment
7.1.3 Semi-quantitative Risk Assessment
7.1.4 Data Validation
7.1.5 Cost Valuation
7.1.6 Statistical Analysis
7.2 Analysis
7.2.1 The First Cohort
7.2.2 The Second Cohort
7.2.3 Number of Events per Calendar Month
7.2.4 Data Correlated With Respect to Mine Department
7.2.5 Data Correlated With Respect to Mechanism of Injury
7.2.6 Data Correlated With Respect to Occupation
7.2.7 Data Correlated With Respect to Job Experience
7.2.8 Data Correlated With Respect to Event Cost
7.2.9 Data Correlated With Respect to Hour of Day
7.3 Conclusions

8 Conclusions and Contributions
8.1 Aetiology of Error
8.1.1 Decision Error Theory
8.1.2 Decision Error Analysis
8.1.3 Cognitive Profiling
8.1.4 Discussion
8.2 Contribution to the Field
8.3 Critique of Decision Error Theory
8.4 Critique of Lost Error Taxonomy
8.5 Critique of Decision Error Analysis
8.6 Critique of Cognitive Profiling
8.7 Arguments for Further Work
8.8 Statement of Accomplishment

9 REFERENCES

LIST OF TABLES

Table 2-1: Table illustrating the Haddon Matrix as applied to an event in the mine workplace
Table 2-2: Table summarizing the five Meredith principles for workers' compensation funds
Table 3-1: Table summarizing symptoms of Groupthink as enumerated by Janis (1982)
Table 4-1: Table illustrating the complementary nature of HRA and causation attribution
Table 4-2: Table of unprecedented disasters defining the 1970s and 1980s high-technology era
Table 4-3: Taxonomy of human errors and their performance levels (UK P&I Club, 2008)
Table 5.1: Table specifying the various workplace parties making decision errors contributing to the event scenario
Table 5.2: Table summarizing the distribution of the decision errors by the workplace parties
Table 6-1: Table summarizing the parties to the Westray enterprise and their roles
Table 6-2: Table summarizing the standard or duty of care not met by the enterprise parties
Table 6-3: Table summarizing the 64 decision errors contributing to the Westray Mine disaster according to enterprise party
Table 6-4: Table summarizing the distribution of the Westray decision errors by the workplace parties
Table 6-5: Table summarizing mission criticality elements associated with the Westray disaster
Table 6-6: Table summarizing the parties to the Piper Alpha enterprise and their roles
Table 6-7: Table summarizing the standards of care not met by Piper Alpha enterprise parties
Table 6-8: Table summarizing the 33 decision errors contributing to the Piper Alpha disaster according to enterprise party
Table 6-9: Table summarizing the distribution of the Piper Alpha decision errors of the enterprise parties
Table 6-10: Table summarizing criticality elements associated with the Piper Alpha disaster
Table 6-11: Table summarizing the roles and responsibilities of the Ocean Ranger parties
Table 6-12: Table summarizing the standards of care not met by Ocean Ranger enterprise parties
Table 6-13: Table summarizing the 38 decision errors contributing to the Ocean Ranger disaster according to enterprise party
Table 6-14: Table summarizing the distribution of the Ocean Ranger decision errors by the enterprise parties
Table 6-15: Table summarizing criticality elements associated with the Ocean Ranger disaster
Table 6-16: Table summarizing the roles and responsibilities of the Sunshine Mine parties
Table 6-17: Table summarizing the 26 decision errors contributing to the Sunshine Mine disaster according to enterprise party
Table 6-18: Table summarizing the distribution of the decision errors by Sunshine Mine enterprise parties
Table 6-19: Table summarizing criticality elements associated with the Sunshine Mine disaster
Table 6-20: Table summarizing the 35 decision errors contributing to the Balmoral mine disaster according to enterprise party
Table 6-21: Table summarizing the distribution of the decision errors by Balmoral mine enterprise parties
Table 6-22: Table summarizing criticality elements associated with the Balmoral mine disaster
Table 7-1: First cohort of data according to year of investigative report
Table 7-2: Table summarizing data from the second cohort with respect to calendar month
Table 7-3: Table summarizing data from the second cohort with respect to mine department
Table 7-4: Table summarizing data from the second cohort with respect to mechanism of injury
Table 7-5: Table summarizing data from the second cohort with respect to worker occupation
Table 7-6: Table summarizing data from the second cohort with respect to job experience
Table 7-7: Table summarizing data from the second cohort with respect to event cost

LIST OF FIGURES

Figure 1.1: Schematic illustrating 'completing the loop' of accident investigation (Sweeney, 2004)
Figure 1.2: Diagram illustrating the various disciplines contributing to this research
Figure 1.3: Diagram illustrating the organization of this dissertation into its three constituent parts
Figure 2.1: Schematic illustration of the Sequence-of-Events Model
Figure 2.2: The original accident triangle depicting injury ratios (Heinrich, 1931)
Figure 2.3: The original Domino Theory of accident causation (Heinrich, 1931)
Figure 2.4: Graphic illustration of the Safety Management model (Heinrich et al, 1980)
Figure 2.5: Graphical illustration of the Loss Causation model (Bird and Germain, 1974)
Figure 2.6: Bronfenbrenner's epidemiological model of illness and injuries (Runyan, 2003)
Figure 2.7: Illustration of Reason's 'Swiss cheese' human systems model
Figure 2.8: Schema of mapping enterprises by system interaction and coupling
Figure 2.9: Diagram illustrating the value of shifting investigative perspective
Figure 2.10: Diagram illustrating the broad spectrum of determinants comprising causality
Figure 2.11: A schematic illustration of the recursive nature of systems applied to investigation
Figure 2.12: Schematic 'completing the loop' of accident investigation (Sweeney, 2004)
Figure 2.13: A schematic illustration of the recursive nature of systems applied to events
Figure 2.14: A schematic of the recursive nature of systems applied to cognitive profiling
Figure 3.1: Influence of management and worker cognition on behaviour (Sträter, 2005)
Figure 3.2: The cognitive mill model of human cognitive processing (Sträter, 2005)
Figure 3.3: Group cognition model based upon fallibility of barriers (Reason and Sasou, 1998)
Figure 3.4: The role of distributed cognition in accident causation (Busby and Hughes, 2003)
Figure 4.1: Schematic illustrating an HRA model of cognitive decomposition (Hollnagel, 2005)
Figure 4.2: A schema differentiating error genotypes from phenotypes (Whittingham, 2004)
Figure 4.3: Graphical illustration of a techno-social system (Rasmussen and Svedung, 2002)
Figure 4.4: The structured methodology of CREAM in evaluating human error (Hollnagel, 1998)
Figure 4.5: The SRK taxonomy depicting three hierarchical control strategies (Rasmussen, 1983)
Figure 4.6: The decision ladder model for Cognitive Task Analysis (Rasmussen, 1987)
Figure 4.7: Schematic illustrating the Seven Stages of Action taxonomy (Norman, 1981)
Figure 4.8: Schematic illustrating the Generic Error Modelling System taxonomy (Reason, 1990)
Figure 4.9: Schematic illustrating inclusion of error types in the Tripod model (UK P&I Club, 2008)
Figure 5.1: Depiction of the four genotypes of decision error theory (Sweeney, 2004)
Figure 5.2: Flow-chart illustrating the logic of decision error classification and determination
Figure 5.3: A schema illustrating the primacy of standards of care used in this research
Figure 5.4: Lost Error Taxonomy schema, an adaptation of that of Norman (1981)
Figure 5.5: An unpopulated example of a decision error analysis radar diagram (Sweeney, 2004)
Figure 5.6: Hierarchical 'command and control' structure of traditional Canadian hard-rock mines
Figure 5.7: Decision error analysis diagram illustrating the decision errors contributing to an underground mine event resulting in a fatality
Figure 5.8: Diagram illustrating the sequence of analysis in the cognitive profiling methodology
Figure 5.9: A ternary diagram depicting the three cognitive genotypes (Sweeney, 2004)
Figure 5.10: Ternary diagram illustrating the respective cognitive profiles of the workplace parties
Figure 5.11: A series of nine characteristic profiles illustrating cognitive profile prototypes
Figure 6.1: Geographic distribution of the case studies profiled within this dissertation
Figure 6.2: Location of the Westray coalmine in Pictou County, Nova Scotia
Figure 6.3: Decision error analysis diagram for conditions A through H of the Westray Mine disaster
Figure 6.4: Decision error analysis diagram for conditions I through P of the Westray Mine disaster
Figure 6.5: Decision error analysis diagram for conditions Q through Y of the Westray Mine disaster
Figure 6.6: Ternary diagram illustrating the cognitive profiles of the Westray parties
Figure 6.7: Decision error analysis diagram for conditions A through H of the Piper Alpha disaster
Figure 6.8: Decision error analysis diagram for conditions I through P of the Piper Alpha disaster
Figure 6.9: Decision error analysis diagram for conditions Q through V of the Piper Alpha disaster
Figure 6.10: Ternary diagram illustrating the cognitive dispositions of the Piper Alpha parties
Figure 6.11: Decision error analysis diagram for conditions A through H of the Ocean Ranger disaster
Figure 6.12: Decision error analysis diagram for conditions I through P of the Ocean Ranger disaster
Figure 6.13: Ternary diagram illustrating the cognitive dispositions of the Ocean Ranger parties
Figure 6.14: Sectional view of the Sunshine Mine workings and ventilation circuit
Figure 6.15: Decision error analysis diagram for conditions A through F of the Sunshine Mine disaster
Figure 6.16: Decision error analysis diagram for conditions G through K of the Sunshine Mine disaster
Figure 6.17: Ternary diagram illustrating the cognitive dispositions of the Sunshine Mine parties
Figure 6.18: Decision error analysis diagram for conditions A through G of the Balmoral mine disaster (1 of 3)
Figure 6.19: Decision error analysis diagram for conditions H through N of the Balmoral mine disaster (2 of 3)
Figure 6.20: Decision error analysis diagram for conditions O through Q of the Balmoral mine disaster (3 of 3)
Figure 6.21: Ternary diagram illustrating the cognitive dispositions of the Balmoral mine parties
Figure 6.22: Illustration of the Ferderber mine workings and the location of the decedents after the crown pillar failure (Beaudry, 1981)
Figure 6.23: Illustration of the sequence of geo-mechanical failure events occurring in the 2-7 stope of the Ferderber mine (page 1 of 2)
Figure 6.24: Illustration of the sequence of geo-mechanical failure events occurring in the 2-7 stope of the Ferderber mine (page 2 of 2)
Figure 6.25: Ternary diagram illustrating the cognitive profiles of eight historical case studies
Figure 6.26: Fractal-like geometry of error systems dependent upon scale and complexity
Figure 7.1: Graphical illustration of a risk behaviour model comprised of three risk-taking regimes
Figure 7.2: Matrix graphically illustrating risk as a product of event likelihood and consequence
Figure 7.3: Matrix graphically illustrating three increasing zones of risk and uncertainty
Figure 7.4: Scatter plot of the number of accidents/incidents with respect to time
Figure 7.5: Cognitive ternary diagram illustrating decision errors according to calendar month
Figure 7.6: Scatter-plot illustrating percentage of errors of commission by calendar month
Figure 7.7: Scatter-plot illustrating percentage of errors of mistaken belief by calendar month
Figure 7.8: Scatter-plot illustrating percentage of errors of omission according to calendar month
Figure 7.9: Scatter plot of the average effective risk by month with best-fit curve and error bars
Figure 7.10: A scatter plot illustrating mission criticality with respect to calendar month
Figure 7.11: Cognitive ternary diagram illustrating decision errors according to mine department
Figure 7.12: Graphical illustration of mine department ranked by number of events in 2005
Figure 7.13: Graphical illustration of mine department ranked by number of events per FTE
Figure 7.14: Graphical illustration of mine department ranked by average effective risk
Figure 7.15: Graphical illustration of mine department ranked by average job experience
Figure 7.16: Distribution of event cost factor by model and by actual reported
Figure 7.17: Graphical illustration of mine department ranked by average event cost
Figure 7.18: Cognitive profile ternary diagram illustrating decision errors by mechanism of injury
Figure 7.19: Graphical illustration of mechanism of injury ranked by number of events
Figure 7.20: Graphical illustration of mechanism of injury ranked by average effective risk
Figure 7.21: Graphical illustration of mechanism of injury ranked by job experience in years
Figure 7.22: Graphical illustration of mechanism of injury by average event direct cost in dollars
Figure 7.23: Graphical illustration of mission criticality by mechanism of injury
Figure 7.24: Graphical illustration of mine occupation ranked by number of events in 2005
Figure 7.25: Cognitive profile of decision errors compared to occupation
Figure 7.26: Graphical illustration of mine occupations ranked by average effective risk in 2005
Figure 7.27: Graphical illustration of mine occupation ranked by job experience in 2005
Figure 7.28: Graphical illustration of mine occupation ranked by average event cost in 2005
Figure 7.29: Graphical illustration of mine occupation ranked by average mission criticality
Figure 7.30: Cognitive profile of decision errors as compared to job experience (years)
Figure 7.31: Illustration of the variation of frequency of events with respect to hour – a.m.
Figure 7.32: Illustration of the variation of frequency of events with respect to clock hour – p.m.
Figure 7.33: Cognitive profile of decision errors over the years 2002 to 2005
Figure 8.1: Progression of the iceberg principle correlated with time and successive models
Figure 8.2: Cognitive profiling model inclusive of cognitive dissonance as an error genotype

GLOSSARY

Truth is a good dog; but beware of barking too close to the heels of an error, lest you get your brains kicked out.
Samuel Taylor Coleridge (Bartlett, 2000)

A

ACCIDENT: An unplanned event that results in harm to people, damage to property or loss to process (IAPA, 2007).
ACCIDENT CAUSATION: The many factors that act together to cause accidents. They include personal factors, job factors, and lack of management control factors (IAPA, 2007).
ACCIDENT INVESTIGATION: The process of systematically gathering and analyzing information about an accident. This is done for the purposes of identifying causes and making recommendations to prevent the accident from happening again (IAPA, 2007).
ACTOR: Any person who is the originator of a behaviour, decision or action and is party to an accident scenario (this dissertation).
ADMINISTRATIVE CONTROLS: A category of hazard control that uses administrative/management involvement in order to minimize employee exposure to the hazard (IAPA, 2007).
AGENT: Any substance, force, organism or influence that affects the body, a part of the body, or any of its functions. The effects may be beneficial or harmful (IAPA, 2007).
ALARP: An acronym for 'As Low As Reasonably Practicable'. This term represents the level to which workplace risks are controlled to the degree considered practical and achievable (IET, 2007).

C

CODE OF PRACTICE: A set of prescriptive instructions documenting procedures and standards that are requisite to a specific hazard, with such force of intent that failure to comply may result in legal proceedings (IET, 2007).
COMPETENT PERSON: A person who has sufficient skill, knowledge and experience to work safely without continuous direction. They also work within their scope of practice (IET, 2007).
CONSEQUENCE: Outcome or impact of an event (AS/NZS 4360, 2004).
CONTROL: Measures designed to eliminate or reduce hazards or hazardous exposures. Examples include engineering controls, administrative controls and personal protective equipment. Hazards can be controlled at the source, along the path to the worker, or at the worker (IAPA, 2007).
COST: Of activities, both direct and indirect, involving any negative impact, including money, time, labour, disruption, goodwill, political and intangible losses (AS/NZS 4360, 2004).

D

DANGER: The circumstance in which negative outcomes to people, assets, production, reputation or the environment are plausible and reasonably foreseeable (IET, 2007).
DUTY OF CARE: An obligation imposed upon a person or persons requiring that their actions fall within a standard of care towards others that reflects caution, care and prudence consistent with that of a reasonable person (Bruce, 1998).
DUE DILIGENCE: The taking of every precaution reasonable in the circumstances for the protection of the health and safety of workers (IAPA, 2007).

E

ENTERPRISE: A project or undertaking at the economic level involving all parties that govern its success, including but not limited to: federal and local governments, the community, corporate management, operations management, regulatory agencies, contractors, workers and the public at large (this dissertation).
ENVIRONMENT: The surrounding conditions, influences, and forces to which an employee is exposed in the workplace (IAPA, 2007).
ERROR: An act, assertion, omission or belief on the part of an individual or individuals that deviates from a known standard, norm, rule or expectation (this dissertation).
EVENT: Occurrence of a particular set of circumstances (AS/NZS 4360, 2004).

F

FIRST AID INJURY: An injury or illness requiring treatment by a designated first aid professional as per the requirements of the prevailing statutory authority (IET, 2007).
FREQUENCY: A measure of the number of occurrences per unit of time (AS/NZS 4360, 2004).

H

HARM: Any negative outcome, including injury, illness, environmental excursion, financial loss or loss of reputation (this dissertation).
HAZARD: A source of potential harm (AS/NZS 4360, 2004).
HEALTH AND SAFETY PROGRAM: A systematic combination of activities, procedures, and facilities designed to ensure and maintain a safe and healthy workplace (IAPA, 2007).
HUMAN ERROR: This term is used today to include not just workers' errors, but engineering deficiencies and a lack of adequate organizational controls, which together account for the majority of accidents (IAPA, 2007).

I

INCIDENT: An unwanted event which, in different circumstances, could have resulted in harm to people, damage to property or loss to a process. Also known as a near miss (IAPA, 2007).

L

LATENT PERIOD: The time that passes between exposure to a harmful substance or agent and the first sign(s) of damage or illness (IAPA, 2007).
LIKELIHOOD: Used as a general description of probability or frequency (AS/NZS 4360, 2004).
LOSS: Any negative consequence, financial or otherwise (AS/NZS 4360, 2004).
LOSS CONTROL: Measures taken to prevent and reduce loss. Loss may occur through injury and illness, property damage, poor work quality, etc. (IAPA, 2007).

M

MISTAKE: A lapse in judgement or error that results in an unintended consequence (Norman, 1983).
MONITOR: To check, supervise, observe critically or measure the progress of an activity, action or system on a regular basis in order to identify change from the performance level required or expected (AS/NZS 4360, 2004).

N

NATURE OF INJURY: The main physical characteristics of a workplace injury or illness (for example, burn, cut, sprain, dermatitis, hearing loss) (IAPA, 2007).
(IAPA, 2007) NEGLIGENCE: The omission to do something, which a reasonable person, guided upon those considerations which ordinarily regulate the conduct of human affairs would do, or something, that a prudent and reasonable man would not do (IET, 2007).   O ORGANIZATION: Group of people and facilities with an arrangement of responsibilities, authorities and relationships (AS/NZS 4360, 2004)  xx  P  PERSONAL PROTECTIVE EQUIPMENT: Any device worn by a worker to protect against hazards. Some examples are: respirators, gloves, ear plugs, hard hats, safety goggles and safety shoes (IAPA, 2007). POLICY: A documented statement of intent by an organization that compels others to comply with a standard or expectation and for which consequences are implicitly or explicitly set out in the event of non-compliance (this dissertation). PRACTICABLE: Technical feasibility without reference to costs (IET, 2007).  PRESCRIBED: As set out in the regulations under any Act (IAPA, 2007). PROBABILITY: A measure of the chance of occurrence expressed as a number between 0 and 1 (AS/NZS 4360, 2004). PROCEDURE: A step-by-step description of how to do a task, job, or activity properly (IAPA, 2007).  Q  QUALIFIED WORKER: One who is accepted as having the necessary physical attributes, who possesses the required intelligence, training and education, and has acquired the necessary skill and knowledge to carry out the work in hand to satisfactory standards of safety, quantity and quality (IET, 2007).  QUALIFIED PERSON: A person who is accepted as trained in accordance with a known standard, competent to carry out the duties without direction (this dissertation).   R REASON TO BELIEVE: A conviction or belief that does not require empirical support or evidence (IAPA, 2007). RESIDUAL RISK: Risk remaining after implementation of risk treatment (AS/NZS 4360, 2004). RISK: The chance of something happening that will have an impact upon objectives. (AS/NZS 4360, 2004). RISK ACCEPTANCE: An informed decision to accept the consequences and the likelihood of a particular risk (AS/NZS 4360, 2004). RISK ANALYSIS: Systematic process to understand the nature of and to deduce the level of risk (AS/NZS 4360, 2004). RISK AVOIDANCE: A decision not to become involved in, or to withdraw from, a risk situation (AS/NZS 4360, 2004). RISK ASSESSMENT: The overall process of risk identification, risk analysis and risk evaluation (AS/NZS 4360, 2004). RISK CONTROL: That part of  risk management which involves the implementation of policies, standards, procedures, and physical changes to eliminate or minimize adverse risks (AS/NZS 4360, 2004). xxi  RISK EVALUATION: The process used to determine risk management priorities by comparing the level of risk against predetermined standards, target risk levels or other criteria (AS/NZS 4360, 2004). RISK IDENTIFICATION: The process of determining what can happen, why and how something could happen (AS/NZS 4360, 2004). RISK MANAGEMENT: The culture, processes and structures that are directed towards realizing potential opportunities whilst managing adverse effects (AS/NZS 4360, 2004). RISK REDUCTION: Action taken to lessen the likelihood, negative consequences, or both, associated with a risk (AS/NZS 4360, 2004). RISK RETENTION: acceptance of the burden of loss, or benefit of gain, from a particular risk (AS/NZS 4360, 2004). 
REASONABLY PRACTICABLE: A computation made in which the quantum of risk is placed on one scale, and the disadvantages involved in the measure necessary for averting the risk is placed upon the other. A balance between: risk and cost, inconvenience, effect on production (IET, 2007).   S  SAFETY: The absence of risk of injury or asset damage/loss (IET, 2007).  SAFETY AUDIT: Monitoring of the implementation of a safety policy by subjecting each area of an activity to a systematic critical examination with the purpose of minimising loss, and providing a quantified assessment of performance (IET, 2007).  SAFETY CASE: Formal explanation of methods to be adopted to reduce risk of accident often used in high potential risk situations - e.g. Petro-chemical, Nuclear Installations (IET, 2007).  SAFETY COMMITTEE: A committee representative of all staff with the objective of promoting co-operation in investigating, developing and carrying out measures to ensure the health, safety and welfare of the employees (IET, 2007).  SAFETY CULTURE: This term has no widely agreed definition. It may be described as a product of the individual and group values, attitudes, competencies and patterns of behaviour that determine the commitment to, and the style and proficiency of an organisations health and safety programmes (IET, 2007).  SAFETY INSPECTION: Systematic assessment of safety standards for plant, place of work, working. Carried out by a manager and not a safety adviser/engineer (IET, 2007).  SAFETY MANAGEMENT SYSTEM (SMS): Management of Safety in order to promote a strong Safety Culture and achieve high standards of safety performance (IET, 2007).  SAFETY MONITORING: Periodic checks on observance of corporate safety standards and procedures processes or areas (IET, 2007).  STANDARD: A guideline, rule, principle, or model that is used as a means to compare, measure or judge performance, quality, quantity, etc. (IAPA, 2007). W  WORKPLACE: Any place where work is taking place or may be taking place (this dissertation). xxii  ACKNOWLEDGEMENTS    I gratefully acknowledge the support of the faculty and staff at the Norman B. Keevil Institute of Mining Engineering who have provided encouragement and support throughout my studies. In particular, I thank my principal supervisor Professor Malcolm Scoble; and, Dr. Scott Dunbar and Dr. Michael Hitch without whom I would not have had the courage to take this journey – much less complete it.  I also acknowledge the assistance and support of the management and staff of the Highland Valley Copper Mine. It is only through their leap of faith that I was able to secure disclosure of sensitive records and benefit from their high standards of operational excellence spanning decades of mining within the province of British Columbia. They know who they are; I could not have completed this project without their cooperation and counsel. Finally, to my family who have sacrificed so much that I might complete this journey. With unwavering perseverance and humour, you have taught me how to balance family, work, and studies through these last few years. You have my gratitude, respect and love for all that you have endured.   xxiii  DEDICATION    Dedicating this research is a challenge. The horror and sorrow associated with bearing witness to an injury in the workplace is a deeply moving experience. To an extent, this dissertation is a retrospective of a career investigating fatalities and other serious events in the industrial workplace. 
It is difficult to express how documenting the scene of a fatality haunts you; how much it reminds you that in an instant in time someone‘s future is extinguished forever. It is impossible to express on behalf of all the injured; the deceased; and their next-of-kin, how much they would all give to reverse a decision in time – to deny tragedy to the cruel hand of error.  I dedicate this work to all those who no longer have voices and to my parents who sacrificed so much that I might speak for them.  Who can discern his errors?  Forgive my hidden faults (Psalm 19:12)   1    ―Sometimes we may learn more from a man‘s errors, than from his virtues‖ Henry Wadsworth Longfellow (Bartlett, 2000)  1 INTRODUCTION  Fallibility is part of the human condition. Man‘s capacity for error is generally underestimated, but always is a sober reminder that the enterprises for which we toil are not without risk: risk of failure, risk of tragedy, and risk of disaster. This research considers human error from a cognitive perspective. It asks three interrelated questions: can we define the safety culture in terms of group cognition; how does group cognition manifest itself in the workplace; and, what are the benefits of profiling cognitive errors as an evaluative tool in mining related incidents and accidents? The product of this research is an analytical framework by which one can examine, classify and profile events in the mine workplace. The motivation and premise of this research is that an explicative tool is lacking with respect to events in the workplace that provides insight into the safety culture (ethos) of an organization or enterprise.  Mining in British Columbia traditionally has been a leader in workplace safety and environmentally sustainable practices. A new standard of social conscience is emerging within the mining community that expands sustainability to include the interests of the community, the aboriginal first peoples and the public at large. This trend is particularly true in the province of British Columbia, which has had to manage forestry, fisheries, tourism and mining in what is one of the more demanding, socially conservative jurisdictions in which to explore for and extract minerals. Mining builds on a tradition of social responsibility and leadership that has been its legacy. This research takes advantage of the long standing and well-developed standards, norms and statutes that have served mining so well by making it the safest heavy industry within British Columbia (MEMPR, 2005).    2  1.1 A Statement of the Problem As long as there has been mining, there have been events (accidents, incidents, and environmental excursions) in the workplace that are unplanned, unpredictable and always deleterious to the enterprise of mining.  By virtue of the shadow of uncertainty that these events cast upon shareholder confidence, public support,  and regulatory oversight it is clear that these events are unacceptable and no longer considered part of ‗doing business‘. In recent decades, sincere and credible efforts has been made to investigate, analyze and extirpate these events; however, the holy grail that remains elusive is to understand and develop mechanisms for change of the organizational culture - or ethos - that govern these events. The problem therefore is to devise a model that examines the investigative record, and then predicts what human and/or organizational factors support and sustain an ethos of error within the mine enterprise. 
1.2 The Question That This Research Will Address Can we through the back-analysis of events within the workplace, develop a technology that is predictive, heuristic and practical in profiling the psychological precursors and cognitive errors that contribute to accidents and incidents? Secondly, is this a new lens through which we can look at mining enterprises and their organizational structure? If so, to what extent does this research contribute to organizational theory and a path yet to be followed to best management practices? 1.3 Scope of Application The crucible for this research is the mining industry. The principles and precepts are equally applicable to any industry or sector, the common element being human error - the subject of this research. The word event occurs repeatedly in this dissertation, to represent any destabilizing scenario within the enterprise of mining that puts the integrity of the operation at risk. Typically, events include the usual suspects: accidents, incidents, production cessation and environmental excursions. A modern perspective would be remiss were events not to include the less definable occurrences of public outcry and challenge by First Nations. They are increasingly relevant in British Columbia and no less subject to human error.  3  1.4 Objectives of the Research The principal objective described in this dissertation is to introduce a model of error analysis that will promote disclosure and examination of decision errors within the mining enterprise. To this end, a cognitive profiling model is presented as a tool to explore the contribution made by human errors and the organizational precursors that are antecedent to them. Furthermore, through the introduction of taxonomy of these errors based upon contemporary accident theory, this research will provide a framework by which cognitive error can be recognized, classified, and profiled. This objective can be broken down into four goals. This research will: i. Propose a new model for industrial events, one that is inclusive of the back-analysis of accidents and incidents to arrive at the psychological and organizational precursors that contribute to events in the workplace. ii. Provide a link between human error contributing to an event and the standard of care that would be appropriate to mitigate, if not prevent the occurrence. iii. Devise a means by which industry can evaluate the criticality of their mining enterprise and the potential for an event escalating to a disaster. iv. Introduce cognitive profiles commonly associated with organizations experiencing serious events and propose warning signs predictive in their occurrence. 1.5 Significance of this Research The mining industry has entered the second century in which there have been increasing expectations upon operators to demonstrate self-awareness for social responsibility. A concept of sustainable management is emerging that includes numerous new dimensions of awareness: cultural consultation and accommodation, community engagement, regulatory compliance, resource management, public safety, worker health and safety and economic diversification being the notable examples. Each of these dimensions has their own challenges, but all are subject to the immutable laws of risk and uncertainty; notions that this research seeks to examine through the back-analysis of accidents and incidents (events). 
Currently there are few models that have 4  the capacity to evaluate the investigative record; and none of which the author is aware that explicitly considers the contribution made by decision error. The significance of this research is the novel and innovative approach of closing the loop of the accident investigation cycle through the analysis of events in the workplace (Figure 1.1). To be proactive, mining enterprises must recognize the merits of considering decision errors of persons involved in day-to-day operations that have the potential to contribute to an event scenario; and engage these same human resources to become self-aware and adaptive to error control strategies. This research will make the case that human error is not limited to mine operations. Rather, we consider the entire enterprise as a source of human error. This enterprise approach will encourage the evaluation of all the workplace parties (operations, corporate management, unions and the regulators) in a mutual effort to candidly facilitate learning from human error.  Figure 1.1: Schematic illustrating ‗completing the loop‘ of accident investigation (Sweeney, 2004)  5  1.6 Motivation for this Research Accident reports often resort to naming human error (pilot error, operator error) as the ‗cause‘ of an event in the workplace. Frequently considered as a ‗blame‘ setting (Busse, 2002; Storbakken, 2002), such characterizations fail to accurately model events; and often alienate those persons involved in the event scenario. The identification of human error should stimulate a deeper and more probative investigation, rather than arriving at statements of culpability or causation. Still, it is essential to consider human factors in the understanding of workplace events. Identifying human factors in causation should not connote ―human error‖ as the cause of the event. Properly framed, human error can be examined in a less judgmental and incriminating manner that treats persons contributory to an event scenario as a participant in a larger error-forcing system.  There is a lack of appreciation of this possibility in many contemporary investigations, and a paucity of tools or models available to consider this ‗big picture‘. This research addresses this need and provides a methodology by which investigators can evaluate decision errors in the first instance; and provide analysts a tool for the back-analysis of investigations, in the second instance. By taking a candid and objective look at decision errors as symptoms of events in the workplace instead of causes, this research aims to provide a more appropriate and less judgmental lexicon of causality. In doing so, an organization will benefit by instilling within their enterprise social responsibility and personal accountability. Long-term, these organizational traits will translate into fewer errors, less risk and uncertainty and ultimately greater profitability through fewer events. Traditional mining companies typically organize their operations in conservative and predictable structures. An additional need that this research will address is how to examine decision errors within these structures, and seek to understand how the various groups perceive and act on risk. It is anticipated that, in so doing, mine management will gain insight and revelation as to the role risk perception plays in the cognitive processes antecedent to decisions, and adopt effective risk communication and mitigation strategies. 
1.6.1 The Actuarial Toll - OECD
A report to the 27 member nations of the Organisation for Economic Co-operation and Development (OECD, 1989:133-159) reveals that:
i. In 1987, there were over 16,000 fatalities of workers, reported by the OECD Member nations (OECD, 1989:152).
ii. In 1987, there were over 10,000,000 loss time accidents to workers, reported by the OECD Member nations. This is out of the 300,000,000 workers who comprise the reporting population (OECD, 1989:133).
iii. In 1987, the direct accident insurance expenditures represented between 3 and 7 percent of the total social security expenditures, or by another measure 1 to 3 percent of the gross domestic product, depending upon the nation (OECD, 1989:134).
iv. In 1987, 15 percent of the fatalities in mines reporting to the OECD Member nations occurred in Canada. However, the injury rate of Canadian mineworkers was on par with the average of the Member nations (OECD, 1989:144).
1.6.2 The Actuarial Toll - Canada
A report from Human Resources Development Canada (HRDC, 2000:1-48) reveals that:
i. In 1998, statistically on average, every day there were three fatalities of workers in Canada. This represents a ratio of 1 to 18,000 (HRDC, 2000:9).
ii. In 1998, there was a loss time injury every 37 seconds, somewhere in Canada (HRDC, 2000:10).
iii. In 1998, the cost of compensation payments to workers in Canada was $77,500 per minute (HRDC, 2000:10).
iv. In 1998, the percentage of workers in Canada participating in the mining industry was 1.25%. The percentage of workers in the mining industry reporting injuries was 1.07% (HRDC, 2000:12).
1.7 Contributions of this Research
This research flows from, and contributes to, three scientific bodies of knowledge: accident investigation, cognitive science and human error theory (Figure 1.2). Although borrowing liberally from the latter two disciplines, it is the researcher's belief that it is to the former field, that of accident investigation, that this research will make its greatest contribution. In recent decades, the field of accident investigation in the mining industry has evolved and become increasingly effective. The mining industry has both benefited from and contributed to founding principles of accident investigation such as sequence-of-event theory (Heinrich, 1931) and loss control (Bird, 1973). Comparatively, however, the civil transportation, nuclear and medical sectors have made greater progress in incorporating the human factor, arguably because of the criticality and complexity of their respective technologies.
Figure 1.2: Diagram illustrating the various disciplines contributing to this research
The cognitive profiling model presented in this research is predicated upon the salient principle that behaviours in the past are potential predictors of behaviours in the future. Whereas the researcher makes no claim of predicting events with numerical certainty, the utility of this research is the detail with which a future event can and will be described. Collectively, the time of day, seasonal considerations, mechanisms of injury, failure mechanisms, organizational structure, and worker vocation all provide a descriptive profile of what a future event might look like. Further, cognitive profiling will offer the analyst some insight into the cognitive errors that are likely to contribute to an event, and thereby suggest a preventative action or remedy.
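To make the idea of a descriptive profile concrete, the short Python sketch below tallies a set of event records by attribute (hour of day, occupation, and decision error type). The field names and records are hypothetical, and the sketch is only a minimal illustration of the kind of aggregation implied above; it is not the cognitive profiling model itself.

from collections import Counter

# Hypothetical event records; the field names are illustrative only.
events = [
    {"hour": 14, "season": "winter", "occupation": "haul truck driver", "decision_error": "recognition"},
    {"hour": 3,  "season": "winter", "occupation": "millwright",        "decision_error": "judgement"},
    {"hour": 14, "season": "summer", "occupation": "haul truck driver", "decision_error": "judgement"},
]

def profile(records, attribute):
    """Return a frequency count of one attribute across the event records."""
    return Counter(r[attribute] for r in records)

# A simple descriptive profile: which hours, occupations and error types recur.
for attribute in ("hour", "occupation", "decision_error"):
    print(attribute, dict(profile(events, attribute)))

In practice the same aggregation would be run over the full investigative record, with the recurring attribute values forming the descriptive profile of the event most likely to occur next.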
1.8 Innovation
One definition of innovation is the creation and implementation of new processes, products and methods of delivery that result in significant improvements in efficiency, effectiveness and quality (Albury and Mulgan, 2003). The definition resonates with this research insofar as the model presented herein is unique, powerfully adaptive and offers to shift the utility of the investigation of events from reactive and reflective to proactive and predictive. Additionally, decision error theory and cognitive profiling will increase the effectiveness, and therefore the quality, of the investigative process by providing feedback to the workplace parties (at the enterprise level) through the scrutiny of decision errors. It is anticipated that, through such scrutiny, mine operations will become more self-aware and less averse to introspection and organizational change. Lastly, key to innovation is that the new idea or invention is acted on or in some way put into effect. This research will demonstrate that cognitive profiling is an analytical tool that is practical and timely.
1.9 Quality
The quality of this research is not dependent upon statistical validity or the revelation of phenomena. Indeed, the irony is that the inspiration and information upon which this work is founded already exists, in plain sight, in the form of the investigative record. This research flows from years of empirical observation and firsthand experience. In this regard, the proof presages the research and, without biasing the outcome, provides a solid basis on which to situate decision error theory and its attendant cognitive profiling model. To be clear, there is a considerable body of work in the disciplines of accident theory, human error theory and cognitive science from which to draw. The quality of the concept is borne out through case studies and field research that are heuristic and adaptive in nature, as opposed to presuming scientific rigour or precision. Lastly, the quality of this research is evident in the manner by which the risk to mine operations can be measured and ranked as regards their propensity to escalate to disaster.
1.10 Limitations of this Research
This dissertation draws upon existing human error research that is in its early stages of development in specific enterprises other than mining. Predominant among these is research within the civil transportation (Benner, 1995; Bove, 2002; Dekker, 2004), nuclear power (Perrow, 1984) and medical sectors (Haddon, 1980; Reason, 1990, 2005). Research in these fields is strongly conclusive as to the contributions made to accidents and incidents by human error. It is speculative as to what applicability and commonality these findings have to the mining industry, or any heavy industry, as arguably there are distinct differences in culture and risk. Research in human error, regardless of the field of interest, benefits from the pioneering work of such theorists as Rasmussen (1974), Hollnagel (1988), Reason (1990) and Dekker (2004). It is this tradition that this research borrows from, in the belief that the principles and behaviours of people as regards error are universal, whatever the enterprise. The human error studies in the aviation, medical, and nuclear industries are necessarily rigorous, and the available data support such rigour. Whereas mining has a strong history of safety systems and safety culture, the data are not as transparent or sophisticated as those of enterprises that operate in the public domain.
This research therefore is empirical in nature and by necessity draws upon a combination of historical case studies and contemporary research to develop the cognitive profiling model. In this respect, the models presented trade statistical rigour for utility and practicality, and by no means suggest mathematical certainty in respect of their conclusions.
1.11 Originality of Work
This dissertation presents a new and innovative model for considering the contribution made by cognitive error to the provenance of events in the mining industry. It does so with regard to existing human error taxonomies, but does not presume to add to them. Indeed, this research is the product of efforts to simplify and distil principles from the broad field of human error, with due respect to the field of industrial psychology.
1.12 Organization of Work
This dissertation comprises three parts (Figure 1.3). Part 1 is the traditional academic treatise that sets out the purpose, scope and body of knowledge related to this research. Part 2 consists of a series of five case studies applying cognitive profiling to historical disasters in a variety of industrial settings. Each case study is unique and stands on its own merit; however, collectively they serve to show that, regardless of industry type, technology or the nature of the enterprise, similar profiles emerge, indicating commonalities in organizational behaviour. In Part 3, the accidents and incidents from a contemporary operating mine are analyzed, profiled and presented as a field study. The study is a stand-alone report that was submitted to the operating mine in fulfilment of a written non-disclosure agreement.
Figure 1.3: Diagram illustrating the organization of this dissertation into its three constituent parts (Part 1: Thesis Proper; Part 2: Case Studies, five historical disasters; Part 3: Field Study, operating poly-metallic mine)
1.13 Genesis of Concept
The researcher first considered the question of what contributes to events in the industrial workplace in the mid 1980s, as an inspector of mines. At that time, a credible, structured methodology to evaluate serious accidents was needed, the purpose of which was to apply appropriate strategies of intervention and mitigation. It was a simple enough concept. Nonetheless, it was a tall order given the early days of accident theory. Investigators made considerable progress in accident theory over the next decade, to the extent that analytical methods were incorporating epidemiological models and were increasingly widening the scope of cause and effect relationships. Still, there was no methodology for the evaluation of investigations into events on a holistic scale. At the same time, the tolerance and acceptability of accidents and incidents in the mine workplace (and all workplaces) was diminishing. In the mid 1990s, the need for an explicative tool for the evaluation of accident investigations became critical, for the express purpose of adducing whether an event was a result of misfeasance, malfeasance or otherwise. Depending upon the answer to this question, the event was subject to administrative penalties and sanctions – or not. Specifically, a model or tool had to incorporate the following principles:
i. Establishing the duty of care of the parties involved in the event scenario.
ii. Assessing the extent to which the party knew of, or ought to have known of, an applicable standard of care.
iii.
Respectful of persons who were by circumstance making understandable errors, or errors for which there was exculpatory evidence. iv. Classify the errors in such a way that was defensible and had rigour. It became immediately apparent to this researcher that simplicity was the key. What did all of the accidents within his experience have in common? What distinguished an honest error from an error that clearly demonstrated a lack of due diligence? What influence did human factors have, such as fatigue, noise exposure and heat exhaustion? At what point was human error subject to scrutiny? The answer to these questions, and many others, was that persons (parties) contributing to the event scenario made a conscious decision to do something, or not do something, or were somehow impaired or did not have the capacity to make a decision. A typology based upon decision error was born. It was not popular, as the ideology of the day was to distribute culpability for an accident systemically, organizationally, or not at all.  Decision error theory, and its derivation of cognitive profiling, was introduced in a M.Sc. thesis (Sweeney, 2004), in which the focus of study was an evaluative tool for accident and incident investigations. This research flows from, and applies this early research, to the contemporary field study and historical case studies in this dissertation. It will be shown that these case studies offer a well spring of event causal analysis; one that is surprising as a source of evidentiary and analytic record from which to draw upon. 12  These historical case studies will prove to be particularly revealing of those elements of causality, as insidious as they are common in the escalation of events toward disaster. 1.14 Literature Sources The search for sources of literature was conducted over many years; more formally during the years of 2004 through 2009. There are six categories of literature sources. They are: i. Books on the subject matter ii. Internet web searches on the subject matter iii. Periodicals and journals on the subject matter iv. Digitized databases of public domain studies on the subject matter v. Academic theses on the subject matter vi. On-line discussion forums 1.14.1 Texts  The texts pertaining to this subject were purchased on line or borrowed from the library at the University of British Columbia or Thompson Rivers University. Too many to list here, those purchased are the most recently published in the categories of cognitive science and human error theory. Texts on loan from university libraries were most often associated with historical treatments of accident theory and case histories. 1.14.2 Internet Web Searches The internet was used to narrow down and search articles and books from the general to the specific. Google™  was the search engine of choice for general searches; Google™  Scholar, for more specific searches. A number of free academic search engines were also experimented with, with mixed success. They were Wiley Interscience Search®, Infomine®, Web Lens® and Bubl Link®. A subscription online service of JSTOR accessed through Thompson Rivers University library services met with greater success, in the absence of which, access to the journals and articles would have been cost prohibitive. 13  1.14.3 Periodical Journals The periodicals and journals accessed were specific to the domains of interest pertinent to this research. 
They were Cognitive Science Society, Safety Science, Journal of Safety Research, International Journal of Risk Assessment and Management, the Journal of Accident Investigation, the Australian Journal of Mine Safety and the Journal of Organizational Behaviour. All were searched exhaustively to the limit of their availability on-line. 1.14.4 Digitized Databases Increasingly, one can order databases of accident records and public domain documents on line. The two that were particularly useful to this research were the records from the US Department of Labour, Occupational Safety and Health Administration available at http://www.osha.gov/pls/publications/publication.html and those of the province of Manitoba at www.gov.mb.ca/labour/safety. Available products were purchased on compact disk (CD). 1.14.5 Academic Theses Academic theses and treatises pertaining to this research were searched online through a number of academic search engines. They were the Thesis Portal of Canada and the Networked Digital Library of Theses and Dissertations. More productive, were the dedicated web sites offered by respective universities in jurisdictions known for research in the subject matter. The University of British Columbia, University of Glasgow, Ryerson University and the University of Oregon are a few notable examples. They were accessed for the relevancy and volume of research in areas of mining engineering, cognitive research, safety theory, and risk perception respectively. Given the multifaceted nature of this research, there is no shortage of research on the subject matter. Key papers contributing to this research are those of Busse, 2002; Sklet, 2002; Trepass, 2003; Koning, 2006; and Visser, 2007. The work of Massaiu, 2005; Ardvidsson, 2006; Garcia, 2006; Storbakken, 2007 and Bove, 2002 also influenced this research. The search terms were accident causation, workplace cognition, risk perception, human error, accident investigation, error taxonomy and variations thereof. 14  1.14.6 On-line Discussion Forums Accident investigation is a techno-social science, and there is a surprisingly small group of theorists that span the domains of workplace, public safety and public transportation.  A membership-by-invitation on-line discussion group is that of the website Investigating Investigations (©1997-2007) hosted by Ludwig Benner Jr., and accessed at http://www.iprr.org/. The purpose of this site is to ‗advance the state-of-the-art of investigations, through investigation process and research.‘ The forum is a fertile ground for discussion of all things related to the science and art of investigation and posts numerous research papers, journal papers and resources. 1.15 How to Read This Dissertation This dissertation relies heavily upon the medium of graphics. The deep maroon colour is evident in all figures and graphics and is used to identify contributions made by others to this research, and to emphasize important concepts introduced by the researcher. The chapters flow from the general to the specific in support of the conclusions. Words presented in italics are for the purpose of emphasis of concepts introduced by other authors contributory to this research, and concepts that are thematic in this dissertation.   15   ―Error is certainty‘s constant companion. Error is the corollary of evidence. And anything said about truth can equally be said about error: the delusion will be no greater.‖ Louis Aragon (Bartlett, 2000)  2 ACCIDENT THEORY Accident theory is the cornerstone of accident investigation.  
Theory supports the investigative method, and the method supports the analysis of accidents. This distinction is an important one to this research, and other researchers (Benner, 1975; Sklet, 2004; Hollnagel, 2004) have shown that the theory influences the outcome of an investigation. Accident theory has naturally changed over time, and is implicit in emerging investigation technologies. Accident theory and its models reflect the culture and mores of the times. More often evolved than designed, accident models are the product of conditions and constraints of the day, inherently biased by the perceptions and philosophy of the theorist. It is this perception, on the part of both the theorist and the investigator, which is central to the understanding of accident theory. The traditional view of workplace accidents is that of spontaneous occurrences (events) in time (Woodcock, 1989). To most, the causes of these events are by necessity, the raison d’être of investigation. As self-evident as these two assumptions appear to be, they are no longer truisms in emerging accident models. A more contemporary view is that events in the workplace should be referred to as event ‗scenarios‘, as they are more akin to processes, as opposed to singular, spontaneous occurrences in time (Benner and Hendrick, 1987). This notion of ‗event‘ is a long held belief, or perception, that most certainly contributed to the original thinking of accidents as a product of a single cause. Perception colours reality. Similarly, the notion of causes does not pass serious scrutiny in modern models, as there is no consensus or definition of what ‗cause‘ means (Woodcock, 1989; Benner, 1980). The vernacular of cause is no longer both necessary and sufficient to explain event scenarios. A review of 16  contemporary accident models will illustrate that we still seek to understand their provenance, their organizational context - their aetiology. 2.1 Causation Causation is the act or agency that produces an effect (Merriam Webster, 1993). It is without doubt the most misunderstood and misapplied concept in accident theory (Benner, 1985; Woodcock, 1989). Used in the enabling sense (sufficient condition) or in the mandatory sense (necessary condition), scientifically - ‗cause‘ should be considered a stochastic concept. In many cases we cannot say with certainty what, or if, something is a factor of causation and the caveat of ‗balance of probability‘ is applied. Balance of probability implies a variation of Occam‘s razor: ‗that all things being equal the most likely solution is the best.‘ There are a number of principles that apply to accident models and by extension the determination of causation (Huang, 2007). These principles can work in both directions with respect to time. That is, these principles are equally applicable for the purposes of investigation (hindsight), or prevention (foresight). They are attributed causes, system decomposition and causality. 2.2 Attributed Causes If an event occurs in any setting, it is within every person‘s self-interest to know about it and to understand why it happened. Depending upon the setting however, motivations may vary, if not be in conflict. In the workplace, employers often are predisposed to business continuity; organized labour to worker representation; and the regulatory agency to statutory compliance. Naturally, individual motivation will reflect subjective experiences and opinions, and these will colour their objectivity concerning investigation. 
These predispositions or predilections comprise what are attributed causes to an event (Huang, 2007). It is important that they do not become part of the investigative report of record, although they often do. Attribution of cause is instinctual as it is universal. People, regardless of their culture, status or affiliation will seek resolution of events for which they have little insight or control. They will often do so by making assertions of cause and effect which may, or may not be, correct. They draw upon their own experiences in an effort to reconcile, or attribute the cause of one thing as a result of another on the basis of correlation. The resulting model of causation is frequently 17  inaccurate and rarely complete. Causal attribution is the road of good intention that often diverts us away from the destination of understanding and prevention. 2.3 System Decomposition System decomposition is the deconstruction of the system into smaller sub-systems or components (Huang, 2007). Essentially, it is how you eat an elephant – one bite at a time. By breaking down the overall system into smaller and logical pieces, the analysis is more manageable and resources can be allocated accordingly. A mine operation can be broken down into mining, milling and services, and mining further broken down to mine design, mine operations, and mine maintenance. And so on. Any, or all, of the sub-systems can contribute to factors of causation.  2.4 Causality Causality is essentially the principle that one state can affect another state (Huang, 2007). When the effect of state ‗A‘ is the occurrence of state ‗B‘, we can deduce that there is cause and effect. However, state ‗A‘ may be related to state ‗B‘ in a number of ways. State ‗A‘ could be management commitment to environmental sustainability, or lack thereof. State ‗B‘ could be poor worker attitudes toward pollution prevention. Management commitment could be lacking, but this does not necessarily mean it ‗caused‘ the workers to have poor attitudes toward pollution. Societal values, familial values and social-economic considerations may also be an influence.  Causality is complex. It has many dimensions that tend to be simplified and overlooked. People are intrinsically reductionists; we are products of our past, that we tend to overstate; and are poor prognosticators of future complexity, which we understate (Kida, 2006; Van Hecke, 2007). Causality is dependent upon representativeness and availability heuristics (Reason, 1990); principles that explain why we often prepare for too few contingencies of failure. Causality is the more fulsome, albeit less deterministic, manner in which things are related. This is not to say that one thing does not result in another; rather, that the ‗causes‘ are subtle, time dependent and often influenced by unknown and unseen factors that act as catalysts or triggers. This uncertainty, or lack of connectedness associated with causality, is why traditional 18  accident and incident investigations hold so strongly to the notion of causation (cause and effect). 2.4.1 Representativeness Heuristics Representativeness heuristics (RH) is the principle that we limit our perception of causality to causes and effects that we are familiar with (Huang, 2007). In many instances we will set aside a factor that influences an outcome in preference to one for which there is similitude - even if illusionary. Representativeness heuristics motivate us to indulge in our biases and then validate them by self-justification. 
An example is that of a worker choosing not to wear eye protection. The worker ‗has been doing the job for 25 years, and has not had an eye injury yet‘. He concludes that if a risk really existed, he would have already experienced an injury. Further, he asserts that by wearing eye protection he is in danger of reduced visibility owing to restricted field of view and dirty lenses. In doing so, he discounts the effectiveness of eye protection as experienced by a larger population, in preference to his own; albeit lesser experience. He marginalizes the likelihood of the hazards that are known, by speculating on hazards that are much less likely in support of the status quo and a worldview that he is comfortable with.   2.4.2 Availability Heuristics Availability heuristics (AH) is the principle that we are limited in identifying causal relationships to the extent that we have the capacity to identify, comprehend and explain them (Huang, 2007). We cannot act on causality that we fail to recognize. If we recognize a possible causal relationship, we may not understand it, and further if we cannot explain it in a concise way, we may discard it in favour of causality that we can. An example is the hazard of asbestos. The physical properties of asbestos fibres are not immediately recognizable, or apparent to the naked eye. As acicular fibres that are smaller than fifty microns in size, it is counterintuitive to most that they would be a problem for respiration. Further, their causal connection to mesothelioma (asbestosis) is a stochastic relationship expressed in the language of industrial hygiene and pathology. It required decades for management, and workers, to accept the correlation between asbestos and lung cancer. Eventually, the sheer numbers of cases of mesothelioma and the attendant dread of this disease compelled people to consider the possibility that respired asbestos causes cancer. Consequently, after considerable scientific 19  investigation into mesothelioma, the industrial community and the public alike accepted the high risk of cancer inherent to the exposure of respiratory asbestos (AMRC, 2008). The availability heuristics test was satisfied; the causal link between asbestos and mesothelioma was established, and the public became asbestos averse. 2.5 Social Context We generally appreciate that only through investigation can we understand the causation of accidents, and set standards for their prevention. Lesser appreciated perhaps is that only through standards and the rule of law can society have sway over conduct in the workplace. This was known as long ago as 1760 BC. As Draconian as it may have been, the Babylonian Code of Hammurabi set the standard of the day and provided the first rule of law. Several of the 282 tenets conceivably relate to a contractual obligation between employer and employee. Johns (2007) interprets: On the other hand carelessness and neglect were severely punished, as in the case of the unskilful physician, if it led to loss of life or limb his hands were cut off, a slave had to be replaced, the loss of his eye paid for to half his value; a veterinary surgeon who caused the death of an ox or ass paid quarter value; a builder, whose careless workmanship caused death, lost his life or paid for it by the death of his child, replaced slave or goods, and in any case had to rebuild the house or make good any damages due to defective building and repair the defect as well. 
The boat-builder had to make good any defect of construction or damage due to it for a year's warranty.   These tenets set the stage for what we might now refer to as a ‗social justice‘ model, or a ‗retribution model‘; depending upon whether you were on the arbiter or the recipient. Clearly, the perception of the day must have been one of deterrent by reckoning, and this would naturally influence the way society scrutinized events resulting in injury. This relationship between accident theory (models) and the way in which we investigate them is no less true today - as it has been throughout time. This is the concept of self-perception theory (Bem, 1972), that proposes how we frame or model events, influences how we explain them or attribute their causes. Any accident model contains the equivalent of a conceptualized blueprint for the accident investigation, and its ultimate explanation. It is a ‗how to‘ structure in which the investigator sets and prioritizes his objectives, and collects and analyzes the evidence. 20  As the adage goes, ‗if the only tool in your tool kit is a hammer, then everything looks like a nail‘ (author unknown). Similarly, if your accident model seeks to find the ‗guilty parties.‘ then blame will be the outcome. In the vernacular of one researcher (Huang 2007:41), insofar as accident methodology is concerned, ‗what you look for is what you find‘, and ‗what you find is what you fix.‘ 2.6 History of Accident Theory Many authors have documented the progression of accident theory with time (Benner, 1975; Harvey, 1985, Davies et al, 2003, and Stranks, 2007). There will always be debate as to why accidents occur; however, there is much concordance as regards to the emergence of accident theory. There are at least four schools of thought or models of accident theory, more if you consider their variants (Benner, 1985). There are the sequence-of-events, epidemiological, systemic (Dekker, 2005) and unequal initial liability theories. Two cautionary principles to consider when evaluating accident theory (models) are: i. Consider accident theory in the context of the times. We do not hold the same values, beliefs or perceptions of risk of those that theorized on accident theory during the 1930‘s.  ii. It is easy to confuse what are models for investigation, and what are methods of analysis (Benner, 1975). Simply stated, any comparison of methods of investigation or their analysis should be limited to those subscribing to the same accident theory or model.  2.6.1 Unequal Initial Liability Theory At the turn of the 20th century, the prevailing theory was that of accident proneness (Visser, 2007).  This theory, also known as the Unequal Initial Liability Theory (Stranks, 2007), asserts that there are those within society that are predisposed to accidents and owing to their own carelessness cause calamity and misfortune. Central to this theory is the notion that persons prone to accidents have inherent character flaws or personality characteristics that put them at risk. Compensation funds were in their infancy, and accidents were largely subject to litigation. Dickinson and Flemming (1950:769) write: 21  For more than a quarter century there has been in the psychological literature a concept that some individuals are more likely to have accidents than are people at large. Their greater liability to accidents has been called ‗accident proneness‘, which ‗may be regarded as a combination of human abilities which make a person highly proficient in bringing about accidents‘. 
The implications of this concept may best be brought out by casting its treatment into three sections: (A) Are there accident-prone individuals? (B) What causes accident proneness? (C) What can be done to decrease the number of accidents due to accident proneness? As predicted by self-perception theory (Section 2.5), if accidents are modelled in a blame setting (vehicle insurance is an example of an at-fault system) then the method of investigation and the outcomes are necessarily influenced. The standard of investigation for vehicular accidents does not meet that of accidents occurring in the workplace. Domestic accidents are likely to be investigated to an even lesser standard than vehicle accidents. It is paradoxical that when we perceive accidents in a blame setting for the purpose of settling insurance claims and determining liability, the investigative rigour is less. One can only speculate what reductions in injuries could be realized if vehicle accidents and domestic accidents were to be investigated with the same rigour as in other domains (workplace, transportation and environmental events). 2.7 The Sequence-of-Event Model of Accidents The sequence-of-events model considers multiple failures as a chain-of-events in which the antecedent failure directly causes a succeeding failure, eventually leading up to the defining event (Hollnagel, 2004). The sequence-of-events model is also known as the cause attribution model (Perneger, 2005) because, as its name suggests, the model seeks to attribute the cause(s) of an event. The theory holds that in order to prevent an event, one has only to stop a failure or establish a barrier between any two failures (Woodcock, 1989) (Figure 2.2). Still relevant for simple events, the model is limited to technical failures or failures in which the cause and effect relationship is obvious (Leveson, 2004).  2.7.1 Advantages The model is appealing as it is intuitive. The sequence-of-events model keeps the narrative simple and explicitly (not explicatively) states the cause-effect relationships (Harvey, 1985). It has dominated the discipline of accident investigation from the early 22  1960s until the late 1970s; influencing such analytic methods as Fault Tree Analysis, Failure Mode and Effect, and Energy and Barrier Analysis. The model lends itself graphically, and communicates causes and their effects well (Dekker, 2004). 2.7.2 Disadvantages The sequence-of-events model fails to establish any intrinsic connection between the failures (Dekker, 2004). The model promotes looking for causes and failures, and in doing so can lead an investigator from a more explicative approach (Hollnagel, 2004). There is a question of subjectivity as to what are the failures, and how far back one goes to establish the chain (Benner, 1975). The investigator has discretion, thus introducing opportunity for bias into the analysis (Harvey, 1985; Perneger, 2005). When attributed to unsafe acts or unsafe conditions, the failures interpretations of the data rather than a pure presentation of the physical evidence (Benner, 1985).  Figure 2.1: Schematic illustration of the Sequence-of-Events Model 2.8 The Work of Heinrich Heinrich was a pioneer in accident theory at a time when there was little data or research of accident prevention. As an engineer for an insurance company, Heinrich studied the causes of 75,000 accident cases and noted the overwhelming rarity of actual accidents to minor accidents and near misses (Heinrich, 1931). 
He identified a ratio of 300:1 of accidents and incidents; describing what is now known as the ―Heinrich ratio‖ (Busse, 2002) (Figure 2.2). This ratio implies that by intervening in near misses, a more 23  serious event is pre-empted (Hollnagel, 1988). Heinrich was well acquainted with actuarial science, which permeates his theories; theories that are still popular today.  Heinrich proposed that accidents were the product of five cascading dominos; thus coining the term ‗domino theory‘ (Figure 2.3), and published the concept as early as 1931 (Heinrich, 1931). His work epitomises sequence-of-event accident modelling; however, it has its detractors owing to the inference that there can be a single cause to an event (Stranks, 2007). The inference is probably justified, for Heinrich was drawing from accident records written during a period when investigators of accidents were disposed to accident proneness theory. Heinrich asserted that the immediate causes of accidents consist of unsafe acts and unsafe conditions, with the former contributing as high as 88 percent of the time. The veracity of this number has drawn considerable criticism and doubt (Petersen, 1988), as revisionist theorists consider it blame oriented.  Figure 2.2: The original accident triangle depicting injury ratios (Heinrich, 1931)  24   Figure 2.3: The original Domino Theory of accident causation (Heinrich, 1931)  By framing the causes of events as attributable to personality characteristics, Heinrich limited his modelling of accidents to more anthropocentric factors, or failures. Heinrich et al (1980) later introduced a new model of safety management featuring recursive hazard control (Figure 2.4). The feedback loop was based on the determination of an acceptable level of safety by considering hazards in a monitor-analyse-remedy fashion. In this regard, Heinrich et al were moving towards the notion of risk, and its management. However, as control process go, the model had a characteristically very long feedback response time (Huang, 2007). That is, in the Safety Management model there was an early indication that causality was something more complex than causation; that the cause and effect implicit of the Domino Theory was not sufficient or as inclusive as was previously thought to be the case. There was a deeper, more incipient meaning to causation that was emerging. Heinrich et al (1980) anticipated that there were artefacts within the workplace such as policies, plans, and procedures that reflect the principles and beliefs of their makers and to this extent defined a standard of care and conduct to which the workplace parties were expected to conform. 25   Figure 2.4: Graphic illustration of the Safety Management model (Heinrich et al, 1980)  2.9 The Work of Bird and Germain The work of Heinrich strongly influenced that of Bird and Germain (1974), co-authors of the Practical Loss Control Leadership marketed by the International Loss Control Institute (ILCI).  The International Safety Rating System (ISRS) was based on the Loss Causation model (Figure 2.5), which saw global application as an emerging technology in loss prevention during the 1980s and 1990s (Kjellen, 2000). Building on 26  Heinrich‘s earlier work, the Loss Causation model improved on sequence-of-events modeling, but still incorporated aspects of the Domino Theory (Vinicoli, 1994).   
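As an aside, the Heinrich ratio introduced above can be expressed as a simple arithmetic heuristic. The Python sketch below is illustrative only: it treats the 300:1 proportion of incidents to accidents cited above as a fixed rule of thumb, which is an assumption rather than a validated statistical model, and the example counts are hypothetical.

# Illustrative only: the 300:1 near-miss-to-serious-accident proportion from the
# text above is treated as a fixed heuristic (Heinrich's triangle is also often
# cited as 1:29:300).
NEAR_MISSES_PER_SERIOUS_ACCIDENT = 300  # assumption taken from the text

def implied_serious_accidents(near_miss_count):
    """Rough number of serious accidents implied by a near-miss count."""
    return near_miss_count / NEAR_MISSES_PER_SERIOUS_ACCIDENT

# A site logging 900 near misses over a given exposure period would, under this
# heuristic, expect on the order of 3 serious accidents over the same exposure.
print(implied_serious_accidents(900))  # 3.0

The point of the heuristic is the one made in the text: because near misses are so much more frequent than serious events, intervening at the near-miss level is assumed to pre-empt the rarer, more serious outcome.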
Figure 2.5: Graphical illustration of the Loss Causation model (Bird and Germain, 1974)
This explicit and expanded application of the Domino Theory included the phases of loss of control, basic causes, immediate causes, the incident, and the loss. The first domino was 'lack of control', and the nascent concepts of environmental and personal factors were introduced as basic causes; very much under management control. The subsequent unsafe acts and unsafe conditions were considered as immediate causes of an incident that had a potential for downgrading to an accident (Storbakken, 2007). The twenty-module program, known as the ILCI program, proposed a very detailed schema for the codification of accidents (mechanism of injury, body part, and type of injury) that is still popular today with compensation boards and underwriters (Kjellen, 2000). Further, by integrating 'loss control' into the existing management systems in the workplace, the Loss Causation model made the case that loss prevention was a function of management no less important than production, organization and other priorities (Vinicoli, 1994). Society's perception of events in the workplace was shifting. The management of losses was an expected and prudent way of conducting business. This change in perception, away from losses being a 'cost of doing business', altered the way in which events in the workplace were investigated and reported. Consequently, the Loss Causation model was, and still is, a very successful application of sequence-of-events theory.
2.10 The Epidemiological Model of Accidents
By the late 1970s, it was apparent that accident models should identify how cause and effect relate - organizationally, environmentally and socially (Dekker, 2004). The epidemiological model does not seek to determine cause, but to reveal statistical relationships, within populations (age, experience, vocation and training), between risk factors and the outcome of the event (Haddon, 1980). The strategy is to identify the personal and situational characteristics associated with any variable that co-varies with the occurrence of an event (Harvey, 1985). Epidemiological models apply the same rigour and structure to accident theory (injuries) as is used in infectious disease control (Huang, 2007). In the control of infectious diseases, a host-agent-environment model exists that describes how an agent (a virus) can infect a host (a bird) within an environment conducive to infection. To apply the analogy to an event in a mine, we consider a scenario involving a fatality of a worker in an underground mine. The host (the worker) is exposed to loose rock (the agent) in the back of the mine, where the rock is geo-mechanically weak and poorly supported by bolting and screening (the environment). Intrinsically, the model seeks to relate the experience and training of the worker, the mine design, and the monitoring and support of the rock mass with the working conditions and organizational nature of the mine. The investigation is necessarily broader and more inclusive. An epidemiological model considers both active and latent failures. Active failures are the failures that we typically think of as triggering an event, and are proximal to the event. Latent failures are those that are not as obvious and conceivably may exist in dormancy for days to decades until triggered by circumstances (Dekker, 2004). An example in mining is the occurrence of high-wall failures.
Decades ago, it was common practice to dump waste rock over a high-wall without a high-wall design contemplating the drainage of ground water. In this scenario, as time progresses, the movement of water results in the erosion of fine-grained material as it percolates through the waste dump, causing sub-surface channelling. Eventually a storm event occurs. The interstitial pore spaces of the waste rock are saturated and, due to reduced cohesion and internal friction within the granular material, tension cracks occur, resulting in high-wall failure. The active failure may be the lack of inspection, monitoring and control; however, the latent failure is the waste dump design lacking water drainage and diversion.

2.10.1 Advantages

By design, this model is inclusive of multiple causes, and identifies factors as opposed to causes. This model provides a more meaningful analysis of factors distal to the event, and does not attribute causes to events, but seeks to establish more stochastic associations between risk factors and their outcomes. Epidemiological models are broader in scope and context than sequence-of-events models. Events are considered inclusive of environmental and social factors, as a techno-social system. The model also encourages the investigator to scrutinize the organizational contributions to an event (Dekker, 2004).

2.10.2 Disadvantages

Epidemiological models tend to be linearly sequential. Time flows only in one direction, and time appears in most epidemiological models as the determining dimension. Epidemiological models tend not to explain the process by which holes in defences (both active and latent) come about (Dekker, 2004). Investigators fall into the old paradigm of being satisfied with identifying them as failures. They also may generalize the non-conformities as system, organizational or cultural failures. These characterizations do not take full advantage of the utility and comprehensiveness of epidemiological models. Although perception bias is reduced, the complexity and scope of the model introduces selection bias (how evidence is selected), information bias (what is data and what is information) and confounding bias (lack of comprehension regarding error interaction) (Perneger, 2005).

Figure 2.6: Bronfenbrenner's epidemiological model of illness and injuries (Runyan, 2003)

2.11 The Work of Haddon

William Haddon was a physician as well as an engineer. Schooled and skilled in the curative and preventative aspects of medicine, Haddon worked with road designers on highway traffic safety. Building on conventional epidemiology theory, Haddon recognized that injuries and illness were two sides of the same theoretical coin. Haddon was influenced by Drs. John E. Gordon and James J. Gibson, early progenitors of epidemiological theory applied to injuries. Haddon proposed a structure that facilitated epidemiological modelling in a graphical, concise format known as the Haddon Matrix (Huang, 2007; Runyan, 2003). In the matrix (Table 2-1), the host, agent and environmental factors are enumerated across the matrix, whilst time flows down the matrix. The host refers to persons at risk. The agent of injury can be any form of energy. The environment can be either physical or social; the former speaking to the setting in which the event occurs, the latter referring to norms, mores and cultural considerations. Given Haddon's medical background, it was only natural that the matrix provided an aetiological perspective of accident theory.
As such, the model has utility as a means of identification of risk factors and a method to devise strategies for their prevention (Runyan, 2003). By example, consider the Haddon Matrix as applied to an event involving a worker exposed to an unguarded piece of energized equipment (Table 2-1). The matrix allows a structured analysis of the hazard before, during and after an event.

Pre-event phase
- Host (Workers): Instruct workers as to the regulatory standards in the workplace (the requirement for locking and tagging).
- Agent (Energized Equipment): Design and construct equipment with attachments for locks and tags to assist in compliance with standards.
- Physical Environment (Mine): Establish preventative maintenance programs to reduce unplanned work.
- Social Environment (Workplace): Encourage right-to-refuse-unsafe-work, right-to-know, and right-to-participate legislation.

Event phase
- Host (Workers): Train workers not to work on energized equipment (apply lock and tag procedures).
- Agent (Energized Equipment): Ensure that equipment is in the zero-energy state to reduce worker exposure to the hazard.
- Physical Environment (Mine): Provide personal protective equipment and hazard detection alarms/systems.
- Social Environment (Workplace): Employ accident prevention strategies and workplace monitoring.

Post-event phase
- Host (Workers): Ensure that all workers are trained and knowledgeable in emergency procedures.
- Agent (Energized Equipment): Maintain ease of access and safe passage for workers and rescue workers to the work areas.
- Physical Environment (Mine): Investigate and review all near-miss incidents to ensure efficacy of safety systems.
- Social Environment (Workplace): Ensure funding for emergency personnel appropriately trained in elevated and confined spaces.

Table 2-1: The Haddon Matrix as applied to an event in the mine workplace

2.11.1 Hosts

In an epidemiological model, hosts are the recipients of harm. For the purposes of accident theory, the same is true, only with wider scope. Hosts are the recipients of harm or potential harm and can be people, assets, production, or the environment. Hosts define that which is protected and for which controls and defences are put in place. For the purposes of traditional mine settings, hosts are the various workplace parties, the equipment, production values or the receiving environment of mine effluent.

2.11.2 Agents

In an epidemiological model, an agent is the mechanism or vehicle by which the host is subjected to a pathogen. For accident theory, in the most common sense, the agent is energy: potential, kinetic, thermal, nuclear, electrical or chemical. However, as both the environment and humans are subject to disease and toxins, the agent can also be a biological pathogen. Agents can be passive or dynamic. An example of a passive agent is someone or something at rest falling from height. An example of a dynamic agent is the weather, with all of its fluctuations and unpredictability.

2.11.3 The Environment

In an epidemiological model, the environment is that from which the pathogen originates. For accident theory, the environment can be physical or social, and defines the nature of the setting in which a host and agent are present. The mine workplace environment is particularly at issue with regard to risk, and should factor prominently in any analysis of an event, epidemiologically speaking. Paradoxically, the mine environment is often omitted from investigations of events, as investigators are over-familiar with the hazards, to the point of complacency.
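The matrix also lends itself to a simple structured representation. The sketch below is illustrative only; it is not part of Haddon's method or of the profiling tool developed in this research. It holds the phase-by-factor cells of Table 2-1, paraphrased, in a small Python mapping so that an investigator could confirm that no phase/factor combination has been left without at least one control.

```python
# Illustrative only: the Haddon Matrix as a phase-by-factor mapping, with cell
# entries paraphrased from Table 2-1. A structure like this lets an investigator
# verify that every phase/factor combination has at least one control recorded.

haddon_matrix = {
    "pre-event": {
        "host":     "instruct workers in lock-and-tag regulatory standards",
        "agent":    "design equipment with attachment points for locks and tags",
        "physical": "run preventative maintenance to reduce unplanned work",
        "social":   "encourage right-to-refuse, right-to-know, right-to-participate",
    },
    "event": {
        "host":     "train workers not to work on energized equipment",
        "agent":    "verify the zero-energy state before work begins",
        "physical": "provide personal protective equipment and hazard alarms",
        "social":   "employ accident prevention strategies and workplace monitoring",
    },
    "post-event": {
        "host":     "train all workers in emergency procedures",
        "agent":    "maintain safe access and egress for rescue workers",
        "physical": "investigate and review all near-miss incidents",
        "social":   "fund emergency personnel trained for confined spaces",
    },
}

# Simple completeness check: flag any phase/factor cell left empty.
for phase, factors in haddon_matrix.items():
    for factor in ("host", "agent", "physical", "social"):
        if not factors.get(factor):
            print(f"No control recorded for {factor} in the {phase} phase")
```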
2.12 Perspective

The principal disadvantage of epidemiological models is the perspective of the observer, or investigator. The model is one that 'sees' accident causation from the perspective of an outside observer looking in. In doing so, the model does not facilitate an appreciation of how actors within the event scenario could have recognized the hazards, or the risks, for what they were (Dekker, 2004). Further, it does not help us to understand why the actors saw those risks the way that they did, as acceptable or not. It does not identify decisions made by actors within the event scenario and evaluate those decisions for veracity of assumptions from the point of view of the decision maker.

The epidemiological model considers events through the objective lens of probability, in terms of host/agent/environment interactions. The typical mine workplace offers analogous examples of environments and hosts suitable for epidemiological modelling; however, the identification of agents is not as intuitive or meaningful. This limitation underlines the lack of theoretical foundation of the epidemiological model and suggests that disease prevention models and injury prevention models are comparable only to a point.

2.13 Systems Model for Accidents

All of the previous models apply the premise of analysis by deconstruction. By doing so, they do not consider how things are supposed to work; how the components come together to make the whole and interact with each other (Dekker, 2004). In many cases, causation can only be determined and made sense of by considering the system holistically. The increasing complexity of organizations and technology requires accident modelling based as much on synthesis as analysis.

Systems are dynamic, and consequently event models should consider an event scenario as a process of disequilibrium or instability within the system. Systemic theory holds that an event can occur when the performance of the system is unable to meet the demands of the environment (Huang, 2007). Complex techno-social systems may start out with an initial state of balance or equilibrium, but spontaneously reach a critical state of self-organized criticality without any intentional alteration of operational parameters (Blanchard et al, 2000). The faculty to describe this state of 'criticality' of systems, much less predict it, is still in its infancy and is very much the promise and the challenge of developing system models.

2.14 The Work of Reason

A contemporary example of system modelling is the 'Swiss cheese' model (Reason, 1990) (Figure 2.7). In this model, failure trajectories line up, and the concept of latent failures and active failures is central. The slices of Swiss cheese represent various defences to prevent an occurrence, and the holes represent failures and flaws in those defences (active and latent). The trajectory through the slices represents the circumstance in which all of the factors come together to create a destabilizing system culminating with an event. The model also incorporates the idea of defences existing and then being defeated by a variety of mechanisms, metaphorically referred to as pathogens. It is curious to note that once again, medical jargon has crept into the lexicon of accident theory. Poor management practices, inadequate procedures, failed engineered controls and lack of training are cited as examples of human systems subject to error and failure (Reason, 1990).
Figure 2.7: Illustration of Reason's 'Swiss cheese' human systems model

The Swiss cheese model introduces the concept of psychological precursors at the organizational level, thus opening the door to evaluating safety culture and the organizational ethos. The model is graphical and encompasses the idea of a hierarchy of controls (elimination, substitution, engineered, administrative, and personal protection controls) and their vulnerability. The model does not, however, explain or account for how these trajectories occur, nor suggest their remedy (Dekker, 2004). It incorporates stochastic constructs of risk, and in this regard is suggestive of epidemiological influences. Reason's Swiss cheese model has become iconic, and synonymous with human systems, and is therefore an early progenitor of system theory, particularly as applied to the medical profession. It is acknowledged, however, that the Swiss cheese model is no less relevant to other industry sectors; and it has substantially influenced this research.

Within complex techno-social systems are flaws in design and sub-systems that can interact in inexplicable and unpredictable ways (Perrow, 1984). System models do not require that a component or sub-system fail, or otherwise be the cause of anything. The system itself, under normal operating conditions, has such interaction and coupling that catastrophic failure occurs because of changing operating conditions, or degrading compliance and operability. The degree of coupling and interaction is thought to be a measure of the complexity of systems (Perrow, 1984) and a good indicator of the insidious potential of complex systems in emerging high-technology enterprise (Figure 2.8).

Figure 2.8: Schema of mapping enterprises by system interaction and coupling

2.14.1 Coupling

Coupling refers to the degree of 'connectedness' between sub-systems or their components. The systems can be social, organizational, physical, or process domains. Regardless, coupling is the amount of buffer in time, space or behaviour between components that will allow for intervention if there is a problem or upset condition (Perrow, 1984). For some enterprises, tight coupling is a good thing; the pharmaceutical industry, for example, is tightly coupled with tight controls so that the pharmaceutical product meets a high standard, every time. There is little margin for error. Other enterprises, like mining, benefit from loose coupling and realize the benefit of being able to start and stop different parts of their mining cycle depending upon the geology and operating conditions. If an upset condition occurs in the tailings, the mining system can accommodate this by stockpiling ore and modifying the mill circuit. Government organizations, research and development, universities and most manufacturing benefit from, and are examples of, loosely coupled systems (Perrow, 1984:97) (Figure 2.8).
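The buffering role that loose coupling plays can be illustrated with a toy calculation. The sketch below, written in Python with entirely hypothetical throughput figures, contrasts a loosely coupled arrangement, in which a stockpile absorbs a mill outage, with a tightly coupled one, in which the outage immediately halts mining. It is a sketch of the concept only, not a model of any actual operation.

```python
# Minimal sketch (hypothetical figures): a stockpile as a buffer that loosens the
# coupling between the mining and milling sub-systems. With no buffer, a mill
# outage immediately halts mining; with a buffer, mining continues and the outage
# shows up as stockpile inventory rather than lost mine production.

def run_shift(hours, mine_rate, mill_rate, mill_down, stockpile=0.0, use_buffer=True):
    """Simulate ore flow hour by hour; mill_down is a set of outage hours."""
    mined = milled = 0.0
    for hour in range(hours):
        ore = mine_rate
        if hour in mill_down:
            if use_buffer:
                stockpile += ore          # loose coupling: divert ore to the stockpile
            else:
                ore = 0.0                 # tight coupling: mining stops with the mill
        else:
            feed = min(mill_rate, ore + stockpile)
            stockpile = max(0.0, stockpile + ore - feed)
            milled += feed
        mined += ore
    return mined, milled, stockpile

# Two-hour mill outage in a 12-hour shift at 500 t/h (hypothetical numbers).
print(run_shift(12, 500, 500, mill_down={3, 4}, use_buffer=True))   # mining unaffected
print(run_shift(12, 500, 500, mill_down={3, 4}, use_buffer=False))  # mining loses two hours
```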
2.14.2 Interaction

Interaction refers to the degree of complexity or linearity within systems. Linear interactions are at one end of the spectrum, and complex interactions at the other (Figure 2.8). Linear interactions are typically sequential, transparent to the operator, and generally planned and anticipated. Complex interactions are more subtle and problematic from the point of view of the potential for system upset conditions. They are unplanned and unexpected, and are not conducive to comprehension or detection (Perrow, 1984). Complex interactivity is common to petrochemical plants, avionics, and nuclear power generation systems. Typically, within the enterprise of mining, systems are not very complex (Figure 2.8); however, deep mining and complex mill circuits are pushing the envelope, and the mines of tomorrow and beyond are likely to increase in complexity with technological advancement (Sweeney and Scoble, 2006).

2.14.3 Self-organized Criticality

Self-organized criticality is a theory proposed by Bak et al. (1987) which holds that dynamic systems over time incrementally move toward criticality: a point at which they appear to be operating under normal operating conditions, but are moments away from failure. Although the original research was applied to the avalanche theory of granular material, the theory has seen applicability in geology, ecology, biology, economics, sociology and physics. Potentially, the Chernobyl, Space Shuttle Challenger and Three Mile Island disasters were systems that exhibited the characteristics of self-organized criticality. These enterprises, presumed to be in a state of equilibrium, responded to minor perturbations within their system design. Stable systems have the capacity to absorb or accommodate minor perturbations in proportion to their magnitude. Self-organized critical systems respond differently, as described by Bak and Paczuski (1995:6690):

The basic idea is that large dynamical systems naturally evolve, or self-organize, into a highly interactive, critical state where a minor perturbation may lead to events, called avalanches, of all sizes. The system exhibits punctuated equilibrium behaviour, where periods of stasis are interrupted by intermittent bursts of activity. Since these systems are noisy, the actual events cannot be predicted; however, the statistical distribution of these events is predictable. Thus, if the tape of history were to be rerun, with slightly different random noise, the resulting outcome would be completely different. Some large catastrophic events would be avoided, but others would inevitably occur.

Self-organized critical systems are remarkable in that the distribution of responses to perturbations appears to follow a power law mathematically, and their escalation from incidents to disaster is not unlike thermodynamic systems (Blanchard et al., 2000). Necessarily then, any accident model must have the facility to determine what the normal operating conditions of a system are, and, within these, what its limits are. This can only be achieved by a holistic approach, and hence the benefit of systemic modelling. Self-organized criticality, while not well understood, has great potential as applied to event causality, and this research introduces the concept as a mechanism that is explicative of latency and the intrinsic stochastic nature of events in the workplace.
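The power-law behaviour referred to above can be demonstrated with the original sandpile model of Bak, Tang and Wiesenfeld. The minimal sketch below (grid size and number of perturbations are arbitrary choices, not parameters drawn from this research) drives a grid with identical single-grain perturbations and records the resulting avalanche sizes, which range from single cells to cascades spanning much of the grid.

```python
import random
from collections import Counter

# Minimal sketch of the Bak-Tang-Wiesenfeld sandpile: a system driven by tiny,
# identical perturbations (single grains) self-organizes to a critical state in
# which avalanche sizes span many orders of magnitude and their frequency
# distribution approximates a power law. Grid size and grain count are arbitrary.

SIZE, THRESHOLD = 30, 4
grid = [[0] * SIZE for _ in range(SIZE)]

def topple(r, c):
    """Relax any over-threshold cells reachable from (r, c); return the toppling count."""
    count = 0
    stack = [(r, c)]
    while stack:
        i, j = stack.pop()
        if grid[i][j] >= THRESHOLD:
            grid[i][j] -= THRESHOLD
            count += 1
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < SIZE and 0 <= nj < SIZE:   # grains at the edge fall off
                    grid[ni][nj] += 1
                    if grid[ni][nj] >= THRESHOLD:
                        stack.append((ni, nj))
    return count

avalanches = Counter()
for _ in range(50_000):                      # drive the system one grain at a time
    r, c = random.randrange(SIZE), random.randrange(SIZE)
    grid[r][c] += 1
    size = topple(r, c)
    if size:
        avalanches[size] += 1

# Identical perturbations, wildly different responses: many one-cell avalanches and
# a few very large ones, the signature of self-organized criticality.
for s in sorted(avalanches)[:5]:
    print(f"avalanche size {s}: {avalanches[s]} occurrences")
print("largest avalanche:", max(avalanches))
```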
2.15 Injury Compensation Models

As mentioned previously, perceptions strongly influence accident theory models of both theorists and investigators, and their outcomes. This is true at the societal level as well, as society's perceptions of risk and causality are manifested in their elected representatives in a democratic society, to the extent that elected representatives set public policy. It is revealing to examine the evolution of injury compensation funds as a measure of how society's perceptions and values have changed and been shaped through time. As it turns out, Canada's contribution to the development and implementation of injury compensation funds is one of distinction and leadership. Canada was one of the first nations (with the United Kingdom and Germany) to introduce legislation governing workers' compensation; and within Canada, the province of British Columbia was one of the first four provinces to do so (British Columbia, Saskatchewan, Quebec and Ontario).

2.15.1 Emerging Workers' Compensation in Canada

The province of British Columbia narrowly missed the opportunity to become the first jurisdiction in the world to have a publicly funded accident compensation fund. In 1878 a bill known as the 'Workman's Protection Act' was introduced to the BC Legislature, but did not receive a second reading for reasons unknown (Chaklader, 1998). During this period in Canadian history, coal mines on Vancouver Island were renowned throughout the world for their hazardous conditions as much as for the quality of their coal. The only remedy available to injured workers was to sue the employer, which many did not have the financial resources to do. A lawsuit had less than a 30% chance of being successful, which reflected society's perception that the risk of injury and death to coal miners was an 'assumption of risk.' Society's understanding of accident causation reflected its perception of risk: that mine accidents were not preventable.

Six years later, in 1884, Germany enacted the first workmen's compensation law. During this period, Saskatchewan, Ontario and Quebec enacted Factories Acts, the first safety regulations for the workplace. Employers in contravention were subject to fines; however, there was no compensation for injured workers.

The province of BC enacted the first Canadian fund in 1891, called the 'Employer's Liability Act'. The Act required that the employer be liable only if an injury was a direct result of their negligence (Chaklader, 1998). Until this time, employers could legally provide a defence of 'faulty machinery or equipment' in the event of an injury, again reflecting society's perception of the inevitability of accidents.

In 1897, the province of Saskatchewan passed the Workmen's Compensation Act, providing workers compensation for injuries. However, it was limited to 'dangerous work' and the worker could not have contributed to the cause of the accident. It was not until 1902 that British Columbia enacted the first Workmen's Compensation Act, a rather liberal adoption of that of England passed in 1897 (Chaklader, 1998). An injured worker received compensation for lost wages resulting from injuries received 'as a result of or in the course of employment', and this was a turning point for society. Societal values had changed, and the assignment of liability for workplace accidents to the workers themselves was no longer acceptable. The province of Ontario introduced a similar fund in 1910 and was instrumental in defining the guiding principles of Workers' Compensation, embodied in the Meredith Report that set the cornerstone for compensation boards throughout Canada (Table 2-2) (AWCBC, 2007).

1. No-fault compensation: Workplace injuries are compensated regardless of fault. The worker and employer waive the right to sue. There is no argument over responsibility or liability for an injury. Fault becomes irrelevant, and providing compensation becomes the focus.

2. Collective liability: The total cost of the compensation system is shared by all employers. All employers contribute to a common fund. Financial liability becomes their collective responsibility.
3. Security of payment: A fund is established to guarantee that compensation monies will be available. Injured workers are assured of prompt compensation and future benefits.

4. Exclusive jurisdiction: All compensation claims are directed solely to the compensation board. The board is the decision-maker and final authority for all claims. The board is not bound by legal precedent; it has the power and authority to judge each case on its individual merits.

5. Independent board: The governing board is both autonomous and non-political. The board is financially independent of government or any special interest group. The administration of the system is focused on the needs of its employer and worker clients, providing service with efficiency and impartiality.

Table 2-2: Summary of the five Meredith principles for workers' compensation funds

The significance of these principles cannot be overstated. Not only are they the guiding principles of one of the most enduring of Canadian values and institutions; they set the tone for how Canadians perceive accidents in the workplace. This is truly a 'no-fault' system, and this fact alone has enormous implications for how Canadian citizens internalize and cogitate on accidents in the workplace. Within the collective ethos that is Canadian society, the attribution of events must be value neutral and as blame averse as practical. Our method of investigation is shaped by how we as a society frame the causation (the theory) of accidents. We look for system errors, organizational errors, errors of culture: errors of any nature other than those attributable to persons.

In 1909, an explosion in a coal mine near Nanaimo took the lives of 32 workers. The forest industry was also reporting a record number of fatalities. Consequently, a Royal Commission on Labour was set up in 1912 to amend the Act. Mining companies and unions alike were polarized respecting how to protect workers from gassy mines. The Commission reported back in 1914. It was not until 1917 that the Act was amended to include a provision for workers to receive medical aid compensation, the first of its kind in North America. The Act was further amended in 1938 to include benefits to widows and dependents. In 1942, a new commission was set up to review the Act to ameliorate the increasing alienation of both Labour and Management on the issue of workmen's compensation. Owing to World War II and several Royal Commissions, it was not until 1954 that the Act was amended. Recovery of lost wages and benefits to dependents was increased. In 1955, 1968 and 1972 the Act was again amended, with cost of living adjustments and increased benefits once again setting a global standard for workmen's compensation. In 1993, the Act was amended once again to include farm workers and domestic workers. The Act was renamed the Workers' Compensation Act to be more in alignment with society's values of inclusion of women in the workplace. A fourth and final Royal Commission occurred in 1996. No doubt, the Act will be amended again, in accordance with the changing perception of risk on the part of the citizenry of British Columbia.

2.16 Why Model?

A model is actually a framework; 'a structure in which all of the ideas and thoughts one has about a subject can be organized' (Hendrick and Benner, 1987:8). Reflecting on the advancements in accident theory that have been made in recent decades, it is tempting to conclude that the systemic model will win the day.
History will judge; however, it will be some time before the 'new investigators' will be sufficiently trained and empowered to investigate events in the workplace within the scope and context of a systemic approach. The reason for this is that a systemic approach to the investigation of events in the workplace requires more of the investigator(s) than is currently afforded to them by mine management. This is not an indictment of mine management, but an observation that there are 'disconnects' between the typical organizational structure of mine operations and the goals and objectives of systemic modelling (Figure 2.9).

Systemic investigations are holistic; inclusive of human, organizational and techno-social factors and the way these interact as a system. This is a 'big picture' view of the organization, one that requires a perspective of the entire enterprise, from the influence of the regulatory authorities down to that of the worker at the face (Figure 2.9). It is imperative that all departments and their personnel be considered as potential decision makers in an event scenario. This is problematic. Most mine organizations delegate the investigation of events in the workplace to persons with narrowly defined job descriptions limited in scope and authority. This is not to say that they are not professionals, or that they do not discharge their duties in a professional way. Rather, in the absence of seeing all of the parts of the whole, it is difficult for investigators to take a holistic approach. If their time and resources are limited to determining cause and effect relationships (causation), then they will not be successful in understanding the event and the inherent complexities of techno-social interactions required of systemic modelling.

Figure 2.9: Diagram illustrating the value of shifting investigative perspective

Health and safety, environmental and human resource professionals seldom have access to information at the enterprise level. When they do, sadly, it is because the event is so grievous or tragic that the investigation is necessarily thorough and inclusive. Clearly, this is not proactive. The traditional hierarchical organizational structure of mine operations is not aligned with modern systemic accident investigation approaches. This is also true of parties outside operations, but still within the enterprise of mining. Chief among these are corporate management, the regulatory community and organized labour. These enterprise parties have an immense capacity for influence over a mine operation; yet rarely are they considered contributory to an event in the workplace.

The possibility that a corporate officer or a regulatory official made a decision error that contributed to the very event scenario that they are arbiters of is a particularly vexatious one. However, within the evolving landscape of regulatory reform, there are indications that corporate officers, general contractors, regulatory inspectors and public officials are all accountable. Bill C-45, in the wake of the Westray Inquiry (Richard, 1996), is a recent enactment that serves to improve accountability in the workplace. It is federal legislation passed in 2004 that brings health and safety offences into the criminal courts of Canada, should a person or persons be convicted.
It is anticipated that successful prosecutions under this Act will, in the fullness of time, engage senior officers and public officials to act within enlightened self-interest (self-perception theory) and actively participate in the prevention of events in the workplace.

2.16.1 Attribution Theory

Attribution theory holds that people attribute causality based upon their own behaviours, or their self-perception of those behaviours and the circumstances in which they occur (Bem and McConnell, 1970). Further, people's self-perception of those behaviours will influence how they recall events. In this respect, they are evaluating the covariance between their behaviours or actions and causation (Gyekye and Salminen, 2004) with a sub-conscious bias toward externalizing attribution. Workplace parties involved in an undesirable outcome (event) tend to attribute causation to external factors such as the actions of others, lack of training, or poor communication; a whole host of reasons excepting their own involvement and attribution.

Conversely, when the occasion provides a positive outcome, attribution theory holds that workplace parties will lean towards internal attribution. That is, they will consider it a reflection of their skills and abilities and will model future behaviours and decisions on this self-perception. It is prudent, therefore, that the decisions of parties to an event be evaluated without blame or value judgement, as the parties will otherwise devolve into external attribution and the decision record will be incomplete, if not confounded. This research will examine the collective role that culture plays in the attribution of causality through the analysis of decision error.

2.17 Causation and Perspective

Causation is a concept that has outlived its usefulness in modern-day accident modelling. One can get lost in the fuzzy logic and convoluted world of causation (Davies et al. 2003). There has been much debate over the lexicon as well as the rules of causation; but at the end of the day, causation limits understanding and prejudices the outcome in an increasingly litigious society.

Invariably, the causes of events in the workplace are a matter of perspective, as the cause of an event from the viewpoint of someone inside an event scenario will be different from that of someone outside the event scenario (Dekker, 2004). The reasons are elementary. Parties inside and outside an event scenario are subject to varying biases (Reason, 1990). Parties inside the event scenario have the benefit of knowing 'what they were thinking' and the reasons why a decision or action took place. The outside perspective is one of objectivity, one that tends to be limited to the 'what' and the 'how' (Wright et al, 2007). Understanding is not complete without the benefit of both perspectives. Hence, a systemic investigation is optimal in rolling back the layers of perceptions and biases in an effort to adduce what the parties within the enterprise were doing, and thinking. The cognitive element is a factor that has eluded many investigators and constitutes the threshold at which many investigations have ended. Only by asking what the thought processes were, and what decisions were made, is it possible to track the human factors and techno-social interaction of the workplace parties. Causation, as an attribution of cause, is not far removed from blame.

2.17.1 The Role of Determinants

Replacing causation is a more general notion of causality that can best be described in the language of determinants.
Determinants are influencing, or determining, factors that consider a constellation of elements contributory to an event (Figure 2.10). They are neither necessary nor sufficient to 'cause' an event, but in their aggregate they come together as a system of hazards, errors, decisions and actions that has internal order and structure. This internally chaotic model of accidents (Perrow, 1984) does not require upset conditions, or the exceeding of design parameters, for events to occur. This normalization of deviance (Vaughan, 1996) suggests that deviant behaviour and aberrant conditions are somehow internalized by organizations motivated to do so. As examples, we look to the Space Shuttle Challenger disaster of 1986, the Westray Mine disaster of 1992, and the Exxon Valdez disaster of 1989. Common to all three disasters was that these enterprises were operating within the acceptable limits and behaviours of the day, from the point of view of someone internal to the event scenario.

Figure 2.10: Diagram illustrating the broad spectrum of determinants comprising causality

This is more the case when these disasters are examined at the enterprise level. In each disaster, the regulatory agencies and local governments were implicated in their occurrence. Broadly speaking, determinants are categorized as political, technological, environmental, cultural, organizational, and human factored (Figure 2.10).

2.17.2 Events as Systems

Investigations are systems (Figure 2.11). This dissertation contends that events are also systems (Figure 2.13). A system is a 'network of many variables, in causal relationships with one another' (Dörner, 1996:73). The analysis of cognitive error is no less a system, completing the investigation loop (Figure 2.12). The inputs are decision errors discerned by investigation, and the outputs are cognitive precursors as determined by cognitive profiling. The cognitive profiling methodology proposed in this dissertation will show that decision errors are influenced by the perceptions of risk of the decision makers, and that these perceptions are covariant with the psychological and cognitive precursors articulated by Reason (1990, 2005).

Figure 2.11: A schematic illustration of the recursive nature of systems applied to investigation

Figure 2.12: Schematic 'completing the loop' of accident investigation (Sweeney, 2004)

Figure 2.13: A schematic illustration of the recursive nature of systems applied to events

2.17.3 Error Analysis as a System

Contemporary methodologies in the investigation of events in the workplace are incomplete. Accident theorists are moving towards accident modelling that increasingly takes a system view of accidents (Benner and Hendrick, 1987; Perrow, 1984), and by their example are leading us toward a more holistic approach. The current paradigm of investigations is very much one wedded to responding to a consequence and working backwards to devise a model of cause and effect (causation).

A more holistic approach would be to define the error system inclusive of the event, and establish the determinants (inclusive of Reason's psychological precursors) of causality endogenous to the system (Figure 2.13). In so doing, one addresses the determinants of the event, as well as those of events not yet realized. This is true prevention; one that is not limited to the immediate causes and effects, but includes all constituent errors in the error-forcing system. The term 'error-forcing system' remains purposely vague in this dissertation.
The mechanism of 'forcing' remains elusive, and the concept of error as a system is more the point insofar as analysis is concerned. The revelation that errors can, and do, have self-organized criticality is paradoxical in light of the energy and resources that we as a society expend to prevent them. Nonetheless, it is by modelling events as symptomatic of error systems that we achieve a more enduring and probative appreciation for the psychological precursors and perception of risk that are antecedent to them.

Figure 2.14: A schematic of the recursive nature of systems applied to cognitive profiling

2.18 Conclusions

This dissertation defines an event as a dynamic system of techno-social interactions between workplace parties, their technology and their working environment manifesting increasing disorder, or entropy. It is important to note that loss of operational integrity is the deleterious effect, but not necessarily a culminating effect. Loss of operational integrity may, or may not, result in an incident, accident or environmental excursion. The implication is that an event can be in progress that is not physically manifested. A mining operation that accepts increased risk and uncertainty may be operating outside of design parameters or the expectations of the parties within the enterprise – or it may not. Yet, without taking a systemic approach to the investigation of upset conditions and close encounters, the event may go unnoticed until the active and latent factors exceed defences (Reason, 1990), and criticality occurs.

Davies et al. (2003) note that, from the point of view of the observer, events, causes, and their consequences are not simply properties of the physical world. Further, they argue that an observer cannot apply corrective action to events they do not perceive until such time as the consequences or causes of those events are realized. The perspective of the observer is paramount, and upon it hinges the very notion of 'cause.' If for some reason the observer is not present as the event transpires, or lacks awareness, or ignores indicators that the event scenario (system) is in progress, then they have a limited perception of the event, its risk and the presiding uncertainty.

Systems are subject to external influences, but by virtue of their structure and integration, have a capacity for achieving internal equilibrium in response to these influences. Accordingly, it is proposed that within an event scenario (system) there is a similar, but opposing, mechanism that causes disequilibrium – and that mechanism is entropy (Figure 2.12). By acknowledging that events are systems, we are recognizing their complexity; their dynamic nature; and that they are not singular moments in time. They are not serendipitous or products of misadventure. Rather, events are unintentional products of humanity's effort toward enterprise; yet within them are seeds of disorder akin to self-organized criticality (Section 2.14.3). The question then becomes: where or to what do we attribute this disorder, and can disorder be predicted, if not prevented? The answer, as determined by this research in the examination of historical records, is an unequivocal yes.

This dissertation proposes that events are systems, rather than singularities. Akin to Reason's (1990) pathogenic trajectories, event systems have destabilizing influences that require countering as entropy (disorder) increases with time.
Inherent in this model of events is the first hypothesis of this research:

Events are not random: they are physical manifestations of interactive systems between humans and their environment in which the likelihood of their occurrence is presaged by, and proportionate to, the dissonance between actual risk and its perception.

"Irrationally held truths may be more harmful than reasoned ones." Thomas Huxley (Bartlett, 2000)

3 COGNITIVE SCIENCE

Cognitive science is the discipline within the field of psychology concerned with human information processing, and includes attention, perception, learning, and memory; their structures and representation (Dawson and Medler, 2004). As the workplace becomes increasingly complex and automated, we can expect there to be a commensurate increase in the cognitive load carried by all of the workplace parties. We are steadily transforming from a world in which physical demands have dominated the workplace to one in which cognitive demands will be the determinants of events in the workplace. Sträter (2005:6) makes the case:

Humans at the working level are forced to make decisions based on constraints from targets set at the management level, the procedures and interfaces given, the required communication with working partners and the operational tasks to be performed. This leads to the phenomena of induced mental workload. The term 'induced' comprises the additional effort due to the type of interaction with the system. A frequently selling argument of automation is that it reduces workload. However, induced workload may cause an even higher net workload for the user than the workload an automated system is designed to reduce.

If we accept that the cognitive demand on workplace parties is increasing, then it follows that this demand will necessarily influence the provenance of errors contributing to events (Sträter, 2005). Human error will shift from proximate to the event to more distal, as those parties making decisions respond to demands and constraints at the system and organizational level. The impact of these decisions, and any associated error, becomes more latent and distributed within the organization with the degree of separation between decisions and their unintended result. This 'cognitive fog' confounds accident investigation and requires us to understand the cognitive context for error, both individually and collectively. It is therefore essential that we appreciate the linkages between management design decisions and the functional operating decisions at the working level. Both require cognitive processing and collectively determine behaviour and, ultimately, the amount of risk accepted in the workplace (Figure 3.1). In other words, 'we behave in a certain way based on the thought patterns which preceded the behaviour' (Gibson, 2001).

Figure 3.1: Influence of management and worker cognition on behaviour (Sträter, 2005)

3.1 The Cognitive Mill

The human brain is an information processor and is constantly comparing external stimuli of the 'external world' with its own representations, or 'internal world' (Sträter, 2005). As a process referred to as the cognitive mill, cognition is iterative, subconscious and stability seeking. That is, the cognitive mill is like an inertial guidance system in which our worldview is being sampled through experience, perception and reasoning and then aligned with reality (Figure 3.2).
Nominally, the internal and external worlds are in balance or otherwise in agreement; however, should there be discordance, then a state of cognitive dissonance exists and some accommodation or intervention is sought. The implication is that in the absence of cognitive dissonance, there is no perception of mismatch between the learned behaviour and the event scenario, and it is therefore unlikely that a decision maker will alter their established behaviour. Cognitive dissonance is thereby a prerequisite for corrective action or behaviour change on the part of the observer.

Figure 3.2: The cognitive mill model of human cognitive processing (Sträter, 2005)

3.1.1 Cognitive Dissonance

Cognitive dissonance is a cognitive science term that refers to a state of dissonance or discord between one's perceptions and one's behaviours. In effect, two cognitions are competing for accommodation and a tension exists between a decision maker's perception of how things should be and how things appear to be. In the absence of mitigating information, the decision maker is compelled to accept the duality or seek resolution by acquiring new beliefs, attitudes or information. Cognitive dissonance can cause decision makers to suspend disbelief, or to resist accepting mental cues that they are uncomfortable with, effectively deferring an appropriate response to the new reality, as it would require them to depart from established behaviours and norms (Aronson and Tavris, 2007). Cognitive dissonance impairs the decision maker's ability to accurately assess and respond to a new perception of risk. Cognitive dissonance explains why people behave counter-intuitively when provided with information that conflicts with their worldview. Ironically, when confronted with evidence contrary to their beliefs, an individual who holds a position (as regards risk, for example) often exhibits an increased commitment to that belief. They are prone to biases and heuristics such as confirmation bias and self-justification, through which they are able to shore up and defend their beliefs.

As an example, consider an underground mineworker who is a smoker and disposed to smoking underground in areas where smoking is prohibited. When presented with information that stipulates that such a practice puts others at risk of fire, or at risk of the health effects of second-hand smoke, their compulsion to smoke is dissonant with their perception that they are putting others in harm's way. People do not do well with dissonant perceptions, and seek resolution by one of two mechanisms. They will accept the information and change their behaviours, or reject the information in support of their behaviours. Research in dissonance (Aronson and Tavris, 2007; Plous, 1993) predicts that we are often predisposed to the latter, particularly in matters of risk and its perception. Thus, subconsciously, our underground miner seeks to resolve his dilemma and must formulate a response that will achieve consonance. He can adjust his worldview to incorporate this new but dissonant information and modify his behaviours by complying with the expectation – or he can dissent. Dissent is lower energy physiologically, as it accommodates his compulsion. Mentally, however, dissent introduces the need for countermeasures, as perceptions of guilt and remorse are associated with non-compliance. This is the paradox of cognitive dissonance.
In order for the underground miner to achieve consonance, he must provide evidence or information that not only supports his behaviours, but also defeats the argument that his behaviours are harmful. If he is mildly dissonant, he may argue that there are no flammable materials underground and that the rest of the crew are smokers as well. If he is strongly dissonant, he may argue that smoking reduces risk by providing an indicator of ventilation speed and direction, or that smoking provides a means of detecting oxygen deficiency. The stronger the dissonance on the part of the miner, the stronger is his bias.

Either way, our underground miner must resolve the apparent discord between his preferred worldview (his behaviours) and his perception of risk in the workplace. In order to effect compliance, mine management can apply a number of traditional strategies. Mine management can institute severe disciplinary policies that change the risk equation by making smoking subject to dismissal; or they may choose to reward the correct behaviour by providing a benefit. Both of these traditional strategies are likely to result in some measure of efficacy, however fleeting. A better approach would be to reduce, if not remove, the mechanism of dissonance by providing the miner with a safe place to smoke, or with the assistance of a smoking-cessation program.

3.1.2 Self-Justification

In the absence of reforming ideas, beliefs and attitudes, the decision maker is left with one alternative (the lower energy one): to reconcile cognitive dissonance by shoring up his beliefs and attitudes with self-justification. Dissonance is the engine that drives self-justification (Aronson and Tavris, 2007). Self-justification restores self-image, and at the root of every decision is the belief that the decision, and by extension the decision maker, is validated. The more pain, discomfort, or effort required to arrive at the decision or action in question, the more committed the decision maker is likely to be toward that decision (Aronson and Mills, 1959). Aronson and Tavris (2007:19) write:

Neuroscientists have recently shown that these biases in thinking are built into the way the brain processes information – all brains, regardless of their owners' political affiliation. For example, in a study of people who were being monitored by magnetic resonance imaging (MRI) while they were trying to process dissonant or consonant information about George Bush or John Kerry, Drew Weston and his colleagues found that the reasoning areas of the brain virtually shut down when participants were confronted with dissonant information, and the emotion circuits of the brain lit up happily when consonance was restored. These mechanisms provide a neurological basis that once our minds are made up, it is hard to change them.

Clearly, humans, as sentient beings, are not comfortable with dissonance. We claim, and more often hear, that we should learn from our mistakes. How many of us have the courage of that conviction? History records examples of men of exceptional character who did: Abraham Lincoln, Thomas Edison and Robert E. Lee, to name but a few (Aronson and Tavris, 2007:223). They conclude:

Perhaps the greatest lesson in dissonance theory is that we can't wait around for people to have moral conversions, personality transplants, sudden changes of heart, or new insights that will cause them to sit up straight, admit error, and do the right thing.
Most human beings and institutions are going to do everything in their power to reduce dissonance in ways that are favourable to them, that allow them to justify their mistakes and maintain business as usual. They are not going to be grateful that their methods of interrogation have put people in prison for life. They are not going to thank us for pointing out to them why their study of some new drug, into which they poured millions, is fatally flawed. And no matter how deftly or gently we do it, even the people who love us dearly are not going to be amused when we correct their fondest self-serving memory ... with the facts.

3.1.3 Cognitive Consonance

The antithesis of cognitive dissonance is cognitive consonance. Cognitive consonance is the state of harmony and equanimity that exists between a person's attitudes, beliefs and behaviours and their worldview. Perhaps counter-intuitive to the process of decision making is that cognitive consonance is not necessarily a good thing. As much as groups seek concordance for the purposes of reaching consensus, cognitive consonance can devolve into groupthink in the absence of an examination of goal setting and the objective evaluation of risk. Thus, we appreciate that, from the point of view of the individual, cognitive consonance represents a lower energy demand state than is the case for cognitive dissonance. However, in the collective of group decision making, some degree of cognitive dissonance is appropriate and indicative of healthy truth testing. The Bay of Pigs fiasco of 1961 and the Battle of the Somme of 1916 are both examples of decision making that arguably were the product of excessive cognitive consonance among those who influenced decision-making (Reason, 1990).

3.1.3.1 Groupthink

In the classic example of the Space Shuttle Challenger (Vaughan, 1996), engineers employed by a NASA contractor suspended all rational thought and established standards to accept an imperative presented by NASA mission management to proceed with the pending launch. In doing so, they (the engineers) replaced a single cognition (exceeding a launch design parameter) with another: the acceptability of the risk. NASA expressed increasing expectations and pressure on the contractor to concur with the decision to launch. The decision was without merit; however, the degree of dissonance was not sufficient to cause an intervention, and the engineers working for the contractor collectively acquiesced to a 'go for launch' that history has recorded as a classic case of groupthink.

The phenomenon of groupthink (Janis, 1982) is explicative to accident theorists respecting the final moments preceding events that are synonymous with tragedy (the Sunshine Mine disaster in 1972) and infamy (the Westray Mine disaster in 1992). Groupthink speaks to a mechanism whereby cognition transcends the individual/collective boundary, and describes an interaction between members of the workplace social unit whereby decision makers forsake rationality and good judgement in deference to authority, peer pressure and status. Janis (1982:9) writes that groupthink 'is a mode of thinking that people engage in where they are deeply involved in a cohesive in-group, when the members' strivings for unanimity override their motivation to realistically appraise alternative courses of action,' and attributes groupthink to eight specific symptoms of group interaction (Table 3-1).
1. Illusion of invulnerability: Members share excessive optimism and a collective acceptance of risk.

2. Collective rationalization: Members discount new information that contradicts their worldview, or warnings that might require them to commit to another course of action.

3. Illusion of morality: Members share an unquestioned belief in the group's inherent morality and are inclined to ignore moral and ethical consequences.

4. Excessive stereotyping: Members stereotype those holding opposing thoughts as incompetent, weak or inferior.

5. Pressure for conformity: Members apply direct and defensive pressure to any dissenter in the group who would offer a contrary or unsupportive argument.

6. Self-censorship: Members strive to align themselves with consensus, suppressing doubts and countervailing opinion.

7. Illusion of unanimity: Members share a belief that the majority rules and that, in the absence of any opposing view, consensus is established and supported by all.

8. Mind guards: Certain members become self-appointed guardians of the group's values and beliefs, and protect the group from information or argument that weakens group complacency.

Table 3-1: Symptoms of groupthink as enumerated by Janis (1982)

In the context of events in the mine workplace, groupthink is an interesting phenomenon that attributes to a small number of decision makers their contribution to the destabilization of a critical event and, ultimately, to its escalation towards disaster. Ostensibly, the phenomenon of groupthink is an example of collective cognition impairing an individual decision maker's ability to act on sufficient cognitive dissonance to bring to bear objectivity, critical thinking and rationality in the acquisition of a more realistic perception of the risk associated with a worldview. In reference to the Space Shuttle Challenger disaster of 1986, Vaughan (1996:405) writes:

For at its essence, the case is a picture of individual rationality irretrievably intertwined with position in a structure. Position in the engineering profession, the aerospace industry, and the various organizations made up the labyrinth NASA-contractor network was a key determinant of individual and collective determinations of risk. Position determined social mission. Position determined access to information. Position determined responsibility for acting on information and the actions legitimately could be taken. Position contributed to ability to interpret information and the worldview brought to the organization. Perhaps most important, position determined power to shape opinions and outcomes in one's own and other organizations.

3.1.4 Risk Polarization

Risk polarization describes the dynamic whereby cognitive consonance results in a shift in risk perception. This can occur when individuals holding a more moderate view alter their views to accommodate the extreme views of others. This polarization of the group is subject to the organizational structure and labour relations constraints within the workplace. We anticipate that the more stratified organizations are, the greater the opportunity there is for risk polarization, owing to the influence of position and status. Similarly, for organizations in which labour relations are difficult and politically charged, we can expect there to be a high degree of cognitive consonance on both sides of the bargaining table.
Unfortunately, cognitive consonance is likely to permeate other domains and endeavours, and one can expect to see consonance in matters such as risk perception and its quantification. Thus, cognitive consonance, once established within an organization or social unit, is likely to become entrenched, with the possibility of devolving into groupthink. Risk polarization is therefore symptomatic of a social unit that exhibits cognitive consonance, to the extent that individual perceptions of risk shift sympathetically towards perceptions that reflect social integration. In this manner, the individual decision maker achieves consonance in terms of their perception of risk vicariously through others, and may not benefit from direct ideation of risk and an accurate representation of hazards within the workplace.

3.2 Group Cognition

Most young adults graduating today from high school have a basic awareness, if not understanding, of events like the sinking of the Titanic, the Halifax explosion, Chernobyl, and the space shuttle Challenger and Columbia disasters. How many would have any awareness of the Piper Alpha disaster of 1988, the Sunshine Mine disaster of 1972 or the Springhill Mine disaster of 1958, which claimed 167, 91 and 75 lives respectively? How many would know that at Three Mile Island in 1979 the core of the reactor experienced a true meltdown – or the reasons why? As much as the questions are rhetorical, they serve to illustrate our capacity as human beings to limit our awareness inter-generationally.

Similarly, in a much smaller social unit, that of the workplace, there is a tendency to give only a passing interest to failures, both at the organizational and at the individual level. Individually, we all can relate to the humility of coming to terms with our errors; that much is understandable, particularly in a blame culture. Organizationally, is it the same thing? Are we predisposed collectively to accommodate error? Do we share an organizational hubris? These questions speak to the existence of psychological precursors to accidents and incidents (events) in the workplace. They suggest that events are not as isolated and unrelated as we would (like to?) believe. They suggest that there may be a group dynamic; an ethos towards error – a culture that, however inadvertently, sustains if not cultivates human error, and therefore the inevitability of events in the workplace.

There is increasing support for the notion that, beyond individual cognition, collective cognition exists within social groups (Busse, 2002; Reason and Sasou, 1997; Busby and Hughes, 2003). Most taxonomies of human error are attributed to individuals, and there is a paucity of modelling of human error as applied to group dynamics within the workplace. This research aims to address this by considering the interaction of workplace parties based upon the analysis of the decision errors made, and the effects of their perception of risk on those decisions. Accident theory is evolving toward the inclusion of complex techno-social systems and organizational influence (culture). Consequently, determinants are becoming more distal to causality, but no less contributory. By implication, organizational and system theory, and the artefacts of their design, must be considered communal or collective in nature. Any attribution of error at the organizational level is more appropriately made to the many than to the few.

Cognition, as it applies to human error, can be considered as occurring in three modes of mental processing.
They are: attention allocation, pattern recognition and decision making (Alexanderson, 2003). It is the latter mode, decision making, that is within the scope of this research. By no means is it intended to marginalize the contribution of the other two modes of cognition. It is decision making however that has plurality in terms of parties to the decision. It is the making of a decision that involves the mental processing of risk outcomes. And finally, it is the perception of risk that is affected by the organizational culture and ‗distributed‘ throughout the societal fabric of the workplace. 3.2.1 Distributed Cognition The workplace as a collective is a social aggregation, which is brought together not by chance, but by common purpose and mission. Recent research has suggested that groups or organizations (workplace or otherwise) have the capacity to function as information processing systems (Gibson, 2001). If we expand our understanding of the meaning of cognition, we can appreciate how this might be the case. If cognition is more than a process of the brain, but is inclusive to the concept of ideation; no matter the source, then conceivably groups forming ideas is cognition – in the aggregate.  Accepting that cognition can be attributed to social units and that decisions are influenced, if not explicitly made by these groups (committees, teams and collective bargaining units), we are obligated to shift our paradigm for the attribution of causality concerning events. Moreover, such a paradigm benefits our understanding of causality owing to the insight distributed cognition provides us as regards to how information (risk, hazards and events) is disseminated and integrated within the social fabric of the workplace. Busby and Hibberd (2006:26) explain: In terms of defining distributed cognition, Hutchins‘ central concern is how information is represented and how representations are transformed and propagated in the performance of tasks. Propagation can occur across a social group, across the boundary between what is internal and external to the individual actor, and across time (Hollan, Hutchins, & Kirsh, 2000). 58  This means that cognition is associated with processes that extend beyond the individual human mind, and the appropriate unit of analysis becomes a sociotechnical system, or functional system (Roger & Ellis, 1994), rather than the individual person. Thinking and learning, it is then claimed, depend on the characteristics of relevant knowledge, such as its retrievability, and not on whether it is located in person or surroundings – the so-called ―equivalent access‖ hypothesis (Perkins, 1993).   We understand how norms and standards are explicated within contemporary mining workplaces. As an industry, mining is both consistent and progressive in articulating expectations, behaviour and performance related, within the workforce. This is accomplished by regulation and codes at the statutory level, policies and systems at the organizational level, and procedures and practices within operations. These artefacts are the structures with which the enterprise expresses its acceptance or aversion to risk. They by design, transcend individuality and are rarely attributed to a single actor or limited in terms of those who are responsible for compliance. The advent of electronic communication makes this more the case, as such artefacts are both instantaneous and anonymous. 
Coupled with an increased reliance of teams and committees in the development of policies, practices and procedures, the transmission and distribution of these artefacts via electronic media further removes us from the idea of their attribution in the singular.  An early model of group cognition (distributed cognition), as it applies to human error, is that of Reason and Sasou (1998). Reason and Sasou provide an appealing model in which to consider distributed cognition within the context of individual and shared errors (Figure 3.3). Further, they introduce a subset of these errors as being dependent or independent. Independent errors are errors for which the actor(s) had correct and complete information upon which to base their action or decision. In contrast, dependent errors have the distinction of the information being incorrect or incomplete. The model offers four error types as taxonomy to be used against a backdrop of three phenotypes. They are: failure to detect, failure to indicate and failure to correct (Figure 3.3).  Within this schema, error is propagated in a ‗Swiss cheese model‘ fashion as trajectories). Any combination of shared and individual error has opportunity for detection, indication or correction (barriers to error) – and failing any or all can result in team errors. The model is as simple as it is robust. It opens the door for the attribution of 59  error to teams and other social units, and distributes the cognition and the attribution of error in a pluralistic fashion.  In many ways this model makes antiquated existing theories of causation, as the distinction between cause and effect is further blurred.   Figure 3.3: Group cognition model based upon fallibility of barriers (Reason and Sasou, 1998)  Within contemporary system accident theory, for which distributed cognition naturally lends itself, there is a paucity of methodology in its treatment. Busby and Hughes (2003) write: Our intention is to look for failures of distribution as contributors to accidents and incidents in hazardous, complex systems. The notion of distributed cognition was a useful one in several respects. Firstly, developing and applying knowledge is socially distributed in the systems we are studying, and this distribution is usually problematic. It is very common for people involved in the system to refer to ‗communications problems‘. Secondly, the process of designing and operating such systems draw upon knowledge and that has been developed by people at earlier times. For example, design is strongly influenced by standards and codes of practice. They have important functions, such as accumulating empirical knowledge in the engineering discipline, economizing on resources by reusing knowledge, and protecting against whimsical practices. But when a designer applies a standard, he or she rarely knows the same things as the people who compiled the standard, and rarely analyzes the applicability of the standard in as much detail as a design done from scratch. It is certainly as if the design process is distributed over different people and different times – with the designer 60  incorporating partial solutions embodied in standards developed by other people at other times.  In a new paradigm of distributed cognition, decision making becomes less of a cognitive function and more cognitive processing – particularly as regards to the perception of risk. 
Busby and Hughes (2003) illustrate this in their model (Figure 3.4):  Figure 3.4: The role of distributed cognition in accident causation (Busby and Hughes, 2003)  One can appreciate that there are expedients that govern our behaviours, be they cognitive or otherwise. There are economies with copying others, simply following the rules or minimizing effort by any other means (Figure 3.4). To this extent, Busby and Hughes (2003) suggest that the cognitive effort of others occurring over time is a mechanism for distributed cognition. They go further and illustrate that within these economies are also assumptions – the biases and risk perceptions that then sustain motivations on the part of the recipient actors through time. Indeed, the originators of the artefacts being copied, followed, or otherwise employed - may no longer be physically present. The legacy of their mental processing is still present, however, by way of replication or adaption of their endeavours. Their cognition is distributed over time and throughout the social unit; and, entrained within are their assumptions, values and perceptions. 61  3.3 Discussion This dissertation argues that human error plays a major role in the occurrence of accidents within the mining enterprise – and that of all enterprises. Further, human error is a factor unlike other causal factors; as we are the unwitting architects of the system in which all factors reside. Hence there is an inherent lack of objectivity in the analysis of events, as the examination of error has within it the very biases and heuristics that in the first instance are present when the systems were designed. These biases and heuristics are influenced by the perception of risk and its detection. Investigators have by their vocation and circumstance a lower tolerance of risk than the subjects of their investigation, and this bias is a disadvantage to their endeavours to the extent that it is not appreciated or taken into consideration. Identifying human error in an event scenario is, however, not necessarily sufficient to identifying causality. With respect to the actual occurrence of any event, the design of organizational artefacts might not be appropriate or conducive to cognitive processing and the prevention of the event. The salient issue is that the person or persons present at the time of the event must be aware of, and have contextual understanding of these artefacts to be compliant with them. Complicating the issue is that as decisions relevant to the event are more distributed, the more removed the decision makers are from the consequence of the event. There is a lack of appreciation for the role of distributed cognition in general and the lack of mechanisms for its detection in particular. This is a shortcoming that this research is intended to address. Distributed cognition by its nature entrains all the economy as well as all of the limitations of its contributors. However, as a social dynamic there is a ‗distillation‘ of values and perceptions of risk that is anything but averaging in its outcome. Were these values and perceptions to be skewed to the more risk averse, this research would not be necessary. In the experience of this researcher however, the reverse is normally the case whereby a recipient of distributed cognition is likely to be less aware of, or cautious, in the application of artefacts in the workplace than was the intention of the originators. 
The mechanism by which the perceived risk is diminished from the actual risk, through the distribution of cognition by these workplace artefacts, is essential to the understanding of decision error and is therefore of interest to this research. Busby (2001:251) writes:

Hutchins writes about a 'culture' as passing partial solutions from one generation to the next. In this study, it was a specific aspect of culture – the norms which designers shared – which provided much of this solution passing, and played a part in many errors. Norms were often seen in fact as things that had been implemented in the aftermath of past errors, in order to avoid their recurrence. They ranged from informal understandings (how, for example, structural and piping design was demarcated), through explicit procedures (for example, how to represent un-modelled piping branches), to codes that specified necessary properties of an artefact (which could be as simple as the height of a handrail). These norms were especially important when different people's work had to be consistent, but when in isolation they would have found it hard to predict what their colleagues' would have been.

3.4 Conclusions
The goal of this dissertation is to demonstrate the use of cognitive profiling as a method of analysis of decision error and its taxonomy. The lexicon of cognitive error, like the syntax of a language, serves to define and focus this modelling process. Modelling decision error within a cognitive framework will enable the analyst to measure risk perception beyond that of the individual, and inclusive of the social unit, through distributed cognition theory. Although the decisions of the collective are generally distal and not as directly linked in causality, they are no less potent in the ideation of risk and its perception. Perception of risk becomes a barometer of an organization's acceptability of risk and a strong indicator of the direction in which it, as a social unit, is evolving, as measured by its artefacts, its treatment of events in the workplace, and its ethos of error. Cognitive demand is increasing in the workplace (Busse, 2002; Sträter, 2005). Inherent in human cognition are bias, heuristics and error (Slovic et al., 2002). In the absence of any countervailing strategy or mitigation, it is reasonable to suggest that cognitive error is also increasing within the mine workplace. Further, the increased organization of the mine workplace, its social units and its artefacts would lead those who subscribe to distributed cognition to appreciate the second hypothesis of cognitive profiling events in the mine workplace: Decision error, as it contributes to causality, is not limited as an attribute of individuals, but is distributed within the cognition of the social unit and its system(s) of governance.

"It is easier to perceive error than to find truth, for the former lies on the surface and is easily seen, while the latter lies in the depth, where few are willing to search for it."
Johann Wolfgang von Goethe (Bartlett, 2000)

4 HUMAN ERROR THEORY
4.1 Introduction
Human error became a mainstream study of interest because of increasingly horrific disasters occurring in the 1970s and early 1980s (Trepess, 2003). The Flixborough disaster of 1974, followed by Three Mile Island in 1979 and Chernobyl in 1986, served to galvanize the attention of government and public alike (Reason, 1990).
Clearly, human error is timeless as it is ubiquitous; and as a civilization, we have only recently realized the true enormity of our vulnerability in the face of increasing technology and its complexities (Perrow, 1984). One might ask whether human fallibility has insidiously crept into our modern existence; or, rather has it always been an unwelcome companion and we have chosen to ignore it? We have only to look at the historical record to realize that as a society we have developed a singularly convenient ability to move beyond disasters, ostensibly to heal and rebuild. Such is our nature. But, in so doing is it not also true that we often fail to take the time to truly understand the reasons of our failures; at the very least the lessons to be learned from them?  Human error is a broad and highly studied subject. Yet, as a subject matter, it is not a comfortable one. Investigators of human error draw upon their humility; their fallibility and ultimately their humanity. As theory is built upon theory, human error theorists have increased the scope and relevance of this field to all enterprises and intellectual disciplines. As example, a pioneer in human error, Norman (1981) set the stage in his categorization and analysis of slips (execution failures). Building on this work is that of Rasmussen (1987), who broadened the field with his work in skill, rule and knowledge errors. Similarly, Reason (1990) built on the foundation of Rasmussen and 64  Norman in his composite model of human error: the Generic Error Modelling System (GEMS). Norman‘s understanding of errors of execution has lead to understanding of errors in design, which is currently generating understanding of errors at the organizational and the systems levels (Dörner, 1996; Whittingham, 2004; Strȁter, 2005). Accident theorists are currently experiencing a renaissance in the study of human error that layer by layer is unveiling the intricacies of human behaviour within the workplace. 4.2 The Nature of Human Error The study of human error transcends many disciplines. Contributions are being made by the disciplines of psychology, reliability engineering, cognitive science and system software (Busse, 2002). Fundamental to understanding human error is its scope and definition. A common element in many definitions is the notion that the error or action is intended to achieve a desired outcome (Whittingham, 2004). With this in mind, the definition of Whittingham (2004:6) is adopted which defines human error as: A human error is the unintended failure of a purposeful action, either singly or as a planned sequence of actions, to achieve an intended outcome within set limits of tolerability pertaining to either the action or the outcome.  There are three tenets within this definition of human error worthy of note within the context of events in the workplace. First, there must be an a priori purposeful action, or intent. Second, the error is outside the limits of established tolerability. Lastly, implicit in the definition, is that a failure flows from the error or action resulting in a consequence. Unfortunately a measure of human error is commonly perceived as concomitant with the severity of consequence (Woods and Cook, 2008).  4.2.1 The Tenet of Intent Strictly speaking, an error cannot have occurred unless there is a standard to which the action or perceived error can be compared. In this regard, we do not define human error by a negative outcome. It is neither a necessary, nor a sufficient condition. 
An error is established only when the consequence is clearly outside the tolerability of the purposeful action. Intent of the action is inextricably related to tolerability, and by this reasoning, precludes chance or random events from the analysis. The tenet of Intent forces us to examine the rationality of our standards in terms of a desired outcome.  65  4.2.2 The Tenet of Tolerability Tolerability is a natural requisite of human fallibility. Whereas intent speaks to the expectation on the part of the operator, tolerability goes to the quintessence of what really matters – the acceptability of risk. Tolerability encompasses those deviations that accommodate the action, and prescribes those that cannot, and by this measure establishes limits of failure (Whittingham, 2004). Tolerability, in a systems framework, is the amount of variability implicit in the actions of operators that is permissible to achieve the desired results. Tolerability predicates the amount of acceptable risk. 4.2.3 The Tenet of Consequence Most familiar as a tenet, is consequence – and largely misunderstood. The reason is one of availability heuristics. If we consider two scenarios with identical error, but with markedly different consequences or outcomes, we discover a paradox. As an example: consider failing to lockout an energised device within the workplace. Most workers in heavy industry can describe a time in which they either forgot to lockout, or did not know that lockout was required, while working on energised equipment. For the vast majority, the consequence would be a stern conversation with someone in authority setting them straight regarding the risk. Tragically, for some, the consequence is extreme – in the form of a disabling or fatal injury. The error remains the same; however, the outcomes are tragically different.  In the former instance, the error is characterized as small; in the latter instance, the error would be considered grievous. How can this be? Availability heuristics teaches us that as information processors, human beings lack the capacity for sanctioning our actions in the absence of empirical consequences. Errors without consequences, are too often overlooked or ignored; and rarely reported. We are limited by, and prisoners to, our own worldview shaped by our perceptions. A dilemma associated with consequence is cause and effect (Woods and Cook, 2008). In our zeal to ‗discover‘ the cause of an event (often confused with consequence), we as sentient beings, suffer from the cognitive equivalent of myopia. The consequence of an error, as stated previously, is not required to establish that we have exceeded tolerability parameters. Too often however, it is the signal that something is amiss. Typically the greater the consequence, the more likely we are to find the one ‗fix‘ or factor that by its elimination would have prevented the consequence – but not 66  necessarily the event. Clearly, consequence as a tenet is a necessary, but not sufficient condition in establishing human error. Busby (2001:234) writes: Error has a basic importance in most human tasks. It is a necessary element in learning a task and adopting it to changing needs, but is also one of the main influences that limits performance in a task. This importance increases at the organizational level, where error can be very widespread. Error is also revealing: it often helps us understand the nature of a task that has become habitual, automated or just taken for granted. 
When an organism is well adapted to its environment satisfactory performance says more about the environment than its internal nature. It is when performance fails that this internal nature becomes evident. We require a new paradigm with which to frame human error, one that considers consequences as perturbations of a system in disequilibrium, one that is organizationally holistic - and heuristic. We are best served by considering human error not as discrete failures of individuals (although these do occur), but rather as forced errors that are a product of interaction of people within their workplace environment and as prescribed by organizational and its techno-social artefacts (norms, rules and standards). 4.2.4 The Notion of Failure In the context of human error the word ‗failure‘ has a connotation all its own. It is not a positive one; nor is it particularly helpful as a declarative statement. That said; literature on human error is replete with reference to failure - as are the vast majority of accident reports. By way of example, within their discourse on learning from error, Cannon and Edmondston (2001:162) write: We conceptualize failure as a deviation from expected and desired results. This includes both avoidable errors and unavoidable negative outcomes of experiments and risk taking. It also includes interpersonal failures such as misunderstanding and conflict. Our conceptualization is deliberately broad, encompassing failures of diverse types and magnitude, because we propose that opportunities for learning exist in both minor understandings and major mishaps. We note also that the amount or significance of learning is not necessarily proportional to the size or scope of a failure. Clearly, learning can emerge from major failures such as launching a highly visible product only to have it rejected by the market, or implementing a new technology that cannot be made to work in the intended context. Additionally, however, significant learning can come from uncovering a small failure to communicate in a work relationship, and such seemingly small failures can lead, ultimately, to highly preventable major failures.  67   Although there is no argument against the substance of what the authors offer in his discourse, the word ‗error‘ can be substituted for the word ‗failure‘ and their meaning would not be altered. There is subtle shift in tone and context in a way that is not trivial; one that is less judgemental. We should bear in mind that failure is more appropriate as a verb than as a noun, acknowledging the fact that we fear, if not disdain, the latter. Failure (or error), once determined, whatever the context, should be a starting point for examination, not an end-point (Busse, 2002). Therefore, this researcher stipulates in this dissertation that ‗failure,‘ as a noun, is reserved for the description of degraded mechanical components, and the verb is more appropriate for its human condition analogue.  Although the definition of human error subsumes the notion of failure, in the absence of consideration of the environment and circumstances in which the human error occurred, any explicit reference to human failure is misleading, if not prejudicial.  4.3 Human Reliability Assessment One approach to the understanding and the prediction of human errors is the methodology known as human reliability assessment (HRA). HRA encompasses a class of models for the purpose of analyzing and predicting human error. 
As a methodology, HRA has been the subject of much debate over whether, as a process, it is more psychosocial than technical in its derivation (Hollnagel, 2005). Nevertheless, HRA addresses three fundamental questions, each of which has its counterpart within the process of causation attribution (Table 4-1).

1. HRA: What are the errors that can occur? Investigation: What were the circumstances of the event?
2. HRA: How likely are the errors to occur? Investigation: What were the reasons for the event?
3. HRA: What means are there to reduce the likelihood of error? Investigation: How do we prevent a recurrence of the event?
Table 4-1: Table illustrating the complementary nature of HRA and causation attribution (Hollnagel, 2005)

HRA is a methodology that is forward-looking, as opposed to investigations, which are by nature post-event. HRA provides us with some insight as to how to better frame human error – more by its limitations than by its example. Many HRA techniques have been developed; the Technique for Human Error Rate Prediction (THERP), the Accident Sequence Evaluation Program (ASEP), the Cognitive Reliability and Error Analysis Method (CREAM) and A Technique for Human Event Analysis (ATHEANA) are a few of the more common examples (Hollnagel, 2005). The details of these models are beyond the scope of this dissertation; however, the precepts of HRA bear closer examination within the context of causality. Heavily weighted in epidemiologic and human factors theory, HRA complements the investigation of events, in particular from the perspective of sequence-of-events modelling. Further, HRA provides a cognitive analogue to system destabilization known as cognitive decomposition. The analysis essentially accomplishes this by reducing cognitive function into its constituent parts and then expressing the errors therein as failure probabilities. In consideration of any given event scenario, a primary goal of HRA is the quantification of probabilistic risk (Hollnagel, 2005). Herein lies the promise, as the quantification of human error is the holy grail of risk assessment and fundamental to forward-looking (predictive) models. Hollnagel (2005) illustrates the graphical utility of HRA through the analysis of cognitive decomposition (Figure 4.1).

Figure 4.1: Schematic illustrating an HRA model of cognitive decomposition (Hollnagel, 2005)

Cognitive decomposition, as depicted above, illustrates the essentials of HRA analysis. The nodes represent opportunities for intervention and the arrows the error trajectories or probabilistic outcomes. In its full evolution, the cognitive decomposition model characterizes degrading cognitive processing in which the outcome results in failure. Notwithstanding the unfortunate reliance on the word 'failure,' the model is indicative of a binary event tree approach that is common in technical systems (e.g. fault tree analysis, failure mode and effect analysis). Hollnagel (2005:163) argues that such an approach is impractical – and invalid. He opines:

HRA has in common with many accident analysis methods the assumption that it is reasonable to consider the inherent variability of human performance by itself, hence that a performance failure is an attribute of the human component rather than the circumstances during which the actions take place. In this sense 'human error' is – metaphorically, at least – the sought-for signal rather than the noise.
This assumption is strangely inconsistent with one of the main tenets of information processing approach, which states that:  A man, viewed as a behaving system, is quite simple. The apparent complexity of his behaviour over time is largely a reflection of the complexity of the environment in which he finds himself.  To be fair, Hollnagel‘s (2005) treatment of human reliability assessment was directed at the earlier models of HRA, and models that are more recent have  made considerable inroads in integrating the cognitive dimension of human failure with the cultural, organizational and environmental dimensions of the workplace. It is ultimately a challenging and complex analysis that has crossed the line for which a single practitioner can master, owing to the depth of knowledge required in reliability engineering, systems design, and cognitive science. Clearly, to pursue this emerging discipline, the next generation of accident theorists and investigators will require additional, if not a new set of skills than those that currently exist in the industrial workplace. This is the challenge. We can learn from HRA, to the extent that models predicated on human error need to be circumspect respecting approaches taken. The lesson learned is that humans do not make errors in the absence of a culture or organization that enables if not contributes to error, by the very artefacts and systems that prescribe the workplace. Further, these artefacts (norms, rules and standards) are not limited to the organizational level. Particularly in the mining industry, the rules, norms and standards governing the workplace are an interdependent mix of corporate policy, generally accepted industry 70  practices and regulatory statute. In a democratized society, statutes are in a constant state of review and reform, strongly influenced by organized labour, industry associations and the public at large.  We emphasize that the perception of risk that these parties bring to the analysis will ultimately dictate the terms and height to which the bar is set as regards to the acceptability of risk. Any attribution of human error to events within the workplace must be inclusive of the influence of the collective worldview, as well as that of other discrete social units comprising the enterprise. 4.4 Taxonomies Any treatment of human error and its taxonomy is necessarily reliant upon a lexicon of terminology that is at first glance strange to the uninitiated. Taxonomy itself is a term borrowed from the field of biology to mean nomenclature or classification. So too are numerous other terms that describe the cognitive and human error schemas; consequently this research will respect this convention. Human error is, as previously stated, ubiquitous as it is broad and therefore it is essential that as we compare models and schema, we appreciate what exactly is in or out of the box in terms of analysis. We start with the general, and move to the particular, as established by Whittingham (2004). An important taxonomic distinction respecting human error is the degree to which error can be organized as a genotype or a phenotype. In biology, the distinction between these terms is self evident; not so for the taxonomy of error - but no less fundamental.  4.4.1 Error Genotypes The origin of an error determines its genotype. A variation of endogenous error, errors classified by genotype originate within the cognition of an individual(s), such that the even if the task was executed correctly the task could be intrinsically flawed. 
Within the context of human error therefore, the genotype refers to errors that are defined by their mental processing, and not the manifestation of the error in terms of cause and effect. Whittingham (2004) illustrates this with an example involving a task in which a faulty item passes an inspection process (Figure 4.2). The immediate determinate is that there was a deficiency in the lighting conditions. In terms of genotype, the error is attributable to the inability of the person(s) to mentally process a ‗fault‘ owing to the lack 71  of acuity. How that deficiency is manifested in terms of effect is not relevant to the genotype. 4.4.2 Error Phenotype  How an error manifests beyond the faculty of cognition determines the phenotype. It is a variation of exogenous error. The error, having a physical effect on the workplace, is observable. In the inspection process example above, the error in terms of phenotype is the failure of detection of the faulty item through the physical process of inspection (Figure 4.2). The cognitive aspects of the error are not relevant to the phenotype.  Figure 4.2: A schema differentiating error genotypes from phenotypes (Whittingham, 2004) 4.4.3 Exogenous and Endogenous Errors In reference to their biological origins, endogenous and exogenous refer to originating with and without the body, respectively. Used by many disciplines, ‗body‘ can represent any number of allegorical entities. In economics, endogenous can refer to internal to a nation‘s economy, and exogenous those influences outside of a nation. The terms are useful in attributing causation, as they implicitly define the scope of the event under investigation. Conventional investigations into mining events consider 72  determinants and factors of causation within the care and control of the mine proper to be endogenous. The influences from the corporate management may or may not be endogenous to the investigation, depending upon the corporate culture. It is rare that investigations consider influences from collective bargaining units, industry associations or the regulatory community. These would be exogenous factors in the traditional sense. It is central to this research however, from the perspective of considering the role of risk perception on the collective cognition, that we consider all parties to the enterprise as endogenous to the investigation until determined otherwise.  Rasmussen and Svedung‘s (2002) schema illustrates this enterprise perspective, identifying the various parties to the enterprise; both individuals and organizations—in a complex techno-social system (Figure 4.3).   Figure 4.3: Graphical illustration of a techno-social system (Rasmussen and Svedung, 2002)  73  In this schema, the authors implicate the changing techno-social forces as being retarding forces to effective vertical integration of risk within the enterprise. Ironically, it is only through such vertical integration that an effective and cohesive perception of risk can be realized. This dissertation examines the role and contributions of decision errors to event scenarios in the mine workplace. As a specific subset of human errors, we characterize decision errors in terms of both their genotype and phenotype. The model presented in this research will use the decision error phenotype (physical manifestation) to adduce something about the genotype (psychological precursors) of the decision error. 
We will illustrate that the cognitive profiles presented through decision error analysis can apply to individuals, small social units or to the mining enterprise as a collective. The analysis is only limited to the extent that the investigation of the event considers the various parties to the enterprise as endogenous to the investigation. It is axiomatic that the contribution that human error or behaviour makes cannot be considered if we exclude parties from the process of investigation and the evidentiary record.  4.5 An Age of Error Models of human error have been in existence for many years. However, only in recent decades has their study garnered the attention of accident theorists (Norman, 1981; Rasmussen, 1987; Reason, 1990). The focus of these early models was in high technology, such as civil aviation, petrochemical and nuclear plants. This is attributable to the litany of disasters that took place during the 1970‘s and 1980‘s – many of which reaped unprecedented deaths and losses (Table 4-2). Public outcry was matched only by their increased awareness.  Public safety, worker safety and environmental concerns gained a voice as once again the public‘s perception of risk and its uncertainty urged lawmakers and industry to reform standards and regulations (Reason, 1990; Perrow, 1984). It was becoming increasingly clear that high technology and large-scale enterprise did not guarantee reliability; thus, a new era of reliability and performance engineering was being ushered in (Hollnagel, 1998; Reason, 1990).  Error prediction and modelling has become increasingly accessible with the advent of the personal computer as a platform for an emerging software market specializing in risk analysis. The stage has been set for a renaissance in human error modelling and its classification. This research will show through the analysis of historical 74  case studies (including many of those in Table 4-2) that human error transcends time in the historical record, and that an effective taxonomy of error can be elucidated retrospectively with an appropriate methodology. We introduced this methodology in Chapter 5, as cognitive profiling.  
1972 – Sunshine Silver Mine, Kellogg, Idaho: 91 miners killed in the worst metal mine event in US history
1974 – Flixborough Disaster, Flixborough, UK: 28 persons killed and the hamlet heavily damaged
1977 – Tenerife Airport Disaster, Tenerife, Canary Islands: 583 persons killed in the worst aviation accident in global history (collision of two 747s on the runway)
1979 – Three Mile Island Disaster, Middletown, Pennsylvania: the worst civil nuclear accident in the history of the United States
1984 – Bhopal Chemical Disaster, Bhopal, India: 1,408 citizens of the community died in the worst chemical plant disaster in world history
1986 – Space Shuttle Challenger, offshore Florida: the worst accident in US space program history (of the day), killing 7
1986 – Chernobyl Nuclear Disaster, Chernobyl, Ukraine: untold fatalities and the worst industrial disaster in world history
1988 – Piper Alpha Disaster, North Sea, Scotland: 167 workers died in a production platform fire, the worst in history
1989 – Exxon Valdez Disaster, Valdez, Alaska: 11 million gallons of spilt oil resulting in the most expensive clean-up in US history
Table 4-2: Table of unprecedented disasters defining the high-technology era of the 1970s and 1980s

4.6 Existing Taxonomies
The classification of human error has empirical underpinnings, largely based upon advanced technology and military operations (Rasmussen, 1987; Reason, 1990). However, there is ample evidence to suggest that other industries are equally susceptible to human error. Existing taxonomies are, to a varying extent, based upon the information technology and civil aviation industries, as indicated by the work of Busse (2001), Bove (2002) and Leveson (2004). The cockpit of an aircraft in particular lends itself well to the study of human error, with its contained working environment, man-equipment interface and well-established procedures and regulations. Yet the modern airframe is still subject to the same vagaries of exogenous factors (weather state and maintenance standards) and endogenous factors (human perception and cognition) as the mining, or any other, enterprise. In this respect, the aviation industry provides us with a crucible in which to examine the techno-social interaction of the crewmembers and their collective cognition. For the purposes of this research, we acknowledge that regardless of the domain (industry type, scope of research) in which human error taxonomies are devised, we can draw upon them for principles and lessons. Human error taxonomies provide the syntax for the mechanisms of causality underlying human error, and for their analysis (Busse, 2001).

4.6.1 Cognitive Reliability and Error Analysis Method
Hollnagel (1998) has devised a human reliability assessment (HRA) model that is referred to as the Cognitive Reliability and Error Analysis Method (CREAM). This model provides a unique approach to human error analysis insofar as it can be used both as a performance prediction tool (prospectively) and as an event investigation tool (retrospectively), and hence its inclusion in this research. The CREAM model addresses the early shortcomings of HRA by being inclusive of both the genotype and phenotype taxonomies. Ultimately aimed toward using the phenotype (behaviour) taxonomy of error in the analysis of the error mode, CREAM also provides insight into the genotype (cause) of the error in question. The three genotypes are individual, organizational and technological causes of error.
The eight phenotypes are timing, duration, sequence, object, force, direction, distance and speed. Hollnagel provides subgroups of the latter in the form of actions; actions at the wrong time, place, type and object. The method offers a structured analysis of tasks at risk through the classification by phenotype in the first instance, followed by genotype to determine or predict causation (Figure 4.4). Hollnagel (1998) makes the distinction between observable phenomena (manifestation) and the cognitive mechanism (cause), providing an empirical schema for their classification. The strength and utility of CREAM is in this differentiation, which separates behaviour from its cause and effect antecedents. In doing so, this model also avails the analyst with a structured methodology (that is in short supply) to the 76  separation of subjective and objective error. This feature harkens back to the reasoning of Dekker (2004), that in the evaluation of an event, one must consider two worldviews: that of the observer and that of the participant in the event scenario. The former provides a measure of the state of entropy and the latter a measure of the perception of risk.  Figure 4.4: The structures methodology of CREAM in evaluating human error (Hollnagel, 1998) 4.6.2 Skill – Rule – Knowledge Model A model, which dominates the discipline of human error analysis as it applies to human performance, is that of Rasmussen (1983) and known as the Skill-Rule-Knowledge (SRK) model. In this taxonomy of behaviour, Rasmussen envisions three levels of conscious control existing depending upon the degree of interaction between the operator and their environment (Bove, 2002). These levels of control, in order of complexity are skill-based behaviour, rule-based behaviour and knowledge-based behaviour (Figure 4.4). Skill based behaviours (SBB’s) are thought to exhibit the lower cognitive demand as it is prone to automation and repetition. Examples of skill-based 77  behaviour are activities involved in trades or the arts in which operators respond to ‗signals‘ without conscious effort for control.  Next in this hierarchal taxonomy are rule-based behaviours. Rule-based behaviours (RBB’s) are characterized as sequential subroutines in a familiar working environment, in which the subroutines are reliant upon stored rules and well-defined standards (Bove, 2002). The operation of complex machinery such as aircraft and heavy equipment fall in this category in which the operators respond to ‗signs‘ to establish the operational state of their work environment (Figure 4.5).  Finally, at the highest level of cognitive demand are knowledge-based behaviours that respond to ‗symbols‘. Symbols are abstract constructs and representations such as language and mathematics (Figure 4.5). Rasmussen cautions that knowledge-based behaviours, while attractive in their sophistication, require considerable investment in time and effort to master, and consequently are employed when the lower cognitive demand will not suffice (Rasmussen, 1983). Examples of knowledge-based behaviours are activities involving problem solving and diagnostics in which the goal is explicitly formulated (Busse, 2001).  Figure 4.5: The SRK taxonomy depicting three hierarchical control strategies (Rasmussen, 1983)  Rasmussen‘s (1983) SRK taxonomy is still a touchstone of human error modelling in providing insight into the apparent opportunity for mismatch between the 78  human element and their tasks. 
Variability within human performance or the workplace environment (or both) are attributed to be human error or component failures, respectively (Busse, 2001). Implicit in this model is that there be adequate and correct sensory input, a priori standards (stored rules) in place and the appropriate cognition for both rule-based behaviour and knowledge-based behaviour to take place. Busse (2001:42) summarizes: In general, skill-based performance flows without conscious attention and the actor will be unable to describe the information used to act. The higher level rule-based co-ordination in general is based on explicit know-how, and the rules used can be reported by the person, although the cues releasing a rule may not be explicitly known. During unfamiliar situations, for which no rules for control are available from previous encounters, the control must move to a higher conceptual level, in which performance is goal controlled and knowledge based. The goal is explicitly formulated. Then a useful plan is developed. Different plans are considered and their effect tested against the goal, physically by trial and error, or conceptually by means of ‗thought experiments‘.  Rasmussen‘s SRK taxonomy is the foundation for which he offers a ‗decision ladder‘ representing the intricate cognition that is dynamic to the process of decision-making (Rasmussen, 1987). Rasmussen‘s decision ladder is composed of eight ‗states of knowledge‘ and eight ‗information processing‘ activities in a sequential and logical framework (Figure 4.6). In this model, Rasmussen explicitly reveals the dynamic complexity between the various nodes of prototypical ‗states of knowledge‘ and the cognitive activities that link them.  Upon first inspection, the decision ladder appears very complex, however depending upon the connectedness between the decision-maker and their environment, the framework in reality presents ten cognitive pathways, one of which: the decision-maker subscribes. The ten pathways are inclusive of the nine short cuts (pathways with inherent error susceptibility) and the default path way in which all of the states of knowledge and cognitive activities take place (the nominal path). Rasmussen‘s model is significant, lest we as analysts are under the misapprehension that decision-making is a discrete and binary process. The decision ladder illustrates that for individual cognition, decision-making is dynamic; however, for distributive cognition the implication is that there would be permutations of decision pathways that would provide for intrinsic incoherence. This incoherence is intriguing to this research, as it is consistent with the 79  cognitive fog that appears to surround many of the decisions identified in case studies into the disasters such as the Piper Alpha production platform, the Sunshine Silver mine fire, and the Chernobyl nuclear power plant (Table 4-2). This incoherence presents challenges to the characterization of distributed cognition; however, it also offers opportunity for understanding and study.  Figure 4.6: The decision ladder model for Cognitive Task Analysis (Rasmussen, 1987)  80  4.6.3 Seven Stages of Action Model One model, that of Norman (1981) is significant in its simplicity and symmetry as applied to the formation of intention and its execution. That model is the Seven Stages of Action, and this model has strongly influenced subsequent human error taxonomies. 
Norman‘s taxonomy (Figure 4.7) enumerates the cognitive actions that are intermediary to a decision-makers worldview (the world) and carrying out an action in alignment with a prescribed decision (the goal). Central to this model are the two sides of the cognitive processing (actions): the evaluation actions and the execution actions. Perception, interpretation and evaluation fall into the former group and intention, sequencing and execution fall into the latter.   Figure 4.7: Schematic illustrating the Seven Stages of Action taxonomy (Norman, 1981)  The source of human error is explicitly within the domains of evaluation and execution; however, implicit is a correct worldview. In Norman‘s model, to the extent that the actor does not have a correct ideation of the world the formation of the goal and its 81  execution is degraded. Any degradation in the perception of the world, its interpretation and evaluation constitutes the ‗gulf of evaluation‘ between the actor‘s worldview and the goal. Similarly, any degradation in the formation of intention, specification of sequencing and execution constitutes a ‗gulf of execution‘ between the actor‘s goal and their worldview (Figure 4.7). This process is one of circular logic in which the errors are potentially self-correcting through the changing ideation of goals becoming concordant with a correct perception of the real world.  Norman goes further. Within the action of specification of sequence, he identifies errors that he calls mistakes and slips. Mistakes are associated with specification of sequencing that are concordant with the intention (or plan) but are inappropriate to the goal. In contrast, slips are associated with execution of sequencing that is discordant with the intention, but appropriate to the goal. Mistakes tend toward cognitive deficiency - errors in understanding or the adequacy of information. Slips are more akin to inadequacy caused by memory and inattention.   4.6.4 The Generic Error Modelling System Reason (1990) improves on both the work of Norman (1981) and Rasmussen (1983) with his taxonomy called the Generic Error Modelling System (GEMS). Reason‘s GEMS model is an effort to improve on his predecessors‘ model by providing an integrated model of error mechanisms not prescribed by the Seven Stages of Action model. He does this by starting with the presumption of an unsafe act, and then making the distinction between intended or unintended actions (Figure 4.8). In doing so, Reason is consistent with the tradition of Heinrich et al (1980), and Bird and Germaine (1985) in the identification with unsafe acts and their primacy with cause. Reason subdivides further and explicates slips and lapses as deriving from unintended action; and mistakes and violations as a derivation of intended action.  In doing so, Reason bases his taxonomy on skill-based slips and lapses on the unintended action side and rule-based and knowledge based errors on the intended action side (Busse, 2001). Consistent with Norman (1981), Reason designates mistakes as higher cognition demand and being appropriate with intention, but inadequate for the reaching of the goal. By contrast, slips and lapses are inconsistent with intent, and inconsistent with the goal. Thusly, GEMS taxonomy is based upon three types of error: slips, lapses and mistakes. Intended violations fall outside of the taxonomy, as they are 82  neither deviations from the intention nor the goal in the conventional sense of the schema.  
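To make the intended/unintended distinction concrete, the following minimal Python sketch encodes the four GEMS error types as a simple classification routine. It is offered only as an illustration of the structure of the taxonomy as described above; the three boolean cues are hypothetical prompts an investigator might answer and are not part of Reason's (1990) formal notation.

from enum import Enum

class GemsErrorType(Enum):
    SLIP = "slip"            # unintended action: execution failure (attention)
    LAPSE = "lapse"          # unintended action: omitted or forgotten step (memory)
    MISTAKE = "mistake"      # intended action: the plan itself was inadequate
    VIOLATION = "violation"  # intended action: deliberate departure from a rule

def classify_unsafe_act(action_was_intended: bool,
                        step_was_omitted: bool,
                        deviation_was_deliberate: bool) -> GemsErrorType:
    # Unintended actions resolve to slips or lapses; intended actions resolve
    # to mistakes or violations, following the split described in the text.
    if not action_was_intended:
        return GemsErrorType.LAPSE if step_was_omitted else GemsErrorType.SLIP
    if deviation_was_deliberate:
        return GemsErrorType.VIOLATION
    return GemsErrorType.MISTAKE

# Example: a deliberate shortcut around a required procedure classifies as a violation.
assert classify_unsafe_act(True, False, True) is GemsErrorType.VIOLATION

Such a routine does no more than mirror the branching of Figure 4.8, but it makes explicit that the taxonomy is driven by intention rather than by outcome.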
Reason's GEMS model represents a phenotype taxonomy in which the distinction between error types is based upon their outward physical manifestation, and it can therefore be considered a behaviour model that has the capacity to determine cognitive control processes. The utility of Reason's GEMS taxonomy is the specification of mechanisms of failure that he explicates in association with each error type. In particular, Reason provides mechanisms for rule-based and knowledge-based error beyond those of Rasmussen (1983) and Norman (1981). Within rule-based error, he discriminates between the 'misapplication of good rules' and the 'application of bad rules' (Reason, 1990). We interpret the former as a lack of perspicacity in applying rules that are correct, and the latter as the selection of rules that are wrong.

Figure 4.8: Schematic illustrating the Generic Error Modelling System taxonomy (Reason, 1990)

4.7 Knowledge-based Error Mechanisms of Failure
Within the error type of knowledge-based rules, Reason (1990) provides eleven failure mechanisms as a foundation. These failure mechanisms represent a genotypic characterization of cognitive error based largely on biases and heuristics. We have seen in Chapter 2 how availability and representativeness heuristics influence our perception of risk as applied to causality. Reason expands on this theme in terms of both the variety and the scope with which biases and heuristics contribute to cognitive error. Each of Reason's eleven mechanisms for knowledge-based error is of particular relevance to cognitive profiling.

4.7.1 Selectivity Bias
Selectivity bias is the tendency for the weight given to evidence or information to be determined by the manner in which the data are selected. Reason (1990) is more specific and brings into question whether the decision-maker's selection is based upon information that is relevant in terms of logic, as opposed to information that is merely psychologically salient. An example of selectivity bias is found in persons who subscribe to psychic phenomena. Often, people who believe in psychic phenomena will call upon anecdotal evidence in its support and not consider countervailing evidence of its repudiation. They do so because they are predisposed to a favourable determination as to the validity of psychic phenomena, and they select evidence that supports their belief, consciously or not.

4.7.2 Workspace Limitations
The workspace that Reason (1990) refers to is the cognitive workspace. Reason argues that the human mind has limitations in terms of the order in which it considers inferential data. Thus, it is more economical to consider information in the order perceived by working memory than in any other order. We can extrapolate from this principle that evidence received out of sequence will not be considered with the same weight as evidence entered in sequence.

4.7.3 Out-of-Sight, Out-of-Mind
A variation on availability heuristics, out-of-sight, out-of-mind refers to the tendency of people to affiliate with the immediacy of knowledge, and also to ignore information that is unfamiliar to them. Thus, a decision-maker is likely to fall back on experiences with similitude to the problem being solved, in preference to models requiring adaptation or careful consideration. It would appear that, cognitively, we as human information processors instinctually strive for economy over quality.
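As a purely illustrative sketch, and not something drawn from Reason's (1990) text, the toy Python model below contrasts a review in which evidence is ranked by psychological salience and truncated to a small working-memory window (selectivity bias, workspace limitations and out-of-sight, out-of-mind) with a review that weights every item by its actual relevance. The field names and numeric scores are assumptions made for illustration only.

from dataclasses import dataclass

@dataclass
class Evidence:
    description: str
    relevance: float        # how diagnostic the item actually is (0 to 1)
    salience: float         # how vivid, recent or familiar it feels (0 to 1)
    supports_belief: bool   # does it support the decision-maker's prior view?

def biased_review(items: list, workspace: int = 3) -> float:
    # Consider only the most salient items that fit the limited workspace,
    # then report the share of considered items that support the prior belief.
    considered = sorted(items, key=lambda e: e.salience, reverse=True)[:workspace]
    return sum(e.supports_belief for e in considered) / len(considered)

def relevance_weighted_review(items: list) -> float:
    # Weight every item by its actual relevance, regardless of salience.
    total = sum(e.relevance for e in items)
    return sum(e.relevance for e in items if e.supports_belief) / total

The gap between the two measures for the same body of evidence is one crude way of expressing how far a salience-driven, workspace-limited review can drift from a relevance-driven one.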
4.7.4 Confirmation Bias
Complementing availability heuristics is the tendency of problem solvers who have once formed a conclusion, no matter the paucity of evidence supporting it, to remain predisposed to supporting that conclusion even in the face of mounting evidence to the contrary. Once again, there appears to be an intrinsic lack of economy in the disposal of established ideas in deference to new ones for which there is better evidence.

4.7.5 Over-confidence
Reason (1990:89) offers that: "A plan is not only a set of directions for later action, it is also a theory concerning the future state of the world. It confers order and reduces anxiety." Confounded by confirmation bias, a decision or plan once established will take on an inertia that, in spite of new or stronger evidence, will resist change. Further, such resistance is more likely to the extent that the plan is elaborate, is the product of considerable resources or many people, or has hidden objectives.

4.7.6 Biased Reviewing
Reiterating the notion of the cognitive workspace having limited capacity, Reason (1990) makes the case that a problem solver may not review in its entirety the evidence and rationale respecting a solution or decision. Indeed, given the limited capacity of the workspace, their review may only be inclusive of that information and evidence that directly supports their determination.

4.7.7 Illusionary Correlation
Illusionary correlation speaks to the lack of capacity problem solvers have in detecting and establishing co-variation and its logic (Reason, 1990). Racial stereotyping is a common example of illusionary correlation; it occurs when an identified minority is assigned a statistically significant characteristic when, in reality, no such correlation exists.

4.7.8 Halo Effects
The halo effect refers to a cognitive bias exhibited when an attribution concerning one trait is applied to a second trait, because the human mind is averse to holding two different attributions. By example, a Nobel prize-winner in science will be afforded more credibility for their political views by virtue of their scientific acclaim, which has no bearing on their political views or affiliation.

4.7.9 Problems with Causality
Problems with causality essentially flow from the representativeness and availability heuristics covered in Chapter 2. Reason (1990) explains that insofar as a problem solver is prone to oversimplification of causality, they are also similarly predisposed to under-represent the state of future impacts. Reason also brings into play the role of hindsight bias, which further contributes to a distorted perception of causality. Hindsight bias effectively prejudices a problem solver's ability to model and solve one problem because they remember, by similitude, a previous problem for which they have already determined a solution.

4.7.10 Problems with Complexity
Problems with complexity refer to the lack of congruity between human cognitive processes and the highly interactive and tightly coupled problems and processes occurring in reality. We, as problem solvers, are limited in terms of our capacity to deal with many parallel problems, or with problems that change faster than our ability to mitigate them. Coupled with problems with causality, complex and dynamic scenarios simply exhaust our capacity to reason and mediate.

4.7.11 Problems in Diagnosis
Reason (1990) argues that the diagnosis of problems involves two separate logical reasoning tasks.
The first is the evaluation of information related to the determination of symptoms. The second is the synthesis of a theory that explains the symptoms and observations. The problem, according to Reason, is the lack of consistent application of reasoning to the detection of symptoms and to the determination of their explanation. That is, both cognitive tasks must be subject to the same rigour of reasoning. 4.8 Discussion There is much concurrence concerning the role of human error in events within the workplace (Dekker, 2002; Leveson, 2004; Whittingham, 2004; Hollnagel, 2005; Reason, 2005). Human error, however, has numerous dimensions and domains that are currently under examination by theorists, building upon the early taxonomic work of Norman (1981), Rasmussen (1983) and Reason (1990). From these taxonomies, technologies such as Tripod© (Doran and Van der Graaf, 1996) have emerged that are more inclusive of human error and its antecedents. Incorporating both the theoretical framework of the Swiss Cheese Model (SCM) of Reason (1990) and his GEMS taxonomy, the Tripod© model utilizes bow-tie accident modelling in the analysis of accidents and incidents (Figure 4.9). Explicit in the model are error types, SRK performance levels (Rasmussen, 1983), as well as decisions made by policy makers (UK P&I Club, 2008).  Figure 4.9: Schematic illustrating inclusion of error types in the Tripod model (UK P&I Club, 2008)  In the examination of the integration of human error analysis into techniques and technologies of investigation, it is important to understand, not in the abstract but tangibly, from where an error of decision-making originates. Decision errors are, for the purposes of this research, subsumed by cognitive error. Cognitive errors are in turn a general subset of the broader class of human error, for which there is an emerging taxonomy. Within this taxonomy are the specific phenotypes of slips, lapses, mistakes and violations (Table 4-3). The phenotypes of mistakes and violations correspond with decision error, and are the focus of the remainder of this dissertation. Within the phenotype of violations, a further distinction is made between violations that are routine and those that are exceptional in nature.  
Error Type | Description | Possible Causes | Precondition
Slip | Unintended deviation from a correct plan of action | Attention failure; mis-timing | Distraction from task; preoccupation with other things
Lapse | Omission/repetition of a planned action | Memory failure | Change in nature of task; change in task environment
Mistake (Rule-based) | Unintended action inappropriate to the circumstances | Sound rule applied in inappropriate circumstance; application of unsound rule | Failure to recognize correct area of application; failure to appreciate rule
Mistake (Knowledge-based) | Erroneous judgement in situation not covered by rule | Insufficient knowledge or experience – immaturity | Time/emotional pressures; organizational deficiency; inadequate training
Routine Violation | Habitual deviation from required practice | Natural human tendency to take path of least resistance | Indifferent operating environment; no rewards for compliance
Exceptional Violation | Ad hoc infringement of regulated practice | Wide variety – dictated by local conditions not planned for | Particular tasks or circumstances
Acts of Sabotage | Deliberate violation for malicious reasons | – | –
Table 4-3: Taxonomy of human error and their performance levels (UK P&I Club, 2008)
4.8.1 Decision Error The psychological precursors (antecedents) to any event are triggering mechanisms and a product of the organization‘s culture of safety (Reason, 2005; Hollnagel, 2005). Whereas it is understood that they contribute widely to the provenance of error, it is specifically the errors in mental processing – the cognitive errors – that are of particular concern. Cognitive errors are, by their nature, difficult to ascertain, post event or otherwise. However, it is their physical manifestation, or phenotype, that is observable post event. The process of investigation sheds some light on their characterization. Profiling cognitive error presents the opportunity to observe these decision errors, and then provide insight about the perceptions of risk held by the decision-makers, individually or collectively. Rasmussen and Svedung (2000:17) state the case plainly: Study of decision making for protection against major accidents involves an identification of the interaction found between the effects of decisions made by different actors distributed in different organizations, at different levels of society and during activities during different points in time. We have to consider that all these decision-makers are deeply emerged in their normal, individual work context. Their daily activities may not be coupled in any functional way, only the accident as observed after the fact connects their performance into a particularly coupled pattern. By their various independent decisions and acts, they have shaped a causal path through the landscape along which an accidental course of events sooner or later may be released. A release that is very likely caused by yet another quite normal variation in somebody‘s work performance – which will be very likely then to be judged the ‗root cause‘ after the accident.  Thus, we are not looking for the decision errors that are traditionally being considered causes of the accident, we seek to identify all the organizational bodies that contributed to the creation of the accident scenario, whether or not they have violated rules or committed errors. For this analysis we have to develop further the traditional formats for accident analysis.  An important point requires emphasis. 
As earnest as one can be in the pursuit of human error, it is not the end game, but rather the start. Hollnagel (2005:164) states: ―The consequence of this line of argument is that the variability of human performance constitutes the noise rather than the signal.‖ Human error, insofar as it presents as a failure mechanism, is a symptom (i.e. noise), and not the disease (signal). The human factor is just one factor within the system of human/machine/environment that through interaction, complexity and coupling potentially results in failure. Understanding the way in which this interaction becomes destabilized and deleterious to the enterprise speaks more about its design and governance than it does about the emergence of operator error. Humans are inherently fallible, and any system design needs to be sufficiently robust and resilient as to afford the detection of such errors and their correction without their becoming critical to the enterprise. 4.9 Conclusion The classification of human error and its taxonomy (phenotypes) examines the distinction between types of human error and their underlying mechanisms (genotypes). There has been a renaissance of modelling in human error taxonomy as it applies to causality that has served us well – starting with the Seven Stages of Action (Norman, 1981), evolving with GEMS (Reason, 1990) and culminating with CREAM (Hollnagel, 2001). These models provide a blueprint for progressing forward. By their example, a number of elements must be present in any model or theory the purpose of which is to determine the psychological precursors (antecedents) of an event through the back-analysis of decision errors. First, the model must offer a clear taxonomy based upon empirical data and supportive of current system accident theory. Second, the taxonomy must observe the physical manifestation (phenotype) of the error and have the capacity to transcend the observable to the inferential by attributing the cause (genotype) of the error. Third, the model must apply equally to the collective as it does to the individual, as decisions in the modern workplace are pluralistic constructs, and not limited to the singular. Leveson (2004:6) summarizes the latter, and writes: Effectively preventing accidents in complex systems requires using accident models that include that social system as well as the technology and its underlying science. Without understanding the purpose, goals, and decision criteria used to construct and operate systems, it is not possible to completely understand and most effectively prevent accidents.  Lastly, it is essential that, whatever the methodology, modelling of human error must explain not only the cause, but also the human dynamics underlying causality. The third hypothesis of cognitive profiling and this research is: The profiling of cognitive errors (particularly decision errors) is not only local, and must consider, if not explicate, the biases and heuristics of all of the parties to the enterprise.   ―My failures have been errors in judgment, not of intent.‖ Ulysses S Grant (Bartlett, 2000)  5 DECISION ERROR THEORY Decision error theory (Sweeney, 2004) holds that any decision error contributing to an event scenario can be classified as one of four mutually exclusive decision error phenotypes. They are errors of commission, errors of omission, errors of mistaken belief and system errors (Figure 5.1). 
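Before each of these phenotypes is defined in the sections that follow, a minimal Python sketch can make the four-way taxonomy concrete; the class and member names are illustrative only and are not part of the published model.

```python
from enum import Enum

class DecisionErrorType(Enum):
    """The four mutually exclusive decision error phenotypes of decision error theory."""
    COMMISSION = "error of commission"            # a known standard is deliberately transgressed
    OMISSION = "error of omission"                # a known standard is deferred or left unmet
    MISTAKEN_BELIEF = "error of mistaken belief"  # the standard is unknown, or the information relied on is wrong
    SYSTEM = "system error"                       # default class; not attributable to any one decision maker
```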
The color coding of each decision error is an integral part of the graphical analysis of decision errors and will be maintained throughout this dissertation.  Figure 5.1: Depiction of the four genotypes of decision error theory (Sweeney, 2004)  5.1 Decision Errors Defined Decision errors are decisions made by a decision maker, or makers that are determined to be contributory to an event scenario, or in the absence of an event - 91  deleterious to the enterprise.  The decision is the object of the analysis and the subjects of the analysis are the party or parties who make the decisions. A decision is established when a standard that exists a priori to the decision is transgressed, defeated or otherwise rendered ineffective by its derivative actions. A decision error is established when the standard is not met owing to human intervention, or the lack thereof. That is, the operator(s) must have known that a rule, norm or statute was applicable, and that some action was required, but failed to take correct action, or failed to take adequate action.   In this model, decision errors are unique to human error analysis, insofar as an objective non-participant in the event scenario – typically the accident investigator determines the degree of correctness. Understandably, from the perspective of the decision maker, their decision may, or may not, be deemed contributory to the event scenario. This determination is that of the analyst who must consider the decision under the lens of intention and in the context of evidentiary record concerning the event scenario in question. 5.1.1 Nomenclature It is essential that there be no ambiguity respecting the existence of a standard, rule or duty of care, or the party or parties that were cognizant of its transgression. For this reason, the terms used in this model are necessarily both precise and explicit. The standards that are the object of examination are any norm, rule, statute or duty of care that demonstrably exists a priori to the event in question. The measure that these standards exist is not necessarily by their documentation, but the degree to which the decision maker(s) were aware of and understanding the standards. In this regard, their existence must flow from the evidentiary record; however, of particular relevance is the techno-social dimension – was the standard a real artefact of the workplace for the decision maker(s)? The question is not whether they were in agreement with the standard, rather that there was an expectation or obligation imposed on the decision maker(s). Documentation is a necessary, but not a sufficient, condition of substantiation. The aim of this research is to evaluate the cognition (collective or individual) of the parties, and is not limited to primary evidence. The analyst must go beyond the evidentiary record and consider the state of mind of the decision maker(s), and determine their worldview to truly appreciate and understand causality (Dekker, 2004).  92  The decision makers of decision errors are inclusive of all parties within the enterprise, in keeping with the second hypothesis of this research (Chapter 4). They may be individual, or a collective (distributed cognition) as stated by the first hypothesis of this research (Chapter 3). Within this dissertation, the makers of decisions are those that are most proximal to the deleterious action, in the first instance, and successively distal thereon. 
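Viewed as data, the preceding definitions suggest the minimum that an analyst must establish before a decision error can be recorded. The following sketch captures those elements; the field names are hypothetical and chosen for illustration, not drawn from the dissertation.

```python
from dataclasses import dataclass

@dataclass
class DecisionErrorRecord:
    """One candidate decision error distilled from the evidentiary record."""
    actor: str                  # the workplace party most proximal to the action or inaction
    condition: str              # the condition contributing to the event scenario
    standard: str               # the norm, rule, statute or duty of care existing a priori
    standard_known: bool        # was the actor aware of, and did they understand, the standard?
    contributed_to_event: bool  # judged contributory by the analyst, not by the actor
```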
As the subjects of this research, parties to the enterprise are referred to as decision maker(s), actors or operators, depending upon the context and meaning. The noun ‗actor‘ applies in the vernacular of accident theory, and the noun ‗operator‘ appears in case studies and examples.  5.1.2 Errors of Commission Errors of commission are decision errors in which an actor knows that a standard, norm or rule exists, but elects to transgress the standard for reasons known only to themselves, and in so doing contributes to the realization of the event scenario. An example of an error of commission is the sinking of the Titanic. Captain Smith was making nearly 22 knots when the vessel under his control struck an iceberg. The standard of the day was to slow down – indeed, in many instances come to a stop – when in the vicinity of ‗iceberg alley‘, particularly as far north as he was. There is much speculation as to why he was steaming so fast; however, the reason died with him. It is an example of experience trumping prudence – and an error of commission. 5.1.3 Errors of Omission Errors of omission are decision errors in which an actor knows that a standard, norm or rule exists, but elects to defer applying the standard for reasons of conflicting priorities – usually human factors such as panic, fatigue, confusion, exhaustion or boredom, or environmental factors such as distractions or ambient noise. Often, an actor will defer a decision to another in an attempt to gain time to resolve stimuli both internal and external to their worldview. They may do so because they genuinely believe that other parties are more competent to make the decision, or because they feel that a decision would be prejudicial to their immediate interests. In the absence of deference to a second party, an actor may also defer the decision for an undetermined period of time, in the hope that circumstances will resolve or that additional information or resources will come forth that will bring clarity.   An example of an error of omission was the Hinton rail disaster, which took 23 lives in February 1986. Two trains collided when one train failed to yield to a track signal. The engineer was operating the freight train with only two hours‘ sleep. It is surmised that he was literally asleep at the switch at the time of collision. Fatigue and sleep deprivation are common human factors attributed to errors of omission (Edwards, 2006). 5.1.4 Errors of Mistaken Belief Errors of mistaken belief are decision errors in which an operator either does not know that a standard, norm or rule exists, or was aware of the standard but, as a result of insufficient or incorrect information, makes a decision error ultimately contributing to the event scenario. Implicit in this definition is that, had the information been correct, the decisions would have been rendered harmless. An example is that of the ‗Gimli Glider‘, an event involving a 767 running out of fuel over Manitoba. The pilot had mistaken the amount of fuel on board owing to a metric conversion error. He simply believed that he had more fuel on board than he actually had. He was able to glide the aircraft safely to an old WWII airstrip near Gimli, Manitoba and narrowly averted a disaster (Williams, 2003). 5.1.5 System Error In addition to the three mutually exclusive decision error types, there is a default error, or system error. 
System errors are inherent in complex technical systems and are known to occur when subsystems or their components interact in inexplicable and unpredictable ways (Perrow, 1984). However, for the purposes of this research, we consider a broader definition of system error, inclusive of organizational and cultural determinants. These system errors result in a defence or control becoming inoperable, not by direct human intervention, but by changes, or perturbations, in the organization with time.  Notable among these may be unintended consequences when an organization restructures or changes through time.  A well-intended organizational improvement can result in substandard conditions through lack of change management on a very gradual scale. Latent factors can go unnoticed until they become symptomatic with time; and yet, no apparent decision error is attributable. System errors in this taxonomy are colour coded white (Figure 5.1) and indicate that the system complexity obscures the evidentiary record to the extent that the error cannot be connected to any one action or inaction. Nonetheless, a standard is transgressed, and by this definition a decision opportunity presents itself, but is not attributable to any particular party to the enterprise. 5.1.6 Unintended Consequences A phenomenon not well understood is that of unintended consequences, which serves as the exception that proves the rule in decision error theory. More a theory akin to Murphy‘s Law than a scientific fact, the phenomenon describes the observation that, through the implementation of actions toward some goal or purpose, there are unpredicted effects that can have serious consequences. An example is the codification of bicycle helmet use throughout many communities in North America. Although helmets are known to reduce the severity of injuries, contemporary theories (Sloan, 2006) suggest that, from the perspective of motor vehicle operators, bicyclists wearing helmets can be afforded less room on the road, thus causing an increase in collisions.  Apparently, a decision to legislate the use of a sensible device to mitigate head injuries has resulted in the unintended consequence of increasing the frequency of collisions between motor vehicles and bicyclists. One can see, however, that from the perspective of policy this claim merits closer examination. The policy achieves its desired result – it modifies the behaviours of cyclists to protect their heads. The fact that some motor vehicle operators change their behaviours to be less risk averse is another example of risk homeostasis. The decision makers in this example are the operators of motor vehicles – not the policy makers. There is no causal connection (other than sequential) between the policy of wearing helmets and the perception of risk on the part of motor vehicle operators. Thus, in this example, there is no causal connection between collision frequencies and the policy of wearing bicycle helmets. From a societal/system point of view, there may be an apparent covariance between the implementation of the policy and the increase in bicycle/vehicle collisions; however, there is no causality between the intention of the policy (reduce risk) and the behaviours of the motor vehicle operators alleged to increase the frequency of collisions.   To be clear, there are occasions when unintended consequences do flow from decisions in the workplace. 
However, unless there is a causal connection in the first instance, and a violation of a known standard in the second instance, they are not decision errors in the purest application of decision error theory (Figure 5.2). The objective of decision error analysis is not to bring under scrutiny all decisions in the workplace – only decision errors for which there is a causal connection to an event scenario or to actions deleterious to the enterprise.  5.1.7 Decision Error Logic As is the case for other models of human error (Rasmussen, 1983; Reason, 1990), the foundation of decision error analysis is the correct determination and taxonomy of the errors themselves. The logic of decision error analysis is deceptively simple (Figure 5.2). Either through the process of investigation or by evaluation of the investigative record, conditions resulting from the defeat of controls or defences are examined. This is the starting point, and one that is common with most investigation methods. More subtle and difficult is the determination of those parties who knew that these conditions existed, and through action or inaction missed the opportunity to remedy the condition. This is often the analysis ‗lost‘ in conventional investigations, as emphasis is on cause and effect. It is this moment of action or inaction that goes to the very heart of decision error theory and cognitive science. What were they thinking? What was their motivation? What was their perception of risk, and what did the event scenario look like from the point of view of these actors (Dekker, 2004)? These lost opportunities (decisions) are the lament of many who experience event scenarios, and are central to decision error analysis.  Having determined that a condition/decision existed contributing to an event scenario, the next step is to determine what standard, norm or duty of care existed prior to the decision (Figure 5.2). The pre-existence of a standard, and the fact that the standard was violated, is pivotal to the argument that a decision was reached resulting in action or inaction by some party. The standard may be any number of artefacts ranging from implicit to explicit (Figure 5.3). The absence of violation of any standard, defence or control is indicative of the lack of management systems and preventative strategies respecting the event, and is a determination of some importance. In terms of the logic of decision error theory, however, the absence of standards precludes any further analysis. The next consideration in the logic of decision error is to determine if the decision maker did in fact have knowledge of the standard employed as a defence or control (Figure 5.2). In the case of a deficiency of knowledge of the standard, or a lack of experience to apply it, the error would be one of mistaken belief. Both decision errors of commission and decision errors of omission require that the person making the decision have full and complete knowledge of the standard; otherwise they are not advertently participatory in its transgression. Discounting the case of error of mistaken belief, the next test is the extent to which the actor intended to meet the standard, norm or duty of care – or not (Figure 5.2). In those instances where the decision maker elected not to meet the standard, the error is one of commission: one where, through affirmative action or inaction, the standard was not met.   
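Taken together with the error of omission and system error branches described following Figure 5.2 below, this logic can be condensed into a short decision function. The sketch that follows paraphrases those tests in Python; the function and argument names are invented for illustration and are not part of the published model.

```python
from typing import Optional

def classify_decision_error(standard_existed: bool,
                            decision_identified: bool,
                            actor_knew_standard: bool,
                            actor_chose_not_to_meet: bool) -> Optional[str]:
    """Return the decision error class implied by the tests of Figure 5.2."""
    if not standard_existed:
        return None                        # no pre-existing standard: no further analysis
    if not decision_identified:
        return "system error"              # default class; no attributable action or inaction
    if not actor_knew_standard:
        return "error of mistaken belief"  # deficient knowledge of, or experience with, the standard
    if actor_chose_not_to_meet:
        return "error of commission"       # the actor elected not to meet the standard
    return "error of omission"             # the actor knew the standard but deferred or could not cope
```

In the tutorial of Section 5.4, for example, the senior bolter‘s decision to keep drilling after a bolt failed to install passes every test down to the election not to meet the standard, and is accordingly classified as an error of commission.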
Figure 5.2: Flow-chart illustrating the logic of decision error classification and their determination  In the absence of errors of omission, commission or mistaken belief, the default classification is that of system error (Section 5.1.5). If no decision can be found that can account for a defeated defence or compromised control, then other error mechanisms must be considered, suggestive of a system of defences and controls that is dysfunctional or inactive. It is noted that such a degree of complexity and coupling rarely occurs within the mining industry; however, it is anticipated that as the enterprise of mining embraces larger and deeper mines requiring technological innovation, mines may well encounter system errors and their effects (Sweeney and Scoble, 2006). The last remaining decision error class is error of omission. The test of error of omission is that the decision maker was cognizant of the standard but deferred or otherwise failed to meet the standard for reasons beyond their ability to cope (Figure 5.2). The reasons could be environmental or human factors, or a combination of both, that compromise cognitive performance. An example illustrating this is workers who function beyond the ‗red line‘ in terms of hyper-vigilance: a state in which our bodies revert to the fight-or-flight instinct and shed more advanced skills and training cues (Chiles, 2002). Control room operators during the Three Mile Island crisis in 1979 and the Chernobyl disaster in 1986, as well as numerous air traffic incidents, are examples of cognitive lock: a state of diminished capacity when too much information overwhelms human cognition. The decision maker wants to make correct decisions and meet the requisite standards, but is unable to do so within their impaired frame of reference. They take a ‗time-out‘ to reset their perceptions and either omit key decision opportunities or defer decisions until it is too late. Ultimately they may make poor decisions that would otherwise be well within their capacity to make under normal circumstances. Heat-exhaustion, fatigue and overstimulation from alarms, lights and annunciators can have a similar effect, as can boredom, coercion and anxiety. 5.1.8 Standards of Care Requisite in the application of decision error theory is that some standard of care pre-exist the event occurrence. It is presumed that most people in the workplace are conscientious and caring individuals who want to make a contribution and fulfill their employment contract. Implied in this contract, however, is that there are standards and norms that guide, if not set, the expectations of performance, behaviour and conduct. These standards can take on many forms and artefacts within the workplace, or enterprise. 
Thus, due diligence is the definitive standard of care under civil code (Keith, 2006); explicitly imposed on the workplace parties (Figure 5.3).  Figure 5.3: A schema illustrating the primacy of standards of care used in this research  99  At the lower end of the spectrum of standards of care are those that are assumed by the workplace parties and are largely implied by virtue of affiliation with the mine workplace, or the organization. Within the scope of the workplace, it cannot be over emphasized that all parties to the enterprise are compelled by numerous standards of care, and that in general the more authority and special knowledge of the enterprise – the more numerous and imposing are these standards of care. It is often not appreciated that every party to a mine workplace has, as a minimum, a duty of care to every other party respecting their health and safety. It is a shared duty that cannot be abrogated. 5.1.9 Duty of Care Afforded special treatment is to the notion of duty of care, as it requires some knowledge of tort law. Duty of care is the implied obligation one person has to another to use all prudence, caution and attention of a reasonable person in their actions. If a duty of care can be established (as one workplace party has to another), and it can be shown that a person‘s actions breached that duty resulting in harm; then a claim of negligence can be alleged (Bruce, 1998). Of issue is whether a person‘s act or omission is one of misfeasance or nonfeasance. Bruce (1998:1) explains: Even if the defendant has foreseen the harmful event, he/she will often not be found to owe a duty of care if his/her failure to act is one of nonfeasance rather than misfeasance. If it is the actions of the defendant which create the circumstances in which a third party may be harmed, failure to take precautions to avert that harm is called misfeasance. In that circumstance, the defendant will be held to owe a duty of care. If, however, the defendant has merely observed that a third party may be harmed if a certain precaution is not taken, and has not taken that precaution, that failure to act is termed nonfeasance. In that circumstance, the defendant may be found to owe no duty of care (assuming that he/she did not create the circumstances – i.e. that he/she was not also a misfeasor). For example, if A knocks down a stop sign and lack of that sign subsequently contributes to the injury of B at that intersection, A may be found to have owed a duty of care to B – and may be found negligent for having failed to report the initial accident. On the other hand, if, after A has knocked over the stop sign, C notes the absence of the sign and fails to report that fact, C will not be found to have owed a duty of care to B.  Acts of malfeasance are deemed in this investigation to be equivalent to Reason‘s acts of sabotage, and fall outside of the scope of this research. Acts of nonfeasance and misfeasance are very much of interest, although the legal distinction 100  becomes moot. Of interest is the fact that in either case the party knew of a standard, defence or control that was being compromised and their responsibility to act speaks to the degree of their duty of care.  5.2 Lost Error Taxonomy Explicit in the decision error theory proposed by this research is the observation that there is a gap in event modelling (Figure 1.1). Typically missing from analysis is the identification of cognitive precursors, antecedent to an event.  
Human error analysis in general and cognitive profiling in particular addresses the taxonomy of these cognitive precursors. A more declarative description is that there are human errors that are lost to the ‗fog of causality‘ (errata ignotus); and to know something about them requires a more indirect methodology than is available with current error analysis. This methodology involves cognitive profiling and builds upon the insight of Norman (1981), Rasmussen, (1983) and Reason (1990). We cannot observe cognitive processing in the workplace, much less cognitive errors. We can however, observe the behaviours and actions that are derivative of cognitive errors. As manifestations of error, they are known and measurable – as the phenotypes of errors of commission, omission and mistaken belief.  Decision errors fall within the intended acts of Reason‘s (1990) GEMS model. Errors of commission and omission are clearly rule-based violations. Similarly, errors of mistaken belief are knowledge-based mistakes, in terms of the GEMS model. The ‗lost errors‘ of the taxonomy proposed in this dissertation are analogous to Norman‘s gulf of evaluation (Norman, 1981). In contrast, the decision errors of commission, omission and mistaken belief are analogous to Norman‘s gulf of execution because they are consonant with intention, but inappropriate to the goal. Only by observing the error phenotypes can we adduce something about the error genotypes. The Lost Error Taxonomy presented in this dissertation (Figure 5.4) is an adaptation of the Seven Stages of Action (Norman, 1981), and is a reduction of other human error taxonomies insofar as it is restricted to decision errors. Specifically, this taxonomy corresponds with the SRK (Rasmussen, 1983) basic error types of mistakes and violations. It is not a general theory as epitomized by Reason‘s GEMS model, but rather a theory specific to cognition. Nonetheless, the objective of cognitive profiling is a holistic characterization of decision errors determinant to an event, with the attendant 101  benefit of providing insight into the collective ethos of error within the organization under whose governance events occur.   Figure 5.4: Lost Error Taxonomy schema, an adaptation of that of Norman (1981)  Worldview is the starting point for this model. In a perfect world, the actor‘s perception of risk is accurate and their interpretation of that risk results in ideation that is appropriate to their goals. This represents a state of consonance between the actor‘s perception of risk and their decisions (cognitive consonance). Alternatively, an actor‘s perception of risk may not be equal to the worldview (either risk averse or risk tolerant), and their ideation would not be appropriate to their goals. This represents a state of dissonance between the actor‘s perception of risk and their decisions (cognitive dissonance). Such decision errors are elusive; and, are lost to conventional event analysis, because of the lack of evidentiary record and the limitations imposed by cause and effect modelling. Regardless, a decision, once formed, must result in action - in 102  order that the decision, or decision error, exist. The characterization and classification of these decision errors is essential to the understanding of perception and cognitive error. This taxonomy (Lost Error Taxonomy) is predicated on decision errors, and by their presence, there are opportunities for error detection and correction (Figure 5.4). 
Errors such as ‗failure to detect‘ and ‗failure to correct‘ are errors commonly missed in the investigation of events owing to increasingly complex and coupled systems (Perrow, 1984). The capacity to evaluate, detect and correct errors is tantamount to their prevention, and is one of the goals of human reliability assessment (Hollnagel, 2005). Furthermore, there is reason to believe that this tactical faculty is resonant with the concept of situational assessment (Stanton et al, 2001). To complete this symmetry, a good sense of situational awareness lends itself to good decisions, and alternatively, its absence contributes to decision error (Figure 5.4). Clearly, situational awareness is antecedent to decisions (Endsley, 2000) and their validation (situational assessment). 5.2.1 Situational Awareness Endsley (2000:5) defines situational awareness as ―the perception of the elements in the environment within a volume of time and space, the comprehension of their meaning and a projection of their status in the near future.‖ In this definition, perception is explicitly the determining factor. Endsley (2000) emphasizes the dynamic nature of situational awareness, and by extension situational assessment. Theoretically, the absence of situational awareness, and a failure of operators to adapt to changes in status could conceivably lead to a decoupling of an operator‘s perception of risk and a rational worldview (Woods, 1988). The taxonomy proposed within this dissertation by its recursive nature, provides a mechanism that accommodates the notion that if decision errors are left unchecked – a decision error could result in a failure. This dissertation asserts that situational awareness is antecedent to situational assessment. By this reasoning, if the perception of risk determines situational awareness, then an operators‘ capacity for situational assessment is determined by how well they reconcile the dissonance between their worldview of risk and the actual risk in the workplace. Next, in this dissertation, we will expand on decision error theory and the Lost Error taxonomy to introduce decision error analysis and cognitive profiling as an innovative error modelling technology. 103  5.3 Decision Error Analysis The technique employed in this research to map decision errors is that of decision error analysis (Sweeney, 2004). The decision error analysis diagram (DEAD) is a graphical method of recording and tracking decision errors contributory to an event scenario with respect to chronology and the actors within the scenario. The purpose of the analysis is to distil from the evidentiary record, or investigative records, standards that are defeated, violated or otherwise compromised in accordance with chronology. This is achieved by use of a ‗radar diagram‘ structure, with which the decision errors are plotted (Figure 5.5). The technique can be applied post-event, as part of the investigative process, or as an audit tool to evaluate the contribution of human errors to an enterprise, and their characterization. 5.3.1 Radar Diagram Structure The decision error radar diagram is populated by decision error in accordance with decision error theory (Figure 5.5). The centre of the diagram represents an event, and the rays diverging from this point represent potential decision errors of actors participating in the scenario. Concentric circles emanating from the epicentre (the event) of the diagram, represent conditions contributing (determinants) to the event, and are covariant with time. 
Chronology flows in retrograde, from proximal to the event to distal. Thus, decisions plotted closer to the event are closer in time than those further away. The conditions can be any determinants found to contribute to the event: physical, organizational or behavioural. In the abstract, the conditions are akin to unsafe acts or unsafe conditions (Heinrich et al, 1980); however, they also represent standards (artefacts, controls and defences) that have been compromised. With this analysis, a condition cannot be plotted unless a standard of care pre-exists the event, consistent with decision error theory. The conditions are identified by number or letter and are tabulated prior to being plotted accordingly (Table 5-1). 5.3.2 Characterization of Actors The actors are identified by vocation, workplace parties or individually, depending upon the scope of the analysis. At the enterprise level, parties may be appropriate; at the operational level, their occupations may suffice. It depends upon the specificity desired by the analyst. If the analysis is broad and there is reason to believe that distributed cognition is of interest, then the workplace parties or parties to the enterprise are plotted. In contrast, in an event where the behaviours of specific persons are of interest, their occupations or titles may be plotted. As a precaution, there is little to be gained by identifying actors by name, as participants to a scenario are likely to be sensitive to being identified by name in the analysis.    Figure 5.5: An unpopulated example of a decision error analysis radar diagram (Sweeney, 2004) 5.3.3 Radar Diagram Principles Designed and intended for clarity, decision error analysis is simple in its presentation. The determination of the conditions contributory to the event, and of the participants to their existence, is another matter. A number of salient principles will assist in its application. They are: i. For every analysis, there must be one, and only one, well-defined event. ii. Conditions are plotted in reverse order of their chronology. iii. Each condition is uniquely identified and appears only once. iv. Each condition specifies a standard, defence or control that was in place prior to the event. v. Each decision error plotted is a graphical representation of either an error of commission, omission, mistaken belief or system error. vi. Actors are plotted in reverse order of standard of care; or, as a rule, in order of their organizational reporting hierarchy. vii. Additional diagrams can be concatenated to accommodate many actors or many conditions, appropriate to the scale of the diagrams. 5.4 Decision Error Analysis Tutorial An example of a mine event resulting in a fatality will serve to illustrate the functionality and richness of cognitive profiling. Starting with a description of the event scenario, we will proceed sequentially through the analysis, culminating in a cognitive profile of this hypothetical mine organization. It is important to bear in mind that there is a limited quantity of data points (decision errors) presented by any one event; hence the benefit of combining the data from numerous events to determine an accurate profile of the risk culture. 5.4.1 A Hypothetical Event Scenario The setting is an underground metal mine. A two-man bolter/screener crew composed of a seasoned miner and a junior helper was working at the 4200-foot level on the day-shift installing screen. 
They were working off the back of a MacLean scissor-lift truck with a ‗stoper‘ and materials. The mine was seismically active, with a history of bad ground on the 4200 level, to the extent that a recent consultant‘s report suggested that production be reconsidered, if not abandoned. The morning of the event, the cross-shift had noted many falls of ground, something that had occurred in recent weeks but had not been entered in the log books as required by company policy and procedure.  The senior bolter set up his scissor-lift and proceeded to drill holes without scaling or sounding the back. The drift was crossing the transition ground between country rock and ore, and was visibly ‗slabbing‘. Previously, the cross-shift drillers and blasters had elected not to drill the requisite blast pattern.  They took the same ground with more powder and fewer holes, in an effort to increase production. They were on a bonus system per tonne of ground taken. This was not an accepted practice, but not unknown. Minutes prior to the event, the senior bolter drilled a hole for which he could not insert a bolt (evidence of ‗slabbing‘). The standard operating procedure for this contingency is that drilling be stopped and the crew withdraw from the face. It is a ‗one hole–one bolt‘ rule that should not be violated. The senior bolter/screener elected to continue, however, and asked the helper to prepare the scissor-lift truck to advance a few feet. The helper dismounted the scissor-lift truck and the senior bolter/screener collared a second hole when approximately 4.5 tonnes of slab ground from the back fell on him, crushing him instantly. All attempts to resuscitate the miner failed, and he expired before he could be transported to surface.  An investigation revealed that all of the seismic monitors had exceeded trigger levels, but the mine ground control department did not cause a cessation of operations or warn the crews. The heading was known to be actively working. All entries of major ground falls that were made in the ground control log were estimated at 4.9 tonnes so that reports of major ground falls did not have to be made to the regulatory agencies (5 tonne trigger-level). All parties working on the 4200 level had heard, if not witnessed, significant falls of ground during the hours to days preceding the event. 5.4.2 Compromised Standards of Care Sixteen decision errors compromised existing standards and contributed to this event scenario (Table 5-1). References to cross-shifts are synonymous with crews working the night shift. For clarity, the conditions contributing to this event scenario are (from proximal to distal to the event): A: The senior bolter violated the one hole – one bolt standard for bolting and screening.  A: The miner‘s helper deferred to his more senior partner, and did not challenge the decision of his partner, believing that he was not empowered to do so. B: The senior bolter did not scale the back nor sound it for rock mass competency. B: The helper also did not scale or sound, as he believed that if it needed doing, his partner would direct him to. 
E: The ground control technician had not alerted the crews of incipient failure of the ground within the 4200 heading, as indicated by the seismometers. F: The cross-shift shift-boss did not adequately evaluate and control the workplace for hazards and risks. F: The day-shift shift-boss did not adequately evaluate and control workplace hazards and risks. G: The cross-shift mine captain did not inform the day-shift mine captain of the falls of ground during the back shift (night crew) owing to lack of time. H: The cross-shift mine captain had not been entering the observed falls of ground into the ground control logbooks for reasons of competing priorities. H: The day-shift mine captain had not been entering the observed falls of ground into the ground control logbooks for reasons of competing priorities. I: The mine superintendent had elected not to report ground falls that exceeded 5 tonnes to the regulatory authorities. J: The mine superintendent deferred to the mine manager the decision to mine the 4200 level, in spite of having knowledge of ground control problems. J: The mine manager did not take seriously the consultant‘s report advising him of bad ground conditions and incipient ground failure at the 4200 level. 5.4.3 Observations: Decision Error Analysis  The table summarizing the decision error analysis depicts ten actors contributing to sixteen decision errors in which ten standards were violated (Table 5-1). Of the sixteen decision errors, ten were errors of commission, three were errors of omission, and three were errors of mistaken belief.  The hourly workers contributed 50% of the decision errors, the supervisory staff contributed 31%, and management 19% of the decision errors (Table 5-2).  The overall character of this decision error analysis diagram (DEAD) is a ‗spiralling down‘ pattern in which the decedent and his fellow worker made decision errors heavily weighted in errors of commission (Figure 5.7). It is evident from the diagram that the helper took his lead from the more senior miner, and was working under his direction. Also evident is that there was a cluster of errors of commission proximal to the event that the shift-bosses and more senior mine management did not participate in. Both the day shift-boss and the cross shift-boss made the same decision error (an error of commission) respecting condition F, which was that the headings were not evaluated for hazard and risk. This decision error was a violation of procedure, statute and the duty of care to the workers (Table 5-1).  The decision error analysis diagram (DEAD) illustrates that there are increasing instances of errors of commission with proximity to the event (Figure 5.7). This is indicative of mine operators who lack commitment to the standards of care and conduct in their workplace and who, in the absence of direction, will serve to perpetuate a ‗signal‘ (Hollnagel, 2005) that such standards are discretionary. In contrast, the mine management/supervision exhibit errors disposed toward omission and mistaken belief, suggesting a management ethos that is out of touch with the standards of care under their management. The outlier is the Mine Superintendent, who presents two errors of commission. Given the status and authority of mine superintendents within underground mine operations (Figure 5.6), it would be prudent for management to examine closely the influence that this party (Mine Superintendent) has on the errors of commission of subordinate parties in this scenario.  
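The distribution reported in Table 5-2 follows directly from a tally of the records in Table 5-1. The sketch below transcribes those records and reproduces the figures; the party groupings mirror Table 5-2, and the data structure and code are illustrative only.

```python
from collections import Counter

# (workplace party group, error type) pairs transcribed from Table 5-1
errors = [
    ("Hourly Mine Workers", "EOC"), ("Hourly Mine Workers", "EOC"),  # A: miner, helper
    ("Hourly Mine Workers", "EOC"), ("Hourly Mine Workers", "EOC"),  # B: miner, helper
    ("Hourly Mine Workers", "EOC"), ("Hourly Mine Workers", "EMB"),  # C: miner, helper
    ("Hourly Mine Workers", "EOC"),                                  # D: cross-shift blaster
    ("Hourly Mine Workers", "EOO"),                                  # E: ground control technician
    ("Mine Shift Bosses", "EOC"), ("Mine Shift Bosses", "EOC"),      # F: both shift-bosses
    ("Mine Captains", "EMB"),                                        # G: cross-shift mine captain
    ("Mine Captains", "EOO"), ("Mine Captains", "EOO"),              # H: both mine captains
    ("Mine Superintendent", "EOC"), ("Mine Superintendent", "EOC"),  # I and J
    ("Mine Manager", "EMB"),                                         # J
]

total = len(errors)                                 # 16 decision errors in all
by_party = Counter(party for party, _ in errors)    # e.g. Hourly Mine Workers: 8 (50%)
by_type = Counter(etype for _, etype in errors)     # EOC: 10, EMB: 3, EOO: 3

for party, n in by_party.items():
    print(f"{party}: {n}/{total} ({100 * n / total:.1f}%)")
for etype, n in by_type.items():
    print(f"{etype}: {n}/{total} ({100 * n / total:.1f}%)")  # 62.5%, 18.8%, 18.8%, as in Table 5-2
```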
The spiral pattern exhibited in this analysis may be misleading. One would not expect to see a string of unsafe acts or conditions in a mine workplace where there are so many workers present, and for which there is attribution to only a single party. It is likely that a more in-depth investigation at a system level would have revealed that there were other parties cognizant of the lack of compliance with workplace standards within this mine. However, insofar as this investigation was based upon sequence-of-events modelling, the determination of cause and effect limits the scope of investigation. This is the disadvantage of sequence-of-events modelling: there is an implicit assumption that a party to an unsafe act or condition is the cause, and not the symptom of the cause.   Figure 5.6: Hierarchical ‗command and control‘ structure of traditional Canadian hard-rock mines 5.4.4 Discussion This event scenario is consistent with events occurring within operating hard-rock mines in Canada. The organizational structure of hard-rock mines dates back to the early 20th century, and reflects a command and control style of management. The hierarchy is highly structured and stratified (Figure 5.6). As societal values and regulations have changed, the organizational structure of operating mines has not adapted to suit. Consequently, there is a lack of empowerment of subordinate parties as regards challenging decisions and asserting their right to safe work. The regulatory statutes (Mines Act, 2008) have undergone reform explicitly providing all mine workers with these rights, and mine operations have changed management systems to accommodate them, but the organizational culture is handicapped by hierarchical rigidity.  Paradoxically, a command and control ethos can be ineffective in the implementation of mine workplace standards when those in positions of authority do not have an accurate worldview of the risk. The opportunity for correction rests with those parties who are closest to the work – and the risk (Figure 5.6). It is these parties (miners and technicians) who have the most opportunity to detect and mitigate risk; their perception of risk is critical to their motivation. Mine supervisors, by comparison, have a balance of authority and opportunity. When coupled with knowledge of the mine workplace, the shift-bosses and mine captains clearly are in the best position for both the oversight and enforcement of standards in the mine workplace. In this hypothetical event scenario, the mine shift-bosses and mine captains contributed 31% of the decision errors (Table 5-2). The mine shift-bosses exhibited errors of commission, and the mine captains exhibited errors of omission. The mine shift-bosses abdicated their responsibility for assessing the workplace for hazards, a duty of care that is assumed by those in supervision and imposed by statute (Mines Act, 2008). The mine shift-bosses would benefit from education and training in their roles and responsibilities as supervisors. The mine captains made errors of omission that were communication related (Table 5-1). These occurred during shift change and are likely attributable to too many expectations over too little time. Mine management would benefit from streamlining and prioritizing communication and documentation during shift change. 
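For readers who wish to reproduce a diagram of the kind shown in Figure 5.7, the sketch below lays out the tutorial‘s decision errors on a polar plot in the manner described in Section 5.3.1: actors on the rays, conditions on concentric rings ordered from proximal (A) to distal (J). The data are transcribed from Table 5-1; the use of matplotlib, the marker sizes and the colour palette (apart from white for system errors, of which this event has none) are assumptions made for illustration rather than specifications from the dissertation.

```python
import numpy as np
import matplotlib.pyplot as plt

actors = ["Miner", "Helper", "X-shift Blaster", "Ground Control Technician",
          "X-shift Shift Boss", "Day-shift Shift Boss", "X-shift Mine Captain",
          "Day-shift Mine Captain", "Mine Superintendent", "Mine Manager"]
conditions = list("ABCDEFGHIJ")                      # proximal (A) to distal (J) from the event
palette = {"EOC": "tab:red", "EMB": "tab:blue", "EOO": "tab:orange"}  # assumed colours

# (actor, condition, error type) triples transcribed from Table 5-1
plotted = [("Miner", "A", "EOC"), ("Helper", "A", "EOC"),
           ("Miner", "B", "EOC"), ("Helper", "B", "EOC"),
           ("Miner", "C", "EOC"), ("Helper", "C", "EMB"),
           ("X-shift Blaster", "D", "EOC"), ("Ground Control Technician", "E", "EOO"),
           ("X-shift Shift Boss", "F", "EOC"), ("Day-shift Shift Boss", "F", "EOC"),
           ("X-shift Mine Captain", "G", "EMB"), ("X-shift Mine Captain", "H", "EOO"),
           ("Day-shift Mine Captain", "H", "EOO"), ("Mine Superintendent", "I", "EOC"),
           ("Mine Superintendent", "J", "EOC"), ("Mine Manager", "J", "EMB")]

theta = {a: 2 * np.pi * i / len(actors) for i, a in enumerate(actors)}  # one ray per actor
radius = {c: i + 1 for i, c in enumerate(conditions)}                   # the event sits at the centre

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
for actor, condition, etype in plotted:
    ax.scatter(theta[actor], radius[condition], color=palette[etype],
               edgecolors="black", s=90, zorder=3)
ax.set_xticks(list(theta.values()))
ax.set_xticklabels(actors, fontsize=7)
ax.set_yticks(list(radius.values()))
ax.set_yticklabels(conditions)
ax.set_title("Decision error analysis diagram (illustrative sketch)")
plt.show()
```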
5.4.5 Decision Error Analysis Critique The objective of decision error analysis is to examine the nature of contributions by actors in an event scenario, as evidenced by the state of the standards (controls, defences and artefacts) put in place to prevent such an event. It does so in a straightforward, graphical manner that adds value to event analysis by considering the human error element in the context of the conditions prevailing prior to the event, as opposed to after. The technique implements a simple and robust phenotype taxonomy that is as objective as the evidentiary record. Herein lies its weakness, insofar as it requires a concise narrative of the determinants contributory to an event. This is a dichotomy of proximity to the event. If the analyst is too affiliated with the actors involved in the scenario, they will be prone to subjectivity. If they are too removed from the worldview of the actors involved in a scenario, they will lack insight as to the cognitive state of the actors. Clearly, an emerging skill set for this analysis is cognitive science.
Letter | Workplace Party | Condition Contributing to the Event Scenario | Standard Transgressed | Error
A | Miner (Decedent) | Continued drilling after bolt failed to install | Standard Industry Practice | EOC
A | Miner‘s Helper | Continued drilling after bolt failed to install | Standard Industry Practice | EOC
B | Miner (Decedent) | Back not scaled or sounded for ‗slabbing‘ | Regulatory Statute | EOC
B | Miner‘s Helper | Back not scaled or sounded for ‗slabbing‘ | Regulatory Statute | EOC
C | Miner (Decedent) | Continued to work in conditions of imminent danger | Duty of care to helper | EOC
C | Miner‘s Helper | Continued to work in conditions of imminent danger | Duty of care to miner | EMB
D | X-shift Blaster | Drilled a deficient pattern weakening the back | Standard Operating Procedure | EOC
E | Ground Control Technician | Seismicity not communicated to crews | Duty of care to miners | EOO
F | X-shift Shift Boss | Heading not evaluated for hazard and risk | Duty of care to miners | EOC
F | Day-shift Shift Boss | Heading not evaluated for hazard and risk | Duty of care to miners | EOC
G | X-shift Mine Captain | Ground fall conditions during the night not communicated to day shift workers | Duty of care to miners | EMB
H | X-shift Mine Captain | Ground conditions not entered into logbooks | Standard Operating Procedure | EOO
H | Day-shift Mine Captain | Ground conditions not entered into logbooks | Standard Operating Procedure | EOO
I | Mine Superintendent | 5 tonne ground falls not reported | Regulatory Statute | EOC
J | Mine Superintendent | Consultant‘s report disregarded | Due Diligence | EOC
J | Mine Manager | Consultant‘s report not taken seriously | Due Diligence | EMB
Table 5-1: Table specifying the various workplace parties making decision errors contributing to the event scenario
Workplace Party | EOC | % EOC | EMB | % EMB | EOO | % EOO | Totals | Total %
Hourly Mine Workers | 6 | 75% | 1 | 13% | 1 | 13% | 8 | 50%
Mine Shift Bosses | 2 | 100% | 0 | 0% | 0 | 0% | 2 | 13%
Mine Captains | 0 | 0% | 1 | 33% | 2 | 67% | 3 | 18%
Mine Superintendent | 2 | 100% | 0 | 0% | 0 | 0% | 2 | 13%
Mine Manager | 0 | 0% | 1 | 100% | 0 | 0% | 1 | 6%
All Workplace Parties | 10 | 62.5% | 3 | 18.8% | 3 | 18.8% | 16 | 100%
Table 5-2: Table summarizing the distribution of the decision errors by the workplace parties
Figure 5.7: Decision error analysis diagram illustrating the decision errors contributing to an underground mine event resulting in a fatality
5.5 The Cognitive Profiling Methodology The cognitive profiling technology presented in this dissertation is a complete and incremental methodology for profiling decision errors within a mining enterprise (Figure 5.8). Starting with decision error theory, the analyst examines the determinants to an event in accordance with the lost error taxonomy. Having characterized the determinants for which known standards were present and were compromised, resulting in unsafe acts or conditions, it is left to the analyst to plot them on a decision error analysis diagram (DEAD). This graphical analysis will enumerate and tabulate each decision error, unsafe act or condition, presiding standard of care, actors involved, and the phenotype of the decision error. These phenotypes are inputs to cognitive profiling.   Figure 5.8: Diagram illustrating the sequence of analysis in the cognitive profiling methodology  The overarching objective of the cognitive profiling methodology is to apply a model, supportable by theory and practice, that transforms decision error phenotypes (Lost Error Taxonomy) into cognitive error genotypes. These genotypes are characterizations of the ethos of error - the collective perception of risk of parties to the event under scrutiny. The method by which this is achieved is by plotting the weighted percent of the decision error phenotypes (excluding system error) onto a ternary diagram (Figure 5.9). There are four regions within this ternary diagram, three of which correspond to the end-member genotypes of cognitive deficit, cognitive dissent and cognitive deferral. Each of these regions corresponds to a distinct genotype of error and denotes a different perception towards standards of care in the workplace. Accordingly, these cognitive errors are not manifested directly, but must be inferred from the decision errors observed in the evidentiary record.   Figure 5.9: A ternary diagram depicting the three cognitive genotypes (Sweeney, 2004) 5.5.1 Cognitive Deficit Cognitive deficit is the region of the ternary diagram that corresponds to the preponderance of decision errors reporting as errors of mistaken belief. Arbitrarily, this region is bounded at 50 percent of errors of mistaken belief, as a proportion of the total decision error population under scrutiny. The region of cognitive deficit is representative of a cognitive error genotype that is largely knowledge dependent and therefore corresponds with knowledge-based errors of the SRK taxonomy (Rasmussen, 1983). By definition, errors that report to this region are made by parties who do not know that a standard applies to a situation, or do not have adequate knowledge or experience to apply the standard.  Typically, organizations that report to this region have workplace parties that are junior or inexperienced, have inadequate on-the-job training, or are lacking in knowledge of the risks. For front line workers, this presents as poor situational awareness and unfamiliarity with workplace standards and their artefacts. Supervisory personnel present as deficient in risk assessment and knowledge of their roles and responsibilities. Managers present as deficient in leadership, knowledge of their standards of care, or specialized technical knowledge. 
Common biases associated with cognitive deficit are confirmation bias, selectivity and availability heuristics, whereby the actor has an incomplete set of facts on which to formulate a worldview.

5.5.2 Cognitive Dissent

Cognitive dissent is the region of the ternary diagram that corresponds to a preponderance of decision errors reporting as errors of commission. Arbitrarily, this region is bounded at 50 percent of errors of commission, as a proportion of the total decision error population under scrutiny. The region of cognitive dissent is representative of a cognitive error genotype that is largely rule dependent and therefore corresponds with the rule-based errors of the SRK taxonomy (Rasmussen, 1983). By definition, errors that report to this region are made by parties who know that a standard applies to the circumstance, but choose not to comply with the standard for reasons known only to them. To be clear, this is not a case of an actor avoiding 'the application of a bad rule' (Reason, 1990); it is presumed that the standard is both appropriate and necessary to the circumstance.

Typically, organizations that report to this region have workplace parties that are senior and experienced, and are often members of another social unit that may or may not be directly affiliated with the workplace (for example, collective bargaining units). Front line workers present in stages of denial: first by refuting objective reality, followed by anger, and finally by self-justification. Supervisors present behaviours ranging from impugning the motivations of subordinates to profound shame or guilt. Managers present by challenging authority, misdirection or capitulation. Common biases are the halo effect, overconfidence and availability heuristics, whereby the actor forms a worldview in which they alone have situational awareness and special knowledge.

5.5.3 Cognitive Deferral

Cognitive deferral is the region of the ternary diagram that corresponds to a preponderance of decision errors reporting as errors of omission. Arbitrarily, this region is bounded at 50 percent of errors of omission, as a proportion of the total decision error population under scrutiny. The region of cognitive deferral is representative of a cognitive error genotype that is largely skill dependent and therefore corresponds with the skill-based errors of the SRK taxonomy (Rasmussen, 1983). By definition, errors that report to this region are made by parties who know that a standard applies, but defer making a decision appropriate to the circumstances to others, or to some undetermined time in the future, owing to competing or misplaced priorities.

Typically, organizations that report to this region have workplace parties that are exposed to environmental and human factor challenges. They work in environments with noise, temperature or other extremes that are physiologically deleterious, or with cognitive demands beyond what they can cope with, or both. Workers commonly appear overwhelmed, fatigued, frightened or confused. Supervisors present as frustrated, indifferent or agitated. Managers do not frequently make decisions in such an operating environment, but during periods of duress or emergency they will present the same physiological symptoms as workers or supervisors, depending on the extent to which their decisions direct others. Biases common to cognitive deferral are illusory correlation and problems with complexity and causality.
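Because each genotype region is bounded, by the working definitions above, at 50 percent of its corresponding phenotype, assigning a profile to a region reduces to a simple threshold test. The following is a minimal sketch of that test under those stated assumptions; the function name, the tie handling and the fourth 'no central tendency' label are illustrative choices, not prescriptions from the dissertation.

```python
# A sketch of the region test implied by Sections 5.5.1 to 5.5.3: a genotype is
# assigned only when its corresponding phenotype exceeds half of the decision
# errors under scrutiny; otherwise the profile falls in the central region.
def classify_genotype(eoc: int, eoo: int, emb: int) -> str:
    """Map decision-error counts to a cognitive genotype region.

    EOC maps to cognitive dissent, EOO to cognitive deferral and EMB to
    cognitive deficit. System errors carry no cognitive attribution and are
    assumed to have been excluded before calling.
    """
    total = eoc + eoo + emb
    if total == 0:
        raise ValueError("no cognitively attributable decision errors")
    shares = {
        "cognitive dissent (EOC)": eoc / total,
        "cognitive deferral (EOO)": eoo / total,
        "cognitive deficit (EMB)": emb / total,
    }
    region, share = max(shares.items(), key=lambda item: item[1])
    return region if share > 0.5 else "no central tendency"

# Aggregate profile of the hypothetical fatality (Table 5.2): 10 EOC, 3 EOO, 3 EMB.
print(classify_genotype(eoc=10, eoo=3, emb=3))   # cognitive dissent (EOC)
```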
The methodology is entirely dependent on the correct taxonomic classification of the decision errors, which is in turn dependent on a systemic evaluation of the scenario, or enterprise, in terms of the determinants of the event. The cognitive genotypes qualitatively represent the psychological precursors, or cultural determinants, that are indicative of the distributed perception of risk (ethos) of the organization. It will be demonstrated by research conducted at an operating mine that, in combination with a semi-quantitative assessment of enterprise risk, one can derive a rich and detailed profile of the organizational ethos covariant with a variety of domains. These domains include, but are not limited to, worker age, worker experience, occupation, mine department, calendar month, hazard type, and mechanism of injury. Armed with these profiles, mine management can incorporate stronger controls and standards, mitigating the risks and appropriately allocating scarce resources.

5.5.4 Cognitive Profiling Tutorial

Again, the best understanding of cognitive profiling is achieved by means of example. To continue the analysis of the hypothetical mine fatality, the decision error analysis diagram (Figure 5.7) and the supporting summary table (Table 5.2) are useful. In this instance, we are interested in profiling cognitive error as correlated with the domain of workplace party. Thus, we plot the calculated percentage proportions of decision errors on the ternary diagram and observe the distributions, as well as the overall character (centroid) of the event (Figure 5.10). The ternary logic of this analysis reflects the mutually exclusive phenotypes of errors of commission, omission and mistaken belief. System errors, by definition, have no cognitive attribution and would not appear on this diagram (there were none in this example).

Figure 5.10: Ternary diagram illustrating the respective cognitive profiles of the workplace parties

There are no weighting factors. An error of commission has no more or less weight than an error of omission or of mistaken belief. Objectivity is preserved in their equivalence; to do otherwise would require context in terms of causation. There is no presumption or intimation of causation. The analysis remains judgement neutral, consistent with the imperative that the analysis is part of a system that examines another system: the error-producing system comprising the event scenario. As much as decision error analysis is retrospective, cognitive profiling is prospective and attempts to describe what is, not what was. It is in this sense that cognitive profiling is a predictive model.

5.5.5 Observations: Cognitive Profiling

Upon observation, it is self-evident that the workplace parties do not share the same cognitive distribution, and we infer that their perceptions of risk may also vary with respect to the event in question (Figure 5.10). For this example scenario, other parties in the enterprise were not considered (organized labour, the regulatory agency or the industry association); thus the analysis is not at the enterprise level, but is restricted to the mine operation, as is typically the case. The aggregate distribution of decision errors for this fatality was 62.5% errors of commission, 18.8% errors of omission and 18.8% errors of mistaken belief (Table 5.2). Errors of commission across all workplace parties therefore exceeded 50%, and the preponderance of decision errors report to the region of cognitive dissent.
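For readers who wish to reproduce a plot such as Figure 5.10, the sketch below shows one conventional barycentric-to-Cartesian mapping for the (EOC, EMB, EOO) proportions and a count-weighted centroid for the event. The vertex placement and the use of a weighted centroid are assumptions made for illustration; the dissertation presents the diagram graphically and does not prescribe a coordinate convention.

```python
# Places each party's (EOC, EMB, EOO) split on a unit-side ternary diagram and
# locates the event centroid, weighting each party by the number of decision
# errors it contributed. Vertex placement is an assumed convention: EMB at
# (0, 0), EOO at (1, 0) and EOC at the apex.
import math

def ternary_xy(eoc: float, emb: float, eoo: float) -> tuple[float, float]:
    """Convert barycentric proportions to Cartesian ternary coordinates."""
    total = eoc + emb + eoo
    a, c = eoc / total, eoo / total          # EMB fraction is implied (1 - a - c)
    return c + a / 2.0, a * math.sqrt(3) / 2.0

# Per-party (EOC, EMB, EOO) counts from Table 5.2.
parties = {
    "Hourly mine workers": (6, 1, 1),
    "Mine shift bosses":   (2, 0, 0),
    "Mine captains":       (0, 1, 2),
    "Mine superintendent": (2, 0, 0),
    "Mine manager":        (0, 1, 0),
}

points = {name: ternary_xy(*counts) for name, counts in parties.items()}

# Weighting each party by its error count makes the centroid equivalent to
# plotting the aggregate 62.5% / 18.8% / 18.8% split for the event as a whole.
weights = {name: sum(counts) for name, counts in parties.items()}
total = sum(weights.values())
cx = sum(points[n][0] * w for n, w in weights.items()) / total
cy = sum(points[n][1] * w for n, w in weights.items()) / total

print(points)
print("event centroid:", (round(cx, 3), round(cy, 3)))
```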
The mine shift-bosses and the Mine Superintendent were both entirely disposed (100%) to errors of commission and report to the vertex of the cognitive dissent region. By comparison, the Mine Manager was in cognitive deficit and the mine captains reported to the cognitive deficit/cognitive deferral edge. The mineworkers contributed the most decision errors (50%, Table 5.2) and are profiled near the event centroid on the diagram. There is a marked difference in distribution among the workplace parties, which occupy all three cognitive regions of the diagram.

5.5.6 Interpretation

Overall, the workplace parties contributing decision errors to this event are in cognitive dissent (Figure 5.10). However, the lack of close grouping around the centroid is indicative of the absence of distributed cognition, such as groupthink. By definition, the parties making errors of commission did so knowing that a standard existed, but chose to violate those standards for reasons that bear further investigation. The Mine Superintendent and the mine shift-bosses were particularly disposed to dissent. In contrast, the Mine Manager and the mine captains reported to the cognitive deficit and cognitive deferral regions, respectively. It would be interesting to examine what dynamic exists between the mine captains, who present with cognitive deferral, and the Mine Superintendent, to whom they report, who presents with cognitive dissent.

5.5.6.1 Discussion: Confirmation Bias

The Mine Manager either did not understand the risk presented by the mine conditions, or was deficient in his capacity to discharge his duty of care, in this case due diligence on behalf of the mine corporation. Given that the Mine Manager had received an expert opinion advising him of the incipient risk (Section 5.4.2), we surmise that he was not duly diligent for reasons connected to his perception of the risk of ground failure. What biases or heuristics influenced his perceptions? We know that he had adequate information on which to act (the consultant's report). There was no argument with the science, as it was based upon the seismometer readings within the mine. What would compel someone not to act on information for which he paid handsomely?

The answer may be in his selection of the information influencing his decision. The mine had been seismically active for some time, yet had not experienced a catastrophic failure in spite of routine observations of falls of loose ground. The consultant's report was predictive in nature, but was clearly not consonant with the worldview held by the Mine Manager. The Mine Manager presents a familiar pattern: when decision makers are confronted with data that contradict their worldview, they are loath to accept it (Reason, 1990). This is the very definition of confirmation bias. The Mine Manager discarded a worldview that was supported by objective reality and scientific rigour in favour of a worldview (that the mine was not at risk of catastrophic failure) that was increasingly difficult to support. To the extent that the Mine Manager disregarded the consultant's report, he was exhibiting cognitive strain (Reason, 1990), whereby his worldview did not contain a scenario inclusive of lost production, catastrophic failure and tragedy. Subsequent to the event, the Mine Manager would be extremely sensitive and responsive to deteriorating ground conditions. His experience would shift his worldview by altering his perception of risk.
This is the availability heuristic in action. Only when a more compelling experience becomes available to him does the Mine Manager reconstruct his worldview around a new and more sobering perception of risk.

5.5.6.2 Discussion: Problems with Causality

The mine captains had expectations on which they could not deliver given their limited resources, and they deferred to other parties (the Mine Superintendent and the Mine Manager) and to other priorities that they believed to be more pressing. That is, the mine captains deferred acting on their duty of care to other mine personnel, and on standard operating procedures (Section 5.4.2), because they perceived other risks to be more salient than falls of ground. Again, exactly what these other risks were bears closer examination. On first inspection, this may appear to be an example of mistaken belief insofar as the mine captains, in hindsight, were incorrect in judging that other demands during shift change took priority over reporting falls of ground and filing incident reports. However, they could not have known that they were inadvertently contributing to a system of errors that would culminate in tragedy, for the same reason as the Mine Manager and all of the other workplace parties at the mine: it was not within their worldview.

Here we observe a collision of perspectives. First, there is the perception of the mine captains without knowledge of the event. Second, there is that of the analyst with full knowledge of the event. Neither is incorrect; each is grounded in its own perception of risk. To the mine captains, the catastrophic failure was remote and not causally connected to the recording of falls of ground and incident reports. To the analyst looking through the rear-view mirror, the causality is sequential and the danger apparent.

What were the biases and heuristics of the mine captains? We can only surmise their thought processes respecting the missing log entries and reports. They knew that reporting ground falls and incidents would aid somebody, somewhere, in their knowledge of the workplace conditions within the mine. The mine captains already knew what those conditions were; they had firsthand knowledge. The safety department and the mine regulators did not work in the mine, in the here and now. The logs and reports could wait. What purpose could the reports serve if there were a danger; it would be too late. These musings are understandable and reflect the concept of bounded rationality, wherein decision makers restrict their worldview to accommodate what they consider to be in their own best interests (Simon, 1997).

Within the context of their rationality, the mine captains chose to optimize their efforts in response to this restricted, bounded worldview. Their worldview was bounded by what they considered necessary to achieve their goals. It was not inclusive of the complexity of the system, nor was it troubled by any problems of causality, ground-fall related or otherwise. In the words of Reason (1990:91): "Because they are guided primarily by stored recurrences of the past, they will be inclined to disregard any irregularities of the future". Until ground falls and their reporting shape the outcome of their goals, the decisions of the mine captains are unlikely to be inclusive of them.
Parties in cognitive deferral reduce their cognitive demand to accommodate their perception of rationality, particularly during times of diminished capacity (environmental and human factor stressors) or limited resources.

5.5.6.3 Discussion: Availability Heuristics

The hourly workers, the Mine Superintendent and the mine shift-bosses were disposed to cognitive dissent (Figure 5.10). Particularly in an underground mine environment, standards relating to ventilation, geo-mechanics and blasting are critical. Yet within this scenario there is empirical evidence that the parties closest to the hazards were dissident in their amelioration and control. The blasters did not blast to standard. The geo-mechanical technician did not alert the workforce to imminent danger. The shift-bosses did not evaluate the headings for incipient failure. The decedent and his helper did not scale the back or bolt to standard. How could this be, and how could the Mine Superintendent accept this ethos in consideration of the underground risks?

The answer lies in their biases and in their motivations. The bias that they bring to the workplace is not only their own, but a collective one. The theory of availability heuristics tells us that our attributions of causality are limited to what we have experienced and can explain (Section 2.4.2). The hourly workers and their supervisors in this scenario had long since come to terms with ground fall hazards. They worked in these mine conditions on a daily basis. Their experience of surviving these risks apparently outweighed their aversion to them. A self-awareness of the probability of personal injury that would change this risk equation was outside of their worldview.

Miners exhibiting availability heuristics are influenced by experiences and cognition that are immediately available to them, not by some future state. Additionally, selectivity comes into play. They are motivated to mine (bonuses, the halo effect, hubris), and will select the perception of risk that is consonant with their preferred worldview. Self-justification addresses the internal conflict concerning the contravention of workplace standards (Fine, 2006). In this manner, dissenters from workplace standards minimize the cognitive dissonance between their worldview and objective risk by selecting their perceptions of risk, rather than the other way around. It is therefore the role of management, through education, deterrence or any means necessary, to assert the existence of objective risk, thereby altering the worldview of dissenters and motivating a change in behaviours.

5.5.7 Significance

Cognitive profiling, like any analysis of human error, is the start, not the end, of examination (Busse, 2001). This is the value of cognitive profiling: it goes to what the workplace looked like from the perspective of the parties, what perceptions they held, and what motivated or de-motivated them to alter their situational awareness and worldview (Dekker, 2004). The significance of cognitive profiling is in its richness and utility. With successive analyses of events in the workplace over time, the analyst can track changes in the cognitive culture or the effect of remedial action. This is an important dimension in profiling, as cognitive errors are not directly associated with cause and effect, yet are part of the system of event causality that changes with time.
To the extent that social units exert an influence over the distributed cognition of their members, cognitive profiling can differentiate these social units covariant with their perception of risk. Predictably, mine operators junior in experience will not have the same perception as those more senior. In addition, tradesmen may have different risk perceptions than haul truck drivers or office workers. In a similar fashion, cognitive profiling by mechanism of injury, mine department and age may also show variations that can be addressed through tactical changes to the management systems. This research explores these potentialities by profiling a variety of domains covariant with time, by means of a study of a contemporary operating mine (Chapter 7).

5.5.8 Characteristic Cognitive Profiles

In the absence of extensive and rigorous field trials, it is difficult to ascertain the precise shape or limits of the cognitive regions within the ternary diagram. However, based upon this research, we can assume that some symmetry exists and that there is a basic geometry (Figure 5.10) from which we can extrapolate profiles and relationships between workplace parties (Figure 5.11).

Figure 5.11: A series of nine characteristic profiles illustrating cognitive profile prototypes

For simplicity and clarity of these prototypes, we consider four workplace parties, consisting of corporate management, mine management, supervisory staff and workers (Figures 5.11.1 through 5.11.9). In each example, a black dot represents the decision error centroid; a red circle represents corporate management; and black crosses represent the remaining parties. These nine profiles illustrate the significance and utility of cognitive profiling (Figure 5.11). The nine cognitive prototypes are abstractions of empirical events observed in the workplace, some of which are contained in Chapter 6 of this dissertation.

5.5.8.1 Organizational Consonance with No Central Tendency

An organization that does not exhibit any collective predisposition toward cognitive error, as discerned by this model, reports to the centre region of the ternary diagram (Figure 5.11.1). In this region, there is a balance between the three error phenotypes, and therefore no preponderance of error is ascribed to any particular cognitive genotype. Where the workplace parties report in a close cluster around the organizational centroid, presumably whatever management style or ethos exists within the organization is well communicated, and the organization is effective in its messaging regarding risks and their mitigation.

This is an optimal state of distributed cognition, and a prototype to be aspired to within the context of cognitive profiling. The profile of management is concordant with those of the other workplace parties and representative of the desired organizational ethos toward risk and error. The workplace parties are respectful of the standards of care and conduct, and are likely to have a perception of risk that is consonant with the effective risk and with that of each other. Fortunately, most contemporary mines within Canada fall within this cognitive profile, as standards of care are high and the industry is highly regulated.
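The consonant prototype above is characterized by tight clustering of the parties around the organizational centroid, while the dissonant prototypes that follow are loosely clustered. The dissertation treats this distinction visually and does not fix a numeric cut-off, so the sketch below merely illustrates one way the dispersion could be quantified; the mean-distance measure and the 0.15 threshold are placeholder assumptions, not calibrated values.

```python
# A sketch (not from the dissertation) of one possible dispersion measure for a
# cognitive profile: the mean Euclidean distance of party points from the
# organizational centroid on the ternary diagram.
import math

def dispersion(points: dict[str, tuple[float, float]],
               centroid: tuple[float, float]) -> float:
    """Mean distance of the party profiles from the organizational centroid."""
    cx, cy = centroid
    return sum(math.hypot(x - cx, y - cy) for x, y in points.values()) / len(points)

def consonance_label(points, centroid, threshold: float = 0.15) -> str:
    """Label the profile using an assumed, uncalibrated clustering threshold."""
    return ("organizational consonance"
            if dispersion(points, centroid) <= threshold
            else "organizational dissonance")

# For example, with the party points and centroid from the earlier plotting
# sketch: consonance_label(points, (cx, cy)).
```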
5.5.8.2 Organizational Dissonance with No Central Tendency

Collectively, an organization may fall within the region of no central tendency while, upon examination, the discrete parties within the enterprise report to the other regions of the cognitive diagram (Figure 5.11.2). These parties present markedly different cognitive error genotypes and, by inference, perceptions of risk. In the illustrated example, the cognitive profile of corporate management is marginally disposed towards cognitive dissent, influencing at least one other party to be disposed towards cognitive dissent. The remaining two parties, by comparison, are disposed to cognitive deferral and cognitive deficit. This dissonance in cognitive error is indicative of the absence of distributed cognition at the organizational level. Management is ineffective in its messaging regarding standards, and it is likely not communicating an accurate depiction of risk. At least one party is dissenting against the standards of care and conduct that would otherwise mitigate risk.

Any strategy to mediate this dissonance must address human factors and environmental factors, as well as deficiencies in knowledge and experience, as indicated by the cognitive error genotypes. This cognitive profile suggests that management would benefit from achieving alignment concerning risks and their mitigation through a combination of strong messaging, education and training, and progressive discipline in support of normative compliance.

Large mining enterprises in which numerous contractors are present, each with separate and distinct operational cultures, typify this cognitive profile. It is incumbent upon management to be cognizant of disparate cultures toward risk and to instil in the parties a common and accurate perception of risk. Within the mining industry in Canada, underground metal mines under development often exhibit this profile, owing to incremental project management practices and the fractionation of the workforce through contracting and sub-contracting.

5.5.8.3 Organizational Consonance with Nascent Cognitive Dissent

An organization that is effective in messaging but not diligent in communicating the true nature of the risk empowers workplace parties to adopt a discounted perception of risk or, worse, influences these parties towards cognitive error. This may occur in any of the three cognitive genotypes, but is most egregious in the instance of cognitive dissent (Figure 5.11.3). Management may be only marginally disposed to dissent from standards, but subordinate parties amplify this influence, as they are closer to the risks and more likely to participate in the evolution of an event. By this logic, the reverse is also true: management can move towards risk aversion and, through effective messaging, influence the other workplace parties by example.

In this cognitive profile, management is best served by paying attention to standards of care and by developing policies in support of standards to consolidate expectations and intent. Subordinate parties are emboldened by any prevarication toward standards on the part of management and are prone to compromise standards through the mechanism of risk homeostasis (Section 7.1.2.2). Management of consonant organizations in which the perception of risk does not fairly represent the effective risk would be prudent to change their messaging and to put in place near-miss reporting systems.
Additionally, these systems should be inclusive of all parties within the enterprise, to benefit from the increased operational intelligence respecting risk. Most importantly, management benefits from shifting the perception of risk by responding to emerging risks in a manner that is both timely and appropriate to the circumstances. Near-miss reports left unresolved serve to further entrench the ethos of discounting risk.

Within the Canadian mining industry, small quarries and aggregate pits are prone to fall within this cognitive profile. The operations are cyclical, often undercapitalized, and do not have a large pool of human resources upon which to draw. Larger, more traditional mining operations have the benefit of offering higher wages and greater job security; aggregate operations therefore rely on an entry-level and less skilled workforce. These workers are influenced by the perceptions of risk presented to them by their employers and by other parties to the enterprise. The enterprise is naturally organizationally consonant. Quarry and aggregate operations are also very competitive, with smaller operators often equating an entrepreneurial ethos with risk taking.

5.5.8.4 Organizational Consonance with Incipient Cognitive Dissent

Enterprises that are organizationally consonant exhibit cognitive profiles that are tightly grouped, indicating similar cognitive errors and perceptions of risk (Figure 5.11.5). Reporting to the cognitive dissent region of the diagram, the individual cognition of each party reflects the collective cognitive error of the enterprise, and the cognitive error of management in particular. This profile is more likely to evolve in low- to mid-risk enterprises, where the various parties may be influenced by personal gain, collective bargaining or fear of economic and competitive disadvantage.

The dissent is incipient; however, the parties' perception of risk is consonant with that of management, and they will alter their worldview only to the extent that management demonstrates, and has the capacity for, intervention. With decisive management change, persistent leadership and consistent messaging concerning risk, the collective cognition of these parties will shift in step with their worldview. Clarity of roles and responsibilities is essential for effecting change in this cognitive profile, as dissident parties strive to maintain an ethos in which compliance with standards is discretionary.

In Canada, a mining organization consonant with incipient cognitive dissent is not likely to evolve, because of three factors. First, there is an ethos within the industry for normative compliance, with the support of its membership. Second, there is a strong regulatory framework within each province of Canada. Lastly, the typical mining enterprise within Canada is by nature a risk-based venture, both with respect to operational uncertainty and to economics. Consequently, the vast majority of operators mediate these risks by applying established standards of care such as risk controls, management systems and artefacts (practices and procedures) within the workplace.

However, by this same logic, in the absence of any or all of these three factors, a mining operation could be susceptible to devolving to this cognitive profile. Hypothetically, such an operation would work within the exploration phase of the mining industry. In exploration, the workforce is junior in age and seniority and disposed to working in remote locations.
These remote locations change rapidly, and are thus prone to poor governance by both regulatory authorities and corporate oversight. The parties at an exploration prospect tend to be risk takers by nature. Finally, the operational risks at exploration sites are situational, depending upon the mode of transport, the threat from local wildlife, and climatic conditions. Thus, mineral exploration enterprises may be prone to this cognitive profile because of their mobility, the paucity of regulatory oversight and the illusion of low risk.

5.5.8.5 Organizational Dissonance with Incipient Cognitive Dissent

Enterprises profiled as organizationally dissonant with incipient cognitive dissent exhibit loosely clustered parties in terms of cognitive error (Figure 5.11.4). The organization collectively reports to the cognitive dissent region of the cognitive diagram. This cognitive profile is indicative of a disregard for risk and the consequences thereof. Organizations in incipient cognitive dissent are opposed to statutory oversight and therefore are not given to self-imposing standards of care as artefacts in the workplace. In the absence of events that would correct this worldview, the parties expediently replace risk aversion with an aversion to expending resources on anything that does not realize an immediate return, operationally, commercially or socially. Existing standards of care are subject to interpretation and to competing operational demands that inevitably devolve into operational creep (Section 7.1.2.3). A command-and-control decision structure will support, if not promote, this ethos to the extent that the parties are subject to intimidation and coercion.

Additionally, bonuses, risk pay or other benefits intended to reward production inappropriately augment motivation. Designed to motivate efficiency, reward programs have the potential to create a new bounded rationality (Simon, 1997) for the targeted workers by offsetting the perception of risk with an implied benefit, further biasing and contributing to cognitive error. Through availability heuristics, a more palatable worldview of personal gain, reward and success replaces a future state of injury, calamity or disaster. This cognitive prototype is an example of directing minds within the operation (or enterprise) forming the intention of achieving competitive advantage at the expense of risk to other workplace parties. Such conduct and behaviour falls within the purview of statutory decision makers, and indeed the criminal justice system, to sanction and prosecute. Fortuitously, examples of this cognitive profile within the Canadian mining