
UBC Theses and Dissertations



Current challenges in business and IT alignment. Khodabandeh Amiri, Amin. 2017.


Full Text

Current Challenges in Business and IT Alignment

by

Amin Khodabandeh Amiri

BSc, Shahid Beheshti University, 2005
MA, Tarbiat Modares University, 2008

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF

DOCTOR OF PHILOSOPHY

in

THE FACULTY OF GRADUATE AND POSTDOCTORAL STUDIES
(Business Administration)

THE UNIVERSITY OF BRITISH COLUMBIA
(Vancouver)

December 2017

© Amin Khodabandeh Amiri, 2017

Abstract

This dissertation addresses two current challenges in IT-Business alignment. The first study introduces a new type of operational alignment, namely capacity alignment, and examines how it becomes a threat to the success and survival of organizations. Taking a grounded theory (GT) approach, the study began by asking how IT unavailability becomes a strategic risk. While many unavailability incidents have no strategic consequences, anecdotal evidence suggests that some have had negative strategic impacts on organizations. Twenty-six cases of IT unavailability with strategic consequences, along with two cases with non-strategic consequences, were studied. The analysis of the cases revealed that IT unavailability is, in fact, a capacity misalignment rather than the IT outage suggested by the extant literature. Moreover, a system dynamics view of IT unavailability was developed to help clarify how IT capacity misalignment becomes a strategic risk. Unfortunately, classic GT, as well as its interpretivist extensions, is inconsistent with the way positivist researchers view and test theories. Therefore, the dissertation had to customize classic GT to develop an IT unavailability theory compatible with a positivist ontology of theories. The revised methodology also ensures that conclusions drawn from the data have a higher chance of being reproduced by other researchers and with other datasets.
This strengthens the accuracy of GT that Burton-Jones and Lee (2017) called for and ensures that the theory is grounded in the data rather than in the researchers.

The second study addresses improving strategic alignment through the CIO's language. Shared language between the CIO and the top management team, one of the most powerful antecedents of alignment, has been neglected by the extant literature. The purpose of this study was to prescribe guidance for CIOs regarding the language that should be used in conversations with top managers about the strategic role of IT. Leveraging the strategic management literature, the study suggests applying the nomenclature of the resource-based view and the capability-based view instead of technical language. An experiment was conducted to evaluate the effectiveness of these languages in terms of the antecedents of strategic alignment. The study suggests which language should be used under which conditions.

Lay Summary

The current turbulence in information technology (IT) and business environments brings many challenges to organizations. First, the priorities and processes of the IT department quickly become misaligned with business priorities and demands. This misalignment highlights the need for efficient communication between IT and business managers. Yet many such managers feel they speak two different languages. We prescribed three languages to be applied by IT managers to improve the efficiency of their conversations with business managers about IT strategy and resources. Another current challenge is the widespread IT unavailability caused by unpredictable IT outages or excessive business demand for IT. Organizations are becoming more dependent on IT and thus more susceptible to its unavailability. We explained how and under what conditions IT unavailability becomes a major threat to the success or existence of organizations. Some mitigation tactics are suggested as well.
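The core idea in the lay summary, that IT unavailability is a shortfall of IT capacity relative to business demand rather than only a binary outage, can be illustrated with a toy calculation. This sketch is not code from the dissertation; the numbers and the function are hypothetical and purely illustrative:

```python
# Illustrative sketch (hypothetical numbers, not from the dissertation):
# IT unavailability viewed as a capacity deficit between business demand
# for IT and the IT capacity actually supplied, rather than as downtime.

def capacity_deficit(demand, supplied):
    """Per-period deficit: how much demand for IT capacity went unserved."""
    return [max(d - s, 0) for d, s in zip(demand, supplied)]

# Scenario 1: a classic outage -- supplied capacity drops to zero for two periods.
steady_demand = [100, 100, 100, 100, 100]
outage_supply = [100, 0, 0, 100, 100]

# Scenario 2: no outage at all, but a demand surge exceeds normal capacity.
surge_demand = [100, 250, 250, 120, 100]
normal_supply = [150, 150, 150, 150, 150]

print(capacity_deficit(steady_demand, outage_supply))   # [0, 100, 100, 0, 0]
print(capacity_deficit(surge_demand, normal_supply))    # [0, 100, 100, 0, 0]
# Both profiles yield the same deficit distribution over time, even though
# only the first would count as "downtime" under the traditional view.
```

The point of the sketch is that a demand surge and an outage can produce identical capacity deficits, which is why the capacity-misalignment conceptualization subsumes the outage-based one.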
Preface

The research work presented in this dissertation, including framing the research questions, devising the methodology, data collection, analysis, and interpretation, as well as drafting and preparing the dissertation, was completed by the author, Amin Khodabandeh Amiri, in consultation with members of the supervisory committee. The present work has benefited significantly from the feedback and revisions of this committee.

Chapter 2 suggests a new methodology for theory generation. The author conducted the literature review, devised the methodology, and wrote the final manuscript.

The study articulated in Chapter 3 involved collecting public data, conducting and transcribing interviews, analyzing textual data, building a theory, integrating the theory with the extant literature, and writing the final manuscript. The author was responsible for all activities. The UBC Behavioural Research Ethics Board (BREB) approved the empirical parts of this work under certificate number H14-00005.

The study described in Chapter 4 required designing an experiment, validating the measures, pretesting, data collection and analysis, as well as writing the manuscript. The author was responsible for all activities. The UBC BREB approved the empirical parts of this work under certificate number H16-02246.

A version of Chapter 2 is under final preparation for submission to a research journal for peer review. A version of Chapter 3 was sent to a peer-reviewed research journal; a series of comments were received, and the manuscript was revised accordingly. The new version is under final preparation to be sent to the same journal. Additionally, older versions of Chapters 3 and 4 have been published as research-in-progress papers in the proceedings of the international conferences listed below:

• Khodabandeh Amiri, A., Cavusoglu, H., & Benbasat, I. 2014.
“When is IT Unavailability a Strategic Risk?: A Study in the Context of Cloud Computing,” Proceedings of the Thirty-Fifth International Conference on Information Systems (ICIS 2014), Auckland, New Zealand.

• Khodabandeh Amiri, A., Cavusoglu, H., & Benbasat, I. 2015. “Enhancing Strategic IT Alignment through Common Language: Using the Terminology of the Resource-based View or the Capability-based View?,” Proceedings of the Thirty-Sixth International Conference on Information Systems (ICIS 2015), Fort Worth, United States.

Table of Contents

Abstract .......... ii
Lay Summary .......... iv
Preface .......... v
Table of Contents .......... vii
List of Tables .......... xv
List of Figures .......... xviii
List of Abbreviations .......... xx
Acknowledgements .......... xxi
Dedication .......... xxiii
Chapter 1: Introduction .......... 1
  1.1 A Quick Literature Review of (Mis)Alignment .......... 1
    1.1.1 Conceptualization and Types of (Mis)Alignment .......... 1
    1.1.2 The Importance of (Mis)Alignment .......... 4
    1.1.3 Attainment of Alignment .......... 4
    1.1.4 Alignment: A Moving Target .......... 6
  1.2 Current Challenges in IT-Business (Mis)Alignment .......... 6
    1.2.1 Strategic Risk of IT-Business Capacity Misalignment .......... 6
      1.2.1.1 An Ontology-Driven Grounded Theory Methodology to Develop Reproducible Theories .......... 7
    1.2.2 Improving Strategic Alignment via Shared Language .......... 7
  1.3 A Bird's Eye View of Dissertation .......... 8
Chapter 2: An Ontology-Driven Grounded Theory Methodology to Develop Reproducible Theories .......... 9
  2.1 Introduction .......... 9
  2.2 Establishing the Goal of Customized GT .......... 12
    2.2.1 Conflicting Goals of Modernism and Classic GT .......... 13
    2.2.2 Developing an Ontology-Compatible Theory without Torturing Data .......... 15
      2.2.2.1 Isn't It Torturing Data to Develop an Ontology-Driven Theory? .......... 18
    2.2.3 Enhancing Reproducibility (Accuracy) of Conclusions at the Cost of Parsimony .......... 19
      2.2.3.1 Classic GT Deems Accuracy Non-Relevant and Focuses on Creativity .......... 20
      2.2.3.2 Grounded in Researcher, not in Data .......... 21
      2.2.3.3 Accuracy is Relevant .......... 21
      2.2.3.4 Sacrificing Parsimony for Accuracy and Generality .......... 22
  2.3 Revising the Methodology .......... 26
    2.3.1 Construct Validity (Construct Reproducibility) .......... 26
      2.3.1.1 Reproducible Substantive Codes via Abstaining from Analytical Codes .......... 27
      2.3.1.2 Specify Classes or Things in Substantive Codes .......... 27
      2.3.1.3 Reproducible Generalization via a "Data-then-Literature"-Driven Constant Comparison .......... 28
      2.3.1.4 Develop a Hierarchical Diagram in Memos .......... 31
      2.3.1.5 Develop a Thesaurus of Things and Classes in Memo .......... 31
      2.3.1.6 Develop a Cross-Case Analysis Matrix in Memo .......... 32
    2.3.2 Conclusion Validity (Relationship Reproducibility) .......... 33
      2.3.2.1 Have Extreme Cases in the Sample .......... 34
      2.3.2.2 Increase the Reliability of the Sample with Triangulation .......... 35
      2.3.2.3 Assertive Coding .......... 36
    2.3.3 Internal Validity (Causal Relationship Reproducibility) .......... 37
      2.3.3.1 Be Sensitive to Causal and Temporal Terms in Substantive Coding .......... 38
      2.3.3.2 Develop a Process Model of Empirical Indicators for Each Case .......... 39
      2.3.3.3 Use Explanatory Validation to Identify Missing Relationships in Memo Sorting .......... 40
      2.3.3.4 Develop and Report a List of Rival Explanations and Non-Fitting Data .......... 41
    2.3.4 External Validity (Reproducibility with Other Samples) .......... 42
      2.3.4.1 Heterogeneity Sampling Using Secondary Data Sources .......... 43
      2.3.4.2 Apply a Careful Generalization of the Unit of Analysis .......... 44
    2.3.5 Measures that Improve All Validity Types .......... 44
      2.3.5.1 Provide a Thick Description of Theory .......... 44
      2.3.5.2 Clarify the Scope at the Beginning of Study .......... 45
        2.3.5.2.1 The Effect of Clarifying Scope on Selective Coding .......... 46
      2.3.5.3 Clarify Prior Knowledge in the Write-up or Presentation .......... 47
      2.3.5.4 Investigator Triangulation .......... 48
      2.3.5.5 Peer Debriefing .......... 48
      2.3.5.6 Participant Check .......... 49
      2.3.5.7 Role of Numbers in Modernist GT .......... 50
  2.4 Limitations .......... 51
  2.5 Conclusion .......... 56
Chapter 3: How Does IT Unavailability Become a Strategic Risk?: A Grounded Theory Approach .......... 57
  3.1 Introduction .......... 57
  3.2 A Preliminary Literature Review .......... 59
    3.2.1 IT Unavailability .......... 60
      3.2.1.1 Conceptualization of IT Unavailability as Downtime .......... 61
      3.2.1.2 Reducing Downtime .......... 62
      3.2.1.3 Impact of Downtime .......... 63
    3.2.2 Strategic Risk .......... 64
  3.3 Methodology .......... 66
    3.3.1 Positivist Grounded Theory .......... 66
    3.3.2 Research Process .......... 67
  3.4 Findings: Domino View vs. Dynamic View of IT Unavailability .......... 72
    3.4.1 Domino View of IT Unavailability .......... 72
      3.4.1.1 Strengths of the Domino View .......... 79
      3.4.1.2 Weaknesses of the Domino View .......... 80
    3.4.2 Dynamic View of IT Unavailability .......... 80
      3.4.2.1 Causal Loop of IT Unavailability .......... 83
      3.4.2.2 Causal Loops of Operational Inertia .......... 84
      3.4.2.3 Causal Loop of Client Limbo .......... 86
  3.5 Discussion: Theory Validation .......... 86
    3.5.1 Explanatory Validation .......... 87
      3.5.1.1 Explaining Triggers of IT Unavailability .......... 87
      3.5.1.2 Explaining Reactions of Organization .......... 88
    3.5.2 Non-Fitting Data .......... 90
    3.5.3 Rival Explanation .......... 91
  3.6 Contributions to the Research .......... 91
    3.6.1 A More Comprehensive Conceptualization of IT Unavailability .......... 92
    3.6.2 Capacity Misalignment: A New Type of Operational Misalignment .......... 93
    3.6.3 Efficient Estimation of the Strategic Impact of IT Unavailability .......... 94
    3.6.4 Capability-based vs Process-based vs Resource-based Impact Analysis of IT Unavailability .......... 96
  3.7 Implications for Practice .......... 98
    3.7.1 Speak at an Abstract Level about IT Unavailability .......... 98
    3.7.2 Focus on Client Limbo as Much as Client Debarment .......... 98
    3.7.3 Reconsider the Definition of IT Unavailability in Reports and Contracts .......... 99
    3.7.4 Integrate Business Capacity into IT Capacity Monitoring Tools .......... 99
  3.8 Limitations and Future Studies .......... 100
  3.9 Conclusions .......... 102
Chapter 4: Enhancing Strategic IT Alignment through Common Language: How Can CIOs Speak in an Alignment-Friendly Manner? .......... 104
  4.1 Introduction .......... 104
  4.2 Discovering Business Languages to Be Prescribed to CIOs .......... 107
    4.2.1 Resource-based Language (RbL) .......... 109
    4.2.2 Capability-based Language (CbL) .......... 113
    4.2.3 Combining Resource-based and Capability-based Languages (RbL+CbL) .......... 116
  4.3 Enhancing Strategic Alignment via Language .......... 117
    4.3.1 Consensus Formation and Semantic Network Memory .......... 119
    4.3.2 The Semantic Memory of a Typical Top Manager .......... 121
      4.3.2.1 Business Knowledge: What Memories of Top Business Managers Include .......... 122
      4.3.2.2 IT Knowledge: Where Business Managers Differ .......... 123
    4.3.3 Effects of the Prescribed Languages on Perceived Shared Language .......... 124
    4.3.4 Effects of the Prescribed Languages on Understanding the Strategic Role of IT (USRI) .......... 128
      4.3.4.1 USRI when Technical Language Is Given to Top Managers with Low IT Knowledge .......... 131
      4.3.4.2 Comparing USRI when RbL vs. CbL Is Given to Top Managers with Low IT Knowledge .......... 132
      4.3.4.3 USRI when RbL+CbL Is Given to Top Managers with Low IT Knowledge .......... 134
    4.3.5 Effects of the Prescribed Languages on Credibility .......... 135
  4.4 Research Method .......... 137
    4.4.1 Experimental Design .......... 137
    4.4.2 Measurement Development .......... 137
    4.4.3 Scenario .......... 139
    4.4.4 Pilot .......... 142
    4.4.5 Sample .......... 143
    4.4.6 Manipulation Check .......... 144
    4.4.7 Experimental Procedures .......... 145
  4.5 Data Analysis .......... 145
  4.6 Discussion .......... 154
    4.6.1 Discussion of Results .......... 154
    4.6.2 Contributions .......... 158
    4.6.3 Limitations and Future Study .......... 161
  4.7 Conclusion .......... 164
Chapter 5: Conclusion .......... 166
Bibliography .......... 171
Appendices .......... 233
Appendix A Classic Grounded Theory .......... 233
  A.1 Goals of Classic GT .......... 233
  A.2 Activities in Classic GT .......... 234
Appendix B Does GT Have the Means Required for Developing an Ontology-Compatible Theory? .......... 240
Appendix C Further Guidance on Generalization in Grounded Theory .......... 241
Appendix D The Illustration of Cases Using the Dynamic View of IT Unavailability .......... 244
Appendix E Sources for the Cases of IT Unavailability .......... 251
Appendix F A Thick Description of the Domino View of IT Unavailability .......... 253
  F.1 IT Unavailability .......... 253
  F.2 Business Incompetency .......... 257
  F.3 IT Resource's Detachability .......... 266
  F.4 IT Resource - Business Alignment .......... 268
  F.5 Operational Inertia .......... 270
  F.6 Information Capability Indispensability .......... 272
  F.7 Client Debarment .......... 274
  F.8 External Matching Capability .......... 276
  F.9 Client Limbo .......... 277
  F.10 Client Dissatisfaction .......... 278
  F.11 Incompetency Distinctiveness .......... 279
  F.12 Public Infamy .......... 281
  F.13 Public Psychological Proximity of the IT Unavailability .......... 288
  F.14 Financial Inefficiency .......... 292
  F.15 Strategic Risk .......... 293
Appendix G Details of Explanatory Validation of the Theory of IT Unavailability .......... 295
Appendix H Other Contributions of the IT Unavailability Study .......... 301
  H.1 Comparing IT Unavailability's Risk Generation Process with IT's Value Generation Process .......... 301
  H.2 Contribution to Service Quality Literature .......... 305
  H.3 Contribution to Strategic Risk Literature .......... 305
  H.4 Contribution to IT Strategy Literature .......... 306
Appendix I Delving into the Credibility of the Prescribed Languages .......... 308
Appendix J Comparing Modernist GT with the Methodology Suggested by Miles et al. (2013) .......... 312

List of Tables

Table 2.1 An example of a desired list of constructs and their details .......... 17
Table 2.2 An example of a desired list of relationships and their details .......... 17
Table 2.3 Various levels of reproducibility achievement .......... 20
Table 2.4 Similarities and differences between the goals of classic GT and modernist GT .......... 25
Table 2.5 An example for cross-case analysis matrix .......... 32
Table 2.6 Various forms of peer debriefing .......... 49
Table 2.7 New and customized activities in Modernist GT .......... 52
Table 3.1 Cases analyzed in this study .......... 70
Table 3.2 List of constructs that constitute the domino view .......... 74
Table 4.1 RBV's terminology .......... 111
Table 4.2 CBV's terminology .......... 115
Table 4.3 Perceived shared language depends on prescribed languages and IT knowledge .......... 125
Table 4.4 The moderating effect of IT knowledge on the effectiveness of languages on shared language .......... 127
Table 4.5 USRI depends on prescribed languages and IT knowledge .......... 130
Table 4.6 The moderating effect of IT knowledge on the influence of languages on USRI .......... 135
Table 4.7 Prediction of credibility of prescribed languages using social capital theory .......... 136
Table 4.8 List of constructs used in the study .......... 138
Table 4.9 The scenario .......... 140
Table 4.10 Descriptive statistics .......... 146
Table 4.11 Tests of between-subjects effects .......... 148
Table 4.12 Pairwise comparison of dependent variables .......... 149
Table 4.13 The result of testing hypotheses .......... 152
Table 4.14 Comparing the observed shared language, USRI, and credibility of the prescribed languages .......... 158
Table B.1 Comparing entities of classic GT and entities of Weber's ontology .......... 240
Table D.1 Narrating the case of Comair using sentences from dataset .......... 244
Table E.1 Sources for the cases of IT unavailability .......... 251
Table F.1 Measuring IT unavailability via response time .......... 256
Table F.2 Various proxies applied in cases to report the magnitude of business incompetency .......... 261
Table F.3 Examples of information incompetency and client-related business incompetency .......... 264
Table F.4 Examples for the relationship between IT resource-business alignment and IT resource's detachability .......... 269
Table F.5 Examples of operational inertia .......... 271
Table F.6 Examples of vital and non-vital information capabilities for client-related business capabilities .......... 273
Table F.7 Relationship between business incompetency and client debarment .......... 275
Table F.8 Number of negative tweets about United Airlines before and after IT unavailability incident .......... 283
Table F.9 Public infamy scale created based on Alexa's global and national rankings for websites .......... 284
Table F.10 Public infamy category caused by example websites .......... 285
Table F.11 Public infamy scale and the client awareness scale applied to measure public infamy in the cases .......... 286
Table F.12 Trace of public psychological proximity in the data .......... 288
Table F.13 Measuring the public psychological proximity of an IT resource .......... 290
Table G.1 The IT unavailability theory can explain all actions, events, and states found in the data and literature .......... 295
Table I.1 The strength of backing for value developed by each language .......... 309
Table I.2 Prediction of credibility of prescribed languages using argumentation theory .......... 309
Table I.3 Prediction of credibility of prescribed languages using construal fit theory .......... 310
Table I.4 Mega-prediction of credibility of prescribed languages .......... 311
Table J.1 The similarities and differences between Modernist GT, QDA, and classic GT .......... 311

List of Figures

Figure 1.1 Antecedents and consequences of strategic and operational misalignment .......... 5
Figure 1.2 A bird's eye view of the studies in this dissertation .......... 8
Figure 2.1 The advantages of our customized GTM over current methodologies .......... 12
Figure 2.2 An example of two constructs compatible with the positivist ontology .......... 15
Figure 2.3 Be open to all TCs and use incompatible TCs to verify the explanatory power of theory .......... 19
Figure 2.4 GAP Theorem: a research methodology can focus on maximum two of the GAP qualities .......... 23
Figure 2.5 Comparing classic GT and modernist GT in terms of targeting qualities .......... 24
Figure 2.6 An example hierarchical diagram suggested for constant comparison .......... 29
Figure 2.7 Process models increase the visibility of relationships between categories .......... 40
Figure 2.8 Modernist GT methodology .......... 55
Figure 3.1 Two popular conceptualizations of (un)availability .......... 62
Figure 3.2 Domino view of IT unavailability .......... 73
Figure 3.3 Dynamic view of IT unavailability .......... 81
Figure 3.4 Result of the activation of loops 2 and 3 in the case of Comair .......... 85
Figure 3.5 IT unavailability is conceptualized as a distribution of IT capacity deficit over time .......... 92
Figure 4.1 Shared language enhances strategic alignment through CIO-TMT trust and shared understanding .......... 105
Figure 4.2 IT value generation model .......... 118
Figure 4.3 The logical relationship between focal concepts of each language .......... 122
Figure 4.4 IT knowledge and business knowledge in the context of the semantic memory of top managers .......... 123
Figure 4.5 Shared language in the context of the semantic memory of top managers .......... 124
Figure 4.6 Understanding the strategic role of an IT resource in the context of the semantic memory of top managers .......... 129
Figure 4.7 Comparing shared language developed by prescribed languages for different IT knowledge, controlling for the covariate .......... 155
Figure 4.8 Comparing USRI developed by prescribed languages for different IT knowledge, controlling for the covariate .......... 156
Figure 4.9 Comparing credibility developed by prescribed languages for different IT knowledge, controlling for the covariate ..........
157 Figure C.1 An example for identifying a construct (information capability) using well-established categories.................................................................................................................. 243 Figure D.1 The illustration of the case of Comair Airline .......................................................... 246 Figure D.2 The illustration of the case of JP Morgan Chase ...................................................... 247 Figure D.3 The illustration of the case of United Airlines ......................................................... 248 Figure D.4 The illustration of the case of Translink’s SkyTrain ................................................ 249 Figure D.5 The illustration of the case of Obamacare website ................................................... 250 Figure F.1 The IT unavailability construct ................................................................................. 255 Figure F.2 The difference between business capability, business process, and business capacity..................................................................................................................................................... 258 Figure H.1 Contribution of IT unavailability study to the literature of IT value ........................ 
List of Abbreviations

CbL  Capability-based Language
CBV  Capability-based View
CIO  Chief Information Officer
CEO  Chief Executive Officer
ERP  Enterprise Resource Planning
GT  Grounded Theory
GTM  Grounded Theory Methodology
IS  Information System
IT  Information Technology
RbL  Resource-based Language
RBV  Resource-based View
SLA  Service-Level Agreement
SCA  Sustainable Competitive Advantage
SCT  Social Capital Theory
TC  Theoretical Code
TL  Technical Language
TMT  Top Management Team
USRI  Understanding the Strategic Role of IT

Acknowledgements

First, I would like to express my heartfelt gratitude to my supervisors, Professor Izak Benbasat and Professor Hasan Cavusoglu, for their contributions of time, guidance, and funding while allowing me to be independent with my research and follow my passion. I am confident that without those contributions, I would not be where I am today. Had Izak not explained the issue of reproducibility during a lunch at The Point Grill, Chapter 2 would not exist today. Had Hasan not motivated me to work on the strategic risk of IT during a two-minute conversation on a gloomy November day of 2013, Chapter 3 would not exist now. Had I not attended the phenomenal COMM 634, taught by Izak, I would not have known about Weber’s (2012) ontology of theories, Dube & Pare’s (2003) guidance for positivist case study, and details of conducting a lab experiment that played a pivotal role in conducting the studies in this thesis. The papers assigned for COMM 580A, led by Hasan, sparked the idea of using the notion of capability that plays the main role in the theories described in Chapters 3 and 4. The list goes on and on. This kid was able to figure out the novel contributions in this work as he was standing on the shoulders of giants.

I was also very lucky to have Professor Nilesh Saraf as an experienced external committee member. Nilesh’s comments helped me see the big picture when I was immersed in the details of this thesis.
I am thankful that he took time out of his busy schedule to be part of my thesis committee. Also, I would like to acknowledge the role of scholars who kindly reviewed this work and shared their knowledge with me. Professor Olga Volkoff’s comments have greatly enriched the grounded theory part of this thesis. Olga reviewed an older version of Chapter 3 in 2015 and motivated me to write Chapter 2. Professor Andrew Burton-Jones kindly reviewed a recent version of Chapters 2 and 3. Chapter 2 has also benefitted from comments by members of SIGGTM, especially Professors Natalia Levina, Cathy Urquhart, and Walter Fernandez. Dr. Arash Saghafi introduced me to semantic distance theory, which is the pillar of Chapter 4, and spent tens of hours discussing the studies. Dr. Usman Aleem spent significant time hearing my immature theories and challenging my ideas. Anonymous reviewers of my papers, published in the proceedings of ICIS 2014 and ICIS 2015, challenged me greatly and provided rich comments. Professor Ronald Cenfetelli helped me find the proper statistical analysis in Chapter 4. Dr. Camille Grange introduced me to inn.theorizeit.org, which allowed me to quickly find papers related to theoretical constructs in this thesis. My discussions with Moksh Matta helped me reach a clear understanding of the problem raised in Chapter 4. Finally, constructive comments by the esteemed faculty of the MIS division, including Professors Yair Wand, Carson Woo, Ning Nan, and Adam Saunders, as well as Ph.D. students including Daniel, Patt, Lior, Atefeh, and Hongki, were beneficial.

Financial support from the Social Sciences and Humanities Research Council (SSHRC) of Canada, the Sauder School of Business, and the Affiliated Awards of UBC has enabled me to conduct the studies, travel to conferences, and support my doctoral education. Their support is greatly appreciated. I would also like to extend my gratitude to the people who made the demanding Ph.D. life easier.
Elaine Cho and Paula Chang were extremely supportive. Payam and Parisa were always there for us through the good times and bad, cheering me up. The get-togethers with our gang, Emad, Mehdi, Majid, Hasti, Mona, Forough, Reza, Negar, and Atefeh, always gave me the energy to work harder the next day. My friends, Alborz and Arash, were also my counselors who patiently listened to my strong emotions and shared their wisdom with me.

I am profoundly indebted to my family for their immense love and support. Even though we are thousands of miles away, they always supported me whenever I needed them. My utmost gratitude goes to my parents, who always found the best available teachers for me. Special thanks to my parents-in-law for their invaluable support. I also want to thank my dear brothers and brothers-in-law, who had to make up for our absence when our parents and parents-in-law needed us.

Lastly, I want to thank my dearest Hoda and Taha, who endured the insanity of a Ph.D. life with me in a country far away from home. Taha! Thanks for being such an understanding son. Thanks for allowing me to work more on my thesis rather than spending more time with you. I taught you soccer, but I could have taught you game programming. I hope I can make it up to you in the near future. Hoda! You held my hand throughout this tough journey and experienced my ups and downs first hand. Thank you for making life much more pleasant with your beautiful heart, patience, and friendship. You are the source of my solace!

Dedication

To Hoda, Taha, and our parents

Chapter 1: Introduction

This dissertation extends our understanding of misalignment between information technology (IT) and business in organizations. In spite of being studied for over 30 years, IT-Business alignment (hereafter referred to as alignment) is still the top concern of chief information officers (CIOs) (Society for Information Management, 2016).
This dissertation addresses some remaining but critical issues in IT-Business misalignment (hereafter referred to as misalignment). To understand how this dissertation contributes to the alignment literature, we first take a quick review of this literature.

1.1 A Quick Literature Review of (Mis)Alignment

The literature studies alignment both as alignment (e.g., Reich & Benbasat, 2000) and as misalignment (e.g., Strong & Volkoff, 2010). In the following, we also alternate between these two views of alignment where appropriate.

1.1.1 Conceptualization and Types of (Mis)Alignment

According to the Oxford dictionary, misalignment is defined as the incorrect arrangement or position of something in relation to something else. Accordingly, IT-Business misalignment is defined as the incorrect arrangement or position of IT in relation to business, and vice versa. However, researchers have not been consistent in conceptualizing alignment.

There exist six perspectives of alignment from which it can be defined and studied: (a) moderation, (b) mediation, (c) matching, (d) covariation, (e) profile deviation, and (f) gestalts (Venkatraman, 1989; Bergeron et al., 2001, 2004). The moderation perspective conceptualizes alignment as the interaction between two variables (e.g., strategic orientation and strategic IT management) and studies their interaction effect on firm performance. Hence, correlation analysis and regression analysis are the appropriate testing techniques. The mediation perspective considers alignment as an intervening variable between antecedent variables such as strategic orientation and consequent variables such as firm performance. Therefore, path analysis is the proper technique for this conceptualization. The matching perspective defines alignment as a match between the two variables. Deviation score analysis, residual analysis, and variance analysis support testing this definition.
The covariation perspective adopts a conceptualization based on the internal consistency among a set of underlying related variables. The appropriate testing technique is second-order factor analysis, with variables like environmental uncertainty, strategic orientation, business structure, and strategic IT management as first-order factors. The profile deviation perspective assumes that an ideal profile exists; that is, the ideal values of the variables are the values observed among high performers. Deviations from this ideal profile are considered misalignment and are calculated using the Euclidean distance. The gestalts perspective conceptualizes alignment as frequently recurring clusters of attributes. Hence, different patterns of variables (relationships between variables) can be considered patterns of alignment. Cluster analysis is an example of an appropriate statistical analysis for this perspective.

This dissertation takes a matching perspective of misalignment, conceptualizes it as a mismatch between two variables, and evaluates it using measures like the deviation score and the residual (Bergeron et al., 2001, 2004). Therefore, capacity misalignment is, for instance, the difference between two variables: (a) the IT capacity that business demands (e.g., supporting 1000 concurrent users) and (b) the IT capacity provided by the IT department (e.g., supporting 300 concurrent users)1.

Henderson and Venkatraman (1993, 1999) identified two main types of misalignment between IT and business: strategic misalignment and operational misalignment. The former relates to the misfit between IT strategy and business strategy, while the latter pertains to the misfit between the processes, skills, architecture, and infrastructure of IT and business. Reich and Benbasat (1996, 2000) have identified two perspectives of strategic alignment: the intellectual perspective and the social perspective.
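To make the measurement perspectives above concrete, the sketch below computes a matching-style deviation score (using the hypothetical capacity figures mentioned earlier) and a profile-deviation score as the Euclidean distance from an ideal profile. All variable names and values are illustrative assumptions, not data from this dissertation.

```python
import math

# Matching perspective: misalignment as a deviation score, i.e., the gap
# between the IT capacity the business demands and the capacity delivered.
demanded_capacity = 1000   # concurrent users the business requires (hypothetical)
delivered_capacity = 300   # concurrent users IT can support (hypothetical)
deviation_score = demanded_capacity - delivered_capacity
print(deviation_score)  # 700: a positive gap means IT under-delivers

# Profile-deviation perspective: misalignment as the Euclidean distance
# between a firm's profile and an "ideal" profile (values of high performers).
ideal_profile = {"strategic_it_mgmt": 0.9, "strategic_orientation": 0.8, "structure": 0.7}
firm_profile = {"strategic_it_mgmt": 0.5, "strategic_orientation": 0.6, "structure": 0.7}
profile_deviation = math.sqrt(
    sum((ideal_profile[k] - firm_profile[k]) ** 2 for k in ideal_profile)
)
print(round(profile_deviation, 3))  # 0.447
```

Under both measures, larger values indicate greater misalignment; a zero deviation score (or zero distance from the ideal profile) would indicate perfect alignment.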
The intellectual perspective considers alignment as the state in which the contents of IT and business plans and strategies are interconnected. On the other hand, the social perspective considers alignment as the state in which the top management team (TMT) and IT executives understand and are committed to the IT and business strategies, respectively. Strong and Volkoff (2010) identified six types of operational misalignment in their grounded theory study: functionality misalignment, data misalignment, usability misalignment, role misalignment, control misalignment, and cultural misalignment. Functionality misalignment occurs when IT-based processes are executed with low efficiency or effectiveness. Data misalignment occurs when stored or required data have quality issues such as inaccuracy, inconsistency, inaccessibility, lack of timeliness, or inappropriateness for users’ contexts. Usability misalignment occurs when the interactions with the IT system are cumbersome. Role misalignment occurs when the roles in the IT system are inconsistent with the existing skills, responsibility, and authority. Control misalignment occurs when the controls embedded in the IT system provide too much or too little control to assess or monitor performance appropriately. Cultural misalignment occurs when the IT system requires ways of operating that are inconsistent with organizational norms.

Footnote 1: The IT capacity that business demands (e.g., supporting 1000 concurrent users) is rooted in the demanded business capacity (e.g., fulfilling 50 orders per minute). Hence, it can also be viewed as a mismatch between IT capacity and business capacity.

1.1.2 The Importance of (Mis)Alignment

Alignment leads to increased firm performance through more focused and strategic use of IT (Sabherwal & Chan, 2001; Chan et al., 2006; Chan & Reich, 2007). Wu et al.
(2015) provide empirical support that strategic alignment has a positive influence on financial performance, including financial return, operational excellence, and customer perspective. Gerow et al. (2014) conducted a meta-analysis of studies on alignment and found positive relationships between all dimensions of alignment (i.e., social, intellectual, and operational) and all performance types (i.e., customer benefit, productivity, and financial performance). Finally, Wagner et al. (2014) show that strategic alignment influences organizational performance through operational alignment.

1.1.3 Attainment of Alignment

Whereas Reich and Benbasat (2000) believe that both the social and intellectual perspectives are necessary to achieve high levels of strategic alignment, the literature on IT alignment has mainly taken a social perspective2. Using the social perspective of alignment, Karahanna and Preston (2013) conducted a thorough literature review of antecedents of strategic alignment that can be manipulated by chief information officers (CIOs) and top management teams (TMTs). They found empirical support for four of those antecedents: the TMT’s trust in the CIO, the CIO’s trust in the TMT, shared language, and shared understanding of the role of IT. They also found that shared language and shared understanding affect strategic alignment through enhancing trust between the CIO and the TMT. Moreover, Preston and Karahanna (2009) found that shared language and shared knowledge are antecedents of shared understanding. Figure 1.1 summarizes the antecedents and consequences of strategic and operational alignment stated in this literature review.

Figure 1.1 Antecedents and consequences of strategic and operational misalignment

Footnote 2: Examples are Kearns & Sabherwal (2007), Kearns and Lederer (2003), Preston and Karahanna (2009), Karahanna and Preston (2013), Preston (2004), Reich and Benbasat (2000), and Wagner et al. (2014).
1.1.4 Alignment: A Moving Target

The persistent ranking of alignment as CIOs’ top concern denotes that CIOs keep having difficulty improving alignment in a timely manner. This can be attributed to the fact that alignment is a moving target, in that both business and IT are changing continuously (Chan & Reich, 2007; Venkatraman & Henderson, 1993; Tallon, 2007; Baker et al., 2011). New technologies like cloud, the internet of things (IoT), blockchain, and big data push companies to leverage them to gain an edge over their competitors. On the other hand, changes in the business environment, particularly the increasing migration of clients from pen-and-paper processes to IT-enabled services, are outpacing organizations’ capabilities to react to them. It is evident that the current, fast-changing environment has made it difficult for organizations to achieve strategic and operational alignment. This dissertation focuses on two current challenges related to IT-Business misalignment in this environment.

1.2 Current Challenges in IT-Business (Mis)Alignment

1.2.1 Strategic Risk of IT-Business Capacity Misalignment

One of the common IT challenges of today is a functionality misalignment from which operational IT systems suffer. The first study of the thesis addresses how and under what conditions the misalignment between IT capacity and business capacity becomes a strategic risk threatening the survival or success of organizations. The research began as a study on IT unavailability and its strategic risk. Yet, during the grounded theory generation process, it became evident that IT unavailability is in fact a type of functionality misalignment, and unless it is viewed as an alignment issue, the dynamics of organizations during IT unavailability cannot be explained.
Described in Chapter 3, the study reveals how capacity misalignment is triggered, how it turns into a threat to the existence or success of organizations, and how organizations mitigate or exacerbate the consequences.

1.2.1.1 An Ontology-Driven Grounded Theory Methodology to Develop Reproducible Theories

We3 noticed that the sensitivity and confidentiality of the strategic risk subject would result in a biased, low sample size due to uncooperative respondents. Hence, we decided to use the grounded theory (GT) methodology to analyze publicly available cases of IT unavailability to achieve a general theory. Unfortunately, current GT methodologies do not have measures that increase the chance of reproducibility of conclusions from data by different researchers or datasets. Moreover, the generated theories are not compatible with the ontology of objectivist-positivist theories. Hence, we customized GT to satisfy the way positivist researchers view theories and conclude from data. Chapter 2 explains the goals and activities of this customized methodology, whereas Chapter 3 employs this methodology to understand how capacity misalignment becomes a strategic risk.

1.2.2 Improving Strategic Alignment via Shared Language

The second study focuses on shared language as one of the best manageable antecedents of strategic alignment in today’s fast-changing environment. Shared language between the CIO and the top management team is one of the most important yet neglected antecedents of strategic alignment (Jentsch & Beimborn, 2014; Preston & Karahanna, 2009; Karahanna & Preston, 2013).

Footnote 3: Throughout the dissertation, I use the term “we” rather than “I” to acknowledge the intellectual contribution of my supervisors and their ratification of my decisions.
While previous studies suggest that CIOs avoid technical language and use business terminology (Reich & Benbasat, 2000; Preston & Karahanna, 2009; Karahanna & Preston, 2013), they do not provide further details. The purpose of this study, described in Chapter 4, is to prescribe guidance for CIOs regarding which business languages can be used and which language, under what conditions, is more effective in enhancing strategic alignment.

1.3 A Bird's Eye View of the Dissertation

The rest of the dissertation is organized as follows. Chapter 2 introduces the customized GT methodology to provide the methodological support for the first study. Chapter 3 describes the first study, which addresses the strategic consequences of capacity misalignment, commonly known as IT unavailability. Whereas the first study (Chapter 3) targets a current challenge in operational misalignment, the second study (Chapter 4) aims at a challenge related to strategic misalignment. Chapter 4 aims to prescribe languages to CIOs in order to enhance IT strategic alignment. Finally, Chapter 5 synthesizes the main findings of the previous chapters to conclude the dissertation. Figure 1.2 provides a bird’s eye view of the studies in this dissertation.

Figure 1.2 A bird’s eye view of the studies in this dissertation

Chapter 2: An Ontology-Driven Grounded Theory Methodology to Develop Reproducible Theories

2.1 Introduction

This chapter aims to provide the methodological support for the study on the strategic risk of IT unavailability. It also aims to resolve certain concerns that discourage the application of grounded theory by positivist information system (IS) researchers, so that the community of grounded theory users in IS grows at a faster pace. According to Grover and Lyytinen (2015), 95% of positivist research articles in top IS journals involve theories borrowed from reference disciplines with no or minor modification.
In fact, only 17% of positivist IS studies have added new constructs beyond those in the reference theories. Although borrowing and testing reference theories is a necessary scholarly activity, primarily following this path has resulted in IS scholars missing the novelty and foresight required to achieve theoretical breakthroughs. The attempts to provide novel, indigenous theories “are often stymied by present institutional norms of grounding everything in the literature or subjecting all proposed ideas to empirical testing” (Grover and Lyytinen, 2015, p. 287).

As a result, positivist IS researchers lag behind the need to study new information technology (IT) phenomena and sensitive, confidential subjects. Recent IT phenomena like big data and blockchain technologies suffer from a dearth of existing literature to draw on. Therefore, IS theories on new IT phenomena are either outdated or not well contextualized. In addition, sensitive subjects like the strategic risk of IT or highly strategic IT projects bring about a high number of uncooperative respondents, which leads to biased samples or very low sample sizes within the available budget and time. This data collection problem deters positivist researchers from studying sensitive subjects. On the other hand, data on many such new or sensitive subjects are available through both private and public online data sources (Levina & Vaast, 2015). A methodology that can satisfy positivist requirements and systematically map available archival data to novel, indigenous IS theories can be beneficial. Grounded Theory (GT) is considered one such data-driven method that can develop new constructs and indigenous IS theories (Olbrich, 2015; Birks et al., 2013; Levina & Vaast, 2015). Developed around five decades ago by Glaser and Strauss (1967), the method is widely used by interpretivist researchers in the social sciences and is popular among interpretivist IS researchers (Wiesche et al., 2017; Hughes and Jones, 2003).
However, IS researchers, who are mainly positivist, have not exploited GT to its full potential (Wiesche et al., 2017). Unfortunately, the current GT methodologies (GTM) do not satisfy the way positivist researchers view and test theories. Positivist researchers require theories to consist of measurable constructs so as to be verifiable via their favorite methods, e.g., econometrics or statistical analysis. Furthermore, current GT methodologies do not consider whether the conclusions from data are reproducible by other researchers or other datasets (Glaser & Strauss, 1967, p. 103). Since the conclusions in GT research studies are likely to be affected by researchers’ biases and mistakes as well as the quality and scope of data, positivist researchers are hesitant about the reliability and validity of the outcomes of those GT methodologies.

These factors lead positivist IS researchers to shy away from GT. IS researchers may feel that positivist reviewers, who are the majority in top IS journals, are not able to fully appreciate the developed theories and the applied GT methodology. Therefore, using GT increases the risk of rejection of their manuscripts. Furthermore, some positivist IS researchers may believe that the eventually published paper brings a comparatively lower level of recognition by the majority of the IS community, since positivist scholars cannot easily borrow constructs developed in classic and interpretivist GT studies. Therefore, the eventually published paper would have a comparatively lower chance of being referenced by the majority of IS researchers and would not help with author-level metrics like the h-index.
The reason is that the concept-indicator basis of classic GT (Glaser, 1978, 2005) is closer to our positivist view of science than the symbolic interactionist basis of Straussian GT. We try to reconcile the goal of classic GT with objectivism-positivism and modify its activities accordingly. In Appendix A, we explain the Glaserian methodology for readers who are unfamiliar with it and attempt to clarify some of the misconceptions propagated by several well-known researchers.

Our approach brings several benefits to positivist researchers compared to the existing classic and interpretivist GT. First, our approach to GT is tuned to develop a theory compatible with Weber’s (2012) ontology for positivist theories, which is based on a generalized ontology proposed by Bunge (1977, 1979). Consequently, constructs and theories are compatible with how positivists view and test theories. Therefore, not only can the developed theories be verified using empirical tests that positivists believe in, but the developed constructs can also be borrowed and applied in other positivist studies. Second, positivists emphasize reproducibility of the conclusions from data (O'Leary, 2004). Our approach aims to increase the chance of reproducibility of the conclusions from original data by other researchers or datasets, answering the call of Burton-Jones and Lee (2017) for strengthening the accuracy of GT. Third, our approach enables positivist researchers to study new IT phenomena and sensitive, confidential subjects despite the meagre literature and lack of sufficient quantitative data (see Figure 2.1). Finally, developing a theory with our GT approach and testing it empirically will help answer Goes’s (2013) and Venkatesh et al.’s (2013) calls to apply mixed-method approaches. In this light, we first discuss the differences between the goals of GT and our positivist view of science in order to settle those differences and establish the goals of our customized GT.
Then, we modify the activities suggested by classic GT in order to achieve those goals. Finally, we conclude with a summary of findings.

Figure 2.1 The advantages of the proposed GTM over current methodologies

2.2 Establishing the Goal of Customized GT

The epistemological position of classic GT is debated by many researchers (Matavire & Brown, 2013; Glaser, 2005, p. 43). As positivists or interpretivists, they find certain activities in classic GT that support their epistemological and theoretical perspective, certain activities that violate them, and some requirements unsatisfied. Glaser dislikes customizing GT with requirements from any theoretical perspective (e.g., interpretivism, positivism), as he believes such requirements would distract GT from generating theories and downgrade its power (Glaser, 2004). Nonetheless, classic GT has been customized and remodeled by many researchers in the hope of satisfying the requirements of the aforementioned theoretical perspectives and resolving the concerns they had. The most well-known customization was introduced by Strauss and Corbin (1990); it received a vociferous objection from Glaser (1992) and divided GT into the two streams of Straussian and Glaserian. The customization was continued by many others, including Charmaz (2006) and Strauss and Corbin (2008). In this chapter, we intend to customize GT to be compatible with our objectivist-positivist view of theories and theory development. This customization should start with the reconciliation of the theoretical goals of classic GT and positivism. It should be noted that we believe in the main goals of classic GT as stated in Glaser & Strauss (1967) and Glaser (1978). We believe the generated explanation should be conceptual, not descriptive; the explanation should fit the data, not be forced on the data; it should work, i.e., be able to predict and explain; and it should be ready to be tested by quantification techniques.
More importantly, we also believe that GT is a methodology for building a theory, not testing it (see Table 2.4).

2.2.1 Conflicting Goals of Modernism and Classic GT

However, at least two conflicting points exist between the goals of classic GT and our modernist view of science. Modernism is a slippery term that encompasses a broad variety of meanings in philosophy, social science, and architecture (Crotty, 1998). In this chapter, modernism means adopting an objectivist epistemology and a positivist theoretical perspective with the purpose of developing general theories (Hatch & Cunliffe, 2013). Modernism's main rival is interpretivism. While interpretivism believes science (especially in the humanities and arts) thrives on creativity, self-insight, and liberation, modernism believes science prospers from its ability to predict and control outcomes. In this light, for modernists, a theory is a set of concepts whose proposed relationships offer a causal or predictive explanation of a phenomenon of interest, rather than an individual's or a community's understanding of it. Therefore, while interpretivists try to create an "understanding" of the phenomena of interest, modernists try to "explain" their antecedents and consequences (Hatch & Cunliffe, 2013). In this light, modernism has two major issues with classic GT. First, modernism has a predefined goal for research, i.e., researchers should provide causal or predictive explanations. However, classic GT believes setting such goals is forcing explanations on data, i.e., torturing data to state what researchers want (Glaser, 2005, p. 49), and advocates that researchers should be open to any kind of relationship in data, not just causation and prediction. As a result, the output of a GT study may not be in the form of a theory acceptable to modernist researchers4. Second, modernism intends to enable prediction and control of outcomes. Therefore, accuracy and reproducibility of conclusions from data are central to modernism.
However, classic GT does not provide a reproducible theory, i.e., independent researchers come up with different theories even though they use the same datasets (Glaser & Strauss, 1967, p. 103). This irreproducibility is rooted in classic GT's emphasis on researchers' creativity in its activities, particularly in developing theoretical codes (TCs). A TC is a concept that relates two pieces of data, e.g., an abstraction, a causal relationship, or two steps of a process (see Appendix A). The emphasis on creativity puts classic GT on the side of interpretivism and away from modernism. In the following, we explain how we resolved these two conflicting goals.

4 Take a look at Figure 2 in Levina & Ross (2003) and Figure 3 in Levina & Vaast (2008).

2.2.2 Developing an Ontology-Compatible Theory without Torturing Data

Perhaps Weber (2012) best explained what modernism considers a theory. Using Bunge's ontology (1977, 1979), Weber stated that a theory should clarify its constructs, relationships, and boundaries. A construct (or concept) in a theory represents a property-in-general or an attribute-in-general of one or several classes-of-things in a domain. A property can be a mutual property of several classes or an internal property of one class. Attributes are by definition measurable. They are defined as more or less accurate proxies through which we perceive properties in the world (Weber, 2012). For instance, in the context of a research study that focuses on the strategic risk of IT unavailability, the study includes two properties of two classes: unavailability of an IT resource and strategic risk for an organization. Downtime duration can be an attribute (proxy) of unavailability, whereas financial loss can be an attribute of strategic risk.
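As a minimal sketch of this class/property/attribute structure (the data-model names below are our own illustrative assumptions, not part of Weber's ontology), the two example constructs can be encoded as follows:

```python
from dataclasses import dataclass

@dataclass
class Construct:
    """A property-in-general of one or more classes-of-things,
    perceived through measurable attributes (proxies)."""
    name: str         # the property-in-general, e.g., "IT Unavailability"
    classes: list     # the class(es)-of-things the property belongs to
    attributes: list  # measurable proxies for the property

# Hypothetical encoding of the two constructs from the IT unavailability example
it_unavailability = Construct(
    name="IT Unavailability",
    classes=["IT Resource"],
    attributes=["Downtime duration"],
)
strategic_risk = Construct(
    name="Strategic Risk",
    classes=["Organization"],
    attributes=["Financial loss"],
)
```

A relationship in the theory would then be stated between two such constructs (e.g., IT Unavailability positively associated with Strategic Risk), at whatever level of precision the data support.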
Figure 2.2 An example of two constructs compatible with the positivist ontology

A relationship, however, shows that the values of one construct are somehow related to the values of another construct. For instance, IT unavailability and strategic risk are positively associated. A relationship can be specified with varying levels of precision: a) simply related, b) positively or negatively related, c) a functional relationship provided. Moreover, a direction of causality can be provided for a higher level of precision. Classic GT has the means to develop constructs and relationships (see Appendix B). However, it does not have the means to completely clarify boundaries. According to Weber (2012), a boundary is the range of values of attributes that the theory can and cannot explain. An example of a boundary is the case of a theory of IT unavailability that is not able to explain a complete shutdown of a legacy system for a year that did not result in any financial consequences. To completely clarify the boundaries of attributes and relationships, researchers need to test the theory against a sufficient amount of data to extract the outliers and thus the boundaries. This is out of GT's reach. Yet, we believe it is mandatory to reveal any non-fitting data, as some boundaries may be revealed during the analysis of data in GT. In this light, constructs and relationships are mandatory products of modernist GT, and boundaries are optional products. Modernist GT needs to provide a list of constructs. For each construct, the property, the associated class or classes, their attributes, supporting data and literature, and the boundary, if any, should be provided (see Table 2.1 for an example). Note that constructs and attributes are equivalent to the notions of concepts and indicators in classic GT (Glaser, 1978). Modernist GT needs to provide a list of relationships as well.
For each relationship, the sign and direction of the relationship, supporting data and literature, and unearthed boundaries should be supplied (see Table 2.2 for an example).

Table 2.1 An example of a desired list of constructs and their details

Construct (Property) | Class(es) | Definition | Attributes | Supporting data and literature | Boundary (if any)
IT Unavailability | IT Resource | The failure to keep the IT resource running | Downtime, Frequency | "Availability risk assessment is generally influenced by three factors: Frequency, Duration and Impact." (Liu et al., 2010) | A downtime longer than a year
Business Disruption | Organization | The degree to which the organization is incapable of providing services or products | Duration, Span | "A major IT outage in British Airways caused severe disruption to flight operations worldwide" (BBC) | -
… | … | … | … | … | …

Table 2.2 An example of a desired list of relationships and their details

Propositions (relationships) | Sign | Supporting data and literature | Boundary (if any)
IT Unavailability -> Business Disruption | Positive | "A major IT outage in British Airways caused severe disruption to flight operations worldwide" | A five-minute downtime that results in bankruptcy
… | … | … | …

2.2.2.1 Isn't it Torturing Data to Develop an Ontology-Driven Theory?

One may think that developing an ontology-compatible theory is in disagreement with Glaser's recommendation. Classic GT recommends that researchers be open to any TCs, not just the causation and conditions suggested in the 6C family (Glaser, 2005) (see Appendix A). Glaser believes that limiting oneself to predefined theoretical codes, as the Straussian approach (Strauss & Corbin, 1990) suggests, violates the very nature of "being grounded in data" in that this limitation means "torturing data" to speak what the researchers want data to speak (Glaser, 1992). Therefore, in a similar vein, trying to extract only causality and prediction from data is torturing data.
We acknowledge Glaser's point that researchers should be open to any TCs and should not confine themselves to the 6C family or any other family of TCs. However, we believe in modernism's understanding of theory in general and Weber's ontology of theory in particular. To us, TCs that are not related to Weber's ontology can be knowledge contributions (e.g., actions, means-goal, stages), but they cannot be constituents of a theory. In other words, if a study does not provide an output according to Weber's ontology, the output can be a knowledge contribution (called a model by Wiesche et al. (2017)), but we do not consider that knowledge contribution a theory. In summary, we are open to all TCs, but do not consider all TCs to be components of a theory. In this light, we suggest that analysts5 be open to all types of TCs, but separate those TCs related to Weber's ontology (e.g., the 6C family) and use them to build an ontology-compatible theory. Then, use the rest of the identified TCs to verify the explanatory power of the theory and its theoretical completeness. Take the example of the aforementioned strategic IT unavailability study. The main theory is a process through which IT unavailability becomes a strategic risk (see Figure 2.3). However, researchers can come across many pieces of data describing events (e.g., Christmas) that exacerbated the consequences or organizational reactions that alleviated the pain (e.g., using legacy systems, queuing customer requests for service). In Figure 2.3, the theory can explain that "using legacy systems" in a case mitigated strategic risk via decreasing IT unavailability. However, the theory has no constructs that can explain how queueing services and Christmas influence strategic risk.

5 The analyst is the researcher who is in charge of substantive coding. We later describe the preferred roles in a research team using Glaser's (1978) suggestions.
This indicates a gap in the theory, which calls for revisiting the data to find what constructs or relationships are missing.

Figure 2.3 Be open to all TCs and use incompatible TCs to verify the explanatory power of the theory. A question mark means a construct is missing in the theory.

2.2.3 Enhancing Reproducibility (Accuracy) of Conclusions at the Cost of Parsimony

Reproducibility is of high importance to modernism in that it enables the power of prediction and explanation. It is defined as the degree to which the study's conclusions would be supported if the same methodology were applied in a different study in a similar context (O'Leary, 2004). Ideally, modernists expect the generated theory to be reproducible by different researchers and datasets. Practically, reproducibility is a range and can be achieved at different levels (see Table 2.3). At the very least, it is expected that the same research team would end up with the same theory with the same dataset at a different time. This requires the theory generation process to include measures that highlight researchers' oversights and biases as well as missing or inaccurate data. Such measures ensure that the developed theory and its constituents exist as meaningful entities independent of researchers' mood, knowledge, and experience (Hatch & Cunliffe, 2013; Crotty, 1998).

Table 2.3 Various levels of reproducibility achievement

 | Same dataset | Different dataset
Same research team | Level 1 | Level 3
Different research team | Level 2 | Level 4

2.2.3.1 Classic GT Deems Accuracy Non-Relevant and Focuses on Creativity

Classic GT, however, believes that theory development does not require the accuracy or rigorous data collection that positivism does (Glaser, 2004; Glaser & Strauss, 1967). The argument is that "the evidence may not necessarily be accurate beyond a doubt, but the concept is undoubtedly a relevant theoretical abstraction about what is going on in the area studied" (Glaser & Strauss, 1967, p. 23).
The fathers of GT are concerned that focusing on accuracy by having redundant data and voluminous descriptions distracts researchers from the true goal of GT, which is to generate theory. Therefore, classic GT, along with interpretivism, emphasizes researchers' creativity and self-insight. However, what interpretivism calls creativity, modernism calls non-reproducibility.

2.2.3.2 Grounded in Researcher, not in Data

To modernists, the output of researcher-driven creativity is highly susceptible to "biases" and "oversights". Experienced researchers are influenced by their prior knowledge (biases). For instance, in the study of IT unavailability, our prior knowledge of the existing operational definition of unavailability (e.g., IT outage) in the literature led us to sample for such unavailability incidents and neglect checking for the existence of other possible types of IT unavailability that the literature does not address (e.g., IT unavailability rooted in excessive demand). This resulted in developing a theory that could not provide a complete picture of the dynamics of IT unavailability and strategic risk. Therefore, there must be measures in the methodology to highlight such biases. On the other hand, novice researchers, who do not have enough research experience and theoretical sensitivity, are susceptible to oversights in open coding and in abstracting data to categories. Furthermore, the sample that researchers gather can have inaccurate or missing data. Therefore, it is highly possible that different researchers will come up with different sets of constructs and hypotheses. Consequently, such theories are not grounded in data, but grounded in researchers.

2.2.3.3 Accuracy is Relevant

Classic GT disregards accuracy on the basis that even inaccurate data provide a relevant construct. But it neglects the fact that if you put biases and oversights in, you get biases and oversights out.
It also neglects that concepts can be relevant yet have much lower power for explaining what really happened in real life. For instance, a researcher who is an expert in the domain of IT unavailability knows that IT modularity decreases the consequences of IT unavailability. He starts the study on the strategic risk of IT unavailability with an intention toward having IT modularity somewhere in the model. IT modularity is relevant and required to explain the consequences of IT unavailability where the unit of analysis is a project, a user, a business process, or an IT department. At the strategic level, where the theory intends to help board members and chief executives grasp the possible strategic consequences of IT unavailability and the ways they can mitigate the impacts, there exist more expressive concepts than IT modularity (e.g., information capacity, backup information capacity, business capacity, incompetency distinctiveness, or media coverage). Therefore, constructs (e.g., IT modularity) can be relevant, but have little power for explaining a phenomenon (e.g., strategic risk).

2.2.3.4 Sacrificing Parsimony for Accuracy and Generality

Focusing on reproducibility to have accurate conclusions takes its toll: researchers have to sacrifice parsimony. According to Weick (1995, pp. 389-390), any explanation is deficient in one or more of the qualities of generality, accuracy, and parsimony (GAP). Therefore, a trade-off among these qualities is inevitable in an explanation. For instance, a higher amount of generality can be achieved by trading off an explanation's accuracy and/or parsimony. We deduce from this axiom that a research methodology can focus on at most two of the GAP qualities. For instance, econometricians often use many explanatory variables and sacrifice parsimony to enhance accuracy and generality. In lab experiments, researchers often sacrifice generality to achieve higher accuracy and parsimony.
We name this phenomenon the GAP Theorem of research (see Figure 2.4).

Figure 2.4 GAP Theorem: a research methodology can focus on at most two of the GAP qualities

Classic GT is not an exception and conforms to the rule of the GAP theorem. By sacrificing accuracy, it intends to develop a theory that accounts "for as much variation in pattern of behavior with as few concepts as possible, thereby maximizing parsimony and scope [generality]" (Glaser, 1978, p. 93). We, however, intend to increase accuracy, i.e., the reproducibility of conclusions from data. Therefore, we need to choose to sacrifice either generality or parsimony. Classic GT has a slight tendency toward generality. It encourages researchers to develop formal theories with a higher level of generality rather than sticking to the substantive theories they developed for a substantive area (Glaser, 1978). Classic GT's emphasis on having "a conceptual theory abstract of time, place and people" is almost as strong as its emphasis on "being grounded in data" (Glaser, 2004). We also opt for generality for several reasons. First, providing a grand narrative is the way in which modernism enlightens the world (Hatch & Cunliffe, 2013). Second, gathering data from different substantive areas improves reproducibility. While this would have required a huge effort in 1967 when classic GT was born, it has been made much easier today with the introduction of the World Wide Web, social media, and search engines (Levina & Vaast, 2015). Third, in the world of computers and machine learning, more precise prediction for a larger population is far more important than having fewer variables. Therefore, our approach to GT uses a large set of constructs to develop a general and accurate theory. This does not mean we ignore parsimony; wherever possible, we try to enhance it. But generality and accuracy are of paramount importance.
Figure 2.5 illustrates how our modernist approach to GT differs from classic GT in terms of targeted qualities for theory. That is, while classic GT places more importance on generality and parsimony, modernist GT emphasizes generality and accuracy. In this light, we define modernist GT as a grounded theory methodology that targets general, reproducible (accurate) theories.

Figure 2.5 Comparing classic GT and modernist GT in terms of targeted qualities

Table 2.4 Similarities and differences between the goals of classic GT and modernist GT

Classic GT's position | Modernist GT's position | Reason for change | Based on
Build a conceptual theory abstract of time, place and people, not a voluminous description of cases. | - | | Glaser (2004); Glaser & Strauss (1967); Weber (2012); Trochim (2007)
Theory should fit the data, not be forced on data. | - | | Glaser & Strauss (1967); Glaser (1978; 2004; 2005)
Theory should work, i.e., be relevant to the phenomenon of interest and be able to explain it. | - | | Glaser & Strauss (1967); Glaser (1978)
Theory should be clear enough to be operationalized in quantitative studies. | - | | Glaser & Strauss (1967, p. 3); Weber (2012)
Build a theory, but do not verify the theory against data. | - | | Glaser & Strauss (1967); Glaser (1978; 1992; 1998; 2005)
Theories include theoretical codes (TCs), and theoretical codes can be anything. | Theoretical codes can be anything, but modernism does not accept all TCs as constituents of theory. | Modernism's emphasis on providing causal or predictive explanations | Hatch & Cunliffe (2013); Crotty (1998); Weber (2012)
Positivist requirements deviate researchers from generating theories. | Positivist requirements ensure that the theory is grounded in data, not in researchers. | Modernism's emphasis on reproducibility of conclusions from data | O'Leary (2004)
Focus on generality and parsimony (accuracy is not relevant at the conceptual level). | Focus on generality and accuracy (i.e., accurate conclusions from data in theory development). | Modernism's emphasis on reproducibility of conclusions from data | O'Leary (2004); Weick (1995); Hatch & Cunliffe (2013)

2.3 Revising the Methodology

In light of the fact that there is a gap between what is needed and what classic GT furnishes, we suggest measures, activities, and artifacts to be added to classic GT to ensure that an ontology-compatible theory is developed in a reproducible manner. Modernism generally suggests four types of validity to ensure accuracy: construct validity, conclusion validity, internal validity, and external validity6 (Trochim, 2007). In our customized grounded theory approach, we aimed to improve validity along these types as much as possible.

2.3.1 Construct Validity (Construct Reproducibility)

Construct validity refers to the degree to which inferences can be reasonably made from the operationalization to the theoretical construct (Trochim, 2007; Straub et al., 2004). Translated to the grounded theory domain, construct validity refers to the degree to which the corresponding passages of each construct fit the construct to which they have been generalized. In GT, constructs (called categories) are built by constantly comparing empirical indicators and generalizing them. Empirical indicators, themselves, are built from labels assigned to passages. Having an interpretivist view, researchers leave the labeling and generalization up to their own creativity. Consequently, the resultant constructs can vary from researcher to researcher based on their prior knowledge, biases, and mood. As modernists, we want generated constructs to be reproducible by other researchers as much as possible. Therefore, we need to embed reproducibility in GT's generalization process, which occurs in substantive coding (i.e., open coding and selective coding) and in constant comparison activities.

6 Qualitative researchers apply terms such as transferability and dependability in place of the above words (Bitsch, 2005; O'Leary, 2004). We prefer to use the original terms used in modernism to make our points more comprehensible to modernist researchers. We hope to convince them that our customized GT can satisfy their concerns so that the community of GT users in MIS grows at a faster pace.

2.3.1.1 Reproducible Substantive Codes via Abstaining from Analytical Codes

Ensuring reproducibility of the constructs begins with ensuring reproducibility of empirical indicators. As such, in the open coding phase, where possible, words contained in a piece of data, i.e., in vivo codes (labels derived directly from the language of respondents (Glaser, 1978)), should be applied for labeling that piece (Volkoff et al., 2007; Strong & Volkoff, 2010). If that is not possible, or analysts are at the selective coding phase, analysts should use words from similar pieces of data to do so. Analysts should refrain from using analytical codes (i.e., analytic terms generated by researchers and relevant theories (Urquhart, 2012)) as much as possible and use descriptive and in vivo codes instead.

2.3.1.2 Specify Classes or Things in Substantive Codes

Moreover, vague substantive codes can result in erroneous generalization. Take the example of the following passage: "She didn't arrive to visit relatives until 3 a.m. Saturday [2-day delay] on a Delta flight, the fourth she booked." If the analyst labels this passage with "DelayedArrival", it would not be obvious which class-of-thing this event belongs to. A delayed arrival can be assigned to an airline's flights, crews, or passengers. Therefore, it is highly probable that the analyst would later mistakenly aggregate all related codes under the Flight's Delayed Arrival category and lose an important category that should have come out of the passage, i.e., Passenger's Delayed Arrival.
Therefore, it is suggested that the thing or the class-of-thing be added to the substantive codes wherever possible. "She" in the passage refers to a "passenger". Therefore, we need to change the open code to "Passenger-DelayedArrival". Thus, a substantive code, and consequently a category, should have two parts: a) a thing or class-of-thing, and b) a knowledge point, which is defined in this chapter as an event, an action, an attribute, or anything else related to that thing or class-of-thing.

2.3.1.3 Reproducible Generalization via a "Data-then-Literature"-Driven Constant Comparison

The open coding phase produces empirical indicators, i.e., substantive codes. In constant comparison, these empirical indicators are generalized to categories. Practically, categories are the product of generalizing two or more similar empirical indicators, adding a new similar empirical indicator to an existing category, or aggregating two or more existing similar categories. But from a bird's-eye view, categories are developed by generalizing a collection of similar empirical indicators. We need this generalization to be reproducible. We explain our generalization approach using examples from the IT unavailability study. Finding similar empirical indicators hinges on finding similar things or classes and building a class or superclass out of them, respectively. In the study on IT unavailability, Comair and Virgin Blue (and later American Airlines and United) were all referred to as an "airline" in the data. Therefore, these things were generalized to a class named "Airline". After the case of Translink, a rail transportation system, was analyzed, "Airline" and "SkyTrain" were generalized to "Transportation Company" through consulting the North American Industry Classification System (NAICS). We call this a careful generalization of things and classes using a data-then-literature approach. Next, the newly identified classes serve as anchors for developing the new categories.
The subclasses or empirical indicators of the new class are checked for similar knowledge points. For instance, in our study, Airline-DelayedFlights was developed using the following empirical indicators: Comair-DelayedFlights and Virgin-DelayedFlights. The category was then saturated by empirical indicators from two other airline cases. After analyzing Translink, the category name was changed to TransportationCompany-DelayedTrips through consulting NAICS (see Figure 2.6).

Figure 2.6 An example hierarchical diagram suggested for constant comparison

The generalization continued until we reached the class-of-thing "organization" and wanted to give an umbrella name to its knowledge points. In our memo (see Figure 2.6), we had a set of in vivo codes such as degraded service, delayed process, and crippled process. Therefore, we could have leveraged service or business process to develop this overarching category (namely, construct). But we used "Business Incompetency" as the overarching category since, in the thematic literature review, we had found that "business capability" (the opposite concept of Business Incompetency) is a better concept for impact analysis and strategic planning than business process or service (Homann, 2006; Ulrich & Rosen, 2011). This generalization process is an example of what we call a "data-then-literature"-driven generalization, where researchers try their best to control for their biases in coining a category name by first referring to the data and, if not successful, then referring to the related literature. The most reproducible approach is to use the name of one of the sub-categories or empirical indicators to do so. Similar knowledge points are rarely as identical as in the above example. By sampling for similar cases, researchers can increase similar knowledge points to ease the generalization. If that is not possible, analysts can use terms suggested in other parts of the data to name the category.
In the case of the unavailability study, Airline-NumberOfDailyFlights and Airline-NumberOfDailyPassengers were categorized under the name Airline-CarryingCapacity. Carrying capacity was later mentioned in the Translink case. If that is not possible either, analysts can consult the extant literature, standard classification systems, and dictionaries to do so. This is in line with the Thematic Literature Review suggested by Urquhart and Fernandez (2013, p. 230), where "the researcher returns to the extant literature to help develop the emerging concepts." These measures make the "data-then-literature"-driven generalization highly reproducible in comparison with a researcher-driven generalization. Appendix C provides further details on our generalization approach. The previous measures ensure a systematic generalization. To enhance construct reproducibility further, analysts need to reduce oversights. Oversights can materialize in the form of mistakes in generalization or missing open codes. Missing open codes are rooted in analysts' negligence or missing data. We recommend three artifacts, based on Miles et al.'s (2013, p. 108) suggested displays, to be added to the memo bank. These artifacts can help reduce oversights along with finding gaps, identifying new categories, and recognizing relationships.

2.3.1.4 Develop a Hierarchical Diagram in Memos

The first artifact is a hierarchical diagram, as seen in Figure 2.6. We suggest that each newly identified cluster of empirical indicators and the candidate category name(s) be memoed using a hierarchical diagram on paper or in a presentation file (e.g., a Microsoft PowerPoint file). If a newly identified empirical indicator is considered related to the cluster, a leaf node is added to the hierarchy tree, and the category name (i.e., the root node) is changed if required. In addition, analysts may come across in vivo codes that can represent the cluster (i.e., the hierarchy tree).
These in vivo codes are added to the label of the root node as candidates for the category name. Therefore, memo sorting is conducted gradually along with the constant comparison. This artifact helps reduce researchers' oversights during generalization.

2.3.1.5 Develop a Thesaurus of Things and Classes in Memo

Oversights increase with the number of open codes. The second artifact is a thesaurus of things and classes-of-things to reduce redundancy in open coding. A single thing or class can be referred to by different names. For example, "passenger", "traveler", and even "customer" all refer to the same class-of-things in a dataset. Neglecting this can result in a huge number of open codes that can make the generalization very difficult and susceptible to mistakes. Our suggestion is to allocate a special place in the memo bank for things, classes-of-things, and the various names by which they are referred to. If analysts come across a potential new thing or class-of-things (e.g., "traveler"), they can compare the thing/class against the thesaurus of that case in the memo bank. If it is still deemed new, the new thing/class is used in open coding. Otherwise, the new term is added as a synonym to the thesaurus, and the representative for that group of classes/things (e.g., passenger) is used in open coding. In the unavailability study, we had around 406 open codes, a number that could have doubled had there been no thesaurus. The idea of having such a thesaurus is rooted in the Department of Defense Architecture Framework (DoDAF)'s artifact called AV-2.

2.3.1.6 Develop a Cross-Case Analysis Matrix in Memo

The third artifact is a cross-case analysis matrix that highlights missing constructs and empirical indicators (i.e., open codes). This matrix can be created using spreadsheet software or an application such as Microsoft Excel or Google Docs (see Table 2.5).
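As an illustrative sketch (all names and synonym sets below are hypothetical, not drawn from the actual memo bank), the thesaurus lookup described in Section 2.3.1.5 can be thought of as a simple synonym map that normalizes a thing/class before it is used in an open code:

```python
# Hypothetical sketch of the things/classes thesaurus (Section 2.3.1.5).
# Each canonical thing/class maps to the synonyms it is referred to by in the data.
thesaurus = {
    "passenger": {"traveler", "customer"},
    "airline": {"carrier"},
}

def normalize(term: str) -> str:
    """Return the canonical name for a thing/class; register it as new otherwise."""
    term = term.lower()
    for canonical, synonyms in thesaurus.items():
        if term == canonical or term in synonyms:
            return canonical
    thesaurus[term] = set()  # deemed new: add to the thesaurus
    return term

# Open codes built on normalized classes avoid redundant variants
open_code = f"{normalize('Traveler').capitalize()}-DelayedArrival"
# open_code is "Passenger-DelayedArrival", not "Traveler-DelayedArrival"
```

The point of the sketch is only the lookup discipline: every candidate thing/class is checked against the thesaurus first, so "traveler" and "customer" never spawn open codes parallel to "passenger".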
Before starting to analyze a new case, analysts assign a row of the matrix to that case (e.g., Comair's crew scheduling system). For each newly identified variable (attribute-in-general), the analysts add a corresponding column to the matrix (e.g., IT Outage Duration, Duration to Resume Normal Operation, Passenger's Delay Duration, and Media Coverage). Next, the intersecting cell is filled with the corresponding value or range stated in the data (e.g., "1 day", "5 days", "up to 2 days", and "Global", respectively).

Table 2.5 An example of a cross-case analysis matrix

Case | IT outage duration | Duration to resume normal operation | Impacted clients | Media coverage
Comair | 1 day | 4 days | 150,000 | CNN
Virgin Blue | 21 hours | 2 days | 50,000 | Bloomberg
United Airlines | 1.5 hours | 2 days | 400,000 | CNN
American Airlines | 1.5 hours | 1 day | ~7,500 | CNN

Empty cells for new cases increase sensitivity to those variables in substantive coding. Empty cells for previously analyzed cases prompt analysts to revisit those cases and fill them, or to gather more data on the case to be able to fill the new columns. They may not be able to fill all the cells, but we believe they should try their best. But why do analysts need to struggle to fill an empty cell? After all, according to classic GT, this deviates analysts from generating theory to description (Glaser, 2004; Glaser & Strauss, 1967). Filled cells enable cross-case comparison of values that can reveal missing constructs. In the study of IT unavailability, the matrix revealed that American Airlines' number of impacted passengers was far below the other airlines'. However, American Airlines' IT outage received almost the same level of media coverage. This prompted us to revisit the cases to check what construct we had missed.
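The bookkeeping behind the cross-case matrix, including the empty-cell check that prompts a revisit, can be sketched as follows (a minimal illustration with a subset of Table 2.5's values; the variable set and cell values are simplified assumptions):

```python
# Hypothetical sketch of the cross-case analysis matrix (Section 2.3.1.6).
# Rows are cases; columns are variables; a missing key marks an empty cell.
cases = ["Comair", "Virgin Blue", "United Airlines", "American Airlines"]
variables = ["IT outage duration", "Impacted clients", "Media coverage"]

matrix = {
    "Comair":            {"IT outage duration": "1 day",     "Impacted clients": 150_000, "Media coverage": "CNN"},
    "Virgin Blue":       {"IT outage duration": "21 hours",  "Impacted clients": 50_000,  "Media coverage": "Bloomberg"},
    "United Airlines":   {"IT outage duration": "1.5 hours", "Impacted clients": 400_000, "Media coverage": "CNN"},
    "American Airlines": {"IT outage duration": "1.5 hours", "Impacted clients": 7_500},  # media coverage not yet coded
}

def empty_cells(matrix, cases, variables):
    """List (case, variable) pairs that need revisiting or more data."""
    return [(c, v) for c in cases for v in variables
            if matrix.get(c, {}).get(v) is None]

# The empty cell prompts a revisit of the American Airlines case
print(empty_cells(matrix, cases, variables))
# -> [('American Airlines', 'Media coverage')]
```

Each new variable adds a column (i.e., a key expected in every row), so the same check also surfaces the cells of previously analyzed cases that the new column leaves empty.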
During our review and comparison of the cases, we noticed that the titles of newspaper articles on the American Airlines case included the specific name of the technology (i.e., iPad), while others only mentioned a “computer problem”. This difference in the level of abstraction reminded us of Construal Level Theory (Trope & Liberman, 2010) and revealed a new construct that we named “IT Resource’s Public Psychological Proximity”. If we had not had the cross-case analysis matrix, we could not have identified this construct.

The aforementioned precautions improve construct reproducibility (validity); that is, constructs are accurately created and are reproducible by other researchers. The next step is to ensure the relationships among those constructs are accurately identified.

2.3.2 Conclusion Validity (Relationship Reproducibility)

Whenever researchers investigate a relationship between two constructs, there are two possible conclusions: either there is a relationship in the data or there is none. Either way, researchers could be wrong in their conclusion. This validity is called conclusion validity and is the most important of the four validity types. Conclusion validity is defined as the degree to which the conclusions we reach about relationships in our data are credible (Trochim, 2007). It should be noted that both conclusion validity and internal validity pertain to the relationships between constructs. However, a relationship between two constructs can be conclusively valid but internally invalid; that is, the relationship exists, but it is not causal. To increase the reproducibility of relationships, researchers should enhance conclusion validity. There are three ways to improve conclusion validity (Trochim, 2007): having a larger sample size, increasing the risk of making a Type I error (e.g., changing alpha from 0.05 to 0.10), and increasing the effect size. Considering our qualitative approach, the first two options are not feasible.
However, we can work on increasing the effect size by reducing noise (anything that obscures relationships) or increasing signal (the visibility of relationships).

2.3.2.1 Have Extreme Cases in the Sample

Signal can be enhanced by increasing the saliency of the relationship. In an experimental context, researchers can increase the dosage of the program (i.e., the independent variable) in the treatment group (Trochim, 2007). In a field study, researchers can try to have samples with extreme values of both the dependent and independent variables. In the study on the strategic risk of IT unavailability, for instance, we needed to have cases with high strategic consequences as well as cases with non-strategic or very low consequences. We also needed to have cases with high IT unavailability and low IT unavailability.

It should be noted that seeking a negative case (e.g., a case with non-strategic consequences) is not a procedure in classic GT for two reasons. First, classic GT assumes analyzing negative cases is a technique to verify theories. As mentioned before, classic GT believes “a focus on testing can easily block the generation of a more rounded and more dense theory” (Glaser & Strauss, 1967, p. 27). On the contrary, we do not consider “verifying theories” the only purpose of analyzing negative cases. It is our opinion that analyzing negative cases helps identify critical constructs and thus achieve theoretical completeness. In other words, extreme sampling is a way of conducting theoretical sampling that tries to fill the theoretical gaps. For instance, in the study of IT unavailability, the negative cases revealed the construct Information Capability Indispensability. If we had studied only the positive cases with strategic consequences, we could not have found such a construct.

Second, classic GT believes “GT researcher does not know in advance what will be found.
Incidents sampled may be similar or different, positive or negative.… [Therefore, looking for negative cases] is more likely to be preconceived forcing” (Glaser, 2004). In our opinion, it is undoubtedly forcing, but forcing on gathering data, not on interpreting the data. What is not acceptable is forcing a theory on data, which results in a theory not grounded in data. If researchers analyze the forcefully-gathered data in an open manner, the theory will still be grounded in data. Therefore, we encourage researchers to analyze negative cases as well as positive cases.

2.3.2.2 Increase the Reliability of the Sample with Triangulation

Noise, on the other hand, can be reduced by increasing reliability. A method of observation or measure is considered reliable if it gives the same result over and over again (Trochim, 2007). Therefore, to enhance reliability, we need data triangulation and method triangulation. Data triangulation means using a variety of data sources instead of relying on a single source (Bitsch, 2005). In the IT unavailability study, for instance, we collected an average of 6.5 stories from different sources (i.e., websites and correspondents) for each case of IT unavailability. Method triangulation refers to using a combination of methods (e.g., interview, observation, and archival data) to reduce possible distortions or misrepresentations (Bitsch, 2005).

Researchers can easily fall into the trap of premature closure about a case, which calls for additional data collection and reanalysis (Miles et al., 2013). Data and method triangulation “offers more complete understanding by bringing together the information gained from different perspectives and prompting interrogation of emergent contradictions” (Oliver, 2011). Therefore, triangulation helps build a complete, accurate list of empirical indicators in a case by reducing the influence of researchers’ oversights, narrators’ biases, and missing data.
This enhances the reproducibility of relationships as well as constructs. Moreover, hearing the same story from several sources helps theoretical coding. The more frequently two concepts occupy working memory simultaneously, the stronger the association between them becomes in memory (Raaijmakers & Shiffrin, 1981). Reviewing the empirical indicators of a case several times leads to their repeated co-presence in researchers’ memories, which strengthens the relationships between them in their memory. This promotes theoretical coding, i.e., identifying the relationship between two empirical indicators or concepts. Hence, analyzing redundant data (e.g., the same stories by different observers) does not impede theory generation, but helps it.

2.3.2.3 Assertive Coding

We acknowledge the importance of Glaser’s concern regarding distraction from theory generation. To reduce the distraction caused by redundant data, we suggest delimiting open coding to the first story of a case. In the remaining stories by other observers, researchers do not code the redundant empirical indicators. Rather, they limit open coding to either new empirical indicators (e.g., incidents or actions) or discrepancies between stories in explaining an empirical indicator. We call this coding assertive coding, since its purpose is to provide a confident narration of what happened in a case. Therefore, researchers can assertively answer the research question in the context of a case (e.g., how IT unavailability resulted in strategic consequences in the study of IT unavailability). Note that assertive coding is different from selective coding. Selective coding is conducted on new cases to have saturated constructs, while assertive coding is conducted on new stories (redundant data) of the same case to have saturated empirical indicators. Selective coding starts when constructs need to be saturated with examples from different cases.
Thus, analysts assign the name of the construct to the passages in the stories rather than openly coding them. However, assertive coding is about limiting coding to new incidents or disparities in the hope of being confident in explaining what happened in a case (e.g., the process through which an IT outage became a strategic risk). It is to make sure that researchers have an accurate, complete narration of a case.

2.3.3 Internal Validity (Causal Relationship Reproducibility)

Internal validity pertains to ensuring the existence of a causal relationship. Three criteria should be met to ensure internal validity: temporal precedence of the cause, covariation of the cause and effect, and non-existence of plausible alternative explanations (Trochim, 2007). Lab experiments, in which subjects are randomly assigned to control and treatment groups, are the methodology most often applied to ensure internal validity. In field studies as well as experiments, control variables are applied in statistical analysis to rule out the effect of other explanatory variables (Myers et al., 2011). However, in a qualitative GT, which is conducted in a real setting, researchers cannot apply those methods easily. Therefore, classic GT is inherently vulnerable to internal validity threats. Nevertheless, there are methods to reduce their impact.

To enhance internal validity in GT, causal relationships should be reproducible by independent analysts. In other words, if independent researchers analyzed the same dataset, they would arrive at the same causal relationships as much as possible. Causal relationships pertain to theoretical coding in GT in general and the 6C family in particular. As noted before, potential TCs emerge during substantive coding and constant comparison. Then, during memo sorting, TCs are consolidated. To enhance internal validity, analysts need to ensure the development of accurate, reproducible TCs during these three activities.
2.3.3.1 Be Sensitive to Causal and Temporal Terms in Substantive Coding

During the open coding and selective coding of verbal data (e.g., field notes, transcripts of interviews, and archives), analysts can come across many TCs. Two-clause sentences containing causal and temporal conjunctions (e.g., “after,” “before,” “when,” “since,” and “because”) as well as one-clause sentences containing causal-indicative verbs (e.g., “forced”, “prevented … from”, “consequences like”, “this resulted in”, “push into”, “increased”, “collapsed”, “disrupted”, and “postponed”) are rich sources of TCs, especially the 6C family.

Therefore, being sensitive to terms indicative of causal or temporal relationships is of great importance. However, the mere co-occurrence of empirical indicators in a sentence is neither necessary nor sufficient for an emergent TC to be part of the theory. The co-occurrence is just a hint to be memoed and verified later in the memo sorting phase. But such hints definitely increase the reproducibility of relationships by other researchers. In the IT unavailability study, for instance, we came across the following passage in the case of Comair: “All the rescheduling necessitated by the bad weather forced the system to crash. As a result, Comair had to cancel all 1,100 of its flights on Christmas Day.” The causal terms are underlined to show how they connect empirical indicators. The passage hints at the following TCs: “Weather-Bad -> Comair-ReschedulingAmount -> System-Crashed -> Flights-Cancelled”. However, these TCs should not be taken as relationships between constructs, since there can be missing empirical indicators between them.
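When archives are large, this sensitivity can be partially mechanized as a first pass. The following is a minimal, illustrative Python sketch, not part of our procedure: the term lists are a subset of the examples above rather than an exhaustive inventory, and flagged sentences are only hints to memo, never verified TCs.

```python
import re

# Assumed, partial term lists drawn from the examples in the text.
CAUSAL_CONJUNCTIONS = ["after", "before", "when", "since", "because"]
CAUSAL_VERBS = ["forced", "prevented", "resulted in", "pushed into",
                "increased", "collapsed", "disrupted", "postponed"]

def flag_tc_candidates(text):
    """Return sentences containing causal/temporal terms for later memoing."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    terms = CAUSAL_CONJUNCTIONS + CAUSAL_VERBS
    return [s for s in sentences
            if any(re.search(r"\b" + re.escape(t) + r"\b", s.lower())
                   for t in terms)]

passage = ("All the rescheduling necessitated by the bad weather forced the "
           "system to crash. As a result, Comair had to cancel all 1,100 of "
           "its flights on Christmas Day.")
print(flag_tc_candidates(passage))
```

Note that such a scanner would miss the second sentence of the Comair passage (“As a result, …”), which underlines why automated flagging can only supplement, not replace, a sensitive human reading.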
For instance, as a result of assertive coding, we became aware of more empirical indicators, and, thus, the sequence of empirical indicators was revised as follows: Weather-Bad -> JetTires-Frozen -> Flights-Cancelled -> Comair-ReschedulingAmount -> System-Crash -> Schedule-NotAvailable -> Airline-CannotAssembleCrewsOnFlight -> Flights-Cancelled. Therefore, potential TCs are in need of further review.

2.3.3.2 Develop a Process Model of Empirical Indicators for Each Case

To increase the reproducibility of relationship identification, we need to reduce oversights in finding relationships. To make such relationships easier to fish out, analysts need to increase their visibility. In this light, we suggest analysts develop a process model for each case and compare the process models to extract relationships.

Process models are visual presentations of causal relationships or temporal precedence among empirical indicators. They can be developed along with substantive coding or at the end of assertive coding. The model can be developed on paper or using a software program like Microsoft PowerPoint, Visio, or Google Docs. A process model can be incomplete, since analysts may not be able to find causal/temporal relationships for some empirical indicators. When created, process models are added to the memo bank.

Visual presentation per se increases the visibility of relationships. To increase visibility even more, analysts can use the same color to code the same type of knowledge point (e.g., incident, action, state, etc.). Moreover, when categories are revealed, analysts can change the corresponding empirical indicators to the same color. This helps highlight the relationships between categories (see Figure 2.7). Not only do process models help with identifying relationships between categories, they may even prompt analysts to revisit the data to check whether they missed any relationships or constructs.
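A process model is, at bottom, a directed graph of empirical indicators. As a minimal sketch (the helper names are ours, purely illustrative), the revised Comair chain above can be stored as edge pairs and walked programmatically, which makes gaps and loops easy to spot before drawing the model:

```python
# Each edge means "causally or temporally precedes".
edges = [
    ("Weather-Bad", "JetTires-Frozen"),
    ("JetTires-Frozen", "Flights-Cancelled"),
    ("Flights-Cancelled", "Comair-ReschedulingAmount"),
    ("Comair-ReschedulingAmount", "System-Crash"),
    ("System-Crash", "Schedule-NotAvailable"),
    ("Schedule-NotAvailable", "Airline-CannotAssembleCrewsOnFlight"),
    ("Airline-CannotAssembleCrewsOnFlight", "Flights-Cancelled"),
]

def successors(edges, node):
    """Empirical indicators that directly follow a given indicator."""
    return [b for a, b in edges if a == node]

def chain_from(edges, start, seen=None):
    """Walk the model from a starting indicator, stopping on revisits
    (the Comair chain loops back to Flights-Cancelled)."""
    seen = [] if seen is None else seen
    seen.append(start)
    for nxt in successors(edges, start):
        if nxt not in seen:
            chain_from(edges, nxt, seen)
    return seen

print(chain_from(edges, "Weather-Bad"))
```

The same edge list can later be fed to a drawing tool, so the textual memo and the visual process model stay in sync.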
Figure 2.7 Process models increase the visibility of relationships between categories

2.3.3.3 Use Explanatory Validation to Identify Missing Relationships in Memo Sorting

Not all relationships are directly mentioned in data or can be extracted from process models. To increase reproducibility further, analysts should be able to identify missing relationships as well. As mentioned in the section “Establishing the Goal of Modernist GT”, our approach leverages TCs that are unrelated to Weber’s ontology to identify missing relationships. In particular, we have found TCs such as reaction and means-goal TCs (see Appendix A) very useful. In this approach, in memo sorting, analysts gather all the TCs and try to explain them using the theory. If extant relationships cannot explain these TCs, analysts try to find new ones. They may even have to find new constructs to build relationships that can explain such TCs. This approach is called “explanatory validation” and is often applied in hermeneutics to validate subjective theories (Holtzer, 2004; Idalovichi, 2014). Figure 2.3 provides an example of this explanatory validation.

2.3.3.4 Develop and Report a List of Rival Explanations and Non-Fitting Data

The importance of clarifying the boundaries of theories is emphasized by Weber (2012). Therefore, it is necessary to report non-fitting data and rival explanations along with a realistic analysis of why they are excluded from the theory (Creswell, 2012; Bitsch, 2005; Dube & Pare, 2003; Yin, 1994). Rival explanations can be found in opinion-based claims by story-tellers, i.e., claims without support in the data. In the study of IT unavailability, for instance, a computer security expert claimed that IT unavailability could raise perceived security risk, which results in customer loss and thus a strategic risk. However, we were not able to find any piece of data that supports this claim. Therefore, we did not add a relationship supporting the claim to our theory.
Consequently, if such a case exists in the real world, our theory cannot explain it. Note that ruling out alternative explanations necessitates statistically testing them against a dataset with a sufficient sample size. In qualitative GT, however, researchers are not able to do so. Nevertheless, researchers can add rival explanations to the list of rival explanations in the memo bank. Then, these explanations can be checked by comparing them against current data and/or by gathering and analyzing additional data (Johnson & Christensen, 2008). If observations or facts support them, they are removed from the list and added to the theory. Otherwise, they are kept in the list and reported separately in the paper as rival explanations. As a result, readers who want to apply the theory will be vigilant about their probable existence. Internal validity would then be checked by readers in the context in which they want to use the generated theory.

Not all pieces of data end up fitting the theory. Non-fitting data is the outcome of negative sampling (Miles et al., 2013), which is suggested to be performed close to the end of the study to increase the generalizability of the theory, or the unintentional by-product of theoretical sampling. These negative cases indicate the boundaries of the theory. In the case of IT unavailability, for instance, we noticed that in one case, business disruption occurred before IT unavailability. If a sufficient number of similar cases exists (and enough funds and time are provided), the theory can be updated to reflect these cases. Otherwise, the non-fitting observations should be reported to the readers.

2.3.4 External Validity (Reproducibility with Other Samples)

External validity refers to the degree to which the conclusions of a study hold for (i) other settings (e.g., other IT unavailability incidents), (ii) other examples of the unit of analysis (e.g., other organizations), and (iii) other times (Trochim, 2007).
To ensure external validity, random sampling from the targeted population is the main approach applied by modernist researchers. However, classic GT despises random sampling because it distracts researchers from theory generation and filling theoretical gaps, the main goal of classic GT (Glaser, 1979; 2004). As modernists, we also believe that theory generation should be the main goal of GT, and thus this concern is valid. Moreover, logistical issues can make random sampling infeasible in GT, or even useless when the sample is biased due to a high dropout rate (e.g., when sensitive subjects like strategic risk are studied). Yet, the following two measures can promote external validity by increasing the probability of arriving at the same theory if analysts repeated the research with new samples.

2.3.4.1 Heterogeneity Sampling Using Secondary Data Sources

To increase reproducibility, we suggest researchers conduct heterogeneity sampling along with theoretical sampling. Among the various sampling techniques applied by modernist researchers, heterogeneity sampling seems to have the highest alignment with classic GT. While random sampling tries to provide a sample that is representative of the population, heterogeneity sampling aims to include all types of units in a population regardless of their proportions in the population (Trochim, 2007). Heterogeneity sampling tries to include all different units (e.g., people, organizations) in a variety of places at different times. In the IT unavailability study, we tried to have various types of IT unavailability incidents (e.g., network outage, app outage, hardware failure) with various sources (e.g., human error, bugs) in organizations from various industries (e.g., transportation, banking, retail) in our sample. Nonetheless, the limitations of time and funding may not allow qualitative researchers to gather heterogeneous data. In the past, gathering heterogeneous data was a difficult task.
However, today, an abundance of online data (e.g., interviews in video format) exists (Levina & Vaast, 2015). Researchers can study many cases described and written by others, even if in a somewhat incomplete manner, rather than studying a couple of accurate, complete cases documented by the researchers themselves via in-depth interviews and observations. The accuracy of a case description from a secondary data source can be enhanced by gathering descriptions from different sources along with assertive coding. Finding case descriptions recounted by reliable narrators (e.g., professional correspondents at BBC or CNN) would also enhance accuracy. The descriptions are often not as reliable and complete as descriptions that could be created by the researchers themselves. Nevertheless, they are valuable, as researchers are able to study a larger sample size.

2.3.4.2 Apply a Careful Generalization of the Unit of Analysis

Heterogeneity sampling should be complemented by careful generalization. If researchers study three airlines, then the study can be generalized to other airlines at best. If a metropolitan rail system is also studied along with the airline cases, the study can be generalized to other transportation companies at best. Generalizing the study to all “organizations” instead of “airlines” or “transportation companies” is a violation of external validity. To claim that the theory holds for all “organizations”, at the very least, researchers need to study other cases from the service industry, the manufacturing industry, and public administration. If an industry is not studied, it should be mentioned as a limitation of the study unless the researchers can provide reasons why the theory still holds in that industry.

2.3.5 Measures That Improve All Validity Types

In the literature on qualitative methodologies, we have found several measures that can enhance two or more types of validity. In the following, we explain these measures in detail.
2.3.5.1 Provide a Thick Description of Theory

In GT, the burden of verifying the various validities shifts from researchers to potential users of the theory. Therefore, providing a dense and rich explanation of the generated theory is a must (Bitsch, 2005). Note that this is a thick theoretical description and aims at theoretical clarity; it should not be confused with the thick description of cases pursued by some qualitative methodologies. This theoretical clarity, materialized in Table 2.1 and Table 2.2, entails providing examples or quotes from studied cases to create a higher level of understanding. Citing interviewees and observations allows readers to reach an independent judgment regarding the merits of the analysis (Dube & Pare, 2003; Bitsch, 2005; Yin, 1994), i.e., the construct validity, conclusion validity, internal validity, and external validity of the theory.

2.3.5.2 Clarify the Scope at the Beginning of the Study

As mentioned before, classic GT starts with analyzing the data researchers have. The assumption is that there are no clear starting concepts, problems, or frameworks (Glaser, 2004). However, similar to many interpretivist researchers, having a clear research objective is an axiom to modernist researchers (Benbasat et al., 1987; Dube & Pare, 2003; Eisenhardt, 1989; Mays & Pope, 1995; Miles & Huberman, 1994) in that a clear research question is critical to site selection and data collection (Benbasat et al., 1987). In the study of IT unavailability, a permanent loss of data, which is an accuracy risk rather than an availability risk, could have been mistakenly considered within the scope of the study if we had started with the analysis of available data without clearly delineating the scope of the study.
To settle this difference, Urquhart and Fernandez (2013), who are interpretivists, suggested GT researchers conduct a preliminary literature review to a) define the scope of the study, b) choose the appropriate methodology for the study, c) develop theoretical sensitivity, and d) be aware of existing theories for future comparisons. In other words, this literature review is not a stepping stone but a starting point, which means the preliminary literature review is non-committal. In the study on the strategic risk of IT unavailability, we needed to clarify the conceptualizations of IT unavailability and strategic risk to decide which cases to include in the study and which to exclude. The study began with those conceptualizations; however, the initial conceptualizations were revised as a result of data analysis.

2.3.5.2.1 The Effect of Clarifying Scope on Selective Coding

It should be noted that having even a somewhat clear objective, scope, framework, or problem is against classic GT’s recommendation, as it is considered forcing on data (Glaser, 2004). In fact, finding those in a GT study is a turning point: it is where selective coding starts. Therefore, one may wrongly infer that in modernist GT, where researchers often start with a research question, analysts skip open coding and start with selective coding. The misunderstanding is rooted in the assumption that because researchers have a vague research question in mind, they may not be open to all types of data and only gather data related to the research question. On the contrary, in practice, interviewees, archives, and other sources provide data that “they believe” are related. What sources believe to be related data is different from what researchers believe to be related data. For instance, in the case of an airline that experienced an IT unavailability, a news article first described the IT outage and the business disruption it caused.
Then, it described another business disruption that had happened in the same year, caused by a job action. At first sight, we believed this part was unrelated. Nevertheless, we openly coded this event, and later these codes helped us unearth several critical constructs. Therefore, researchers still need to be open to any kind of empirical indicator. The question is, then, when to start selective coding. In other words, now that the problem, framework, scope, or objective is somewhat clear, when should analysts cease open coding and limit coding to the data that they believe are related? Our suggestion is when there exists a basic theory that somewhat answers the research question. This temporal point is when classic GT starts integrating the literature into the generated theory, since the main theory is almost safe from biases. That is why we believe it is safe to start selective coding at this point.

2.3.5.3 Clarify Prior Knowledge in the Write-up or Presentation

The researcher as a “blank slate” is another prevalent myth about GT (Urquhart & Fernandez, 2013). It is naive to think that researchers, especially experienced ones, begin a GT study without knowledge of extant theories. Most researchers have been exposed to various theories throughout their careers. Take Strong and Volkoff (2010), who provided a fresh perspective on misfit, as an example. Fit between IT and business had been studied for over two decades before their study. Hence, they did not start off with a clean slate. It is acceptable for GT researchers to have some theories in mind when they encounter a research question. We believe that knowing the literature relevant to the research question well should not be an excuse to bar experienced researchers from applying GT. In fact, the founding fathers of GT, Glaser and Strauss, were experienced researchers in sociology.
What they tried to do was reduce the chances of the study being contaminated by existing theories during the early stages of theory building (Glaser & Strauss, 1967). They wanted the generated theories grounded in data, not in extant literature. In this light, we suggest GT researchers clarify, in their papers or presentations, the knowledge they held prior to the beginning of the GT study. First, writing about prior knowledge helps researchers check for potential impacts of their biases. They may be able to reconsider the impacted parts of their findings. If the writing is done before the study, it is even more fruitful in that it makes analysts vigilant about their biases. Second, clarifying prior knowledge enables readers and reviewers to gauge researchers’ success or failure in extricating themselves from their biases. Disclosing what researchers already know is necessary to check whether it is the data that speaks, or readers are listening to researchers’ biases.

2.3.5.4 Investigator Triangulation

Many qualitative researchers resort to investigator triangulation to improve all types of validity (Dube & Pare, 2003; Bitsch, 2005). Investigator triangulation is defined as constituting a research team to balance predispositions (Bitsch, 2005, p. 84). When several researchers agree on the existence of the developed constructs and relationships, the theory has a higher potential for reproducibility. Investigator triangulation is often materialized in the form of two or more coders. However, Glaser (1978, p. 59) rejects such a team configuration and believes “the analyst must do his own coding”. According to him, hiring another coder, training him to go beyond data and conduct theoretical memoing simultaneously, and exchanging the vital information distract analysts from theory generation. Moreover, since coders have no stake in the analysis, they have less motivation to push for theoretical saturation and theoretical completeness.
Therefore, Glaser believes “if someone else is to code too, he should be hired as an analyst and given a stake in the resulting theory”. Glaser, however, suggests that only one analyst focus on coding and finding empirical indicators, while other researchers help out solely with generalization and conceptualizing. Modernist GT endorses this suggestion.

2.3.5.5 Peer Debriefing

Peer debriefing refers to sharing the conclusions of the study with non-contractually involved peers during the research process (Bitsch, 2005). By discussing findings with peer researchers, researchers test the theory against the peers’ perceptions in order to find missing constructs or relationships or any other flaws in the findings.

Peer debriefing can be conducted in different presentation formats and with different types of review (see Table 2.6). The study can be presented to the peers in the form of a complete GT study. However, if researchers want to obtain purely theoretical feedback, the theory and the literature review can be combined to comprise a research-in-progress field survey. In addition, the study findings can be sent in the form of a paper to conferences to be reviewed by anonymous reviewers, or researchers can solicit informal reviews from experienced researchers.

2.3.5.6 Participant Check

In addition to peer debriefing, it is suggested that study findings be reviewed by the participants who provided the data (Dube & Pare, 2003; Bitsch, 2005). The objective is to ensure that the provided data are in agreement with the findings, which ensures construct validity, conclusion validity, and internal validity. However, we noticed that participant check could also help with external validity. In the IT unavailability study, we took our theory to the research participants. In addition to explaining the outage they experienced using the theory, we explained some other cases of IT unavailability.
To our surprise, one participant also checked our findings against other cases he had experienced in his former jobs. Therefore, participant check can help with ensuring external validity as well.

Table 2.6 Various forms of peer debriefing

                        | Presentation format
Review type             | Complete GT study | Research-in-progress field survey
Anonymous formal review |                   |
Informal review         |                   |

2.3.5.7 Role of Numbers in Modernist GT

There is a myth in part of the IS community that modernist GT researchers should provide numbers in their manuscripts. We have also noticed that some studies provided coding statistics. For example, Volkoff et al. (2007), which took a critical-realist perspective, provided the number of passages with a code, the number of sources with passages having a code, and so forth.[7] On the contrary, we do not see the point in reporting such descriptive statistics (e.g., frequencies, ratios, percentages). Descriptive statistics are not able to test statistical hypotheses in general, nor to verify construct validity, conclusion validity, and external validity in particular. Only inferential statistics that report a confidence level can verify the corresponding hypotheses (Broyles, 2006). In GT, since the sample size is very small, applying inferential statistics is next to impossible. However, reporting descriptive numbers cannot help either. Researchers’ valuable time can be devoted to far more important activities, including collecting and analyzing more archival cases.

Table 2.7 provides a summary of the new as well as customized activities in modernist GT. Figure 2.8 illustrates the research process our approach to GT suggests. Appendix J also describes the similarities and differences between modernist GT and the methodology suggested by Miles et al. (2013).

[7] The authors, however, did not provide such numbers in another paper they published later (Strong & Volkoff, 2010).
2.4 Limitations

In our customized GT approach, we tried to improve the various validity types as much as possible. However, note that GT is a method for building theories, not testing them. The modifications cannot fill the void of verifying the accuracy or generality of a theory against a representative sample. For instance, the empirical generalization obtained by random sampling cannot be fully achieved by applying heterogeneity sampling and careful generalization. Hence, there is no doubt that the generated theory still needs to be tested against a proper sample using statistical analysis. Moreover, performing all the aforementioned tasks is not feasible in all studies. For instance, enough secondary data may not exist in some studies. Peer debriefing via submitting a research-in-progress paper to a conference requires a huge time commitment that is not feasible for many researchers. Therefore, we suggest that readers, and especially reviewers of positivist GT studies, view our methodology as a recommendation and consider the context of a study when performing or reviewing the suggested tasks.

Table 2.7 New and customized activities in modernist GT (columns: Activity; Reason for customization; Change description; Based on)

Activity: Open Coding. In classic GT, each line of qualitative data is assigned labels that reflect the gist of the line. Labels can be anything: descriptive, in vivo, or analytical (developed by the analyst).
- Reason for customization: Construct validity. Change description: a) Use words in a piece of data (in vivo codes) for open coding. If not possible, use words from similar pieces of data. Refrain from using analytical codes. b) Add a class-of-thing or thing to open codes and selective codes to avoid vague codes. Based on: Trochim (2007); Straub et al. (2004); Volkoff et al. (2007); Strong & Volkoff (2010); Weber (2012); Bunge (1977; 1979).
- Reason for customization: Internal validity. Change description: c) Be sensitive to causal and temporal terms during open coding and selective coding.
Based on: Trochim (2007); O'Leary (2004).

Activity: Constant Comparison. In classic GT, after each iteration of sampling and coding, analysts compare incidents to incidents, concepts to more similar incidents, and concepts to concepts. Constructs (categories or their properties) and their relationships are developed in this activity.
- Reason for customization: Construct validity. Change description: a) Start by generalizing things or classes-of-thing to new classes-of-thing. b) Only generalize empirical indicators with an identical class-of-thing. In other words, refrain from generalizing identical events, actions, attributes, and other knowledge points if they belong to different classes-of-thing, unless a superclass can be created. c) Apply a "data-then-literature"-driven generalization instead of a researcher-driven generalization. Based on: Trochim (2007); Straub et al. (2004); Urquhart and Fernandez (2013).
- Reason for customization: All validity types. Change description: d) Ensure investigator triangulation. The team consists of one analyst (coder) and one or two researchers who conduct generalization or validate it. While this team configuration is only a suggestion in classic GT, our approach to GT takes it seriously. Based on: Trochim (2007); Bitsch (2005); Dube & Pare (2003); Glaser (1978).

Activity: Assertive Coding (new).
- Reason for customization: Conclusion validity. Change description: The purpose of this activity is to enable analysts to assertively describe what occurred in a case by having a complete, accurate list of empirical indicators. To lower the distraction from theory generation, open coding is limited to new incidents or disparities in the redundant data gathered for the sake of reliability. The first (and sometimes the second) story of each case is coded in an open manner. The rest of the stories are coded using assertive coding. Based on: Trochim (2007); Oliver (2011).

Activity: Theoretical Sampling. In classic GT, choosing the new groups or subgroups for the next stage of data collection is driven only by the need to fill the emergent gaps in the theory.
Sampling is not performed for the sake of accuracy, reliability of data, or empirical generalization.
- Reason for customization: External validity. Change description: a) After theoretical sampling, heterogeneity sampling is applied to include various population units (e.g., organizations) in a variety of places at different times. In particular, the abundant existing secondary data can be analyzed for this purpose. Based on: O'Leary (2004); Trochim (2007); Levina & Vaast (2015).
- Reason for customization: Conclusion validity. Change description: b) Apply sampling for extreme values, along with theoretical sampling, to increase the saliency of relationships. c) Gather redundant data on a case using data and method triangulation to increase reliability. Apply assertive coding to develop a full picture of a case while decreasing the time not spent on theory generation. Based on: Trochim (2007); Bitsch (2005).

Activity: Theoretical Memoing. In classic GT, an analyst interrupts open/selective coding, reading the literature, or any other activity to add ideas about categories, properties, conceptual connections between data and categories (i.e., theoretical codes (TCs)), and gaps in the theory to the memo bank.
- Reason for customization: Construct validity. Change description: a) Develop a hierarchical diagram of substantive codes and categories to reduce the chance of missing an empirical indicator's contribution in coining a category name. b) Develop a thesaurus of things and classes-of-thing in the memo bank to decrease the number of redundant open codes and thus reduce oversights in generalization. c) Develop a cross-case analysis matrix, which includes cases and measurable categories (attributes), to identify missing constructs and empirical indicators. Based on: Trochim (2007); Straub et al. (2004).
- Reason for customization: Internal validity. Change description: d) Develop a process model for each case that visually represents the temporal or causal relationships of empirical indicators. Use the same color for empirical indicators with the same type of knowledge point or the same categories. This reduces oversights in relationship identification and increases reproducibility.
e) Develop a list of rival explanations and report non-fitting data so that readers become vigilant of their probable effects (i.e., are able to check internal validity in the context in which they want to use the theory). Based on: Trochim (2007); Creswell (2012); Bitsch (2005); Dube & Pare (2003); Johnson & Christensen (2008); Yin (1994); O'Leary (2004).

Activity: Memo Sorting. In classic GT, the final theory is generated at this stage. Once categories are saturated by data from various groups, all memos (categories, properties, and TCs) are compared against one another to develop new TCs, verify older ones, and integrate all TCs in order to build the final theory. In particular, coding families like the strategy family and the means-end family are very useful.
- Reason for customization: Internal validity. Change description: a) Use explanatory validation to identify missing relationships in the theory, increasing the reproducibility of relationships even with different datasets. b) Finalize the list of rival explanations and non-fitting data. Based on: Holtzer (2004); Idalovichi (2014); Weber (2012).
- Reason for customization: External validity. Change description: c) Apply a careful generalization of the unit of analysis. Based on: Trochim (2007).

Activity: Writing the Paper.
- Reason for customization: All validity types. Change description: a) Provide a thick description of the theory so that readers can gauge its validity in the context in which they apply the theory. Based on: Bitsch (2005); Dube & Pare (2003); Yin (1994).

Activity: Literature Review. In classic GT, the literature of substantive fields different from the research topic can be reviewed at any time. Once there is a basic theory and a core category, related literature is fed into constant comparison as if it were another source of data.
- Reason for customization: All validity types. Change description: a) Clarify the scope at the beginning of the study using a preliminary literature review. b) Clarify prior knowledge in the write-up or presentation so that the audience can check whether it is the data that speaks, or readers are listening to the researchers' biases. Based on: Urquhart & Fernandez (2013); Benbasat et al.
(1987); Dubé & Pare (2003); Eisenhardt (1989); Mays & Pope (1995); Miles & Huberman (1994).

Activity: Selective Coding. In classic GT, open coding ceases once the core category (the pattern that sums up the data) has emerged, and researchers delimit coding to only those variables that relate to the core category.
- Reason for customization: Preliminary literature review. Change description: Selective coding starts when a basic theory is found. Positivist researchers usually start with a somewhat clear question; therefore, the core category is somewhat determined at the beginning of the study. Note that the research question can be changed or clarified during the research process. Based on: Glaser (1978; 2004).

Activity: Peer Debriefing & Participant Check.
- Reason for customization: All validity types. Change description: a) Conduct peer debriefing. b) Conduct participant check. Based on: Dube & Pare (2003); Bitsch (2005).

Figure 2.8 Modernist GT methodology

2.5 Conclusion

We started this chapter with a quest to customize the GT method so as to be able to develop a reproducible, ontology-compatible theory. Neither the theories developed via current GT methodologies nor their theory development processes are compatible with the way modernism, i.e., objectivism-positivism, views and tests theories. This has resulted in positivist IS researchers shying away from GT, and the method losing its ability to provide novel yet well-contextualized theories about new IT phenomena and sensitive subjects. This chapter reconciled the differences between the theoretical goals of classic GT and objectivism-positivism. We tried to stay away from torturing the data and to be open to all relationships found in the data. However, classic GT's lack of measures to enforce accurate conclusions from data was not reconcilable. Hence, we had to sacrifice parsimony for the sake of generality and accuracy. We then modified the activities accordingly to achieve the established goals. The customized activities increase the chance of having reproducible constructs and relationships when other researchers use different datasets to develop conclusions from data.
We emphasized that the developed theory lacks empirical generalizability and still needs to be verified against a sample of proper size collected using random sampling.

Chapter 3: How does IT Unavailability Become a Strategic Risk? A Grounded Theory Approach

3.1 Introduction

Organizations are becoming increasingly vulnerable to the unavailability of Information Technology (IT) (Westerman & Hunter, 2007). A one-day failure of Comair's crew-scheduling system in 2004 resulted in a $20m financial loss and the resignation of the company's president three weeks after the incident (Westerman, 2009). The Royal Bank of Scotland (RBS) set aside £125m for compensation and was fined £56m by regulators due to a one-week disruption of its core banking systems in 2012 (BBC, 2014). The struggling healthcare.gov website was believed to be "the number one driver" of the US Democrats' Senate loss in the 2014 election, according to the Democrats' then-Senate leader and Republican strategists (Raju, 2014; Borger, 2014). There is no shortage of real-world stories in which the unavailability of IT resulted in significant losses or strategic consequences (Chen et al., 2011). On the other hand, many IT interruptions have not resulted in any noticeable impact on organizations. For instance, Dynes et al. (2007) found that IT disruptions within the supply chain of an electrical component manufacturer did not affect the firm's strategic ability to manufacture parts. Likewise, Novartis' Chief Information Officer (CIO) argued that "it probably doesn't matter if one of our payroll systems breaks. We can give people cash; we'll give them checks. It's no business risk whatsoever" (Westerman & Hunter, 2007, p. 21). It is also evident that there are many other instances of IT unavailability with relatively mild consequences. This raises an important, and yet unanswered, question: how and under what conditions does IT unavailability become a strategic risk?
Most studies on IT unavailability (Azzopardi et al., 2011; Liu et al., 2010; Tsai & Sang, 2010; Hariharan, 2010; Ohlef et al., 1978; Chunping et al., 1997; Beaudet et al., 2006; Chen et al., 2002) have mainly focused on estimating or decreasing the likelihood and/or duration of IT unavailability. Other than a few exceptions, such as Zambon et al. (2007) and Zambon et al. (2011), there is a dearth of research on the business consequences of IT unavailability. This chapter aims to shed light on the business side of IT unavailability and uncover how organizations are strategically affected by the interplay between failing business and IT components. To answer the research question, we applied a customized Glaserian grounded theory methodology (GTM) to analyze cases of IT unavailability with varying degrees of strategic consequences. GTM suggests a systematic way to generate a theory from data (Glaser & Strauss, 1967; Strauss & Corbin, 1990; Glaser, 1992). It is especially useful in developing process-oriented (i.e., how-oriented) explanations of phenomena (Glaser, 1978; Myers, 1997; Orlikowski, 1993; Urquhart et al., 2010; Urquhart & Fernandez, 2013). Grounded in data, GTM also enables researchers to discover new theoretical constructs and/or take a fresh look at existing constructs to understand them better (Glaser & Strauss, 1967; Glaser, 1978; Olbrich et al., 2011; Strong & Volkoff, 2010). This chapter makes three main contributions to the literature on IT unavailability. First, the developed theory clarifies how the interplay between IT unavailability and business disruption leads to strategic risk. The theory is able to explain how businesses trigger IT unavailability and how organizational reactions can mitigate or exacerbate its consequences. Second, the generated theory introduces more accurate estimators of the impact of IT unavailability.
In particular, a capability-based impact analysis of IT unavailability is suggested, as opposed to the current resource-based and process-based impact analyses. Third, the study reveals new constructs, such as Public Psychological Proximity of IT unavailability and Client Limbo, which intensify the impact of IT unavailability on business. Consistent with the suggestions by Urquhart and Fernandez (2013) on writing GTM articles, the chapter is organized as follows. The next section provides a preliminary literature review that defines the scope of the study and our initial knowledge prior to the start of the study. Next, the research methodology is described and the findings are explicated. Subsequently, the explanatory power of the theory is discussed. Then, the research's contributions to the literature as well as recommendations to practitioners are provided. Subsequently, the limitations of the study are discussed and possible future research directions are suggested. Finally, the chapter concludes with a summary of the findings.

3.2 A Preliminary Literature Review

To distinguish relevant cases from irrelevant ones during theoretical sampling, we had to have a clear understanding of the two key concepts on which the research question is built, namely, IT unavailability and strategic risk. Therefore, we conducted a non-committal review of the literature (Urquhart & Fernandez, 2013) to determine how these two concepts are conceptualized and operationalized by practitioners and researchers. The literature review was non-committal in that the conceptualizations and operationalizations obtained were just a starting point, not a stepping stone. It only provided primary inclusion and exclusion criteria for cases. In addition, the preliminary literature review describes our knowledge prior to the start of the data collection and analysis.
Therefore, readers can evaluate our success in extricating ourselves from prior biases that can undermine a GT study (Glaser & Strauss, 1967; Glaser, 1978). Note that this literature review is not meant to justify creating a new theory (Glaser, 1978).

3.2.1 IT Unavailability

IT risk is defined as "the potential for an unplanned event involving Information Technology to threaten an enterprise objective"8 (Westerman & Hunter, 2007, p. 22). From a business perspective, IT risk can be categorized in terms of availability, access, accuracy, and agility (Westerman & Hunter, 2007). Availability risk refers to the potential failure to keep systems running and to recover from disruptions. Access risk concerns the potential failure to ensure appropriate access to data and systems. Accuracy risk is the potential failure to provide correct, timely, and complete information. Agility risk is the potential failure to change information systems with acceptable cost and speed. While acknowledging the criticality of access, accuracy, and agility risks, this study focuses on availability risk and the consequences of IT unavailability incidents. In this light, a cyber-attack that results in the disclosure of private customer information is only an access risk and outside the scope of the study, while a cyber-attack that results in system unavailability is considered within the scope of this study.

8 There is a lack of consensus among IS researchers on the definition of IT risk. One aspect of this disagreement is whether IT risk should focus exclusively on negative outcomes or include both threats and opportunities (Alter & Sherer, 2004; Benaroch et al., 2006). This study adopts a negative connotation of risk.
3.2.1.1 Conceptualization of IT Unavailability as Downtime

The prevalent operational definition of availability (unavailability) used by IT practitioners is the ratio of the duration a system or IT component is functional (non-functional) to the total duration it is required or expected to function (Federal Standard 1037C, 2016; Azzopardi & Lamb, 2011; Tsai et al., 2010; Azzopardi et al., 2011; Khan et al., 2007). Example target values for IT availability are two nines (i.e., being 99% available, or 1% unavailable, equating to 3.65 days per year) or three nines (i.e., being 99.9% available, or 0.1% unavailable, equating to 8.76 hours per year) (Marcus, 2003). Another conceptualization considers availability (unavailability) as a distribution of uptime (downtime) experienced over a time interval (see Figure 3.1) (Beaudet et al., 2006). Therefore, while both conceptualizations share a downtime-based view of unavailability, the former operationalizes unavailability based on the duration of downtime while the latter is based on both duration and frequency (Azzopardi et al., 2011). Unlike the duration-based conceptualization, which considers all downtimes to be worth exactly the same to the organization, the distribution-based conceptualization is time-sensitive and cognizant that an outage in December during the Christmas shopping period can hurt an e-commerce firm more than an outage in August (Marcus, 2003). It is evident that downtime is the intersection of both conceptualizations and is the operational term usually used by practitioners and researchers in lieu of unavailability.
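The "nines" figures quoted above follow directly from the ratio definition of availability. A minimal sketch (illustrative only; the function name is our own) converts an availability target into the yearly downtime budget it permits:

```python
# Convert an availability target (e.g., "three nines") into the
# maximum downtime it permits over a year of required operation.
HOURS_PER_YEAR = 365 * 24  # 8,760 hours

def allowed_downtime_hours(availability: float) -> float:
    """Yearly downtime budget implied by an availability ratio."""
    return (1.0 - availability) * HOURS_PER_YEAR

# Two nines: 1% unavailable, i.e., 87.6 hours or 3.65 days per year.
two_nines_days = allowed_downtime_hours(0.99) / 24
# Three nines: 0.1% unavailable, i.e., 8.76 hours per year.
three_nines_hours = allowed_downtime_hours(0.999)

print(f"two nines:   {two_nines_days:.2f} days/year")
print(f"three nines: {three_nines_hours:.2f} hours/year")
```

The computed budgets match the duration-based targets cited from Marcus (2003); note that this calculation, like the duration-based conceptualization itself, is blind to when the downtime occurs.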
Downtime can be caused by many factors (Goldstein et al., 2011): technical problems (e.g., hardware failure, software bugs, network overload, and power outages), natural phenomena (e.g., floods), malicious activities (e.g., denial-of-service (DoS) attacks, vandalism), human error, inadequate IT capabilities, ill-suited service-level agreements (SLAs), inappropriate configuration, and improper infrastructure management.

3.2.1.2 Reducing Downtime

To reduce the duration of downtime, the literature mainly suggests disaster recovery plans (DRPs) to guide the efforts to recover the system as quickly as possible (Swanson et al., 2002; Azzopardi et al., 2011; Westerman & Hunter, 2007). Moreover, system features like automatic restart, automatic backup and restoration, dynamic resource allocation, and graceful degradation have been suggested (Limoncelli et al., 2014). Disaster recovery as a service (DRaaS) is the most recent approach suggested in the literature to quickly restore service after downtime (Wood et al., 2012; Abuhussein et al., 2012).

Figure 3.1 Two popular conceptualizations of (un)availability

To decrease the likelihood (or frequency) of downtime, the literature suggests employing automatic crash data collection and analysis (Limoncelli et al., 2014), creating IT redundancy through replicas or load balancing (Limoncelli et al., 2014; Azzopardi et al., 2011), reducing the interdependency of systems through loose coupling and modularity (Limoncelli et al., 2014; Tsai & Sang, 2010; Zambon et al., 2007; Zambon et al., 2011; Azzopardi et al., 2011; Tanriverdi et al., 2007), applying Simian Army tests (Izrailevsky & Tseitlin, 2011) such as the chaos monkey test (Bennett & Tseitlin, 2012), and analyzing availability using failure mode and effects analysis (FMEA) (Ohlef et al., 1978; Chunping et al., 1997), fault tree analysis (FTA), and reliability block diagrams (Beaudet et al., 2006; Chen et al., 2002).
In general, having service-level agreements with IT service providers (Liu et al., 2010), creating a risk-aware culture through training IT users (Westerman & Hunter, 2007), having formal processes for IT availability management such as those specified in the Information Technology Infrastructure Library (ITIL) and Control Objectives for Information and Related Technologies (COBIT), and having non-complex system architectures (Hariharan, 2010; Westerman & Hunter, 2007) can help reduce both the duration and the likelihood of downtime.

3.2.1.3 Impact of Downtime

Even with all the above precautions, all systems eventually fail (Limoncelli et al., 2014). According to the literature, the impact of downtime can be measured in terms of:
• Labor productivity loss (e.g., lost user minutes, required overtime minutes, number of employees affected) (Liu et al., 2010; Balaouras & Dines, 2010; Vision Solutions, 2014)
• Revenue loss (e.g., unprocessed business transactions, lost future revenue) (Liu et al., 2010; Okolita, 2009; Vision Solutions, 2014)
• Financial cost (e.g., compensatory payments, customer retention cost, lost discounts, credit rating, temporary employees, equipment rental, overtime costs) (Okolita, 2009; Balaouras & Dines, 2010; Vision Solutions, 2014)
• Damage to the reputation and loyalty of business partners (e.g., customers, suppliers, financial markets, banks, employees) (Okolita, 2009; Balaouras & Dines, 2010; Vision Solutions, 2014)
• Legal and regulatory obligations (Balaouras & Dines, 2010; Vision Solutions, 2014)
The impacts of downtime are investigated in a risk management activity called business impact analysis (BIA), which deals with the identification and prioritization of critical IT systems and components (Swanson et al., 2002).
BIA is the first step in business continuity management (BCM) (Tammineedi, 2010), which also includes developing recovery and contingency plans for critical systems and, finally, implementing and testing those plans to keep the business running (Westerman & Hunter, 2007).

3.2.2 Strategic Risk

The strategic management literature has studied risk in different contexts. Most studies take a financial economics perspective, conceptualizing risk as the covariance of a company's expected return with the expected return of the market, above the risk-free rate yielded by government bonds (Ruefli et al., 1999; Sharpe, 1964; Lintner, 1965; Reinganum, 1981; Lakonishok & Shapiro, 1986; Baird & Thomas, 1985, 1990; Fama & French, 1992; Fama & French, 2002). Some other studies in the management science literature use the volatility of a firm's financial performance to measure strategic risk (Ruefli et al., 1999; Conrad & Plotkin, 1968; Cootner & Holland, 1970; McEnally & Tavis, 1972; Marsh & Swanson, 1984). Finally, a few studies view strategic risk as the probability of losing competitive advantage over competitors (Ruefli et al., 1999; Collins & Ruefli, 1992; Drew et al., 2006; Tanriverdi et al., 2007). This perspective considers the relative nature of strategic risk and thus resonates better with business managers (Collins & Ruefli, 1992). Strategic risk in the context of our research question seems to be most aligned with this last conceptualization. Different conceptualizations in the literature are related to this view. Collins and Ruefli (1992, p. 1709) defined strategic risk as "the probability of losing rank position vis-a-vis the other firms in the reference set." The reference set is the group of firms providing similar services or products and/or competing with one another over resources. Drew et al. (2006) defined strategic risk as the probability of losing market position, critical resources, and the ability to innovate and grow. Finally, Tanriverdi et al.
(2007) defined strategic risk as the probability of any of the following mishaps: loss of financial health, inferior competitive position, or decrease in market share. Having defined the scope of the research to some extent, the next step was to collect and analyze cases with unavailability incidents, using the conceptualization of IT unavailability to exclude those cases that did not fit it. The conceptualization of strategic risk, particularly the reputation damage mentioned by Tanriverdi et al. (2007) as a materialization of strategic risk, also helped us find cases with strategic consequences. In the following section, we explain how we identified the cases, analyzed them, and built our theory.

3.3 Methodology

3.3.1 Positivist Grounded Theory

Chapter 2 provides an overview of the customized GT methodology developed to achieve the stated goals. To summarize, we needed a theory-generation methodology that could satisfy our objectivist-positivist view of science in addition to reducing the possibility of inaccurate conclusions from the secondary data we relied on. Therefore, we customized the GT method9 to derive a methodology that (i) produces an ontology-compatible theory (Weber, 2012) and (ii) increases the chance of reproducibility of our conclusions with other datasets and researchers. The former ensures that theories are verifiable via generally accepted positivist techniques like statistical analysis and econometrics, while the latter answers the call by Burton-Jones & Lee (2017) for strengthening accuracy in GT. We selected classic GT, namely, Glaserian GT (Glaser & Strauss, 1967; Glaser, 1978, 1992, 1998, 2005), over Straussian GT (Strauss & Corbin, 1990, 2008) as the basis of our methodology, since the concept-indicator basis of classic GT (Glaser, 1978, 2005) is closer to our positivist view of science than the symbolic interactionist basis of Straussian GT.
Despite the congruity of classic GT's basis with positivism, certain inconsistencies exist between their goals. According to Weick (1995, pp. 389-390), any explanation is deficient in one or more of the qualities of generality, accuracy, and parsimony. Classic GT focuses on generality and parsimony (Glaser, 1978, p. 93) and holds that accuracy is not relevant at the conceptual level (Glaser & Strauss, 1967, p. 23). On the contrary, we believe accurate (reproducible) development of conclusions from data is important. We also want a complete theory that can explain as many cases as possible. This means that removing a construct or relationship from our theory would result in not being able to explain at least one case of IT unavailability. In this light, our approach focuses on generality and accuracy, at the cost of sacrificing parsimony.

9 Note that Urquhart and Fernandez (2013) believe that GT as a method is neutral until it is implemented within a methodology, when epistemological and ontological considerations are included.

3.3.2 Research Process

The motivation for this study had its genesis in the difficulty we faced in explaining the differential consequences of two cases of IT unavailability. The downtime-based literature on availability could not explain why a one-day outage of Comair's crew-scheduling system left strategic consequences whereas two months of hiccups in a learning management system (LMS) at the University of X (UoX) left no major impact. After clarifying the purpose and scope of the study with a non-committal preliminary literature review, we embarked on collecting and analyzing mainly secondary data (see Appendix E for links to the sources). We opted for secondary data since we found organizations to be uncooperative due to the sensitivity and confidentiality of failure events of a strategic nature. Moreover, secondary data enabled us to study 28 cases and build a more general theory.
If, instead of studying 28 cases from secondary sources, we had limited ourselves to studying one or two cases in depth, we would have missed contextual factors such as Public Psychological Proximity (PPP) and IT Detachability in our theory. The secondary data were gathered from stories published in online news outlets (e.g., CNN, BBC) or on organizations' websites that narrate infamous cases of IT unavailability. These stories were written by professional correspondents, audit teams, or experts who had obtained information from insiders, customers, and other stakeholders via their unique social networks. As Glaser (1978; 2004) suggested, we started by analyzing the available data, i.e., the stories related to the case of Comair. We openly coded the first two stories and conducted assertive coding of the six remaining stories, which were narrated by other sources, to obtain a complete, accurate list of empirical indicators (e.g., incidents, actions, events). Our theoretical sampling at this stage was directed by the similarity of the cases, so that they would have similar empirical indicators, which facilitates their constant comparison in order to extract constructs and relationships. Hence, we continued the analysis with cases related to three other airlines: Virgin Blue, American Airlines, and United Airlines. Next, we analyzed two separate incidents of IT unavailability in an urban transportation system, the Translink Skytrain. By the end of the analysis of these six cases, we had 406 open codes, six process models, one cross-case analysis matrix showing the values of each identified construct in each case, and a basic substantive theory. Therefore, we began selective coding of data on new cases to saturate existing constructs with samples and to find new constructs, if any. At this stage, our theoretical sampling was directed at including cases with both low and high impacts in order to figure out a way to understand and measure strategic risk in the context of IT unavailability incidents.
After identifying a tentative method to evaluate the magnitude of strategic risk, we began heterogeneity sampling so that various types of industry, IT unavailability, and business incompetency would participate in building the theory. This helped us better understand constructs like public psychological proximity (PPP) and improved the external validity and generality of the theory. Along with selective coding, we conducted a thematic literature review to identify proper labels for the identified constructs. Next, in the memo-sorting phase, we verified our theory (Figure 3.2) against all the identified constructs and relationships we had added to the memo bank. The explanatory power of the generated theory was validated so that the theory explains as many of the identified relationships between empirical indicators and categories as possible. We also continued reading, but not coding, stories of new cases of IT unavailability in the news and checking our theory's explanatory power against them. Coding two of these stories was deemed necessary, as our theory had difficulty explaining some of their empirical indicators. The dynamic view of IT unavailability (Figure 3.3) is an outcome of this memo sorting and explanatory validation. After the theory was consolidated, a theoretical literature review was conducted to extract how the theory contributes to the existing literature on IT unavailability and neighboring areas. It is worth noting that different versions of the developed theory (presented as a GT study as well as a research-in-progress field survey without mentioning GT) were reviewed by anonymous reviewers of a top IS conference and by experienced researchers to increase the chance of reproducibility and ensure the explanatory power of the theory. Table 3.1 shows the details of the 28 cases we studied. The heterogeneity of our sample enhanced the generality of our findings.
The sample includes various industries, various types of IT resources, and an 11-year interval of occurrence from 2004 to 2015. Similarly, the extreme sampling ensured the sample covers a range of downtime durations from zero to five months as well as a range of client-base awareness of the incident (the operationalization of strategic risk) from no awareness to complete awareness. This enhanced the reliability (reproducibility) of the relationships. Furthermore, the reliability of our findings was improved by having several narrations of each case. An average of 6.5 stories analyzed per case ensures data triangulation. The stories included observations, interviews, and archival data, which ensures method triangulation. We also conducted two interviews in the case of UoX. Overall, 182 stories from 1 to 40 pages in length were studied.

Table 3.1 Cases analyzed in this study
(Order | Industry | Organization | IT resource | IT downtime duration | Date | No. of stories | Coding | Sampling | Client-base awareness (strategic risk))
1 | Transportation | Comair | Crew scheduling system | 1 day | 2004 Dec | 8 | Open | Theoretical | Complete
2 | Transportation | Virgin Blue | Check-in, reservation, boarding systems | 21 hours | 2010 Sep | 13 | Open | Theoretical | Complete
3 | Transportation | American Airlines | Pilots' iPad app | 12 days (sporadic) | 2015 Apr | 9 | Open | Theoretical | Complete
4 | Transportation | United Airlines | Router issue, airline's reservation system | 90 minutes | 2015 Jul | 12 | Open | Theoretical | Complete
5 | Transportation | BC TransLink SkyTrain | Communication circuit board | 3 hours | 2014 Jul 17 | 3 | Open | Theoretical | Complete
6 | Transportation | BC TransLink SkyTrain | Vehicle control centre (VCC) system | 1 hour | 2014 Jul 21 | 3 | Open | Theoretical | Complete
7 | Government | US Department of Health | Healthcare.gov (health care enrollment web site) | ~2 months | 2013 Oct | 13 | Selective | Theoretical | Complete
8 | Government | British Columbia Government | Integrated case management (ICM) system | 2 weeks | 2014 May | 12 | Selective | Theoretical | Complete
9 | Government | UK Border Force | Self-service e-passport gates and staffed customs desks | 1 day | 2014 Apr | 8 | Selective | Theoretical | Complete
10 | Government | US State Department | Passport database | 3 days (2 weeks at 50%) | 2014 Aug | 6 | Selective | Theoretical | Complete
11 | Financial | Royal Bank of Scotland | Banking system | 1 week | 2012 Jun | 6 | Selective | Theoretical | Complete
12 | Financial | Bank of America | Online banking | 1 week | 2011 Oct | 7 | Selective | Theoretical | Complete
13 | Financial | JPMorgan Chase | Database outage, online banking | 3 days | 2010 Sep | 5 | Selective | Theoretical | Complete
14 | Financial | Development Bank of Singapore | Network outage, back-office services to ATMs | 7 hours | 2010 Jul | 3 | Selective | Theoretical | Complete
15 | Entertainment | Netflix | Amazon web service | 0 minutes | 2014 Oct | 3 | Selective | Theoretical | Not at all
16 | Entertainment | Netflix | Online streaming | 1 day | 2012 Dec | 8 | Selective | Theoretical | Complete
17 | Education | University of X | Learning management system (LMS) | 1700 minutes | 2013 Sep & Oct | 4 | Selective | Theoretical | Negligible
18 | Manufacturing | BMW | Car-to-mobile app | 1 week | 2014 Jul | 5 | Selective | Heterogeneity | Considerable
19 | Manufacturing | BMW | Supply management system | 3 months | 2013 Jun-Sep | 3 | Selective | Heterogeneity | Significant
20 | Manufacturing | Saudi Aramco | All computers hacked (but production lines) | 5 months | 2012 Aug | 10 | Selective | Heterogeneity | Complete
21 | Utility | BC Hydro | Customer communication website | 2 days | 2015 Aug | 7 | Selective | Heterogeneity | Complete
22 | Health | Sutter Health | Electronic health record system | 1 day | 2013 Aug | 4 | Selective | Heterogeneity | Complete
23 | Entertainment | Sony | PlayStation online network hacked | 25 days for online gaming, 42 days for the digital content shop | 2011 Apr | 11 | Selective | Heterogeneity | Complete
24 | Retailing | Loblaw | Cash registers | 10 hours | 2016 May | 5 | Selective | Heterogeneity | Complete
25 | Retailing | Amazon | Amazon e-commerce | 20 minutes | 2016 Mar | 6 | Selective | Heterogeneity | Complete
26 | Retailing | Target | Target website | 1 day | 2011 Sep | 3 | Selective | Heterogeneity | Complete
27 | Government | UK's National Border Targeting Centre | System that checks passenger data against watch lists of suspect individuals | ~2 days | 2015 Jun | 3 | Selective | Theory Validation | Significant
28 | Transportation | Delta | Ground operations software | 3-4 hours | 2016 Feb | 2 | Selective | Theory Validation | Complete

Both the heterogeneity and the reliability of the sample helped with the reproducibility of the generated theory (for further details, see Chapter 2). In the next section, we explain two theories generated during the research process. Then we provide a discussion of the validity of our final theory and of our contributions to IS researchers and practitioners.

3.4 Findings: Domino View vs. Dynamic View of IT Unavailability

In this section, we explain how IT unavailability becomes a strategic risk. We provide two theories: a domino view and a dynamic view of IT unavailability incidents. The domino view was our initial answer to the research question. Yet, in the theory validation phase, we noticed that our over-generalization of categories and our biases left the domino view unable to fully explain some observations. Therefore, we went back to our memos and devised the dynamic view. Nonetheless, we briefly explain the domino view for two reasons (a detailed explanation is provided in Appendix F). First, the domino view facilitates the understanding of the dynamic view; in fact, the dynamic view was developed using attributes of the domino view's constructs. Second, in the peer debriefing and participant check phases, we noticed that many researchers and practitioners hold a domino view of IT unavailability incidents. We wanted to emphasize that this view lacks the power to explain all cases of IT unavailability.

3.4.1 Domino View of IT Unavailability

The generated theory illustrated in Figure 3.2 provides a domino view of an IT unavailability incident.
The domino causal pattern includes a sequence of constructs with a clear beginning and end, where effects propagate from causes in domino-like patterns (Grotzer, 2004; Bell-Basca et al., 2004; Jarema & Libben, 2007).

[Figure 3.2 Domino view of IT unavailability]

The domino pattern can be found in the literature represented in the form of structural equation models tested via applications like LISREL and SmartPLS (Maruyama, 1997). The domino view consists of ontology-compatible constructs (Weber, 2012) (see Table 3.2) that indicate the states of the affected classes-of-things, i.e., the IT resource, the organization, and related entities like clients, industry, and end users. Note that, other than organization and IT resource, the remaining classes are at the collective level (Burton-Jones & Gallivan, 2007). Two types of constructs are used in the domino view. Core factors are constructs that comprise the chain of events from IT unavailability to strategic risk. Contextual factors are constructs that strengthen or weaken the severity of the effect of a core factor on the next one in the sequence.[10]

Table 3.2 List of constructs that constitute the domino view

IT Unavailability: A mutual property of the IT Resource and End Users classes, defined as the extent to which end users perceive an IT resource as unavailable to them. Proxies: IT Capacity Deficit; Response Time (Average | Percentile; End Users' Location | End-to-end Latency); Geographical Span; Duration; Frequency. Example from cases: i) Although UoX's Learning Management System (LMS) was not down and was technically available, many (not all) students and lecturers were unable to use the system due to the huge demand for it. ii) The Obamacare website drew 250,000 concurrent end users during the first days of launch, while it was able to handle 1,100 concurrent users. Thus, there was a capacity deficit as big as 248,900 end users, or 99.56% of the end users.
The average end-to-end latency was more than 60 seconds during the first days of rollout and was 8 to 16 seconds after a month. The unavailability was country-wide, and its magnitude gradually decreased until it hit a one-second average end-to-end latency at the end of the second month. Supported by: Westerman & Hunter (2007).

[10] These conditions are referred to as (i) moderators, a term applied in variance-based theories; (ii) contextual factors, a term familiar to GT researchers (Glaser, 1978, p. 74); or (iii) peripheral factors, a term familiar to configurational theory researchers (El Sawy et al., 2010).

(Client-related) Business Incompetency: A mutual property of the Organization and Clients classes, defined as the extent to which clients of an organization perceive it as incapable of providing services or products to them. Proxies: Business Capacity Deficit (Number of Affected Business Operations, Number of Unserved Clients); Operation Delay; Geographical Span; Duration of Business Incompetency; Frequency. Example from cases: i) Comair lost its capability to operate flights. BMW was not able to deliver spare parts and thus repair customers' vehicles. The UK Border Force was not able to check passengers against the no-fly list and thus protect citizens and passengers against terrorists. ii) As a result of a malfunctioning communication circuit board, SkyTrain lost its capability to drive trains completely (100%) for 6 hours in one region (out of three regions). Five days later, in another incident, SkyTrain experienced a ubiquitous 100% business incompetency for five hours due to a power outage that shut down the system that guides its self-driving trains. Supported by: Homann (2006); Ulrich & Rosen (2011).
Information Incompetency: A mutual property of the Organization and Information Consumers (e.g., employees, clients) classes, defined as the extent to which an organization is incapable of supplying its information consumers with the information they require. Proxies: Information Capacity Deficit; Operation Delay (Response Time); Geographical Span; Duration of Information Incompetency; Frequency. Example from cases: For a day, Sutter Hospital was unable to provide the nurses and physicians of all of its 100+ facilities in Northern California with access to updated patients' vital history, medication orders, allergies, and so forth. Supported by: Martin & Leben (1989); Galbraith (1974); Premkumar et al. (2005); Tan et al. (2016); Tanriverdi & Ruefli (2004); Tian & Xu (2015).

IT Resource's Detachability: A mutual property of the Organization and IT Resource classes, defined as the extent to which the IT resource and the information capabilities it supports are separable, that is, the extent to which the supported information capabilities can be independently materialized on another IT resource or business process without loss of functionality. Proxies: Backup's Information Staleness; Backup's Time Lengths of Operation; Backup's Agility (response time to IT unavailability); Backup's Information Capacity Coverage; Backup's Geographical Coverage; Backup's Response Time (to information needs). Example from cases: i) In the case of Sutter Hospital, it was reported that the data on the backup system had been two to three days out of date. Moreover, Sutter's pen-and-paper process had been capable of handling outages of up to one to two hours before it became inefficient. ii) In the case of SkyTrain, where the Vehicle On-Board Computers were not able to drive the trains, it took staff 25 minutes to reach the trains and drive them to the next station. Supported by: Tanriverdi et al. (2007).
IT Resource-Business Alignment: A mutual property of the Organization and IT Resource classes, defined as the extent to which the IT resource covers an organization's business activities, data, interactions, roles, controls, and culture. Proxies: percentage of the organization's covered information capabilities, data entities, user interactions, organizational roles, controls, and norms. Example from cases: i) In the case of United Airlines, several information capabilities, including selling tickets, tracking maintenance schedules, crew scheduling, gate assignments, and managing aircraft movement, were dependent on the failed reservation system. The system was irreplaceable. ii) In a similar vein, it was reported that several of Comair's information capabilities had sprung from the crew scheduling system; that users had become extremely accustomed to Comair's crew scheduling system; that controlling for compliance with the strictly enforced Federal Aviation Administration safety regulations had been built into the system; and that even the definition of a workday in crews' contracts had been lifted straight out of the concepts in the system. The system was irreplaceable. iii) However, in the case of American Airlines, the faulty app supported only one information capability: supplying pilots with navigation plans. The app was replaced by a PDF reader app and navigation plans on paper. Supported by: Strong & Volkoff (2010).

Operational Inertia: An intrinsic property of the Organization class, defined as the tendency of an incapability to remain unchanged; in other words, the degree to which it is difficult to re-initialize an organization's processes in order to achieve normal business capacity. Proxies: Duration to Resume Normal Operation (the time lag between IT recovery and business recovery); Number of Affected sub-Capabilities. Example from cases: Comair reacquired its normal crew scheduling capability after a day.
However, flight operation was not at full capacity until three days later (see Figure 3.4). Supported by: Hannan & Freeman (1984).

Information Capability Indispensability: A mutual property of the Organization and Clients classes, defined as the degree to which an information capability is vital for providing a service or product to clients. Proxies: Self-declared, reflective items by the organization in the form of the following questions: Is the information capability a client-related capability? (True|False); Does the information capability deal with information on clients, products, services, and operations? (True|False). Example from cases: SkyTrain's system management centre (SMC) allows operators to monitor the speed of trains and change the schedules. However, another system, dubbed the Vehicle Control Centre (VCC), checks schedule changes and speed commands for safety and then enforces them. The Vehicle On-Board Controller (VOBC), which is in charge of driving a train, receives the commands and reports train status to VCC. If service regulation is lost due to an SMC failure, as initially happened on July 21st, the trains can still run automatically since they depend on VCC, not SMC. Therefore, the optimization capability provided by SMC is not vital. Supported by: -.

Client Debarment: An intrinsic property of the Clients class, defined as the degree to which clients feel they are being precluded from enjoying certain possessions, rights, or practices they primarily require. Proxies: Product/Service Acquisition Delay; Number of Debarred Clients. Example from cases: i) Some of Saudi Aramco's customers did not receive oil for 10 days. ii) Sutter Health Hospital's patients did not receive their medicines for a day. iii) 77 million registered customers of Sony were not able to play online games. Supported by: Grover et al. (1996); Kaplan & Norton (2004); Tan (2011); Bitner et al. (2000); McCollough et al. (2000).
External Matching Capability: A mutual property of the Organization and Industry classes, defined as the extent to which a capability exists in the environment that can match an organization's client-related business capability. Proxies: Industry's Capacity Surplus; Industry's Agility (Response Time to Business Incompetency). Example from cases: In the case of SkyTrain, the surplus capacity of the bus system was anywhere from 1,000 passengers during peak hours to 2,500 passengers during off-peak hours. This was no match for the 400,000 daily ridership demand for the SkyTrain system. The response time, i.e., getting those buses to the area of SkyTrain service disruption, was around an hour during rush-hour periods. Supported by: -.

Client Limbo: An intrinsic property of the Clients class, defined as the degree to which clients feel uncertain about when they will completely acquire the service or product. Proxies: Limbo Duration; Number of Clients in Limbo. Example from cases: In the case of BC Hydro, the estimated time of power return was tweeted after 10 hours. However, it is not clear what percentage of the affected 700 thousand customers became aware of the estimate, as the majority of phone batteries must have been dead after the ten-hour power outage. Supported by: -.

Client Dissatisfaction: A mutual property of the Organization and Clients classes, defined as the degree to which clients are dissatisfied with the organization. Proxies: Self-declared, reflective items by clients; Number of Dissatisfied Clients. Example from cases: In the studied cases, clients described their conditions as frustrating, poor, inconvenient, terrible, infuriating, stressful, and so forth. Supported by: Kim et al. (2009); Fornell et al. (2010); Bhattacherjee (2001); Spreng (1996).

Incompetency Distinctiveness: A mutual property of the Organization and Reference Set (e.g., Industry) classes, defined as the degree to which the business incompetency is unique in the reference set (e.g., among competitors).
Proxies: Percentage of Organizations in the Reference Set whose corresponding Business Capability is untouched. Example from cases: After BMW's incapability to deliver parts for repairing cars, a customer stated: "I have always been a die-hard BMW driver and am currently driving my seventh BMW, but will consider which brand I'll buy the next time." Repair service is common to all car manufacturers. However, we did not come across this level of client dissatisfaction and customer defection risk in the case of ConnectedDrive, BMW's service for remote access to vehicles; in that case, this percentage was close to zero. Supported by: Clemons & Row (1991); West & DeCastro (2001); Barney (1991, 1995); Wade & Hulland (2004); Kraaijenbrink et al. (2010); Bharadwaj (2000).

Public Infamy: A mutual property of the Organization and Public classes, defined as the degree to which the public mind holds negative judgements about an organization. Proxies: Number of Negative Comments on Online Social Networks; Alexa-based Public Infamy; Alexa-based Client-Base Awareness. Example from cases: i) The number of negative tweets about United quadrupled during the business incompetency. ii) The case of the UK Border Force's passport scanning outage was reported by bbc.com with PI4, theguardian.com with PI4, huffingtonpost.co.uk with PI2, newsweek.com with PI0, and so forth. Therefore, the incident caused a PI4 Public Infamy. PI is a scale we developed to allow researchers and practitioners to measure and communicate the magnitude of reputation damage; see Appendix F for further details. iii) In the cases of UoX, BMW's ConnectedDrive, BMW's logistics system, and Comair, the client bases were informed negligibly, considerably, significantly, and completely, respectively. Supported by: Gray & Balmer (1998); (2014); Pavlou & Fygenson (2006); Carson et al. (2006); Pennington et al. (2003); Hoxmeier (2000); Chan et al. (1997); Dümke (2002); Ghose (2009); Ba & Pavlou (2002); Rice (2012).
Public Psychological Proximity (PPP) of the IT Unavailability: A mutual property of the Organization, Public, and IT Resource classes, defined as the extent to which the IT unavailability is psychologically close to the public mind. Proxies: a formative construct, i.e., a subtotal of the Google News-based PPP of the IT resource, the Google News-based PPP of IT unavailability in the industry, and the Google News-based PPP of the organization. Example from cases: i) Within the month before the occurrence of the outages at Amazon, American Airlines, Sutter Hospital, and SkyTrain, there were 1,450,000, 804,000, 50, and one news article(s) related to the Amazon website, the iPad, Electronic Health Record systems, and SELTRAC, respectively. ii) The Delta outage occurred 7 to 9 months after two major IT outages in the industry: United Airlines and American Airlines. iii) The Delta outage occurred a week after the media covered a story of a fight between two Delta flight attendants that had led to an unscheduled landing. Supported by: Trope & Liberman (2010).

Financial Inefficiency: An intrinsic property of the Organization class, defined as the extent to which an organization fails to achieve its pre-specified financial goals. Proxies: The Amount of Dollars Lost; The Percentage of a Past Acquired Profit or Revenue that was Lost; The Percentage of Decline in Share Prices. Example from cases: i) The failure of Comair's crew-scheduling system in 2004 caused a direct $20m financial loss. ii) Comair lost nearly as much as the firm's entire operating profit for the previous quarter. iii) Delta's share price dropped more than 3 percent the day after the business incompetency. Supported by: Barki et al. (1993).

Strategic Risk: An intrinsic property of the Organization class, defined as the probability of experiencing strategic consequences, that is, losses in financial health, competitive position, and market share.
Proxies: Strategic Risk is materialized in the form of Public Infamy and Financial Inefficiency, the states in which organizations are in serious danger of enduring strategic impacts, i.e., their survival or long-term success is threatened. Example from cases: The cases of Netflix's AWS outage, UoX's LMS hiccups, and, roughly speaking, BMW's ConnectedDrive are examples of IT unavailability incidents that did not become a Strategic Risk. The remaining cases, however, experienced Strategic Risk by and large. Supported by: Tanriverdi et al. (2007); Drew et al. (2006); Ruefli et al. (1999); Collins & Ruefli (1992).

The domino view also includes four types of ontology-compatible relationships (Weber, 2012) among the aforementioned constructs. In addition to the simple causal relationship, three types of moderators are applied, because simple moderation in variance-based theories does not allow us to fully explain the influence of these contextual (peripheral) constructs (El Sawy et al., 2010). The first type of contextual effect is the intensifier, which strengthens the relationship between two constructs. Intensifiers are not necessary causes of the consequences, but their presence reinforces the effects of necessary causes. Intensifiers are illustrated as double-line arrows in the theoretical model (Figure 3.2). The second type of contextual effect is multiple sufficient causes (MSCs), which means that either a core factor or a contextual factor suffices to produce the given effect. MSC is illustrated as a circle containing a capital O in the theoretical model. The third type of contextual effect is multiple necessary causes (MNCs), which means that both a core factor and a contextual factor are necessary to produce the given effect. MNC is illustrated as a circle with a plus sign in the theoretical model.
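The three contextual effect types can be read as simple cause-combination operators. As a minimal illustrative sketch (ours, not part of the dissertation; all function names are hypothetical), treating effect strengths as numbers in [0, 1]:

```python
# Illustrative sketch only (not from the dissertation): the three contextual
# effect types of the domino view expressed as combinators over effect
# strengths in [0, 1]. All function names are hypothetical.

def intensifier(core: float, contextual: float, gain: float = 1.0) -> float:
    """An intensifier is not a necessary cause: when the contextual factor
    is absent (0.0), the core effect passes through unchanged; when present,
    it reinforces the core effect."""
    return min(1.0, core * (1.0 + gain * contextual))

def msc(core: float, contextual: float) -> float:
    """Multiple sufficient causes ('O' in Figure 3.2): either cause alone
    suffices -- an OR-like (probabilistic union) combination."""
    return 1.0 - (1.0 - core) * (1.0 - contextual)

def mnc(core: float, contextual: float) -> float:
    """Multiple necessary causes ('+' in Figure 3.2): both causes are
    required -- an AND-like (product) combination."""
    return core * contextual

# An absent contextual factor leaves an intensified or MSC effect at the
# core level, but blocks an MNC effect entirely.
assert intensifier(0.6, 0.0) == 0.6
assert abs(msc(0.6, 0.0) - 0.6) < 1e-12
assert mnc(0.6, 0.0) == 0.0
```

The point of the sketch is the asymmetry the text describes: an intensifier or MSC contextual factor can only add to the core cause's effect, whereas an MNC contextual factor can veto it entirely.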
These contextual effects are in alignment with the causal schema suggested by Kelley (1973) and are congruent with the reasons behind the suggestion to apply configurational theories to IS strategy research (El Sawy et al., 2010).

3.4.1.1 Strengths of the Domino View

The domino causal pattern enables us to think about the far reach of events (Grotzer et al., 2002), in this case, the strategic risk of IT unavailability. The domino view enjoys high explanatory power; it can explain many organizational situations and reactions. For instance, it explains why President Obama's rollout speech for his healthcare initiative should not have contained "you can compare insurance plans, side by side, the same way you'd shop for a plane ticket on Kayak or a TV on Amazon." This sentence provided an unrealistic reference set for the newly launched website, highlighted the comparative deficiencies, and thus significantly increased the Incompetency Distinctiveness. The domino view also reveals that Client Limbo can be as detrimental as Client Debarment during an IT unavailability incident, and that high IT alignment, like that observed in enterprise resource planning (ERP) systems, can be dangerous.

3.4.1.2 Weaknesses of the Domino View

Despite its usefulness in explaining many cases, the domino view cannot clearly explain the interplay between business and IT in some cases. For instance, it cannot explain how the unavailability of SkyTrain's vehicle control centre (VCC) system reverberated through the business domain and came back to the IT domain in the shape of the unavailability of SkyTrain's website. The domino view also cannot clearly explain the case of JPMorgan Chase, where a Type II IT unavailability (which happens when the demanded capacity for the IT resource surpasses its actual capacity) occurred after a Type I IT unavailability (which happens when the actual capacity of an IT resource becomes zero).
It is also unable to clearly explain how a low level of de-icing fluid, an undoubtedly non-IT-related matter, created a huge IT unavailability incident that would ground Comair's flights for several days and force Comair's IT director and president to resign. These observations are a testament to the existence of a reverse causality between Business Incompetency and IT Unavailability. Unfortunately, our bias, rooted in the way the research question was positioned, did not allow us to see this relationship initially.

3.4.2 Dynamic View of IT Unavailability

The domino view's explanatory deficiencies forced us to develop a dynamic view of IT unavailability (see Figure 3.3) using a cyclical causal pattern in which causal loops and mutual causality can be explained (Grotzer et al., 2002). Our revisit of the memos as well as the raw data, especially the narration used by some expert informants, led to the conclusion that the supply of and demand for IT resources, information capabilities, and business capabilities can create a clearer explanation of the mutual causality between IT Unavailability and Business Incompetency.

[Figure 3.3 Dynamic view of IT unavailability]

Therefore, instead of using the constructs comprising the domino view (i.e., column 1 in Table 3.2), the proxies of the constructs (i.e., column 4 in Table 3.2) were used to build the dynamic view. In particular, the model is built on the concept of capacity, defined as the actual or required level of a business capability, e.g., 1,100 flights per day (see Figure F.2 in Appendix F). In a nutshell, the dynamic view is the domino view replaced by its proxies, plus a chain of demanded capacity from client to IT.[11] The dynamic view demonstrates that there exist four amplifying causal loops (Glaser, 2005) in IT unavailability incidents.
Therefore, an increase (a decrease) in a variable in the loop propagates through the loop and returns to the same variable in the next period, reinforcing the initial increase (decrease). A small growth in IT response time, for instance, can build upon itself, becoming larger and larger to the point that it overwhelms the IT system (a process dubbed the snowball effect). Since the causal loops are connected, the activation of one of them can lead to the activation of the others. Therefore, even if the variables of a loop return to their normal condition after an incident, other loops may remain somewhat activated.[12] Figure D.1 provides an illustration of these causal loops in the case of Comair.

[11] Regarding the dynamic view, we did not provide a list of propositions and support from literature and data similar to what Appendix F did for the domino view. First, what Appendix F provides is a norm for variance-based and configurational theories, not for papers on system dynamics; in fact, the comparatively large number of constructs and relationships, along with the page limits of journals, does not allow for such a thick description. Second, since we used the proxies (measures) of the constructs in the domino view to develop the dynamic view, we believe the supporting literature and data are already available in Appendix F. Nevertheless, Appendix D provides illustrations of several cases to show that our data support the dynamic view.

[12] According to Sterman (2000), the dynamic view does not describe what actually happens, but explains what would happen if a variable were to change. For instance, an increase in a cause (e.g., the Clients' Service/Product Acquisition Delay) does not necessarily mean the effect (e.g., the Backlog of Clients) will actually increase. First, a variable (e.g., the Backlog of Clients) often has more than one cause; some are shown in the model (e.g., External Capacity) and some are not, as they are idiosyncratic to one context (see some examples in Appendix B). Second, the dynamic view does not distinguish between stocks (e.g., the number of clients in backlog) and flows (e.g., the rate of backlog change). For instance, while an increase in the Clients' Acquisition Delay results in growth of the Backlog of Clients (i.e., the stock), a drop in the Clients' Acquisition Delay does not decrease the Backlog of Clients (i.e., the stock) but reduces the outgoing flow from it. In other words, fewer clients will be served and more clients will remain in waiting mode when the Clients' Acquisition Delay decreases. In a nutshell, an increase in acquisition delay adds to the stock, while a decrease results in a decrease in the flow, not the stock.

3.4.2.1 Causal Loop of IT Unavailability

The first causal loop includes IT Unavailability, represented by its proxy, IT Response Time. The relationships among IT Response Time, Information Operation Delay, Business Operation Delay, and Client Service/Product Acquisition Delay are similar to those of their corresponding constructs in the domino view. Similarly, the relationship between External Capacity and the Backlog of Clients, as well as the relationship between Backup Information Capacity and Information Operation Delay, are similar to their corresponding contextual factors in the domino view. The effect of Actual IT Capacity and Demanded IT Capacity on IT Response Time, as well as the effect of Client Acquisition Delay on the size of the Backlog of Clients in the next period, are common knowledge supported by our dataset (see Appendix B). The direct negative effect of Demanded IT Capacity on Actual IT Capacity is observed in some cases, like Comair. These constructs constitute a reinforcing causal loop.

The case of JPMorgan is a great example of the activation of the first loop. The problem with the authentication database led to the unavailability of the online banking system (the response time became infinite). Therefore, JPMorgan experienced a Type I IT unavailability (i.e., an unavailability that is rooted in losing actual IT capacity). Since many clients had no access to automated teller machines or teller-based banking (low Backup Capacity), transaction processing, which is both an information capability and a client-related business capability, was delayed. The resultant service delay led to a growing list of clients who had no accounts in other banks (no External Capability). This in turn increased the demand for transaction processing in the next period and thus the demand for the online banking system. Although the issue was resolved after a day, the high demand for the online banking system made the IT response time so high that the system was considered unavailable by many clients, a Type II IT unavailability (i.e., an unavailability that is rooted in excessive demanded IT capacity). See Figure D.2 for an illustrative explanation of the JPMorgan case.

3.4.2.2 Causal Loops of Operational Inertia

The second and third causal loops open the black box of Operational Inertia in the domino view. They relate, respectively, to the direct and indirect mutual causality between Business Operation Delay and Complementary Manual/Physical Capacity. A client-related business capability like flight operation can be dependent on several complementary physical and manual capabilities on top of the problematic information capability (e.g., crew scheduling). A manual capability (e.g., flying a plane) is an information capability that is completely supported by staff, who provide the processing power, and by manual business processes. A physical capability (e.g., luggage transportation) is the ability to produce, move, upgrade, or decommission a tangible item, such as a product or individual. See Appendix F for further details on the types of business capability.
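The capacity-and-backlog dynamics described above can be sketched in a few lines of discrete-time simulation. The sketch below is our own illustration of the first causal loop, not a model from the dissertation, and all parameter values are hypothetical: a two-period Type I outage builds a client backlog whose retry demand keeps the recovered system in a Type II unavailability for several further periods.

```python
# Illustrative sketch (ours, not from the dissertation) of causal loop 1:
# a Type I outage (actual capacity = 0) builds a backlog of unserved
# clients, and the backlog's re-demand can keep the recovered system in a
# Type II unavailability (demand > capacity). Parameters are hypothetical.

def simulate(periods=10, arrivals=100, capacity=120, outage=range(2, 4)):
    backlog = 0
    history = []
    for t in range(periods):
        actual = 0 if t in outage else capacity   # Type I: capacity lost
        demanded = arrivals + backlog             # backlog re-demands service
        served = min(actual, demanded)
        backlog = demanded - served               # unserved clients wait
        # Type II: demand exceeds a nonzero actual capacity
        type2 = actual > 0 and demanded > actual
        history.append((t, demanded, served, backlog, type2))
    return history

for t, demanded, served, backlog, type2 in simulate():
    print(f"t={t}: demanded={demanded:4d} served={served:3d} "
          f"backlog={backlog:3d} type_II={type2}")
```

In this run the backlog drains only at the rate of the capacity surplus (20 clients per period), so demand still exceeds actual capacity long after the outage ends, mirroring the JPMorgan pattern in which a Type II unavailability follows a Type I unavailability.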
This mutual causality between Operation Delay and Complementary Manual/Physical Capacity can be observed vividly in the case of United Airlines. A reservation system outage caused a flight operation delay. Consequently, some pilots and flight attendants reached the maximum hours they could work under regulation. The airline therefore lost Manual/Physical Capacity, and new crews had to be called in. Moreover, the airline lost its assigned gates and had to wait for available gates to load and unload passengers. The Operation Delay also affected the Manual/Physical Capacity indirectly: the long delay drove frustrated passengers to abandon the lines in which they were trapped. This reduced the capacity to assemble passengers on planes and thus further diminished the airline's capacity to operate flights. The loss of these Manual/Physical Capacities in turn led to longer flight Operation Delays. Consequently, although the IT unavailability was resolved after two hours, flight capacity did not return to a normal level until the next day because of the activation of loops 2 and 3. The same pattern was observed in the case of Comair: although the crew-scheduling system was back after a day, Comair's flight capacity did not return to a normal level until three days later. Figure 3.4 demonstrates that the same loops were activated in the case of Comair. See Figure D.1 and Figure D.3 for illustrative explanations of the Comair and United Airlines cases.

Figure 3.4 Result of the activation of loops 2 and 3 in the case of Comair (although the unavailability of the crew-scheduling system was resolved after a day, business capacity, measured by the number of flights removed, did not completely return to normal until three days later)

3.4.2.3 Causal Loop of Client Limbo

The fourth causal loop deals with Client Limbo. Clients who are uncertain when they will be served continuously demand status updates.
This increases the demand for customer relationship management (CRM) systems, which can create a Type II unavailability in those systems. The SkyTrain system is a driverless system in which on-board computers drive the trains (a digital information capability). When the brain of the system, called the VCC, experienced a Type I IT Unavailability, it stopped responding to the on-board computers, i.e., the VCC's users. Hence, the computers on the trains stopped driving them as a safety measure. This delayed driving trains to stations (Information Operation Delay), since SkyTrain did not have enough staff to drive all trains (low Backup Capacity). As a result, trips were delayed (Business Operation Delay). That left a huge number of passengers, trapped in trains or waiting in stations, wondering when they would be served (Client Limbo). This increased the demand for trip updates (the second information capability). Since the audio quality of the public address (PA) system (Backup Capacity) was poor, clients resorted to the customer communication website. That increased the demand for the website, which in turn led to elongated response times and thus a Type II IT Unavailability of the website. See Figure D.4 for an illustrative explanation of the SkyTrain case.

3.5 Discussion: Theory Validation

In this section, we discuss how the explanatory power of our theory was validated. We also report a case that does not fit the theory as well as a rival explanation to our theory.

3.5.1 Explanatory Validation

We leveraged TCs that are unrelated to Weber's (2012) ontology for theories (e.g., the strategy family, the means-end family) to validate the explanatory power of our theory. For instance, some of the affected organizations resorted to tactics to mitigate the consequences of IT unavailability, while others inadvertently performed actions that worsened the situation.
Furthermore, time triggers like Christmas or approaching deadlines started IT unavailability or added fuel to an existing fire. If our theory could not clearly explain how those reactions or events lowered or raised the intensity of the consequences, its explanatory power would not be satisfactory, and we would look for missing relationships or constructs to update the theory. After several iterations of explanatory validation, we are convinced that our final theory has high explanatory power. In the following, we explain how the dynamic view accounts for the triggers and reactions found in our data.

3.5.1.1 Explaining Triggers of IT Unavailability

Four types of events can activate the loops and cause IT unavailability and strategic risk. If they happen during an IT unavailability, they reinforce the loops and worsen the condition.

The first type of trigger is known in the literature: events that directly reduce Actual IT Capacity to the point that it becomes lower than Demanded IT Capacity. An IT infrastructure outage (e.g., hardware failure, database outage, network outage, cloud outage), human error, deliberate shutdown due to unauthorized-access concerns, natural disasters, software bugs, and sabotage attacks are some examples. This type of trigger initially activates the first loop.

The second type of trigger causes Demanded IT Capacity to rise above Actual IT Capacity. Distributed denial of service (DDoS) attacks and a huge number of curious users are examples. President Obama's invitation to the public to take a look at the Obamacare website falls under this type: it resulted in a substantial number of unwanted users (see Figure D.5). The creation of unwanted, curious users is new to the literature. This type of trigger initially activates the first loop.

The third type of trigger affects Manual/Physical Capacity.
Job actions, epidemic disease, and natural disasters are examples of this type of trigger. The Comair case began when an unprecedented winter storm froze jet tires to the ground and there was not enough de-icing fluid to de-ice the planes. This resulted in the cancellation of almost all flights and necessitated an unprecedented level of crew re-scheduling. The high demand caused the shutdown of the crew-scheduling system due to a latent limitation in the system. The shutdown, in turn, resulted in the complete cancellation of flights, as crews did not have their new schedules. This type of trigger initially activates the second loop.

The fourth type of trigger increases the number of clients drastically. Marketing campaigns, special seasons (e.g., Christmas, Cyber Monday), approaching deadlines, and natural disasters are examples. Target's marketing campaign in collaboration with Missoni, a fashion design company, created an unprecedented demand that crashed Target's website. A power outage caused by a severe storm left 700,000 clients of BC Hydro in darkness and created a massive demand for information about power restoration; the huge demand overwhelmed BC Hydro's website. These triggers initially activate the first loop.

3.5.1.2 Explaining Reactions of Organizations

The main solutions applied by IT departments are IT recovery and capacity enhancement, which aim at restoring or increasing Actual IT Capacity. IT departments can also subscribe to Disaster Recovery as a Service (DRaaS) as well as prepare legacy systems or other existing systems to deliver the critical functionalities of the affected systems should they become unavailable. During the LMS outage, UoX used an existing library system to enable instructors to share their files with students. American Airlines used the Acrobat Reader app to transfer flight maps to pilots when their iPad app crashed.
This type of reaction aims at developing a Backup Capacity to reduce the Information Operation Delay.

In the meantime, business managers resort to three types of reactions to prevent the snowball from becoming an avalanche. The first type of reaction aims at reducing the Backlog of Clients by temporarily eliminating the switching cost, momentarily outsourcing the service to competitors, and taking similar measures. United Airlines, for instance, stopped charging clients the fee for changing flights. The second type of reaction is intended to reduce the Demanded Business Capacity through queuing client requests, extending deadlines, and similar actions. For example, the Obama administration extended the deadline for buying a health insurance plan as a result of the issues at the Obamacare website. Moreover, the software development team added a queuing feature to the site that e-mailed users with tips about when to return at a less congested time and a link that took them to the front of the line. The third type of reaction aims to increase the Manual/Physical Capacity by automating manual processes, calling on all staff, temporary staffing, and so forth. In the case of SkyTrain, the audit report suggested "upgrading the guide way intrusion system" and "installing CCTV [closed-circuit television] cameras to increase the visibility of front-line staff." Consequently, whether people are clear of the guide ways can be determined more quickly than with the current manual process should a system shutdown occur.

Appendix G lists all actions, events, and states identified in our data or in the availability literature and identifies at least one corresponding construct that explains the effect of each. For instance, IT resource complexity (Westerman & Hunter, 2007) decreases IT Detachability (Backup Capacity) and increases the chance of IT unavailability (a decrease in Actual IT Capacity).
3.5.2 Non-Fitting Data

As our positivist GT approach suggests, it is necessary to report the data that do not fit the theory. We came across one such case that our generated model could not explain. Semaphore, an information system used by the United Kingdom's National Border Targeting Centre (NBTC) to flag potentially suspect passengers and screen for terrorists and criminals, crashed multiple times in 2015. The safety of the NBTC's ultimate clients (i.e., British citizens) was compromised, since "terrorists or criminals could still have boarded aircraft without being detected by British security services" (Telegraph, 2016). However, neither clients nor their representatives became aware of the situation. They simply were not informed, and no flights were grounded either. Therefore, although Business Incompetency occurred, no Client (citizen) Dissatisfaction was reported. In fact, the public became aware of the issue only a year later, when the British Airways technicians who had helped restart the system were fired because their jobs were outsourced to India. In this case, it was the Public Infamy that preceded Client Dissatisfaction. It seems that our theoretical model cannot explain the loss of business capabilities that provide services invisible to clients. Examples are capabilities that deal with privacy protection, intrusion detection, and safety risk reduction. More occurrences of such cases would allow future research to extend our model to explain them.

3.5.3 Rival Explanation

In our data, we came across a rival explanation offered by a security expert: "[If] I were a customer, and this [outage] happened again, I would definitely be worried about the security of my bank and most likely change banks." Putting this in a theoretical context, the security expert claimed that IT Unavailability -> Perceived Access Risk -> Client Dissatisfaction should be added to the theoretical model.
We acknowledge that some organizations in our sample took action to refute rumors of a cyber-attack. However, we did not come across a report stating that a significant number of clients were seriously dissatisfied or defected due to perceived access risk. Therefore, we did not add these relationships to the theoretical model. Rejecting this hypothesis conclusively, however, would require additional data collection and analysis of a representative sample of clients who experienced IT unavailability. We therefore ask readers to be cognizant of the possibility of a strategic consequence due to perceived access risk in the context to which they apply our generated theory.

3.6 Contributions to the Research

Our main contribution is the systems theory illustrated in Figure 3.3. There exist three main types of theories: variance theory, process theory, and systems theory (Burton-Jones et al., 2015). In a systems theory, the primary emphasis is on the overall system and the interaction of its variables, as opposed to co-variation among constructs or sequences of events. The output of this study is a new system (dynamics) theory (Sterman, 2000) rather than a parsimonious variance or process theory composed of only new relationships. Some proposed relationships are new and some are already stated in the literature. However, the entire system is new and theoretically complete; that is, removing a relationship (whether new or old) means our theory cannot clearly explain at least one case of IT unavailability in our dataset. In this section, we explain how our theory contributes to the extant literature on the impact analysis of IT unavailability to highlight some other novel contributions.

3.6.1 A More Comprehensive Conceptualization of IT Unavailability

An accurate impact analysis of IT unavailability begins with an accurate understanding of its severity. This in turn hinges on having an accurate perception of IT unavailability.
The effectiveness of the dynamic view in explaining various scenarios of IT unavailability promotes viewing IT unavailability as a misalignment between the Actual IT Capacity provided by the IT department and the Demanded IT Capacity of the business (see Figure 3.5). This motivated us to conceptualize IT unavailability using its proxy, IT Capacity Deficit. In this light, IT Unavailability is defined as the distribution of IT capacity deficit over time.

Figure 3.5 IT Unavailability is conceptualized as a distribution of IT capacity deficit over time

We believe this conceptualization is more accurate and comprehensive than downtime. As mentioned in the preliminary literature review section, the extant literature suggests a downtime conceptualization that considers IT availability dichotomous at a given time, i.e., either available or unavailable. However, our data revealed that unavailability is not dichotomous, as the concept of downtime suggests, but a continuum of capacity deficit ranging between 0% and 100%. An IT resource can be available to some users and unavailable to others at a given time. For instance, a web server whose capacity is 240 concurrent users experiences partial unavailability (a 20% capacity deficit) with 300 concurrent users.

Our suggested conceptualization covers the cases of partial unavailability, i.e., Type II IT Unavailability, in addition to the traditional downtime-based view of IT unavailability (i.e., Type I IT Unavailability and its proxies, frequency and duration). Therefore, researchers, managers, and practitioners will not miss incidents because of a narrow definition. This is especially critical in IT contracts with cloud service providers. Moreover, the new conceptualization points out that the business can inadvertently be responsible for IT unavailability by driving excessive demand for IT.
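Under this conceptualization, an availability report becomes a time series of capacity deficits rather than a binary up/down log. The following minimal sketch illustrates the idea; the 240-user server and the 300-user peak come from the example above, while the rest of the hourly demand trace is invented:

```python
# IT unavailability as a distribution of capacity deficit over time.
# A 240-user server facing 300 concurrent users -> (300-240)/300 = 20% deficit.

def capacity_deficit(demanded, actual):
    """Fraction of demanded capacity left unserved (0.0 = fully available)."""
    if demanded <= 0:
        return 0.0
    return max(0.0, demanded - actual) / demanded

ACTUAL_CAPACITY = 240                      # concurrent users the server handles
hourly_demand = [120, 200, 300, 260, 180]  # hypothetical hourly demand trace

deficits = [capacity_deficit(d, ACTUAL_CAPACITY) for d in hourly_demand]
print(deficits)  # the 300-user hour yields 0.2: 20% of users find the site down
```

Note that the downtime view would record this server as "up" for all five hours, whereas the deficit series makes the partial unavailability in the third and fourth hours visible.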
3.6.2 Capacity Misalignment: A New Type of Operational Misalignment

This dissertation recommends viewing IT unavailability incidents through the lens of capacity misalignment as introduced in this study. Capacity misalignment is defined as the mismatch between the capacity demanded by the business and the actual capacity provided by the IT department. As Bergeron et al. (2001, 2004) suggested, misalignment can be measured by a deviation score or residual, which in these cases maps to the capacity deficit. Two types of IT-business capacity misalignment are identified in this study. First, IT resource-business capacity misalignment is defined as the mismatch between the capacity of an IT resource demanded by the business and the actual capacity the IT resource provides. When 1,000 concurrent customers visit an e-commerce website that can handle 700 concurrent users, there exists a mismatch between the actual IT capacity and the demanded IT capacity of the website. Second, information-business capacity misalignment is defined as the mismatch between the capacity for an information capability demanded by the business and the actual information capacity the IT department provides. For instance, when an e-commerce website can process at most 50 orders per minute, there exists a mismatch if Black Friday increases the business demand to 80 orders per minute. Hence, this dissertation introduces capacity misalignment as a new type of operational misalignment and identifies two of its subtypes. The dynamic view is built on these two notions of capacity misalignment, and its success in explaining almost all cases of strategic IT unavailability demonstrates the explanatory power of the notion and its subtypes.

3.6.3 Efficient Estimation of the Strategic Impact of IT Unavailability

As mentioned in the preliminary literature review section, organizations use reputation damage as a measure to evaluate the impact of IT unavailability.
This study has identified a new way to measure reputation damage (i.e., Public Infamy) that is based on the news articles addressing an IT unavailability incident. The popularity of the news websites that publish those articles can be an efficient estimator of the damage to the image of the affected organization. There are many ways to estimate this popularity. We used the global and national rankings of the publishing sources (e.g., CNN, BBC) on Alexa.com to develop a scale that measures the intensity of the damage. The Public Infamy scale includes five levels, PI0 to PI4, developed to resemble the categories of a hurricane scale. The strongest category, PI4, damages the organization's image globally, while the weakest category, PI0, causes a bad reputation in an "isolated" community. For instance, a website like BBC or CNN that is ranked within the top 200 in the world can cause PI4 damage. The details of computing the PI level for an incident can be found in Appendix F.

We believe this measure can quickly yet efficiently evaluate the strategic impact of IT unavailability. Strategic Risk materializes in two forms in IT unavailability incidents: Financial Inefficiency and Public Infamy. While an accurate evaluation of Financial Inefficiency (including revenue loss, financial cost, litigation cost, and labor productivity loss) often becomes available only after a long time, our data suggest that Public Infamy usually appears very quickly. Moreover, we believe the Public Infamy scale is an efficient estimator of Strategic Risk. Financial Inefficiency can be attributed to factors other than the IT unavailability incident, and it can be alleviated by heavy advertising campaigns after the incident. Our measure of Public Infamy, in contrast, is not affected by such confounding factors, since the method uses only news articles published on the incident.
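As a rough sketch of how a PI level might be computed, the snippet below maps a publishing outlet's global rank to a PI level. Only the top-200 cut-off for PI4 is grounded in the text; the remaining thresholds are invented placeholders, and a faithful implementation would take the actual cut-offs (and the national-ranking rules) from Appendix F.

```python
# Sketch of mapping an Alexa-style global rank to a Public Infamy (PI) level.
# Only the top-200 threshold for PI4 comes from the text; the other cut-offs
# below are placeholders invented for illustration.

PI_THRESHOLDS = [       # (maximum global rank, PI level)
    (200, "PI4"),       # global image damage (e.g., BBC, CNN)
    (5_000, "PI3"),     # placeholder
    (50_000, "PI2"),    # placeholder
    (500_000, "PI1"),   # placeholder
]

def public_infamy(global_rank):
    """Return the PI level for a publishing outlet; PI0 = isolated community."""
    for max_rank, level in PI_THRESHOLDS:
        if global_rank <= max_rank:
            return level
    return "PI0"

print(public_infamy(150))        # a top-200 outlet such as CNN -> PI4
print(public_infamy(2_000_000))  # an isolated community site -> PI0
```

For a real incident, the input would be the ranks of all outlets that covered it, with the incident's PI level taken from the strongest outlet.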
Therefore, the Alexa-based Public Infamy scale and the Alexa-based Client Base Awareness (see Table 3.1) can isolate the effect of one incident on the reputation of an organization and remove the effect of confounding factors. Furthermore, current text-analytic technologies allow the computation to be completely automatic. These two characteristics are especially interesting to econometricians, since they make these proxies efficient estimators of the Strategic Risk of operational incidents. Therefore, we suggest that researchers and practitioners apply our Public Infamy scale to evaluate the strategic risk of IT unavailability incidents.

3.6.4 Capability-based vs Process-based vs Resource-based Impact Analysis of IT Unavailability

Two competing approaches are currently applied in practice to estimate the monetary impact of IT unavailability: impact by lost user productivity and impact by degraded business processes (Liu et al., 2010). The former is calculated based on the sum total of users' time affected during the unavailability period; the latter is calculated based on the monetary loss associated with the disruption of business processes.

The extant literature seems to encourage a process-based view of IT unavailability. van Ginkel (2009) believes that what really matters to an organization is that the business processes are up and running; therefore, if an IT unavailability incident does not lead to the discontinuity of business processes, it does not have important consequences. Similarly, Liu et al. (2010) believe that a process approach provides a better reflection of business impact, since "resources [such as users' time] are not valuable in and of themselves, but they are valuable because they allow firms to perform activities [i.e. business processes]" (Porter, 1991, p. 108). Zambon et al.
(2007, 2011) also include the cost of business process discontinuity in their time dependency model. Our study likewise argues against applying the user productivity approach, especially at the strategic level. The approach is an instance of a resource-based view of the impact of IT unavailability, which is rooted in resource-based theory (Barney, 1991, 1995; Wade & Hulland, 2004; Kraaijenbrink et al., 2010; Bharadwaj, 2000). In addition to the aforementioned reasons, our data revealed that almost none of the lost resources that caused strategic consequences were rare, inimitable, or non-substitutable by competitors. In fact, what was rare was the incapability of the organization. Hence, a resource-based view of IT unavailability does not have the explanatory power to be used in impact analysis.

However, we believe the process view does not have the explanatory precision either. Our data revealed a capability-based view of IT availability risk. According to Homann (2006), a business capability is what a business can do, regardless of how it organizes the activities and what resources it utilizes. A business process, however, defines how the business performs or implements a given capability. A business capability can be materialized by several business processes, e.g., self-service checkout and cashier-based checkout (see Appendix F for further details). We believe what really matters to organizations is not necessarily the business process but maintaining the level of the business capability, namely, the business capacity (e.g., 20 checkouts per minute).

A capability-based approach to the impact analysis of IT unavailability has two benefits. First, the notion of capability is easy for chief executive officers (CEOs) and board members as well as IT practitioners to understand. Thus, it can facilitate communication of IT risks between business and IT (Ulrich & Rosen, 2011).
Second, capabilities are more stable than concepts such as business processes and resource productivity. Thus, they yield a relatively longer-lasting model of the area of focus (Homann, 2006). This makes capability suitable to serve as a baseline for strategic management and impact analysis (Ulrich & Rosen, 2011). Some further contributions to other areas of research, including value generation by IT and service quality, are briefly explained in Appendix H.

3.7 Implications for Practice

The generated theories help organizational roles in charge of IT risk management (e.g., board members, top business managers, IT managers, and risk managers) think about and discuss the risks of IT unavailability more systematically. In particular, Figure 3.3 can help them choose or improvise risk mitigation tactics that prevent IT unavailability from becoming a strategic risk. The following provides practical suggestions beyond those implications.

3.7.1 Speak at an Abstract Level about IT Unavailability

The contemporary recommendation of digital marketing agencies is to give customers detailed and transparent information in the case of a business disruption (Regester & Larkin, 2008). While updating customers is often a good idea, the case of American Airlines demonstrates that, if done by ill-trained crews, it can be a serious issue. The pilot mentioned the "iPad" as the source of the problem, while he could have explained it as an issue with receiving navigation plans. The iPad's Public Psychological Distance was very close at the time. When a few customers reported what the pilot said on Twitter, the fact that the issue was related to an iPad prompted the traditional media to cover the story. Consequently, an insignificant delay caused huge reputation damage. Therefore, we suggest avoiding explicit naming of IT resources with a close Public Psychological Distance when updating clients.
3.7.2 Focus on Client Limbo as Much as Client Debarment

Our data revealed that Client Limbo can cause the same or even a higher level of dissatisfaction compared to Client Debarment. Informing clients of when they will acquire services or products during IT unavailability is a critical information capability. We suggest that organizations, especially airlines, enhance this capability using advanced computational technologies such as machine learning.

3.7.3 Reconsider the Definition of IT Unavailability in Reports and Contracts

This study recommends treating IT unavailability as a capacity deficit instead of downtime. Therefore, it is better to report IT unavailability in the format shown in Figure 3.5, which represents the demand and supply of IT capacity, rather than in the more traditional format shown in Figure 3.1, which only captures aggregate statistics over time. Such reporting can help managers proactively manage the demand and supply of capacity. If demand is projected to exceed supply, they can proactively consider possibilities to expand their actual capacity or prepare to mitigate the negative consequences before the unavailability materializes.

The capacity deficit measure can also be used in the SLAs of outsourcing contracts. If an IT service is contracted based on the prevailing downtime-based performance measures (e.g., two nines, three nines, or five nines), the outsourcer may not have enough incentive to address partial unavailability incidents in which only a fraction of users experiences unavailability. Therefore, we suggest that IT managers include their demanded IT capacity in IT contracts and hold software producers and cloud service providers (CSPs) liable if the actual capacity becomes lower than the demanded capacity.
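The contrast between the two contract metrics can be made concrete with a small computation. In the hypothetical hourly trace below (all numbers invented), a downtime-based measure reports 100% availability because the service never goes fully down, while the capacity-deficit measure exposes that roughly a fifth of the demand went unserved:

```python
# Downtime-based vs capacity-deficit-based view of the same service.
# Each entry is (demanded capacity, actual capacity) for one hour;
# the numbers are hypothetical.
hours = [(500, 600), (900, 600), (800, 600), (400, 600)]

# Downtime metric: an hour counts as "down" only if actual capacity is zero.
uptime = sum(1 for _, actual in hours if actual > 0) / len(hours)

# Capacity-deficit metric: share of total demanded capacity left unserved.
unserved = sum(max(0, demanded - actual) for demanded, actual in hours)
deficit = unserved / sum(demanded for demanded, _ in hours)

print(f"downtime-based availability: {uptime:.0%}")  # reads as a perfect 100%
print(f"overall capacity deficit: {deficit:.0%}")    # ~19% of demand unserved
```

Under a downtime-based SLA, the second and third hours incur no penalty even though 300 and 200 users' worth of demand went unserved; a deficit-based clause would capture exactly that shortfall.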
3.7.4 Integrate Business Capacity into IT Capacity Monitoring Tools

Given that less than 5% of enterprises currently use IT capacity monitoring tools (CMTs) (Head & Govekar, 2015), we suggest that CIOs equip themselves with IT CMTs to monitor their critical IT resources in real time. While monitoring capacity is a step in the right direction, we believe that the data on IT resource capacities should be linked to the data on business capacities to improve capacity planning and prevent IT unavailability. Unless the relationships between business capacity and IT capacity are well understood, IT managers cannot accurately predict the demand for IT capacity. Thus, we suggest that IT CMT vendors enable their tools to be integrated with business applications so that their clients can have a comprehensive view of IT capacity.

3.8 Limitations and Future Studies

This study has some limitations that are rooted in the nature of the GT methodology used. GT is an approach for building theories grounded in data, not for testing them in the manner positivists expect (Glaser & Strauss, 1967; Glaser, 1978, 2004, 2005). For instance, it is not possible to determine the strength of the relationships, nor is it possible to estimate the extent to which a construct's variance is explained by its antecedents. We used explanatory validation to remove relationships that do not help explain cases. Nonetheless, a future study is needed to test the generated theory against a proper sample using statistical analysis or fuzzy-set qualitative comparative analysis (Ragin et al., 2006).

In addition, our theory is based on a non-representative sample. Although we tried to control the effects of this issue with heterogeneity sampling, reliable empirical generalizability can be obtained only via random sampling. For instance, the organizations in the examined cases were mostly large organizations with well-known brand names.
Our cases with strategic consequences were all obtained through secondary sources such as news outlets. All cases occurred in countries with press freedom, which allows news outlets to generate Public Infamy. The lost business capabilities are all directly or indirectly related to clients. Although some business capabilities were related to regulators (e.g., Comair's crew scheduling), we have not been able to find capabilities that provide output to governments (e.g., tax calculation) or other institutional forces. The business capabilities were all related to providing services or products visible to clients; no business capability was related to providing pure IT services to other organizations (e.g., Amazon Web Services, Microsoft Azure). Therefore, researchers and practitioners should be cautious when they apply the generated theory in a new context. Nevertheless, our understanding suggests that the findings should hold for organizations of all sizes across other industries to the extent that they share business capabilities similar to the ones covered in this study.

Researchers conducting future studies should take into account that sampling bias is inherent to any study of strategic risk. The topic of strategic risk is highly sensitive and confidential. Organizations are usually not interested in disclosing risk-related information for fear of further reputation damage, of revealing their vulnerabilities to competitors or adversaries, or for similar reasons. It is quite likely that many organizations will decline to participate in a study requiring the revelation of strategic risks. Therefore, researchers should find ways to decrease sampling bias and its consequences.

An interesting research question that arises from this study is what characteristics make an information capability indispensable for the dependent business capabilities. For instance, order taking is an information capability that is critical to the order fulfillment capability.
However, bookkeeping of the payment, which is done at the end of the business process, is not critical to the availability of the order fulfillment capability. Unfortunately, our data have not shown any reliable patterns that can be used to develop a proxy to measure this construct. Therefore, we suggest that a future study open the black box of Information Capability Indispensability.

Finally, we suggest developing a design science model that completes the time dependency model suggested by Zambon et al. (2007). Artifact constructs from the generated theory, such as information capability, business capability, and client expectations, can be added to their model. IT practitioners and business managers can use such a model to discuss required IT capacities. The model can be applied to ensure alignment among IT capacity, business capacity, and customer values. IT managers can leverage these architectural models to justify investment in ensuring IT availability and to communicate the business risks that can arise from the unavailability of IT resources.

3.9 Conclusions

This study focused on explaining how IT unavailability becomes a strategic risk. We took a grounded theory approach that was customized to achieve a higher level of reproducibility and to develop a theory compatible with an objectivist-positivist ontology of theories (Weber, 2012). The result was a systems theory that encourages a dynamic view of IT unavailability as opposed to a domino view, i.e., a chain of causes and effects from IT unavailability to strategic risk. The theory uses the actual and demanded capacities for IT resources, information, and business to explicate how the interplay between IT unavailability and business incompetency leads to strategic risk.
Hence, the theory can explain why a Type-II IT Unavailability is highly probable after a Type-I IT Unavailability, how people in higher ranks of organizations, like President Obama, initiate or exacerbate IT unavailability, how an IT unavailability incident creates a business issue that turns into another IT unavailability incident, and why business incompetency lasts longer than IT unavailability in some cases. Finally, the dynamic view revealed the triggers of IT unavailability and recommended several workarounds to mitigate their adverse consequences. The study yielded several other findings as well. It recommended a capacity deficit-based conceptualization of IT unavailability to be applied in research as well as in managerial reports and IT contracts. Moreover, this study suggested an efficient instrument to measure the strategic risk of IT unavailability using the popularity of the websites that report the incident (provided by Alexa). Furthermore, the study revealed that a capability-based impact analysis of IT unavailability works better than a resource-based or process-based impact analysis.

Chapter 4: Enhancing Strategic IT Alignment through Common Language: How Can CIOs Speak in an Alignment-Friendly Manner?

IT is from Venus, non-IT is from Mars. –George Westerman

4.1 Introduction

Strategic information technology (IT)-business alignment is a state in which IT strategies and business strategies, as well as chief information officers (CIOs) and the top management team (TMT), are aligned (Reich & Benbasat, 1996, 2000). Despite all the efforts that have gone into studying the alignment of IT and business over the past 30 years, alignment is still the top concern of CIOs (Society for Information Management, 2016). CIOs have difficulty achieving alignment because alignment is a moving target (Chan & Reich, 2007; Venkatraman & Henderson, 1993; Tallon, 2007; Baker et al., 2011).
Rapid changes in business and technology are the root cause of this chronic misalignment. When business changes, information systems (ISs) must adapt to satisfy evolving organizational needs, goals, and strategies (Vessey & Ward, 2013). Alternatively, when new information technologies emerge, organizations must revise their business strategies and operations to seize the opportunity and stay competitive, or to avert the associated business risks (Venkatraman & Henderson, 1993). This highlights the criticality of those antecedents of alignment that can be improved in a timely manner. A powerful antecedent of strategic alignment that can be improved quickly is shared language, which is defined as the degree of commonality between the language and terminology used by the CIO and TMT in their communication (Karahanna & Preston, 2013; Preston & Karahanna, 2009). According to the extant literature, shared language enhances strategic alignment by improving the shared understanding of the role of IS (Preston & Karahanna, 2009; Armstrong & Sambamurthy, 1999; Chan, 2002; Reich & Benbasat, 1996, 2000; Rockart et al., 1996; Tan & Gallupe, 2006) and fostering trust between the CIO and TMT, namely, relational social capital (see Figure 4.1) (Karahanna & Preston, 2013). Shared language is necessary for communicating meaning, converging opinions about situations, and exchanging and integrating knowledge, all of which allow the CIO and TMT to reach a consensus on the role of IS capabilities in achieving strategic goals (Karahanna & Preston, 2013; Johnson & Lederer, 2005; Madhavan & Grover, 1998; Nahapiet & Ghoshal, 1998; Nelson & Cooprider, 1996). Shared language also fosters trust between the CIO and TMT by creating a sense of familiarity, increasing transparency, and reducing perceptions that the CIO has a hidden agenda behind the use of technical language (Karahanna & Preston, 2013).
All in all, communication is more effective when a shared language exists (Selten & Warglien, 2007; Charaf et al., 2013).

Figure 4.1 Shared language enhances strategic alignment through CIO-TMT trust and shared understanding. Adapted from Karahanna & Preston (2013) and Preston & Karahanna (2009)

To enhance shared language, the literature suggests that CIOs avoid using technical, IT-based jargon when interacting with TMT members and use business terminology instead (Reich & Benbasat, 2000; Preston & Karahanna, 2009; Karahanna & Preston, 2013). However, the alignment literature is almost silent on what an appropriate shared language looks like (Jentsch & Beimborn, 2014). CIOs can apply many different terminologies.13 The question is: What business nomenclature can be utilized, which nomenclature works better under what conditions, and why? A few attempts have been made to establish such nomenclatures. Van der Zee and de Jong (1999), for instance, suggest developing a balanced scorecard and visualizing how IT affects performance measures of finance, customer, organizational learning, and business process. An artifact called the business service/function catalog in TOGAF, an enterprise architecture framework, uses organization unit, business function, business service, and information system service as concepts in its language (The Open Group, 2011). However, these attempts have not indicated "why" (Whetten, 1989) those nomenclatures should be applied and why they are better than other nomenclatures, including technical jargon. In fact, those attempts seem to be based mostly on inspired creativity and trial-and-error design processes rather than theoretical justification (Gregor & Hevner, 2013). Therefore, there is room for significant research to provide theoretically and empirically supported prescriptions about what terminology CIOs should use in their communications with TMT members to express the strategic role IT should play in the organization.
Thus, first, this study leverages the literature of strategic management (e.g., the resource-based view, dynamic capability) to unearth two types of business language: resource-based language and capability-based language. Then, using semantic memory theory, this study compares the effectiveness of these languages in terms of their ability to (a) develop perceived shared language, (b) create an understanding of the strategic role of IT, and (c) create trust in the CIO's claims. (Footnote 13: In this chapter, we use language, terminology, and nomenclature as synonyms.) The results of the study will help CIOs establish a shared language in their communications with the TMT. The results prescribe what language should be applied under what conditions so that the conversation leads to higher levels of shared language, understanding of the strategic role of an IT resource, and credibility, the three antecedents of strategic alignment. The findings can also serve as a kernel (justificatory) theory for designing an artifact (Gregor & Hevner, 2013) to be used for CIO-TMT communication. Such a design artifact can be added to enterprise architecture frameworks like TOGAF. Moreover, this paper answers Jentsch and Beimborn's (2014) call for research that investigates alignment from the language point of view.

The rest of this chapter is organized as follows. The next section suggests three business nomenclatures using the literature of strategic management. The following section compares the strength of these nomenclatures in terms of enhancing shared language, understanding, and credibility, drawing on theories of consensus formation through language. Next, the experimental research method applied in this study is described, followed by the data analysis. Subsequently, a discussion of the findings, contributions to research and practice, and limitations are presented. The final section provides concluding remarks.
4.2 Discovering Business Languages to Be Prescribed to CIOs

Applying technical language (TL) to discuss IT and business strategies is detrimental to strategic alignment (Reich & Benbasat, 2000; Preston & Karahanna, 2009; Karahanna & Preston, 2013). A technical description contains terms that describe the technical attributes of IT, which are unfathomable to most TMT members. Consider the example of a CIO who uses TL to encourage the TMT to invest in a technology called DataX rather than a competing technology called DataY.14 The CIO says:

Both DataX and DataY target in-memory database acceleration. I recommend DataX for two reasons. First, DataY supports only the column-oriented format while DataX supports both row-oriented and column-oriented formats. Second, DataX completes more than twice as many navigation steps in one hour as DataY on the same chipset. In other words, DataX completes database operations two times faster than DataY. Currently, only one of our competitors is being equipped with DataX's in-memory acceleration.

Technical terms like "in-memory acceleration," "column-oriented," "row-oriented," and "navigation steps" are not understandable to many top managers. According to the literature (Preston & Karahanna, 2009; Karahanna & Preston, 2013), such terms lead to lower perceived shared language, lower understanding of the role of the technology advocated (DataX, in the example above), and lower trust in claims. In the following, we provide a literature review of the nomenclatures researchers have leveraged to understand, think about, share, and discuss strategies and competitive advantage. We believe the nomenclatures applied in theories that have attracted many scholars of strategic management are good candidates, since they have demonstrated high potential in clarifying strategy and competitive advantage.
It is reasonable for CIOs to apply the same nomenclatures in their dialogues with the TMT when discussing IT plans, objectives, and strategies to achieve higher strategic alignment. (Footnote 14: We adopt the descriptions of DataX and DataY from an advertisement published on the cover of The Economist (Nov 14-20, 2015) by Oracle to convince SAP HANA (DataY) users to migrate to the Oracle Database (DataX). We use DataX and DataY in the paper to conceal the identities of the products to prevent potential biases associated with the products from confounding the results.) Note that since IT can be assumed to be an internal source of competitive advantage, we only focus on the strategic management literature that addresses strategies and competitive advantages from an internal point of view. We do not consider the industrial organization view suggested by Bain (1968) and Porter (1979, 1980, 1985) that explains the same issue using external sources like the bargaining power of customers and suppliers (Kraaijenbrink et al., 2010). In this regard, two distinct, popular internal views of strategies (i.e., of how firms create rent) have been identified in the strategic management literature: (i) the resource-picking mechanism, dubbed the resource-based view (RBV), and (ii) the capability-building mechanism, dubbed the capability-based view (CBV) (Makadok, 2001). In the following, these two views and the nomenclatures they apply will be discussed in detail.

4.2.1 Resource-based Language (RbL)

RBV (Barney, 1991, 1995) has been one of the best-known and most powerful theories for understanding organizations over the past two decades (Barney et al., 2011). It aspires to explain the differences in performance of firms in the same industry and their competitive advantages using the internal sources of the firm.
RBV considers resources to be the most important component of firms and, in fact, views a firm as a bundle of resources (Kraaijenbrink et al., 2010). Employing the resource as the unit of analysis (Lockett et al., 2009), RBV suggests that controlling valuable, rare, inimitable, and non-substitutable (VRIN) resources results in returns over and above those of the marginal producer, referred to as rent (Ricardo, 1817; Seddon, 2014), and leads to sustainable competitive advantage (SCA) (Barney, 1991). RBV applies the following terminology: resource, value, rarity, imitability, and substitutability (see Table 4.1). Some researchers have also mentioned appropriability and mobility as two additional resource attributes (Wade & Hulland, 2004; Mata et al., 1995). We start by defining the focal concept of RBV (i.e., resource). Despite its popularity, RBV suffers from an ambiguous definition of resource (Wade & Hulland, 2004; Kraaijenbrink et al., 2010; Seddon, 2014; Teece, 2009). Barney (1991a, p. 101) defines resources as "all assets, capabilities, organizational processes, firm attributes, information, knowledge, etc. controlled by a firm that enable the firm to conceive of and implement strategies that improve its efficiency and effectiveness." However, Barney (1995) indirectly contradicts this first definition by talking about "resources and capabilities" as if they are separate (Seddon, 2014). The all-inclusive definition of resource is problematic, as it drives the theory toward tautology and non-falsifiability (Lado et al., 2006; Lockett et al., 2009; Kraaijenbrink et al., 2010), in that there is nothing strategically useful associated with the firm that is not a resource, and even SCA can be regarded as a resource (Kraaijenbrink et al., 2010). Moreover, this definition does not appropriately distinguish between resources that are inputs to the firm and capabilities that select, deploy, and organize such inputs (Kraaijenbrink et al., 2010).
In this study, we define resources as anything that can be valued by accountants a priori (Makadok, 2001) (e.g., employees, devices, machines, equipment, stocks, hardware, network infrastructure, software licenses) and used as inputs to firms' processes to create, produce, and/or offer goods or services to a market (Wade & Hulland, 2004).15 (Footnote 15: Wade and Hulland (2004) argue that "resource" is defined broadly or narrowly in different research. In this study, we take a narrower definition of resource, which is close to Wade and Hulland's definition of asset.)

Table 4.1 RBV's terminology
- Resource: Anything that can be valued by accountants a priori (e.g., employees, devices, machines, equipment, stocks, hardware, network infrastructure, software licenses) and used as inputs to firms' processes to create, produce, and/or offer goods or services to a market
- Valuable: The degree to which a resource increases the efficiency and effectiveness of other resources in a firm
- Rarity: The degree to which competing firms possess the same resource
- Imitability: The extent to which competitors can duplicate the resource
- Substitutability: The degree to which other resources can be applied in place of a resource
- Mobility: The extent to which the ownership of the resource can be poached by competitors
- Appropriability: The extent to which the firm can apply the resource and appropriate the returns

As for defining value, Barney assumes a resource is valuable if it improves the firm's efficiency and effectiveness (Barney, 1991, p. 105). If effectiveness or efficiency is defined in terms of Porter's value creation, a tautology similar to that mentioned above can apply to value (Kraaijenbrink et al., 2010). In this light, we suggest defining the value of a resource in terms of its effects on the efficiency and effectiveness of other resources.
For instance, introducing a new software application can increase employees' productivity, or adding a new CIO can enhance the IT staff's productivity, which in turn can increase the efficiency of applications in terms of response time and capacity. In our opinion, this conceptualization of value is highly aligned with the essence of RBV.16 Other terms in the RBV nomenclature are defined according to the term resource as well. Rarity refers to the degree to which competing firms possess the same resource. Imitability refers to the extent to which competitors can duplicate the resource. Substitutability refers to the degree to which other resources can be applied in place of the resource. Mobility refers to the extent to which the ownership of the resource can be poached by competitors. Finally, appropriability refers to the extent to which the firm can apply the resource and appropriate the returns. Given RBV's elegant simplicity and its immediate face validity (Kraaijenbrink et al., 2010), many scholars have applied the theory and its terminology to share their thoughts on strategy and competitiveness. Thus, we postulate that CIOs can also adopt RBV's terminology to communicate the strategic role of an IT resource to the TMT. CIOs can speak about an IT resource's value (i.e., its effect on existing IT resources in the organization). Such IT resources can be property-based resources like software applications, data, hardware devices, and network components, as well as knowledge-based resources like IT staff, IT procedures, and IT users (Wade & Hulland, 2004). (Footnote 16: Value can also be interpreted as "business value" (i.e., how a resource increases the financial efficiency of an organization). For instance, "DataX can make the revenue two times bigger." First, we believe that speaking in terms of the efficiency and effectiveness of other IT resources is more aligned with the essence of RBV. Second, coming up with a precise number can be difficult for financial analysts, let alone CIOs. We target prescribed languages achievable by most CIOs. Note that many CIOs do not have the financial knowledge to make reliable claims regarding the expected financial outcome. Nevertheless, as seen in Table 4.8, we mention that DataX is financially valuable. For all descriptions, we state that "DataX database engine can increase shareholders' long-term value more than DataY can.") CIOs can describe the value, rarity, inimitability, and non-substitutability of the focal resource or the VRIN-ness of its combination with other IT resources. For instance, the CIO mentioned above can encourage an investment in DataX as follows:

Both DataX and DataY target real-time (fast) data analysis. I recommend DataX for two reasons. First, using DataY requires rewriting the current software applications while with DataX, there is no need to rewrite software applications. Second, DataX provides the results of user queries two times faster than DataY. Therefore, new patterns in data will be found two times faster than with DataY. Currently, only one of our competitors is being equipped with real-time data analysis provided by DataX.

The value of DataX is explained in terms of its effect on three current IT resources (i.e., software applications, users, and data). The last sentence in the example indicates the rarity and imitability of the focal IT resource (i.e., DataX). Finally, the comparison of DataX to DataY explains the substitutability of DataX.

4.2.2 Capability-based Language (CbL)

The capability-based view can be found in theories of dynamic capability (Teece & Pisano, 1994; Pavlou & El Sawy, 2011) and improvisational capability (Pavlou & El Sawy, 2010). CBV is suggested as an extension of RBV (Baker et al., 2011; Pavlou & El Sawy, 2011) to resolve the aforementioned issues in RBV by distinguishing resources from capabilities (Lockett et al., 2009).
CBV suggests that rent creation occurs through VRIN operational capabilities that are reconfigured to remain or become VRIN by dynamic capabilities and improvisational capabilities. Both of these first-order capabilities reconfigure zero-order, internal, or external capabilities to address rapidly changing environments (Teece et al., 1997). Whereas dynamic capabilities address quasi-predictable patterns in the environment, improvisational capabilities spontaneously reconfigure operational capabilities as a response to unpredictable events (Pavlou & El Sawy, 2010). In both of these theories, the unit of analysis is the capability rather than the resource. While RBV attributes rent creation to smart resource picking, CBV attributes rent creation to capability building (Makadok, 2001). However, both views share the VRIN-ness suggested by RBV (Kraaijenbrink et al., 2010). Therefore, we need to provide a definition of capability, from which the other terms are defined accordingly. Grant (1991) defines capability as the capacity of a team of resources (i.e., production factors), such as skills, patents, and capital equipment, to perform a task or activity. Wade and Hulland (2004) define capability as a repeatable pattern of actions in the use of resources to create, produce, or offer goods or services to a market or another internal unit. However, in our opinion, Homann (2006) provides a deeper clarification of the nature of business capability. According to him, capabilities are what a business can do (e.g., pay employees or ship products) regardless of the resources being used or how those resources are configured. Therefore, whether a capability is sourced internally or outsourced, and whether it is manual or automated, does not matter. Homann defines capability in opposition to business process, which describes how a capability is materialized in terms of the sequence of activities and the resources used in each activity at a given point in time.
He states that capabilities are measurable, as they are expected to perform at a certain level. For instance, capabilities can be expressed in terms of units per period of time (e.g., fulfilling 10,000 orders per day) or quality measurements (e.g., being very fast in stocking hot items for customers to buy) (Austin et al., 2009). With a precise definition of capability, VRIN-ness can be defined accordingly. Table 4.2 summarizes the terminology used in CBV.

Table 4.2 CBV's terminology (CBV shares the VRIN-ness suggested by RBV; Kraaijenbrink et al., 2010)
- Capability: What the business can do (e.g., pay employees or ship products) regardless of the resources being used or how those resources are configured (e.g., whether in-sourced or outsourced, manual or automated)
- Value: The degree to which a resource increases the efficiency and effectiveness of business capabilities of the organization
- Rarity: The degree to which competing firms possess the same capability
- Imitability: The extent to which competitors can replicate the capability
- Substitutability: The degree to which other capabilities can be applied in place of the capability
- Mobility: The extent to which the capability can be poached by competitors
- Appropriability: The extent to which the firm can appropriate the returns

Since a significant number of researchers have used CBV to discuss strategies and competitive advantages, it is reasonable for CIOs to adopt the terminology of CBV to communicate with the TMT. In this light, CIOs should speak about the effect of an IT resource on business capabilities as well as the rarity, inimitability, and non-substitutability of those business capabilities. Take the previous example of DataX and DataY; the CIO can use the concepts of capability, value, and rarity to state:

Both DataX and DataY target real-time (fast) decision making. I recommend DataX for two reasons.
First, implementation of DataY disrupts our business while with DataX, there is no disruption of business operations. Second, DataX provides answers to business questions two times faster than DataY. Therefore, new patterns in the market, customers, suppliers, machine utilization, etc. will be found two times faster than with DataY. Currently, only one of our competitors is being equipped with real-time decision making provided by DataX.

The value of DataX is explained using its effect on business capabilities in general (disruption of business) and its influence on decision making, answering business questions, and finding patterns in machines, markets, customers, and suppliers in particular. The last sentence in the example indicates the rarity and imitability of the business capability. Finally, by comparing the effects of DataX on business capabilities with the effects of DataY, the CIO explains how substitutable the new business capability is.17 Note that we still have to use the names of the focal IT resources (i.e., DataX, DataY). However, we avoid explaining their influence on other IT resources; rather, we only explain the influence of these IT resources on business capabilities.

4.2.3 Combining Resource-based and Capability-based Languages (RbL+CbL)

Knowing both RbL and CbL, a natural conclusion is to use both during a conversation with the TMT. CIOs can explain the effects of the focal IT resources on current IT resources as well as their effects on business capabilities. In particular, a change to a current IT resource can be interpreted as a change in a business capability. Thus, the description can be positioned as a number of causes and effects. Regarding DataX and DataY, the CIO can combine the two previous descriptions and state:

Both DataX and DataY target real-time (fast) decision making by enabling real-time (fast) data analysis. I recommend DataX for two reasons.
First, implementation of DataY disrupts our business while with DataX, there is no disruption of business operations. This can be attributed to the fact that using DataY requires rewriting the current software applications while with DataX, there is no need to rewrite software applications. Second, DataX provides answers to business questions two times faster than DataY. The reason is that DataX provides the results of user queries two times faster than DataY. Therefore, new patterns in the market, customers, suppliers, machine utilization, etc. will be found two times faster than with DataY, since new patterns in data will be found two times faster than with DataY. Currently, only one of our competitors is being equipped with real-time decision making via real-time data analysis provided by DataX.

(Footnote 17: We might have explained more on substitutability, but we want to keep the RbL and CbL descriptions as similar as possible for the experiment.)

4.3 Enhancing Strategic Alignment via Language

This study focuses on the influence of the above prescribed languages on the TMT's perceived shared language, the TMT's understanding of the strategic role of IT, and the credibility of the CIO's claims as perceived by the TMT.18 These three dependent variables were selected for two reasons: (i) empirical support exists for their influence on strategic alignment (Reich & Benbasat, 2000; Preston & Karahanna, 2009; Karahanna & Preston, 2013), and (ii) CIOs can control them in their conversations with the TMT.19 The purpose of these conversations managed by the CIO is to help the TMT grasp the strategic role of an IT resource, letting the CIO know how well the IT resource aligns with the business strategy and whether the TMT agrees or disagrees with that role. An IT resource can generate strategic value for an organization by playing different roles.
Figure 4.2 shows an IT value generation model synthesized from Tallon (2014, 2011, 2007), Tallon and Pinsonneault (2011), and Mithas and Rust (2016). A consensus on the strategic role of IT means the CIO and TMT agree on a path (or several paths) from the IT resource to long-term shareholder value.20 In this light, this study aims to enhance three target variables in a dialogue managed by CIOs to explain the strategic role of an IT resource: (a) the shared language perceived by the TMT, (b) the TMT's understanding of the strategic role of an IT resource,21 and (c) the credibility of the CIO's claims about an IT resource.

(Footnote 18: All definitions can be found in Table 4.8.)
(Footnote 19: According to Karahanna and Preston (2013), strategic alignment has four empirically supported antecedents: the CIO's trust in the TMT, the TMT's trust in the CIO, shared language, and shared understanding of the strategic role of IT. CIOs can leverage language to influence the last three antecedents. Based on multilevel theory (Burton-Jones & Gallivan, 2007), if CIOs enhance these variables in individual conversations about an IT resource, they can improve the collective trust, shared language, and shared understanding.)
(Footnote 20: Note that both may not understand the entire path completely due to the lack of business and IT knowledge. However, they both share at least one node (e.g., spot market patterns faster). The CIO knows the antecedents of that node and the top managers know the consequences of it.)

Figure 4.2 IT value generation model, synthesized from Tallon (2014, 2011, 2007), Tallon and Pinsonneault (2011), and Mithas and Rust (2016)

4.3.1 Consensus Formation and Semantic Network Memory

In the theory of communicative action, Habermas (1985) explains how a consensus on a subject is reached through dialogue. First, the listener needs to understand the meaning of what is being claimed.
This necessitates that the listener know the language in use. Then, the listener needs to verify the validity of the claim and inform the partner whether the claim is true or false. By confirming the validity of the claim, a consensus about the communication content is reached (Jentsch & Beimborn, 2014). Therefore, if the listener's verification of a claim can be facilitated, consensus can be achieved faster, or consensus on more subjects can be reached. The relationship between the language of a claim and its verification has been studied in depth by Collins and Quillian (1969, 1970a, 1970b), Anderson and Bower (1972), Rips et al. (1973), Collins and Loftus (1975), and Raaijmakers and Shiffrin (1981). The intersection of their ideas is that human memory is a huge network of nodes interconnected by associations. For each word or term in a language, there exists a node in the memory of the individual who knows the language. Upon presentation of the word to an individual, the corresponding node in memory will be activated, which then activates other related nodes. (Footnote 21: We believe the path to a shared understanding of the role of an IT resource is through understanding the role of the IT resource. Note that TMTs can still reject the role to be played by an IT resource after they fully understand it. However, understanding accelerates TMTs' and CIOs' consensus on what the appropriate role should be.) Collins and Quillian (1969, 1970a, 1970b), in their theory of semantic memory, provide empirical support for the idea that the nodes are organized in a semantic way. They report that verifying the claim that a "canary is an animal" takes longer than verifying the claim that a "canary is a bird." Therefore, they conclude that the three nodes "canary, bird, animal" are stored in the form canary -> bird -> animal in human memory, rather than in the form canary -> bird and canary -> animal.
This experiment corroborates the existence of semantic memory. Rips et al. (1973) attribute the longer verification time to the longer semantic distance between the two nodes. They define the semantic distance of two nodes as the number of intermediate nodes existing between the two nodes in memory. According to the theory of semantic distance, the verification time for a sentence claiming a truth has a positive correlation with the semantic distance of the corresponding nodes of the applied words. Therefore, compared to "canary is a bird," verifying the claim that "canary is an animal" takes longer because the semantic distance between "canary" and "animal" in human memory (which is wired as "canary -> bird -> animal") is longer than the semantic distance between "canary" and "bird." Verification can also be faster if the two words' corresponding nodes have a strong association (Collins & Loftus, 1975). According to Raaijmakers and Shiffrin (1981), the strength of the association between two nodes increases when the two words occupy working memory simultaneously. Therefore, the more often the two words appear concurrently in stimuli, the stronger their association becomes. Since top managers often deal with their area of expertise (e.g., value creation), the stimuli in their environments strengthen the associations among the words of the corresponding domain (e.g., a structure like Figure 4.2). Also note that, according to Postman et al. (1948), recognition is generally superior to active recall. Assuming A, B, and C are wired as A->B->C in memory, recognition is a memory activity that occurs when the listener is given A -> B -> C and searches the semantic memory to verify whether such a path exists. Recall is a memory activity that occurs when the listener is given A -> C and looks for a node (i.e., B) that connects the two nodes in memory.
Recall is slower since it is a two-step process that includes retrieving all nodes connected to A (i.e., A->X) and recognizing whether they are connected to C (X -> C) (Anderson & Bower, 1972). This process explains why verification time increases as the semantic distance between A and B grows. 4.3.2 The Semantic Memory of a Typical Top Manager To evaluate the influence of these languages on understanding the strategic role of IT, we translate the focal concepts of these languages (i.e., technical attributes, resource, and capability) to the model suggested by the theory of semantic memory. We want to find the logical connection among these focal concepts and thus the connections of their corresponding nodes in a memory, if they exist. According to Kohli and Grover (2008, p. 30), the commonly accepted logical progression of IT value is IT Resource → Capability → Business Value (i.e., “IT Resource creates capabilities required which in turn create business value).” Chapter 3 shows that a more detailed logical progression can be IT Resource Performance -> Information Capability -> Business Capability -> Business Value (i.e., client capability, client satisfaction, and financial performance). Assuming that information is one type of IT resource, we can say the IT resource has technical attributes or performance. The technical attributes influence other IT resources and their functionalities. This in turn influences business capabilities, which influences business value. Figure 4.3 illustrates this 122 logical sequence and provides three examples using the descriptions of DataX. Therefore, according to the theory of semantic memory, if these nodes exist in a top manager’s memory, they are stored according to Figure 4.3. 
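The progression just described can be written down in the same node-and-link form. A toy sketch follows; the node labels are ours, adapted from the DataX examples, and `distance_to_value` is an invented helper, not part of the study:

```python
# The Figure 4.3 progression, flattened into one chain of nodes
# (labels are illustrative, echoing the DataX example):
chain = [
    "dual-format architecture",         # technical attribute (TL)
    "no need to rewrite applications",  # effect on IT resources (RbL)
    "no disruption of business",        # business capability (CbL)
    "long-term shareholder value",      # business value
]

def distance_to_value(term):
    """Links separating a term from the value node at the chain's end."""
    return len(chain) - 1 - chain.index(term)

for term in chain[:-1]:
    print(f"{term}: {distance_to_value(term)} link(s) to value")
```

On this reading, a CbL term sits one link from value while a TL term sits three links away, which is the semantic-distance argument used throughout Section 4.3.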
4.3.2.1 Business Knowledge: What Memories of Top Business Managers Include

Typical top managers have high business knowledge; that is, their memories include nodes and links related to business value and business capability as Figure 4.2 suggests (see Figure 4.4). For instance, they know that service interruptions lead to unsatisfied customers, which in turn increases the churn rate and eventually lowers the revenue. They know that “faster answers to top managers’ questions” means “faster recognition of market and customer patterns,” hence “faster introduction of new products to the market,” and thus “higher long-term revenue.”

Figure 4.3 The logical relationship between focal concepts of each language

It is also highly probable that top managers know the consequences of updating existing IT resources for business capabilities. Their memories can include nodes related to current IT resources and business capabilities and include links that connect these two types of nodes. For instance, top managers may know the beneficial consequences of having more productive IT users (e.g., employees and business managers). If a technology enables IT users to have the result of analysis twice as fast, top managers can infer that decisions can be made two times faster. However, top managers may not know all current IT resources in their organizations. They also may not know that having a consolidated database enhances the accuracy of decisions. We believe that most top managers are familiar with current IT resources like users and software applications, but not with their organizations’ current IT infrastructure (see the dashed line in Figure 4.4).

Figure 4.4 IT knowledge and business knowledge in the context of the semantic memory of top managers

4.3.2.2 IT Knowledge: Where Business Managers Differ

Rockart et al.
(1996) suggest that the success or failure of an organization’s use of IT depends more on the IT knowledge of its business managers than on the effectiveness of the IT organization. Bassellier et al. (2003) find empirical support for the positive association between business managers’ IT knowledge and their proactive role in promoting IT. The positive effect of IT knowledge can be attributed to its positive influence on shared understanding, shared language, and strategic alignment (Preston & Karahanna, 2009).

Based on semantic memory, top managers’ IT knowledge is defined as the degree to which their memories have nodes and links that connect various technical attributes to their effects on existing IT resources (see Figure 4.4). For instance, a top manager with high IT knowledge knows that supporting both column-oriented and row-oriented formats in a database means that implementing the database does not require rewriting current software applications. Therefore, if the technical attributes of a new technology are given to such top managers, they can infer its effect on current IT resources.

4.3.3 Effects of the Prescribed Languages on Perceived Shared Language

In this study, perceived shared language is conceptualized as the degree to which CIOs use a common language in their communication with the TMT (Preston & Karahanna, 2009). Translating to the semantic memory model, shared language is the degree to which the CIO uses terms that have corresponding nodes in top managers’ minds (see Figure 4.5). Therefore, the perception of shared language depends on the interaction of two independent variables: the words used by the CIO and the nodes existing in top managers’ memories. The higher the probability that the words applied by the CIO exist in the memories of top managers, the higher the perceived shared language.
Figure 4.5 Shared language in the context of the semantic memory of top managers

We believe IT knowledge plays an important role in the perception of shared language. As stated earlier, top managers have business knowledge (i.e., they have corresponding nodes for business values and business capabilities; see Figure 4.2). However, top managers differ in their IT knowledge (i.e., nodes related to technical attributes and current IT resources in the organization). Take the example of this sentence: “DataX supports dual-format architecture.” The memories of top managers with high IT knowledge contain the node “dual-format architecture,” whereas the memories of top managers with low IT knowledge do not. The nodes related to technical attributes and current IT resources are covered by the concept of IT knowledge. Therefore, the interaction of the prescribed languages with IT knowledge influences their effectiveness in terms of developing shared language (see Table 4.3). In this light,

H1: Prescribed languages, top managers’ IT knowledge, and their interaction influence shared language perceived by top managers.

Table 4.3 Perceived shared language depends on prescribed languages and IT knowledge
Language | Low IT knowledge (CbL and value nodes exist in memory; RbL nodes may exist) | High IT knowledge (all nodes, including TL, RbL, CbL, and value, exist in memory)
TL | Corresponding TL nodes do not exist | Corresponding TL nodes exist
RbL | Corresponding RbL nodes may or may not exist | Corresponding RbL nodes exist
CbL | Corresponding CbL nodes exist | Corresponding CbL nodes exist
RbL+CbL | Corresponding CbL nodes exist; corresponding RbL nodes may or may not exist | Corresponding nodes for both RbL and CbL exist

The question to be investigated is which language is more effective in developing shared language. Top managers with high IT knowledge, in addition to business knowledge, have all the nodes applied in the prescribed languages and TL in their memory.
Therefore, these managers perceive a high degree of shared language even when the communication is phrased in RbL or TL. Therefore:

H1-1: For top managers with high IT knowledge, the differences among the perceived shared language of TL, RbL, CbL, and RbL+CbL are not significant.

As stated earlier, our assumption is that typical top managers have nodes related to business value and business capability in mind. Therefore, CbL, which contains capability-related terms, will look familiar to them. However, RbL and TL can look unfamiliar to top managers with low levels of IT knowledge. In the memory of top managers with low IT knowledge, TL nodes like “dual-format architecture” do not exist, and RbL nodes like “current database engine” may or may not exist. Therefore, these managers will perceive a lower degree of shared language with RbL or TL than with CbL. Thus,

H1-2: For top managers with low IT knowledge, CbL leads to a higher level of perceived shared language than TL.

H1-3: For top managers with low IT knowledge, CbL leads to a higher level of perceived shared language than RbL.

The question is which language, RbL or TL, leads to a higher perceived shared language when IT knowledge is low. First, current IT resources in the organization (e.g., sales application, WIFI network, Java libraries) have a higher chance of having corresponding nodes in the minds of typical top managers. Second, even if neither kind of node exists in memory, current IT resources have a shorter semantic distance to the nodes that do exist in the memories of top managers (i.e., capability), so using current IT resources in conversation has a higher chance of developing shared language. In this light,

H1-4: For top managers with low IT knowledge, RbL develops a higher level of perceived shared language than TL.
Since RbL+CbL includes some terms that exist in the memories of top managers with low IT knowledge (i.e., capabilities) and some terms that have a lower chance of existence (i.e., current IT resources), it develops shared language more readily than RbL, but less readily than CbL. In a similar vein to RbL and CbL, the terms applied in RbL+CbL have a higher chance of existing in the memories of top managers with low IT knowledge than the technical terms of a TL description. Therefore,

H1-5: For top managers with low IT knowledge, RbL+CbL develops a higher level of perceived shared language than RbL.

H1-6: For top managers with low IT knowledge, CbL develops a higher level of perceived shared language than RbL+CbL.

H1-7: For top managers with low IT knowledge, RbL+CbL develops a higher level of perceived shared language than TL.

Table 4.4 provides a summary of H1-1 through H1-7.

Table 4.4 The moderating effect of IT knowledge on the effectiveness of languages on shared language
Shared language for TMT with low IT knowledge | Shared language for TMT with high IT knowledge
CbL > RbL+CbL > RbL > TL | RbL+CbL = CbL = RbL = TL

4.3.4 Effects of the Prescribed Languages on Understanding the Strategic Role of IT (USRI)

Shared understanding is defined “as the degree to which people concur on the value of properties, the interpretation of concepts, and the mental models of cause and effect with respect to an object of understanding” (Bittner & Leimeister, 2014, p. 115). In this light, we define shared understanding of the role of an IT resource as the consensus of the CIO and TMT on the value of the IT resource (i.e., the desired role of the IT resource in the organization) and the cause-and-effect chain through which the IT resource increases the long-term value for shareholders.
As mentioned earlier, top managers and CIOs may not be able to comprehend the complete cause-and-effect chain from an IT resource to shareholders’ long-term value: top managers may not have the IT knowledge (Bassellier et al., 2003) to understand the cause-and-effect chain from an IT resource to business capabilities (see Figure 4.4), while CIOs may not have the business competency (Bassellier & Benbasat, 2004) to understand the cause-and-effect chain from business capabilities to shareholders’ long-term value (see Figure 4.2). Therefore, both parties have to agree on a role for the IT resource that both understand (e.g., increasing a business capability, improving a critical software application) and rely on the other party to know the rest of the cause-and-effect chain. This shared understanding, however, hinges on the CIO being able to help top managers understand the strategic role of IT (USRI). Such understanding implies that top managers can articulate the cause-and-effect chain from the role of the IT resource (i.e., whatever node on the chain in Figure 4.4) to shareholders’ long-term value. Having understood how the IT resource increases the long-term value, top managers can agree or disagree about the role that the IT in question is argued to play. Disagreement means that the CIO has to come up with another role for the IT resource or select another IT resource. As Preston and Karahanna (2009) indicate, this shared understanding depends on top managers’ IT knowledge and the CIO’s language. Translating to the semantic memory model, for this understanding to happen, top managers have to use the terms provided by the CIO to actuate a path from the corresponding nodes in the description to shareholders’ long-term value in their minds (see Figure 4.6). The existence or non-existence of such a path in top managers’ memories depends on their IT knowledge and business knowledge.
Typical business managers have high business knowledge but different levels of IT knowledge. For example, the memories of top managers with high IT knowledge contain a path like “dual-format architecture” -> “no need to rewrite current applications” -> “no disruption of business” -> “no interruption in service” -> “not offending and losing customers” -> “no revenue loss.” On the other hand, the memories of top managers with low IT knowledge do not have a path this long. Generally, their memories include a path that starts from “no disruption of business” and ends with “no revenue loss.” The portion of the path from “dual-format architecture” to “no need to rewrite current applications” in memory is contingent on IT knowledge. IT knowledge also represents the strength of this IT path (i.e., how frequently the nodes in the path occupy the working memory).

Figure 4.6 Understanding the strategic role of an IT resource in the context of the semantic memory of top managers

If the CIO uses a term in the IT path (e.g., dual-format architecture) and top managers’ IT knowledge is low, USRI does not happen. However, if IT knowledge is high, top managers can understand the strategic role that the IT resource plays. Therefore, USRI differs based on whether a word in the IT path is used and on the existence and strength of the path (see Table 4.5). Hence,

H2: Prescribed language, top managers’ IT knowledge, and their interaction influence top managers’ USRI.
Table 4.5 USRI depends on prescribed languages and IT knowledge
Language | Low IT knowledge (memory includes CbL -> Value; may include a weak path RbL -> CbL) | High IT knowledge (memory includes strong paths TL -> RbL -> CbL -> Value)
TL | No path to value (the TL node does not exist) | Recalling TL -> RbL, RbL -> CbL, and CbL -> Value
RbL | A path to value may not exist; if it exists, recalling RbL -> CbL and CbL -> Value | Recalling RbL -> CbL and CbL -> Value
CbL | Recalling CbL -> Value | Recalling CbL -> Value
RbL+CbL | Recalling CbL -> Value (non-mandatory learning of RbL -> CbL) | Recalling CbL -> Value (non-mandatory recognition of RbL -> CbL)

We contend that the influence of languages diminishes as IT knowledge increases. Top managers with high IT knowledge have strong paths from the nodes used in the chosen languages to the nodes related to values in their memory. The strength of a path between nodes is due to the frequent presence of the nodes in managers’ working memory (Collins & Loftus, 1975; Raaijmakers & Shiffrin, 1981). Hence, the CIO using any term in a path that exists in the working memory of a top manager has a similar effect as the CIO using all the terms in the path. Therefore, these top managers can always activate a path to business value, even if spoken to in TL, which has a long semantic distance from business value. Hence,

H2-1: For top managers with high IT knowledge, the differences between the USRI of a description articulated in TL, RbL, CbL, and RbL+CbL are not significant.

4.3.4.1 USRI when Technical Language Is Given to Top Managers with Low IT Knowledge

Since top managers may not have high IT knowledge, the question is which language is more effective in developing USRI when the IT knowledge of the manager receiving the CIO’s communication is low. First, the semantic memory model can explain why speaking in technical jargon is not effective.
Take the example of this sentence: “DataX supports dual-format architecture.” In the memory of a top manager with low IT knowledge, the node “dual-format architecture” does not exist. Therefore, the sentence has an extremely low chance of activating nodes corresponding to long-term cost or revenue. Hence, those top managers either cannot understand, or can barely understand, the strategic role that DataX would play in the organization.

On the other hand, communication in RbL or CbL would use nodes that have a higher chance of existing in top managers’ memories and being connected to the nodes of long-term cost and revenue. Take the example of the following sentence, which is phrased in RbL: “With DataX, there is no need to rewrite our software applications.” Unlike “dual-format architecture,” it is highly probable that examples of current applications (e.g., sales application, customer relationship management system) exist in managers’ minds. In a similar vein, the sentence phrased in CbL, “With DataX, there is no disruption of business operations,” actuates examples of business operations, like order fulfillment, that exist in top managers’ minds. Compared to “dual-format architecture,” both “rewriting applications” and “business disruption” have a higher chance of actuating nodes corresponding to long-term cost and revenue. Even if these terms do not exist in memory, their shorter semantic distance from value-related nodes increases their chance of activating value-related nodes as opposed to TL. Therefore,

H2-2: For top managers with low IT knowledge, RbL develops a higher level of USRI than TL.

H2-3: For top managers with low IT knowledge, CbL develops a higher level of USRI than TL.

H2-4: For top managers with low IT knowledge, RbL+CbL develops a higher level of USRI than TL.

4.3.4.2 Comparing USRI when RbL vs. CbL Is Given to Top Managers with Low IT Knowledge

We believe that CbL results in a higher understanding of the role of an IT resource than RbL for three reasons. First, examples of capabilities have a higher chance of existing in a typical top manager’s mind since capability is a more stable concept in today’s constantly changing environment. Capability is more stable because it is independent of the business processes that materialize it and independent of the required resources. A firm can own the same capability using different resources over time. For instance, the order fulfillment capability has been needed for as long as firms have existed, though the business processes and resources required to accomplish the capability have changed. Moreover, two firms may have the same capability to satisfy customers’ needs using different resources (Lockett et al., 2009). This results in capability-based models being the long-lasting models of the area of focus (Homann, 2006) and makes capability suitable to serve as a baseline for strategic planning, change management, and impact analysis, to the extent that it has been called the Rosetta Stone for business/IT alignment (Ulrich & Rosen, 2011). This is in line with Tallon’s (2007) suggestion that alignment should be viewed, conceptualized, and evaluated at the capability level, which helps in fostering a deeper and more meaningful understanding of its effects on firm performance and in finding the right type of fit.22 Second, even if both resource and capability exist in a top manager’s mind, capability has a shorter semantic distance to business value (see Figure 4.3). This enables easier verification of capability when the IT path is not strong and the link between resource and capability is weak.
Based on the literature (Anderson & Bower, 1972; Postman et al., 1948), when top managers hear that resources are to be changed (e.g., “current applications should be rewritten”), they need time to recall the corresponding business capabilities: they have to retrieve all business capabilities and check whether they are linked to the resource update. Thus, recalling capabilities is slow and takes more mental effort than recognizing a capability that is named directly. Hence, the strategic role of an IT resource is understood more easily in CbL than in RbL by top managers with lower IT knowledge. Third, capability has been shown empirically to have more power in explaining differences in firms’ performance. This can be attributed to the fact that resources are seldom valuable in isolation (Penrose, 1959); rather, the synergistic combination or bundle of resources is what matters (Kraaijenbrink et al., 2010; Teece, 2007; Grant, 1996). Newbert (2007) finds evidence that resource combinations, or capabilities, are more likely than single resources in isolation to explain differences in firms’ performance. This is because adding a resource to a firm can result in a different impact on the firm’s performance based on the functionality of the resource in the firm (Lockett et al., 2009). An off-the-shelf software application can be used by different firms and have different degrees of effect on firm performance. Moreover, customers do not care about a firm’s resources, but they do care about a firm’s capability to fulfill their needs (Lockett et al., 2009). Therefore, speaking in terms of a language concept that is highly correlated with firm performance can be more fruitful for understanding the strategic role of an IT resource.

22 Literally, Tallon (2007) uses “process,” but his conceptualization of process is closer to our conceptualization of capability than to our conceptualization of business process.
Therefore,

H2-5: For top managers with low IT knowledge, CbL develops a higher level of USRI than RbL.

4.3.4.3 USRI when RbL+CbL Is Given to Top Managers with Low IT Knowledge

Combining RbL and CbL requires managers to verify the relationship between the two concepts (e.g., to verify whether “rewriting current applications” leads to “disruption of business”). Whereas the interlocutor has to recall this link in the case of RbL, he or she only has to recognize it in the case of RbL+CbL. Since recognition is generally superior to active recall (Postman et al., 1948), the strategic role of IT can be understood more easily and faster in a communication phrased in RbL+CbL than in RbL. Therefore,

H2-6: For top managers with low IT knowledge, RbL+CbL develops a higher level of USRI than RbL.

On the other hand, both CbL and RbL+CbL contain the capability concept and thus have the same semantic distance from business value. In both cases, managers who typically have business capabilities and values in mind can easily find a path from capability (e.g., fast decision making) to value (e.g., fast reaction to competitors). However, RbL+CbL requires processing an extra link between RbL and CbL (e.g., “fast data analysis -> fast decision making” and “rewriting current applications -> disruption of business”). This extra memory operation can cause longer verification. If the link already exists in the memory of the interlocutor, the required recognition is easy and fast because recognition does not require as much mental effort as recall does (Postman et al., 1948; Anderson & Bower, 1972). However, if the link does not exist, a learning operation is required (i.e., the link will be added to memory). Note that this processing is not required to understand the strategic value of the IT resource; “fast decision making” is sufficient to activate the connected value nodes (see Figure 4.3).
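The recall-versus-recognition asymmetry invoked above can be made concrete as lookups against stored links. The following is a simplified illustration of the two-step account of recall; the `memory` contents and both helper functions are ours, not from the cited experiments:

```python
# A toy store of directed links between nodes (contents are illustrative):
memory = {
    "rewrite current applications": ["disruption of business", "project cost"],
    "disruption of business": ["revenue loss"],
}

def recognize(a, b):
    """One-step check: is the link a -> b already stored?"""
    return b in memory.get(a, [])

def recall(a, c):
    """Two-step search (Anderson & Bower): retrieve every neighbor of a,
    then recognize whether any of them links onward to c. Returns whether
    a path was found and how many recognition checks it cost."""
    checks = 0
    for b in memory.get(a, []):
        checks += 1
        if recognize(b, c):
            return True, checks
    return False, checks

# Recognition is a single lookup; recall pays one recognition per neighbor,
# so it slows down as the fan-out from the starting node grows.
print(recognize("rewrite current applications", "disruption of business"))
print(recall("rewrite current applications", "revenue loss"))
```

The cost counter makes the argument visible: naming the capability directly (recognition) is cheap, while asking the listener to bridge resource and value (recall) costs one check per candidate node.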
Therefore, we believe that both develop the same level of understanding of the strategic role of the IT resource.

H2-7: For top managers with low IT knowledge, the difference between RbL+CbL and CbL in developing USRI is not significant.

Table 4.6 summarizes the effect of the different languages on the understanding of the role of an IT resource for top managers with low and high IT knowledge.

Table 4.6 The moderating effect of IT knowledge on the influence of languages on USRI
USRI for TMT with low IT knowledge | USRI for TMT with high IT knowledge
CbL = RbL+CbL > RbL > TL | RbL+CbL = CbL = RbL = TL

4.3.5 Effects of the Prescribed Languages on Credibility

In this study, credibility is defined as the degree to which top managers believe that the CIO’s claims about an IT resource are reliable and trustworthy. Karahanna and Preston (2013) leverage social capital theory (SCT) to find and test the antecedents of credibility and strategic alignment. They report empirical support for two antecedents of credibility: shared language and USRI. Shared language and USRI develop a sense of familiarity and transparency and reduce perceptions that the CIO has a hidden agenda and hence prioritizes IT objectives instead of aiming to achieve organizational goals (Karahanna & Preston, 2013). Therefore, credibility can be predicted by combining H1 and H2 (see Table 4.7). Since most of the dyadic comparisons are the same across these two hypotheses, they remain the same for credibility. Where the dyadic comparisons differ (when CbL is compared to RbL+CbL), we take the union of the conditions (“>” and “=” translate into “>=”).
Table 4.7 Prediction of credibility of prescribed languages using social capital theory
Based on | Low IT knowledge | High IT knowledge
H1: Shared language | CbL > RbL+CbL > RbL > TL | RbL+CbL = CbL = RbL = TL
H2: USRI | CbL = RbL+CbL > RbL > TL | RbL+CbL = CbL = RbL = TL
Credibility | CbL >= RbL+CbL > RbL > TL | RbL+CbL = CbL = RbL = TL

In this light,

H3: Prescribed languages, top managers’ IT knowledge, and their interaction influence top managers’ perception of the credibility of claims.

H3-1: For top managers with high IT knowledge, the differences in the credibility of a description articulated in TL, RbL, CbL, and RbL+CbL are not significant.

H3-2: For top managers with low IT knowledge, the credibility of the languages is ranked as CbL >= RbL+CbL > RbL > TL.

4.4 Research Method

4.4.1 Experimental Design

An online experiment is conducted to test the hypotheses. A 4×2 between-subjects factorial design is employed: prescribed language includes four levels (i.e., TL, RbL, CbL, and RbL+CbL), and IT knowledge consists of two levels (i.e., high and low). Whereas language is a treatment, IT knowledge is measured at the end of the study, and participants are assigned to two groups using this self-reported measure. An advertisement by Oracle published on the cover of The Economist is used to write a description in four languages. The advertisement promotes Oracle’s in-memory database acceleration over SAP HANA in a technical language; however, it includes a link to a webpage that supplies the sentences applied in the other three languages. To prevent the influence of respondents’ biases on their answers, Oracle and SAP HANA are replaced with DataX and DataY, respectively. The sentences are slightly changed during a card-sorting process (Moore & Benbasat, 1991) to ensure that they reflect the corresponding languages. Each description is accompanied by an image that summarizes the description.
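The assignment logic of the design can be sketched as follows. This is only an illustration: the study randomizes the language shown and measures IT knowledge afterward, but the median-split rule and all names below (`assign_language`, `split_by_knowledge`, the sample scores) are our assumptions, not details reported in the text:

```python
import random
import statistics

LANGUAGES = ["TL", "RbL", "CbL", "RbL+CbL"]

def assign_language(rng=random):
    """Each participant is randomly shown one of the four descriptions."""
    return rng.choice(LANGUAGES)

def split_by_knowledge(scores):
    """Group participants into low/high IT knowledge after the study,
    here via a median split on the self-reported measure (an assumption;
    the dissertation only says two groups are formed from the measure)."""
    cutoff = statistics.median(scores.values())
    return {pid: ("high" if s > cutoff else "low") for pid, s in scores.items()}

# Hypothetical self-reported IT-knowledge scores on the 7-point scale:
scores = {"p1": 2.0, "p2": 6.5, "p3": 3.0, "p4": 5.5}
print(split_by_knowledge(scores))
```

Because language is randomized but knowledge is measured, the resulting design is a 4×2 factorial in analysis only; the knowledge factor is quasi-experimental.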
4.4.2 Measurement Development

In addition to IT knowledge and manipulation checks, this study includes three dependent variables (i.e., shared language, understanding the strategic role of IT, and credibility of claims). Table 4.8 provides the items used for the operationalization of each construct. Card sorting (Moore & Benbasat, 1991) is applied to update the items adopted from the literature in order to ensure the convergent and discriminant validity of the constructs. According to Lance et al. (2006), basic research should rely on constructs with a minimum Cronbach’s alpha of 0.80 to ensure internal consistency reliability. Later statistical analysis shows that all constructs in this study satisfy this condition. We also control for the influence of prior trust in CIOs on the credibility of claims, USRI, and shared language.

Table 4.8 List of constructs used in the study

Perceived Shared Language: The degree to which CIOs use a common language and terminology in their communication with the TMT. Based on: Preston & Karahanna (2009); Karahanna & Preston (2013). Scale: 7-point scale ranging from “completely disagree” (-3) to “completely agree” (+3).
[lang1] I feel that the CIO and I share a common language in the explanation of DataX and DataY.
[lang2] The CIO’s explanation of DataX and DataY uses a common language that the CIO and I share.
[lang3] The CIO’s explanation of DataX and DataY uses terminology that we share.
Cronbach’s alpha: 0.93.

Perceived Understanding of the Strategic Role of IT Resource (USRI): Top managers’ understanding of the strategic value an IT resource brings to the organization. Based on: Bittner & Leimeister (2014); Preston & Karahanna (2009); Karahanna & Preston (2013). Scale: 7-point scale ranging from “barely” (1) to “completely” (7).
[und1] I …. understand the role of DataX in improving the company’s long-term condition.
[und2] I …. understand how DataX can create value for our shareholders in the long run.
[und3] I am …. able to evaluate the benefit of DataX for the firm’s long-term performance.
[und4] I can …. see the strategic role DataX plays in enhancing shareholders’ value.
[und5] It is …. clear to me how DataX can help us create higher value for our shareholders in the long run.
Cronbach’s alpha: 0.93.

Credibility of Claims: The degree to which top managers believe that the CIO’s claims about an IT resource are reliable and trustworthy. Based on: Hansen & Wanke (2010); Karahanna & Preston (2013). Scale: 7-point scale ranging from (1) “barely” to (7) “totally.”
[cred1] I believe the CIO’s claim about DataX being a better long-term option than DataY is … to support. Scale: (1) very hard to (7) very easy.
[cred2] In my opinion, the CIO’s claim about DataX being a better long-term option than DataY is …. Scale: (1) barely reliable to (7) totally reliable.
[cred3] The CIO’s claim about DataX being a better long-term option than DataY seems …. Scale: (1) barely believable to (7) totally believable.
[cred4] I believe the CIO’s claim about DataX being a better long-term option than DataY seems …. Scale: (1) barely plausible to (7) totally plausible.
Cronbach’s alpha: 0.91.

IT Knowledge: The degree to which top managers know the effects of technical attributes on existing IT resources. Based on: Bassellier et al. (2003); Karahanna & Preston (2013). Scale: 7-point scale ranging from “barely” (1) to “completely” (7).
[Knw1] Before reading about DataX and DataY, I was ... aware that “in-memory database acceleration” enables “real-time (fast) data analysis.”
[Knw2] Before reading about DataX and DataY, I was ... aware that “supporting both row-oriented and column-oriented formats” means “there is no need to rewrite current software applications.”
[Knw3] Before reading about DataX and DataY, I was ... aware that “completing navigation steps twice in one hour” means “providing the results of user queries two times faster.”
[Knw4] Before reading about DataX and DataY, I was ... aware that “completing computer operations two times faster” means “new patterns in data will be found two times faster.”
Cronbach’s alpha: 0.83.

Manipulation Check
[CBLmanipulation] The CIO explained that DataX influences the business capabilities of the marketing and operations departments (e.g., answering business questions, machine utilization).
[RBLmanipulation] The CIO explained how DataX influences other IT resources (e.g., software applications).
[TLmanipulation] The CIO explained the technical attributes of DataX (e.g., supported architectures).

Control Factor
[GeneralFeeling] Generally, claims made by the CIO about the need for investment in information technologies are … Scale: 7-point scale ranging from “completely unreliable” (-3) to “completely reliable” (+3) (answered before reading the scenario).

4.4.3 Scenario

A scenario is designed to ensure that participants think in terms of strategic value creation when reading the description of the technology (see Table 4.9). The scenario includes several multiple-choice questions (see Table 4.9) to ensure that subjects have a correct understanding of the study and do not complete the questionnaire arbitrarily. Choosing the wrong answer to these questions stops the survey, takes participants to the last page, and thanks them for their participation. Incomplete responses are discarded. Moreover, after reading the description and before answering the questions related to the dependent variables, we ask respondents to write about the strategic role of DataX. Respondents are given Figure 4.2 and asked to provide a chain of events from installing DataX to shareholders’ long-term value.

Table 4.9 The scenario

The Scenario: Assume that you have been appointed as the new chief marketing officer of a big company that owns an e-commerce website. The main responsibility of the top management team (i.e., you and other chief officers) is to increase the long-term shareholder value.
This can happen by increasing the long-term revenue or decreasing the long-term cost. The following diagram shows some (but not all) tactics and strategies the top management team can apply to achieve this goal. Every now and then, you need to make a decision about whether to invest in a new technology to achieve this goal.
----Figure 4.2 is displayed here---
As a top manager, I am mainly responsible for ...
- Development cost of DataX.
- Long-term revenue/cost of the company.
- Short-term revenue/cost of the company.
- Organizing meetings.
[Condition: If "long-term revenue/cost" is not selected, skip to the end of survey.]
The Scenario continues:
You have only met the other executives, including the chief information officer (CIO), once and received a congratulatory email from all of them, including the CIO, during the previous week. Therefore, you do not know his personality, nor those of most other executives. You know that top management teams generally have unequal power distribution, and some executives may have hidden agendas. However, the CIO may be totally honest and committed to the success of the company.
I …. know the CIO. (1) barely ---- completely (7).
Generally, claims made by the CIO about the need for investment in information technologies are … (-3) completely unreliable --- completely reliable (+3).
The Scenario continues:
The CIO just sent you and other executives an email about the next executives' meeting. A decision about investing in a technology called DataX will be made at that meeting, to be held within three days. As part of the email, the CIO describes the DataX technology, a database engine. Keep in mind that some CIOs can be difficult to understand. Therefore, it is OK if you do not fully understand what the CIO says. You will see four rival languages that CIOs use at the end. Please read this description and answer the corresponding questions.
Which one is FALSE?
- DataX is a database engine.
- You will see four rival languages at the end.
- All CIOs can be difficult to understand.
- Not understanding what the CIO says is OK.
[Condition: If "All CIOs …" is not selected, skip to the end of survey.]
---Next Page---
In his email, the CIO claims that the DataX database engine can increase shareholders' long-term value more than DataY can. [Randomly place one of the descriptions (TL, RbL, CbL, or RbL+CbL) here]
[Real USRI] Please explain how DataX increases the long-term shareholder value more than DataY does. Use the image below [Figure 4.2] and the above description to provide a chain of events (or several chains of events) from DataX (at the bottom of the image) to long-term shareholder value (at the top of the image). A chain of events looks like: Using DataX -> (faster ?) -> (?) -> ... -> long-term shareholder's value, which is read as: Using DataX results in faster X, which results in ... Or like: Using DataX -> (fewer ?) -> (?) -> ... -> long-term shareholder's value, which is read as: Using DataX results in fewer X, which results in ... The first question mark (?) is filled using the above description. The rest are filled using your own judgment as well as the image below.
[A textbox that is used by respondents to enter the answer (non-mandatory)]
----Figure 4.2 is displayed here again for convenience---
[PageSubmitTime: a hidden timer to store the time respondents spend on this page]
---Next Page---
Questions in Table 4.8
4.4.4 Pilot
A pilot test with 22 subjects is conducted prior to the main experiment to identify potential issues with the measurement. We find that some respondents missed the strategic context of the scenario and thought in terms of a project and its development cost and short-term value. We underscore "long-term" wherever possible and add a relevant multiple-choice question to the scenario to stop the survey if the subject misses the context.
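The trap-question gating and screening rules described above (a wrong trap answer ends the survey; incomplete responses are discarded) can be sketched as a small screening routine. This is only an illustrative sketch; the field names and the `qualify` helper are hypothetical and not part of the actual survey platform.

```python
# Illustrative sketch of the trap-question screening described above.
# Field names and the qualify() helper are hypothetical, not the
# survey platform's real API.

TRAPS = {
    "responsibility": "Long-term revenue/cost of the company.",
    "false_statement": "All CIOs can be difficult to understand.",
}

def qualify(response: dict) -> bool:
    """A response counts only if every trap question is answered
    correctly and the questionnaire was completed."""
    for question, correct in TRAPS.items():
        if response.get(question) != correct:
            return False  # wrong trap answer: the survey was stopped here
    return response.get("complete", False)  # incomplete responses discarded

responses = [
    {"responsibility": "Long-term revenue/cost of the company.",
     "false_statement": "All CIOs can be difficult to understand.",
     "complete": True},
    {"responsibility": "Organizing meetings.", "complete": True},
]
kept = [r for r in responses if qualify(r)]
```

In the study itself this logic was enforced by the survey tool's skip conditions; the sketch only mirrors the qualification rule applied to the collected responses.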
Moreover, when they come across technical language, some participants withdraw from the study because they find it difficult to understand and think they are not qualified to participate. To reduce withdrawals, we place a message in the scenario saying, "Some CIOs can be difficult to understand. Therefore, it is OK if you do not fully understand what the CIO says."
We also notice some issues with measurement. First, the IT knowledge measure we adopt from Bassellier et al. (2003) measures knowledge of email, the World Wide Web, and so forth and is thus too general for our study. Therefore, we only ask about IT knowledge related to the description of the in-memory database acceleration technology. Moreover, we note that the questions related to the manipulation check are difficult to answer. Our discussion with participants indicates that a slight rewording of the measurement items and descriptions would resolve the issue to some extent. However, overall, deciding whether a description is written in terms of technical attributes, business capability, or influence on other resources is difficult. This can be attributed to the logical closeness of these concepts, which confuses some participants. Following the measurement changes, we ask five new respondents to answer the revised questionnaire; no further modification is needed.
4.4.5 Sample
135 participants, including 124 business managers recruited from a North American panel maintained by a marketing research firm and 11 experienced MBA students,23 constitute the sample. The sample size ensures a 0.94 power to detect an effect size of 0.5 on a Likert scale, equal to the medium effect size suggested by Cohen (1977). The marketing company sends 30,942 emails to the members of the panel; 544 individuals view the survey's first page, which includes the consent form.
149 individuals participate, but 25 responses are deemed unqualified because they miss the trap multiple-choice questions or spend less than two minutes reading the description and typing the answer to the written question. 124 responses are complete and pass the trap multiple-choice questions. Participants are rewarded with monetary or panel rewards based on the site to which they belong within the company's network. Among the 135 participants, 62 are female. They come from diverse industries, including manufacturing, finance, consultancy, health care, and education; 42% are between 26 and 35, whereas 58% are older than 35. On average, the sample has 13 years of industry experience and 6.5 years of experience in top management roles. Regarding the size of the organization as measured by the number of full-time employees, 33% are affiliated with small organizations (1-49), 28% belong to medium-sized organizations (50-499), and 36% work for big organizations (500 or more).
[Footnote 23: These students have at least three years of business experience and have passed a course on strategic management. Recruiting MBA students is consistent with the literature that contends that professionals and students make decisions along similar lines (Glaser et al., 2007, 2012; Menkhoff et al., 2013).]
A one-way analysis of variance (ANOVA) shows no significant differences between subjects randomly assigned to each of the four languages with respect to age, gender, organization size, years of experience, years of being in a top management role, and time spent reading the description.
4.4.6 Manipulation Check
76% of respondents believe the influence on business capabilities is explained when CbL is given (i.e., in descriptions in CbL and RbL+CbL) and do not believe the effects on business capabilities are explained when CbL is not given (i.e., in descriptions in RbL and TL) (see CBLmanipulation in Table 4.8).
65% of respondents believe the influence on other IT resources is explained when RbL is given (i.e., in descriptions in RbL and RbL+CbL) and do not believe it is explained when RbL is not given (i.e., in descriptions in CbL and TL) (see RBLmanipulation in Table 4.8). Also, 59% of respondents believe the technical attributes are explained when TL is given (i.e., in descriptions in TL) and do not believe they are explained when TL is not given (i.e., in descriptions in RbL, CbL, and RbL+CbL) (see TLmanipulation in Table 4.8).
We discuss the manipulation check questions with participants who provide their email addresses and hold a different view from ours. We notice that the logical closeness of these concepts confuses some participants, as they have not been taught the differences between them. For instance, although some of these respondents are given a description in CbL, they believe that technical attributes are also given because they consider enhanced business capabilities (e.g., two times faster decisions) to be an extension of technical attributes (e.g., faster database operations). The same closeness can exist between RbL and CbL: someone given RbL might have inferred the capabilities and thus answered as if seeing CbL. One respondent who was given a description in TL but disagreed that she was given TL explained that she saw the technical attributes but expected more technical attributes to be covered. Considering respondents' difficulty in answering the manipulation check questions, given that they are not trained in the differences between the languages, we are satisfied with these percentages (i.e., all higher than 50%). Note that card sorting had already corroborated that the descriptions belong to the targeted languages.
4.4.7 Experimental Procedures
All participants start by reading the scenario. Then, they are randomly assigned to one of the languages (i.e., TL, RbL, CbL, or RbL+CbL) and read its description.
Next, they answer the three questions related to the manipulation check and are asked to explain why DataX is a better long-term option than DataY by writing a short paragraph. They are instructed to use Figure 4.2 and the description of the technology to develop a chain of events from DataX to long-term shareholders' value. Next, participants evaluate the dependent variables as well as their IT knowledge prior to reading the description. Had we asked the IT knowledge questions (see questions identified as Knw1 to Knw4 in Table 4.8) before the description, respondents with low IT knowledge who were assigned a technical language might have learned enough from those questions to understand the technical description.
4.5 Data Analysis
A two-way multivariate analysis of covariance (MANCOVA) is conducted on the influence of the independent variables (IVs), that is, prescribed language (TL, RbL, CbL, RbL+CbL) and IT knowledge (low, high), on the dependent variables (DVs), that is, perceived shared language, understanding the strategic role of IT (USRI), and the credibility of claims, controlling for the covariate prior trust in the CIO. The average of the items of each DV and of IT knowledge represents its value in the analysis. To separate low and high IT knowledge, we use the median of IT knowledge across all cases. Accordingly, a value of 4 or lower is considered low IT knowledge, and a value greater than 4 is deemed high IT knowledge.
An observed mean difference between languages must be both statistically and theoretically significant. Generally, we use an alpha level of .05 for a significant result and .1 for a marginally significant result. The smallest effect size (i.e., mean difference between two languages) we want to observe is 0.5 on a Likert scale, which is equal to Cohen's (1977) medium effect size. Table 4.10 shows the descriptive statistics of the DVs.
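The scoring rules just described (averaging multi-item scales, the median split at 4, and the 0.5-point theoretical threshold) can be sketched in a few lines. The numbers and helper names below are illustrative, not the study's analysis code; the MANCOVA itself would be run in a statistics package.

```python
# Sketch of the scoring steps described above: average multi-item
# scales, median-split IT knowledge at 4, and flag differences that
# meet the 0.5-point theoretical threshold. Data are made up.

def scale_score(items):
    """Average the items of a multi-item scale."""
    return sum(items) / len(items)

def split_it_knowledge(score, cutoff=4):
    """Median split used in the study: <= 4 is low, > 4 is high."""
    return "low" if score <= cutoff else "high"

def theoretically_significant(mean_a, mean_b, threshold=0.5):
    """A mean difference must reach the 0.5-point (medium) effect
    size on the Likert scale, on top of statistical significance."""
    return abs(mean_a - mean_b) >= threshold

knw = scale_score([4, 5, 3, 4])      # 4.0, hence classified as low
group = split_it_knowledge(knw)
```

The point of the last helper is that a pairwise difference can be statistically significant yet too small to matter; both screens must pass before a hypothesis is counted as supported.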
The correlations between the dependent variables are lower than .90, suggesting the absence of multicollinearity (Tabachnick & Fidell, 2012). Box's test of equality of covariance matrices is non-significant (p>.01). Wilks' lambdas for all main and interaction effects are significant (p<.05). Levene's test is non-significant (p>.05) for shared language and credibility but significant for USRI; therefore, equality of variances across groups cannot be assumed for USRI. Hence, we choose a more conservative p value of .01 for accepting a significant main or interaction effect (Raykov & Marcoulides, 2008).
Table 4.10 Descriptive statistics (cells show Mean (Std. deviation, N))
Shared language
  TL:      Low -.5208 (1.22908, 16); High 1.5000 (1.20592, 18); Total .5490 (1.57609, 34)
  RbL:     Low .9000 (1.43107, 20); High 1.6429 (1.12062, 14); Total 1.2059 (1.34594, 34)
  CbL:     Low 1.5882 (.98269, 17); High 1.6481 (.99982, 18); Total 1.6190 (.97733, 35)
  RbL+CbL: Low 1.7778 (.93564, 12); High 1.9000 (1.02084, 20); Total 1.8542 (.97620, 32)
  Total:   Low .8923 (1.45906, 65); High 1.6810 (1.07292, 70); Total 1.3012 (1.32889, 135)
USRI
  TL:      Low 3.9875 (.99457, 16); High 5.4444 (.90633, 18); Total 4.7588 (1.19065, 34)
  RbL:     Low 4.5600 (1.49751, 20); High 5.5429 (1.03902, 14); Total 4.9647 (1.39911, 34)
  CbL:     Low 4.9765 (1.28235, 17); High 5.5111 (.81810, 18); Total 5.2514 (1.08719, 35)
  RbL+CbL: Low 5.7167 (.79753, 12); High 5.8700 (.89507, 20); Total 5.8125 (.84995, 32)
  Total:   Low 4.7415 (1.32793, 65); High 5.6029 (.90633, 70); Total 5.1881 (1.20491, 135)
Credibility
  TL:      Low 4.6719 (.82522, 16); High 5.6250 (1.06153, 18); Total 5.1765 (1.05982, 34)
  RbL:     Low 4.8500 (1.05569, 20); High 5.5536 (.99604, 14); Total 5.1397 (1.07519, 34)
  CbL:     Low 5.5147 (1.21021, 17); High 5.7361 (.92144, 18); Total 5.6286 (1.06130, 35)
  RbL+CbL: Low 5.8333 (.67700, 12); High 5.7250 (.93506, 20); Total 5.7656 (.83747, 32)
  Total:   Low 5.1615 (1.06931, 65); High 5.6679 (.95905, 70); Total 5.4241 (1.04125, 135)
Table 4.11 shows the results of the tests of between-subjects effects.
All main effects are statistically significant (p<.05), indicating significant differences in the DVs across the levels of prescribed language and IT knowledge. The interaction effect (language * IT knowledge) is significant for shared language (p=.001) and marginally significant for USRI and credibility (p=.026 and p=.050, respectively), suggesting that the overall effect of prescribed languages is greater when IT knowledge is low than when it is high. Hence, the data analysis corroborates the main and interaction effects of prescribed languages and IT knowledge on shared language (H1), USRI (H2), and credibility (H3).
Table 4.11 Tests of between-subjects effects (SS = Type III sum of squares, MS = mean square)
Corrected model:
  Shared language: SS = 77.737 (a), df = 8, MS = 9.717, F = 7.705, p = .000
  USRI: SS = 65.771 (b), df = 8, MS = 8.221, F = 8.044, p = .000
  Credibility: SS = 34.902 (c), df = 8, MS = 4.363, F = 4.980, p = .000
Intercept:
  Shared language: SS = 113.619, df = 1, MS = 113.619, F = 90.093, p = .000
  USRI: SS = 2085.292, df = 1, MS = 2085.292, F = 2040.431, p = .000
  Credibility: SS = 2320.449, df = 1, MS = 2320.449, F = 2648.771, p = .000
Prior trust in CIO (covariate):
  Shared language: SS = 5.591, df = 1, MS = 5.591, F = 4.433, p = .037
  USRI: SS = 16.581, df = 1, MS = 16.581, F = 16.224, p = .000
  Credibility: SS = 12.583, df = 1, MS = 12.583, F = 14.364, p = .000
Language:
  Shared language: SS = 34.736, df = 3, MS = 11.579, F = 9.181, p = .000
  USRI: SS = 20.134, df = 3, MS = 6.711, F = 6.567, p = .000
  Credibility: SS = 9.443, df = 3, MS = 3.148, F = 3.593, p = .016
IT knowledge:
  Shared language: SS = 16.223, df = 1, MS = 16.223, F = 12.864, p = .000
  USRI: SS = 17.284, df = 1, MS = 17.284, F = 16.913, p = .000
  Credibility: SS = 5.097, df = 1, MS = 5.097, F = 5.818, p = .017
Language * IT knowledge:
  Shared language: SS = 21.628, df = 3, MS = 7.209, F = 5.717, p = .001
  USRI: SS = 9.738, df = 3, MS = 3.246, F = 3.176, p = .026
  Credibility: SS = 7.051, df = 3, MS = 2.350, F = 2.683, p = .050
Error:
  Shared language: SS = 158.902, df = 126, MS = 1.261
  USRI: SS = 128.770, df = 126, MS = 1.022
  Credibility: SS = 110.382, df = 126, MS = .876
Total:
  Shared language: SS = 465.222, df = 135
  USRI: SS = 3828.320, df = 135
  Credibility: SS = 4117.063, df = 135
Corrected total:
  Shared language: SS = 236.639, df = 134
  USRI: SS = 194.541, df = 134
  Credibility: SS = 145.284, df = 134
a. R squared = .329 (adjusted R squared = .286)
b. R squared = .338 (adjusted R squared = .296)
c. R squared = .240 (adjusted R squared = .192)
A post-hoc pairwise comparison of the estimated marginal means of the DVs using the least significant difference (LSD) approach is conducted to evaluate H1-1 through H1-7, H2-1 through H2-7, and H3-1 and H3-2 (see Table 4.12). As H1-1, H2-1, and H3-1 predict, the differences between the languages are not significant when IT knowledge is high: the mean differences between languages are lower than .50, and the p-values are larger than .05. As for low IT knowledge, all hypotheses related to shared language (H1-2 through H1-7) and USRI (H2-2 through H2-7) are supported except H1-6 (i.e., CbL > RbL+CbL for shared language) and H2-5 (i.e., CbL > RbL for USRI). The data analysis also shows that the differences between the credibility of CbL and RbL+CbL as well as of RbL and TL (suggested in H3-2) are not significant in our data, rejecting CbL >= RbL+CbL and RbL > TL for credibility. Table 4.13 provides a summary of our conclusions regarding the hypotheses.
Table 4.12 Pairwise comparisons of dependent variables (each row shows the mean difference (I-J), standard error, significance, and 95% confidence interval; mirrored rows of the original output are omitted)
Shared language, low IT knowledge:
  TL - RbL:      -1.425*, SE .377, p .000, CI [-2.170, -.680]
  TL - CbL:      -2.078*, SE .391, p .000, CI [-2.853, -1.303]
  TL - RbL+CbL:  -2.376*, SE .430, p .000, CI [-3.228, -1.524]
  RbL - CbL:     -.653^, SE .371, p .081, CI [-1.387, .081]
  RbL - RbL+CbL: -.951*, SE .412, p .022, CI [-1.766, -.137]
  CbL - RbL+CbL: -.298, SE .427, p .486, CI [-1.142, .546]
Shared language, high IT knowledge:
  TL - RbL:      -.158, SE .400, p .694, CI [-.950, .634]
  TL - CbL:      -.148, SE .374, p .693, CI [-.889, .593]
  TL - RbL+CbL:  -.357, SE .365, p .331, CI [-1.080, .366]
  RbL - CbL:     .009, SE .400, p .981, CI [-.783, .802]
  RbL - RbL+CbL: -.199, SE .392, p .613, CI [-.975, .577]
  CbL - RbL+CbL: -.209, SE .365, p .569, CI [-.932, .515]
USRI, low IT knowledge:
  TL - RbL:      -.580^, SE .339, p .090, CI [-1.251, .091]
  TL - CbL:      -.935*, SE .352, p .009, CI [-1.633, -.238]
  TL - RbL+CbL:  -1.863*, SE .387, p .000, CI [-2.630, -1.096]
  RbL - CbL:     -.356, SE .334, p .289, CI [-1.016, .305]
  RbL - RbL+CbL: -1.283*, SE .370, p .001, CI [-2.016, -.550]
  CbL - RbL+CbL: -.927*, SE .384, p .017, CI [-1.687, -.168]
USRI, high IT knowledge:
  TL - RbL:      -.124, SE .360, p .732, CI [-.837, .589]
  TL - CbL:      -.067, SE .337, p .843, CI [-.734, .600]
  TL - RbL+CbL:  -.351, SE .329, p .288, CI [-1.002, .300]
  RbL - CbL:     .057, SE .360, p .874, CI [-.656, .770]
  RbL - RbL+CbL: -.227, SE .353, p .521, CI [-.926, .472]
  CbL - RbL+CbL: -.284, SE .329, p .389, CI [-.935, .367]
Credibility, low IT knowledge:
  TL - RbL:      -.184, SE .314, p .558, CI [-.806, .437]
  TL - CbL:      -.796*, SE .326, p .016, CI [-1.442, -.150]
  TL - RbL+CbL:  -1.278*, SE .359, p .001, CI [-1.988, -.568]
  RbL - CbL:     -.612^, SE .309, p .050, CI [-1.223, 7.048E-6]
  RbL - RbL+CbL: -1.093*, SE .343, p .002, CI [-1.772, -.415]
  CbL - RbL+CbL: -.482, SE .356, p .178, CI [-1.185, .222]
Credibility, high IT knowledge:
  TL - RbL:      .049, SE .334, p .883, CI [-.611, .709]
  TL - CbL:      -.111, SE .312, p .722, CI [-.729, .506]
  TL - RbL+CbL:  -.035, SE .305, p .909, CI [-.638, .568]
  RbL - CbL:     -.160, SE .334, p .632, CI [-.821, .500]
  RbL - RbL+CbL: -.084, SE .327, p .797, CI [-.731, .563]
  CbL - RbL+CbL: .076, SE .305, p .803, CI [-.527, .679]
*. The mean difference is significant at the .05 level.
^. The mean difference is significant at the .1 level (marginally significant).
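The theoretical-significance screen applied to the pairwise comparisons can be illustrated with the raw cell means from Table 4.10 (shared language, low IT knowledge). This is only a sketch: Table 4.12 reports covariate-adjusted marginal means, so its differences deviate slightly from the raw ones below, and the full LSD test also requires the standard errors and the t distribution, which the sketch omits.

```python
# Sketch: pairwise mean differences screened against the 0.5-point
# theoretical effect-size floor. Raw cell means are from Table 4.10
# (shared language, low IT knowledge); Table 4.12 uses
# covariate-adjusted means, so its values differ slightly.
from itertools import combinations

means = {"TL": -0.5208, "RbL": 0.9000, "CbL": 1.5882, "RbL+CbL": 1.7778}

def pairwise_differences(means, threshold=0.5):
    """Map each pair (A, B) to (mean of B minus mean of A, whether
    the absolute difference clears the effect-size threshold)."""
    out = {}
    for a, b in combinations(means, 2):
        diff = means[b] - means[a]
        out[(a, b)] = (round(diff, 3), abs(diff) >= threshold)
    return out

diffs = pairwise_differences(means)
# The ("CbL", "RbL+CbL") pair falls below the 0.5 floor, mirroring
# the non-significant CbL vs. RbL+CbL comparison behind H1-6.
```

Running the same screen for USRI and credibility reproduces the qualitative orderings summarized in Table 4.13.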
Table 4.13 The results of testing the hypotheses (effect sizes are reported only when larger than .5)
H1 Main effects and interaction effect of IT knowledge and prescribed language on shared language: significant; all supported
  H1-1 RbL+CbL = CbL = RbL = TL for high IT knowledge: non-significant; supported
  H1-2 CbL > TL for low IT knowledge: significant, effect size 2.08; supported
  H1-3 CbL > RbL for low IT knowledge: marginally significant, effect size .65; supported
  H1-4 RbL > TL for low IT knowledge: significant, effect size 1.43; supported
  H1-5 RbL+CbL > RbL for low IT knowledge: significant, effect size .95; supported
  H1-6 CbL > RbL+CbL for low IT knowledge: non-significant; rejected
  H1-7 RbL+CbL > TL for low IT knowledge: significant, effect size 2.38; supported
  Shared language: CbL = RbL+CbL > RbL > TL for low IT knowledge; RbL+CbL = CbL = RbL = TL for high IT knowledge
H2 Main effects and interaction effect of IT knowledge and prescribed language on USRI: significant main effects, marginally significant interaction effect; all supported
  H2-1 RbL+CbL = CbL = RbL = TL for high IT knowledge: non-significant; supported
  H2-2 RbL > TL for low IT knowledge: marginally significant, effect size .58; supported
  H2-3 CbL > TL for low IT knowledge: significant, effect size .94; supported
  H2-4 RbL+CbL > TL for low IT knowledge: significant, effect size 1.86; supported
  H2-5 CbL > RbL for low IT knowledge: non-significant; rejected
  H2-6 RbL+CbL > RbL for low IT knowledge: significant, effect size 1.23; supported
  H2-7 RbL+CbL = CbL for low IT knowledge: significant (RbL+CbL > CbL), effect size .93; rejected
  USRI: RbL+CbL > CbL = RbL > TL for low IT knowledge; RbL+CbL = CbL = RbL = TL for high IT knowledge
H3 Main effects and interaction effect of IT knowledge and prescribed language on credibility: significant main effects, marginally significant interaction effect; all supported
  H3-1 TL = RbL = RbL+CbL = CbL for high IT knowledge: non-significant; supported
  H3-2 CbL >= RbL+CbL > RbL > TL for low IT knowledge: observed CbL = RbL+CbL > RbL = TL (non-significant for CbL >= RbL+CbL and RbL > TL); partially supported
  Credibility: CbL = RbL+CbL > RbL = TL for low IT knowledge; RbL+CbL = CbL = RbL = TL for high IT knowledge
It is also worth mentioning that we check the main and interaction effects of prescribed language and IT knowledge on the time spent reading the description and answering the written question, as one might argue that respondents in some conditions spent more time on the task and that this explains the observed differences in the measures of interest. The main effect of prescribed language yields F(3, 126) = .47, p > .01, indicating a non-significant difference between languages in the time spent. The main effect of IT knowledge yields F(1, 126) = .58, p > .01, suggesting a non-significant difference in the time spent between low and high IT knowledge. The interaction of language and IT knowledge yields F(3, 126) = 1.35, p > .01, indicating that the interaction effect is also non-significant.
4.6 Discussion
4.6.1 Discussion of Results
As for shared language, the data analysis shows that RbL+CbL = CbL > RbL > TL for respondents with low IT knowledge and RbL+CbL = CbL = RbL = TL for respondents with high IT knowledge (see Figure 4.7).24 The data analysis demonstrates that we succeed in achieving our main goal (i.e., designing languages that enhance perceived shared language for managers with low IT knowledge). The results show that all three prescribed nomenclatures develop a higher shared language than TL for these managers.
Moreover, all hypotheses but one are supported. We expected that the RbL component, which contains less familiar words, would lower the perceived shared language of RbL+CbL below that of a pure CbL description. However, the difference between RbL+CbL and CbL is not significant. The reason might involve the learning operation that occurs in RbL+CbL (see Table 4.5). It appears that learning about the relationship between an IT resource and a business capability (e.g., faster data analysis -> faster decision making) develops a feeling of shared language. A top business manager already has the node "faster decision making" in his or her memory, and this node is well understood (i.e., it has occupied the memory many times). The learning operation instantly adds the RbL nodes to the memory by connecting them to these well-established nodes. Therefore, RbL does not reduce the perceived shared language. Note that when the top manager reads RbL or TL alone, he or she cannot connect either to the existing nodes in his or her memory.
[Footnote 24: Taking into account that the smallest effect size we want to observe is 0.5 on a Likert scale.]
Figure 4.7 Comparing shared language developed by prescribed languages for different IT knowledge, controlling for the covariate
As for USRI, the data analysis reveals that RbL+CbL > CbL = RbL > TL for respondents with low IT knowledge, whereas RbL+CbL = CbL = RbL = TL for respondents with high IT knowledge (see Figure 4.8). This suggests that all three suggested nomenclatures achieve an improvement over TL when IT knowledge is low. Moreover, all hypotheses but two are supported. First, the difference between RbL and CbL is not significant, whereas we expected to observe a higher USRI for CbL.
Our analysis of the written answers about the strategic role of IT (see section 4.4.3) shows that these respondents either connect the phrase "faster data analysis" to an incorrect capability (e.g., "flexible process") in Figure 4.2 or connect it to "long-term revenue" without mentioning the intermediate nodes. It appears that terms like "faster" can create a feeling of USRI that may not be rooted in the real USRI (see the question identified as RealUSRI in Table 4.9). Note that the perceived USRI (see questions identified as und1 to und5 in Table 4.8) is the antecedent of credibility and strategic alignment according to the literature. Second, RbL+CbL develops a USRI that is higher than CbL's, while we hypothesized a non-significant difference between them. Apparently, the aforementioned learning operation (i.e., learning about the relationship between an IT resource and a business capability) develops a feeling of understanding in respondents with low IT knowledge. We speculate that this feeling of understanding influences the perception of USRI and results in a marginally higher level of USRI for DataX when the combined language is used.
Figure 4.8 Comparing USRI developed by prescribed languages for different IT knowledge, controlling for the covariate
Regarding credibility, the data analysis shows that CbL = RbL+CbL > RbL = TL for respondents with low IT knowledge and RbL+CbL = CbL = RbL = TL for respondents with high IT knowledge (see Figure 4.9). Hence, CbL and RbL+CbL, as opposed to TL, can improve credibility for top managers with low IT knowledge. However, certain issues exist with the hypotheses related to credibility. Whereas SCT accurately predicts the equal credibility of the languages for high IT knowledge, it cannot correctly predict the differences for low IT knowledge. First, while RbL develops a higher shared language and USRI than TL does, it does not develop higher credibility. Second, the difference between RbL+CbL and CbL is not significant.
Apparently, SCT is not sufficient for predicting credibility when IT knowledge is low. Appendix I applies two other theories that predict credibility, construal fit theory (CFT) (Hansen & Wanke, 2010) and argumentation theory (AT) (Toulmin, 1958; Verheij, 2001; Hunter, 2007); neither can accurately predict credibility either. This is acceptable, as many factors can moderate the influence of shared language and USRI on credibility or influence it directly. Future studies should delve into the credibility of these languages.
Figure 4.9 Comparing credibility developed by prescribed languages for different IT knowledge, controlling for the covariate
Overall, the results show that when IT knowledge is low, RbL+CbL is the best choice for CIOs to use when speaking with top managers (see Table 4.14). Although its perceived shared language and credibility are not significantly higher than CbL's, RbL+CbL develops a higher level of USRI. Compared to RbL, CbL develops a higher level of shared language, credibility, and, in many cases, USRI. Hence, CbL is a better option than RbL. As for RbL and TL, whereas the two may not differ significantly in credibility, RbL develops a higher perceived shared language and USRI. Therefore, RbL seems to be a better choice than TL. When IT knowledge is high, the differences between the languages diminish, and CIOs can choose any language to communicate with top managers.
Table 4.14 Comparing the observed shared language, USRI, and credibility of the prescribed languages
  Observed shared language: RbL+CbL = CbL > RbL > TL for low IT knowledge; RbL+CbL = CbL = RbL = TL for high IT knowledge
  Observed USRI: RbL+CbL > CbL = RbL > TL for low IT knowledge; RbL+CbL = CbL = RbL = TL for high IT knowledge
  Observed credibility: RbL+CbL = CbL > RbL = TL for low IT knowledge; RbL+CbL = CbL = RbL = TL for high IT knowledge
4.6.2 Contributions
According to Jentsch and Beimborn (2014), there exists a need for studies that explore language styles in the area of strategic IT-business alignment.
CIOs' language can influence strategic alignment through three powerful antecedents, i.e., perceived shared language, shared understanding of the strategic role of IT, and the credibility of the CIO's claims (Preston & Karahanna, 2009; Karahanna & Preston, 2013). Although the alignment literature advises CIOs not to use a technical language in their communication with top business managers (Reich & Benbasat, 2000; Preston & Karahanna, 2009), it provides almost no guidance on the appropriate business language for CIOs to use. The few attempts reported in the literature offer no theoretical support to explain why and how such language concepts can help increase strategic alignment.
This study makes three key contributions to the literature on strategic alignment. First, it introduces two new business languages (i.e., RbL and CbL) that CIOs can use in their conversations with the top management team. The study leverages the concepts used by two extensively applied theories in the strategic management literature, i.e., RBV and CBV, to design the two languages. The merit of these languages is supported by the empirical evidence in this study as well as by their vast application in research studies to discuss and communicate strategies and competitive advantage.
Second, the study goes beyond designing languages by comparing the efficiency of the prescribed languages in terms of the three aforementioned antecedents of strategic alignment. It reveals that when top managers have high knowledge of the IT resource being discussed, there exists no significant difference between the prescribed languages. On the other hand, when CIOs deal with top managers with low technical knowledge of the IT resource, the prescribed languages bring about different levels of perceived shared language, understanding of the strategic role of the IT resource, and credibility of the claims.
The study reveals that RbL+CbL, CbL, and RbL are, in that order, better choices when CIOs communicate with such top business managers.
Finally, the study provides theoretical and empirical support for considering shared language and shared understanding as two different yet correlated concepts, as opposed to viewing shared language as a dimension of shared understanding. Whereas studies by Bittner and Leimeister (2013) as well as Jentsch and Beimborn (2014) take the latter view of these two antecedents of strategic alignment, this study calls for considering them two different concepts, as advocated by Preston and Karahanna (2009). An indication of this difference can be found in the data analysis of this study, where RbL improves shared language significantly in comparison with TL but only weakly enhances USRI. The semantic network memory model can explain this difference. Shared language is conceptualized as having shared nodes in the context of semantic network memory (see Figure 4.5), whereas shared understanding is conceptualized as having a path from the technology to long-term shareholders' value that is jointly created by the CIO and top managers (see Figure 4.6). The term "shared" in shared language refers to the overlap between the words used by the CIO and the words that exist in top managers' memories. In shared understanding, by contrast, "shared" refers to shared custody: the CIO and top managers each hold part of the path in their memories but have a joint responsibility for creating the path from the IT resource to long-term shareholders' value. Hence, the study invites researchers to view shared language and shared understanding as two different constructs.
In terms of practical implications, the study suggests that CIOs use RbL+CbL when they have plenty of time to communicate with top managers who have low IT knowledge. However, note that CbL produces comparatively shorter messages and conversations.
Therefore, if CIOs have limited access to and time with top managers, CbL seems to be the better option. Moreover, not all CIOs have the business competence (Bassellier & Benbasat, 2004) needed to communicate in CbL. RbL seems to be a better choice than TL for such CIOs. In particular, if CIOs focus on the effect of an IT resource on the IT resources that top managers currently deal with (i.e., users, software applications, and data rather than infrastructural IT resources), they have a higher chance of developing understanding and credibility.

Not only can CIOs better speak to top managers in CbL and RbL; documents and reports are also better when written in these languages. IT strategies, business cases, and monthly reports are examples that can hugely benefit from the explanatory power of these two languages. We also suggest that IT vendors prepare three types of descriptions promoting their products and services: one in TL for developers and IT practitioners with a technical background, one in RbL for CIOs, IT consultants, and top managers who have high general IT knowledge, and one in CbL for business managers who have low IT knowledge. In particular, IT start-ups can rely on these different descriptions to convince venture capitalists with various levels of technology savviness to invest in their companies. As external consumers of technology investment descriptions, stock analysts may issue more favorable recommendations for firms whose IT investments are explained in these languages, especially RbL+CbL. Furthermore, the results suggest to IS design-science researchers that a design artifact aimed at enhancing strategic alignment should include the concepts suggested by CbL and RbL. Such an artifact could be added to enterprise architecture frameworks like TOGAF and DODAF as well as to the business model canvas (Osterwalder & Pigneur, 2010). Developing these artifacts can be the focus of future studies.
4.6.3 Limitations and Future Study

This study is subject to several limitations. First, another potential language, namely value-based language (VbL), could have been studied. Increasing strategic value is the main goal of top managers. Therefore, describing the effect of an IT resource on long-term value (e.g., double-digit revenue growth) can influence credibility, understanding, and shared language. That said, studying the effect of adding or removing value-based terms would have required a large sample size to evaluate the dependent variables for all possible combinations (i.e., VbL, RbL+VbL, CbL+VbL, and RbL+CbL+VbL). Budget and time constraints on accessing the right sample would not have allowed us to complete such an ambitious goal. Therefore, we left the study of VbL for future work.

Second, we have not studied the effect of the languages on cognitive overload, in which the required cognitive processing exceeds the available cognitive capacity (Mayer & Moreno, 2003). Combined languages like RbL+CbL and CbL+VbL develop a higher level of understanding and credibility, but they come at the cost of an increased cognitive load. We had developed a longer case to study the degree to which cognitive overload affects the understanding and credibility of the descriptions. However, the survey was already long (it required 12-18 minutes), and requesting more time commitment would have reduced the response rate and increased the cost significantly.

Third, the data were collected online. Therefore, the sample is skewed toward managers with medium to high IT knowledge and does not contain enough managers with low and very low IT knowledge. Had we gained access to managers with lower IT knowledge using paper-and-pencil surveys, we believe we would have observed a larger effect size. However, more and more North American managers are becoming familiar with websites and email. Hence, we believe our sample is a better representation of future managers.
Fourth, this study focuses on top managers as the interlocutors. Our assumption was that average CIOs are capable of identifying the related resources of an IT resource and its effects on business and can therefore speak in terms of them. However, many CIOs and IT practitioners have low business competence (Bassellier & Benbasat, 2004) and are not able to identify the affected capabilities. A study is required to prescribe a language that business managers can use to extract the role of an IT resource from a conversation with such CIOs.

Fifth, even though RbL improves shared language significantly, it only weakly enhances USRI. This reveals that the RbL nodes exist in top managers’ memories (see Figure 4.5) but are not connected to the nodes corresponding to business capabilities and values (see Figure 4.6). The case of RbL shows that IT knowledge and business knowledge can exist as two isolated islands of connected nodes in top managers’ memories. This requires recognizing a new type of knowledge in strategic alignment, different from IT knowledge and business knowledge. We name it IT-business mapping knowledge: the degree to which managers can map changes in IT resources to changes in business capabilities (e.g., two times faster data analysis means two times faster decision making). We believe the definitions of IT competency (Bassellier et al., 2003) and shared domain knowledge (Preston & Karahanna, 2009) should be revised accordingly. Moreover, future studies should shed light on the effect of this type of knowledge on IT management and strategic alignment.

Sixth, our attempt to explain and predict credibility indicated that neither social capital theory, nor construal fit theory, nor argumentation theory can provide a precise picture of the credibility of the languages (see Appendix I). A study is required to examine the interaction of these theories or to introduce a new theory that can explain the pattern we observed in our data.
Seventh, a field study is required to verify the effect of the prescribed languages on credibility, alignment, and even firm performance. Using text-mining algorithms, conversations between CIOs and top managers can be evaluated to check for the influence of the prescribed languages on alignment. Studies can leverage text-mining applications to estimate a score for the frequency of the prescribed languages in public documents (e.g., reports, strategies, business cases) to check whether the languages make a difference in project and organizational performance (e.g., stock price). Researchers can also investigate the effect of window dressing of unsound projects using the prescribed languages.

Finally, a possible limitation of the study is the small differences found between the three languages. One may wonder whether such small effects have any impact in the real world. We would like to point out that even if the effects are small, it is still remarkable that such a slight manipulation in language can account for any variance in credibility, understanding, and hence strategic alignment. Moreover, we believe that even a small effect in business at a strategic level can have an enormous financial impact.

4.7 Conclusion

Shared language is an important yet neglected antecedent of strategic IT-business alignment. However, the literature of alignment does not recommend any terminologies to CIOs. The research questions addressed in this study are what business terminology can be applied by CIOs and which terminology can better enhance the antecedents of strategic alignment. Using the existing literature of strategic management, the terminologies of two theories are suggested: the resource-based view and the capability-based view. This study demonstrated that for top managers with high IT knowledge, the CIO’s language cannot bring a significant improvement in the antecedents of strategic alignment.
However, when IT knowledge is low, all three languages (i.e., resource-based, capability-based, and their combination) provide significantly higher perceived shared language and understanding of the strategic role of IT than technical language. The combined language, capability-based language, and resource-based language, in that order, better improve strategic alignment. Although the combinatory approach is the best choice, it has two issues: CIOs may not have the business competency to come up with the capability-based language, and not all CIOs enjoy sufficient access to top managers to communicate the longer messages developed by combining both capabilities and resources. In such cases, resource-based or capability-based language is the best choice.

Chapter 5: Conclusion

This dissertation addresses some current challenges in operational and strategic alignment. This chapter summarizes our contributions to the literature of IT-business alignment. First, this dissertation introduces capacity misalignment as a type of operational alignment. The dissertation reveals that what is commonly known as IT unavailability is in fact a misalignment between the actual IT capacity provided by the IT department and the IT capacity demanded by business (which is rooted in demanded business capacity). Whereas the extant literature conceptualizes IT unavailability as downtime, Chapter 3 suggests that a mismatch perspective of IT unavailability is a more comprehensive conceptualization. Not only does the recommended conceptualization include Type I IT unavailability, namely downtime; it also includes Type II IT unavailability, where the IT system is running but is unavailable to some or all users due to excessive demand. Moreover, capacity deficit is suggested as a measure of IT unavailability in addition to the existing measures, i.e., frequency and duration.
In particular, applying the new measure in IT contracts and SLAs is highly recommended in order to protect organizations against all types of IT unavailability. Finally, this more accurate view of IT unavailability is the basis for understanding how business initiates and exacerbates IT unavailability and how IT unavailability becomes a strategic risk.

Second, the dissertation explains how IT unavailability, as a type of operational misalignment, leads to strategic risk. Whereas many practitioners and researchers hold a domino view, Chapter 3 reveals that a sequence of causes and effects from IT unavailability to strategic risk cannot explain all cases of IT unavailability that led to strategic risk. Chapter 3 demonstrates that a system dynamics theory that includes several amplifying causal loops has a higher power to explain such cases. Hence, the reverse causal effect of business incompetency on IT unavailability in particular, and on operational misalignment in general, should be taken into account in such cases.

Third, the dissertation provides empirical support that overall operational alignment plays a negative role during IT unavailability, making organizations more vulnerable to capacity misalignment (see Figure 3.2). The higher the overall operational alignment (i.e., the coverage of activities, data entities, user interactions, organizational roles, controls, and norms by IT), the lower the chance of having standby backup processes that are able to replace the affected IT or lower the excessive demand for it. Examples are enterprise-wide systems like ERP that support several business processes in sales, marketing, operations, logistics, accounting, and so forth.
Developing backup IT systems or manual business processes that can cover all those tasks, creating and updating disaster recovery plans to be able to initiate these backup systems and processes, and practicing incident response are complex and expensive, deterring many organizations from having a completely reliable plan for the discontinuity of such systems. This makes organizations vulnerable to their unavailability. Hence, researchers and practitioners must be cautious about enhancing operational alignment without at the same time planning to improve IT detachability (i.e., having reliable backup processes).

Fourth, the dissertation also aims to enhance strategic alignment via shared language. Unlike some other antecedents of strategic alignment, such as the CIO’s business knowledge and top managers’ IT knowledge, shared language can be improved in a timely manner in today’s fast-changing environment. Yet the only suggestion of the extant literature for CIOs is to avoid technical language and use business terminology, without suggesting any languages. Chapter 4, however, uses the literature of strategic management to design two business languages: resource-based language and capability-based language. The study reveals that a combination of capability-based and resource-based language is generally more alignment-friendly than either language alone. However, the combined approach requires longer conversations with top managers, which may not always be possible. In such cases, capability-based language is the best option. Moreover, not all CIOs have the business competency to come up with the capability-based language. Such CIOs find resource-based language friendlier toward alignment than technical language.

Fifth, both studies in Chapter 3 and Chapter 4 highlight the role of the business capability as a notion that can help understand and improve strategic and operational alignment. Chapter 3 suggests a capability-based impact analysis of capacity misalignment.
Chapter 4 recommends a capability-based language as highly effective in enhancing strategic alignment. Note that capability is also a stable concept in today’s constantly changing environment, since it is independent of the business processes that materialize it and of the required resources (Ulrich & Rosen, 2011). Hence, this dissertation provides empirical support that business capability can be the concept that bridges the gap between the business and IT worlds in today’s fast-changing environment.

In addition to its contributions to the alignment literature, the dissertation introduces the modernist grounded theory methodology. Chapter 2 customized Glaserian GT to develop theories compatible with a positivist ontology of theories. Consequently, theories can be verified using positivists’ favorite methods, which allows the mixed approach advocated by Goes (2013) and Venkatesh et al. (2013). As well, the constructs generated can be appreciated and borrowed by the larger community of positivist researchers. Moreover, the customized methodology leverages the notions of construct validity, conclusion validity, internal validity, and external validity to ensure that conclusions from data have a higher chance of reproducibility by other researchers and datasets. This strengthens the accuracy of GT that Burton-Jones and Lee (2017) called for and ensures that the theory is grounded in data rather than in researchers.

The previous chapters provide a full description of related future studies. In the following, we propose some other studies related to alignment. First, we suggest studying the strategic impact of other types of operational misalignment. Chapter 3 studies how capacity misalignment becomes a strategic risk. Capacity misalignment is a type of functionality misalignment.
Using our modernist GT to develop theories that describe how data, usability, role, control, and cultural misalignment (see Figure 1.1) leave strategic impacts on organizations could be the focus of future studies. Similar to Chapter 3, such studies may also reveal the true nature of these misalignment types as well as new ways to evaluate strategic risk.

Second, we call for new prescriptive studies that focus on improving strategic alignment via its powerful antecedents other than shared language. Chapter 4 targets the same goal by prescribing a business language to enhance shared language. Prescriptive studies are required to enhance strategic alignment via TMT’s IT knowledge, CIO’s business knowledge, and so forth. In particular, the IT-business mapping knowledge unearthed in Chapter 4 can be a great candidate. Designing artifacts that facilitate the communication of such knowledge using visual presentation is a possible way to achieve this goal.

Finally, the relationship between strategic alignment and operational alignment has mainly been studied in general terms by the extant literature (see Figure 1.1). To open this black box, we suggest two ways. First, the modernist GT suggested in Chapter 2 can be used to shed light on the process through which strategic misalignment leads to operational misalignment. Second, this dissertation has found the notion of business capability useful in understanding alignment. We speculate that this concept can also be leveraged to explain the positive relationship between strategic alignment and operational alignment.

Bibliography

Abuhussein, A., Bedi, H., & Shiva, S. 2012. “Evaluating Security and Privacy in Cloud Computing Services: A Stakeholder's Perspective,” Proceedings of the 7th International Conference for Internet Technology and Secured Transactions (ICITS 2012), London, United Kingdom, pp. 388-395.

Alter, S., and Sherer, S. 2004.
“A General, but Readily Adaptable Model of Information System Risk,” Communications of the Association for Information Systems (14:1), pp. 1-28.

Anderson, J. R. 1972. “A Simulation Model of Free Recall,” in G. H. Bower (ed.), The Psychology of Learning and Motivation, New York: Academic Press.

Armstrong, C. P., & Sambamurthy, V. 1999. “Information Technology Assimilation in Firms: The Influence of Senior Leadership and IT Infrastructures,” Information Systems Research (10:4), pp. 304-328.

Austin, R. D., Nolan, R. L., & O'Donnell, S. 2009. Adventures of an IT Leader, Boston: Harvard Business School Publishing Corporation.

Avvenuti, M., Cimino, M. G., Cresci, S., Marchetti, A., & Tesconi, M. 2016. “A Framework for Detecting Unfolding Emergencies Using Humans as Sensors,” SpringerPlus (5:43), pp. 1.

Azzopardi, M., Levine, A., and Lamb, J. 2011. “Defining a Risk-based Approach to the Design and Technology Usage of Systems to Attain IT Availability and Recoverability Requirements,” in 8th International Conference & Expo on Emerging Technologies for a Smarter World (CEWIT), New York, NY, pp. 1-6. IEEE.

Bain, J. S. 1968. Industrial Organization. New York: John Wiley.

Baird, I. S., & Thomas, H. 1990. “What is Risk Anyway?,” in R. A. Bettis and H. Thomas (eds.), Risk, Strategy, and Management, Greenwich, CT: JAI Press, pp. 21-52.

Baird, I. S., & Thomas, H. 1985. “Toward a Contingency Model of Strategic Risk Taking,” Academy of Management Review (10:2), pp. 230-243.

Baker, J., Jones, D., Cao, Q., and Song, J. 2011. “Conceptualizing the Dynamic Strategic Alignment Competency,” Journal of the Association for Information Systems (12:4), pp. 299-322.

Balaouras, S., & Dines, R. 2010. “Forrester's Business Impact Analysis Template,” Forrester, July 23, 2010, available at: http://www.forrester.com/Forresters+Business+Impact+Analysis+Template/fulltext/-/E-RES57360, accessed: 2016-05-01.

Barney, J. B. 1991.
“Firm Resources and Sustained Competitive Advantage,” Journal of Management (17:1), pp. 99-120.

Barney, J. B. 1995. “Looking Inside for Competitive Advantage,” Academy of Management Executive (9:4), pp. 49-61.

Barney, J. B., Ketchen Jr., D. J., and Wright, M. 2011. “The Future of Resource-based Theory: Revitalization or Decline?,” Journal of Management (37:5), pp. 1299-1315.

Bassellier, G., Benbasat, I., and Reich, B. H. 2003. “The Influence of Business Managers' IT Competence on Championing IT,” Information Systems Research (14:4), pp. 317-336.

Beaudet, S. T., Courtney, T., and Sanders, W. H. 2006. “A Behavior-based Process for Evaluating Availability Achievement Risk Using Stochastic Activity Networks,” in Proceedings of the Annual Reliability and Maintainability Symposium (RAMS'06), Washington, DC, pp. 21-28.

Belk, R., Fischer, E., Kozinets, R. 2012. Qualitative Consumer and Marketing Research, Sage.

Bell-Basca, B., Grotzer, T. A., Donis, K., & Shaw, S. 2000. “Using Domino and Relational Causality to Analyze Ecosystems: Realizing What Goes Around Comes Around,” in Annual Meeting of the National Association of Research in Science Teaching, New Orleans, LA.

Benbasat, I. 2001. “Editorial Notes,” Information Systems Research (12:2), pp. iii-iv.

Benbasat, I., & Zmud, R. W. 1999. “Empirical Research in Information Systems: The Practice of Relevance,” MIS Quarterly (23:1), pp. 3-16.

Benbasat, I., Goldstein, D. K., and Mead, M. 1987. “The Case Research Strategy in Studies of Information Systems,” MIS Quarterly (11:3), pp. 369-385.

Benlian, A., Koufaris, M., & Hess, T. 2011. “Service Quality in Software-as-a-Service: Developing the SaaS-Qual Measure and Examining its Role in Usage Continuance,” Journal of Management Information Systems (28:3), pp. 85-126.

Bennett, C., & Tseitlin, A. 2012. “Chaos Monkey Released into the Wild,” Netflix, available at: http://techblog.netflix.com/2012/07/chaos-monkey-released-into-wild.html, accessed: 2016-01-07.
Bergeron, F., Raymond, L., & Rivard, S. 2001. “Fit in Strategic Information Technology Management Research: An Empirical Comparison of Perspectives,” Omega (29:2), pp. 125-142.

Bergeron, F., Raymond, L., & Rivard, S. 2004. “Ideal Patterns of Strategic Alignment and Business Performance,” Information & Management (41:8), pp. 1003-1020.

Bharadwaj, A. S. 2000. “A Resource-Based Perspective on Information Technology Capability and Firm Performance: An Empirical Investigation,” MIS Quarterly (24:1), pp. 169-196.

Bhattacherjee, A. 2001. “Understanding Information Systems Continuance: An Expectation-Confirmation Model,” MIS Quarterly (25:3), pp. 351-370.

Birks, D. F., Fernandez, W., Levina, N., & Nasirin, S. 2013. “Grounded Theory Method in Information Systems Research: Its Nature, Diversity and Opportunities,” European Journal of Information Systems (22:1), pp. 1-8.

Bitner, M. J., Brown, S. W., and Meuter, M. L. 2000. “Technology Infusion in Service Encounters,” Journal of the Academy of Marketing Science (28:1), pp. 138-149.

Bitsch, V. 2005. “Qualitative Research: A Grounded Theory Example and Evaluation Criteria,” Journal of Agribusiness (23:1), pp. 75-91.

Bittner, E. A. C., & Leimeister, J. M. 2014. “Creating Shared Understanding in Heterogeneous Work Groups: Why it Matters and How to Achieve it,” Journal of Management Information Systems (31:1), pp. 111-144.

Borger, G. 2014. “How Obama and the Democrats Lost the Election,” CNN, 2014/11/05, available at: http://www.cnn.com/2014/11/05/opinion/borger-how-democrats-lost-the-election/, accessed: 2016-05-01.

Bradlow, E. T., & Fitzsimons, G. J. 2001. “Subscale Distance and Item Clustering Effects in Self-Administered Surveys: A New Metric,” Journal of Marketing Research (38:2), pp. 254-261.

Broyles, R. W. 2006. Fundamentals of Statistics in Health Administration. Jones & Bartlett Learning.

Bunge, M. 1977. Treatise on Basic Philosophy: Volume 3: Ontology I: The Furniture of the World. Dordrecht, Holland: D.
Reidel Publishing Company.

Bunge, M. 1979. Treatise on Basic Philosophy: Volume 4: Ontology II: A World of Systems. Dordrecht, Holland: D. Reidel Publishing Company.

Burton-Jones, A., & Gallivan, M. J. 2007. “Toward a Deeper Understanding of System Usage in Organizations: A Multilevel Perspective,” MIS Quarterly (31:4), pp. 657-679.

Burton-Jones, A., McLean, E., & Monod, E. 2015. “Theoretical Perspectives in IS Research: From Variance and Process to Conceptual Latitude and Conceptual Fit,” European Journal of Information Systems (24), pp. 664-679.

Burton-Jones, A., & Lee, A. S. 2017. “Thinking about Measures and Measurement in Positivist Research: A Proposal for Refocusing on Fundamentals,” Information Systems Research, published online in Articles in Advance, 15 Jun 2017, https://doi.org/10.1287/isre.2017.0704.

Carson, S., Madhok, A., & Wu, T. 2006. “Uncertainty, Opportunism, and Governance: The Effects of Volatility and Ambiguity on Formal and Relational Contracting,” Academy of Management Journal (49), pp. 1058-1077.

Cenfetelli, R. T., Benbasat, I., & Al-Natour, S. 2008. “Addressing the What and How of Online Services: Positioning Supporting-Services Functionality and Service Quality for Business-to-Consumer Success,” Information Systems Research (19:2), pp. 161-181.

Chan, Y. 2002. “Why Haven't We Mastered Alignment? The Importance of the Informal Organization Structure,” MIS Quarterly Executive (1:2), pp. 97-112.

Chan, Y. E., and Reich, B. H. 2007. “IT Alignment: What Have We Learned?,” Journal of Information Technology (22:4), pp. 297-315.

Chan, Y., Huff, S., & Barclay, D. 1997. “Business Strategic Orientation, Information Systems Strategic Orientation, and Strategic Alignment,” Information Systems Research (8), pp. 125-150.

Charaf, M., Rosenkranz, C., & Holten, R. 2013. “The Emergence of Shared Understanding: Applying Functional Pragmatics to Study the Requirements Development Process,” Information Systems Journal (23:2), pp. 115-135.

Chen, D.
Q., Mocker, M., Preston, D. S., & Teubner, A. 2010. “Information Systems Strategy: Reconceptualization, Measurement, and Implications,” MIS Quarterly (34:2), pp. 233-259.

Chen, D., Dharmaraja, S., Chen, D., Li, L., Trivedi, K. S., Some, R. R., and Nikora, A. P. 2002. “Reliability and Availability Analysis for the JPL Remote Exploration and Experimentation System,” in Proceedings of the International Conference on Dependable Systems and Networks (DSN '02), pp. 337-342.

Chen, K. S., Chang, T. C., Wang, K. J., & Huang, C. T. 2015. “Developing Control Charts in Monitoring Service Quality Based on the Number of Customer Complaints,” Total Quality Management & Business Excellence (26:5), pp. 675-689.

Chen, P. Y., Kataria, G., and Krishnan, R. 2011. “Correlated Failures, Diversification, and Information Security Risk Management,” MIS Quarterly (35:2), pp. 397-422.

Chen, X., John, G., Hays, J. M., Hill, A. V., & Geurs, S. E. 2009. “Learning from a Service Guarantee Quasi Experiment,” Journal of Marketing Research (46:5), pp. 584-596.

Chen, Y., Wang, Y., Nevo, S., Jin, J., Wang, L., & Chow, W. S. 2014. “IT Capability and Organizational Performance: The Roles of Business Process Agility and Environmental Factors,” European Journal of Information Systems (23:3), pp. 326-342.

Chung, W., He, S., & Zeng, D. 2015. “eMood: Modeling Emotion for Social Media Analytics on Ebola Disease Outbreak,” in Proceedings of the 2015 International Conference on Information Systems (ICIS 2015), December 13-16, Fort Worth, Texas.

Chunping, H., Peiqiong, L., and Yiping, Y. 1997. “The Application of Failure Mode and Effect Analysis for Software in Digital Fly Control Systems,” in Proceedings of the 16th AIAA/IEEE Digital Avionics Systems Conference (V1), Irvine, CA, pp. 4.4-8-13.

Clemons, E., and Row, M. 1991. “Sustaining IT Advantage: The Role of Structural Differences,” MIS Quarterly (15:3), pp. 275-292.

Cohen, J. 1977. Statistical Power Analysis for the Behavioral Sciences. New York:
Academic Press.

Collins, A. M., and Quillian, M. R. 1970a. “Experiments on Semantic Memory and Language Comprehension,” in L. W. Gregg (ed.), Cognition in Learning and Memory, New York: Wiley.

Collins, A. M., and Quillian, M. R. 1970b. “Does Category Size Affect Categorization Time?,” Journal of Verbal Learning and Verbal Behavior (9), pp. 432-438.

Collins, A. M., and Quillian, M. R. 1969. “Retrieval Time from Semantic Memory,” Journal of Verbal Learning and Verbal Behavior (8), pp. 241-248.

Collins, A. M., & Loftus, E. F. 1975. “A Spreading-Activation Theory of Semantic Processing,” Psychological Review (82), pp. 407-428.

Collins, J. M., and Ruefli, T. W. 1992. “Strategic Risk: An Ordinal Approach,” Management Science (38:12), pp. 1707-1731.

Conboy, K., Fitzgerald, G., and Mathiassen, L. 2012. “Qualitative Methods Research in Information Systems: Motivations, Themes, and Contributions,” European Journal of Information Systems (21:2), pp. 113-118.

Conrad, G. R., & Plotkin, I. H. 1968. “Risk/Return: U.S. Industry Pattern,” Harvard Business Review (46:2), pp. 90-99.

Cootner, P. H., & Holland, D. M. 1970. “Rate of Return and Business Risk,” Bell Journal of Economics and Management Science (1), pp. 211-226.

Creswell, J. W. 2012. Qualitative Inquiry and Research Design: Choosing Among Five Approaches. Sage.

Creswell, J. W. 1998. Qualitative Inquiry and Research Design. Thousand Oaks, CA: Sage.

Crotty, M. 1998. The Foundations of Social Research: Meaning and Perspective in the Research Process. Sage.

Donaldson, L. 2001. The Contingency Theory of Organizations. Sage.

Dong, S., Xu, S. X., & Zhu, K. X. 2009. “Research Note: Information Technology in Supply Chains: The Value of IT-Enabled Resources under Competition,” Information Systems Research (20:1), pp. 18-32.

Doucet, L. 2004. “Service Provider Hostility and Service Quality,” Academy of Management Journal (47:5), pp. 761-771.

Drew, S. A., Kelley, P. C., and Kendrick, T. 2006.
“CLASS: Five Elements of Corporate Governance to Manage Strategic Risk,” Business Horizons (49:2), pp. 127-138.

Dubé, L., & Paré, G. 2003. “Rigor in Information Systems Positivist Case Research: Current Practices, Trends, and Recommendations,” MIS Quarterly (27:4), pp. 597-636.

Dümke, R. 2002. Corporate Reputation and Its Importance for Business Success. Master's thesis.

Dynes, S., Johnson, M. E., Andrijcic, E., & Horowitz, B. 2007. “Economic Costs of Firm-level Information Infrastructure Failures: Estimates from Field Studies in Manufacturing Supply Chains,” The International Journal of Logistics Management (18:3), pp. 420-442.

Eisenhardt, K. M. 1989. “Building Theories from Case Study Research,” Academy of Management Review (14:4), pp. 532-550.

El Sawy, O. A., Malhotra, A., Park, Y., & Pavlou, P. A. 2010. “Research Commentary: Seeking the Configurations of Digital Eco-dynamics: It Takes Three to Tango,” Information Systems Research (21:4), pp. 835-848.

Fama, E., & French, K. 1992. “The Cross-Section of Expected Stock Returns,” Journal of Finance (47:2), pp. 427-465.

Fama, E., & French, K. 2004. “The Capital Asset Pricing Model: Theory and Evidence,” Journal of Economic Perspectives (18), pp. 25-46.

Federal Standard 1037C. 2016. Telecommunications: Glossary of Telecommunication Terms, available at: https://www.atis.org/glossary/definition.aspx?id=5637, accessed: 2016-08-29.

Floyd, S. W., & Wooldridge, B. 1990. “Path Analysis of the Relationship between Competitive Strategy, Information Technology, and Financial Performance,” Journal of Management Information Systems (7:1), pp. 47-64.

Fornell, C., Rust, R., & Dekimpe, M. 2010. “The Effect of Customer Satisfaction on Consumer Spending Growth,” Journal of Marketing Research (47), pp. 28-35.

Galbraith, J. R. 1974. “Organization Design: An Information Processing View,” Interfaces (4:3), pp. 28-36.

Gerow, J. E., Grover, V., & Thatcher, J. 2016.
“Alignment's Nomological Network: Theory and Evaluation,” Information & Management (53:5), pp. 541-553.

Gerow, J. E., Thatcher, J. B., & Grover, V. 2015. “Six Types of IT-Business Strategic Alignment: An Investigation of the Constructs and Their Measurement,” European Journal of Information Systems (24:5), pp. 465-491.

Gerow, J. E., Grover, V., Thatcher, J. B., & Roth, P. L. 2014. “Looking toward the Future of IT-Business Strategic Alignment through the Past: A Meta-analysis,” MIS Quarterly (38:4), pp. 1059-1085.

Ghose, A. 2009. “Internet Exchanges for Used Goods: An Empirical Analysis of Trade Patterns and Adverse Selection,” MIS Quarterly (33:2), pp. 263-291.

Glaser, B. G. 1998. Doing Grounded Theory: Issues and Discussions. Sociology Press.

Glaser, B. G. 2005. The Grounded Theory Perspective III: Theoretical Coding. California: Sociology Press.

Glaser, B. G. 1978. Theoretical Sensitivity: Advances in the Methodology of Grounded Theory. Mill Valley, CA: Sociology Press.

Glaser, B. G. 1992. Emergence vs. Forcing: Basics of Grounded Theory Analysis. Mill Valley, CA: Sociology Press.

Glaser, B. G., and Strauss, A. L. 1967. The Discovery of Grounded Theory: Strategies for Qualitative Research. New York: Aldine Publishing Company.

Goes, P. B. 2013. “Editor's Comments: Commonalities across IS Silos and Intradisciplinary Information Systems Research,” MIS Quarterly (37:2), pp. iii-vii.

Goldstein, J., Chernobai, A., and Benaroch, M. 2011. “An Event Study Analysis of the Economic Impact of IT Operational Risk and its Subcategories,” Journal of the Association for Information Systems (12:9), pp. 606-631.

Goo, J., Kishore, R., Rao, H. R., & Nam, K. 2009. “The Role of Service Level Agreements in Relational Management of Information Technology Outsourcing: An Empirical Study,” MIS Quarterly (33:1), pp. 119-145.

Gopal, A., & Koka, B. R. 2012. “The Asymmetric Benefits of Relational Flexibility: Evidence from Software Development Outsourcing,” MIS Quarterly (36:2), pp. 553-576.
Grant, R. M. 1991. “The Resource-based Theory of Competitive Advantage: Implications for Strategy Formulation,” California Management Review (33:3), pp. 114-135.
Grant, R. M. 1996. “Prospering in Dynamically-Competitive Environments: Organizational Capability as Knowledge Integration,” Organization Science (7), pp. 375-387.
Gregor, S., and Hevner, A. R. 2013. “Positioning and Presenting Design Science Research for Maximum Impact,” MIS Quarterly (37:2), pp. 337-356.
Grotzer, T. A., Basca, B., & Donis, K. 2002. Causal Patterns in Ecosystems: Lessons to Infuse into Ecosystems Units. Cambridge, MA: Project Zero, Harvard Graduate School of Education.
Grover, V., and Lyytinen, K. 2015. “New State of Play in Information Systems Research: The Push to the Edges,” MIS Quarterly (39:2), pp. 271-296.
Grover, V., Cheon, M. J., & Teng, J. T. 1996. “The Effect of Service Quality and Partnership on the Outsourcing of Information Systems Functions,” Journal of Management Information Systems (12:4), pp. 89-116.
Hannan, M. T., & Freeman, J. 1984. “Structural Inertia and Organizational Change,” American Sociological Review (49:2), pp. 149-164.
Hansen, J., & Wanke, M. 2010. “Truth from Language and Truth from Fit: The Impact of Linguistic Concreteness and Level of Construal on Subjective Truth,” Personality and Social Psychology Bulletin (36), pp. 1576-1588.
Hariharan, K. 2010. “Availability Risk Assessment: A Quantitative Approach,” ISACA Journal (1), pp. 1-4, available at: http://www.isaca.org/Journal/Past-Issues/2010/Volume-1/Pages/Availability-Risk-Assessment-A-Quantitative-Approach1.aspx, accessed: 2016-05-01.
Harris, D. 2014. “Netflix lost 218 database servers during AWS reboot and stayed online,” GIGAOM, Oct. 3, 2014, available at: https://gigaom.com/2014/10/03/netflix-lost-218-database-servers-during-aws-reboot-and-stayed-online/?utm_content=8807761&utm_medium=social&utm_source=linkedin, accessed: 2015-07-19.
Hatch, M. J., & Cunliffe, A. L. 2013.
Organization Theory: Modern, Symbolic and Postmodern Perspectives. Oxford University Press.
Head, I., & Govekar, M. 2015. “Market Guide for Capacity Management Tools,” Gartner, available at: https://www.gartner.com/doc/2974420/, accessed: 2016-05-01.
Henderson, J. C., & Venkatraman, H. 1999. “Strategic Alignment: Leveraging Information Technology for Transforming Organizations,” IBM Systems Journal (38:2/3), p. 476.
Holtzer, G. 2004. Voies vers le plurilinguisme (Vol. 772). Presses Univ. Franche-Comté.
Homann, U. 2006. “A Business-Oriented Foundation for Service Orientation,” Microsoft Developer Network, available at: http://msdn.microsoft.com/en-us/library/aa479368.aspx, accessed: 2016-05-01.
Hughes, J., and Jones, S. 2003. “Reflections on the Use of Grounded Theory in Interpretive Information Systems Research,” in Proceedings of the European Conference on Information Systems, Naples, Italy.
Hunter, A. 2007. “Elements of Argumentation,” in European Conference on Symbolic and Quantitative Approaches to Reasoning and Uncertainty, Berlin, Heidelberg: Springer, p. 4.
IBM, 2015. “Understanding the Economics of IT Risk and Reputation,” available at: http://www-01.ibm.com/common/ssi/cgi-bin/ssialias?infotype=SA&subtype=WH&htmlfid=RLW03024USEN, accessed: 2015-07-11.
Idalovichi, I. B. Y. 2014. Symbolic Forms as the Metaphysical Groundwork of the Organon of the Cultural Sciences: Volume 1 (Vol. 1). Cambridge Scholars Publishing.
Izrailevsky, Y., & Tseitlin, A. 2011. “The Netflix Simian Army,” Netflix, available at: http://techblog.netflix.com/2011/07/netflix-simian-army.html, accessed: 2016-01-07.
Jarema, G., & Libben, G. 2007. The Mental Lexicon: Core Perspectives. Amsterdam: Elsevier.
Jentsch, C., & Beimborn, D. 2014. “Shared Understanding Among Business and IT - A Literature Review and Research Agenda,” in 22nd European Conference on Information Systems, Tel Aviv.
Johnson, A., & Lederer, A. 2005.
“The Effect of Communication Frequency and Channel Richness on the Convergence between Chief Executive and Chief Information Officers,” Journal of Management Information Systems (22:2), pp. 227-252.
Johnson, B., & Christensen, L. 2008. Educational Research: Quantitative, Qualitative, and Mixed Approaches. Sage.
Kaplan, R. S., & Norton, D. P. 2004. Strategy Maps: Converting Intangible Assets into Tangible Outcomes. Harvard Business Press.
Karahanna, E., and Preston, D. S. 2013. “The Effect of Social Capital of the Relationship between the CIO and Top Management Team on Firm Performance,” Journal of Management Information Systems (30:1), pp. 15-56.
Kearns, G. S., & Sabherwal, R. 2006. “Strategic Alignment between Business and Information Technology: A Knowledge-Based View of Behaviors, Outcome, and Consequences,” Journal of Management Information Systems (23:3), pp. 129-162.
Kelley, H. H. 1973. “The Process of Causal Attribution,” American Psychologist (28:2), pp. 107-128.
Kim, D. J., Ferrin, D. L., and Rao, H. R. 2009. “Trust and Satisfaction, Two Stepping Stones for Successful E-Commerce Relationships: A Longitudinal Exploration,” Information Systems Research (20), pp. 237-257.
Kim, G., Shin, B., & Kwon, O. 2012. “Investigating the Value of Socio-materialism in Conceptualizing IT Capability of a Firm,” Journal of Management Information Systems (29:3), pp. 327-362.
Kohli, R., and Grover, V. 2008. “Business Value of IT: An Essay on Expanding Research Directions to Keep up with the Times,” Journal of the Association for Information Systems (9:1), pp. 23-39.
Kohli, R., Devaraj, S., & Mahmood, M. A. 2004. “Understanding Determinants of Online Consumer Satisfaction: A Decision Process Perspective,” Journal of Management Information Systems (21:1), pp. 115-136.
Kraaijenbrink, J., Spender, J. C., and Groen, A. J. 2010. “The Resource-based View: A Review and Assessment of its Critiques,” Journal of Management (36:1), pp. 349-372.
Kwortnik Jr, R.
J., Lynn, W. M., & Ross Jr, W. T. 2009. “Buyer Monitoring: A Means to Insure Personalized Service,” Journal of Marketing Research (46:5), pp. 573-583.
Lado, A. A., Boyd, N. G., Wright, P., and Kroll, M. 2006. “Paradox and Theorizing within the Resource-based View,” Academy of Management Review (31:1), pp. 115-131.
Lakonishok, J., and Shapiro, C. 1986. “Systematic Risk, Total Risk and Size as Determinants of Stock Market Returns,” Journal of Banking and Finance (10), pp. 115-132.
Lance, C. E., Butts, M. M., & Michels, L. C. 2006. “The Sources of Four Commonly Reported Cutoff Criteria: What Did They Really Say?,” Organizational Research Methods (9:2), pp. 202-220.
Lee, A. S. 1989. “A Scientific Methodology for MIS Case Studies,” MIS Quarterly (13:1), pp. 33-50.
Liao, H., & Chuang, A. 2004. “A Multilevel Investigation of Factors Influencing Employee Service Performance and Customer Outcomes,” Academy of Management Journal (47:1), pp. 41-58.
Liberman, N., & Trope, Y. 2008. “The Psychology of Transcending the Here and Now,” Science (322), pp. 1201-1205.
Limoncelli, T. A., Chalup, S. R., & Hogan, C. J. 2014. The Practice of Cloud System Administration: Designing and Operating Large Distributed Systems (Vol. 2). Pearson Education.
Lintner, J. 1965. “The Valuation of Risk Assets and the Selection of Risky Investments in Stock Portfolios and Capital Budgets,” Review of Economics and Statistics (47:1), pp. 13-37.
Liu, H., Lin, Y., Chen, P., Jin, L., and Ding, F. 2010. “A Practical Availability Risk Assessment Framework in ITIL,” in 5th IEEE International Symposium on Service Oriented System Engineering, Nanjing, China, pp. 286-290.
Lockett, A., Thompson, S., and Morgenstern, U. 2009. “The Development of the Resource-based View of the Firm: A Critical Appraisal,” International Journal of Management Reviews (11:1), pp. 9-28.
Luftman, J., and Brier, T. 1999. “Achieving and Sustaining Business-IT Alignment,” California Management Review (42:1), pp. 109-112.
Madhavan, R., and Grover, R. 1998.
“From Embedded Knowledge to Embodied Knowledge: New Product Development as Knowledge Management,” Journal of Marketing (62:4), pp. 1-13.
Makadok, R. 2001. “Towards a Synthesis of Resource-Based and Dynamic Capability Views of Rent Creation,” Strategic Management Journal (22:5), pp. 387-402.
Marcus, E. 2003. “The myth of the nines,” TechTarget, available at: http://searchstorage.techtarget.com/tip/The-myth-of-the-nines, accessed: 2016-05-01.
Marsh, T. A., & Swanson, D. S. 1984. “Risk-Return Trade-offs for Strategic Management,” Sloan Management Review (25:3), pp. 35-49.
Martin, J., & Leben, J. 1989. Strategic Information Planning Methodologies. Englewood Cliffs, New Jersey: Prentice-Hall.
Maruyama, G. M. 1997. Basics of Structural Equation Modeling. Sage Publications.
Mata, F. J., Fuerst, W. L., & Barney, J. B. 1995. “Information Technology and Sustained Competitive Advantage: A Resource-based Analysis,” MIS Quarterly (19:4), pp. 487-505.
Matavire, R., & Brown, I. 2013. “Profiling Grounded Theory Approaches in Information Systems Research,” European Journal of Information Systems (22:1), pp. 119-129.
Mayer, R. E., & Moreno, R. 2003. “Nine Ways to Reduce Cognitive Load in Multimedia Learning,” Educational Psychologist (38:1), pp. 43-52.
Mays, N., and Pope, C. 1995. “Rigour and Qualitative Research,” British Medical Journal (311), pp. 109-112.
McCollough, M., Berry, L. L., and Yadav, M. S. 2000. “An Empirical Investigation of Customer Satisfaction after Service Failure and Recovery,” Journal of Service Research (3:2), pp. 121-137.
McEnally, R. W., & Tavis, L. A. 1972. “Spatial Risk and Return Relationships: A Reconsideration,” Journal of Risk and Insurance (39), pp. 351-365.
Melville, N., Kraemer, K., & Gurbaxani, V. 2004. “Review: Information Technology and Organizational Performance: An Integrative Model of IT Business Value,” MIS Quarterly (28:2), pp. 283-322.
Miles, M. B., and Huberman, A. M. 1994. Qualitative Data Analysis: An Expanded Sourcebook.
Sage Publications, Beverly Hills, CA.
Miles, M. B., Huberman, A. M., & Saldana, J. 2013. Qualitative Data Analysis. Sage.
Miranda, S. M., Kim, I., & Summers, J. D. 2015. “Jamming with Social Media: How Cognitive Structuring of Organizing Vision Facets Affects IT Innovation Diffusion,” MIS Quarterly (39:3), pp. 591-614.
Mithas, S., & Rust, R. T. 2016. “How Information Technology Strategy and Investments Influence Firm Performance: Conjecture and Empirical Evidence,” MIS Quarterly (40:1), pp. 223-245.
Moore, G., and Benbasat, I. 1991. “Development of an Instrument to Measure the Perceptions of Adopting an Information Technology Innovation,” Information Systems Research (2:3), pp. 192-222.
Morse, J. 1994. ““Emerging from the Data”: The Cognitive Processes of Analysis in Qualitative Research,” in Critical Issues in Qualitative Research Methods, J. Morse (ed.), Thousand Oaks, CA: Sage, pp. 23-41.
Mruck, K., & Mey, G. 2007. “Grounded Theory and Reflexivity,” in The Sage Handbook of Grounded Theory, A. Bryant and K. Charmaz (eds.), pp. 515-538.
Myers, A., & Hansen, C. 2011. Experimental Psychology. Cengage Learning.
Myers, M. D. 1997. “Qualitative Research in Information Systems,” MIS Quarterly (21:2), pp. 241-242.
Nahapiet, J., and Ghoshal, S. 1998. “Social Capital, Intellectual Capital, and the Organizational Advantage,” Academy of Management Review (23:2), pp. 242-266.
Nelson, K., and Cooprider, J. 1996. “The Contribution of Shared Knowledge to IS Group Performance,” MIS Quarterly (20:4), pp. 409-433.
Newbert, S. 2007. “Empirical Research on the Resource-Based View of the Firm: An Assessment and Suggestions for Future Research,” Strategic Management Journal (28:2), pp. 121-146.
Ohlef, H., Binroth, W., and Haboush, R. 1978. “Statistical Model for a Failure Mode and Effects Analysis and its Application to Computer Fault-Tracing,” IEEE Transactions on Reliability (27:1), pp. 16-22.
Okolita, K. 2009.
“How to Perform a Disaster Recovery Business Impact Analysis,” available at: http://www.csoonline.com/article/2124593/emergency-preparedness/how-to-perform-a-disaster-recovery-business-impact-analysis.html, accessed: 2014-03-06.
Olbrich, S., Mueller, B., & Niederman, F. 2011. “Theory Emergence in IS Research: The Grounded Theory Method Applied,” in 10th JAIS Theory Development Workshop, Shanghai, China.
O'Leary, Z. 2004. The Essential Guide to Doing Research. Sage.
Oliver, C. 2011. “Critical Realist Grounded Theory: A New Approach for Social Work Research,” British Journal of Social Work, bcr064.
Orlikowski, W. J. 1993. “CASE Tools as Organizational Change: Investigating Incremental and Radical Changes in Systems Development,” MIS Quarterly (17:3), pp. 309-340.
Osterwalder, A., & Pigneur, Y. 2010. Business Model Generation: A Handbook for Visionaries, Game Changers, and Challengers. John Wiley & Sons.
Parasuraman, A., Zeithaml, V. A., & Berry, L. L. 1988. “SERVQUAL: A Multi-Item Scale for Measuring Consumer Perceptions of Service Quality,” Journal of Retailing (64:1), pp. 12-40.
Pavlou, P. A., & Fygenson, M. 2006. “Understanding and Predicting Electronic Commerce Adoption: An Extension of the Theory of Planned Behavior,” MIS Quarterly (30:1), pp. 115-143.
Pavlou, P. A., & El Sawy, O. A. 2006. “From IT Competence to Competitive Advantage in Turbulent Environments: The Case of New Product Development,” Information Systems Research (17:3), pp. 198-227.
Pavlou, P. A., & El Sawy, O. A. 2010. “The ‘Third Hand’: IT-enabled Competitive Advantage in Turbulence through Improvisational Capabilities,” Information Systems Research (21:3), pp. 443-471.
Pavlou, P. A., and El Sawy, O. A. 2011. “Understanding the Elusive Black Box of Dynamic Capabilities,” Decision Sciences (42:1), pp. 239-273.
Pennington, R., Wilcox, H., & Grover, V. 2003. “The Role of System Trust in Business-to-Consumer Transactions,” Journal of Management Information Systems (20), pp. 197-226.
Penrose, E. T.
1959. The Theory of the Growth of the Firm. New York: John Wiley and Sons.
Porter, M. E. 1979. “How Competitive Forces Shape Strategy,” Harvard Business Review (March-April), pp. 137-145.
Porter, M. E. 1980. Competitive Strategy: Techniques for Analyzing Industries and Competitors. New York: The Free Press.
Porter, M. E. 1985. Competitive Advantage. New York: The Free Press.
Porter, M. E. 1991. “Towards a Dynamic Theory of Strategy,” Strategic Management Journal (12:1), pp. 95-117.
Postman, L., Jenkins, W. O., & Postman, D. L. 1948. “An Experimental Comparison of Active Recall and Recognition,” The American Journal of Psychology (61:4), pp. 511-519.
Premkumar, G., Ramamurthy, K., & Saunders, C. 2005. “Information Processing View of Organizations: An Exploratory Examination of Fit in the Context of Interorganizational Relationships,” Journal of Management Information Systems (22), pp. 257-294.
Preston, D. S., & Karahanna, E. 2009. “Antecedents of IS Strategic Alignment: A Nomological Network,” Information Systems Research (20:2), pp. 159-179.
Raaijmakers, J. G. W., & Shiffrin, R. M. 1981. “Search of Associative Memory,” Psychological Review (88:2), pp. 98-134.
Ragin, C. C. 2006. User's Guide to Fuzzy-Set/Qualitative Comparative Analysis 2.0. Tucson, Arizona: Department of Sociology, University of Arizona.
Raju, M. 2014. “Reid's new mission: Blocking 'crazy stuff',” Politico, 12/09/14, available at: http://www.politico.com/story/2014/12/harry-reids-new-mission-blocking-crazy-stuff-113452, accessed: 2016-05-01.
Ravichandran, T., & Rai, A. 2000. “Quality Management in Systems Development: An Organizational System Perspective,” MIS Quarterly (24:3), pp. 381-415.
Ray, G., Muhanna, W. A., & Barney, J. B. 2005. “Information Technology and the Performance of the Customer Service Process: A Resource-Based Analysis,” MIS Quarterly (29:4), pp. 625-652.
Raykov, T., & Marcoulides, G. A. 2008. An Introduction to Applied Multivariate Analysis.
Routledge: Taylor & Francis Group.
Regester, M., & Larkin, J. 2008. Risk Issues and Crisis Management in Public Relations: A Casebook of Best Practice. Kogan Page Publishers.
Reich, B. H., and Benbasat, I. 1996. “Measuring the Linkage between Business and Information Technology Objectives,” MIS Quarterly (20:1), pp. 55-81.
Reich, B. H., and Benbasat, I. 2000. “Factors that Influence the Social Dimension of Alignment between Business and Information Technology Objectives,” MIS Quarterly (24:1), pp. 81-113.
Reinganum, M. R. 1981. “A New Empirical Perspective on the CAPM,” Journal of Financial and Quantitative Analysis (16), pp. 439-462.
Ricardo, D. 1817. “On the Principles of Political Economy and Taxation,” Library of Economics and Liberty, available at: http://www.econlib.org/library/Ricardo/ricP.html, accessed: 2014-12-01.
Rice, S. C. 2012. “Reputation and Uncertainty in Online Markets: An Experimental Study,” Information Systems Research (23:2), pp. 436-452.
Rips, L. J., Shoben, E. J., and Smith, E. E. 1973. “Semantic Distance and the Verification of Semantic Relations,” Journal of Verbal Learning and Verbal Behavior (12:1), pp. 1-20.
Rockart, J. F., Earl, M. J., and Ross, J. W. 1996. “Eight Imperatives for the New IT Organization,” Sloan Management Review (38:1), pp. 43-55.
Ruefli, T. W., Collins, J. M., and Lacugna, J. R. 1999. “Risk Measures in Strategic Management Research: Auld Lang Syne?,” Strategic Management Journal (20:2), pp. 167-194.
Sarker, S., Xiao, X., & Beaulieu, T. 2013. “Guest Editorial: Qualitative Studies in Information Systems: A Critical Review and Some Guiding Principles,” MIS Quarterly (37:4), pp. iii-xviii.
Seddon, P. B. 2014. “Implications for Strategic IS Research of the Resource-based Theory of the Firm: A Reflection,” The Journal of Strategic Information Systems (23:4), pp. 257-269.
Seddon, P. B., Calvert, C., & Yang, S. 2010.
“A Multi-Project Model of Key Factors Affecting Organizational Benefits from Enterprise Systems,” MIS Quarterly (34:2), pp. 305-328.
Segars, A. H., & Grover, V. 1998. “Strategic Information Systems Planning Success: An Investigation of the Construct and Its Measurement,” MIS Quarterly (22:2), pp. 139-163.
Selten, R., & Warglien, M. 2007. “The Emergence of Simple Language in an Experimental Coordination Game,” Proceedings of the National Academy of Sciences (104:18), pp. 7361-7366.
Sharpe, W. F. 1964. “Capital Asset Prices: A Theory of Market Equilibrium under Conditions of Risk,” Journal of Finance (19), pp. 425-442.
Society for Information Management, 2016. “The 2016 SIM IT Trends Study: Issues, Investments, Concerns, and Practices of Organizations and their IT Executives,” Society for Information Management, available at: http://www.simnet.org/?page=IT_Trends_Survey, accessed: 2016-05-01.
Spreng, R. A., MacKenzie, S., & Olshavsky, R. W. 1996. “A Re-examination of the Determinants of Consumer Satisfaction,” Journal of Marketing (60), pp. 15-32.
Sterman, J. 2000. Business Dynamics: Systems Thinking and Modeling for a Complex World. Irwin/McGraw Hill. ISBN 0-07-231135-5.
Strauss, A. L., & Corbin, J. M. 1990. Basics of Qualitative Research: Grounded Theory Procedures and Techniques, Newbury Park, CA: Sage Publications.
Strong, D. M., & Volkoff, O. 2010. “Understanding Organization-Enterprise System Fit: A Path to Theorizing the Information Technology Artifact,” MIS Quarterly (34:4), pp. 731-756.
Susarla, A., Barua, A., & Whinston, A. B. 2009. “A Transaction Cost Perspective of the ‘Software as a Service’ Business Model,” Journal of Management Information Systems (26:2), pp. 205-240.
Swanson, M., Wohl, A., Pope, L., Grance, T., Hash, J., and Thomas, R. 2002. “Contingency Planning Guide for Information Technology Systems: Recommendations of the National Institute of Standards and Technology” (No. NIST-SP-800-34).
National Institute of Standards and Technology, Gaithersburg, MD.
Tabachnick, B. G., & Fidell, L. S. 2013. Using Multivariate Statistics, 6th Edition. Pearson.
Tallon, P. P. 2007. “A Process-Oriented Perspective on the Alignment of Information Technology and Business Strategy,” Journal of Management Information Systems (24:3), pp. 227-268.
Tallon, P. P. 2011. “Value Chain Linkages and the Spillover Effects of Strategic Information Technology Alignment: A Process-Level View,” Journal of Management Information Systems (28:3), pp. 9-44.
Tallon, P. P. 2014. “Do You See What I See? The Search for Consensus among Executives’ Perceptions of IT Business Value,” European Journal of Information Systems (23:3), pp. 306-325.
Tallon, P. P., & Pinsonneault, A. 2011. “Competing Perspectives on the Link between Strategic Information Technology Alignment and Organizational Agility: Insights from a Mediation Model,” MIS Quarterly (35:2), pp. 463-486.
Tammineedi, R. L. 2010. “Business Continuity Management: A Standards-based Approach,” Information Security Journal: A Global Perspective (19:1), pp. 36-50.
Tan, C. W. 2011. Understanding e-Service Failures: Formation, Impact and Recovery. PhD Thesis, available at: https://circle.ubc.ca/handle/2429/36390, accessed: 2016-05-01.
Tan, C. W., Benbasat, I., & Cenfetelli, R. T. 2013. “IT-Mediated Customer Service Content and Delivery in Electronic Governments: An Empirical Investigation of the Antecedents of Service Quality,” MIS Quarterly (37:1), pp. 77-109.
Tan, C. W., Benbasat, I., & Cenfetelli, R. T. 2016. “An Exploratory Study of the Formation and Impact of Electronic Service Failures,” MIS Quarterly (40:1), pp. 1-29.
Tan, F. B., & Gallupe, R. B. 2006. “Aligning Business and Information Systems Thinking: A Cognitive Approach,” IEEE Transactions on Engineering Management (53:2), pp. 223-237.
Tanriverdi, H., & Ruefli, T. W. 2004.
“The Role of Information Technology in Risk/Return Relations of Firms,” Journal of the Association for Information Systems (5:11), pp. 421-447.
Tanriverdi, H., Konana, P., & Ge, L. 2007. “The Choice of Sourcing Mechanisms for Business Processes,” Information Systems Research (18:3), pp. 280-299.
Techopedia, 2016. “Twitterstorm,” available at: https://www.techopedia.com/definition/29624/twitterstorm, accessed: 2016-08-31.
Teece, D. J. 2007. “Explicating Dynamic Capabilities: The Nature and Microfoundations of (Sustainable) Enterprise Performance,” Strategic Management Journal (28:13), pp. 1319-1350.
Teece, D., and Pisano, G. 1994. “The Dynamic Capabilities of Firms: An Introduction,” Industrial and Corporate Change (3:3), pp. 537-556.
Teece, D., Pisano, G., and Shuen, A. 1997. “Dynamic Capabilities and Strategic Management,” Strategic Management Journal (18:7), pp. 509-533.
Teece, D. J. 2009. Dynamic Capabilities and Strategic Management: Organizing for Innovation and Growth, Oxford, UK: Oxford University Press.
Teo, T. S., Srivastava, S. C., & Jiang, L. 2008. “Trust and Electronic Government Success: An Empirical Study,” Journal of Management Information Systems (25:3), pp. 99-132.
The Open Group. 2011. TOGAF Version 9.1, available at: http://pubs.opengroup.org/architecture/togaf9-doc/arch/chap35.html, accessed: 2016-05-01.
Tian, F., & Xu, S. X. 2015. “How Do Enterprise Resource Planning Systems Affect Firm Risk? Post-Implementation Impact,” MIS Quarterly (39:1), pp. 39-60.
Toulmin, S. E. 1958. The Uses of Argument. Cambridge: Cambridge University Press.
Trochim, W. M. K. 2007. The Research Methods Knowledge Base, 3rd Ed. Atomic Dog Publishing.
Trope, Y., & Liberman, N. 2000. “Temporal Distance and Time-Dependent Changes in Preference,” Journal of Personality and Social Psychology (79), pp. 876-889.
Trope, Y., & Liberman, N. 2003. “Temporal Construal,” Psychological Review (110), pp. 403-421.
Trope, Y., & Liberman, N. 2010. “Construal-Level Theory of Psychological Distance,”
Psychological Review (117), pp. 440-463.
Tsai, D. R., & Sang, H. A. 2010. “Constructing a Risk Dependency-based Availability Model,” in International Carnahan Conference on Security Technology (ICCST), San Jose, CA, pp. 218-220.
Turel, O., Yuan, Y., & Connelly, C. E. 2008. “In Justice We Trust: Predicting User Acceptance of E-Customer Services,” Journal of Management Information Systems (24:4), pp. 123-151.
Ulrich, W., and Rosen, M. 2011. “The Business Capability Map: The ‘Rosetta Stone’ of Business/IT Alignment,” Enterprise Architecture (14:2), pp. 1-23.
Urquhart, C., & Fernández, W. 2013. “Using Grounded Theory Method in Information Systems: The Researcher as Blank Slate and Other Myths,” Journal of Information Technology (28:3), pp. 224-236.
Urquhart, C., Lehmann, H., & Myers, M. D. 2010. “Putting the ‘Theory’ Back into Grounded Theory: Guidelines for Grounded Theory Studies in Information Systems,” Information Systems Journal (20:4), pp. 357-381.
Urquhart, C. 2012. Grounded Theory for Qualitative Research: A Practical Guide. SAGE Publications. Kindle Edition.
Van Der Zee, J. T. M., and De Jong, B. 1999. “Alignment is not Enough: Integrating Business and Information Technology Management with the Balanced Business Scorecard,” Journal of Management Information Systems (16:2), pp. 137-156.
van Ginkel, W. 2009. “A Business Process Approach to Assets and Risks,” Verizon Business Security Solutions, available at: http://www.verizonenterprise.com/resources/whitepapers/wp_a-business-process-approach-to-assets-and-risks_en_xg.pdf, accessed: 2016-05-01.
Venkatesh, V., Brown, S., and Bala, H. 2013. “Bridging the Qualitative–Quantitative Divide: Guidelines for Conducting Mixed Methods Research in Information Systems,” MIS Quarterly (37:1), pp. 21-54.
Venkatraman, N. 1989. “The Concept of Fit in Strategy Research: Toward Verbal and Statistical Correspondence,” Academy of Management Review (14:3), pp. 423-444.
Venkatraman, N., Henderson, J. C., and Oldach, S.
1993. “Continuous Strategic Alignment: Exploiting Information Technology Capabilities for Competitive Success,” European Management Journal (11:2), pp. 139-149.
Verheij, B. 2005. “Evaluating Arguments based on Toulmin’s Scheme,” Argumentation (19:3), pp. 347-371.
Vessey, I., and Ward, K. 2013. “The Dynamics of Sustainable IS Alignment: The Case for IS Adaptivity,” Journal of the Association for Information Systems (14:6), pp. 283-311.
Vision Solutions. 2014. “Assessing the Financial Impact of Downtime,” available at: http://www.strategiccompanies.com/pdfs/Assessing%20the%20Financial%20Impact%20of%20Downtime.pdf, accessed: 2015-07-11.
Volkoff, O., Strong, D. M., & Elmes, M. B. 2007. “Technological Embeddedness and Organizational Change,” Organization Science (18:5), pp. 832-848.
Wade, M., & Hulland, J. 2004. “Review: The Resource-based View and Information Systems Research: Review, Extension, and Suggestions for Future Research,” MIS Quarterly (28:1), pp. 107-142.
Wagner, H. T., Beimborn, D., & Weitzel, T. 2014. “How Social Capital among Information Technology and Business Units Drives Operational Alignment and IT Business Value,” Journal of Management Information Systems (31:1), pp. 241-272.
Wand, Y., & Weber, R. 1993. “On the Ontological Expressiveness of Information Systems Analysis and Design Grammars,” Information Systems Journal (3:4), pp. 217-237.
Wang, N., Liang, H., Zhong, W., Xue, Y., & Xiao, J. 2012. “Resource Structuring or Capability Building? An Empirical Study of the Business Value of Information Technology,” Journal of Management Information Systems (29:2), pp. 325-367.
Weber, R. 2012. “Evaluating and Developing Theories in the Information Systems Discipline,” Journal of the Association for Information Systems (13:1), pp. 1-30.
West III, P., & DeCastro, J.
2001. “The Achilles Heel of Firm Strategy: Resource Weaknesses and Distinctive Inadequacies,” Journal of Management Studies (38:3), pp. 417-442.
Weick, K. E. 1995. “What Theory Is Not, Theorizing Is,” Administrative Science Quarterly (40:3), pp. 385-390.
Westerman, G. 2009. “IT Risk as a Language for Alignment,” MIS Quarterly Executive (8:3), pp. 109-121.
Westerman, G., and Hunter, R. 2007. IT Risk: Turning Business Threats into Competitive Advantage. Boston, MA: Harvard Business School Press.
Whetten, D. A. 1989. “What Constitutes a Theoretical Contribution?,” Academy of Management Review (14:4), pp. 490-495.
Wiesche, M., Jurisch, M., Yetton, P., & Krcmar, H. 2017. “Grounded Theory Methodology in Information Systems Research,” MIS Quarterly (41:3), pp. 685-701.
Wood, T., Cecchet, E., Ramakrishnan, K. K., Shenoy, P., Van Der Merwe, J., & Venkataramani, A. 2010. “Disaster Recovery as a Cloud Service: Economic Benefits and Deployment Challenges,” in 2nd USENIX Workshop on Hot Topics in Cloud Computing, pp. 1-7.
Wu, S. P. J., Straub, D. W., & Liang, T. P. 2015. “How Information Technology Governance Mechanisms and Strategic Alignment Influence Organizational Performance: Insights from a Matched Survey of Business and IT Managers,” MIS Quarterly (39:2), pp. 497-518.
Yim, C. K., Tse, D. K., & Chan, K. W. 2008. “Strengthening Customer Loyalty through Intimacy and Passion: Roles of Customer-Firm Affection and Customer-Staff Relationships in Services,” Journal of Marketing Research (45:6), pp. 741-756.
Zambon, E., Bolzoni, D., Etalle, S., and Salvato, M. 2007. “Model-based Mitigation of Availability Risks,” in 2nd IEEE/IFIP International Workshop on Business-Driven IT Management (BDIM'07), pp. 75-83.
Zambon, E., Etalle, S., Wieringa, R. J., and Hartel, P. 2011. “Model-based Qualitative Risk Assessment for Availability of IT Infrastructures,” Software and Systems Modeling (10:4), pp. 553-580.

Case References

Amazon

Catchpoint, 2016.
“Amazon Outage Denies Afternoon Access,” available at: http://blog.catchpoint.com/2016/03/11/amazon-outage/, accessed: 2016-05-01.
CNBC, 2016. “Amazon.com intermittently down as users report widespread outage,” available at: http://www.cnbc.com/2016/03/10/reuters-america-amazoncoms-primary-e-commerce-website-back-up-after-outage.html, accessed: 2016-05-01.
InternetRetailer, 2016. “How much did Amazon’s outage cost the online giant?,” available at: https://www.internetretailer.com/2016/03/11/how-much-did-amazons-outage-cost-online-giant, accessed: 2016-05-01.
GeekWire, 2016. “Update: Amazon.com back online after sudden outage hits users across the U.S.,” available at: http://www.geekwire.com/2016/amazon-reportedly-users-across-country/, accessed: 2016-05-01.
MyFox8, 2016. “Amazon down: Amazon back after brief outage,” available at: http://myfox8.com/2016/03/10/amazon-down-amazon-down-for-most-customers-across-u-s/, accessed: 2016-05-01.
Reuters, 2016. “Amazon.com's primary e-commerce website back up after outage,” available at: http://www.reuters.com/article/us-amazon-com-outages-idUSKCN0WC2PP, accessed: 2016-05-01.

American Airlines

Apple Insider, 2015. “American Airlines now using Apple's iPad in all cockpits,” available at: https://www.youtube.com/watch?v=AGmeavqhObw&feature=youtu.be, accessed: 2016-05-01.
BBC, 2015. “American Airlines planes grounded by iPad app error,” available at: http://www.bbc.com/news/technology-32513066, accessed: 2016-05-01.
CNN, 2015. “American Airlines says iPad software glitch delays flights,” available at: http://money.cnn.com/2015/04/29/technology/american-airlines-ipad/, accessed: 2016-05-01.
Computer World, 2015. “What third-party app crashed American Airlines pilots' iPads and caused flight delays?,” available at: http://www.computerworld.com/article/2916577/security0/what-third-party-app-crashed-american-airlines-pilots-ipads-and-caused-flight-delays.html, accessed: 2016-05-01.
Guardian, 2015.
“App fail on iPad grounds 'a few dozen' American Airlines flights,” available at: http://www.theguardian.com/technology/2015/apr/29/apple-ipad-fail-grounds-few-dozen-american-airline-flights, accessed: 2016-05-01.
QZ, 2015. “An iPad app glitch grounded several dozen American Airlines planes,” available at: http://qz.com/393909/american-airlines-planes-are-grounded-because-their-pilots-ipads-have-crashed/, accessed: 2016-05-01.
The Verge-1, 2015. “iPad app issue grounds 'a few dozen' American Airlines flights,” available at: http://www.theverge.com/2015/4/28/8511993/ipad-issue-grounds-american-airlines-737s, accessed: 2016-05-01.
The Verge-2, 2015. “American Airlines pilots will need paper charts or PDFs to fly to DC,” available at: http://www.theverge.com/2015/4/29/8515481/american-airlines-pilots-will-need-paper-charts-to-fly-to-dc, accessed: 2016-05-01.
USA TODAY, 2015. “iPad glitch grounds two dozen American flights,” available at: www.usatoday.com/story/travel/2015/04/28/american-airlines-737-ipad/26551871/, accessed: 2016-05-01.

Bank of America

ABC News, 2011. “Bank of America Under Hacking Attack?,” available at: http://abcnews.go.com/blogs/business/2011/10/bank-of-america-under-hacking-attack/, accessed: 2016-05-01.
American Banker, 2011. “Online Banking Upgrade Contributed to Bank of America Outage,” available at: http://www.americanbanker.com/issues/176_195/bank-of-america-website-outage-online-banking-1042932-1.html, accessed: 2016-05-01.
Business Insider, 2011. “This Is Getting Unbelievable: The Bank Of America Website Is Still Down,” available at: http://www.businessinsider.com/outrageous-bank-of-americas-website-down-again-2011-10, accessed: 2016-05-01.
CNN, 2011. “Bank of America explains website outage,” available at: http://money.cnn.com/2011/10/06/news/companies/bank_of_america_website/, accessed: 2016-05-01.
Time, 2011.
“Was Bank of America Hacked?”, available at: http://business.time.com/2011/10/05/was-bank-of-america-hacked/, accessed: 2016-05-01.
Wall Street Journal, 2011. “BofA Blames Website Slowness on Upgrade”, available at: http://www.wsj.com/articles/SB10001424052970203388804576612803222293510, accessed: 2016-05-01.
Washington Post, 2016. “U.S. charges Iran-linked hackers with targeting banks, N.Y. dam”, available at: https://www.washingtonpost.com/world/national-security/justice-department-to-unseal-indictment-against-hackers-linked-to-iranian-goverment/2016/03/24/9b3797d2-f17b-11e5-a61f-e9c95c06edca_story.html, accessed: 2016-05-01.
BC Hydro
CBC, 2015. “Poorly-timed BC Hydro tweet leads to hilarious responses during power outage”, available at: http://www.cbc.ca/news/canada/british-columbia/poorly-timed-bc-hydro-tweet-leads-to-hilarious-responses-during-power-outage-1.3210512, accessed: 2016-05-01.
CTV News, 2015. “‘It wasn’t enough’: Hydro apologizes for website failure during storm”, available at: http://bc.ctvnews.ca/it-wasn-t-enough-hydro-apologizes-for-website-failure-during-storm-1.2541986, accessed: 2016-05-01.
Global News, 2015. “BC Hydro spokesperson apologizes for website being down”, available at: http://globalnews.ca/news/2193179/bc-hydro-spokesperson-apologizes-for-website-being-down-today/, accessed: 2016-05-01.
Vancity Buzz, 2015. “BC Hydro's Twitter strategy kept customers in the dark during storm”, available at: http://www.vancitybuzz.com/2015/08/bc-hydro-twitter-strategy-fail-storm-2015/, accessed: 2016-05-01.
Vancourier, 2015. “BC Hydro website contractor linked to botched Obamacare launch”, available at: http://www.vancourier.com/news/bc-hydro-website-contractor-linked-to-botched-obamacare-launch-1.2046700, accessed: 2016-05-01.
News 1130, 2015. “BC Hydro critic unsatisfied with website”, available at: http://www.news1130.com/2015/11/17/bc-hydro-critic-unsatisfied-with-website/, accessed: 2016-05-01.
BC Translink
SkyTrain Audit Report, 2014. “Independent review: SkyTrain Service Disruptions on July 17 and July 21, 2014”, available at: http://www.translink.ca/~/media/documents/about_translink/media/2014/tra_1795_independent_review_booklet_final.ashx, accessed: 2016-05-01.
Journal of Commerce, 2014. “$71 million in recommendations adopted by TransLink”, available at: http://www.journalofcommerce.com/Infrastructure/News/2014/11/71-million-in-recommendations-adopted-by-TransLink-1003895W/, accessed: 2016-05-01.
The Globe and Mail, 2014. “Electrician suspended in connection with Vancouver’s SkyTrain shutdown”, available at: http://www.theglobeandmail.com/news/british-columbia/crippling-skytrain-outage-caused-by-human-error-vancouver-authority-says/article19711156/, accessed: 2016-05-01.
Vancouver Sun, 2014. “Inside the SkyTrain control room”, available at: http://www.vancouversun.com/Inside+SkyTrain+control+room/10092529/story.html?__lsa=70da-bda3, accessed: 2016-05-01.
BMW-ConnectedDrive
Auto Connected Car, 2014. “BMW ConnectedDrive outage & connected car app disasters”, available at: http://www.autoconnectedcar.com/2014/07/bmw-connecteddrive-outage-connected-car-app-disasters/, accessed: 2016-05-01.
BMW, 2013. “My BMW Remote App”, available at: http://www.bmw.com/com/en/owners/bmw_apps_2013/apps/my_bmw_remote_app/, accessed: 2016-05-01.
Computer World, 2014. “BMW's connected car services go offline across UK”, available at: http://www.computerworlduk.com/news/mobile-wireless/3532556/bmws-connected-car-services-go-offline-across-uk/, accessed: 2016-05-01.
Telematics Wire, 2014. “BMW’s ConnectedDrive suffers a major outage problem in UK”, available at: http://telematicswire.net/bmws-connecteddrive-suffers-a-major-outage-problem-in-uk/, accessed: 2016-05-01.
The Register, 2014. “BMW's ConnectedDrive falls over, bosses blame upgrade snafu”, available at: http://www.theregister.co.uk/2014/07/24/bmw_connected_drive/, accessed: 2016-05-01.
Transport Evolved, 2014. “BMW i3 Owners Left Feeling Disconnected After Possible Telematics Failure”, available at: https://transportevolved.com/2014/07/24/bmw-i3-owners-left-feeling-disconnected-possible-telematics-failure/, accessed: 2016-05-01.
BMW-Logistics
Bloomberg-1, 2013. “BMW Owners Vent Anger at Months-Long Wait for Spare Parts”, available at: http://www.bloomberg.com/news/articles/2013-08-27/bmw-owners-vent-anger-at-months-long-wait-for-spare-parts, accessed: 2016-05-01.
Bloomberg-2, 2013. “BMW Owners Waiting for Repairs on Supply Chain Breakdown”, available at: http://www.bloomberg.com/news/articles/2013-08-20/bmw-owners-waiting-for-repairs-on-supply-chain-breakdown, accessed: 2016-05-01.
Bloomberg-3, 2013. “BMW to Fix Supply-Chain Delays by Month’s End, Executives Say”, available at: http://www.bloomberg.com/news/articles/2013-09-10/bmw-to-fix-supply-chain-delays-by-month-s-end-executives-say, accessed: 2016-05-01.
British Columbia’s ICM
BC Government and Service Employees’ Union, 2014. “Children at Risk Campaign Bulletin: Special Issue: ICM”, available at: http://www.bcgeu.ca/children-risk-campaign-bulletin-special-issue-icm, accessed: 2016-05-01.
BC Society of Transition Houses, 2014. “[Open Letter] Province of British Columbia’s Integrated Case Management System Crash Impacts Women Fleeing Violence”, available at: http://www.bcsth.ca/press/open-letter-province-british-columbia%E2%80%99s-integrated-case-management-system-crash-impacts-women, accessed: 2016-05-01.
CBC, 2014. “Computer crash puts children at risk, says B.C. NDP”, available at: http://www.cbc.ca/news/canada/british-columbia/computer-crash-puts-children-at-risk-says-b-c-ndp-1.2634771, accessed: 2016-05-01.
CKNW, 2014. “Social services computer system still not working”, available at: http://www.cknw.com/2014/05/08/social-services-computer-system-still-not-working/, accessed: 2016-05-01.
Huffington Post, 2014. “B.C.
Kids At Risk With Failed Government Computer System: Child Advocate”, available at: http://www.huffingtonpost.ca/2014/05/07/bc-kids-at-risk-governmet-computer-system_n_5283619.html, accessed: 2016-05-01.
IEEE Spectrum, 2014. “British Columbia's Integrated Case Management System Falls Over”, available at: http://spectrum.ieee.org/riskfactor/computing/it/british-columbias-integrated-case-management-system-falls-over, accessed: 2016-05-01.
The Province, 2014. “B.C.’s problem-plagued computer system puts kids at risk — and costs taxpayers”, available at: http://www.bcgeu.ca/michael-smyth-bc%E2%80%99s-problem-plagued-computer-system-puts-kids-risk-%E2%80%94-and-costs-taxpayers, accessed: 2016-05-01.
The Globe and Mail, 2014. “B.C. government computer system to be examined after crashes”, available at: http://www.theglobeandmail.com/news/british-columbia/bc-government-computer-system-to-be-examined-after-crashes/article18547342/, accessed: 2016-05-01.
Times Colonist, 2014. “Computer-system crashes putting children at risk, B.C. watchdog says”, available at: http://www.timescolonist.com/news/local/computer-system-crashes-putting-children-at-risk-b-c-watchdog-says-1.1026740, accessed: 2016-05-01.
Vancouver Sun-1, 2014. “The B.C. government’s $182-million computer system just won’t work”, available at: http://www.vancouversun.com/technology/government+million+computer+system+just+work+with+video/9840193/story.html, accessed: 2016-05-01.
Vancouver Sun-2, 2014. “Major glitch in provincial computer system hurting B.C.'s most vulnerable: NDP MLA”, available at: http://www.vancouversun.com/life/Major+glitch+provincial+computer+system+hurting+most+vulnerable/9813167/story.html, accessed: 2016-05-01.
BC Government, 2015. “ICM Project Overview”, available at: http://www.integratedcasemanagement.gov.bc.ca/documents/icm-overview.pdf, accessed: 2016-05-01.
BC Government, 2015.
“Frequently Asked Questions about ICM”, available at: http://www.integratedcasemanagement.gov.bc.ca/faq.html, accessed: 2016-05-01.
Comair
CNN, 2004. “CNN PEOPLE IN THE NEWS”, available at: http://www.cnn.com/TRANSCRIPTS/0412/25/pitn.01.html, accessed: 2016-05-01.
CIO, 2005. “Comair's Christmas Disaster: Bound to Fail”, available at: http://www.cio.com/article/2438920/risk-management/comair-s-christmas-disaster--bound-to-fail.html?page=2, accessed: 2016-05-01.
Foxnews, 2004. “Comair Cancels All 1,100 Flights”, available at: http://www.foxnews.com/story/2004/12/25/comair-cancels-all-1100-flights/, accessed: 2016-05-01.
Information Week, 2004. “Comair Downed by Computer Counting Limit”, available at: http://www.informationweek.com/comair-downed-by-computer-counting-limit/d/d-id/1029318?, accessed: 2016-05-01.
NBC News, 2005. “Comair Taps President after Christmas Fiasco”, available at: http://www.nbcnews.com/id/6835907/ns/business-us_business/t/comair-taps-president-after-christmas-fiasco/#.VyzVboSDGko, accessed: 2016-05-01.
USAToday, 2004. “Analysts: Comair's Christmas failure could hurt airline in long run”, available at: http://usatoday30.usatoday.com/travel/news/2004-12-28-comair-cancellations_x.htm, accessed: 2016-05-01.
USAToday-2, 2004. “Transportation official wants investigation of Comair flight cancellations”, available at: http://usatoday30.usatoday.com/travel/news/2004-12-28-comair-investigation_x.htm, accessed: 2016-05-01.
USAToday-3, 2004. “Comair to replace old system that failed”, available at: http://usatoday30.usatoday.com/travel/news/2004-12-28-comair-usat_x.htm, accessed: 2016-05-01.
Westerman, G., and Hunter, R. 2007. IT Risk: Turning Business Threats into Competitive Advantage. Boston, MA: Harvard Business School Press.
Delta
CNBC, 2016. “Delta: Outage causes two dozen brief delays”, available at: http://www.cnbc.com/2016/02/02/delta-admits-app-issues-amid-multiple-reports-of-system-outage.html, accessed: 2016-05-01.
Reuters, 2016. “Delta software outage delays boarding for two dozen flights”, available at: http://www.