{"Affiliation":[{"label":"Affiliation","value":"Business, Sauder School of","attrs":{"lang":"en","ns":"http:\/\/vivoweb.org\/ontology\/core#departmentOrSchool","classmap":"vivo:EducationalProcess","property":"vivo:departmentOrSchool"},"iri":"http:\/\/vivoweb.org\/ontology\/core#departmentOrSchool","explain":"VIVO-ISF Ontology V1.6 Property; The department or school name within institution; Not intended to be an institution name."},{"label":"Affiliation","value":"Management Information Systems, Division of","attrs":{"lang":"en","ns":"http:\/\/vivoweb.org\/ontology\/core#departmentOrSchool","classmap":"vivo:EducationalProcess","property":"vivo:departmentOrSchool"},"iri":"http:\/\/vivoweb.org\/ontology\/core#departmentOrSchool","explain":"VIVO-ISF Ontology V1.6 Property; The department or school name within institution; Not intended to be an institution name."}],"AggregatedSourceRepository":[{"label":"Aggregated Source Repository","value":"DSpace","attrs":{"lang":"en","ns":"http:\/\/www.europeana.eu\/schemas\/edm\/dataProvider","classmap":"ore:Aggregation","property":"edm:dataProvider"},"iri":"http:\/\/www.europeana.eu\/schemas\/edm\/dataProvider","explain":"A Europeana Data Model Property; The name or identifier of the organization who contributes data indirectly to an aggregation service (e.g. 
Europeana)"}],"Campus":[{"label":"Campus","value":"UBCV","attrs":{"lang":"en","ns":"https:\/\/open.library.ubc.ca\/terms#degreeCampus","classmap":"oc:ThesisDescription","property":"oc:degreeCampus"},"iri":"https:\/\/open.library.ubc.ca\/terms#degreeCampus","explain":"UBC Open Collections Metadata Components; Local Field; Identifies the name of the campus from which the graduate completed their degree."}],"Creator":[{"label":"Creator","value":"Fard Bahreini, Amir","attrs":{"lang":"en","ns":"http:\/\/purl.org\/dc\/terms\/creator","classmap":"dpla:SourceResource","property":"dcterms:creator"},"iri":"http:\/\/purl.org\/dc\/terms\/creator","explain":"A Dublin Core Terms Property; An entity primarily responsible for making the resource.; Examples of a Contributor include a person, an organization, or a service."}],"DateAvailable":[{"label":"Date Available","value":"2021-06-17T20:51:51Z","attrs":{"lang":"en","ns":"http:\/\/purl.org\/dc\/terms\/issued","classmap":"edm:WebResource","property":"dcterms:issued"},"iri":"http:\/\/purl.org\/dc\/terms\/issued","explain":"A Dublin Core Terms Property; Date of formal issuance (e.g., publication) of the resource."}],"DateIssued":[{"label":"Date Issued","value":"2021","attrs":{"lang":"en","ns":"http:\/\/purl.org\/dc\/terms\/issued","classmap":"oc:SourceResource","property":"dcterms:issued"},"iri":"http:\/\/purl.org\/dc\/terms\/issued","explain":"A Dublin Core Terms Property; Date of formal issuance (e.g., publication) of the resource."}],"Degree":[{"label":"Degree (Theses)","value":"Doctor of Philosophy - PhD","attrs":{"lang":"en","ns":"http:\/\/vivoweb.org\/ontology\/core#relatedDegree","classmap":"vivo:ThesisDegree","property":"vivo:relatedDegree"},"iri":"http:\/\/vivoweb.org\/ontology\/core#relatedDegree","explain":"VIVO-ISF Ontology V1.6 Property; The thesis degree; Extended Property specified by UBC, as per https:\/\/wiki.duraspace.org\/display\/VIVO\/Ontology+Editor%27s+Guide"}],"DegreeGrantor":[{"label":"Degree 
Grantor","value":"University of British Columbia","attrs":{"lang":"en","ns":"https:\/\/open.library.ubc.ca\/terms#degreeGrantor","classmap":"oc:ThesisDescription","property":"oc:degreeGrantor"},"iri":"https:\/\/open.library.ubc.ca\/terms#degreeGrantor","explain":"UBC Open Collections Metadata Components; Local Field; Indicates the institution where the thesis was granted."},"Description":[{"label":"Description","value":"Inadvertent and irrational human errors (e.g., clicking on phishing emails) have been the primary cause of security breaches in recent years. It has been estimated that these errors were the source of approximately 84% of all breaches in 2017 (Sher-Jan, 2018). To understand the root cause of these errors and examine practical solutions for personal users, I applied the theory of bounded rationality (Simon, 1972, 2000). In the second chapter, I examined the role of several factors (i.e., objective knowledge, subjective knowledge, and default security level) on how secure a decision made by a personal user is (i.e., security level of user\u2019s decision). I discovered that the default security level has the most significant influence on the security level of a user\u2019s decision. Furthermore, the results illustrated that subjective security knowledge mediates the impact of objective security knowledge on security decisions. In Chapter 3, I explored the role of heuristics (i.e., short mental processes) in security decision making. The interviews conducted revealed that users rely on various heuristics to simplify their decision making. Specifically, users rely on experts\u2019 comments (i.e., expertise heuristic), information at hand, such as recent events (i.e., availability heuristic), and security-representative visual cues (i.e., representativeness heuristic). Findings also showed the use of other heuristics, including affect, brand, and anchoring, to a lesser degree. 
In Chapter 4, I examined the impact of several nudging strategies by using the most prevalent heuristic cues discovered in Chapter 3 and the construal level (i.e., level of abstraction) of messages on users\u2019 security decisions. Using the security level of settings and password entropy as measures of the overall degree of security, users made more secure decisions in the presence of any of the heuristic cues irrespective of the construal level compared to the baseline group (i.e., no-message group). Additionally, with respect to the security level of settings, low-level construal availability, low-level construal representativeness, and high-level construal expertise had the highest impact. For password entropy, low-level construal availability and low-level construal representativeness were also the most effective combination. However, there was no significant difference between high-level and low-level construal expertise conditions.","attrs":{"lang":"en","ns":"http:\/\/purl.org\/dc\/terms\/description","classmap":"dpla:SourceResource","property":"dcterms:description"},"iri":"http:\/\/purl.org\/dc\/terms\/description","explain":"A Dublin Core Terms Property; An account of the resource.; Description may include but is not limited to: an abstract, a table of contents, a graphical representation, or a free-text account of the resource."}],"DigitalResourceOriginalRecord":[{"label":"Digital Resource Original Record","value":"https:\/\/circle.library.ubc.ca\/rest\/handle\/2429\/78723?expand=metadata","attrs":{"lang":"en","ns":"http:\/\/www.europeana.eu\/schemas\/edm\/aggregatedCHO","classmap":"ore:Aggregation","property":"edm:aggregatedCHO"},"iri":"http:\/\/www.europeana.eu\/schemas\/edm\/aggregatedCHO","explain":"A Europeana Data Model Property; The identifier of the source object, e.g. the Mona Lisa itself. 
This could be a full linked open data URI or an internal identifier"}],"FullText":[{"label":"Full Text","value":"ROLE OF HEURISTICS AND BIASES IN INFORMATION SECURITY DECISION MAKING   by   Amir Fard Bahreini  B.Acc, Shiraz University, 2015 M.Sc. University of Oklahoma, 2017 M.B.A University of Oklahoma, 2017   A DISSERTATION SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF  DOCTOR OF PHILOSOPHY  in  THE FACULTY OF GRADUATE AND POSTDOCTORAL STUDIES  (Business Administration)  THE UNIVERSITY OF BRITISH COLUMBIA  (Vancouver)  June 2021  \u00a9 Amir Fard Bahreini, 2021    ii The following individuals certify that they have read, and recommend to the Faculty of Graduate and Postdoctoral Studies for acceptance, the dissertation entitled:  Role of Heuristics and Biases in Information Security Decision Making  submitted by  Amir Fard Bahreini   in partial fulfillment of the requirements for the degree of Doctor of Philosophy  in Business Administration   Examining Committee: Ronald T. Cenfetelli, Professor, Accounting and Information Systems, Sauder School of Business, University of British Columbia Co-supervisor Hasan Cavusoglu, Associate Professor, Accounting and Information Systems Division, Sauder School of Business, University of British Columbia Co-supervisor  Victoria Lemieux, Associate Professor, School of Information, University of British Columbia Supervisory Committee Member Ning Nan, Associate Professor, Accounting and Information Systems Division, Sauder School of Business, University of British Columbia University Examiner Konstantin Beznosov, Professor, Department of Electrical and Computer Engineering, University of British Columbia University Examiner       iii Abstract  Inadvertent and irrational human errors (e.g., clicking on phishing emails) have been the primary cause of security breaches in recent years. It has been estimated that these errors were the source of approximately 84% of all breaches in 2017 (Sher-Jan, 2018). 
To understand the root cause of these errors and examine practical solutions for personal users, I applied the theory of bounded rationality (Simon, 1972, 2000). In the second chapter, I examined the role of several factors (i.e., objective knowledge, subjective knowledge, and default security level) on how secure a decision made by a personal user is (i.e., security level of user\u2019s decision). I discovered that the default security level has the most significant influence on the security level of a user\u2019s decision. Furthermore, the results illustrated that subjective security knowledge mediates the impact of objective security knowledge on security decisions. In Chapter 3, I explored the role of heuristics (i.e., short mental processes) in security decision making. The interviews conducted revealed that users rely on various heuristics to simplify their decision making. Specifically, users rely on experts\u2019 comments (i.e., expertise heuristic), information at hand, such as recent events (i.e., availability heuristic), and security-representative visual cues (i.e., representativeness heuristic). Findings also showed the use of other heuristics, including affect, brand, and anchoring, to a lesser degree. In Chapter 4, I examined the impact of several nudging strategies by using the most prevalent heuristic cues discovered in Chapter 3 and the construal level (i.e., level of abstraction) of messages on users\u2019 security decisions. Using the security level of settings and password entropy as measures of the overall degree of security, users made more secure decisions in the presence of any of the heuristic cues irrespective of the construal level compared to the baseline group (i.e., no-message group). Additionally, with respect to the security level of settings, low-level construal availability, low-level construal representativeness, and high-level construal expertise had the highest impact. 
For password entropy, low-level construal availability and low-level construal representativeness were also the most effective combination. However, there was no significant difference between the high-level and low-level construal expertise conditions.  iv Lay Summary   Human beings continue to be the weakest link in information security, and their mistakes cause financial and data losses for them and for the organizations connected to them. The findings of the three studies provide an explanation for users\u2019 errors and the decision-making process in the context of information security. Users seek to reduce their thinking efforts in security decisions. In this process, they rely on various effort-reduction thinking approaches and\/or default options to make the decision process easier. Furthermore, what they think they know is a key contributor to utilizing their actual security knowledge. While not inherently problematic, these actions could lead to costly errors in judgment. The findings also offer a practical solution in the form of system messages that can mitigate the possibility of such errors and help users make secure decisions.                v Preface  This dissertation is an original, unpublished work by the author, Amir Fard Bahreini. The studies were conducted under the supervision of Dr. Ronald T. Cenfetelli and Dr. Hasan Cavusoglu. The field study reported in Chapter 2 was conducted after UBC Human Ethics approval (ID: H19-00666). The exploratory study reported in Chapter 3 was conducted after UBC Human Ethics approval (ID: H20-03508). The experiment reported in Chapter 4 was conducted after UBC Human Ethics approval (ID: H20-00879).   vi Table of Contents Abstract ......................................................................................................................................... iii Lay Summary ............................................................................................................................... 
iv Preface ............................................................................................................................................ v Table of Contents ......................................................................................................................... vi List of Tables .................................................................................................................................. x List of Figures .............................................................................................................................. xii Acknowledgments ...................................................................................................................... xiii Chapter 1: Introduction ............................................................................................................... 1 1.1 Research Motivation ............................................................................................................. 2 1.2 Theoretical Framework ......................................................................................................... 3 1.3 Research Objectives and Questions ...................................................................................... 5 Chapter 2: Role of Security Knowledge and Default Security Level in Security Decision Making ........................................................................................................................................... 7 2.1 Introduction ........................................................................................................................... 7 2.2 Literature Review.................................................................................................................. 9 2.2.1 Status Quo (Default) Options......................................................................................... 
9 2.2.2 Objective Knowledge and Subjective Knowledge ...................................................... 10 2.2.3 Default Options, Subjective Knowledge, and Objective Knowledge in                    Information Security Literature ............................................................................................ 12 2.3 Theory Development .......................................................................................................... 13 2.3.1 Objective and Subjective Security Knowledge............................................................ 15 2.3.2 Role of Default Security Level .................................................................................... 15 2.3.3 Role of Objective Security Knowledge ....................................................................... 16 2.3.4 Role of Subjective Security Knowledge ...................................................................... 17 2.3.5 Mediating Role of Subjective Security knowledge in the Relationship Between Objective Security Knowledge and Security Level of the User\u2019s Decision ......................... 17 2.3.6 Moderating Role of Subjective Security Knowledge in the Relationship Between Default Security Level and Security Level of User\u2019s Decision ............................................ 18 2.4 Methodology ....................................................................................................................... 18 vii 2.4.1 Item Development ........................................................................................................ 18 2.4.2 Sampling ...................................................................................................................... 23 2.4.3 Study Procedure ........................................................................................................... 24 2.4.4 Descriptive Analysis .................................................................................................... 
25 2.4.5 Measurement Model .................................................................................................... 25 2.4.6 Structural Model Results.............................................................................................. 28 2.5 Discussion of Results .......................................................................................................... 30 2.6 Theoretical Implications ..................................................................................................... 31 2.7 Practical Implications.......................................................................................................... 32 Chapter 3: Exploratory Study of Role of Heuristics in Information Security Decision Making ......................................................................................................................................... 35 3.1 Introduction ......................................................................................................................... 35 3.2 Heuristics in Information Security Literature ..................................................................... 38 3.3 Theoretical Background ...................................................................................................... 39 3.3.1 Why People use Heuristics? ......................................................................................... 39 3.3.2 How People Use Heuristics ......................................................................................... 39 3.4 Methodology ....................................................................................................................... 42 3.4.1 Study Design ................................................................................................................ 42 3.4.2 Data Collection ............................................................................................................ 
44 3.4.3 Data Analysis ............................................................................................................... 48 3.4.4 Results .......................................................................................................................... 53 3.5 Discussion of Results .......................................................................................................... 55 3.5.1 Expertise Heuristic Sub-Themes.................................................................................. 55 3.5.2 Availability Heuristic Sub-Themes .............................................................................. 56 3.5.3 Representativeness Heuristic Sub-Themes .................................................................. 57 3.5.4 Anchoring Heuristic Sub-Themes ................................................................................ 58 3.5.5 Affect Heuristic Sub-Themes ....................................................................................... 59 3.5.6 Brand Heuristic Sub-Themes ....................................................................................... 59 3.6 Theoretical Implications ..................................................................................................... 60 3.7 Practical Implications.......................................................................................................... 63 Chapter 4: Empirical Analysis of the Influence of Heuristic-Based Nudging on Security viii Decision Making .......................................................................................................................... 64 4.1 Introduction ......................................................................................................................... 64 4.2 Relevant Literature.............................................................................................................. 
68 4.2.1 Availability Heuristic and Decision Making ................................................................ 68 4.2.2 Representativeness Heuristic and Decision Making .................................................... 69 4.2.3 Expertise Heuristic and Decision Making ................................................................... 69 4.2.4 Construal Level and Decision Making ........................................................................ 70 4.3 Theory Development .......................................................................................................... 71 4.3.1 Availability Heuristic in Information Security ............................................................. 71 4.3.2 Representativeness Heuristic in Information Security ................................................. 72 4.3.3 Expertise Heuristic in Information Security ................................................................ 72 4.3.4 Construal Level Theory (CLT) in Information Security .............................................. 74 4.4 Methodology ....................................................................................................................... 75 4.4.1 Demographic and Treatment Breakdown .................................................................... 76 4.4.2 Study Design ................................................................................................................ 77 4.4.3 Measurements .............................................................................................................. 78 4.4.4 Treatments .................................................................................................................... 80 4.4.5 Study Procedure ........................................................................................................... 82 4.4.6 Data Analysis ............................................................................................................... 
82 4.5 Manipulation Checks .......................................................................................................... 89 4.6 Discussion of Results .......................................................................................................... 92 4.7 Theoretical Implications ..................................................................................................... 97 4.8 Practical Implications.......................................................................................................... 98 Chapter 5: Conclusion .............................................................................................................. 100 5.1 Limitations ........................................................................................................................ 100 5.2 Summary and Future Direction ......................................................................................... 102 Bibliography .............................................................................................................................. 105 Appendices .................................................................................................................................. 118 Appendix A (Chapter 2) Objective Security Knowledge Construct Item Development .........118 Appendix B (Chapter 2) Security Knowledge Final Items ..................................................... 125 Appendix C (Chapter 2) Control Variables Items ................................................................... 127 ix Appendix D (Chapter 2) Control Variables Descriptive Statistics .......................................... 128 Appendix E Bivariate Correlation (All Variables) .................................................................. 129 Appendix F (Chapter 2) Collinearity Diagnostic .................................................................... 
130 Appendix G (Chapter 2) Influential Outliers .......................................................................... 131 Appendix H (Chapter 3) Phase 1 Questionnaire ..................................................................... 132 Appendix I (Chapter 3) Phase 2 Questionnaire ...................................................................... 133 Appendix J (Chapter 3) Data Analysis Stages and Adherence to Main Criteria .................... 135 Appendix K (Chapter 3) Card Sorting Results ....................................................................... 137 Appendix L (Chapter 4) Password Entropy Post Hoc Comparison (Main Effects) ............... 143 Appendix M (Chapter 4) Security Setting Post Hoc Comparison (Main Effects) .................. 144                               x  List of Tables  Table 1.1 Thesis Theoretical Framework ........................................................................................ 6 Table 2.1 Objective Security Knowledge Scale Development ..................................................... 21 Table 2.2 Security Level of User\u2019s Decision ................................................................................ 22 Table 2.3 Default Security Level .................................................................................................. 23 Table 2.4 Descriptive Statistics for Main Continuous Constructs ................................................ 25 Table 2.5 Bivariate Correlations for Main Constructs .................................................................. 25 Table 2.6 Latent Control Variables CFA\/EFA ............................................................................... 27 Table 2.7 Squared Multiple Correlation ........................................................................................ 28 Table 2.8 SEM Full Results .......................................................................................................... 
30 Table 2.9 Changes to Default Options .......................................................................................... 33 Table 3.1 Demographic Breakdown ............................................................................................. 44 Table 3.2 Decision Types .............................................................................................................. 46 Table 3.3 Phase 1 Preliminary Assessment Summary .................................................................. 47 Table 3.4 Indexing Codebook ....................................................................................................... 49 Table 3.5 Heuristic Usage per Decision ........................................................................................ 52 Table 3.6 Bivariate Correlations ................................................................................................... 52 Table 3.7 Heuristic Usage per Participant ..................................................................................... 54 Table 3.8 Heuristic Usage per Participants and Task Type ........................................................... 55 Table 3.9 Expertise Heuristic Sub-Themes (Sample Responses) ................................................. 56 Table 3.10 Availability Heuristic Sub-Themes (Sample Responses) ............................................ 57 Table 3.11 Representativeness Heuristics Sub-Themes (Sample Responses) .............................. 58 Table 3.12 Anchoring Heuristics Sub-Themes (Sample Responses) ............................................ 58 Table 3.13 Affect Heuristics Sub-Themes (Sample Responses) ................................................... 59 Table 3.14 Brand Heuristics Sub-Themes (Sample Responses) ................................................... 59 Table 4.1 Availability, Representativeness, Expertise in Information Security ............................ 
74 Table 4.2 Demographic Breakdown ............................................................................................. 76 Table 4.3 Treatment Random Assignment Breakdown ................................................................. 77 Table 4.4 Settings Security Level ................................................................................................. 79 Table 4.5 Password Treatments ..................................................................................................... 80 xi Table 4.6 Settings Treatments ....................................................................................................... 81 Table 4.7 Password Entropy Descriptive Statistics ....................................................................... 83 Table 4.8 Password Entropy ANOVA Results .............................................................................. 84 Table 4.9 Settings Security Level Descriptive Statistics .............................................................. 86 Table 4.10 Settings Security Level ANOVA Results .................................................................... 86 Table 4.11 Word Cloud Manipulation Check Criteria .................................................................. 89 Table 4.12 Likert Scale Manipulation Checks Descriptive Statistics ........................................... 91 Table 4.13 Likert Scale Manipulation Checks ANOVA ............................................................... 92 Table 4.14 Theoretical Contributions ............................................................................................ 98 Table 4.15 Practical Implications .................................................................................................. 99 Table A.1 (Appendix A) Rating Task Example Adopted by MacKenzie et al. (2011) ................119 Table A.2 (Appendix A) Expert Rating Task Results ................................................................. 
121 Table A.3 (Appendix A) Average Scale Difficulty...................................................................... 122 Table A.4 (Appendix A) Frequency Distribution of Response \u2013Technology Question .............. 123 Table A.5 (Appendix A) Frequency Distribution of Response \u2013Best Practices Questions......... 123 Table A.6 (Appendix B) Security Knowledge Quiz ................................................................... 126 Table A.7 (Appendix C) Control Variables ................................................................................. 127 Table A.8 (Appendix D) Descriptive Statistics \u2013 Control Variables ........................................... 128 Table A.9 (Appendix E) Bivariate Correlations .......................................................................... 129 Table A.10 (Appendix F) Multicollinearity Check ..................................................................... 130 Table A.11 (Appendix J) Quality Criteria for Qualitative Research ........................................... 136 Table A.12 (Appendix K) Hit Ratio Assessment ........................................................................ 142 Table A.13 (Appendix K) Fleiss\u2019 Kappa Between the Study Investigators ................................ 142 Table A.14 (Appendix L) Main Effect Pairwise Comparison (Password Entropy) .................... 143 Table A.15 Main Effect (Appendix M) Pairwise Comparison (Settings Security Level) ........... 144         xii List of Figures  Figure 2.1 Theoretical Model ....................................................................................................... 14 Figure 2.2 App Setup Pages .......................................................................................................... 22 Figure 2.3 SEM Results ................................................................................................................ 
......... 29
Figure 3.1 Data Collection Approaches ......... 43
Figure 3.2 Example of Card Sorting Answers ......... 51
Figure 3.3 Example of Hit Ratio Analysis ......... 51
Figure 3.4 Theoretical Contribution ......... 62
Figure 4.1 Study Website ......... 78
Figure 4.2 Example of Treatments on Website ......... 81
Figure 4.3 Significant Two-Way Interaction (Password Entropy) ......... 84
Figure 4.4 Significant Two-Way Interactions (Settings Security Level) ......... 87
Figure 4.5 Significant Three-Way Interaction (Settings Security Level) ......... 88
Figure 4.6 Word Cloud Manipulation Check Results ......... 90
Figure 4.7 Average Entropy per Group (Sorted) ......... 93
Figure 4.8 Average Settings Security Level per Group (Sorted) ......... 94
Figure A.1 Cook's D Check ......... 131

Acknowledgments

I offer my utmost appreciation to Professor Cenfetelli and Professor Cavusoglu for their incredible support over my doctoral studies and for inspiring me to continue my work in this field. I owe particular thanks to Professor Lemieux for her guidance and support in my academic journey.
I would also like to thank the faculty and fellow students at Sauder, and especially the MIS division, for their help, advice, discussions, and feedback over the years.

Special thanks are owed to my parents and my brother, who have supported me throughout my years of education, both morally and financially.

Chapter 1: Introduction

With the ever-increasing role of technology, the number of security decisions organizational and personal users must make is rising; choosing a secure password, avoiding infected websites, recognizing dangerous emails, and selecting the best security settings are a few examples of such decisions. Accordingly, helping users make more secure decisions and reducing decision errors has been an ongoing and essential objective of many information security practitioners and scholars. To understand these existing efforts, it is important to recall the components of information security decision making. All security decisions are the result of the interaction of two main parts: technology and people. For example, to prevent unauthorized access, an individual can create a password using the functionality provided by the operating system. To secure one's device, an individual may use security software (e.g., antivirus) to detect and remove malware. While technology, or the lack of it, can be the source of insecurity, people are the weakest link (Anderson & Agarwal, 2010). As a result, there has been a growing effort in security research in recent years to understand the behavior of people in the context of information security, particularly personal users (Beitelspacher, Hansen, Johnston, & Deitz, 2012; Boss, Galletta, Lowry, Moody, & Polak, 2015; Chen & Zahedi, 2016; Kritzinger & von Solms, 2010; Li & Siponen, 2011; Marett, McNab, & Harris, 2011; Menard, Bott, & Crossler, 2017; Schuetz, Lowry, Pienta, & Bennett Thatcher, 2020).
Personal users comprise a large portion of overall decision makers in information security (Kritzinger & von Solms, 2010) and endure 90% of all attacks (Anderson & Agarwal, 2010; Kritzinger & von Solms, 2010).

When it comes to security decision making, personal users differ from users in the organizational context in several respects. First, as Li & Siponen (2011) pointed out, unlike firm employees, who receive a variety of benefits such as activity monitoring by the security team at their offices, personal users are the judge, jury, and executioner in their everyday security decisions. They receive no human assistance throughout the process of security decision making, and if a mistake occurs, they are responsible for fixing it. Second, these users cannot be mandated to learn security knowledge (Anderson & Agarwal, 2010; Kritzinger & von Solms, 2010); unlike corporate employees, who usually undergo security training set by the firm's management, personal users decide when and whether they need to increase their security knowledge. In addition to the differences highlighted above, there is a broader concern about personal users not making their best security decisions. The security decisions of this group of users are no less consequential than those of organizational users. Indeed, the poor security decisions of personal users can influence not only the security of their own data but also the security of others. For example, if an individual chooses a poor password, a hacker can gain access to the data on the individual's phone. However, the damage does not end there, as the hacker can obtain personally identifiable information (PII) of the person's contacts from the victim's device and carry out further malicious attacks against others. A recent example is a new type of Android ransomware that can spread to all of the original victim's device contacts via text message and eventually infiltrate their systems (Symantec, 2017).
Consequently, better security decisions by personal users can enhance their own data security and help increase the protection of the people and organizations connected to them.

1.1 Research Motivation

Based on these challenges, the question becomes, "How can we help personal users with their security decisions?" Analogous to other domains that aim to improve human judgment, such as health studies, the answer to this question comprises two parts: understanding the roots of poor security decision making and offering practical solutions to mitigate those causes. As elaborated previously, my target audience is personal users on their personal devices, who cannot be mandated to learn about new security topics (e.g., a new type of malware) – as may be required of an employee – and who are not under the supervision of any security experts who could help them with potential security issues (Anderson & Agarwal, 2010; Kritzinger & von Solms, 2010). Thus, exploring the causes alone will not provide a comprehensive solution. Furthermore, driven by prior recommendations, my objective in this thesis is to increase the theoretical rigor and practical relevance of the study to the best of my capability (Benbasat & Zmud, 1999; Davenport & Markus, 1999). Thus, I aim to explore possible theory-driven solutions as well. This is somewhat analogous to consumer-customer topics in business commerce: discovering "What is the issue?" is only half the picture; discovering "How to address the issue?" is the other half. For example, in the toy industry, finding out what toys children (i.e., consumers) need is futile unless one can understand how to sell to the parents (i.e., customers), who can fulfill that need for the children. In this context, the group that can offer solutions to this issue is software companies and technology firms.
They help users through a variety of security products and the interactive design features within them. Antivirus software, firewalls, and email spam detection tools are all IT artifacts that help users, especially novice users, by providing feedback to avoid security incidents on their personal devices.

1.2 Theoretical Framework

The literature on judgment and decision making contains three theoretical frameworks that can be used to study security decision making. It is crucial to discuss these frameworks briefly to explain which theoretical framework I chose for my thesis.

Normative theories: Rooted in statistics, utility theory, and probability, this branch of decision theories tells us how a decision maker ought to judge and make decisions (Over, 2004). While each normative theory can have its own specific assumptions, there are assumptions common to all. The main assumption is that a decision maker has all the necessary information and will use all of that information to make her decision so as to obtain the maximum expected utility from an outcome (Bell, Raiffa, & Tversky, 1989). For many years, normative decision models have been the dominant framework used by scholars to assess decision making in most social science contexts, including information security (Adjerid, Peer, & Acquisti, 2018). Most of these models follow the assumptions of subjective expected utility theory. Based on this theory, these rational models have assumed that in a decision with multiple alternatives, individuals will use all of the necessary information and will select the alternative with the highest expected value (Neumann & Morgenstern, 2004; Savage, 1954). Furthermore, decision makers assess the available alternatives independently of each other (Schoemaker, 1982).

Descriptive theories: Despite the popularity of normative theories, researchers began documenting decisions that were not explainable under these models.
As a result, a separate branch of decision theories emerged, which aimed to describe the mechanism of such decisions. Rooted in psychology, descriptive theories were developed to "describe how people actually think" (Over, 2004, p. 3). Over the years, researchers observed that individuals' decision-making processes do not adhere to the assumptions of normative theories and deviate from the norms these theories set. More specifically, individuals may not have complete information, and they may consider certain alternatives while disregarding others (Tversky & Kahneman, 1974). One of the early descriptive theories is the theory of bounded rationality developed by Herbert A. Simon, which has been used over the years to understand the process of judgment and decision making of individuals (Simon, 1955, 1959, 1972, 2000). Under this theory, Simon discusses the role of cognitive limitations. According to him, people's actual thought process in decision making is different from how they ought to think (under rational models) because people are prone to various forms of cognitive limitations.

Prescriptive theories: One of the main purposes of the judgment and decision-making literature has been to improve individuals' decision quality. While normative theories describe how one ought to decide and descriptive theories describe how one actually makes decisions, prescriptive theories attempt to improve judgments from what has been observed using descriptive theories and offer corrections so that decisions improve according to normative theories (Over, 2004).

The majority of prior studies in the information security literature use models from normative theories to assess security decision making (Adjerid et al., 2018). However, while the results are undoubtedly valuable, they cannot explain the reasons behind people's irrational and inadvertent errors.
This is because, as mentioned previously, theories under the normative perspective assume that the decision maker is rational and has no cognitive limitations. For example, under normative theories, if one is aware of strong password criteria, she will create a strong password. However, evidence from actual security decisions paints a different picture. Several reports point to individuals' irrational decisions. The 2013 U.K. Information Security Breaches Survey reported that 36% of the worst security breaches of the year were caused by inadvertent human errors, including clicking on a malicious email, downloading a tampered file, or creating a weak password. According to the EY Information Security 2017 Survey, which polled 12,000 companies worldwide, inadvertent and careless mistakes by employees are considered the biggest cause of cyberattacks by 77% of respondents. This number is higher than both the share of respondents who see attacks by criminal organizations as the biggest threat (56%) and the share who see intentional malicious activities as the biggest threat (47%) (EY, 2018). In another instance, a third-party investigation of the Rockyou.com password hack revealed that the most popular password chosen by users was "123456" (Moscaritolo, 2010). Furthermore, inadvertent errors in general are argued to be even more prevalent among non-experts in a domain (Thaler & Sunstein, 2008). Accordingly, I chose to examine the information security decision making of personal users using the theory of bounded rationality, a descriptive decision theory, because it can address the root of irrational and inadvertent decision errors, as the assumption of cognitive limitations is a major component of these theories. Specifically, I argue that assessment of decisions under the theory of bounded rationality can be valuable for three reasons: i.
First, with its multidisciplinary roots in economics and psychology, the theory of bounded rationality can help in understanding decision-making processes that cannot be explained with normative models. ii. Second, by showcasing how personal users make decisions, it can help scholars investigate potential issues in the process of decision making and enable them to focus on areas where individuals need assistance. iii. Finally, the theory of bounded rationality can guide us in developing a prescriptive framework. To help users make more secure decisions, we must first understand how they make decisions.

1.3 Research Objectives and Questions

With this motivation, my thesis focuses on assessing the sources of personal users' poor security decision making and examining practical solutions to those issues. Ultimately, I hope the findings can assist users in making better security decisions. Accordingly, I set the following objectives for the chapters of my dissertation:

• Chapter 2: Examining several potentially influential factors, including knowledge, perception of knowledge, and default security level, in security decision making.
• Chapter 3: Identifying the potentially influential heuristics in information security decision making.
• Chapter 4: Examining decision assistance strategies to help personal users make more secure decisions.

In Chapter 2 and Chapter 3, I aim to address the first part of my thesis motivation (i.e., understanding the causes of poor security decisions). Building upon findings from these two chapters, in Chapter 4, I aim to find the best decision assistance strategies that can address the issues discovered in the first two chapters. Table 1.1 summarizes the overall theoretical framework of the thesis and the specific theories used in each chapter.
Chapter 2: Role of Security Knowledge and Default Security Level in Security Decision Making
  Overarching theoretical framework: Descriptive Framework (Keren & Wu, 2015)
  Specific theoretical basis:
  • Theory of Bounded Rationality (Simon, 1972)

Chapter 3: Exploratory Study of Role of Heuristics in Information Security Decision Making
  Specific theoretical basis:
  • Theory of Bounded Rationality (Simon, 1972)
  • Cognitive Heuristics and Biases (Shah & Oppenheimer, 2008; Tversky & Kahneman, 1974)

Chapter 4: Empirical Analysis of the Influence of Heuristic-Based Nudging on Security Decision Making
  Overarching theoretical framework: Prescriptive Framework (Keren & Wu, 2015)
  Specific theoretical basis:
  • Theory of Bounded Rationality (Simon, 1972)
  • Cognitive Heuristics and Biases (Larrick, 2004; Shah & Oppenheimer, 2008; Tversky & Kahneman, 1974)
  • Construal Level Theory (Liberman, Trope, & Wakslak, 2007; Trope & Liberman, 2010)

Table 1.1 Thesis Theoretical Framework

Chapter 2: Role of Security Knowledge and Default Security Level in Security Decision Making

2.1 Introduction

As technology's role in day-to-day activities increases, users make a myriad of security decisions with their personal devices; password selection, avoidance of risky websites, phishing email detection, and security settings selection are a few examples. Security settings themselves include many facets, such as two-factor authentication, login alerts from a new device, automatic updates, cookies, and location access. Consequently, the online security of users is not determined by just one decision alone. Instead, it is the aggregate of many choices. For instance, with respect to security settings, making the best (i.e., most secure) decision means selecting options in such a way that increases one's online security level. Thus, there is always a degree of objective variability in how good or bad decisions can be in protecting one against online threats (i.e., the security level of the user's decision).
For instance, if user A has turned on 'two-factor authentication' and 'login alert from a new device' and user B has only turned on 'two-factor authentication' – while both have made good security decisions – user A has chosen a higher security level with respect to her decision compared to user B, since she has also turned on 'login alert from a new device.' Accordingly, I am interested in assessing what influences the security level of users' decisions.

To follow this motivation, I rely on the theory of bounded rationality developed by Simon, which aims to explain people's actual decision making (Simon, 1972). Based on the theory of bounded rationality, in this study I am interested in examining three potential influencers of the security level of users' decisions: the actual security knowledge of users (objective security knowledge), users' perception of their own security knowledge (subjective security knowledge), and the default options' level of security (default security level).

Assessing these influencers is valuable for two main reasons. First, in information security, many decisions include default options, and the default options have their own security level. In other words, they can be preselected either securely (i.e., high-level security) or insecurely (i.e., low-level security). For instance, if default access to phone contacts is turned on for a mobile game, then one can say the default option is preselected at low-level security. Currently, some platforms are malicious and have default options at low-level security; others present users with higher security levels. For example, in just the first three months of 2020, more than 29,000 malicious Android apps were identified (Khalili, 2020). These apps use a variety of means, including in-app ads and low-level security default options, to access users' data.
On the other hand, many platforms initially provide a number of default options at high-level security. Facebook or Instagram will not have options such as access to the calendar, media, or files turned on by default. However, not all options are set at high-level security. For example, Facebook does not automatically turn on the login alert from a new device or enable two-factor authentication. Accordingly, the default options of the majority of websites and applications have a preselected security level (high or low), which, according to the theory of bounded rationality, can potentially be influential in users' final decisions.

Second, as stated previously, with personal security, users are solely responsible for their decisions. In such circumstances, the role of one's security knowledge is further amplified. For example, in the context of enabling/disabling cookies in security settings, both objective and subjective security knowledge can be influential. Knowing what cookies are and the perception of this knowledge (i.e., how much users think they know about cookies) can play an important role in deciding to enable/disable cookies.

Existing literature does not shed much light on the relationships between objective security knowledge, subjective security knowledge, default security level, and the security level of the user's decision. First, there is little assessment of the relationship between objective security knowledge and subjective security knowledge (Aggarwal, Kryscynski, Midha, & Singh, 2015). Pertinent to this challenge, there is no assessment of the influence of objective and subjective security knowledge on the security level of personal users' decisions. Prior studies have focused heavily on supervised users in organizational contexts (D'Arcy, Hovav, & Galletta, 2009; Lebek, Uffen, Neumann, Hohler, & H. Breitner, 2014).
Second, considering that most security decisions include default options, an empirical analysis of this relationship can be beneficial. Third, assessing how these three predictors associate with each other can help in understanding how they impact users' security decisions. Specifically, assessing the association between objective and subjective security knowledge, which has not been investigated in the context of personal users, can be valuable.

Based on this, I aim to answer the following research questions in this study:

RQ1. What is the relationship between objective and subjective security knowledge?

RQ2. How do default security level, objective security knowledge, and subjective security knowledge influence the security level of a user's decision?

To answer the research questions, I conducted a three-day field study in which I recruited 95 users via Prolific.co (a well-known online recruitment platform). I began by developing a questionnaire that can accurately measure personal users' objective security knowledge. In the next design step, to capture the actual security level of a decision, I used the security level of settings selected within an app that was designed and developed for both iOS and Android devices. The app was made available on the Google Play Store and the App Store. In the study, participants were required to download the app, use it, and assess its design and functionality over the span of three days. Results from the study showed that default security level is the most influential factor in the security level of a user's settings selection. I also found that subjective security knowledge plays a critical role: higher subjective security knowledge increases the level of security in users' settings selections and positively mediates the influence of objective security knowledge on the security level of settings selection.
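To make the outcome variable concrete, the "security level of a settings selection" discussed above can be sketched as a simple score over a user's chosen options. The unweighted count below is an illustrative assumption only, not necessarily the operationalization used in this study, and the option names are hypothetical:

```python
def settings_security_level(choices: dict[str, bool]) -> int:
    """Score a settings selection by counting the security-enhancing
    options a user has enabled (hypothetical, unweighted scheme)."""
    return sum(1 for enabled in choices.values() if enabled)

# Mirroring the earlier user A / user B comparison:
# user A enables both options; user B enables only two-factor authentication.
user_a = {"two_factor_authentication": True, "login_alert_new_device": True}
user_b = {"two_factor_authentication": True, "login_alert_new_device": False}
```

Under this sketch, user A's selection scores higher than user B's, capturing the idea that both made good decisions but at different security levels; a weighted scheme could instead give riskier options (e.g., location access) more influence on the score.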
The chapter is structured as follows: I start with a literature review on the role of default security level, objective knowledge, and subjective knowledge in decision making. Subsequently, I discuss relevant research in the information security literature. Next, I develop the study's underlying theory based on the theory of bounded rationality and generate the main research hypotheses. I follow this by discussing the research methodology and data analysis. In the end, I discuss the theoretical and practical implications of the study.

2.2 Literature Review

2.2.1 Status Quo (Default) Options

The role of default options has been studied extensively over the last 30 years. Samuelson & Zeckhauser (1988) were among the first to provide evidence for the role of default options: given a set of alternatives in which one is labeled as the default (i.e., the status quo), people are more likely to choose that option. The potential error from this process, labeled status quo bias, generated great interest across various literatures over the years. Many studies following that seminal paper provided support for the presence of status quo bias. Johnson et al. (1993) showed that most people tend to stay with their car insurance company even if it is more expensive than others and some alternatives are better. Hsieh & Pi-Jung (2015) showed that health professionals in Taiwan resist the acceptance of new cloud-based technology and are prone to status quo bias. Kim & Kankanhalli (2009) surveyed a selected number of employees of an organization before it implemented new Office Plus software and discovered that participants would rather use the existing software and resisted the adoption of new updates. Polites & Karahanna (2012) also showed that status quo bias, alongside habit, is associated with a decrease in intention to use new information systems over incumbent systems.
In addition to assessing the presence and influence of status quo bias, part of the literature has focused on this phenomenon's antecedents. Switching costs, cognitive misperceptions (such as loss aversion), and psychological commitment (such as sunk cost) are some of the factors explaining why individuals gravitate toward the status quo (Samuelson & Zeckhauser, 1988). Moreover, the IS literature has also looked at the reasons behind status quo bias in information systems adoption and usage [for reviews, see Polites & Karahanna (2012), Kim & Kankanhalli (2009), and Lee & Joshi (2017)].

2.2.2 Objective Knowledge and Subjective Knowledge

Prior work related to objective knowledge and subjective knowledge can be categorized into two domains: examining the impact of objective and subjective knowledge on decision making, and examining the association between the two constructs. In the former domain, the findings show that not only do these two types of knowledge influence various aspects of the decision-making process, but they also do so in different ways (Park, Gardner, & Thukral, 1988; Park, Mothersbaugh, & Feick, 1994). More specifically, prior works have assessed the influence of objective knowledge and subjective knowledge on information processing (Brucks, 1985; Park et al., 1994), information search (Moorman, Diehl, Brinberg, & Kidwell, 2004; Raju, Lonial, & Mangold, 1995), and decision quality (Lusardi & Mitchell, 2007). Fredrica (1979), in one of the early works concerning the influence of objective and subjective knowledge on food selection, showed that subjective knowledge increases reliance on previously stored information in memory, while objective knowledge increases the use of newly acquired information. Subsequent studies support the influence of objective knowledge and subjective knowledge on information processing and searching.
Pertinent to objective knowledge, Brucks (1985) found that higher objective knowledge increased search efforts for adding new information regarding the products of interest during a purchase simulation. Relevant to subjective knowledge, Radecki et al. (1995) conducted an experiment in which they told participants that they would need to answer a number of questions on birth control. To assist them, they also gave participants access to a database with information on birth control. They found that individuals with high subjective knowledge searched for less information in the database before answering the questions. Similarly, Raju et al. (1995) found that when it came to selecting a VCR based on available attributes, there was an inverse relationship between subjective knowledge1 and external information search.

Concerning decision quality, prior studies provide evidence that objective knowledge improves the quality of decisions (Lusardi & Mitchell, 2007). For instance, Van Rooij et al. (2011) conducted a survey across 2,000 households in the Netherlands to assess the association between financial literacy and financial decision making. They found that even after controlling for a large number of economic and demographic variables, there is a significant and meaningful positive association between objective financial knowledge and actual investment in the stock market (Van Rooij, Lusardi, & Alessie, 2011). Using a longitudinal survey of nearly 2,000 participants, Lusardi & Mitchell (2007) reported that higher objective financial knowledge leads to better financial planning and more wealth in retirement.

The same has not been found for subjective knowledge. While subjective knowledge increases the willingness to act in uncertain circumstances, it may not necessarily lead to better decisions.
Babiarz & Robb (2013) reported that, based on a national survey, individuals with high subjective knowledge are willing to put more into their emergency savings, which can ultimately help their future financial well-being. Hader et al. (2013), in a series of experiments, reported that individuals with high subjective knowledge would pursue riskier investments, which may or may not pay off. Bartholomé et al. (2006) discovered that high subjective knowledge could also reduce help-seeking and ultimately reduce decision quality when students are asked to work with new software.

In the second domain (i.e., assessing the relationship between subjective knowledge and objective knowledge), prior findings show that the correlation between these two knowledge constructs ranges from non-existent (r = .04) to highly positive (r = .56) (Brucks, 1985; Cole, Gaeth, Chakraborty, & Levin, 1992; Raju et al., 1995). In a meta-analysis, Carlson et al. (2009) showed the existence of eleven moderators in the consumer decision-making domain that can explain the variation in the correlation between these two constructs. Additionally, assessing the association between objective knowledge and subjective knowledge created a branch of research that aims to examine the roots and consequences of incongruency between the two (Alba & Hutchinson, 2000). This phenomenon, most commonly known as miscalibration, describes the correspondence between objective knowledge and subjective knowledge (Alba & Hutchinson, 2000).

1 Various nomenclature has been used to define these two constructs over the years; objective knowledge is also referred to as knowledge accuracy, objective probability judgment, and actual knowledge. Subjective knowledge is also labeled as confidence, subjective probability judgment, self-perceived knowledge, and metaknowledge (Aggarwal et al., 2015; Alba & Hutchinson, 2000; Griffin & Brenner, 2004; Lichtenstein, Fischhoff, & Phillips, 1981).
Under this explication, knowledge miscalibration is a continuum on which an individual can fall into one of three possible states: i) overconfidence, a state where subjective knowledge is higher than objective knowledge; ii) calibration, a state where subjective knowledge matches objective knowledge; and iii) underconfidence, a state where subjective knowledge is lower than objective knowledge. In most prior studies related to miscalibration, researchers examined whether overconfidence is a robust finding among individuals. Results have highlighted that in many domains, specifically in difficult tasks, people are overconfident in their knowledge (Alba & Hutchinson, 2000; Lichtenstein et al., 1981; Spence, 1996), albeit there is evidence of the existence of the two other states. Perfect calibration is seen among meteorologists, who receive continuous feedback on their weather predictions (Murphy & Winkler, 1977), and in tasks of moderate difficulty, while underconfidence is witnessed when the knowledge domain is easy (Griffin & Brenner, 2004).

2.2.3 Default Options, Subjective Knowledge, and Objective Knowledge in Information Security Literature

In the information security literature, the relevance and potential importance of default options have been discussed (Acquisti et al., 2017), and recently a few studies have examined the perception of knowledge (Houser & Bolton, 2017; Ur et al., 2016; Ur et al., 2015; Wash, 2010). Furthermore, most prior studies in information security have focused on the composite construct composed of objective knowledge and subjective knowledge (i.e., miscalibration) (Ament, 2017; Wagner & Mesbah, 2019). The work of Wang et al. (2016) is the closest to this study. The authors assessed whether participants' subjective assessment of their own performance is a good predictor of phishing email detection accuracy.
In that study, 600 participants were presented with fifteen email screenshots and asked to determine whether each email was legitimate or malicious. Subsequently, their subjective assessment of their performance was measured. The results failed to establish a significant correlation between subjective performance assessment and phishing email detection accuracy (Wang, Li, & Rao, 2016). However, the authors did not assess users’ security knowledge, as their sole focus was on the security decision. Consequently, the lack of evidence on the relationships among objective security knowledge, subjective knowledge, default security level, and the security level of users’ decisions, combined with the importance of these constructs in information security, motivated me to examine these relationships through the theoretical lens of the theory of bounded rationality.

2.3 Theory Development

To begin examining the influencers of security decision making, I drew upon the theory of bounded rationality. This descriptive decision theory, developed by Simon (1956, 1972, 2000), has been used over the years to understand users’ judgment and decision-making processes. The theory is particularly popular in domains where many users are not experts in the subject matter (such as consumer behavior), since it aims to explain a variety of decisions and deviations from normative decision models (Kahneman, 2003). Drawing upon cognitive psychology, Simon introduced several boundaries and limitations on how people make decisions, in contrast to earlier normative economic decision-making models, which rested on strong assumptions. For instance, those models assumed that people’s choices are only a function of their overall goals and the properties of the external environment. However, Simon (1959, p.
274) argues that, in addition to those factors, “Any particular concrete behavior is the resultant of a large number of premises… there will be premises about the state of the environment based directly on perception, premises representing beliefs and knowledge.”

First, the security decision environment can differ greatly depending on the context, the user’s objective, and the interface through which the decision is made. From renewing security software to selecting the security settings of a mobile app, users make myriad decisions related to their personal online security. Based on the multifaceted literature on choice structure in decision making (Keren & Wu, 2015), the state of the environment can be assessed through different lenses (e.g., the order of choices, the number of choices, the medium through which choices are presented). However, one aspect of the state of the environment is common to almost all forms of decision making: the existence of default options. Almost all modern decisions are presented with a default option (Hastie, 2001; Samuelson & Zeckhauser, 1988). For example, suppose users wish to purchase security software. In that case, they have the option of keeping the software they already had or that was installed with the OS (e.g., Windows Defender on Windows) or of purchasing new software. Similarly, with security settings, users always have the option to continue with the defaults, with no modification. Therefore, much like in many other domains, many security decisions include a default option (Samuelson & Zeckhauser, 1988). Consequently, the default security level of these options can anchor the security level of users’ decisions.

In addition to the state of the environment, Simon also discusses the role of knowledge. When referring to knowledge, he does not simply mean the knowledge that the decision maker has.
Specifically, he notes that an individual’s decision is a result of the “knowledge that decision-makers do and don’t have of the world, their ability or inability to evoke that knowledge” (Simon, 2000, p. 25). Accordingly, over the years, knowledge has been categorized into two forms: objective knowledge and subjective knowledge (Alba & Hutchinson, 2000; Brucks, 1985; Park et al., 1988). Subjective knowledge, which Russo and Schoemaker (1992) refer to as “metaknowledge,” helps individuals understand the scope and limits of their objective knowledge. Combined, these two types of knowledge influence our decision making.

Accordingly, I begin this section by presenting the theory and postulating the hypotheses related to the roles of default security level, objective security knowledge, and subjective security knowledge in security decision making. Figure 2.1 depicts the theoretical model.

[Figure 2.1 Theoretical Model: Objective Security Knowledge → Subjective Security Knowledge (H1); Default Security Level → Security Level of User’s Decision (H2); Objective Security Knowledge → Security Level of User’s Decision (H3); Subjective Security Knowledge → Security Level of User’s Decision (H4, Main); Subjective Security Knowledge mediating (H5) and moderating (H6)]

2.3.1 Objective and Subjective Security Knowledge

I begin by assessing the relationship between objective security knowledge and subjective security knowledge. Subjective knowledge is a function of objective knowledge (Brucks, 1985; Carlson et al., 2009). As Russo and Schoemaker (1992) discuss, subjective knowledge helps individuals understand the scope and limits of their knowledge. This self-assessment improves as one’s objective knowledge increases: as individuals gain more objective knowledge in a domain, they become more cognizant of their actual abilities and limitations, leading to higher subjective knowledge. Accordingly, I expect this relationship to be positive. Thus, I postulate:

H1: As objective security knowledge increases, subjective security knowledge increases.
2.3.2 Role of Default Security Level

The default option automatically serves as an anchor for users’ decisions. First described by Tversky and Kahneman (1974), the anchoring effect refers to circumstances in which individuals make a decision using an initial anchor, and default options are one instance of anchors that decision makers can use. Based on this concept, individuals assess alternatives relative to the anchor, and the anchor influences their ultimate decision. Accordingly, the default security level can anchor the resulting security level of the user’s decision. Dhingra et al. (2012) labeled this particular anchoring effect the default pull. Simply put, when a person is presented with several alternatives, one of which is set as the default, the question “Which is the best alternative?” becomes “Do I prefer the default alternative over the others?”, and many decision makers will prefer it. According to the theory of bounded rationality, if an option can make decision making easier, individuals will use it. Security settings selection is a good example of where the default pull can operate, both because people may not be knowledgeable enough to make informed decisions and because they may not wish to invest effort in the decision. The default pull has even been reported in medical decision making, which is among the most consequential decisions people make. Suri et al. (2013) conducted a lab study in which participants were to receive electric shocks. Participants had two options: do nothing and wait until they received the shock (i.e., the default option), or press a button to get it over with. The researchers expected most participants to choose the latter since, historically, waiting for a shock has been shown to cause more anxiety than receiving it right away. However, nearly 60% of participants went with the default option and chose to wait, despite it being the inferior outcome.
Default options thus act as a double-edged sword: if preselected at a higher security level, users are likely to keep the options at a higher security level, and if preselected at a lower security level, users are likely to keep them at a lower security level. Regardless of the reason, the consensus is that most people tend to maintain the status quo rather than select new alternatives. I propose that this default pull is also present in information security. Specifically, I postulate that:

H2: Default options that overall are preselected at a higher security level lead to an increase in the final security level of the user’s decision, while default options that overall are preselected at a lower security level lead to a decrease in the final security level of the user’s decision.

2.3.3 Role of Objective Security Knowledge

Prior literature highlights the importance of distinguishing between these two types of knowledge, as different factors influence them. Objective knowledge is shaped by one’s expertise and prior experience. Accordingly, objective knowledge is considered an important direct input to decision making (Alba & Hutchinson, 2000, p. 129): it affects the number of attributes assessed when making decisions and ultimately leads to better decisions (Alba & Hutchinson, 2000; Park et al., 1994). For instance, Brucks (1985) showed that individuals with higher objective knowledge review more relevant product-related attributes in purchase decisions. Objective knowledge is also linked to better performance in various domains. For instance, empirical findings have shown that IT knowledge positively influences managers’ intention to champion IT in their organizations (Bassellier, Benbasat, & Reich, 2003), and higher objective financial knowledge improves financial decision making (Huston, 2010).
I believe this relationship also holds in information security decision making for personal users. For example, a person who is knowledgeable about different social engineering methods is more likely to detect and prevent such attacks. Accordingly, in the context of personal information security, I posit that:

H3: Higher levels of objective security knowledge will increase the security level of the user’s decision.

2.3.4 Role of Subjective Security Knowledge

Subjective knowledge helps people understand the scope and limits of their objective knowledge, which is why it is also referred to as “metaknowledge” in prior literature (Russo & Schoemaker, 1992). Higher levels of subjective knowledge are associated with being more proactive in various decisions. A person with high subjective knowledge not only uses her knowledge in a task but is also more engaged in the decision-making process (Aggarwal et al., 2015; Alba & Hutchinson, 2000). For example, in the finance literature, some studies suggest that subjective knowledge leads to better investment strategies and financial well-being (Babiarz & Robb, 2014; Van Rooij et al., 2011). I argue this also applies to information security decision making. Higher subjective security knowledge helps individuals understand the limits and scope of their objective security knowledge. This increased awareness helps users know when they can decide on their own and when they should ask for help. Consequently, I postulate that:

H4: Higher levels of subjective security knowledge will increase the security level of the user’s decision.

2.3.5 Mediating Role of Subjective Security Knowledge in the Relationship Between Objective Security Knowledge and Security Level of the User’s Decision

As discussed, subjective knowledge is influential in decision making (Alba & Hutchinson, 2000).
Accordingly, to fully utilize their objective knowledge, individuals rely on their perception of the knowledge they have: “Knowledge, transformational operations, and component skills are necessary but insufficient for accomplished performances. Indeed, people often do not behave optimally, even though they know full well what to do. This is because self-referent thought also mediates the relationship between knowledge and action” (Bandura, 1982, p. 122). Over the years, several studies have documented such an effect. Aggarwal et al. (2015), for instance, showed that self-perceived IT knowledge mediates the impact of actual IT knowledge on new technology adoption. Based on this, and considering the role of self-referent thought (subjective knowledge, in this case), I expect that:

H5: Subjective security knowledge mediates the relationship between objective security knowledge and the security level of the user’s decision.

2.3.6 Moderating Role of Subjective Security Knowledge in the Relationship Between Default Security Level and Security Level of User’s Decision

Finally, I propose that subjective security knowledge can influence the relationship between the default security level and the security level of the user’s decision. In H2, I argued that default options preselected at a higher (lower) security level would increase (decrease) the security level of the user’s decision; put simply, personal users use default options as an anchor in their security decision making. However, I argue that, given the nature and consequences of subjective security knowledge, higher subjective security knowledge will pull users away from using default options as an anchor. Individuals with high subjective security knowledge are proactive and tend to make their own decisions. Therefore, for users with high subjective security knowledge, the influence of the default security level will be lessened.
Thus, I postulate:

H6: Subjective security knowledge moderates the positive effect of default security level on the security level of the user’s decision. That is, higher subjective security knowledge will reduce the positive association between default security level and the security level of the user’s decision.

2.4 Methodology

2.4.1 Item Development

I used a combination of established and newly developed scales to operationalize the constructs. Measures for objective security knowledge and the security level of the user’s decision were developed for this study.

Capturing Objective Security Knowledge for Personal Users: Drawing from Aggarwal et al. (2015), I define objective security knowledge as the “awareness of common security threats and available defense mechanisms.” Under this definition, objective security knowledge covers two main areas: security threats and defense mechanisms. Both can originate from, and be implemented by, humans and technology (the two main components of information security). Security threats can arise from other individuals (e.g., social engineering attacks, shoulder surfing) or from malicious software (i.e., an IT artifact). Similarly, defense mechanisms can be implemented by humans or technology: some are human-oriented (e.g., selecting a strong password, following best practices on public Wi-Fi), while others are technology-oriented (e.g., VPNs, firewalls). To accurately measure the objective security knowledge of personal users based on this definition, I developed a new scale for this study, following a recent framework proposed by Boateng et al. (2018) for the development and validation of scales in the behavioral sciences, and integrated this procedure with several steps used and proposed in prior IS literature (Aggarwal et al., 2015; Bassellier, Benbasat, & Reich, 2003; MacKenzie, Podsakoff, & Podsakoff, 2011). The final scale was a 20-item multiple-choice quiz.
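As a concrete illustration, scoring such a quiz reduces to counting correct answers against an answer key. The sketch below is a minimal, hypothetical implementation; the function name and the example key are my own, not the study’s actual materials.

```python
def objective_knowledge_score(responses: list[str], answer_key: list[str]) -> int:
    """Objective security knowledge operationalized as the number of
    multiple-choice items answered correctly (0-20 for a 20-item scale)."""
    if len(responses) != len(answer_key):
        raise ValueError("one response expected per quiz item")
    # Each correct match contributes one point to the knowledge score.
    return sum(given == correct for given, correct in zip(responses, answer_key))
```

For example, on a hypothetical three-item key `["A", "B", "B"]`, the responses `["A", "C", "B"]` score 2.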
I highlight the procedure in Table 2.1; the complete development process and final questionnaire are presented in Appendices A and B.

Capturing Security Level of User’s Decision: To increase the study’s ecological validity and improve the measurement of the security decision construct, as recommended by prior work (Alba & Hutchinson, 2000), I captured users’ actual decisions using a mobile app’s security settings (shown in Figure 2.2). I designed sixteen dichotomous options (i.e., on vs. off) in the app. These included both access-based options (e.g., access to location), which are commonly used in privacy-focused studies, and non-access options (e.g., two-factor authentication). Generally, information security settings are divided into two parts: privacy settings, which concern the data-sharing and confidentiality side of information security, and security settings, which concern the integrity and availability of data. A closer look reveals that prior studies have mainly examined privacy settings and their influencers (Almuhimedi et al., 2015; Crossler & Bélanger, 2019; Wagner & Mesbah, 2019), and that most studies, except for a handful, have assessed user intentions rather than actual decisions (Almuhimedi et al., 2015). Security settings, by contrast, have been largely overlooked. The prior literature’s exclusive focus on privacy settings can be attributed to the topic’s sensitive and sensational nature over the last decade, which garnered further attention after the Snowden revelations and the Facebook data leaks. However, both kinds of settings together form what is labeled information security, and optimal decisions on both are pillars of preventing security threats on smartphones.
Under the CIA framework, information security comprises three dimensions: confidentiality, integrity, and availability (Hui, Kim, & Wang, 2017; Parker, 2012). Accordingly, my focus was on the overall security decision, which included options representing both privacy and security settings. To measure the security level of the user’s decision, I scored each final option selected by the user based on a pre-defined security-level score: each high-level (low-level) security option was given a score of one (zero). For example, if “Login alert” is turned on, the decision counts as a high-level security decision; participants who turned on this option received a score of one, and otherwise a zero. Therefore, the total security level of the user’s decision ranged from zero to sixteen (as shown in Table 2.2). To measure subjective security knowledge, I adopted a standard scale from Alba and Hutchinson (2000), which has been used in prior information security research (Ament, 2017; Wang et al., 2016): after completing the objective security knowledge quiz, participants estimated how many questions they thought they had answered correctly. To operationalize high- vs. low-level security default options, I created two groups of default options. In the first group, labeled the high-level security default options, 50% of the options were preselected at the high security level; these options were randomly selected but were the same for all individuals in that group. In the second group, labeled the low-level security default options, none of the options were preselected at the high security level (Table 2.3 illustrates how participants in each group saw the settings when setting up their account).
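The scoring rule above can be sketched as a simple tally: one point per setting left in (or switched to) its high-security selection. The sketch below uses a subset of the settings from Table 2.2 for brevity (the study used all sixteen); it is an illustration of the rule, not the study’s actual code.

```python
# High-security selection for each setting (subset of Table 2.2; the full
# study used 16 settings). A decision matching this selection scores 1.
HIGH_SECURITY_SELECTION = {
    "Location": "Off",
    "Camera": "Off",
    "Update Software": "On",
    "Login Alert": "On",
    "Two-Factor Authentication": "On",
    "Cookies": "Off",
}

def security_level(decision: dict[str, str]) -> int:
    """Total security level of a user's decision: one point per setting
    whose final state matches the pre-defined high-security selection."""
    return sum(
        1
        for setting, high_value in HIGH_SECURITY_SELECTION.items()
        if decision.get(setting) == high_value
    )
```

For instance, a user who leaves Location and Camera off and enables Login Alert and Two-Factor Authentication, but allows software updates to stay off and cookies on, would score 4 on this six-setting subset.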
Table 2.1 Objective Security Knowledge Scale Development

Step 1 (Preliminary Item Generation): I created a multiple-choice questionnaire fitted to the target population, based on best-practice security recommendations by firms such as Symantec, Avast, Norton, and AVG, and on security guidelines set by the National Institute of Standards and Technology (NIST), the National Security Agency (NSA), and the Government of Canada. Thirty multiple-choice questions were initially developed.

Step 2 (Content Validity Assessment): The content of the items was subsequently reviewed with the help of a senior security analyst and a scholar in information security engineering research. The experts helped me evaluate the questions’ content validity, the correctness of the answer choices, and their relevance to the construct definition. Based on this, I removed, modified, and added several questions.

Step 3 (Construct Validity Assessment, by security experts): Utilizing an approach suggested by MacKenzie, Podsakoff, & Podsakoff (2011), initially introduced by Hinkin & Tracey (1999), I sent the scale (in a specific form) to 5 judges. With their answers, I evaluated the inter-rater agreement, which indicates the reliability of the items, as well as the content and construct validity (i.e., convergent and discriminant validity) of the technology- and best-practices-related questions.

Step 4 (Face Validity Assessment): Simultaneously with Step 3, I conducted eight cognitive interviews (also known as process tracing and verbal protocol analysis) with a group of participants representative of the study population, to assess the face validity of the items and to find problematic or ambiguous items for further analysis (Beatty & Willis, 2007; Kuusela & Pallab, 2000). Based on the analyses in Steps 3 and 4, the problematic questions were flagged; several questions were removed, and several others underwent major or minor edits in wording and structure.
Step 4 (Construct Validity Assessment, by non-experts): I conducted a card-sorting exercise among fellow Ph.D. students at the business school (n = 8). The hit ratio (i.e., correct item placements over total placements across all dimensions) for the twenty-four items ranged between 88% and 100%, above the generally accepted threshold of 80% (Cenfetelli, Benbasat, & Al-Natour, 2008; Moore & Benbasat, 1991).

Step 5 (Pilot Survey):
- Difficulty Index Check: The remaining twenty-four-item questionnaire was piloted via TurkPrime (n = 161). To ensure that the final scale was of medium difficulty, I calculated the item-difficulty index; item difficulty (p) is the percentage of individuals who answered an item correctly over the total responses (Adkins, 1960; Boateng et al., 2018; DeVellis, 2016).
- Item Reliability Assessment: For the reliability analysis of the scales, I used the split-half method (Aggarwal et al., 2015), in which each knowledge scale is divided into two parts and the correlation between the scores of the two parts is calculated. I kept questions that produced medium difficulty and split-half reliability above .70 (Bollen, 1989).
- Item Distractor Analysis: The main purpose of item-distractor efficiency analysis is to find non-functional distractors, i.e., options within the questions that have very little to no impact on the answers provided by respondents; the quality of a multiple-choice question is highly dependent on its distractors (Nunnally & Bernstein, 1994; Tversky, 1964). Non-functional distractors were flagged.
- Choice Optimization: A meta-analysis of prior research in this domain showed that, over the years, MCQs with three options per question have been considered the optimal format for knowledge assessment (Rodriguez, 2005); three-option MCQs mathematically have the highest discrimination and power (Tversky, 1964), increase test reliability (Grier, 1976), and maximize information processing by the test taker (Bruno & Dirkzwager, 1995). Based on prior recommendations and the distractor analysis, I removed one option (i.e., the biggest non-functional distractor) from each question.

Step 6 (Scale Finalization): I finalized the scale with 20 three-option MCQs.

Table 2.2 Security Level of User’s Decision

Setting | Description | High-Level Security Selection | Raw Score
Calendar | Allow app to access your calendar | Off | 1
Location | Allow app to access your location | Off | 1
Camera | Allow app to access your camera | Off | 1
Contacts | Allow app to access your contacts | Off | 1
Microphone | Allow app to access your microphone | Off | 1
Storage | Allow app to access your storage | Off | 1
Phone | Allow app to access your phone calls | Off | 1
SMS | Allow app to access your text messages | Off | 1
Update Software | Automatically update app software | On | 1
Update Plugins | Automatically update app plugins | On | 1
Download | Allow downloads from unknown sources | Off | 1
Login Alert | Alert when logging in from a new device | On | 1
Two-Factor Authentication | Require two-factor authentication every time logging in from a new device | On | 1
Pop-ups and Redirects | Allow pop-ups and redirects in the news and social network sections | Off | 1
Ads | Allow ads from third parties to be shown within the app | Off | 1
Cookies | Allow the app to save cookies from your search history | Off | 1
Total Security Level of User’s Decision | | | 16

Figure 2.2 App Setup Pages

Table 2.3 Default Security Level

Setting | Default Option: High-Level Security | Default Option: Low-Level Security
Calendar | On | On
Location | Off | Off
Camera | On | On
Contacts | Off | On
Microphone | On | On
Storage | Off | On
Phone | On | On
SMS | Off | On
Update Software | On | Off
Update Plugins | Off | Off
Download | On | On
Login Alert | Off | Off
Two-Factor Authentication | On | Off
Pop-ups and Redirects | Off | On
Ads | On | On
Cookies | Off | On

This operationalization of the default security level allowed me to assess the association between the default security level and the security level of the user’s decision in a controlled and meaningful way, and to emulate the existing approaches to setting up default options discussed earlier.

For subjective security knowledge, the following variables were controlled for: self-efficacy, age, IT experience, IT education, and gender (Aggarwal et al., 2015). For the security level of the user’s decision, in addition to the above, a number of other control variables were used: technology usage experience (in years), phone usage experience (in years), daily phone usage, perceived threat severity, perceived threat susceptibility, impulsivity, social norms, descriptive norms, security news exposure, and prior security violation (Aggarwal et al., 2015; Anderson & Agarwal, 2010; Wang et al., 2016).

2.4.2 Sampling

I recruited participants from Prolific.co (an online labor market). Sampling from Prolific allowed me to diversify the demographics of the sample (e.g., age and gender). A total of 100 subjects participated in the study. I rejected five submissions due to failure to complete the whole study or failure of the final quality checks. The final pool included 95 participants: 41 men (43%) and 54 women (57%). Regarding the device OS used in the study, 45 participants downloaded and installed the app on an iOS device from the App Store, and the remaining 50 downloaded and installed the app from the Google Play Store.

2.4.3 Study Procedure

The study took place over a span of three days. Based on two rounds of pilot studies, I did not disclose the study’s primary intentions to the participants until after the study concluded, to avoid possible priming effects from revealing the study’s focus (i.e., information security). For the same reason, in addition to the twenty-question objective security knowledge quiz, I added a ten-question quiz examining the participants’ knowledge of the app design process. The subjects were recruited on the premise that they were participating in an app-testing study in which they were to assess the functionality and design of a newly developed mobile app. They received flat compensation for committing to complete the study, and at the end of the study they were also encouraged to provide feedback on the app’s performance and design for improvements in future updates. The study was conducted online, remotely, and without any interference from the researchers. The procedure was as follows: after giving their online consent, participants were first shown a brochure on the value of early app testing from the usability, design, and security perspectives. Then, they were prompted to complete an online questionnaire where I captured their demographic data.
After completing this section, they were shown the name of the app, which they needed to download from their app store (i.e., Google Play Store or Apple App Store). They were instructed to create an account, complete the app setup, and return to the online questionnaire. Security decisions were recorded at this point, during the initial setup. Once participants had installed the app and set up their accounts, they returned to the online questionnaire and answered the knowledge quizzes (security and design). Over the next three days, I sent participants messages asking them to check and review different app settings and make any changes to their liking. Finally, on day three, they received a follow-up questionnaire through their Prolific account. In this post-study questionnaire, I measured the control variables and checked the validity of the results: I verified whether participants had received the review messages and assessed the authenticity of their accounts. A total of five participants either did not return and complete the post-study questionnaire or failed to prove that they had actually installed and used the software; they were removed prior to the analysis.

2.4.4 Descriptive Analysis

I began my analysis by screening the 95 responses. There was no missing data for the variables of interest. The distributions of the indicators of the latent factors and of all other variables (e.g., age, experience) were fairly normal in terms of skewness and kurtosis. The kurtosis values reached at most 3; while this violates strict rules of normality, it is within the more relaxed bounds suggested by Sposito et al. (1983), who recommend 3.3 as the upper threshold for normality. Furthermore, participants randomly received either the low-level security default options (45 participants) or the high-level security default options (50 participants) at the beginning of the study. Table 2.4 shows the descriptive statistics for the continuous constructs used in the study.
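The skewness/kurtosis screening described above can be sketched with SciPy; this is a minimal illustration, and the function name and the 3.3 cut-off handling are my own framing of the check attributed to Sposito et al. (1983).

```python
import numpy as np
from scipy.stats import kurtosis, skew

KURTOSIS_LIMIT = 3.3  # relaxed upper threshold for normality (Sposito et al., 1983)

def normality_screen(values) -> dict:
    """Return skewness and excess kurtosis for a variable, plus a flag
    indicating whether kurtosis falls within the relaxed limit."""
    x = np.asarray(values, dtype=float)
    k = float(kurtosis(x))  # Fisher definition: a normal distribution scores 0
    return {
        "skewness": float(skew(x)),
        "kurtosis": k,
        "within_limit": bool(abs(k) <= KURTOSIS_LIMIT),
    }
```

Applied to each indicator column, this reproduces the kind of screen reported in this section (e.g., a symmetric variable yields skewness of 0 and passes the kurtosis check).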
As part of the initial analysis, I also calculated the bivariate correlations, shown in Table 2.5. The descriptive statistics and bivariate correlations for all variables, including the control variables, are shown in Appendix E.

Table 2.4 Descriptive Statistics for Main Continuous Constructs

Construct | Min | Max | Mean | Std. Deviation | Skewness | Kurtosis
Objective Security Knowledge | 4 | 19 | 13.18 | 2.69 | -.49 | .42
Subjective Security Knowledge | 4 | 20 | 11.48 | 3.22 | .18 | -.26
Security Level of User’s Decision | 0 | 16 | 8.54 | 4.64 | -.71 | -.45

Table 2.5 Bivariate Correlations for Main Constructs

Construct | 1 | 2 | 3
1 Security Level of User’s Decision | | |
2 Objective Security Knowledge | .28** | |
3 Subjective Security Knowledge | .30** | .50** |
4 Default Security Level | .30** | .09 | -.05
Note. N = 95; *p < .05; **p < .01

2.4.5 Measurement Model

As the study’s main construct was measured objectively, I did not foresee an issue with common method bias. However, I followed the measurement model procedure for the latent control variables. To examine the construct validity (convergent and divergent) of the latent control constructs, I conducted an exploratory factor analysis (EFA). EFA is a well-established and common method that determines which items are closely correlated with each other and thus likely represent the same underlying constructs (Fabrigar, Wegener, MacCallum, & Strahan, 1999). Several statistical tests and graphical representations were used to assess the EFA results. After removing one item that correlated with multiple underlying constructs, the remaining model was supported in terms of validity and reliability. Following the EFA, I conducted a confirmatory factor analysis (CFA) to confirm the structure of the latent control constructs; CFA is conducted after EFA to finalize the items used in the structural model (Brown, 2015). Ultimately, one item with a loading below .6 was removed, and the rest were kept.
In the next step, I assessed the data for the presence of multicollinearity by calculating the variance inflation factor (VIF) for each of the independent variables. Among the more conservative thresholds, O'Brien (2007) states that a VIF below 3 is not problematic. Most VIF values were below 2, with the highest (i.e., age) being slightly above 2. Accordingly, I found no multicollinearity issue among the independent variables (O'Brien, 2007). In the last test before setting up the structural model, I calculated Cook's D to check for influential outliers warranting further investigation. Using the 4/n cutoff, with n being the number of subjects in the data (Cook, 1977), I found no issue with influential outliers. The results can be seen in Appendices F and G.

Items                                                                            Cronbach's Alpha   EFA Loading   CFA Loading
Perceived Threat Severity (Johnston & Warkentin, 2010)
  PTS1  If my smartphone were infected by malicious software, it would be severe.       .94           .89           .86
  PTS2  If my smartphone were infected by malicious software, it would be serious.                    .94           .95
  PTS3  If my smartphone were infected by malicious software, it would be significant.                .93           .95
Perceived Threat Vulnerability (Johnston & Warkentin, 2010)
  PTV1  My smartphone is at risk of security threats.                                   .84           .82           .82
  PTV2  It is likely that my smartphone will be hit by a security threat.                             .86           .80
  PTV3  It is possible that my smartphone will be hit by a security threat.                           .83           .78
Impulsivity (Pogarsky, 2004; Vance, Lowry, & Eggett, 2015)
  IMP1  I act on impulse.                                                               .87           .91           .94
  IMP2  I often do things on the spur of the moment.                                                  .92           .98
  IMP3  I always consider the consequences before I take action.                                      .80           .67
  IMP4  I rarely make hasty decisions.                                                                .717          <.6 (discarded)
Subjective Norms (Anderson & Agarwal, 2010; Taylor & Todd, 1995)
  SN1  Friends who influence my behavior would think that I should take measures to secure my smartphone.
                                                                                        .89           .82           .73
  SN2  Significant others who are important to me would think that I should take measures to secure my smartphone.  .93  .91
  SN3  My peers would think that I should take security measures on my smartphone to help secure the Internet.  .91  .93
Descriptive Norms (Anderson & Agarwal, 2010; Taylor & Todd, 1995)
  DN1  I believe other people implement security measures on their smartphones.         .79           .81           .67
  DN2  I believe the majority of other people take security measures on their smartphones to help protect the Internet.  Cross-loading with DN, SN, and SE; discarded
  DN3  I am convinced other people take security measures on their smartphones.                       .86           .85
  DN4  It is likely that the majority of smartphone users take security measures to protect themselves from an attack by hackers.  .80  .72
Security Behavior Self-Efficacy (Anderson & Agarwal, 2010; Taylor & Todd, 1995)
  SBE1  I feel comfortable taking measures to secure my smartphone.                     .89           .86           .78
  SBE2  I feel comfortable taking security measures to limit the threat to other people and the Internet in general.  .79  .71
  SBE3  Taking the necessary security measures is entirely under my control.                          .82           .79
  SBE4  I have the resources and the knowledge to take the necessary security measures.               .83           .85
  SBE5  Taking the necessary security measures is easy.                                               .84           .84
Notes: EFA: Kaiser-Meyer-Olkin (KMO) Measure of Sampling Adequacy = .684; Extraction Method: Principal Component Analysis; Rotation Method: Varimax with Kaiser Normalization; rotation converged in 6 iterations.
Table 2.6 Latent Control Variables CFA/EFA

2.4.6 Structural Model Results

To examine the model, I used structural equation modeling (SEM) in AMOS (Byrne & Stewart, 2006). I constructed the structural model in two stages: with no control variables and with the inclusion of control variables. In the analysis, I first examined the model fit indices.
Once these passed according to the existing thresholds (Hair, Black, Babin, & Anderson, 2014; Hooper, Coughlan, & Mullen, 2008; Kline, 2015), I examined the squared multiple correlations (analogous to R² in OLS regression) to ensure that the variance explained by the study's model is sufficient for a meaningful contribution to the field. Finally, I examined the estimates pertinent to the hypotheses. For the moderation analysis, I followed the guidelines set by Byrne and Stewart (2006) for conducting moderation testing in AMOS. For the mediation analysis, I used the bias-corrected bootstrap approach with a 90% confidence interval (Hayes, 2009), using 2,000 samples. The complete model (with control variables included) explains 35% of the variance in subjective security knowledge and 36% of the variance in the security level of the user's decision.

Construct                            Squared Multiple Correlation (No Controls)   Squared Multiple Correlation (With Controls)
Subjective Security Knowledge                        .25                                          .35
Security Level of User's Decision                    .22                                          .36
Table 2.7 Squared Multiple Correlation

Objective security knowledge was positively associated with subjective security knowledge (standardized β = .43, p < .01), supporting H1. Default security level had a positive association with the security level of the user's decision: participants given the high-level security default options ultimately selected an overall higher level of security options (standardized β = .31, p < .01), supporting H2. Objective security knowledge had a non-significant association with the security level of the user's decision (standardized β = .14, p = .20), despite having a significant positive bivariate correlation with it (r = .28); thus, H3 was not supported. This is not entirely surprising, as I proposed that subjective security knowledge mediates the positive relationship between objective security knowledge and the security level of the user's decision.
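The bootstrap mediation test described above can be sketched with numpy on a synthetic x → m → y chain. This is a simplification: it computes a percentile bootstrap interval for the a*b indirect effect, omitting Hayes's bias-corrected adjustment, and the path values and data are illustrative, not the study's.

```python
import numpy as np

def indirect_effect(x, m, y):
    """a*b indirect effect: a from OLS of m on x, b from OLS of y on m and x."""
    a = np.polyfit(x, m, 1)[0]  # slope of mediator on predictor
    design = np.column_stack([np.ones_like(x), m, x])
    b = np.linalg.lstsq(design, y, rcond=None)[0][1]  # slope of y on m, controlling x
    return a * b

def bootstrap_ci(x, m, y, n_boot=2000, alpha=0.10, seed=1):
    """Percentile bootstrap CI (bias correction omitted for brevity)."""
    rng = np.random.default_rng(seed)
    n = len(x)
    boots = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)  # resample cases with replacement
        boots.append(indirect_effect(x[idx], m[idx], y[idx]))
    return np.quantile(boots, [alpha / 2, 1 - alpha / 2])

# Synthetic chain with positive a and b paths; n matches the study's 95
rng = np.random.default_rng(7)
x = rng.normal(size=95)
m = 0.7 * x + rng.normal(size=95)
y = 0.7 * m + rng.normal(size=95)
lo, hi = bootstrap_ci(x, m, y)
# A 90% interval that excludes zero indicates a significant indirect effect
```

In the study this test was run in AMOS rather than by hand; the sketch only illustrates the resampling logic behind the reported indirect effect.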
Subjective security knowledge was positively associated with the security level of the user's decision (standardized β = .29, p < .01), supporting H4. Subjective security knowledge mediated the relationship between objective security knowledge and the security level of the user's decision, in support of H5: the indirect effect of objective security knowledge through subjective security knowledge on the security level of the decision was significant (standardized β = .22, p < .05). Subjective security knowledge also moderated the influence of the default security level on the security level of the user's decision (standardized β = -.21, p < .05), supporting H6.

Figure 2.3 SEM Results. [Path diagram: Objective Security Knowledge → Subjective Security Knowledge (H1 = .43**); Subjective Security Knowledge → Security Level of User's Decision (H4 = .29**, main effect); Objective Security Knowledge → Security Level of User's Decision (H3 = .14, n.s.); Default Security Level → Security Level of User's Decision (H2 = .31**); mediation through subjective security knowledge (H5 = .22**); moderation of the default effect by subjective security knowledge (H6 = -.21**).]
Notes:
• * p<.05, ** p<.01; MSC = Multiple Squared Correlation (~ R²)
• Control variables on subjective security knowledge: self-efficacy, age, IT experience, IT education, gender
• Control variables on the security level of the user's decision: technology usage experience (in years), phone usage experience (in years), daily phone usage, perceived threat severity, perceived threat susceptibility, impulsivity, social norms, descriptive norms, security news exposure, prior security violation, self-efficacy, age, IT experience, IT education, gender

Dependent Variable | Independent Variable | With Controls: Unstd. β  Std. β  p | Without Controls: Unstd.
β  Std. β  p

Dependent variable: Subjective Security Knowledge
  Objective Security Knowledge        .51    .43   <.01   |   .60    .50   <.01
  Self-Efficacy                       .41    .14    .11
  Gender                            -1.41   -.22   <.05
  Age                                -.26   -.09    .31
  IT Education                       -.25   -.08    .46
  IT Experience                      -.23   -.07    .51

Dependent variable: Security Level of User's Decision
  Subjective Security Knowledge       .43    .29   <.01   |   .36    .25   <.05
  Objective Security Knowledge        .24    .14    .20   |   .23    .13    .21
  Default Security Level             3.25    .35   <.01   |  2.80    .30   <.01
  Descriptive Norms                   .03    .01    .918
  Social Norms                       -.65   -.21   <.05
  Self-Efficacy                      -.83   -.19    .06
  Perceived Threat Vulnerability     -.30   -.09    .37
  Perceived Threat Susceptibility     .40    .13    .20
  Impulsivity                         .07    .02    .82
  Age                                -.43   -.10    .41
  Gender                              .65    .07    .46
  IT Education                        .14    .03    .79
  IT Experience                      -.84   -.17    .12
  Operating System                    .38    .04    .66
  Phone Daily Usage                  -.07   -.06    .51
  Phone Year Usage                   -.06   -.04    .66
  Technology Year Usage               .12    .19    .14
  Prior Security Violation            .00    .00    .96
  Security News Exposure             -.18   -.06    .59

Model fit indices (with controls | without controls):
  χ²/DOF             .86   |  1.31
  CFI               1.00   |   .99
  RMSEA              .00   |   .04
  PCLOSE             .76   |   .342
  Standardized RMR   .01   |   .03
Table 2.8 SEM Full Results

2.5 Discussion of Results

First, I observed that higher objective security knowledge gave rise to higher subjective security knowledge. Second, the high-level security default option led to an increase in the security level of the user's decision. In this study, I randomly placed individuals into two groups: a low-level security default option group and a high-level security default option group. Based on the unstandardized coefficients, the simple procedure of increasing the default options' security level by 50% (8 out of 16 options) led participants to select three additional options at the high security level. Accordingly, the default security level also had the strongest influence on the security level of the user's decision (standardized β = .35).
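A quick check of the arithmetic behind the "three additional options" observation, assuming the default-level manipulation was dummy-coded 0/1 (an assumption implied by the two-group design, not stated explicitly in the text):

```python
# Unstandardized coefficient for default security level (Table 2.8, model
# with controls); the outcome is the count of high-security options selected.
unstd_beta_default = 3.25

# Predicted shift when moving from the low-level default group (coded 0)
# to the high-level default group (coded 1):
predicted_shift = unstd_beta_default * (1 - 0)
print(round(predicted_shift))  # prints 3
```

That is, the model predicts roughly three additional high-security options out of the sixteen available, matching the discussion above.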
Second, the data did not support H3, in which I predicted that an increase in objective security knowledge would lead to an increase in the security level of users' decisions. This can be explained by the mediation effect of subjective security knowledge (H5). Simply put, the effect of objective security knowledge (what users actually know) is transmitted through subjective security knowledge (what they think they know). This is in line with findings from the finance literature and with Bandura's statement on the role of self-referent thought, in which he noted that having adequate knowledge is not enough to lead to proper action. Rather, one must also be equipped with the perception of that knowledge, in other words, high subjective knowledge (Bandura, 1997). As I continued the analysis, the central role of subjective security knowledge became clearer. First, I saw that higher subjective security knowledge leads to a higher security level of the user's decision (H4). Furthermore, in addition to mediating the influence of objective knowledge on the security level of the user's decision, subjective security knowledge also reduces the influence of the default security level. Simply put, participants with higher degrees of subjective security knowledge were less willing to use the default options as the main anchor in their decisions.

2.6 Theoretical Implications

The results of the study make important theoretical contributions. First, the default security level is influential in the security decision making of personal users. While studies have speculated about the influence of default options in information security (Acquisti et al., 2017; Dinev et al., 2015; Tsohou et al., 2015), this study provides empirical evidence of this effect. The findings advance our understanding of the role of the default security level in the security level of the user's actual decision, within the context of an actual mobile app developed and designed for the study.
Furthermore, the study showed that a form of knowledge (i.e., subjective security knowledge) mitigates the influence of default options, thus acting as a debiasing mechanism. As biases (i.e., errors in judgment) can often lead to negative outcomes for decision makers, the judgment and decision-making literature has been equally fascinated with techniques to combat these biases (Arkes, 1991; Larrick, 2004). In this study, subjective security knowledge dampened the positive relationship between the default security level and the security level of the user's decision. These findings contribute not only to the information security literature but also to the broader IS literature.

In the past decade, IS scholars have examined status quo bias from multiple angles and in various domains, focusing heavily on the causes of this phenomenon. However, there has been less attention to possible debiasing mechanisms. Grounded in the theory of bounded rationality, by showing that subjective security knowledge reduces the impact of the status quo, I contribute to the literature by examining debiasing mechanisms in information security decision making, which may also prove effective in other IS decision-making contexts.

The impact of subjective security knowledge is not limited to its role in mitigating status quo bias. Subjective security knowledge plays a central role in the process of security decision making of personal users. Not only is the influence of objective security knowledge transmitted via subjective security knowledge, but subjective security knowledge is also positively associated with better decisions. Subjective security knowledge mediates the relationship between objective security knowledge and security decisions. Thus, even if people are objectively knowledgeable, they will not act on that knowledge without high subjective knowledge.
Thus, subjective security knowledge acts as a bridge between objective security knowledge and decisions, propelling users to apply their knowledge when deciding whether or not to take action. I see this as an important takeaway for the practice of improving users' performance, where the focus in many cases is on presenting users with nudges that can increase their knowledge (Acquisti et al., 2017).

2.7 Practical Implications

According to industry reports, various companies and institutions, such as universities, e-retailers, and social media services, that offer services to personal users constantly face the pressure of security breaches. eBay, LinkedIn, Yahoo, Facebook, and Target have all experienced massive security breaches in recent years (Ahmed, 2019). Universities are also facing a rise in security threats (Muncaster, 2020). There are various forms of security threats, and each requires a specific countermeasure strategy. For instance, threats can arise from weak information security technology infrastructure, lackluster oversight, the actions of a malicious employee, or the poor security decisions of users. In this study, I focused on the security decisions of personal users. The security decision making of users can either exacerbate or alleviate existing security threats targeting them. For example, weak passwords were one of the factors that worsened the 2016 LinkedIn password security breach, which led to the sale of 117 million records on the dark web (Pagliery, 2016). Pertinent to combating security threats related to poor security decision making, I argue the findings have several practical implications for designers, educators, and organizations that provide services to personal users.

First, default options are one of the best tools for designers and organizations to push personal users towards better decisions.
As shown, the default security level had a more significant influence on actual security decision making than any of the knowledge constructs and control variables. The tendency to anchor security decisions on the default options is a great opportunity for designers to help with users' online security. Many popular websites, such as Facebook, Instagram, or even Google, do not present their settings at a high security level by default. For example, when creating a new account at the time of this study, these websites mostly had the options turned off, leaving it up to users to select their settings. However, there is a potential issue: many personal users do not change their settings. In the study, I saw that roughly 20% of respondents did not change their default options, regardless of whether the default options were at high- or low-level security.

Changed at Least One Option    Low-Level Security Default    High-Level Security Default
Yes                            35 (77.8%)                    39 (78.0%)
No                             10 (22.2%)                    11 (22.0%)
Table 2.9 Changes to Default Options

There is also anecdotal evidence of users' lack of engagement upon registration. One report showed that nearly 13 million Facebook users never change their privacy settings (Protalinski, 2012). Regardless of what causes this status quo bias, the phenomenon is an important feature of personal users' security decision making. As shown in Table 2.9, the proportions of individuals in the high- and low-level security groups who did not change their default options and simply used them were nearly identical, hinting that a group of personal users may not be able to distinguish between a high- and low-level security default option, or simply do not care to change it.

Consequently, default options can act as a double-edged sword. From one perspective, they can lead to decisions with a lower security level and present a great opportunity for hackers or malicious developers to steal customers' data.
For example, on smartphones, blindly granting permissions to an app can cause data loss and invasion of privacy. In web accounts, a lack of proper setup of security settings can make an account inaccessible in the case of an unauthorized intrusion. From another perspective, this is an opportunity for designers and developers to assist novice users with their security decisions. For instance, by turning on "Login alert from a new device" by default, it is more likely that a personal user will keep this option on, thus enhancing their personal information security.

The findings can also be useful for public educators. In recent years, there has been a coordinated effort by various governments to increase public knowledge of information security. I gave the example of the governments of Canada and the United States, which offer educational websites and brochures with best-practice security guidelines. Currently, sources such as these focus solely on objective security knowledge. As shown in the results, increasing the objective security knowledge of personal users does not necessarily lead to the best results unless one also has high subjective security knowledge. Accordingly, I believe that raising users' subjective security knowledge - so long as it is based on their actual objective security knowledge - should be a priority in feedback and online learning tools. For example, an online security quiz should ask for the user's perception of their performance and, after quiz completion, provide feedback on both their subjective security knowledge and their objective security knowledge. This way, the user will understand the bounds and limits of her knowledge. Ultimately, security decisions can improve with an increase in subjective security knowledge that is supported by adequate objective security knowledge.
Furthermore, these educational websites can also include information about the benefits and dangers of default options in many security decisions. Overall, the study highlights that factors other than objective security knowledge impact personal users' security decision making. Thus, including information about those factors (i.e., subjective security knowledge, default options) in educational websites can be valuable for the users who visit them.

Chapter 3: Exploratory Study of the Role of Heuristics in Information Security Decision Making

3.1 Introduction

In Chapter 2, I assessed the impact of objective security knowledge, subjective security knowledge, and the default security level on personal users' security decision making. In this chapter, I expand my search for potential influencers of information security decision making by examining the role of short mental processes, also known as heuristics (Tversky & Kahneman, 1974). According to the theory of bounded rationality, people have limitations in their cognitive abilities. Simply put, how people actually think differs from how they ought to think from a normative perspective. Under this theory, individuals use processes that differ from those prescribed by normative decision models and do not consider all the necessary information when making their decisions. For example, consider the decision of detecting a phishing email. Several criteria can be examined to assess whether an email is malicious: the address it came from, grammatical errors, the urgency of the email, its title, and its signature are a few of these criteria. As defined under normative theories, a rational decision maker will look at all of these factors before making a decision. However, according to the theory of bounded rationality, people often do not look at all the necessary factors and most likely decide after assessing only a few (and perhaps only one) of them.
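The contrast between the normative decision maker and the boundedly rational one can be illustrated with a toy sketch. All cue names, the threshold, and the choice of "salient cue" below are hypothetical, purely for illustration of the two decision styles:

```python
from dataclasses import dataclass

@dataclass
class Email:
    sender_suspicious: bool
    has_grammar_errors: bool
    urgent_tone: bool
    odd_title: bool
    missing_signature: bool

def normative_decision(e: Email, threshold: int = 2) -> bool:
    """Normative model: weigh every available cue before deciding."""
    cues = [e.sender_suspicious, e.has_grammar_errors, e.urgent_tone,
            e.odd_title, e.missing_signature]
    return sum(cues) >= threshold          # flag as phishing

def heuristic_decision(e: Email) -> bool:
    """Boundedly rational model: decide on a single salient cue."""
    return e.urgent_tone                   # one factor, minimal effort

email = Email(sender_suspicious=True, has_grammar_errors=True,
              urgent_tone=False, odd_title=False, missing_signature=True)
print(normative_decision(email), heuristic_decision(email))  # prints: True False
```

The same email is flagged by the full-information model but missed by the one-cue heuristic, which is precisely the kind of divergence between normative and actual decision making that this chapter explores.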
Simon described such processes of judgment and decision making as heuristics. "Heuristics are methods for arriving at satisfactory solutions with modest amounts of computation," explains Simon (1990, pg. 11). Accordingly, by using heuristics, people aim "to reduce the effort associated with decision processes" (Shah & Oppenheimer, 2008, pg. 207).

Heuristics are not inherently problematic. Rather, they act as a double-edged sword. In many instances, they can help individuals reach a correct decision with less effort than a normative decision model would dictate. For instance, many experts reach fast and correct decisions by using heuristics; these heuristics work because they are based on years of experience (Kahneman, 2003). However, in many other situations, they can lead individuals to erroneous judgments and decisions (i.e., biases). Not surprisingly, the assessment of heuristics and the resulting biases has been quite popular in many domains, including psychology, judgment and decision making, and marketing, and has accumulated a myriad of works since its introduction in 1972 (Kahneman, Slovic, & Tversky, 1982; Shah & Oppenheimer, 2008; Tversky & Kahneman, 1974). By some accounts, more than 70 heuristics have been examined across different fields in prior research (Shah & Oppenheimer, 2008).

However, despite advances in the literature in various other fields, few studies have investigated the role of heuristics in the information security literature. Prior heuristics-related studies include research commentaries, reviews, and calls for research, which have suggested that heuristics may influence security decision making (Acquisti et al., 2017; Rosoff, Cui, & John, 2013).
Based on the existing lack of research on this topic, the potential role of heuristics in decision making, and my thesis focus on assessing influential factors in the decision making of personal users, I argue that studying heuristics in information security can be valuable. Specifically, the lack of extensive work or theoretical discussion in the information security literature creates an opportunity for an exploratory study into the role of heuristics and their resulting biases in information security decision making. I believe an exploratory study can provide two main values:

i. Providing a holistic assessment of the role of heuristics in information security decision making. According to the theory of bounded rationality, people may use multiple heuristics when making decisions. For example, when purchasing new security software, a person may use her positive emotion towards a product (i.e., the affect heuristic) combined with a recommendation from another person (i.e., the anchoring heuristic) to make a decision. An exploratory study can help us understand which heuristics people use and whether multiple heuristics are utilized in their security decision making. In other words, are some heuristics more prevalent than others? With only a few studies commenting on heuristics in the information security literature, an exploratory study can provide much-needed insight.

ii. Exploring underlying heuristic sub-themes specific to the information security domain. While there are generally accepted heuristics that can apply across domains, scholars have shown over the years that some heuristics are more common in some fields than in others (Shah & Oppenheimer, 2008, pg. 209). There is an inherent difference between IS and other domains, which may lead to the common usage of certain heuristics. People do not make security decisions in a vacuum. Rather, they use technology to make such decisions.
Therefore, in IS there is a human-IT interaction that does not necessarily exist in other domains (Burton-Jones & Grange, 2012). Furthermore, the nature and effect of this interaction can influence how people process information and make security decisions. Therefore, understanding the underlying heuristic sub-themes (i.e., the specific details of heuristic usage) can further enhance our understanding. For example, if users rely on the availability heuristic (i.e., using available information), what type of information is used? In what security decisions is this heuristic commonly used? Answering questions such as these can provide much-needed, empirically based contextual information that has generally been absent from prior discussions of this topic in information security.

Consequently, the findings can shed light on which heuristics are more influential in this context and lay the basis for the study of nudging techniques in Chapter 4 of the thesis. Based on this motivation, in this study I draw upon the heuristics framework first described by Newell and Simon (1972) and Tversky and Kahneman (1974), and later developed by Shah and Oppenheimer (2008). The framework, which is itself rooted in the theory of bounded rationality (Simon, 1955, 1959, 1972, 2000), is used to address the existing challenges by exploring the role of heuristics in information security. Accordingly, I intend to answer the following research questions in this chapter:

RQ1. Do heuristics comprise an essential part of security decision making?
RQ2. What heuristics are commonly used in the process of security decision making?

To answer these research questions, I utilized Framework Analysis (Gale, Heath, Cameron, Rashid, & Redwood, 2013; Ritchie, Lewis, Nicholls, & Ormston, 2013) with data collected from semi-structured interviews (Schmidt, 2004). After two rounds of data collection (n=27), the results showed that many users see security decisions as an afterthought.
Furthermore, reducing cognitive effort is the main goal of many of the participants in their decision making, and they use a variety of heuristics to do so (RQ1). Most commonly, they shorten and simplify their decision process by relying on others' expertise (i.e., the expertise heuristic) and using only readily available information (i.e., the availability heuristic) to make decisions. The analysis also showed the use of the representativeness, affect, and brand heuristics to a lesser degree (RQ2). Task type appeared as a moderator of the type of heuristics used. Furthermore, exposure to security news and prior security breach experience were associated with less heuristic utilization in information security.

The rest of the chapter is structured as follows: I begin by discussing the theoretical background developed in other domains. I then discuss the methodology, where I present the study design and findings. Finally, I discuss the implications of the study.

3.2 Heuristics in the Information Security Literature

Heuristics and bounded rationality were first mentioned in the privacy literature nearly 20 years ago (Acquisti, 2004; Acquisti & Grossklags, 2005). As Acquisti (2004) pointed out, normative models do not properly explain privacy decisions, and "it is unrealistic to expect individual rationality in this context," as users mostly "resort to simple heuristics." However, the nature and specific role of heuristics have never been thoroughly investigated in information security. Over the years, several studies have sporadically delved into the role of bounded rationality and heuristics (Goel, Williams, & Dincelli, 2017; Khern-am-nuai, Yang, & Li, 2017; Malkin, Mathur, Harbach, & Egelman, 2017; Peer et al., 2020; Van Bavel, Rodríguez-Priego, Vila, & Briggs, 2019; Van Schaik, Renaud, Wilson, Jansen, & Onibokun, 2020).
Pötzsch (2008) found that despite awareness of privacy issues, users utilize simple decision models due to their cognitive limitations. In information disclosure, Sundar et al. (2013) found that users do not use all the available information when disclosing information and instead apply heuristic thinking, a finding that was later repeated in an exploratory study (Gambino, Kim, Sundar, Ge, & Rosson, 2016). While valuable, these initial studies focused on privacy, mostly in the information disclosure domain, and never delved into the theoretical explanations behind such observations. In the following years, several studies called for an investigation of heuristics and biases in information security decision making (Dinev et al., 2015). Tsohou et al. (2015) provided a review of and commentary on the role of heuristics and biases in information security. In that paper, certain heuristics were assumed to be influential in information security only because they were important in other domains. While such commentary is valuable, no empirical evidence of whether users actually use these heuristics in security decision making was presented. In another literature review, Acquisti et al. (2017) discuss potential biases resulting from heuristics in information security.

Reviewing the heuristics literature in information security points out two things. First, prior research suggests that heuristics appear to play a role in information security. This is based on bounded rationality and the observation that users' behavior does not follow normative decision models (Acquisti, 2004). However, such a proposition has never been supported by empirical evidence in information security. Additionally, even if we assume that heuristics play an essential role in security decision making, the types of heuristics used in the domain have so far only been assumed, mainly based on findings from other literatures (Tsohou et al., 2015).
Accordingly, to answer these two questions (i.e., are heuristics influential in security decision making and, if so, which types are more prevalent), I set out to conduct an exploratory study.

3.3 Theoretical Background

In this section, I begin by describing the nature of heuristics. I then elaborate on why people use heuristics (according to the theory of bounded rationality) and present a detailed explanation of some of the well-known heuristics.

3.3.1 Why Do People Use Heuristics?

With the increased quantity of information, individuals need to process copious amounts of data to make decisions. Simon first introduced the theory of bounded rationality, in which he argued that people deviate from the normative model because their rationality is limited (Simon, 1972):

"Bounded rationality is simply the idea that the choices people make are determined not only by some consistent overall goal and the properties of the external world, but also by the knowledge that decision-makers do and don't have of the world, their ability or inability to evoke that knowledge when it is relevant, to work out the consequences of their actions, to conjure up possible courses of action, to cope with uncertainty (including uncertainty deriving from the possible responses of other actors), and to adjudicate among their many competing wants." (Simon, 2000, pg. 25)

Later, he introduced the concept of heuristics as information processing methods that reduce cognitive effort (Simon and Newell, 1972). According to the theory, the main reason people use heuristics is to reduce the complexity of information processing. "Heuristic methods that make this selectivity [of information search] possible have turned out to be the central magic in all human problem solving that has been studied to date," explain Simon and Newell (1972, pg. 147).
Under the theory of bounded rationality, individuals often do not process complete information to reach a decision. Rather, they use various heuristics to reduce their mental effort.

3.3.2 How People Use Heuristics

Shah & Oppenheimer (2008) present an extensive review of the prior literature on heuristics. Perhaps the most well-known and influential are the heuristics first discussed by Tversky & Kahneman (1974): availability (i.e., making decisions based on information that is salient or recent), representativeness (i.e., judging an action, option, or item by the degree to which it resembles another action, option, or item), and anchoring (i.e., making a decision based on an available reference point). Another heuristic introduced later is affect (i.e., making decisions based on positive or negative emotions) (Kahneman, 2003; Slovic, Finucane, Peters, & MacGregor, 2002). While I will utilize the comprehensive list of heuristics provided by Shah & Oppenheimer (2008) to understand which heuristics are most commonly used in information security, I discuss the definitions of these four heuristics and their potential role in security decision making in more detail below.

Availability Heuristic: The availability heuristic is used when people judge an option, item, or event by the ease with which related options, items, or events can be brought to mind (Tversky & Kahneman, 1974). Tversky & Kahneman (1974) give an example from risk assessment: when people are asked, "What are the chances that you might have a heart attack in the future?", they assess that risk by recalling heart attacks they have previously witnessed in their family or are aware of among their relatives. Most participants did not consider other significant determinants (e.g., eating habits and exercise routines). Potentially, this process of thinking can be extended to the information security context as well.
For example, this heuristic can be influential in security risk assessment: what are the risks associated with creating a weak/strong password? What is the risk of visiting a potentially malicious website? I argue that when individuals assess these risks, they may rely on their past experiences rather than following best security practices. This study can provide an answer as to if and how users utilize the availability heuristic.

Representativeness Heuristic: When people use the representativeness heuristic, they judge an action, option, or item by the degree to which it resembles another action, option, or item; hence, they disregard other factors that might be relevant to their decision (Tversky & Kahneman, 1974). For example, Tversky & Kahneman (1974) describe an experiment in which participants were given a description of an individual and asked to determine whether that person was a lawyer or an engineer. There were two experimental conditions: in the first, participants were told that the individual belonged to a population of 30 lawyers and 70 engineers; in the second, that the population consisted of 70 lawyers and 30 engineers. Normatively, the majority in the first condition should predict the person to be an engineer, and the majority in the second condition a lawyer. Instead, participants disregarded this demographic base-rate information and judged the person to be a lawyer or an engineer based on how closely the description resembled a stereotype of each profession. This heuristic can potentially be salient in the context of information security in detection measures, more specifically, in identifying malicious content on the web and in emails.
Two good examples of this are spear-phishing emails (i.e., phishing emails in which the attacker sends a malicious email from a seemingly trusted email address) and spoof websites (i.e., malicious websites that are identical to a real, popular website except for one small difference in their URL). An individual may use a website's design as a representation of its legitimacy and disregard other factors, thus falling victim to identity theft.

Anchoring Heuristic: Tversky & Kahneman (1974) discuss that people tend to anchor their decisions on an initial estimate; that initial value can either be given to them or be calculated by the individuals themselves. Such a heuristic can be present in password creation. Different websites have different password requirements; some only have minimum character lengths, while others have composition rules (e.g., the password must be a mixture of lowercase letters, capital letters, and special characters). Individuals can potentially anchor their decisions on the information displayed on the password creation page; more precisely, they assess how good their password is based on the requirements on that page. Consider two individuals: one is creating a password for a website with no requirements or recommendations, the other for a website with a minimum length requirement. A six-character password may be considered very strong by the first person, while it is deemed mediocre by the second, who saw the password requirements.

Affect Heuristic: Slovic et al. (2002) discuss that the affect heuristic leads people to consult their feelings and conduct an affective (i.e., positive or negative) evaluation of the information at hand in any judgment.
When presented with a stimulus, individuals map its information to the affect pool in their mind, where a positive or negative feeling is created during judgment, which ultimately impacts their decision (Slovic et al., 2002). One of the areas in which the affect heuristic has been studied is risk/benefit judgments. Alhakami and Slovic (1994) conducted a study with 100 participants, assessing their perceived benefits and risks of several activities and technologies. They showed that individuals judge an activity or technology based on both "what they think about it" and "how they feel about it": if they like that activity/technology, they attribute low risk and high benefit to it, and if they dislike it, they attribute high risk and low benefit (Slovic et al., 2002). Investigating this heuristic is particularly interesting in information security because of the role that security technologies play in security decision making. One empirical question concerns people's feelings about various security technologies: do people feel positive or negative about antiviruses, decision aids, and VPNs? Does their feeling, as suggested by Alhakami and Slovic (1994), sway their assessment of the risk/benefit of using security technologies in any way? If so, then the affect heuristic can be salient in information security and mediate people's assessment of the risks/benefits of security technologies and their ultimate adoption. With this theoretical background and potential relevance in mind, I began the exploratory study to answer the research questions.

3.4 Methodology

In this study, I aim to understand users' decision making by specifically investigating whether heuristics are an important part of their decision process (RQ1) and which heuristics are more commonly used in information security decision making (RQ2).
Based on the motivation of the study, I used Framework Analysis (FA) to analyze the data (Ritchie, Spencer, Bryman, & Burgess, 1994; Ritchie, Spencer, & O'Connor, 2003). Framework Analysis, which falls under the family of thematic methods, allows for a flexible, structured, and transparent approach to data analysis (Gale et al., 2013; Ritchie et al., 2013). The method is specifically useful when analyzing data according to an a priori framework and is most suitable for systematic modeling and mapping of data (Gale et al., 2013). Using the framework method allows for easy comparison between cases, since every case is coded according to the same codebook. The objective of the study is to examine what heuristics are used in the process of security decision making; accordingly, a framework is already in place (i.e., a list of heuristics) that can be used to analyze the data.

3.4.1 Study Design

To answer the research questions, I chose interviews as the main approach to data collection. Within each interview, I used process tracing via think-aloud techniques applied to actual and hypothetical security decisions (Ericsson & Simon, 1998) and semi-structured questioning (Schmidt, 2004), as these are among the main approaches for exploring the heuristics used by individuals (Shah & Oppenheimer, 2008, pg. 218). Each of these approaches offers strengths while possessing some weaknesses. I chose this three-faceted approach to data collection to utilize their strengths to the fullest and to achieve triangulation (Yin, 2015), hence increasing the reliability of the results and adhering to the standards of rigor in qualitative studies (Lincoln, 1985). A number of questions were designed to elicit decision-making processes using the think-aloud technique. As the name suggests, thinking aloud allows users to express their thoughts on topics and questions out loud with no back-and-forth with the interviewer.
This technique provides benefits over semi-structured interviews. First, since the interviewer does not interrupt the participants, any priming effect from the interview is lower than in other interview methods. Additionally, the think-aloud technique allows an individual's inner speech to surface and enables the researcher to trace their cognitive processes (Ericsson & Simon, 1998; Ji-Ye Mao, 2000). For this part, I inquired about both actual previous decisions and hypothetical scenarios. In the former, I aimed to explore users' past security decision making and their actual thought processes. The major upside of asking about actual prior security decisions is that it helps identify decisions with adequate complexity and importance to the individual. This method's main limitation is that participants may not recall past decision processes fully and accurately (Krabuanrat & Phelps, 1998). In the latter, I asked participants to give their decision process in several hypothetical scenarios. The main advantage of using scenarios is that they allow us to capture current thought processes; consequently, they can address the limitation of inquiring only about past decisions.

Figure 3.1 Data Collection Approaches (summary):
- Think-Aloud (Actual Decisions): (+) Allows for capturing actual and relevant decisions freely and without interference. (-) Participants may not remember the details of the decisions fully and accurately.
- Think-Aloud (Hypothetical Scenarios): (+) Allows for capturing inner thoughts freely and without interference in the moment, which is more accurate than recalling past decisions. (-) Decisions are not based on actual experience.
- Semi-Structured Interviews: (+) Allows for further exploration of participants' decision-making processes. (-) The interviewer's questioning may interfere with the participants' responses.
After allowing participants to give their responses without any interruption, I began asking questions. The additional semi-structured questioning allows for further exploration of the research questions, understanding the responses in more detail, and addressing any gaps and inconsistencies heard in participants' responses (Furneaux & Wade, 2011). The study was designed to run between 45 and 60 minutes. It began by briefing the participants on the purpose of the study. To avoid priming users that this was a security-focused study, they were told that its purpose was to understand how individuals make IT-related decisions on their devices and online platforms. It was emphasized that honesty is the most critical factor in the responses and that whether the decisions are viewed as good or bad is irrelevant. I started with the think-aloud sections. First, a warm-up exercise was conducted (Ericsson & Simon, 1984, 1998). Ericsson and Simon (1984, 1998) recommended using mental multiplication (e.g., 24 × 34) to warm up participants before the think-aloud section. Mental multiplication is a simple and unambiguous exercise that helps by "providing an opportunity to practice participants' full attention to the presented task while verbalizing thoughts" (Ericsson and Simon, 1998, pg. 181). After the warm-up exercise, I presented participants with security decision scenarios and asked them to describe their thought process when making such decisions using the think-aloud technique. During their responses, the interviewer did not intervene. In the next step, I inquired about some of their own past security decisions and recorded their thoughts. The order of the hypothetical and actual-decision questions was varied between participants to avoid any possible carry-over effect.
After they fully responded to both scenarios, I followed up with semi-structured questions to clarify responses and remove possible ambiguities where needed.

3.4.2 Data Collection

Overall, 27 interviews were conducted over two phases. The breakdown of age and gender is presented below:

Age Range   Female   Male   Total
18-24           10      3      13
25-34            5      7      12
35+              -      2       2
Total           15     12      27

Table 3.1 Demographic Breakdown

Based on the proposed study design, I developed the questionnaire (Appendix H). After advertising on social media, I began interviewing interested participants. Respondents to the advertisement included both participants from the university campus and working participants outside the campus. This initial round of data collection continued until the preliminary results showed saturation and redundancy in the responses. At that stage, I conducted a preliminary assessment to see whether any changes to the questions could (and should) be made. Such iteration is a natural part of qualitative research, where initial data collection helps refine and improve the questions (Gerlach & Cenfetelli, 2020; Yin, 2015), which in turn helps with further answering the research questions. There were two main takeaways from this preliminary assessment. First, the first-round questionnaire targeted general security decisions; no specific actual security decision (e.g., password creation) was probed. Rather, participants discussed whichever decision processes they wanted. However, the preliminary assessment showed that users' decisions fall under four general types: account and device management, password creation, security software selection and usage, and web browsing. Given these common types of decisions discussed by the users, I decided to refine the questions further. Specifically, instead of asking participants to discuss any security decisions made in the past, I presented them with more targeted questions.
Specifically, these were questions focusing on the four decision types that emerged from the preliminary assessment. Additionally, the early results showed that most participants do indeed use heuristics in their decision making. A common term used by respondents was the importance of "convenience." This is in line with Simon's proposition that people utilize heuristics mainly as a way to reduce their cognitive effort. Furthermore, the early transcriptions showed that several heuristics were most commonly used among the users: availability, affect, anchoring, brand, expertise, and representativeness. For example, in responses about the decision process for selecting an antivirus, "expertise" appeared as a common heuristic:

P12- [On removing viruses] "I'll either ask people what they have or go to Best Buy or someplace else to see what they say… I'll ask what can protect me from viruses… and hacking."

P10- [On selecting a security software] "Very first thing I will do is consult with my [dad]. Whenever it comes to decisions such as this, there are people in my life that I can ask about.
Because they had more experience with these issues, and it is much more prevalent for them."

Account and Device Management: Logging out of Accounts (3 participants), Login Alert (4), Setting Up Account Recovery (1), Setting Up Security Questions (1), Turning off Bluetooth (1), Turning off GPS (1), Using 2FA for Account Set Up (6), Using Multiple Emails (2)
Web Browsing: Avoiding Problematic Websites (1), Avoiding Usage of Public Wi-Fi (1), Deleting Browsing History (1), Deleting Cookies (1), Detecting Phishing Emails (1), Disabling Cookies (1), Optimizing Security Settings (1)
Password Creation: Changing Passwords (2), Using Different Passwords (6), Using Strong Passwords (2)
Security Software Selection and Usage: Using VPN (7), Buying and Using Antiviruses (1)

Table 3.2 Decision Types

Accordingly, going into the next phase of data collection, while I still took notes of any other possible heuristics that might arise in users' responses, I paid particular attention to the usage of the six heuristics that seemed common in the first phase. This allowed me to conduct a more focused investigation of users' security decision-making processes. After 27 interviews, data collection was concluded. Historically, three approaches have been suggested for assessing the adequacy of sample size in qualitative research: precedence, general guidelines, and data saturation (Marshall et al., 2013). All three were taken into consideration when assessing the sample size for this study. First, I reached data saturation in both phases of the interviews. "Saturation is reached when the researcher gathers data to the point of diminishing returns, when nothing new is being added," write Bowen et al. (2008, pg. 140). The purpose of this study was to examine which heuristics are commonly used by users.
In each phase of the interviews, after the first ten interviews, responses began to show redundancy (i.e., similar types of heuristics were being used in different tasks). For instance, in password creation, the availability heuristic was mentioned by most users. Furthermore, all participants had similar reasons for why and how they create their passwords the way they do. Additionally, the proportion of heuristic utilization across different tasks remained somewhat constant. In each phase of interviews, I reached data saturation after interviewing roughly ten participants; however, I conducted several additional interviews beyond the point of saturation to ensure no significant findings were lost (Marshall et al., 2013). The second approach to assessing the adequacy of sample size is following historical precedence. In their detailed literature review and commentary on sample size in qualitative IS research, Marshall et al. (2013) recommended a range of 15 to 30 individuals as a satisfactory sample size; studies utilizing grounded theory, which aim to develop a new theory, were suggested to include data from at least 20 participants. Accordingly, from the precedence and general guidelines perspectives, this study follows the recommendations given by Marshall et al. (2013). It is also worth pointing out that in this study I did not aim to develop a new theory; rather, the goal was to assess how users utilize heuristics in their security decision making, with the categorization informed by the theory of bounded rationality. Accordingly, I argue that the sample is adequate for the study in its current scope.

Research Question (Phase 1): Do heuristics comprise an essential part of security decision making?
Phase 1 Preliminary Assessment: Yes. However, early findings suggested that the importance of heuristics, or their prevalence, can differ depending on the type of task. Motivated by this, I focused the questions on several specific tasks; a question targeting general security decision making stayed in place.
Research Question (Phase 2): Do heuristics comprise an important part of security decision making regarding (a) password creation, (b) web browsing, (c) account and device management, and (d) security software selection and usage?

Research Question (Phase 1): What heuristics are commonly used in the process of security decision making?
Phase 1 Preliminary Assessment: The preliminary results indicated that users tend to use heuristics to make decisions more convenient for themselves. The most prevalent heuristics included availability, affect, anchoring, brand, expertise, and representativeness. Motivated by this observation, I paid specific attention to the aforementioned heuristics.
Research Question (Phase 2): What heuristics are commonly used in the process of security decision making? Specifically, how often are the following heuristics used: (a) anchoring, (b) availability, (c) brand, (d) expertise, (e) representativeness, and (f) affect?

Table 3.3 Phase 1 Preliminary Assessment Summary

3.4.3 Data Analysis

Data analysis was conducted according to Framework Analysis while adhering to the qualitative research criteria (i.e., credibility, transferability, dependability, and confirmability) (Lincoln, 1985). A summary of adherence to these criteria is presented in Appendix J. Framework Analysis involves five consecutive stages: familiarization, identifying a thematic framework (i.e., coding), indexing, charting, and mapping and interpretation. The analysis begins with familiarization. The objective of this stage is for the PI to immerse and familiarize himself with the data. This stage may involve gaining a better understanding of responses and the apparent themes within them; it can also help with identifying relevant parts of responses (Ritchie et al., 2003). This stage can start during data collection.
As the PI, I reviewed the transcripts, highlighted parts of the responses that directly discussed the users' security decision making, and took notes of the apparent heuristics within those responses. The preliminary assessment of phase 1 of data collection, which led to refining the questionnaire for the second round, falls under this stage.

The second stage is concerned with identifying a thematic framework (i.e., coding). This stage involves identifying the key themes embedded in the transcripts. In cases where an a priori theory is in place, a set of pre-defined codes can be used instead (Gale et al., 2013; Ritchie et al., 2013). Since my objective was to identify heuristics that have already been defined, in information security, an a priori codebook was developed based on prior literature (Shah & Oppenheimer, 2008). This codebook included the heuristics that seemed most relevant to information security. In addition, if a theme that a judge believed to be present in a decision did not belong to the pre-defined codes, they were given the option to express the theme in their own words.

Affect (Slovic et al., 2002): Making decisions based on one's emotions, such as fear, worry, pleasure, or surprise. This can appear in two ways: 1) people tend to take an action or consider an option beneficial if they feel positive about it, or 2) people tend not to take an action, or to consider an option problematic, if they feel negative about it.
Anchoring and adjustment (Tversky & Kahneman, 1974): Decision makers form judgments by first anchoring on a salient and accessible value and then adjusting their evaluations from this value.
Availability (Tversky & Kahneman, 1973, 1974): This heuristic occurs when a person uses information that is readily available to her/him to make decisions. Human brains do not weigh the importance of all data in an equal or accurate manner; information that is especially vivid or provocative is coded by the brain as important and is subsequently recalled more easily. Examples include information that is more recent, salient, or personal.
Brand name (Maheswaran, Mackie, & Chaiken, 1992): This heuristic occurs when people judge and make decisions based on their favorite brands.
Choice by most attractive aspect (Svenson, 1979): Making decisions based on only the one factor that the user finds most valuable.
Elimination by least attractive aspect (Svenson, 1979): The user makes decisions in a step-by-step manner, where each step removes one option based on only the one factor that the person finds least valuable.
Expertise (Ratneshwar & Chaiken, 1991): Making decisions based on the recommendation of someone whom the person considers an expert.
Minimalist (Gigerenzer, Hoffrage, & Kleinbölting, 1991): Situations where users randomly search for criteria to use in making a final decision. Unlike "choice by most attractive aspect" and "elimination by least attractive aspect," where users know a priori which criteria are important/unimportant to them, in a minimalist approach the user does not have a pre-conceived value attached to the criteria.
Representativeness (Tversky & Kahneman, 1974): People judge an action, option, or item by the degree to which it resembles another action, option, or item. Instead of assessing the main criteria, people assess criteria that are easier for them to understand.
Satisficing (Simon, 1955, 1956, 2000): Decision makers set cutoff levels for each cue and then select the first alternative in their search that surpasses their cutoff for each cue; that is, the chosen alternative is simply "good enough."
Weighted pros (Huber, 1979): The user takes into consideration multiple criteria, each with a different value, and makes a final decision.
TBD: In case a different theme is identified by a judge, this theme is assessed against the list of established heuristics in Shah & Oppenheimer (2008).

Table 3.4 Indexing Codebook

The next stage is indexing. This step involves applying the codebook developed in the prior steps back to all the transcripts to identify the themes within them (Ritchie et al., 2003). One challenge with indexing is that while one person may see a decision as involving a specific heuristic, others may not agree. To reduce subjectivity in this step as much as possible and to integrate quantitative validation of inter-rater agreement and inter-rater reliability, I conducted the indexing in two phases. First, based on my notes and familiarization during the first stage, I developed a matrix (statements × heuristics) for a modified card sorting assessment. In this matrix, rows comprised statements in which participants discussed their security decisions, and each column represented a heuristic along with a brief definition. Historically, card sorting has been used to determine a construct's convergent and discriminant validity against items from other domains (Moore & Benbasat, 1991). For this study, unlike traditional card sorting, where judges can assign a statement to only one category (e.g., one statement can be attributed to only one heuristic), judges could assign a statement to any number of heuristics. The resulting table is a mixture of traditional card sorting and the matrix structure proposed by MacKenzie, Podsakoff, & Podsakoff (2011) for assessing inter-rater agreement, where each cell generates a hit ratio.
The hit ratio, as a measure of inter-rater agreement, is the number of item placements in one category divided by the total possible placements; 80% is the generally accepted threshold for hit ratio statistics (Cenfetelli et al., 2008; Moore & Benbasat, 1991). Accordingly, the aforementioned matrix with 93 statements (decisions) was developed. The judges were asked to assess whether the statements contained the usage of any heuristics and were informed that they could assign any number of heuristics to each statement. Figure 3.2 is an instance where one judge placed a statement under both the availability and expertise heuristics. Cumulatively, the matrix was assessed by 15 judges independently. The three investigators evaluated all 93 statements. Additionally, 12 other researchers, including senior Ph.D. students in information systems, computer engineering, and information science programs, participated in this card sorting assignment. Due to the large volume of cases, for these judges the matrix was broken into thirds, with each additional judge assessing 31 statements. Consequently, this resulted in seven responses for each statement.

Figure 3.2 Example of Card Sorting Answers²

This practice allowed me to calculate the hit ratio for each cell. For example, in Figure 3.3, one out of the seven judges believed the statement to include the expertise heuristic; thus, the hit ratio for that cell was calculated as .14. In contrast, six out of seven judges believed the statement to include the availability heuristic, resulting in a hit ratio of .86.

Figure 3.3 Example of Hit Ratio Analysis

Based on the acceptable threshold (> .8), I kept any rows that included at least one cell with a hit ratio above .8. This conservative approach allowed me to obtain the decision-heuristic placements that were generally agreed upon (by at least 6 out of 7 judges).
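As a minimal sketch of the hit-ratio computation described above (not the actual analysis tooling; statement and heuristic names are hypothetical), each cell's ratio is simply the count of judges placing a statement under a heuristic divided by the number of judges:

```python
# Hit ratio per (statement, heuristic) cell: the fraction of judges who
# placed that statement under that heuristic. A threshold of .8 keeps only
# placements agreed on by at least 6 of 7 judges.
def hit_ratios(placements, n_judges):
    """placements: dict mapping (statement, heuristic) -> count of judges
    who assigned that heuristic to that statement."""
    return {cell: count / n_judges for cell, count in placements.items()}

# Hypothetical example mirroring Figure 3.3: 7 judges assessed one statement.
placements = {("S1", "availability"): 6, ("S1", "expertise"): 1}
ratios = hit_ratios(placements, n_judges=7)

kept = {cell for cell, r in ratios.items() if r > 0.8}
print(round(ratios[("S1", "availability")], 2))  # 0.86
print(round(ratios[("S1", "expertise")], 2))     # 0.14
```

Only the availability cell survives the .8 cutoff in this example, matching the logic used to retain decision-heuristic placements.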
This process led to keeping 49 statements, with the distribution seen in Table 3.5. While a given statement could be assigned to more than one heuristic, surprisingly, each of the final statements was placed under only one heuristic. After reaching inter-rater agreement among the investigators and the external researchers during the card sorting assessment, I calculated Fleiss' kappa for the reliability of agreement among the study investigators (J. Fleiss, Levin, & Paik, 2003; J. L. Fleiss, 1971). Values above .8 are conventionally interpreted as near-perfect agreement, while values above .6 are considered substantial (Landis & Koch, 1977). For each heuristic in the matrix, I calculated Fleiss' kappa, which ranged from .77 to .85, all near or above the acceptable threshold (Appendix K shows the complete results of the card sorting exercise and the Fleiss' kappa calculations).

² (1 denotes the existence of the heuristic for the statement and 0 denotes its absence)

The next stage in framework analysis is charting the results. In this stage, to reduce the volume of data and keep the results and findings within the scope of the study, another matrix is developed; it summarizes the data by category and helps the researcher present the final interpretation of the results (Gale et al., 2013; Ritchie et al., 2003). The analysis showed that users use heuristics in their security decision making (RQ1). Furthermore, the judges concluded that 17 decisions included the usage of expertise, 12 availability, seven brand, five representativeness, four anchoring, and four affect (RQ2).
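As an illustrative sketch of the reliability statistic used above (assuming a ratings table of statements × two categories, heuristic present/absent, with a fixed number of raters per statement; the example ratings are hypothetical, not the study's data), Fleiss' kappa can be computed directly from its definition:

```python
def fleiss_kappa(counts):
    """counts: list of per-subject category counts, e.g. [present, absent],
    with each row summing to the same number of raters n."""
    N = len(counts)                # number of subjects (statements)
    n = sum(counts[0])             # raters per subject
    k = len(counts[0])             # number of categories
    # Per-subject observed agreement P_i and overall category proportions p_j
    P = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in counts]
    p = [sum(row[j] for row in counts) / (N * n) for j in range(k)]
    P_bar = sum(P) / N             # mean observed agreement
    P_e = sum(pj * pj for pj in p) # expected chance agreement
    return (P_bar - P_e) / (1 - P_e)

# Hypothetical ratings: 3 investigators judging whether one heuristic is
# present (column 0) or absent (column 1) in each of 4 statements.
ratings = [[3, 0], [0, 3], [3, 0], [2, 1]]
kappa = fleiss_kappa(ratings)
print(kappa)
```

A value near 1 indicates near-perfect agreement among the raters; the disagreement on the last statement pulls the example value down toward the "substantial" range.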
Heuristic:       Affect  Anchoring  Availability  Brand  Expertise  Representativeness  Total
Decision Count:     4        4          12          7       17              5             49

Table 3.5 Heuristic Usage per Decision

I also calculated the bivariate correlations between the presence/absence of the heuristics themselves and three other user characteristics (security violation victimhood, security news exposure, and gender). Heuristics were coded as 1 = seen and 0 = not seen for each user. Gender was coded as 0 = female and 1 = male. Security news exposure and prior security breach were operationalized on a Likert scale from 1 (very infrequently) to 7 (very frequently) (Anderson & Agarwal, 2010).

                        1     2     3     4     5     6     7     8
1 Affect
2 Anchoring           -.17
3 Brand               -.16  -.16
4 Availability         .23   .04  -.20
5 Expertise            .19   .19   .17   .35
6 Representativeness   .07  -.20  -.04  -.21  -.08
7 News Exposure       -.32   .01  -.13  -.09  -.10  -.09
8 Prior Breach        -.25  -.01  -.10   .28  -.12  -.28   .15
9 Gender               .05  -.16   .10  -.04  -.16   .34   .45  -.19

Table 3.6 Bivariate Correlations

The final stage is mapping/interpretation of the results. This involves discussing the findings and their implications, and it allows the researchers to discuss the bigger picture (Ritchie et al., 2003; Ross, 1973; Ward, Furber, Tierney, & Swallow, 2013). I discuss the results in two subsequent sections: I first discuss the results with respect to the research questions and subsequently delve into the study's implications.

3.4.4 Results

Overall, 49 decisions were judged to include the usage of one heuristic. In the results, expertise was the most common heuristic (with 17 items) found in the decisions. However, since the unit of analysis was "a security decision," this number may not reflect the prevalence of expertise among the sample. Hypothetically, if 7 of the 17 decisions came from the responses of one participant, then expertise would not be as prevalent as initially thought.
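A brief sketch of the correlation coding described above (with hypothetical toy data, not the study's responses): since heuristics are coded 0/1, a Pearson correlation between a heuristic indicator and another characteristic is equivalent to a point-biserial correlation.

```python
import math

def pearson(x, y):
    """Pearson correlation; with a 0/1-coded variable against another
    variable this equals the point-biserial correlation."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical codings for four participants: whether the affect heuristic
# was seen (1/0) and their news-exposure score (1-7 Likert).
affect_seen = [0, 1, 0, 1]
news_exposure = [1, 2, 3, 4]
print(round(pearson(affect_seen, news_exposure), 2))  # 0.45
```

With only a handful of observations, such coefficients are unlikely to reach significance, which is consistent with the small-sample caveat noted for Table 3.6.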
Accordingly, to measure the most prevalent heuristic accurately, I needed to assess heuristic usage per participant. This gives a clearer picture of which heuristics are more common among various users. To do this, I first mapped which decision belonged to which participant. Table 3.7 shows the prevalence of heuristics among participants.

Under the theory of bounded rationality, decision makers use heuristics to reduce their mental effort due to the complexity of information processing. As to how they achieve this, I discussed that prior literature identified various heuristics used to reduce an individual's mental effort. The data showed that this seems to be the case for many participants; not only did participants implicitly or explicitly mention that they aim to reduce their mental effort in their security decision making, but the analysis also showed that they utilize a variety of heuristics to achieve this. However, as seen in Table 3.7, four individuals (15% of the sample) did not use heuristics in their security decision making. As to the reason behind this, the post-study correlational analysis can provide possible explanations. As seen in Table 3.6, there was a negative association between prior violations and news exposure on the one hand and the usage of brand, expertise, affect, and representativeness on the other. While the bivariate correlations reached -.32 (between news exposure and the affect heuristic), none of them were significant, possibly due to the small sample size of this calculation. Intuitively, it is plausible to predict that as users become more exposed to security news or experience security breaches themselves, they become more concerned and involved in the process of decision making and thus rely less on heuristics. Accordingly, these two constructs may influence users' usage of heuristics. However, more data is needed to support this observation.
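Because the heuristic indicators are binary (1 = seen, 0 = not seen), the bivariate correlations discussed above amount to ordinary Pearson correlations over the coded columns (a point-biserial correlation when one variable is binary). A minimal sketch, using made-up data rather than the study's actual coding sheet:

```python
from math import sqrt


def pearson_r(x, y):
    """Pearson correlation; with a 0/1 variable this equals the point-biserial r."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)


# Hypothetical coding: 1 = affect heuristic seen, 0 = not seen,
# against a 1-7 Likert score for security news exposure.
affect = [0, 0, 1, 1, 0, 1, 0, 0]
news = [6, 5, 2, 3, 7, 1, 4, 6]
r = pearson_r(affect, news)  # negative, matching the direction reported above
```

With samples this small, even sizable correlations typically fail to reach significance, which is consistent with the non-significant coefficients reported in Table 3.6.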
Participant   Affect   Anchoring   Availability   Brand   Expertise   Representativeness
P1                                 x
P2                                 x                       x
P3                                 x              x        x
P4
P5                                                         x
P7            x                    x
P8                     x
P9
P10           x                    x                       x
P11                                                        x
P12                    x                                   x
P13                    x           x                       x
P14                                x                       x
P15           x                    x                       x
P16                                               x        x
P17                                               x        x
P18                    x           x                       x
P19                                                                    x
P20
P21                                               x        x           x
P22
P23                                x                       x
P24           x                    x                       x           x
P25                                                                    x
P26                                                                    x
P27                                x
Percentage of Heuristic Usage (= count of usage / 27):
              15%      15%         44%            15%     56%         19%

Table 3.7 Heuristic Usage per Participant

Additionally, the results showed that heuristic usage in information security varies across security decisions. In my analysis, I created four categories of personal security decisions based on responses from the first phase of data collection: account/device management, password creation, web browsing, and security software selection and usage. The contextual usage of heuristics was most stark in the first three categories. The majority of instances of the availability heuristic trace back to password creation decisions. Representativeness was the dominant heuristic used in web browsing, while expertise, followed by brand, was most commonly used in account/device management decisions.

Decision Type              Affect          Anchoring       Availability    Brand           Expertise        Representativeness
Account/Device Management  3 participants  1 participant   1 participant   4 participants  13 participants
Password                                                   9 participants
Security Software                          3 participants  2 participants  1 participant   3 participants
Web Browsing               1 participant                                                                    5 participants

Table 3.8 Heuristic Usage per Participants and Task Type

The main takeaway from this observation is the moderating role of task type.
While heuristics may very well be present in information security decision making, it is important to acknowledge that the types of heuristics used could vary with the context of the decision. Thus, a heuristic that is dominant in one task is not necessarily prevalent in others.

3.5 Discussion of Results

In this section, I further delve into subthemes within each heuristic usage.

3.5.1 Expertise Heuristic Sub-Themes

The most prevalent heuristic among participants was expertise. In many instances, participants stated that they rely on the expertise of others to make decisions. While a few would take any source of expertise (e.g., results from a Google search), many of those using this heuristic specifically discussed relying on family members (e.g., a partner, father, or brother). For those people, trust was a contributing factor. A few elaborated that they believe they (i.e., family members) want the best for them, so their advice will undoubtedly be to their benefit. That is why they reach out to family in the first place when facing a situation requiring them to make a security decision.

Heuristics: Expertise
Most Common Task: Various Tasks
Process: Relying on a family member's expertise

P10 [On selecting a new security software] "The very first thing I would do, because I know my dad, he's worked a lot and a lot with security things because he's a mechanical engineer and that's always a very common thing for him. The very first thing I would do is definitely consult with him. Whether he knows it or not, there are definitely people in his company that do."

P21 [On whether they get help on security decisions from others] "I have some family members that are more well versed in security. I go to them if I'm unsure about a decision.
Usually, when it comes to privacy, I have a very clear goal or a clear idea of what I want, but if it comes to something that I'm not sure of how a site operates, or I'm not sure about its reputation, I will go to them." [Interviewer: Does trust matter?] "Heavily. I mean, I trust my family, and I know that they have my best interests at heart when they get me recommendations or when they get advice. So, I trust them on that aspect, and the friends that I know who are in it are very close friends. So, I also know that they'll give me advice that is best suited to my needs."

P24 [On whether they get help on security decisions from others] "I guess I go to my brother. Cause my brother has more expertise. He's just good with it [security] compared to me, and since we're in the same age and then since we're part of the same family, it makes things easier." [Interviewer: Does trust matter?] "I would say trust, convenience of time, and expertise."

P16 [On whether they get help on security decisions from others] "I do. My partner is a software developer. [I rely on his opinion]." [Interviewer: Does trust matter?] "Maybe a little, yeah. I still make most of my own decisions. I know I can trust my partner over some random website."

P11 [On removing viruses] "I go to people who are experts in the field. Both my partner and my dad worked in computers.
So, both of them have ideas of what is good, tried and true, security programs that I can rely on, so typically, that's how I would go about it because I would want to avoid downloading something that is in and of itself malware."

P12 [On removing viruses] "I would either maybe like contact my like the, like the number I could call for like my software, like, like a support system or something, you know, like a call center, maybe to walk me through it or if not, I would probably go to somewhere like Best Buy and see if they have any information."

Table 3.9 Expertise Heuristic Sub-Themes (Sample Responses)

3.5.2 Availability Heuristic Sub-Themes

The availability heuristic, which is using available information (e.g., recent, personal, or salient) to make decisions, was the second most used heuristic among the participants. This was mainly due to users' discussions of how they created their passwords. A closer look at these statements showed that, with minor differences, the process is similar among those using the availability heuristic: they create their passwords using a template plus a few variable additions. While the template is fixed, the additions could be random or created to adhere to the password requirements of the websites.

Heuristics: Availability
Most Common Task: Password Creation
Process: Using a fixed template plus variable additions

P3 "[My password] just always kind of revolve around those few keywords."

P16 "I have like kind of a template of a password. I use the templates because I find it easier to remember it. Cause if I use a different password for like every single website, I have so many passwords and so many different logins that I would just always forget my password.
And then that's just, that just becomes a hassle."

P7 "My passwords are two parts: The first one is fixed and it's the same in all my accounts, but the second part is different." [Interviewer: Why would you use such a system?] "There is a lot of passwords and I have to remember it. That's why they have to be easy."

P24 "I'm not that complicated, but definitely mix of numbers and letters. Uh, and then for like in the, in, in terms of convenience, I sort of have a same, a similar variation or like a standard first word. Like I have variations for that for different accounts. Some of them are the same password."

P18 "I create a password based on something that is easy to remember. Maybe a combination of a capital letter, a lowercase letter, maybe a special character as well. [For] most of my accounts, I have the same password."

Table 3.10 Availability Heuristic Sub-Themes (Sample Responses)

During the interviews, this observation prompted me to ask participants about the effect of website requirements on their password creation process. Almost all participants stated that "[password requirements have] little to no effect" and that "[they] only change their template so that it meets the requirement of the website." Password requirements are the current standard during account creation, and while adhering to them can create a degree of complexity or length suggested by security standards, it does not seem to push or motivate users to create better passwords. Participants' responses to this requirement were more in the spirit of "doing it just to pass the hurdle" rather than "doing it for increased online protection."

3.5.3 Representativeness Heuristic Sub-Themes

The data showed that users utilize the representativeness heuristic primarily during web browsing. While representativeness cues can take different forms (e.g., text, images), it seems that visual cues represent "secure vs.
insecure" options to users. For instance, the padlock in the browser, the perceived quality of website design, and the usage of "https" were brought up during the interviews. With regard to some of these cues, such as the padlock, this is expected because they are designed to convey "connection security." However, some users take this as a sign that the whole website, including its content, is secure, while that may not be the case. This is an instance where using representativeness leads to an error in judgment.

Heuristics: Representativeness
Most Common Task: Web Browsing
Process: Certain visual cues represent the overall security of a website

P21 "In the search bar at the top of most search engines, there's a small lock to show if the website's secure or not. If it's not secure that I know that it's not to be fully trusted. And if I see popups starting to come"

P19 "Probably just text and kind of how the website is formatted. Just general formatting in 21st-century websites, kind of a clean, modern look. I would make it. I would assume it's more legitimate."

P26 "Most of the websites I visit have the connection is secure; The look on their website. If I am shopping, I do the shop if the website has this secure image, and besides that on the website, if there are lots of commercials and the pop-ups, I try to avoid these kinds of that websites because every popup and try to link you and other websites. So, I kind of feel like that they're unsafe. I think the presence of the lock means the website is secure."

P25 "I mean, I can't say I consciously do it, but if I notice there's no lock, on the URL bar on the top, for example, then it's probably because it's not an HTTPS website for example.
I think there's kind of this unconscious process looking as well as like, this a website is littered with ads."

Table 3.11 Representativeness Heuristic Sub-Themes (Sample Responses)

3.5.4 Anchoring Heuristic Sub-Themes

When using anchoring, users make decisions based on an available reference point. Reviewing the answers showed that some users do not necessarily look for expertise to make decisions. Rather, they search for an anchor, which can be an online review, a product rating, or word-of-mouth.

Heuristics: Anchoring
Most Common Task: Security Software Usage
Process: Using others' online experience and online ratings to make a decision

P8 [On using security software] "I would do some digging around online, like, just some Googling on what are the best, what are what's considered the best, antivirus software. And I would also ask my friends as to what they use and what they recommend and then use all that info to make a decision."

P12 [On selecting a security software] "I would look online to see which ones are best rated. So, like which ones people recommend and have had the most experience with. I would also probably ask my dad for more information and what he thinks I should get."

P18 [Interviewer: How do you decide a software is powerful?] "I see how many people have already installed it. Like in Google Play, there are some information regarding how many people use it [a software]. I also look at the ratings."

Table 3.12 Anchoring Heuristic Sub-Themes (Sample Responses)

3.5.5 Affect Heuristic Sub-Themes

When using the affect heuristic, users judge actions about which they feel positive (negative) more favorably (unfavorably). Although affect was observed less often than the availability heuristic, the pattern seemed to hold: positive feelings (e.g., feeling better/safer) and negative feelings (e.g., fear) led users to make certain decisions.
Heuristics: Affect
Most Common Task: Various Tasks
Process: Positive (negative) feelings influence judgment

P10 [On turning off Bluetooth/location] "I don't know [why I do it]. It's just something I do. I don't think about the consequences. I just, you know, in my mind, I think I feel more safe when my GPS is off."

P15 [On removing a virus] "I would probably panic. Power off the device or whatever and probably run a scan, try to fix it that way. I probably do my own thing and won't ask anyone."

P24 [On safe practices for web browsing] "I probably use the incognito browser, but more than that, I don't really, I don't change any habits. I know it's not safer. I guess like subconsciously I'm aware that it doesn't, but it just makes me feel better to be using [incognito] because it just feels less public to me."

Table 3.13 Affect Heuristic Sub-Themes (Sample Responses)

3.5.6 Brand Heuristic Sub-Themes

With respect to brand, only one brand played a role in the process of security decision making: Apple. On multiple accounts, users discussed that they see Apple products as superior with respect to security.

Heuristics: Brand
Most Common Task: Various Tasks
Process: Apple is just safe

P16 "I feel safer [with using a MacBook]. I think overall, they are superior. I guess brand is important in my decisions. I just heard that Apple is generally safer than Windows when it comes to viruses and stuff."

P17 "I haven't installed any [security software], especially an antivirus, since it's a Mac and the viruses are significantly less common on them, that would be for, for that reason."

P16 [Interviewer: Do you think the brand has influence on your decision?] "Now that I'm thinking back to it, yes. I think Mac is more secure so when I initially was setting up my computer"

P25 "I don't use an antivirus. All of my products are Apple, so I feel really like safe on that side.
Like I'm not downloading anything that I don't trust and it's hard to get viruses in Apple."

Table 3.14 Brand Heuristic Sub-Themes (Sample Responses)

3.6 Theoretical Implications

In this study, I demonstrated how users use various heuristics in their security decision making. To explain this process, I drew upon the theory of bounded rationality, under which users utilize heuristics to reduce their cognitive effort (Newell & Simon, 1972; Shah & Oppenheimer, 2008). Specifically, expertise, availability, affect, representativeness, anchoring, and brand were shown to be heuristics commonly used in users' security decision making, with task type as a moderator in heuristic utilization.

As discussed in the literature review, over the years, studies have sporadically examined the usage of heuristics in two groups: one group included commentary and reviews which discussed the potential influence of heuristics in information security, and the other used heuristics as a possible explanation for users' information disclosure behavior in the privacy literature. The current study extends these works in two ways: first, it responds to the call to further assess heuristics in information security (Acquisti et al., 2017; Crossler et al., 2013; Dinev et al., 2015) and presents empirical evidence of the actual types of heuristics used in this context (e.g., availability, affect). As the first exploratory study to examine heuristic utilization by users, this study specifically extends prior work which suggested heuristics as a general explanation for irrational privacy and security decision making without much contextual investigation (Acquisti, 2004; Acquisti et al., 2015; Adjerid et al., 2018; Sundar et al., 2013).

In addition to the above implications, this study contributes to the security-convenience tradeoff stream of research, which is most commonly investigated in the usable security literature (B. C. Kim & Park, 2012).
Under this perspective, secure decisions often cause inconvenience for users. For this reason, in order not to lose their convenience, users often do not make the most secure decisions. This is especially shown to be the case in the authentication and password management literature (Tam et al., 2010; Zviran & Haga, 1999). As a result, specific streams of security research such as usable security have investigated changes in user interface design that can increase user security without jeopardizing convenience (Furnell, 2005; Hwang & Verbauwhede, 2004; J. Johnston et al., 2003). This study contributes to that stream in the following way: it goes beyond "why" people seek convenience and make somewhat irrational decisions and explains "how" this occurs. Expertise, availability, anchoring, affect, and brand were seen as the types of heuristics that people use to make their security decisions. The findings are particularly useful for the usable security literature because understanding which specific heuristics are used by users can enable researchers to test new interface designs and feedback mechanisms that integrate those heuristic cues. For instance, would users create a stronger password if they were told of the expertise of the source behind a password manager's recommendation?

Finally, while I observed trends between heuristic type and task type (e.g., availability is commonly used in password management), the study showed that, in many instances, not everyone uses the same type of heuristic in a given type of security decision, and there is no dominant heuristic in security decisions. Instead, these short mental processes are quite dependent on the type of security decision and other personal characteristics.
Figure 3.4 Theoretical Contribution

Heuristics and Bounded Rationality Perspective

Existing Literature:
• Heuristics are assumed as an explanation behind users' irrational decisions in information security and privacy (e.g., information disclosure).
• The majority of studies did not delve into the type of heuristic used by users and discuss the concept in a general manner.
• The few studies which discussed the type of heuristics in information security mainly assumed it based on findings from other domains.
(Acquisti, 2004; Acquisti et al., 2017; Acquisti, Brandimarte, & Loewenstein, 2015; Adjerid, Peer, & Acquisti, 2018a; Crossler et al., 2013; Dinev, McConnell, & Smith, 2015; Sundar, Kang, Wu, Go, & Zhang, 2013; Tsohou, Karyda, & Kokolakis, 2015; Zviran & Haga, 1999)

Contribution:
• The study presents empirical evidence from a broad range of security decisions on heuristic utilization by users.
• The findings show the mechanism behind heuristic utilization and the types of prevalent heuristics in security decision making.
• Analysis showed that heuristics are contextual, and their usage can be moderated by prior breach or security news exposure.

Security-Convenience Tradeoff Perspective

Existing Literature:
• Secure decisions are often more inconvenient for users.
• Users internally make a trade-off between security and convenience, leading them to make insecure decisions.
• To address this, the usable security literature focuses on interface design changes that can lead to more secure decisions while not jeopardizing users' convenience.
(Furnell, 2005; Hwang & Verbauwhede, 2004; J. Johnston, Eloff, & Labuschagne, 2003; B. C. Kim & Park, 2012; Tam, Glassman, & Vandenwauver, 2010; Zviran & Haga, 1999)

Contribution:
• The findings went one step beyond the reason for illogical security decisions (i.e., the "why") and explained the process by which this happens (i.e., the "how").
• Analysis results could further contribute to the usable security literature through potential changes in interface design and feedback development for users.

3.7 Practical Implications

People are still considered the weakest link in information security. Since 2015, while the number of threats arising from technology vulnerabilities has either remained constant or even decreased in some instances, human errors have been increasing steadily (Verizon, 2020). This study provided an explanation of why such errors occur: people attempt to reduce their cognitive effort by using various heuristics. While heuristics are not inherently problematic, they can lead to biases and poor security practices. The findings are beneficial to practice in two ways:

Direct Learning: A simple approach is to educate users on the role that heuristics play in their decision making. Users consciously or subconsciously use these heuristics; however, increasing awareness at a meta-knowledge level can provide users with a better understanding of their capabilities and security knowledge. In the first study, I saw that as subjective knowledge increases, so does the security level of users' decisions. Perhaps a direct learning approach that makes users aware of these heuristics and enables them to understand how they make their security decisions is a plausible path based on this study's findings. For organizational users, this can be integrated into organizations' security awareness programs and security modules, and for personal users, it can be integrated with public educational platforms such as the NSA guidelines for the security of home users in the US and the GetCyberSafe program in Canada, which aim to provide educational materials for the public.
Interface Design/Nudging: In this indirect approach, security engineers, designers, and platform administrators can attempt to help users make more secure decisions by either using innovative design or nudging them toward those decisions. The former has a long history in the usable security literature. Designing interfaces that a) make security decisions more convenient and b) integrate essential heuristic cues (such as making necessary information available without overwhelming the user) is a plausible way to examine and extend the findings of the current study. The findings can also be used to further investigate new nudging techniques. "Nudge is any aspect of choice architecture that alters people's behavior in a predictable way without forbidding any options or significantly changing their economic incentives" (Thaler & Sunstein, 2008). Depending on the decision environment, heuristic cues can be used to nudge users toward more secure decisions. For example, does including available information on proper security behavior in password creation help create stronger passwords? I will test the latter approach (heuristic-based nudging) in the final study.

Chapter 4: Empirical Analysis of the Influence of Heuristic-Based Nudging on Security Decision Making

4.1 Introduction

Study 2 showed that heuristics (short mental processes) play an essential role in various information security decisions. Specifically, the exploratory study showed the presence of the following six heuristics: anchoring, affect, availability, representativeness, brand, and expertise. According to the theory of bounded rationality, individuals are limited in their cognitive abilities and more often make decisions using mental processes that do not impose a major cognitive burden (Shah & Oppenheimer, 2008).
For example, in password selection, I discovered that many participants create their passwords based on a fixed word that is easily available in their memory. In web browsing, a group of participants relied on visual cues, such as a padlock, to assess whether a website's content is secure, rather than making a complete assessment of the website (i.e., content, URL, …). Study 2 was descriptive in nature, focusing on "how people actually make security decisions." While valuable, an additional step must be taken to take advantage of these results. In decision theories, the next step after assessing how people make decisions (i.e., descriptive assessment) is presenting a prescriptive framework, where researchers try to provide guidelines for people to make the best possible decision (Keren & Wu, 2015). Accordingly, my motivation in this study is to understand how we can use the knowledge of heuristic utilization by users (the results of Study 2's descriptive assessment) to provide a feasible strategy that helps people make more secure decisions (offering a prescription to improve their decisions). For instance, can priming users with salient information for secure behavior (i.e., availability heuristic), the expertise of an information source (i.e., expertise heuristic), or icons representing secure behavior (i.e., representativeness heuristic) lead to better security decisions? In Study 2, expertise, availability, and representativeness were found to be the most prevalent heuristics, so in this study I examine these in further detail.

Furthermore, an important part of this assessment requires consideration of the construal level of heuristics. This inclusion is because of the role of construal in user decision making in general and recent findings in information security. Construal refers to the level of abstraction of goals, items, or any action.
According to construal level theory, any goal, action, or piece of information can range from specific to more abstract levels and descriptions (Carver & Scheier, 2000; Trope & Liberman, 2010). The content of high-level construal consists of abstract mental representations and focuses on "why" an action (goal) should be taken (achieved). In contrast, that of low-level construal consists of concrete and specific details and "how" an action (goal) should be taken (achieved) (Liberman et al., 2007; Trope & Liberman, 2010). Ever since it was found that the construal level can be procedurally primed, studies have found that the construal level of messages is a significant factor in users' decision making. The importance of construal level has been investigated in various domains such as recycling behavior (White, MacDonnell, & Dahl, 2011), self-control (MacGregor, Carnevale, Dusthimer, & Fujita, 2017), consumer behavior (Liberman et al., 2007), and the persuasion literature (Hernandez, Wright, & Ferminiano Rodrigues, 2015). In information security, there have been new insights into how the construal level must be controlled when devising security messages for fear appeals and message framing (Fard Bahreini, Cavusoglu, & Cenfetelli, 2020; Schuetz et al., 2020).

Accordingly, the construal level is integrated into this study to extend prior findings in two ways: a) to examine its inclusion with respect to the heuristics mentioned above, and b) to assess how it influences actual security decisions. Understanding how the construal level (i.e., high vs. low) of a heuristic influences security decisions can help with a better understanding of the cognitive processes underlying these heuristics and would help define the conditions under which heuristic-based messages are most effective. Beyond the theoretical motivation, there is a practical benefit from this research as well.
Industry reports and anecdotal evidence have shown the presence of inadvertent errors, which may be the result of individuals using cognitive heuristics when making security decisions. For example, the 2013 U.K. Information Security Breaches Survey reported that inadvertent human errors caused 36% of the worst security breaches of the year; examples include clicking on a malicious email, downloading a tampered file, or creating a weak password. This report and others point to rising human errors. Understanding why these mistakes occur can greatly contribute to increasing security for users. One possible explanation might be that users simply do not have adequate knowledge. However, prior studies and reports show that most users have basic knowledge of security (Anderson & Agarwal, 2010). Even in the first study, the average score on the security knowledge quiz was 13.2 out of 20, an indication that users, on average, have some degree of knowledge of everyday security decisions. As prior literature pointed out, another possible explanation could be heuristics, which ultimately lead to such errors (Acquisti, 2004). Industry reports also point this out. Take password management as an example. As the dominant method of authentication, username and password continue to constitute a significant part of these human errors. In fact, the annual Verizon Data Breach Investigations Report reported that 80% of data breaches are linked to compromised, weak, and reused passwords. Furthermore, this number has been fairly consistent in recent years (Verizon, 2020). A survey of 2,000 individuals in the United States, Australia, France, Germany, and the UK showed that around 90% knew the criteria for strong passwords and the risk of using the same password for multiple accounts (LastPass, 2020). However, 59% of respondents said they use the same password for all their accounts.
The same trend was seen in the second study; while users were aware of the criteria, they create their passwords based on a salient word that they remember in order to reduce their cognitive burden. In another instance, a third-party investigation into the Rockyou.com password hack revealed that the most popular password chosen by users was "123456" (Imperva, 2010). Accordingly, assessing the effect of heuristic cues and construal level on users' security decisions can have practical implications for organizations that aim to reduce such errors and increase their information security.

Motivated by these observations, in this study I turn to how we can utilize heuristic cues to assist users with their security decisions. Efforts to improve security decisions can generally be categorized into two groups: solutions centered around increasing general security knowledge prior to decision making and solutions centered around developing intervention mechanisms during decision making.

a) There is a great deal of literature aimed at developing security training programs via various awareness tools in organizations (D'Arcy et al., 2009; Lebek et al., 2014; Puhakainen & Siponen, 2010; Spears & Barki, 2010; Straub & Welke, 1998). This group of studies aims to improve security performance by increasing the knowledge of the human actor before decision making.

b) In the second group of methods, researchers aim at guiding users to more secure decisions during security decision making. In recent years, there has been a surge in IS research where researchers test various techniques, manipulating messages and choice architectures to influence users' security decisions for the better (Acquisti, 2009; Acquisti et al., 2017; Anderson & Agarwal, 2010; Liu et al., 2016). Among these, using security messages to nudge people has been used often in recent years. Nudges are not mandates.
In fact, Thaler & Sunstein (2008) state that to count as a nudge, an intervention must be cheap and easy to implement, and, more importantly, it must neither significantly change the economic incentives associated with any option nor forbid other options. Part of the appeal of nudges is their lack of mandates. People do not like to be told what to do, partly because they do not want to relinquish control (Keren & Wu, 2015; Thaler & Sunstein, 2008). I observed this tendency in the second study; in discussing password requirements and their impact on users' password creation, the majority stated that the requirements have little effect on how they create their passwords. Since nudging via heuristic-based messages is a ubiquitous approach (i.e., it can be used for both organizational and personal users), whereas increasing users' security knowledge via awareness programs is mainly applicable to organizational users, I chose to utilize nudging with heuristic-based messages in this study. Accordingly, I aim to address the following two research questions:

RQ1. Can heuristic-based messages nudge users toward making more secure decisions?

RQ2. Which construal level (i.e., high vs. low) of expertise, availability, and representativeness is more influential in nudging users to make more secure decisions?

To answer these research questions, I conducted an online experiment. 424 participants were recruited on the premise of assessing the design and functionality of a new health-oriented website. To measure security decisions, I used the security level of the settings selected and the zero-order entropy of the password created during registration on the website. The former was also used in the first study; here it measures the level of security of the settings selected on the website.
The zero-order entropy is a measure of password unpredictability based on length and complexity and has been used as a proxy for password strength (Egelman, Sotirakopoulos, Muslukhov, Beznosov, & Herley, 2013; Mell, Kent, & Nusbaum, 2005). Messages on the sign-up pages were manipulated in a 3×3×3 between-subjects design (expertise heuristic: high construal/low construal/no expertise × availability heuristic: high construal/low construal/no availability × representativeness heuristic: high construal/low construal/no representativeness). On both dependent measures, users made more secure decisions in the presence of any of the three factors (i.e., availability, expertise, representativeness), irrespective of the level of construal, compared to the control group, which saw no message during registration. Looking more closely, with respect to the settings security level, low-construal availability, low-construal representativeness, and high-construal expertise had the highest impact on decisions. With respect to password entropy, low-construal availability and low-construal representativeness remained the most influential combination; however, there was no significant difference between the high- and low-construal expertise conditions. The chapter is structured as follows: I begin by reviewing relevant literature. Next, I present the theoretical development. This is followed by the methodology, discussion of results, and implications.

4.2 Relevant Literature

In Chapter 2, I discussed the nature and definition of heuristics in detail and noted the lack of research in information security. Accordingly, pertinent to this study, I first discuss the relevant literature on the impact of the availability, representativeness, and expertise heuristics on decision quality. This is followed by relevant literature on the construal level.
In the theory development, I discuss how and why these factors could potentially influence security decisions.

4.2.1 Availability Heuristic and Decision Making

The availability heuristic occurs when users use only information that is recent, salient, or personal to them to make decisions (Tversky & Kahneman, 1973, 1974). Depending on the nature of this available information, decision quality can increase or decrease. Accordingly, this heuristic has been a topic of interest in the forecasting (Gilbert, Pinel, Wilson, Blumberg, & Wheatley, 1998; Harvey, 2007), risk perception (Folkes, 1988; Keller, Siegrist, & Gutscher, 2006), and gambling literatures (Croson & Sundali, 2005; Guryan & Kearney, 2008). For example, with respect to risk perception, it has been found that people who have experienced a natural disaster are more likely to buy insurance, as they perceive higher risks (Kunreuther et al., 1978). Similarly, in the health domain, when people remember information about contracting a chronic disease, their self-risk estimates increase, which leads them to act cautiously (Raghubir & Menon, 1998). In another example, in the gambling literature, it has been shown that recent losses reduce online gambling while recent gains increase it (Ma, Kim, & Kim, 2014). In agile IS adoption, users use only a portion of the information about past upgrades: those who perceived past upgrades to be comfortable to implement are likely to hold the same beliefs about future upgrades (Hong, Thong, Chasalow, & Dhillon, 2011). In an e-commerce experiment, Spiekermann et al. (2001) proposed that the observed incongruity between stated privacy concerns and actual disclosure behaviors could be due to the availability of positive memories with shopping assistants.

4.2.2 Representativeness Heuristic and Decision Making

The representativeness heuristic occurs when an individual judges an item based on a specific prototype they have in mind rather than using all the information (Tversky & Kahneman, 1974).
Accordingly, there has been special interest in the domains of auditing, management, insurance, and healthcare. These studies have found both positive and negative impacts of this heuristic on the quality of decisions. For example, in medical diagnosis, doctors and nurses must make diagnoses based on all the available information. However, prior studies have shown that medical personnel tend to rely on the representativeness heuristic to diagnose patients. For instance, they make a diagnosis by comparing a patient to a typical patient (e.g., a typical patient with a particular disease) (Garb, 1996). One study found that if a patient with possible heart-attack symptoms (e.g., chest pain) states that they have recently been laid off from their job, nurses are 25% more likely to attribute the symptoms to the job loss rather than to physical issues. This is because they do not use all the information, and in their minds the symptoms are more likely to be representative of stress resulting from the job loss (Brannon & Carson, 2003). In the same domain, non-medical staff are reported to use a person's sexuality for initial STD diagnoses (Triplet, 1992). In entrepreneurial decision making by a group of students, it was found that they heavily use past success/failure as representations of future success/failure and do not consider all the available information when making a prediction (Wickham, 2003). In the consumer literature, people use representativeness to predict whether a target product delivers a benefit based on the typical product they have in mind (Meyvis & Janiszewski, 2002).

4.2.3 Expertise Heuristic and Decision Making

The influence of source expertise on decision making has been investigated in various domains, and overall, this influence has been shown to be positive.
In accepting advice, it has been found that not only do people value advice from experts more than that of novices (Meshi, Biele, Korn, & Heekeren, 2012), but they also follow experts' advice to a greater degree (Ratneshwar & Chaiken, 1991). Source expertise leads to higher perceived information quality in health messaging (Mun, Yoon, Davis, & Lee, 2013). In the persuasion literature, expertise is shown to have a positive impact on outcomes (Petty, Wegener, & Fabrigar, 1997; Pornpitakpan, 2004). In advertising, citing source expertise has generally led to higher advertising effectiveness (Homer & Kahle, 1990). In the IS literature, expertise has been a topic of interest in knowledge management research; higher perceived expertise is shown to lead to higher perceptions of knowledge usefulness and adoption (Sussman & Siegal, 2003; Watts & Zhang, 2008). Expertise helps with knowledge transfer in organizations, as it gives credibility to learnings (Santhanam, Seligman, & Kang, 2007). When filtering information online and retaining it in memory for long-term use, information that originates from an expert source is more likely to be saved than information of unknown origin (Meservy, Jensen, & Fadel, 2014).

4.2.4 Construal Level and Decision Making

The influence of construal level theory on decision making has been of interest in various domains (Liberman & Trope, 1998; Liberman et al., 2007; Trope & Liberman, 2010). As to how the construal level influences decisions, the findings show that it is quite context-dependent. For example, under regulatory focus, it has been found that in prevention (promotion) tasks, people form thoughts at low (high) construal levels (A. Y. Lee, Keller, & Sternthal, 2010). In other contexts, such as social settings and investment, at low (high) construal people pursue (do not pursue) potentially hurtful truths (Shani, Igou, & Zeelenberg, 2009). Köhler et al.
(2011) found that the construal level of the recommendation moderates the acceptance of interactive decision aids: when focused on immediate (future) consumption, low-level (high-level) construal messages lead to higher (lower) levels of recommendation acceptance (Köhler, Breugelmans, & Dellaert, 2011). Low (high) construal priming combined with a loss (gain) frame results in a higher degree of conservation (White et al., 2011). Lurkers in online policy deliberation forums are motivated to engage in discussions when collective (high-level construal) benefits are presented (Phang, Kankanhalli, & Tan, 2015). A concrete mindset among project managers is associated with identifying a greater number of project risks, perceiving a greater potential impact of risks, and perceiving more effort and resources required to respond to project risks (J. S. Lee, Keil, & Shalev, 2019). In information security, Schuetz et al. (2020) showed the importance of the construal of fear appeals, where low-level construal fear appeals result in higher fear-appeal acceptance. Additionally, in one of my previous studies, I discovered that the construal level combined with message framing changes the impact of messages during setting selection by users. Specifically, negatively framed messages designed at a lower construal level led to more secure decisions (Fard Bahreini et al., 2020).

4.3 Theory Development

I will discuss the theory in two steps: first, I will discuss why the presence of heuristic-based messages (regardless of their level of construal) can lead to more secure decisions. Second, I will discuss the role of construal in the effectiveness of those heuristic-based messages.

4.3.1 Availability Heuristic in Information Security

When a person uses information that is readily available to them to make decisions, they have used the availability heuristic (Tversky & Kahneman, 1974). Human brains do not weigh the importance of all data in an equal or accurate manner.
Information that is especially vivid or provocative is coded by the brain as important and is subsequently recalled more easily. Examples include information that is recent, salient, or personal to an individual (Tversky & Kahneman, 1973, 1974). As a result of utilizing the availability heuristic, such information (i.e., recent, salient, or personal) will be more impactful in users' judgments and decisions. Accordingly, when specific information is made more available to the decision maker at the time of the decision, users are likely to judge and decide heavily based on the information presented to them. For instance, Keller et al. (2006) showed that users who received information about the risk of flooding over the past 30 years perceived more danger than those who did not. Additionally, in Study 2, I saw that people do not use all the information to make security decisions; rather, they rely on mental shortcuts, including the availability heuristic. Based on this, I expect that presenting information that outlines the criteria of a secure decision will lead users to make more secure decisions, as they will use that available information as the main basis for their judgment and decision making.

4.3.2 Representativeness Heuristic in Information Security

Pertinent to this study and the context of information security, I must first discuss what "represents" secure decisions in the minds of users. Prior studies have shown that users pay close attention to visual cues. For instance, Whalen & Inkpen (2005) showed that people use the padlock as the preferred security indicator in assessing website security. In an eye-tracking examination, they found that the lock icon is commonly viewed when users are asked to assess a site's security. In my second study, I also found that participants use a variety of visual cues to make fast decisions about the security of a website.
For instance, participants discussed that they use "HTTPS" as a way to determine the security of content on a website. In this case, "https" is being used to diagnose websites as having secure content. However, "https" only ensures that the connection is secure; it does not convey information about the security of the website's content. Here, the representativeness heuristic has led to an error in judgment by the user. We have also seen this in practice. The usage of visual cues is universal: locks and the color green represent security, whereas danger signs and the color red represent insecurity. In a pilot study conducted prior to designing this study, I surveyed a group of 50 users and assessed which visual cues represent secure decisions to them: for passwords, the strength bar was selected as the visual cue representing a secure decision; for settings, the lock was picked. Based on this, I propose that by showing visual cues that are representative of secure/insecure decisions, users will use representativeness cues and make more secure decisions.

4.3.3 Expertise Heuristic in Information Security

Under this heuristic, decision makers judge and make decisions based on information received from someone whom they consider an expert in that context (Ratneshwar & Chaiken, 1991; Shah & Oppenheimer, 2008). In the minds of many, the notion that "experts' statements are valid" is present (Bohner, Ruder, & Erb, 2002). The influence of source expertise on decision making has been investigated in various domains, and overall, this influence has been shown to be positive. In accepting advice, it has been found that not only do people follow advice from experts to a greater degree (Ratneshwar & Chaiken, 1991), but they also value their advice more than that of novices (Meshi et al., 2012; Pallak, Murroni, & Koch, 1983).
Source expertise leads to higher perceived information quality in health messaging (Mun et al., 2013). Citing source expertise has generally led to higher advertising effectiveness (Homer & Kahle, 1990). In the IS literature, expertise has been a topic of interest in knowledge management research; higher perceived expertise is shown to lead to higher perceptions of knowledge usefulness and adoption (Sussman & Siegal, 2003; Watts & Zhang, 2008). Expertise helps with knowledge transfer in organizations, as it gives credibility to learnings (Santhanam et al., 2007). When filtering information online and retaining it in memory for long-term use, information that originates from an expert source is more likely to be saved than information of unknown origin (Meservy et al., 2014). DeBono & Harnish (1988) found that, regarding issues on a university campus, participants were more convinced by arguments from someone introduced to them as an expert, regardless of the quality of the argument, compared to a non-expert. In the persuasion literature, expertise is shown to have a positive impact on outcomes (Petty et al., 1997; Pornpitakpan, 2004). Furthermore, source expertise is shown to have a bigger impact on message persuasion when there is personal involvement in the task, which increases the motivation to process information (Petty, Cacioppo, & Goldman, 1981). This is because citing that the available information is from an expert gives it more credibility. Our understanding of security decision making points to two things: first, congruent with the theory of bounded rationality, in the second study I saw that decision makers wish to reduce their cognitive effort; the most common way was reliance on the expertise of their family members. Second, security decision making requires some degree of personal involvement.
Users must create passwords and agree to security settings and privacy conditions when working with their devices and online accounts. Based on this, I propose that if the source expertise of security guidelines is salient, decision makers will use it as a heuristic in their security decisions. That is, decision makers "may simply agree with an expert's message without processing its content in any depth" (Petty et al., 1981). All the aforementioned heuristics have one thing in common: they are used to reduce mental effort during decision making. As a result, individuals will not use all the information needed to make the most rational decision (as discussed in normative theories). The table below summarizes how users utilize heuristics in general and how those heuristics can be nudged in information security.

Heuristic | Mechanism | Manifestation in Information Security
Availability | Individual mainly uses information that is recent, salient, or personal | Content of security messages
Representativeness | Individual mainly uses the representation of what he thinks resembles the right decision | Visual security cues
Expertise | Individual uses experts' recommendations | Source of security messages/visual security cues
Table 4.1 Availability, Representativeness, Expertise in Information Security

4.3.4 Construal Level Theory (CLT) in Information Security

CLT, a contemporary theory, argues that almost all actions and goals can be described at different levels of abstraction. There are two distinct attributes with respect to each construal level.

The degree of abstraction: Low-level construal consists of specific details, and high-level construal consists of generic information. For instance, "security software" is a high-construal description of an item, whereas "antivirus" is a low-construal description of the same item.
Psychological distance: Psychological distance refers to the degree to which an event or action is perceived as close or far away. CLT suggests that there is a bidirectional association between the degree of abstraction and psychological distance. Specifically, research supports the assertion that psychological proximity evokes low-level construal, in which people tend to think about actions with respect to the means they can use to achieve them (i.e., the "how" aspect), while psychological distance evokes high-level construal, in which people tend to think about actions with respect to their ends (i.e., the "why" aspect) (Fujita, Trope, Liberman, & Levin-Sagi, 2006; Liberman & Trope, 1998, 2014; Trope & Liberman, 2010). The importance of CLT was recognized when researchers found that task-specific construal can be procedurally primed (Freitas, Gollwitzer, & Trope, 2004; Fujita, Trope, Liberman, & Levin-Sagi, 2006; Hansen & Wänke, 2010; Rim, Hansen, & Trope, 2013; Wakslak & Trope, 2009). Specifically, a priming task can be used to trigger high-level or low-level construal information processing. For example, priming users with specific details/the "how" aspect (abstract information/the "why" aspect) can lead users to construe a task at a lower (higher) degree of abstraction. Based on construal level theory, the available information can range from high construal to low construal. At lower levels of construal, events tend to be perceived as more proximate and more likely to occur (Newell, Mitchell, & Hayes, 2008; Sherman, Cialdini, Schwartzman, & Reynolds, 1985; Slovic, Monahan, & MacGregor, 2000). For instance, in a study manipulating the psychological distance of a risky event, an "every day" framing of the event, by making risks more proximal and concrete than an "every year" framing, resulted in increased risk perception (Chandran & Menon, 2004).
Furthermore, low-level construal descriptions activate one's pragmatic mindset, causing one to think about immediate actions in a domain rather than long-term goals (Kivetz & Tyler, 2007). In health messaging, a set of three studies showed that when presented with low-level construal (high-level construal) health messages, participants designed their health behavior plans based on short-term (long-term) outcomes. Low-level construal also promotes immersion in the here-and-now (MacGregor et al., 2017). Additionally, a low-level construal mindset leads to flexibility toward external influences. Specifically, prior literature suggests that when adopting a concrete mindset, people will be more flexible in incorporating the views of strangers; however, when individuals think about the same issue more abstractly, their evaluations are less susceptible to incidental social influence and instead reflect their previously reported ideological values (Ledgerwood, Trope, & Chaiken, 2010). Finally, concrete items are perceived as more truthful (Hansen & Wänke, 2010). Accordingly, I expect heuristic-based messages at low-level construal to be more effective than those at high-level construal when utilizing each of the availability, representativeness, and expertise heuristics. Overall, based on the presented literature, with respect to the availability and representativeness heuristics, I propose that low-level information and visual cues in information security will activate one's pragmatic mindset, nudging users to focus on the current decision. Additionally, the low-level construal mindset is more flexible toward receiving external influence during decision making. As a result, low-construal availability/representativeness will have a stronger influence on users' decisions than higher levels of construal.
With respect to the expertise heuristic, I propose that, in addition to the above arguments, low-construal expertise will provide more details about the source expertise than a general description of it. This will add to the expert's credibility and, consequently, lead to a stronger influence of the lower construal.

4.4 Methodology

To answer the research questions, I conducted an experiment in a 3×3×3 (expertise: high construal or low construal or no expertise × availability: high construal or low construal or no availability × representativeness: high construal or low construal or no representativeness) between-subjects factorial design.

4.4.1 Demographic and Treatment Breakdown

A total of 424 subjects from prolific.co (an online labor market) participated in the study. They received flat-rate compensation for completing the study and were randomly placed in one of the 27 study groups.

Age Group | Female | Male | Other
18-24 | 39 | 41 | 3
25-34 | 73 | 87 | 3
35-44 | 41 | 50 | 0
45-54 | 28 | 20 | 0
55-64 | 18 | 13 | 0
65-74 | 3 | 4 | 0
75-84 | 1 | 0 | 0
Table 4.2 Demographic Breakdown

Expertise | Availability | Representativeness | Count
High Construal Expertise | High Construal Availability | High Construal Representativeness | 15
High Construal Expertise | High Construal Availability | Low Construal Representativeness | 15
High Construal Expertise | High Construal Availability | No Representativeness | 14
High Construal Expertise | Low Construal Availability | High Construal Representativeness | 16
High Construal Expertise | Low Construal Availability | Low Construal Representativeness | 15
High Construal Expertise | Low Construal Availability | No Representativeness | 15
High Construal Expertise | No Availability | High Construal Representativeness | 15
High Construal Expertise | No Availability | Low Construal Representativeness | 15
High Construal Expertise | No Availability | No Representativeness | 17
Low Construal Expertise | High Construal Availability | High Construal Representativeness | 16
Low Construal Expertise | High Construal Availability | Low Construal Representativeness | 14
Low Construal Expertise | High Construal Availability | No Representativeness | 15
Low Construal Expertise | Low Construal Availability | High Construal Representativeness | 15
Low Construal Expertise | Low Construal Availability | Low Construal Representativeness | 15
Low Construal Expertise | Low Construal Availability | No Representativeness | 15
Low Construal Expertise | No Availability | High Construal Representativeness | 14
Low Construal Expertise | No Availability | Low Construal Representativeness | 14
Low Construal Expertise | No Availability | No Representativeness | 24
No Expertise | High Construal Availability | High Construal Representativeness | 15
No Expertise | High Construal Availability | Low Construal Representativeness | 14
No Expertise | High Construal Availability | No Representativeness | 17
No Expertise | Low Construal Availability | High Construal Representativeness | 15
No Expertise | Low Construal Availability | Low Construal Representativeness | 15
No Expertise | Low Construal Availability | No Representativeness | 14
No Expertise | No Availability | High Construal Representativeness | 18
No Expertise | No Availability | Low Construal Representativeness | 21
No Expertise | No Availability | No Representativeness | 16
Table 4.3 Treatment Random Assignment Breakdown

4.4.2 Study Design

For this study, I designed a health-focused news aggregation website, which was developed in late 2020. The website included pages dedicated to healthcare, diet, and workout news, and it was fully functional at the time of the study. Two measures were used as proxies for security decisions: password entropy and settings security level. Furthermore, treatments were operationalized as in-page messages. The operationalizations of the dependent and independent variables are discussed in the following pages.

Figure 4.1 Study Website

4.4.3 Measurements

Password entropy: As a proxy for password strength, I used the measure of zero-order entropy. Entropy is a measurement of how random (i.e., unpredictable) the password is; the higher the entropy, the harder the password is to crack, as its randomness increases. Formulated as log2(R^L), with R representing the pool of unique characters and L representing the number of characters in the password, this measure allows for assessment of the strength of the password based on its length and complexity. For example, if the selected password is "test," its entropy will be log2(26^4) = 18.8, where 4 is the number of characters in the password and 26 is the pool of unique characters (i.e., the count of lower-case letters).
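The zero-order entropy computation described above can be sketched in a few lines. This is a minimal illustration, not the study's actual implementation; the character-class pool sizes are the conventional ones (26 lower-case letters, 26 upper-case letters, 10 digits), and using printable ASCII punctuation as the special-character pool is an assumption:

```python
import math
import string

# Character classes and their conventional pool sizes. Treating
# string.punctuation (32 characters) as the special-character pool
# is an illustrative assumption.
POOLS = [
    (set(string.ascii_lowercase), 26),
    (set(string.ascii_uppercase), 26),
    (set(string.digits), 10),
    (set(string.punctuation), 32),
]

def zero_order_entropy(password: str) -> float:
    """Zero-order entropy log2(R^L) = L * log2(R), where R is the combined
    size of the character-class pools present and L is the password length."""
    chars = set(password)
    pool = sum(size for members, size in POOLS if chars & members)
    if pool == 0:
        return 0.0
    return len(password) * math.log2(pool)

# "test" uses only lower-case letters: 4 * log2(26) is about 18.8 bits,
# matching the worked example in the text.
print(round(zero_order_entropy("test"), 1))
```

Note that, as the text cautions, this measure rewards length and character-class variety only; "Password1!" would score highly despite being an easily guessed dictionary-based password.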
While the measure has limitations, such as not taking into account the use of dictionary words, patterns, and repeated characters, to this day "an increase in entropy is seen as directly proportional to password strength" (Brecht, 2021). Furthermore, zero-order entropy allows for objectively quantifying relative differences in password strength among conditions, which is the main objective of the study (Egelman et al., 2013; Forget, Chiasson, Van Oorschot, & Biddle, 2008; Vance et al., 2013). For example, Egelman et al. (2013) used zero-order entropy to assess whether the design of password meters influences password strength. In their justification of zero-order entropy as a proxy for password strength, the authors state, "Zero-order entropy was entirely appropriate for quantifying relative differences between conditions."

Settings security level: As the second measure for security decisions, I used the overall settings security level, similar to the first study. Accordingly, I designed eight dichotomous options (i.e., on vs. off) on the site. For each option, a high-security state was defined; for example, for "login alert from a new device," this was the "on" state. If users opted to turn this option on, they received a score of 1; otherwise, they received a score of 0. This was repeated for all options. Accordingly, users received a score between 0 and 8 on the settings security level based on their choices; the higher the score, the higher the security level of their settings. Similar to password entropy, this measurement allowed me to assess the relative security level of choices between conditions.
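The settings score can be computed mechanically: each option contributes one point when left in its high-security state. A minimal sketch follows; the option keys are shorthand labels I invented for illustration (the full option wordings appear in Table 4.4):

```python
# High-security state for each dichotomous option. The keys are
# hypothetical shorthand labels; the on/off states mirror those
# defined for the study's eight settings.
HIGH_SECURITY_STATE = {
    "username_visible": "off",
    "third_party_apps": "off",
    "save_cookies": "off",
    "redirect_links": "off",
    "two_factor_auth": "on",
    "show_ads": "off",
    "save_login_info": "off",
    "new_device_alert": "on",
}

def settings_security_level(choices: dict) -> int:
    """Score 0-8: one point per option set to its high-security state."""
    return sum(
        1 for option, secure_state in HIGH_SECURITY_STATE.items()
        if choices.get(option) == secure_state
    )

# The site defaults flip every option to its insecure state (score 0).
defaults = {k: ("off" if v == "on" else "on") for k, v in HIGH_SECURITY_STATE.items()}

# A user who only enables two-factor auth and new-device alerts scores 2 of 8.
secure_user = dict(defaults, two_factor_auth="on", new_device_alert="on")
print(settings_security_level(secure_user))
```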
Description | Default Status | High-Security Status | Raw Score
Allow my username to be visible to others | On | Off | 1
Allow third-party apps to connect to your account | On | Off | 1
Allow app to save cookies from your search history | On | Off | 1
Allow redirect links from website to other source | On | Off | 1
Require two-factor authentication every time logging in from a new device | Off | On | 1
Allow ads and promotional campaigns to be shown in the site | On | Off | 1
Save login information from this device | On | Off | 1
Alert when logging in from a new device | Off | On | 1
Total Settings Security Level | | | 8
Table 4.4 Settings Security Level

4.4.4 Treatments

Based on the theory development, I designed the treatment messages. After the initial design, to ensure the convergent and divergent validity of the messages (e.g., making the distinction between lower and higher construal clear while conveying the meaning of each), I asked three independent experts who were not attached to the study to rate the messages based on the definitions of the heuristics and their respective construal levels. After three rounds of pilot testing, the judges were in overall agreement on the validity of the treatments. Accordingly, the treatments were finalized. Since the study included two dependent measures, two sets of treatment messages were developed, each targeted to a specific task (i.e., password creation or settings selection).

Password treatments (factors: expertise, availability, representativeness):

High Construal: Official information security online guidelines recommend users think about why strong passwords increase accounts' online security. By creating a strong password, you can increase your online security, prevent brute-force attacks, prevent educated guesses, and protect against other password-hacking methods.
Low Construal: Specific guidelines by the International Organization for Standardization (ISO) and the National Institute of Standards and Technology (NIST) recommend how users can create a password with adequate length and complexity. You can create a password by using more characters, lower-case letters, capital letters, numbers, and special characters, and mixing all of the above.

No Message: -
Table 4.5 Password Treatments

Settings treatments (factors: expertise, availability, representativeness):

High Construal: Official information security online guidelines recommend users think about why higher settings' security levels increase accounts' online security. By increasing the security level of your settings, you increase your online security, prevent intrusive account access, prevent unnecessary data collection, and protect against other hacking methods.

Low Construal: Specific guidelines by the International Organization for Standardization (ISO) and the National Institute of Standards and Technology (NIST) recommend how users can set up their security settings by assessing each individual option based on the context of the decision. You can select your security settings by turning on automatic updates, two-factor authentication, and login alerts from a new device, and by turning off username visibility, cookies, third-party access, ads, pop-ups, and saved logins.

No Message: -
Table 4.6 Settings Treatments

Figure 4.2 Example of Treatments on Website

4.4.5 Study Procedure

Participants were recruited on the premise of assessing a new health website: their task was to register on the website and evaluate this newly designed website regarding its design and functionality. After giving their consent, participants began by filling out a short questionnaire regarding their demographics. At the end of this questionnaire, a link to the website was provided.
After entering the website, participants were randomly placed in one of the twenty-seven treatment groups. The dependent measures were captured at this stage: to register, participants first needed to create an account by selecting a username and password. Here, the treatment messages with respect to the password were shown (as demonstrated in Figure 4.2). Next, they needed to save the account settings; similar to the passwords, treatment messages with respect to settings were displayed. After registration, participants needed to spend at least three minutes browsing the website and its functionalities before moving forward. After this period, a feedback form was activated on the site, which took them to the second questionnaire, where I asked additional questions, conducted manipulation checks, and debriefed them on the actual purpose of the study.

4.4.6 Data Analysis

Password Entropy Main Effects: A three-way between-subjects ANOVA for 424 subjects yielded a marginal main effect for the messages' expertise level, F(2, 397) = 2.78, p < .1, ηp² = .01, indicating a marginally significant difference between high-construal expertise (M = 38.0, SD = 10.7), low-construal expertise (M = 37.6, SD = 10.2), and no expertise (M = 36.0, SD = 9.65), with a small effect size. The main effect for availability yielded F(2, 397) = 53.87, p < .01, ηp² = .21, indicating a significant difference between high-construal availability (M = 34.4, SD = 7.75), low-construal availability (M = 43.2, SD = 11.9), and no availability (M = 34.4, SD = 8.09), with a large effect size. The main effect for representativeness yielded F(2, 397) = 51.91, p < .01, ηp² = .21, indicating a significant difference between high-construal representativeness (M = 36.3, SD = 8.44), low-construal representativeness (M = 42.5, SD = 11.2), and no representativeness (M = 33.0, SD = 8.47), with a large effect size.
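Password entropy, the first dependent measure, is commonly operationalized as E = L · log2(N), where L is the password length and N is the size of the character pool the password draws on. The thesis does not spell out its exact scoring rule, so the sketch below is illustrative only, under that common operationalization.

```python
# Illustrative sketch only: E = L * log2(N), a common password-entropy
# operationalization. The thesis's exact scoring rule is not stated here.
import math
import string

def password_entropy(pw: str) -> float:
    pool = 0
    if any(c.islower() for c in pw):
        pool += 26                       # lowercase letters
    if any(c.isupper() for c in pw):
        pool += 26                       # uppercase letters
    if any(c.isdigit() for c in pw):
        pool += 10                       # digits
    if any(c in string.punctuation for c in pw):
        pool += len(string.punctuation)  # 32 ASCII symbols
    return len(pw) * math.log2(pool) if pool else 0.0
```

Under this rule, an 8-character all-lowercase password scores 8 · log2(26) ≈ 37.6 bits, which is in the same range as the cell means reported below, while mixing character classes raises the pool size N and hence the score.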
Password Entropy Interaction Effects: The interaction effect between availability and representativeness was significant, F(4, 397) = 17.65, p < .01, ηp² = .15, indicating a significant mean difference between groups, with the combination of low-construal availability and low-construal representativeness having the highest average (M = 54.5, SD = 7.58), with a large effect size. The other two-way interactions and the three-way interaction did not yield significant results (Table 4.8 shows the full results, and Figure 4.3 shows the significant interaction between availability and representativeness).

Table 4.7 Password Entropy Descriptive Statistics

Expertise | Availability | Representativeness | Mean | SD
High Construal Expertise | High Construal Availability | High Construal Representativeness | 33.90 | 10.5
High Construal Expertise | High Construal Availability | Low Construal Representativeness | 35.78 | 8.12
High Construal Expertise | High Construal Availability | No Representativeness | 34.74 | 8.11
High Construal Expertise | Low Construal Availability | High Construal Representativeness | 41.09 | 11.1
High Construal Expertise | Low Construal Availability | Low Construal Representativeness | 55.60 | 6.23
High Construal Expertise | Low Construal Availability | No Representativeness | 35.08 | 9.27
High Construal Expertise | No Availability | High Construal Representativeness | 34.58 | 7.96
High Construal Expertise | No Availability | Low Construal Representativeness | 37.75 | 7.76
High Construal Expertise | No Availability | No Representativeness | 33.58 | 8.05
Low Construal Expertise | High Construal Availability | High Construal Representativeness | 34.10 | 6.54
Low Construal Expertise | High Construal Availability | Low Construal Representativeness | 34.77 | 8.68
Low Construal Expertise | High Construal Availability | No Representativeness | 34.94 | 6.09
Low Construal Expertise | Low Construal Availability | High Construal Representativeness | 40.04 | 8.57
Low Construal Expertise | Low Construal Availability | Low Construal Representativeness | 55.63 | 7.60
Low Construal Expertise | Low Construal Availability | No Representativeness | 35.07 | 9.27
Low Construal Expertise | No Availability | High Construal Representativeness | 34.86 | 7.56
Low Construal Expertise | No Availability | Low Construal Representativeness | 39.92 | 7.98
Low Construal Expertise | No Availability | No Representativeness | 32.51 | 8.67
No Expertise | High Construal Availability | High Construal Representativeness | 33.58 | 9.32
No Expertise | High Construal Availability | Low Construal Representativeness | 35.29 | 6.06
No Expertise | High Construal Availability | No Representativeness | 33.13 | 7.08
No Expertise | Low Construal Availability | High Construal Representativeness | 38.65 | 5.78
No Expertise | Low Construal Availability | Low Construal Representativeness | 52.37 | 8.74
No Expertise | Low Construal Availability | No Representativeness | 34.43 | 10.2
No Expertise | No Availability | High Construal Representativeness | 35.58 | 5.06
No Expertise | No Availability | Low Construal Representativeness | 36.88 | 6.02
No Expertise | No Availability | No Representativeness | 24.64 | 5.18

Source | Type III Sum of Squares | df | Mean Square | F | Sig. | Partial Eta Squared
Corrected Model | 18992.27a | 26 | 730.47 | 11.55 | .00 | .43
Intercept | 582720.95 | 1 | 582720.95 | 9216.44 | .00 | .96
Expertise | 351.63 | 2 | 175.81 | 2.78 | .06 | .01
Availability | 6812.44 | 2 | 3406.22 | 53.87 | .00 | .21
Representativeness | 6563.95 | 2 | 3281.97 | 51.91 | .00 | .21
Expertise * Availability | 107.26 | 4 | 26.82 | .42 | .79 | .00
Expertise * Representativeness | 157.75 | 4 | 39.44 | .62 | .65 | .01
Availability * Representativeness | 4462.55 | 4 | 1115.64 | 17.65 | .00 | .15
Expertise * Availability * Representativeness | 421.11 | 8 | 52.64 | .83 | .57 | .02
Error | 25100.83 | 397 | 63.23 | | |
Total | 630386.09 | 424 | | | |
Corrected Total | 44093.10 | 423 | | | |
a. R Squared = .43 (Adjusted R Squared = .39)

Table 4.8 Password Entropy ANOVA Results

Figure 4.3 Significant Two-Way Interaction (Password Entropy)

Settings Security Level Main Effects: A three-way between-subjects ANOVA for 424 subjects yielded a main effect for the messages' expertise level, F(2, 397) = 43.92, p < .01, ηp² = .18, indicating a significant difference between high-construal expertise (M = 5.45, SD = 1.38), low-construal expertise (M = 4.54, SD = 1.45), and no expertise (M = 4.08, SD = 1.60), with a large effect size. The main effect for availability yielded F(2, 397) = 51.32, p < .01, ηp² = .21, indicating a significant difference between high-construal availability (M = 4.41, SD = 1.38), low-construal availability (M = 5.55, SD = 1.54), and no availability (M = 4.14, SD = 1.47), with a large effect size.
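The partial eta squared column of Table 4.8 follows directly from the type III sums of squares as ηp² = SS_effect / (SS_effect + SS_error). A short check (helper name is mine; the numeric values come from Table 4.8):

```python
# Partial eta squared: eta_p^2 = SS_effect / (SS_effect + SS_error).
def partial_eta_squared(ss_effect: float, ss_error: float) -> float:
    return ss_effect / (ss_effect + ss_error)

# Reproducing two entries of Table 4.8 (password entropy; SS_error = 25100.83):
availability_eta = partial_eta_squared(6812.44, 25100.83)  # rounds to .21
expertise_eta = partial_eta_squared(351.63, 25100.83)      # rounds to .01
```

The same arithmetic reproduces the effect sizes reported for the settings security level ANOVA in Table 4.10.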
The main effect for representativeness yielded F(2, 397) = 26.99, p < .01, ηp² = .12, indicating a significant difference between high-construal representativeness (M = 4.47, SD = 1.29), low-construal representativeness (M = 5.28, SD = 1.74), and no representativeness (M = 4.30, SD = 1.53), with a medium effect size.

Settings Security Level Interaction Effects: The interaction effect between availability and representativeness was significant, F(4, 397) = 9.21, p < .01, ηp² = .09, indicating a significant mean difference between groups, with the combination of low-construal availability and low-construal representativeness having the highest average (M = 6.80, SD = 1.08), with a medium effect size. The interaction effect between availability and expertise was significant, F(4, 397) = 2.98, p < .05, ηp² = .03, with the combination of low-construal availability and high-construal expertise having the highest average (M = 6.00, SD = 1.45), with a small effect size. The interaction effect between representativeness and expertise was significant, F(4, 397) = 2.45, p < .05, ηp² = .02, with the combination of low-construal representativeness and high-construal expertise having the highest average (M = 6.27, SD = 1.27), with a small effect size. Finally, the three-way interaction effect between expertise, availability, and representativeness was significant, F(8, 397) = 2.79, p < .01, ηp² = .05, with the combination of high-construal expertise, low-construal representativeness, and low-construal availability having the highest average (M = 7.07, SD = 1.16), with a small effect size.

Table 4.9 Settings Security Level Descriptive Statistics

Expertise | Availability | Representativeness | Mean | SD
High Construal Expertise | High Construal Availability | High Construal Representativeness | 4.93 | 0.88
High Construal Expertise | High Construal Availability | Low Construal Representativeness | 5.93 | 1.22
High Construal Expertise | High Construal Availability | No Representativeness | 4.79 | 1.31
High Construal Expertise | Low Construal Availability | High Construal Representativeness | 5.69 | 1.14
High Construal Expertise | Low Construal Availability | Low Construal Representativeness | 7.07 | 1.16
High Construal Expertise | Low Construal Availability | No Representativeness | 5.27 | 1.44
High Construal Expertise | No Availability | High Construal Representativeness | 4.87 | 1.55
High Construal Expertise | No Availability | Low Construal Representativeness | 5.80 | 1.08
High Construal Expertise | No Availability | No Representativeness | 4.76 | 1.09
Low Construal Expertise | High Construal Availability | High Construal Representativeness | 4.13 | 1.41
Low Construal Expertise | High Construal Availability | Low Construal Representativeness | 4.21 | 1.37
Low Construal Expertise | High Construal Availability | No Representativeness | 4.07 | 1.33
Low Construal Expertise | Low Construal Availability | High Construal Representativeness | 4.40 | 1.18
Low Construal Expertise | Low Construal Availability | Low Construal Representativeness | 6.73 | 1.03
Low Construal Expertise | Low Construal Availability | No Representativeness | 5.07 | 1.22
Low Construal Expertise | No Availability | High Construal Representativeness | 4.21 | 0.98
Low Construal Expertise | No Availability | Low Construal Representativeness | 3.71 | 1.14
Low Construal Expertise | No Availability | No Representativeness | 4.33 | 1.27
No Expertise | High Construal Availability | High Construal Representativeness | 4.00 | 0.76
No Expertise | High Construal Availability | Low Construal Representativeness | 4.14 | 1.41
No Expertise | High Construal Availability | No Representativeness | 3.65 | 1.37
No Expertise | Low Construal Availability | High Construal Representativeness | 4.13 | 1.55
No Expertise | Low Construal Availability | Low Construal Representativeness | 6.60 | 1.06
No Expertise | Low Construal Availability | No Representativeness | 4.93 | 1.07
No Expertise | No Availability | High Construal Representativeness | 3.94 | 1.06
No Expertise | No Availability | Low Construal Representativeness | 3.67 | 1.06
No Expertise | No Availability | No Representativeness | 2.06 | 1.18

Source | Type III Sum of Squares | df | Mean Square | F | Sig. | Partial Eta Squared
Corrected Model | 480.90a | 26 | 18.50 | 12.61 | .00 | .45
Intercept | 9256.03 | 1 | 9256.03 | 6311.85 | .00 | .94
Expertise | 128.81 | 2 | 64.41 | 43.92 | .00 | .18
Availability | 150.53 | 2 | 75.27 | 51.32 | .00 | .21
Representativeness | 79.16 | 2 | 39.58 | 26.99 | .00 | .12
Expertise * Availability | 17.48 | 4 | 4.37 | 2.98 | .02 | .03
Expertise * Representativeness | 14.40 | 4 | 3.60 | 2.45 | .045 | .02
Availability * Representativeness | 54.04 | 4 | 13.51 | 9.21 | .00 | .09
Expertise * Availability * Representativeness | 32.72 | 8 | 4.09 | 2.79 | .005 | .05
Error | 582.18 | 397 | 1.47 | | |
Total | 10328.00 | 424 | | | |
Corrected Total | 1063.09 | 423 | | | |
a. R Squared = .45 (Adjusted R Squared = .42)

Table 4.10 Settings Security Level ANOVA Results

Figure 4.4 Significant Two-Way Interactions (Settings Security Level)

Figure 4.5 Significant Three-Way Interaction (Settings Security Level)

4.5 Manipulation Checks

Two different approaches to manipulation checks were used. First, I used an open-ended question to ask users for their thoughts about the two security decisions (Labroo & Patrick, 2009; White et al., 2011). Following prior literature, participants primed to the low (high) construal group were expected to formulate more concrete (abstract) thoughts in their answers. Since three factors were manipulated in their construal level, the open-ended question asked participants to give their thoughts about password creation/settings selection and about information security experts, available security information, and representative security icons. Based on the definition of construal level, I expected the responses at each construal level to include the information in Table 4.11. To assess this, I used word clouds to summarize the responses visually. While the approach is limited, in that it only counts the frequency of words used in responses and discards sentence/contextual information, it provides a glance at the overall results. Figure 4.6 shows the results. For availability and representativeness, the manipulation check results were as hoped: in high availability/representativeness, more adjectives and goals (e.g., secure, informing) and fewer concrete nouns were used, while in low availability/representativeness, more nouns and specific details (e.g., number, letter, character) were used. However, for expertise, the word clouds from the two groups look similar.
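The word-cloud summaries in Figure 4.6 reduce to simple word-frequency tallies over the open-ended responses. A minimal sketch (the tokenizer and stop-word list below are mine, for illustration; the study's actual word-cloud tooling is not specified here):

```python
# Sketch: word-frequency counts of the kind visualized as word clouds.
# Tokenizer and stop-word list are illustrative assumptions, not the study's.
import re
from collections import Counter

STOP_WORDS = {"the", "a", "an", "and", "to", "of", "i", "my", "it", "is"}

def word_frequencies(responses: list[str]) -> Counter:
    counts = Counter()
    for text in responses:
        for token in re.findall(r"[a-z']+", text.lower()):
            if token not in STOP_WORDS:
                counts[token] += 1
    return counts
```

Under the coding in Table 4.11, low-construal responses would surface concrete nouns (number, letter, character) among the most frequent tokens, while high-construal responses would surface adjectives and goal words (secure, informing).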
In particular, the low-construal expertise responses do not include low-construal characteristics (e.g., a specific source).

Information expected in High Construal responses: abstract information (e.g., no specific mention of the expertise source); the reasons/goals behind using those factors in password creation/settings selection; adjectives.
Information expected in Low Construal responses: detailed information (e.g., specific mention of the expertise source); the process/how to use those factors in password creation/settings selection; nouns.

Table 4.11 Word Cloud Manipulation Check Criteria

While this approach assessed the manipulation qualitatively, a second method was used to assess the manipulation checks quantitatively.

Figure 4.6 Word Cloud Manipulation Check Results (panels: High/Low Availability, High/Low Expertise, High/Low Representativeness)

Table 4.12 Likert Scale Manipulation Checks Descriptive Statistics

As a second approach, six questions were asked to examine the impact of the messages on participants' thought processes. Assessed on a Likert scale, each participant answered two questions with respect to each factor: one for high-level construal and the other for low-level construal. For instance, with respect to expertise, one question asked whether the message led users to focus on specific expert guides on how to create passwords/select settings, and another asked whether the message led users to focus on general expertise and why they should create passwords/select security settings. The expectation was that the average rating for the low (high) construal question would be highest among the users who received the low (high) construal treatment, and likewise for the other treatments. Table 4.12 shows this to be the case, albeit, again, while the average response for the low-expertise question is highest among the low-expertise treatment participants, it is not very high, at only about 3.83 out of 7.
As the next part of this assessment, an ANOVA was run to investigate the impact of the main effects on the mean difference between the high- and low-construal questions. Specifically, each participant answered a high-construal and a low-construal question for each factor; the delta between these two ratings was calculated and used in an ANOVA to assess the impact of the treatment. The results showed a significant difference between the ratings given by users in different construal groups for each of the fixed factors.

The manipulation check questions were as follows:

3. [Low Expertise] To what extent did the messages on the account creation pages make you focus on specific security experts' guides on how to create a password and select security settings? (From 1: Not at all to 7: Completely)
4. [High Expertise] To what extent did the messages on the account creation pages make you focus on general security experts' guides on why you should create a strong password and increase your account security by selecting secure settings? (From 1: Not at all to 7: Completely)
5. [Low Availability] To what extent did the messages on the account creation pages make you focus on specific password and security settings information on how to create a password and how to select those settings? (From 1: Not at all to 7: Completely)
6. [High Availability] To what extent did the messages on the password creation page make you focus on general password creation information on why you should create a strong password and why you should increase the security level of your account? (From 1: Not at all to 7: Completely)
7. [Low Representativeness] To what extent did the icons make you focus on how you can create a password and how you can select your settings? (1: Not at all, 7: A lot) [If there is no icon in the messages shown to you, choose 1]
8. [High Representativeness] To what extent did the icons make you focus on why you should create a strong password and why you should increase the security level of your account with your settings selection? (1: Not at all, 7: A lot) [If there is no icon in the messages shown to you, choose 1]

Treatment Group | Low Expertise (Q3) | High Expertise (Q4) | Low Availability (Q5) | High Availability (Q6) | Low Representativeness (Q7) | High Representativeness (Q8)
Low Expertise | 3.83 | 2.51 | 2.65 | 2.94 | 2.75 | 2.62
High Expertise | 2.20 | 4.89 | 2.83 | 2.93 | 2.85 | 2.70
Low Availability | 2.74 | 3.03 | 5.07 | 2.04 | 2.89 | 2.67
High Availability | 2.76 | 2.91 | 1.69 | 5.50 | 2.89 | 2.64
Low Representativeness | 2.88 | 2.91 | 2.85 | 3.01 | 5.62 | 1.78
High Representativeness | 2.80 | 2.83 | 2.71 | 2.93 | 1.94 | 4.96

Dependent Variable | Fixed Factor | Results
High Availability - Low Availability | Availability | F(2, 421) = 843, p < .05
High Representativeness - Low Representativeness | Representativeness | F(2, 421) = 909, p < .05
High Expertise - Low Expertise | Expertise | F(2, 421) = 280, p < .05

Table 4.13 Likert Scale Manipulation Checks ANOVA

4.6 Discussion of Results

The results showed that regardless of the construal level, the presence of these heuristic cues nudges users to make more secure decisions. The baseline group (no expertise, no availability, no representativeness) had the worst performance on both dependent measures: (M = 2.06, SD = 1.18) for settings security level and (M = 26.64, SD = 5.18) for password entropy. As Figures 4.7 and 4.8 show, the other twenty-six groups, which received treatments, had higher average settings security levels and password entropies. The black bars show the average per treatment group on each measure, and the dotted bars show the net treatment effect (average treatment-group measure minus the baseline-group [no expertise × no availability × no representativeness] measure).

Figure 4.7 Average Entropy per Group (Sorted)

With respect to the second research question, the availability and representativeness treatments were more effective at lower construal, per my prediction.
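The net treatment effect behind the dotted bars in Figures 4.7 and 4.8 is just each treatment cell's mean minus the baseline cell's mean. A small sketch (the helper name is mine; the baseline value is taken from the no-expertise × no-availability × no-representativeness cell of Table 4.7):

```python
# Sketch: net treatment effect = treatment cell mean - baseline cell mean.
# Baseline from Table 4.7 (no expertise x no availability x no representativeness).
BASELINE_ENTROPY = 24.64

def net_treatment_effect(cell_mean: float,
                         baseline: float = BASELINE_ENTROPY) -> float:
    return cell_mean - baseline

# Strongest password-entropy cell per Table 4.7 (low-construal expertise x
# low-construal availability x low-construal representativeness, M = 55.63):
best_cell_effect = net_treatment_effect(55.63)
```

The same subtraction against the settings baseline cell of Table 4.9 (M = 2.06) yields the dotted bars of Figure 4.8.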
However, the same was not seen for expertise.

Availability: Low-construal availability led users to choose more secure options, with a large effect size (Cohen, 1988). This was also the case for password entropy, where low construal had the highest average among the availability levels and led to an increase in password entropy compared to the baseline group. The pattern continues in the two-way and three-way interactions, where groups including low-construal availability consistently ranked higher on both measures than the other two availability levels. As discussed in the theoretical development, low-construal detail triggers one's practical mindset, leads the person to focus on the now, and causes the person to be more flexible in receiving feedback from external sources (Ledgerwood et al., 2010; Trope & Liberman, 2010). Accordingly, the data were in line with the theoretical propositions.

Figure 4.8 Average Settings Security Level per Group (Sorted)

Representativeness: While both low- and high-construal representativeness were significantly higher than no representativeness, low-construal representativeness led to more secure decisions. However, the most influential treatment seems to stem from the interaction between availability and representativeness; specifically, the combination of low-construal availability and low-construal representativeness, which was among the top three average treatments on both measures (as seen in Figures 4.7 and 4.8). The question arises: why did the interaction between the two low-construal levels of availability and representativeness lead to the best performance?

A possible explanation can be drawn from findings in educational research. The impact of visual and verbal/textual explanations in teaching has been studied extensively.
This is especially the case for topics that are complex (e.g., STEM) and/or unknown to learners (Angeli & Valanides, 2004; Hegarty, Carpenter, & Just, 1991). It has been shown that, when used together, they improve learning outcomes the most. While a verbal explanation includes all the features of a topic, it may be hard to understand on a first reading. Conversely, while visual explanations are easier to understand, their key learning features may not be clear: users may not grasp the takeaway from a visual explanation as clearly as from a verbal one. Accordingly, when combined, the two complement each other's weaknesses and improve learning compared to when either is used alone (Bobek & Tversky, 2016). In this study, the purpose of the availability treatment was to provide users with salient information about best practices, and the purpose of the representativeness treatment was to provide users with visual cues about best practices. Accordingly, while I use these treatments as nudges for an immediate decision made in a matter of a minute, rather than as instruction delivered over a long period, I argue they are analogous to the verbal and visual explanations in learning contexts. Specifically, with respect to the question raised above, the combination of low-construal availability and representativeness presents users with a complete understanding of the best course of action with respect to decision security. For example, in the case of password creation, the availability treatment tells users the criteria for creating a secure password, and the representativeness treatment depicts this process using visual cues.

Expertise: With respect to expertise, while both low- and high-construal expertise yielded more secure decisions compared to no expertise, the results with respect to construal level were not as expected.
For both settings security level and password entropy, contrary to expectation, high-construal expertise scored higher. Additionally, while the slightly higher effect of high-construal expertise is present in the interactions for settings security level, expertise did not yield significant results for password entropy, especially in the interactions. Two questions arise: why, contrary to my prediction, did high-construal expertise lead to slightly more secure decisions on the settings security level measure? And why was the same pattern not seen for password entropy? There are several possible explanations. First, unlike availability and representativeness, expertise is not task-specific; it is essentially a cue outside the task. As a result, a low-construal expertise treatment does not reveal much about "how to make a decision securely" the way the other two factors do; rather, it specifies the details of the expertise source. Additionally, it is possible that the expertise sources used in the study (i.e., NIST and ISO) were not known to participants. In the second study, the majority of people who used the expertise heuristic cited family members as their source of expertise in decision making, so NIST or ISO may not be meaningful to users or carry any personal relevance. As prior studies show, people weigh the advice of experts more heavily and are more convinced by their arguments (DeBono & Harnish, 1988; Pallak et al., 1983; Petty et al., 1981). On this basis, high construal may have a slightly higher impact because it explains why users should make a secure decision. Furthermore, the influence of expertise on password entropy was only marginally significant and was absent in the interactions. Compared to settings selection, the password is among the most common security decisions people make on a daily basis.
As I discussed in the second study, people often already have a mental process for how they wish their password to be (i.e., a fixed term + a variable portion). Based on this and the frequency of password usage in online life, it is possible that (a) people are more resistant to accepting nudges when it comes to password creation, and (b) they do not need source expertise in password creation, as they are already knowledgeable about the criteria of a strong password.

4.7 Theoretical Implications

In this study, I examined the influence of heuristic-based nudging on personal security decision making. Using a health website, I assessed how the inclusion of expertise, availability, and representativeness cues in signup messages during website registration influenced users' settings security level and password entropy in a 3 × 3 × 3 between-subjects design. The results showed that low-construal availability and low-construal representativeness had the strongest effects on the study measures, both as main effects and in interactions. However, contrary to expectations, high-construal expertise had a slightly higher impact than low-construal expertise. This study makes several theoretical contributions.

First, this is the first study that attempts to use heuristic-based nudges in the context of personal security decisions. Informed by the second study's results, I attempted to nudge users toward more secure decisions using their own thinking toolbox. The second study revealed that availability, expertise, and representativeness were three heuristics commonly used by users. Accordingly, using these heuristic cues in messages, I assessed actual security decisions made by users. The findings contribute to the ever-growing literature on behavioral information security: as we attempt to further understand human behavior in security decision making, this study presents a novel nudging approach that can be tested further in the future.
Additionally, informed by one of my prior studies, I integrated construal level theory into this nudging approach, which has recently garnered attention in the security literature. Table 4.14 depicts how this study contributes to recent findings.

Study | Context | Manipulation Tool | Findings
(Fard Bahreini et al., 2020) | Personal | In-app message | Low-construal decision feedback increases decision security (operationalized with settings security level)
(Schuetz et al., 2020) | Personal | Video | Low-construal fear appeal increases perceived threat severity, perceived threat vulnerability, fear, and protection motivation
| Organizational | Video | Low-construal fear appeal increases threat severity and protection motivation; reduces response cost
| Mixed | Text | Low-construal fear appeal increases fear, fear compliance, protection motivation, and perceived threat vulnerability; reduces response cost
Current Study | Personal | In-site message | Both low- and high-construal availability increase decision security (operationalized with settings security level and password entropy); between the two levels, low construal has the higher impact. Both low- and high-construal representativeness increase decision security; low construal has the higher impact. Both low- and high-construal expertise increase decision security; high construal has the higher impact.

Table 4.14 Theoretical Contributions

4.8 Practical Implications

Password entropy and settings security level were used as two proxies for security decisions. They are meant to represent common security-related tasks that all users experience at some point.
Accordingly, the findings of the study can serve as grounds for novel security design and messaging mechanisms in a variety of decision contexts. For instance, password management continues to be a significant issue in cybersecurity. Despite being one of the weaker forms of authentication, passwords are universally used due to their low application cost (as opposed to other authentication methods such as biometrics). So, until passwords become a thing of the past, any solution that can push users to create more secure passwords is worth testing in practice. This study offers a simple, low-cost nudging technique that can be used in place of, or in addition to, existing password requirements. I believe this nudge's efficacy would be seen most fully if it were applied across all aspects of password management. Currently, password management has multiple facets, including creating the initial password, changing passwords after periods of time (or after a breach), best practices for storing them, and managing passwords across multiple accounts. In this study, I focused solely on initial password creation. Table 4.15 shows how the findings of the study can be utilized; the various available information and visual cues can be tested for their effectiveness. Additionally, the findings from Studies 2 and 3 showed that source expertise plays an important role. Between family members, whom participants are close to but who cannot be used in such nudging techniques, and standards agencies such as NIST, which can be used but with which users may not be familiar, there can be a middle ground: using the organization, or associated organizations, that the user is familiar with as the source of expertise. For example, suppose the user is choosing a password on the US immigration website; in that case, the source could be cited as the National Security Agency (NSA) or the Department of Homeland Security, which are recognizable to users.
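As one illustration of how these cues could be wired into a registration flow, a site might assemble a password nudge from a low-construal availability message, a low-construal representativeness icon, and a high-construal expertise source. Everything below is a hypothetical sketch of mine (the message texts, the icon name, and the function are not the study's stimulus material):

```python
# Hypothetical sketch: compose a password-creation nudge from the three cues.
# All strings and names here are illustrative, not the study's stimuli.
def compose_password_nudge(expertise_source: str) -> dict:
    return {
        # High-construal expertise: a recognizable source states "why".
        "expertise": f"{expertise_source} recommends a strong password "
                     "to keep your account secure.",
        # Low-construal availability: concrete "how" guidance.
        "availability": "Use 12+ characters and mix upper- and lower-case "
                        "letters, numbers, and symbols.",
        # Low-construal representativeness: a visual cue alongside the text.
        "icon": "padlock_checklist.svg",
    }

nudge = compose_password_nudge("The Department of Homeland Security")
```

The expertise source would be swapped per context, per the middle-ground suggestion above (an organization the user already recognizes).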
Password Management Task | Current Common Practice | What Can Be Added
Initial password creation | Password requirements | Low availability cue: a short message can discuss how a strong password can be selected, in place of or in addition to password requirements. Low representativeness cue: a visual can accompany and further support the verbal explanation. High expertise cue: a pertinent source of expertise can state why selecting a strong password is beneficial.
Password change over a fixed period / after a suspected access | Notification email stating facts | Low availability cue: a short message can discuss how the new password can be created. Low representativeness cue: a visual can accompany and further support the verbal explanation. High expertise cue: a source of expertise can state why changing the password is required.

Table 4.15 Practical Implications

Chapter 5: Conclusion

In this concluding section, I discuss several limitations pertinent to each study and how these limitations can potentially be addressed in the future.

5.1 Limitations

Study 1: First, in Chapter 2, the default security level was identified as an influential factor in users' security decision making. It is important to point out that while this finding generalizes to security decisions that come with default options (e.g., website settings selection, security software real-time protection options), it is not directly applicable to security decisions that lack default options. For instance, the default security level has no bearing on phishing email detection, where no default options are present.

The second limitation of this study is the generalizability of the results across various security decisions. The settings security level measure assumes that the options with higher-level security are most beneficial to users' needs. However, user needs may differ.
For example, when using a GPS-based service, users may turn on location to satisfy their navigation needs. In this case, turning on the location setting satisfies the user's need but can be considered an insecure setting, since the app becomes aware of the user's location. This brings up the concept of the security/performance trade-off, which has long been debated in the literature. While the notion of a security/performance trade-off can certainly apply to a variety of security decisions, it is not a ubiquitous phenomenon, and it does not apply to my studies. If the dependent measure had been software that needs to run in the background to achieve its objective (e.g., a health monitor, step tracker, or antivirus), this dilemma might appear: users may face a situation where turning on all the security options (including automatic updates) in their tracking app reduces the performance not only of the app but also of the phone, because the app is always running in the background due to its main objective (i.e., tracking steps). However, neither the app in Study 1 nor the website in Study 3 needs to sacrifice the user's security to perform and achieve its purpose. Accordingly, I argue that the measures used in Studies 1 and 3 are appropriate. However, in future studies where users' security acts as a high cost against users' performance needs, appropriate controls must be put in place.

Study 2: One of the key findings of Study 2 was the role of decision context in heuristic utilization. For instance, availability was most prevalent in password management tasks, while expertise was observed in account/device management. Thus, there is no dominant heuristic that users apply across tasks, as previously assumed (Tsohou et al., 2015); rather, heuristic prevalence appears to be task-dependent. In addition, a prior security breach experienced by the user was associated with less heuristic utilization.
Consequently, I hesitate to generalize the findings of the second study beyond the four decision types investigated (i.e., password management, account/device management, security software usage, and web browsing). While these four categories cover all the decisions made by users in the study, any decision outside these four groups should be examined within its own context regarding heuristic utilization.

Another limitation of the second study is the generalizability of the results to highly sensitive security decisions. On the basis of information security sensitivity, security decisions can range from non-sensitive decisions (where no personal information and/or device is disclosed/used) to sensitive decisions (where personal information and/or a device is disclosed/used). Security decisions involving one's personal health record or personally identifiable information are examples of highly sensitive security decisions. It is plausible to assume that in these contexts, users rely more heavily on deliberate thinking (rather than heuristics) because they perceive the decisions as more important to them. Accordingly, in addition to the decision type, the security sensitivity of the decision also matters. In this study, I investigated a variety of decisions that participants identified as important. However, there was no systematic investigation of the heuristics used in sensitive versus non-sensitive contexts. Future exploration of the existing data, or future data collection, can address this point more clearly. Until then, the findings should be interpreted with caution in the context of highly sensitive security decisions.

Study 3: In Study 3, I examined the influence of nudging messages on single security decisions. Accordingly, the findings of the study are generalizable to one-time decisions. However, I cannot comment on the effectiveness of nudges when they are used over a period of time or in repeated decisions.
This is a limitation of both this study and the existing nudging literature. Thus, future studies are required to investigate the effectiveness of nudges over time. It is possible that over time other factors, such as habituation to nudges, start to influence users' decisions. Investigating the potential influencers after the initial nudge and modifying the nudge based on these factors can extend the findings of this study.

Furthermore, the findings can be extended by examining additional constraints. For instance, in Study 3, as in Study 1, users had the opportunity to make the decisions in their own time, without any time limitations. However, what happens if users are faced with time constraints? It is possible that users rely even more heavily on heuristics, because, in addition to reducing mental effort, they also wish to reduce the time needed to complete the tasks. In doing so, they may begin to rely on time-saving heuristics (such as availability) rather than expertise, which requires inquiring of others. This is an interesting area for future investigation.

Finally, in all three studies, I attempted to collect data that is representative of the target study population. In doing so, a variety of factors were considered to gather a diverse sample: age, sex, demographics, IT education, IT work experience, information security education, information security work experience, employment status, prior exposure to security news, and prior security breach experience are among these factors. In each study, appropriate measures were taken to ensure the sample size was adequate. In Study 1, the descriptive analysis and measurement model support the adequacy of the sample size. In Study 2, the sample size was adequate according to data saturation, general guidelines, and precedent in the IS literature. Finally, in Study 3, the sample size was determined based on an a priori power analysis.
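The a priori power analysis mentioned above can be sketched with the standard normal-approximation sample size formula for comparing two groups. The effect size, alpha, and power values below are illustrative assumptions, not the parameters reported in the thesis.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """A priori sample size per group for a two-sided two-sample comparison,
    using the normal approximation: n = 2 * ((z_{1-a/2} + z_{1-b}) / d)^2."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for the two-sided test
    z_beta = z.inv_cdf(power)           # quantile achieving the desired power
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# Medium effect (Cohen's d = 0.5), alpha = .05, power = .80 (assumed values)
print(n_per_group(0.5))  # roughly 63 participants per group
```

Larger assumed effect sizes shrink the required sample, which is why the choice of effect size drives the whole calculation.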
Despite these efforts, the samples may still face representativeness limitations due to a lack of consideration of some factors, such as level of education. Thus, one should interpret the results with this limitation in mind.

5.2 Summary and Future Direction

In today's inter-connected world, users make many security-related decisions daily and are highly vulnerable to security risks. The combination of a lack of formal security training and susceptibility to inadvertent errors is a serious issue that must be investigated, with the hope that users will make fewer inadvertent errors in their information security decisions. After all, not only can better security decisions by these users enhance their own personal data security, but they can also increase the protection of the people and organizations connected to them. However, despite the prevalence of such errors and the pertinent potential losses (e.g., data, financial) for the victims, their friends, or their workplaces, there has been little research on the sources of, and solutions to, these inadvertent errors. To assess the possible causes of these errors, understand how users make decisions, and offer practical solutions that help users make more secure decisions, I conducted three interrelated studies in this thesis.

Study 1 was aimed at assessing the role of three potentially influential factors in the information security decision making of personal users, based on the theory of bounded rationality. The evidence showed a positive influence of subjective security knowledge and default option security level on the decision security level. Furthermore, it showed that subjective security knowledge not only mediates the impact of objective security knowledge but is itself associated with a lower tendency to blindly accept the default options. According to the theory of bounded rationality, human decision making is influenced by various judgmental heuristics.
From an effort-reduction perspective, this is because people wish to reduce their cognitive effort and make information processing easier. Accordingly, in Study 2, after interviewing 27 subjects, I discovered that heuristics play an important role in information security decisions. Expertise, availability, representativeness, brand, affect, and anchoring emerged as the dominant heuristics used by users. Additionally, I observed that the usage of heuristics can vary depending on the type of security decision.

In Study 3, I investigated how heuristic-based nudging can push users towards more secure decisions. Cognitive heuristics are mental shortcuts that help users make faster decisions (Newell & Simon, 1972; Tversky & Kahneman, 1974). Thus, they are not inherently problematic; they can lead to lower- or higher-quality decisions depending on how they are used. For example, in the case of the availability heuristic, if the information salient or recent to the user includes the best practices for a specific decision, then the decision could be positively influenced as the user relies on that information. Alternatively, if the information most available to the user does not include the best practices, decisions could be negatively impacted, especially if the available information contains poor guidance. Accordingly, in this study, I examined what happens if we nudge users with three heuristic cues at the time of decision making. Specifically, I looked at availability, representativeness, and expertise. Furthermore, informed by one of my prior studies, construal level was included in the study to assess the impact of the factors (i.e., heuristics) at different construal levels (i.e., low-construal, high-construal, no heuristic) using a message-based nudging approach, and I used password entropy and the security level of security settings as two proxies for security decisions.
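To make the password entropy proxy concrete, the sketch below uses a common character-pool approximation, where entropy is roughly the password length times log2 of the size of the character pool drawn from. The thesis does not report its exact entropy formula, so this function is an assumed, illustrative variant rather than the measure used in Study 3.

```python
import math
import string

def password_entropy_bits(password: str) -> float:
    """Approximate entropy as length * log2(pool), where pool is the combined
    size of the character classes present in the password (assumed formula)."""
    pool = 0
    if any(c in string.ascii_lowercase for c in password):
        pool += 26
    if any(c in string.ascii_uppercase for c in password):
        pool += 26
    if any(c in string.digits for c in password):
        pool += 10
    if any(c in string.punctuation for c in password):
        pool += len(string.punctuation)  # 32 printable ASCII symbols
    return len(password) * math.log2(pool) if pool else 0.0

# A mixed-class password scores markedly higher than a same-era
# lowercase-only one under this approximation.
print(round(password_entropy_bits("password"), 1))     # lowercase only
print(round(password_entropy_bits("Tr0ub4dor&3"), 1))  # four character classes
```

Under this approximation, adding character classes or length both raise the score, which is the behavior a nudge toward stronger passwords would aim to produce.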
The findings showed that low availability, low representativeness, and high expertise have the highest impact on users' security decision making.

The presented research can be extended along multiple avenues in the future:

First, the focus of the thesis was on security decisions. Accordingly, there is ample opportunity to apply and extend the findings concerning individuals' susceptibility to specific targeted security attacks. One area worth exploring is users' responses to social engineering. Understanding individuals' thinking processes can reveal how and when these attacks can be successful. These findings, in turn, can enable organizations to expand their current strategies for addressing security vulnerabilities.

Second, the findings can be extended by examining and testing more advanced security system designs (e.g., decision-making environment, feedback mechanism). I believe the nudging techniques developed and tested in these studies can be further refined and customized for use in different contexts and various security decisions. In particular, with the predicted dominant role of AI in information security in the next decade, user-adaptive nudging approaches, choice structures, and security system designs can be developed.

Third, as the next step in this research stream, the findings can ultimately be turned into policies and guidelines and be presented as a new aspect of information security education in workplaces and training programs. For many years, security experts have focused on improving the IT security infrastructure and the security knowledge of decision makers in order to enhance online safety. However, as the findings of my studies suggest, security knowledge is only one influencer in users' security decision making, and other factors, both psychological and environmental, while influential, have long been ignored. The question becomes: how can these factors be effectively incorporated into educational materials?
Developing this framework, in addition to designing novel nudging systems, can further empower decision makers and organizations in enhancing their online protection.

Bibliography

Acquisti, A. (2004). Privacy in electronic commerce and the economics of immediate gratification. Paper presented at the Proceedings of the 5th ACM Conference on Electronic Commerce. Acquisti, A. (2009). Nudging privacy: The behavioral economics of personal information. IEEE Security & Privacy, 7(6), 82-85.  Acquisti, A., Adjerid, I., Balebako, R., Brandimarte, L., Cranor, L. F., Komanduri, S., . . . Sleeper, M. (2017). Nudges for privacy and security: Understanding and assisting users' choices online. ACM Computing Surveys (CSUR), 50(3), 44.  Acquisti, A., Brandimarte, L., & Loewenstein, G. (2015). Privacy and human behavior in the age of information. Science, 347(6221), 509-514.  Acquisti, A., & Grossklags, J. (2005). Privacy and rationality in individual decision making. IEEE Security & Privacy, 3(1), 26-33.  Adjerid, I., Peer, E., & Acquisti, A. (2018). Beyond the privacy paradox: Objective versus relative risk in privacy decision making. MIS Quarterly, 42(2), 465-488. Adkins, D. C. (1960). Test construction: Development and interpretation of achievement tests. Charles E. Merrill Books. Aggarwal, R., Kryscynski, D., Midha, V., & Singh, H. (2015). Early to adopt and early to discontinue: The impact of self-perceived and actual IT knowledge on technology use behaviors of end users. Information Systems Research, 26(1), 127-144.  Ahmed, I. (2019). The 15 biggest data breaches of the last 15 years. Retrieved from https://www.socialmediatoday.com/news/the-15-biggest-data-breaches-of-the-last-15-years-infographic/560456/.  Alba, J. W., & Hutchinson, J. W. (2000). Knowledge calibration: What consumers know and what they think they know. Journal of Consumer Research, 27(2), 123-156.  Almuhimedi, H., Schaub, F., Sadeh, N., Adjerid, I., Acquisti, A., Gluck, J., .
. . Agarwal, Y. (2015). Your location has been shared 5,398 times! A field study on mobile app privacy nudging. Paper presented at the Proceedings of the 33rd annual ACM conference on human factors in computing systems. Ament, C. (2017). The Ubiquitous security expert: Overconfidence in information security. ICIS 2017 Proceedings.  Anderson, C. L., & Agarwal, R. (2010). Practicing safe computing: a multimedia empirical examination of home computer user security behavioral intentions. MIS Quarterly, 34(3), 613-643.  Angeli, C., & Valanides, N. (2004). Examining the effects of text-only and text-and-visual instructional materials on the achievement of field-dependent and field-independent learners during problem-solving with modeling software. Educational Technology Research and Development, 52(4), 23-36.  Arkes, H. R. (1991). Costs and benefits of judgment errors: Implications for debiasing. Psychological Bulletin, 110(3), 486.  Babiarz, P., & Robb, C. A. (2014). Financial literacy and emergency saving. Journal of Family and Economic Issues, 35(1), 40-50.  Bandura, A. (1997). Self-efficacy: The exercise of control. New York, NY, US: W H Freeman/Times Books/ Henry Holt & Co. Bartholomé, T., Stahl, E., Pieschl, S., & Bromme, R. (2006). What matters in help-seeking? A study of help effectiveness and learner-related factors. Computers in Human Behavior, 22(1), 113-129.  Bassellier, G., Benbasat, I., & Reich, B. H. (2003). The influence of business managers' IT competence on championing IT. Information Systems Research, 14(4), 317-336.  Beatty, P. C., & Willis, G. B. (2007). Research synthesis: The practice of cognitive interviewing. Public Opinion Quarterly, 71(2), 287-311.  Beitelspacher, L. S., Hansen, J. D., Johnston, A. C., & Deitz, G. D. (2012). Exploring consumer privacy concerns and RFID technology: The impact of fear appeals on consumer behaviors. Journal of Marketing Theory and Practice, 20(2), 147-160.  Bell, D. E., Raiffa, H., & Tversky, A. (1989).
Decision making. Cambridge University Press. Benbasat, I., & Zmud, R. W. (1999). Empirical research in information systems: The practice of relevance. MIS Quarterly, 23(1), 3-16.  Brecht, D. (2021, January 11). Password security: complexity vs. length [updated 2021]. Infosec. https://resources.infosecinstitute.com/topic/password-security-complexity-vs-length/ Boateng, G. O., Neilands, T. B., Frongillo, E. A., Melgar-Quinonez, H., & Young, S. L. (2018). Best practices for developing and validating scales for health, social, and behavioral research: A primer. Frontiers in Public Health, 149.  Bobek, E., & Tversky, B. (2016). Creating visual explanations improves learning. Cognitive Research: Principles and Implications, 1(1), 1-14.  Bohner, G., Ruder, M., & Erb, H. P. (2002). When expertise backfires: Contrast and assimilation effects in persuasion. British Journal of Social Psychology, 41(4), 495-519.  Bollen, K. (1989). Structural equations with latent variables. New York, NY: Wiley.  Boss, S. R., Galletta, D. F., Lowry, P. B., Moody, G. D., & Polak, P. (2015). What do systems users have to fear? Using fear appeals to engender threats and fear that motivate protective security behaviors. MIS Quarterly, 39(4), 837-864.  Bowen, G. A. (2008). Naturalistic inquiry and the saturation concept: a research note. Qualitative Research, 8(1), 137-152. Brannon, L. A., & Carson, K. L. (2003). The representativeness heuristic: influence on nurses' decision making. Applied Nursing Research, 16(3), 201-204.  Brown, T. A. (2015). Confirmatory factor analysis for applied research: Guilford Publications. Brucks, M. (1985). The effects of product class knowledge on information search behavior. Journal of Consumer Research, 12(1), 1-16.  Bruno, J. E., & Dirkzwager, A. (1995). Determining the optimal number of alternatives to a multiple-choice test item: An information theoretic perspective. Educational and Psychological Measurement, 55(6), 959-966.
Burton-Jones, A., & Grange, C. (2012). From use to effective use: a representation theory perspective. Information Systems Research, 24(3), 632-658.  Byrne, B. M., & Stewart, S. M. (2006). Teacher's corner: The MACS approach to testing for multigroup invariance of a second-order structure: A walk through the process. Structural Equation Modeling, 13(2), 287-321.  Carlson, J. P., Vincent, L. H., Hardesty, D. M., & Bearden, W. O. (2008). Objective and subjective knowledge relationships: A quantitative analysis of consumer research findings. Journal of Consumer Research, 35(5), 864-876.  Carver, C. S., & Scheier, M. F. (2000). Autonomy and self-regulation. Psychological Inquiry, 11(4), 284-291.  Cenfetelli, R. T., Benbasat, I., & Al-Natour, S. (2008). Addressing the what and how of online services: Positioning supporting-services functionality and service quality for business-to-consumer success. Information Systems Research, 19(2), 161-181.  Chen, Y., & Zahedi, F. (2016). Individual's Internet Security Perceptions and Behaviors: Polycontextual Contrasts Between the United States and China. MIS Quarterly, 40(1), 205-222.  Cohen, J. (1988). Statistical power analysis for the behavioral sciences. Routledge Academic.  Cole, C. A., Gaeth, G., Chakraborty, G., & Levin, I. (1992). Exploring the relationships among self-reported knowledge, objective knowledge, product usage, and consumer decision making. Advances in Consumer Research, 19(1), p191.  Cook, R. D. (1977). Detection of influential observation in linear regression. Technometrics, 19(1), 15-18.  Croson, R., & Sundali, J. (2005). The gambler's fallacy and the hot hand: Empirical data from casinos. Journal of Risk and Uncertainty, 30(3), 195-209.  Crossler, R. E., & Bélanger, F. (2019). Why would I use location-protective settings on my smartphone? Motivating protective behaviors and the existence of the privacy Knowledge–Belief Gap. Information Systems Research, 30(3), 995-1006.  Crossler, R.
E., Johnston, A. C., Lowry, P. B., Hu, Q., Warkentin, M., & Baskerville, R. (2013). Future directions for behavioral information security research. Computers & Security, 32, 90-101.  D'Arcy, J., Hovav, A., & Galletta, D. (2009). User awareness of security countermeasures and its impact on information systems misuse: A deterrence approach. Information Systems Research, 20(1), 79-98.  Davenport, T. H., & Markus, M. L. (1999). Rigor vs. relevance revisited: Response to Benbasat and Zmud. MIS Quarterly, 23(1), 19-23. doi:10.2307\/249405 DeBono, K. G., & Harnish, R. J. (1988). Source expertise, source attractiveness, and the processing of persuasive information: A functional approach. Journal of Personality and Social Psychology, 55(4), 541.  DeVellis, R. F. (2016). Scale development: Theory and applications (Vol. 26): Sage Publications. Dhingra, N., Gorn, Z., Kener, A., & Dana, J. (2012). The default pull: An experimental demonstration of subtle default effects on preferences. Judgment & Decision Making, 7(1). Dinev, T., McConnell, A. R., & Smith, H. J. (2015). Research commentary\u2014informing privacy research through information systems, psychology, and behavioral economics: thinking outside the \u201cAPCO\u201d box. Information Systems Research, 26(4), 639-655.  Egelman, S., Sotirakopoulos, A., Muslukhov, I., Beznosov, K., & Herley, C. (2013). Does my password go up to eleven? The impact of password meters on password selection. Paper presented at the Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. Ericsson, K. A., & Simon, H. A. (1984). Protocol analysis: Verbal reports as data: the MIT Press. Ericsson, K. A., & Simon, H. A. (1998). How to study thinking in everyday life: Contrasting think-aloud protocols with descriptions and explanations of thinking. Mind, Culture, and Activity, 5(3), 178-186.  EY. (2018). Cybersecurity regained: Preparing to face cyber attacks: 20th global information security survey. 
Retrieved from https://assets.ey.com/content/dam/ey-sites/ey-com/en_gl/topics/digital/ey-cybersecurity-regained-preparing-to-face-cyber-attacks.pdf.  Fabrigar, L. R., Wegener, D. T., MacCallum, R. C., & Strahan, E. J. (1999). Evaluating the use of exploratory factor analysis in psychological research. Psychological Methods, 4(3), 272.  Fard Bahreini, A., Cavusoglu, H., & Cenfetelli, R. (2020). Role of feedback in improving novice users' security performance using construal level and valence framing. 39th International Conference on Information Systems (ICIS). Fleiss, J., Levin, B., & Paik, M. (2003). How to randomize. Statistical Methods for Rates and Proportions. 3rd ed. Hoboken, NJ: John Wiley & Sons, 86-94.  Fleiss, J. L. (1971). Measuring nominal scale agreement among many raters. Psychological Bulletin, 76(5), 378.  Folkes, V. S. (1988). The availability heuristic and perceived risk. Journal of Consumer Research, 15(1), 13-23.  Forget, A., Chiasson, S., Van Oorschot, P. C., & Biddle, R. (2008). Improving text passwords through persuasion. Paper presented at the Proceedings of the 4th Symposium on Usable Privacy and Security. Fredrica, R. (1979). Consumer food selection and nutrition information. New York: Praeger. Freitas, A. L., Gollwitzer, P., & Trope, Y. (2004). The influence of abstract and concrete mindsets on anticipating and guiding others' self-regulatory efforts. Journal of Experimental Social Psychology, 40(6), 739-752.  Fujita, K., Trope, Y., Liberman, N., & Levin-Sagi, M. (2006). Construal levels and self-control. Journal of Personality and Social Psychology, 90(3), 351.  Furneaux, B., & Wade, M. R. (2011). An exploration of organizational level information systems discontinuance intentions. MIS Quarterly, 573-598.  Furnell, S. (2005). Why users cannot use security. Computers & Security, 24(4), 274-279.  Gale, N. K., Heath, G., Cameron, E., Rashid, S., & Redwood, S. (2013).
Using the framework method for the analysis of qualitative data in multi-disciplinary health research. BMC Medical Research Methodology, 13(1), 1-8.  Gambino, A., Kim, J., Sundar, S. S., Ge, J., & Rosson, M. B. (2016). User disbelief in privacy paradox: Heuristics that determine disclosure. Paper presented at the Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems. Garb, H. N. (1996). The representativeness and past-behavior heuristics in clinical judgment. Professional Psychology: Research and Practice, 27(3), 272.  Garg, V., & Camp, J. (2013). Heuristics and biases: implications for security design. IEEE Technology and Society Magazine, 32(1), 73-79.  Gerlach, J., & Cenfetelli, R. T. (2020). Constant Checking Is Not Addiction: A Grounded Theory of IT-Mediated State-Tracking. MIS Quarterly, 44(4), 1704-1732.  Gigerenzer, G., Hoffrage, U., & Kleinb\u00f6lting, H. (1991). Probabilistic mental models: a Brunswikian theory of confidence. Psychological Review, 98(4), 506.  Gilbert, D. T., Pinel, E. C., Wilson, T. D., Blumberg, S. J., & Wheatley, T. P. (1998). Immune neglect: a source of durability bias in affective forecasting. Journal of Personality and Social Psychology, 75(3), 617.  Goel, S., Williams, K., & Dincelli, E. (2017). Got phished? Internet security and human vulnerability. Journal of the Association for Information Systems, 18(1), 22.  Grier, B. (1976). The optimal number of alternatives at a choice point with travel time considered. Journal of Mathematical Psychology, 14(1), 91-97.  Griffin, D., & Brenner, L. (2004). Perspectives on probability judgment calibration. Blackwell Handbook of Judgment and Decision Making, 177-199.  Guryan, J., & Kearney, M. S. (2008). Gambling at lucky stores: Empirical evidence from state lottery sales. American Economic Review, 98(1), 458-473.  Hadar, L., Sood, S., & Fox, C. R. (2013). Subjective knowledge in consumer financial decisions. 
Journal of Marketing Research, 50(3), 303-316.  Hair, J., Black, W., Babin, B., & Anderson, R. (2014). Multivariate data analysis (Pearson new international edition). Harlow, UK: Pearson Education Limited.  Haladyna, T. M., Downing, S. M., & Rodriguez, M. C. (2002). A review of multiple-choice item-writing guidelines for classroom assessment. Applied Measurement in Education, 15(3), 309-333.  Hansen, J., & Wänke, M. (2010). Truth from language and truth from fit: The impact of linguistic concreteness and level of construal on subjective truth. Personality and Social Psychology Bulletin, 36(11), 1576-1588.  Harvey, N. (2007). Use of heuristics: Insights from forecasting research. Thinking & Reasoning, 13(1), 5-24.  Hastie, R. (2001). Problems for judgment and decision making. Annual Review of Psychology, 52(1), 653-683.  Haynes, S. N., Richard, D., & Kubany, E. S. (1995). Content validity in psychological assessment: a functional approach to concepts and methods. Psychological Assessment, 7(3), 238.  Hegarty, M., Carpenter, P. A., & Just, M. A. (1991). Diagrams in the comprehension of scientific texts.  Hernandez, J. M. d. C., Wright, S. A., & Ferminiano Rodrigues, F. (2015). Attributes versus benefits: The role of construal levels and appeal type on the persuasiveness of marketing messages. Journal of Advertising, 44(3), 243-253.  Hinkin, T. R., & Tracey, J. B. (1999). An analysis of variance approach to content validation. Organizational Research Methods, 2(2), 175-186.  Homer, P. M., & Kahle, L. R. (1990). Source expertise, time of source identification, and involvement in persuasion: An elaborative processing perspective. Journal of Advertising, 19(1), 30-39.  Hong, W., Thong, J. Y., Chasalow, L. C., & Dhillon, G. (2011). User acceptance of agile information systems: A model and empirical test. Journal of Management Information Systems, 28(1), 235-272.  Hooper, D., Coughlan, J., & Mullen, M. (2008).
Evaluating model fit: a synthesis of the structural equation modelling literature. Paper Presented at the 7th European Conference on Research Methodology for Business and Management Studies. Houser, A., & Bolton, M. L. (2017). Formal Mental Models for Inclusive Privacy and Security. Paper presented at the SOUPS. Hsieh, P.-J. (2015). Healthcare professionals' use of health clouds: Integrating technology acceptance and status quo bias perspectives. International Journal of Medical Informatics, 84(7), 512-523.  Huber, O. (1979). Nontransitive multidimensional preferences: Theoretical analysis of a model. Theory and Decision, 10(1-4), 147-165.  Hui, K. L., Kim, S. H., & Wang, Q. H. (2017). Cybercrime deterrence and international legislation: Evidence from distributed denial of service attacks. MIS Quarterly, 41(2), 497-524.  Hwang, D. D., & Verbauwhede, I. (2004). Design of portable biometric authenticators-energy, performance, and security tradeoffs. IEEE Transactions on Consumer Electronics, 50(4), 1222-1231.  Imperva. (2010). Imperva releases detailed analysis of 32 million breached consumer passwords. Retrieved from https://www.imperva.com/company/press_releases/imperva-releases-detailed-analysis-of-32-million-breached-consumer-passwords/.  Mao, J.-Y., & Benbasat, I. (2000). The use of explanations in knowledge-based systems: Cognitive perspectives and a process-tracing analysis. Journal of Management Information Systems, 17(2), 153-179.  Johnson, E. J., Hershey, J., Meszaros, J., & Kunreuther, H. (1993). Framing, probability distortions, and insurance decisions. Journal of Risk and Uncertainty, 7(1), 35-51.  Johnston, A. C., & Warkentin, M. (2010). Fear appeals and information security behaviors: an empirical study. MIS Quarterly, 549-566.  Johnston, J., Eloff, J. H., & Labuschagne, L. (2003). Security and human computer interfaces. Computers & Security, 22(8), 675-684.  Kahneman, D. (2003). A perspective on judgment and choice: mapping bounded rationality.
American Psychologist, 58(9), 697.  Kahneman, D., Slovic, P., & Tversky, A. (1982). Judgment under uncertainty: Heuristics and biases: Cambridge University Press. Keller, C., Siegrist, M., & Gutscher, H. (2006). The role of the affect and availability heuristics in risk communication. Risk Analysis, 26(3), 631-639.  Keren, G., & Wu, G. (2015). The Wiley Blackwell handbook of judgment and decision making: John Wiley & Sons. Khalili, J. (2020). Tens of thousands of malicious Android apps flooding user devices. Retrieved from https://www.techradar.com/news/tens-of-thousands-of-malicious-android-apps-flooding-google-play-store.  Khern-am-nuai, W., Yang, W., & Li, N. (2017). Using context-based password strength meter to nudge users' password generating behavior: a randomized experiment. Paper presented at the Proceedings of the 50th Hawaii International Conference on System Sciences. Kim, B. C., & Park, Y. W. (2012). Security versus convenience? An experimental study of user misperceptions of wireless internet service quality. Decision Support Systems, 53(1), 1-11.  Kim, H.-W., & Kankanhalli, A. (2009). Investigating user resistance to information systems implementation: A status quo bias perspective. MIS Quarterly, 567-582.  Kivetz, Y., & Tyler, T. R. (2007). Tomorrow I'll be me: The effect of time perspective on the activation of idealistic versus pragmatic selves. Organizational Behavior and Human Decision Processes, 102(2), 193-211.  Kline, R. B. (2015). Principles and practice of structural equation modeling: Guilford Publications. Köhler, C. F., Breugelmans, E., & Dellaert, B. G. (2011). Consumer acceptance of recommendations by interactive decision aids: The joint role of temporal distance and concrete versus abstract communications. Journal of Management Information Systems, 27(4), 231-260.  Korstjens, I., & Moser, A. (2018). Series: Practical guidance to qualitative research. Part 4: Trustworthiness and publishing.
European Journal of General Practice, 24(1), 120-124.  Krabuanrat, K., & Phelps, R. (1998). Heuristics and rationality in strategic decision making: An exploratory study. Journal of Business Research, 41(1), 83-93.  Kritzinger, E., & von Solms, S. H. (2010). Cyber security for home users: A new way of protection through awareness enforcement. Computers & Security, 29(8), 840-847.  Kunreuther, H., Ginsberg, R., Miller, L., Sagi, P., Slovic, P., Borkan, B., & Katz, N. (1978). Disaster insurance protection: Public policy lessons: Wiley New York. Kuusela, H., & Pallab, P. (2000). A comparison of concurrent and retrospective verbal protocol analysis. The American Journal of Psychology, 113(3), 387.  Labroo, A. A., & Patrick, V. M. (2009). Psychological distancing: Why happiness helps you see the big picture. Journal of Consumer Research, 35(5), 800-809.  Landis, J. R., & Koch, G. G. (1977). The measurement of observer agreement for categorical data. Biometrics, 159-174.  Larrick, R. P. (2004). Debiasing. Blackwell handbook of judgment and decision making, 316-338.  LastPass. (2020). Psychology of passwords. https://www.lastpass.com/resources/psychology-of-passwords-2020.  Lebek, B., Uffen, J., Neumann, M., Hohler, B., & H. Breitner, M. (2014). Information security awareness and behavior: a theory-based literature review. Management Research Review, 37(12), 1049-1092.  Ledgerwood, A., Trope, Y., & Chaiken, S. (2010). Flexibility now, consistency later: psychological distance and construal shape evaluative responding. Journal of Personality and Social Psychology, 99(1), 32.  Lee, A. Y., Keller, P. A., & Sternthal, B. (2010). Value from regulatory construal fit: The persuasive impact of fit between consumer goals and message concreteness. Journal of Consumer Research, 36(5), 735-747.  Lee, J. S., Keil, M., & Shalev, E. (2019). Seeing the Trees or the Forest? The Effect of IT Project Managers' Mental Construal on IT Project Risk Management Activities.
Information Systems Research, 30(3), 1051-1072.  Lee, K., & Joshi, K. (2017). Examining the use of status quo bias perspective in IS research: need for reconceptualizing and incorporating biases. Information Systems Journal, 27(6), 733-752.  Li, Y., & Siponen, M. T. (2011). A call for research on home users' information security behaviour. Paper presented at the PACIS. Liberman, N., & Trope, Y. (1998). The role of feasibility and desirability considerations in near and distant future decisions: A test of temporal construal theory. Journal of Personality and Social Psychology, 75(1), 5.  Liberman, N., Trope, Y., & Wakslak, C. (2007). Construal level theory and consumer behavior. Journal of Consumer Psychology, 17(2), 113-117.  Lichtenstein, S., Fischhoff, B., & Phillips, L. D. (1981). Calibration of probabilities: The state of the art to 1980: DTIC Document. Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry. Beverly Hills, CA: Sage Publications. Liu, B., Andersen, M. S., Schaub, F., Almuhimedi, H., Zhang, S., Sadeh, N., . . . Agarwal, Y. (2016). Follow my recommendations: A personalized privacy assistant for mobile app permissions. Paper presented at the Symposium on Usable Privacy and Security. Lord, F. M. (1977). Optimal number of choices per item—A comparison of four approaches. Journal of Educational Measurement, 14(1), 33-38.  Lusardi, A., & Mitchell, O. S. (2007). Baby boomer retirement security: The roles of planning, financial literacy, and housing wealth. Journal of Monetary Economics, 54(1), 205-224.  Ma, X., Kim, S. H., & Kim, S. S. (2014). Online gambling behavior: The impacts of cumulative outcomes, recent outcomes, and prior use. Information Systems Research, 25(3), 511-527.  MacGregor, K. E., Carnevale, J. J., Dusthimer, N. E., & Fujita, K. (2017). Knowledge of the self-control benefits of high-level versus low-level construal. Journal of Personality and Social Psychology, 112(4), 607.  MacKenzie, S.
B., Podsakoff, P. M., & Podsakoff, N. P. (2011). Construct measurement and validation procedures in MIS and behavioral research: Integrating new and existing 112 techniques. MIS Quarterly, 35(2), 293-334.  Maheswaran, D., Mackie, D. M., & Chaiken, S. (1992). Brand name as a heuristic cue: The effects of task importance and expectancy confirmation on consumer judgments. Journal of Consumer Psychology, 1(4), 317-336.  Marett, K., McNab, A. L., & Harris, R. B. (2011). Social networking websites and posting personal information: An evaluation of protection motivation theory. AIS Transactions on Human-Computer Interaction, 3(3), 170-188.  Malkin, N., Mathur, A., Harbach, M., & Egelman, S. (2017). Personalized security messaging: Nudges for compliance with browser warnings. Paper presented at the 2nd European Workshop on Usable Security. Internet Society. Marshall, B., Cardon, P., Poddar, A., & Fontenot, R. (2013). Does sample size matter in qualitative research?: A review of qualitative interviews in IS research. Journal of Computer Information Systems, 54(1), 11-22.  Mell, P., Kent, K., & Nusbaum, J. (2005). NIST Special Publication 800-83. Guide to Malware Incident Prevention and Handling, 2-11.  Menard, P., Bott, G. J., & Crossler, R. E. (2017). User motivations in protecting information security: Protection motivation theory versus self-determination theory. Journal of Management Information Systems, 34(4), 1203-1230.  Meservy, T. O., Jensen, M. L., & Fadel, K. J. (2014). Evaluation of competing candidate solutions in electronic networks of practice. Information Systems Research, 25(1), 15-34.  Meshi, D., Biele, G., Korn, C. W., & Heekeren, H. R. (2012). How expert advice influences decision making. PloS One, 7(11), e49748.  Meyvis, T., & Janiszewski, C. (2002). Consumers' beliefs about product benefits: The effect of obviously irrelevant product information. Journal of Consumer Research, 28(4), 618-635.  Moore, G. C., & Benbasat, I. (1991). 
Development of an instrument to measure the perceptions of adopting an information technology innovation. Information Systems Research, 2(3), 192-222.  Moorman, C., Diehl, K., Brinberg, D., & Kidwell, B. (2004). Subjective knowledge, search locations, and consumer choice. Journal of Consumer Research, 31(3), 673-680.  Moscaritolo, A. (2010). RockYou Hack Reveals Most Common Password: \u2018123456\u2019. Retrieved from https:\/\/www.scmagazine.com\/home\/security-news\/rockyou-hack-reveals-most-common-password-123456\/.  Mun, Y. Y., Yoon, J. J., Davis, J. M., & Lee, T. (2013). Untangling the antecedents of initial trust in Web-based health information: The roles of argument quality, source expertise, and user perceptions of information quality and risk. Decision Support Systems, 55(1), 284-295.  Muncaster, P. (2020). Over Half of Universities Suffered Data Breach in Past Year. https:\/\/www.infosecurity-magazine.com\/news\/over-half-of-universities-suffered\/.  Murphy, A. H., & Winkler, R. L. (1977). Can weather forecasters formulate reliable probability forecasts of precipitation and temperature. National Weather Digest, 2(2), 2-9.  Neumann, J. v., & Morgenstern, O. (2004). Theory of games and economic behavior: Princeton Univ. Press. Newell, A., & Simon, H. A. (1972). Human problem solving (Vol. 104): Prentice-hall Englewood Cliffs, NJ. Nunnally, J. C., & Bernstein, I. (1994). Psychometric Theory (McGraw-Hill Series in Psychology) (Vol. 3): McGraw-Hill New York. O\u2019brien, R. M. (2007). A caution regarding rules of thumb for variance inflation factors. Quality 113 & Quantity, 41(5), 673-690.  Over, D. (2004). Rationality and the normative\/descriptive distinction. Blackwell Handbook of Judgment and Decision Making, 3-18.  Pagliery, J. (2016). Hackers selling 117 million LinkedIn passwords. https:\/\/money.cnn.com\/2016\/05\/19\/technology\/linkedin-hack\/.  Pallak, S. R., Murroni, E., & Koch, J. (1983). 
Communicator attractiveness and expertise, emotional versus rational appeals, and persuasion: A heuristic versus systematic processing interpretation. Social Cognition, 2(2), 122-141.  Park, C. W., Gardner, M. P., & Thukral, V. K. (1988). Self-perceived knowledge: Some effects on information processing for a choice. The American Journal of Psychology, 101(3), 401-424.  Park, C. W., Mothersbaugh, D. L., & Feick, L. (1994). Consumer knowledge assessment. Journal of Consumer Research, 21(1), 71-82.  Parker, D. B. (2012). Toward a new framework for information security? Computer Security Handbook, 3.1-3.23.  Peer, E., Egelman, S., Harbach, M., Malkin, N., Mathur, A., & Frik, A. (2020). Nudge me right: Personalizing online security nudges to people's decision-making styles. Computers in Human Behavior, 109, 106347.  Petty, R. E., Cacioppo, J. T., & Goldman, R. (1981). Personal involvement as a determinant of argument-based persuasion. Journal of Personality and Social Psychology, 41(5), 847.  Petty, R. E., Wegener, D. T., & Fabrigar, L. R. (1997). Attitudes and attitude change. Annual Review of Psychology, 48(1), 609-647.  Phang, C. W., Kankanhalli, A., & Tan, B. C. (2015). What motivates contributors vs. lurkers? An investigation of online feedback forums. Information Systems Research, 26(4), 773-792.  Pogarsky, G. (2004). Projected offending and contemporaneous rule\u2010violation: Implications for heterotypic continuity. Criminology, 42(1), 111-136.  Polites, G. L., & Karahanna, E. (2012). Shackled to the status quo: The inhibiting effects of incumbent system habit, switching costs, and inertia on new system acceptance. MIS Quarterly, 21-42.  Pornpitakpan, C. (2004). The persuasiveness of source credibility: A critical review of five decades' evidence. Journal of Applied Social Psychology, 34(2), 243-281.  P\u00f6tzsch, S. (2008). Privacy awareness: A means to solve the privacy paradox? 
Paper presented at the IFIP Summer School on the Future of Identity in the Information Society. Protalinski, E. (2012). 13 million US Facebook users don't change privacy settings. https:\/\/www.zdnet.com\/article\/13-million-us-facebook-users-dont-change-privacy-settings\/.  Puhakainen, P., & Siponen, M. (2010). Improving employees' compliance through information systems security training: an action research study. MIS Quarterly, 757-778.  Radecki, C. M., & Jaccard, J. (1995). Perceptions of knowledge, actual knowledge, and information search behavior. Journal of Experimental Social Psychology, 31(2), 107-138.  Raghubir, P., & Menon, G. (1998). AIDS and me, never the twain shall meet: The effects of information accessibility on judgments of risk and advertising effectiveness. Journal of Consumer Research, 25(1), 52-63.  Raju, P. S., Lonial, S. C., & Mangold, W. G. (1995). Differential effects of subjective knowledge, objective knowledge, and usage experience on decision making: An exploratory investigation. Journal of Consumer Psychology, 4(2), 153-180.  114 Ratneshwar, S., & Chaiken, S. (1991). Comprehension's role in persuasion: The case of its moderating effect on the persuasive impact of source cues. Journal of Consumer Research, 18(1), 52-62.  Rim, S., Hansen, J., & Trope, Y. (2013). What happens why? Psychological distance and focusing on causes versus consequences of events. Journal of Personality and Social Psychology, 104(3), 457.  Ritchie, J., Lewis, J., Nicholls, C. M., & Ormston, R. (2013). Qualitative research practice: A guide for social science students and researchers: Sage. Ritchie, J., Spencer, L., Bryman, A., & Burgess, R. (1994). Qualitative data analysis for applied policy research. Analyzing Qualitative Data, 173, 194.  Ritchie, J., Spencer, L., & O\u2019Connor, W. (2003). Carrying out qualitative analysis. Qualitative research practice: A guide for social science students and researchers. Sage, 219-262.  Rodriguez, M. C. (2005). 
Three options are optimal for multiple\u2010choice items: A meta\u2010analysis of 80 years of research. Educational Measurement: Issues and Practice, 24(2), 3-13.  Rosoff, H., Cui, J., & John, R. S. (2013). Heuristics and biases in cyber security dilemmas. Environment Systems and Decisions, 33(4), 517-529.  Ross, S. A. (1973). The economic theory of agency: The principal's problem. The American Economic Review, 63(2), 134-139.  Russo, J. E., & Schoemaker, P. J. H. (1992). Managing overconfidence. Sloan Management Review, 33(2), 7-17.  Samuelson, W., & Zeckhauser, R. (1988). Status quo bias in decision making. Journal of Risk and Uncertainty, 1(1), 7-59.  Santhanam, R., Seligman, L., & Kang, D. (2007). Postimplementation knowledge transfers to users and information technology professionals. Journal of Management Information Systems, 24(1), 171-199.  Savage, L. J. (1954). The Foundations of Statistics; Jon Wiley and Sons. Inc.: New York, NY, USA.  Schmidt, C. (2004). The analysis of semi-structured interviews. A companion to qualitative research, 253-258.  Schoemaker, P. J. (1982). The expected utility model: Its variants, purposes, evidence and limitations. Journal of Economic Literature, 529-563.  Schuetz, S. W., Benjamin Lowry, P., Pienta, D. A., & Bennett Thatcher, J. (2020). The effectiveness of abstract versus concrete fear appeals in information security. Journal of Management Information Systems, 37(3), 723-757.  Shah, A. K., & Oppenheimer, D. M. (2008). Heuristics made easy: An effort-reduction framework. Psychological Bulletin, 134(2), 207.  Shani, Y., Igou, E. R., & Zeelenberg, M. (2009). Different ways of looking at unpleasant truths: How construal levels influence information search. Organizational Behavior and Human Decision Processes, 110(1), 36-44.  Sher-Jan, M. (2018). Data indicates human error prevailing cause of breaches, incidents. Retrieved from https:\/\/iapp.org\/news\/a\/data-indicates-human-error-prevailing-cause-of-breaches-incidents\/.  
Simon, H. A. (1955). A behavioral model of rational choice. The Quarterly Journal of Economics, 69(1), 99-118.  Simon, H. A. (1956). Rational choice and the structure of the environment. Psychological Review, 63(2), 129.  Simon, H. A. (1959). Theories of decision-making in economics and behavioral science. The 115 American Economic Review, 49(3), 253-283.  Simon, H. A. (1972). Theories of bounded rationality. Decision and Organization, 1(1), 161-176.  Simon, H. A. (1990). Invariants of human behavior. Annual Review of Psychology, 41(1), 1-20.  Simon, H. A. (2000). Bounded rationality in social science: Today and tomorrow. Mind & Society, 1(1), 25-39.  Simon, H. A., & Newell, A. (1971). Human problem solving: The state of the theory in 1970. American Psychologist, 26(2), 145.  Slovic, P., Finucane, M., Peters, E., & MacGregor, D. G. (2002). Rational actors or rational fools: Implications of the affect heuristic for behavioral economics. The Journal of Socio-Economics, 31(4), 329-342.  Spears, J. L., & Barki, H. (2010). User participation in information systems security risk management. MIS Quarterly, 503-522.  Spence, M. T. (1996). Problem\u2013problem solver characteristics affecting the calibration of judgments. Organizational Behavior and Human Decision Processes, 67(3), 271-279.  Spiekermann, S., Grossklags, J., & Berendt, B. (2001). E-privacy in 2nd generation E-commerce: privacy preferences versus actual behavior. Paper presented at the Proceedings of the 3rd ACM Conference on Electronic Commerce. Sposito, V., Hand, M., & Skarpness, B. (1983). On the efficiency of using the sample kurtosis in selecting optimal lpestimators. Communications in Statistics-simulation and Computation, 12(3), 265-272.  Straub, D. W., & Welke, R. J. (1998). Coping with systems risk: security planning models for management decision making. MIS Quarterly, 441-469.  Sundar, S. S., Kang, H., Wu, M., Go, E., & Zhang, B. (2013). 
Unlocking the privacy paradox: do cognitive heuristics hold the key? CHI'13 extended abstracts on human factors in computing systems (pp. 811-816). Suri, G., Sheppes, G., Schwartz, C., & Gross, J. J. (2013). Patient inertia and the status quo bias: when an inferior option is preferred. Psychological Science, 24(9), 1763-1769.  Sussman, S. W., & Siegal, W. S. (2003). Informational influence in organizations: An integrated approach to knowledge adoption. Information Systems Research, 14(1), 47-65.  Svenson, O. (1979). Process descriptions of decision making. Organizational Behavior and Human Performance, 23(1), 86-112.  Symantec. (2017). 2017 Internet Security Threat Report.  Tam, L., Glassman, M., & Vandenwauver, M. (2010). The psychology of password management: a tradeoff between security and convenience. Behaviour & Information Technology, 29(3), 233-244.  Taylor, S., & Todd, P. A. (1995). Understanding information technology usage: A test of competing models. Information Systems Research, 6(2), 144-176.  Thaler, R. H., & Sunstein, C. R. (2008). Nudge: Improving decisions about health, wealth, and happiness: Yale University Press. Triplet, R. G. (1992). Discriminatory biases in the perception of illness: The application of availability and representativeness heuristics to the AIDS crisis. Basic and Applied Social Psychology, 13(3), 303-322.  Trope, Y., & Liberman, N. (2010). Construal-level theory of psychological distance. Psychological Review, 117(2), 440.  Tsohou, A., Karyda, M., & Kokolakis, S. (2015). Analyzing the role of cognitive and cultural biases in the internalization of information security policies: Recommendations for 116 information security awareness programs. Computers & Security, 52, 128-141.  Tversky, A. (1964). On the optimal number of alternatives at a choice point. Journal of Mathematical Psychology, 1(2), 386-391.  Tversky, A., & Kahneman, D. (1973). Availability: A heuristic for judging frequency and probability. 
Cognitive Psychology, 5(2), 207-232.  Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124-1131.  UK Department for Business Innovation and Skills. (2013). 2013 Information Security Breaches Survey. Retrieved from https:\/\/assets.publishing.service.gov.uk\/government\/uploads\/system\/uploads\/attachment_data\/file\/191670\/bis-191613-p191184-192013-information-security-breaches-survey-technical-report.pdf.  Ur, B., Bees, J., Segreti, S. M., Bauer, L., Christin, N., & Cranor, L. F. (2016). Do users' perceptions of password security match reality? Paper presented at the Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. Ur, B., Noma, F., Bees, J., Segreti, S. M., Shay, R., Bauer, L., . . . Cranor, L. F. (2015). \" I Added'!'at the End to Make It Secure\": Observing Password Creation in the Lab. Paper presented at the Eleventh Symposium On Usable Privacy and Security ({SOUPS} 2015). Van Rooij, M., Lusardi, A., & Alessie, R. (2011). Financial literacy and stock market participation. Journal of Financial Economics, 101(2), 449-472.  Van Bavel, R., Rodr\u00edguez-Priego, N., Vila, J., & Briggs, P. (2019). Using protection motivation theory in the design of nudges to improve online security behavior. International Journal of Human-Computer Studies, 123, 29-39. Vance, A., Eargle, D., Ouimet, K., & Straub, D. (2013). Enhancing password security through interactive fear appeals: A web-based field experiment. Paper presented at the 2013 46th Hawaii International Conference on System Sciences. Vance, A., Lowry, P. B., & Eggett, D. L. (2015). Increasing accountability through the user interface design artifacts: A new approach to addressing the problem of access-policy violations. MIS Quarterly, 39(2), 345-366.  Vance, A., Eargle, D., Eggett, D., Straub, D., Ouimet, K. \u201cDo security fear appeals work when they interrupt tasks? 
A multi-method examination of password strength,\u201d MIS Quarterly, forthcoming. Van Schaik, P., Renaud, K., Wilson, C., Jansen, J., & Onibokun, J. (2020). Risk as affect: The affect heuristic in cybersecurity. Computers & Security, 90, 101651.  Verizon. (2020). Data breach investigation report. Retrieved from https:\/\/enterprise.verizon.com\/resources\/reports\/2020-data-breach-investigations-report.pdf.  Wagner, A., & Mesbah, N. (2019). Too confident to care: Investigating overconfidence in privacy decision making. In Proceedings of the 27th European Conference on Information Systems (ECIS), Stockholm & Uppsala, Sweden, June 8-14, 2019.  Wakslak, C., & Trope, Y. (2009). The effect of construal level on subjective probability estimates. Psychological Science, 20(1), 52-58.  Wang, J., Li, Y., & Rao, H. R. (2016). Overconfidence in phishing email detection. Journal of the Association for Information Systems, 17(11), 759.  Ward, D. J., Furber, C., Tierney, S., & Swallow, V. (2013). Using framework analysis in nursing research: a worked example. Journal of Advanced Nursing, 69(11), 2423-2431. 117 Wash, R. (2010). Folk models of home computer security. Paper presented at the Proceedings of the Sixth Symposium on Usable Privacy and Security.  Watts, S. A., & Zhang, W. (2008). Capitalizing on content: Information adoption in two online communities. Journal of the Association for Information Systems, 9(2), 3.  Whalen, T., & Inkpen, K. M. (2005). Gathering evidence: use of visual security cues in web browsers. Paper presented at the Proceedings of Graphics Interface 2005. White, K., MacDonnell, R., & Dahl, D. W. (2011). It's the mind-set that matters: The role of construal level and message framing in influencing consumer efficacy and conservation behaviors. Journal of Marketing Research, 48(3), 472-485.  Wickham, P. A. (2003). The representativeness heuristic in judgements involving entrepreneurial success and failure. Management Decision.  Yin, R. K. (2015). 
Qualitative research from start to finish. Guilford Publications.
Zviran, M., & Haga, W. J. (1999). Password security: An empirical study. Journal of Management Information Systems, 15(4), 161-185.

Appendices

Appendix A (Chapter 2) Objective Security Knowledge Construct Item Development

Step 1: Preliminary Item Generation

I used a mixture of deductive and inductive approaches to develop the initial items. First, I developed thirty items to measure the security knowledge of personal users, drawing upon the following sources: a) security guidelines proposed by anti-virus firms on their websites and in their annual reports, including Symantec, Avast, Norton, and AVG; b) government agencies' security guidelines, including GetCybersafe by the Government of Canada and the National Security Agency (NSA); and c) the National Institute of Standards and Technology (NIST) special publications (SP 800 series) on best security guidelines, which are widely adopted around the globe (Mell et al., 2005). I focused on best-security-practice guidelines and security technology capabilities (e.g., what these technologies can and cannot do) that are applicable to home users when generating the initial items.

Step 2: Content Validity Assessment

After initial item development, I reviewed the content of the items with two security experts: a senior security analyst and a scholar in information security engineering research. Owing to their positions and prior experience, these two experts were knowledgeable about both the functions of security technologies and the best practices for personal users to follow. The experts helped me evaluate the content validity of the questions, the correctness of the answer choices, and their relevance to the definition of security knowledge. After two separate meetings with these experts, I removed, modified, and added several questions.
The result was a preliminary scale consisting of thirty-six multiple-choice questions (MCQs) with four options each, in which twenty questions measured best-practices-related knowledge and sixteen questions measured technology-related knowledge.

(Footnotes: GetCybersafe: Getcybersafe.gc.ca; NSA: Best Practices for Keeping Your Home Network Secure, NSA, 2016.)

Step 3: Construct Validity Assessment (by Security Experts)

The third step was a preliminary assessment of the items' construct validity (i.e., whether the items capture the underlying construct). To assess construct validity, I used the approach suggested by MacKenzie, Podsakoff, & Podsakoff (2011). In their research commentary, the authors propose a modified version of the expert-rater procedure first introduced by Hinkin & Tracey (1999) for assessing the validity of constructs of interest. In this method, the construct of interest and its items are organized in a table in which the columns contain the names and definitions of the various knowledge dimensions and the rows contain the items. After creating this evaluation matrix, several experts rate the degree to which each item captures the underlying definition of each dimension of the construct on a Likert scale from 1 (not at all) to 5 (completely). Previous literature considers evaluation by five to seven experts an acceptable range in this phase (Boateng et al., 2018; Haynes, Richard, & Kubany, 1995). Because of this structure, the method provides not only inter-rater agreement, which indicates the reliability of the items, but also content and construct validity (i.e., convergent and discriminant validity), since judges must rate each item against all dimensions. Based on this description, I created a matrix with two columns, "Best Security Practices Knowledge" and "Security Technology Knowledge," and thirty-six rows (twenty for best-practices questions and sixteen for technology questions).
The items were randomized to avoid biases and order effects in the judges' answers. I also added a column asking each judge to rate the difficulty of every item.

#  Item                                                                          Best Security Practices Knowledge  Security Technology Knowledge  Difficulty
1  VPNs can prevent security threats by _____.                                   1                                  5                              3
   a) removing viruses before they infect the operating system
   b) protecting users' online passwords
   c) blocking the download of malicious files
   d) stopping malicious tampering with users' communication over the internet
Table A.1 (Appendix A) Rating Task Example Adopted from MacKenzie et al. (2011)

Once the results were gathered, each item had five ratings under the Best Security Practices column and five ratings under the Security Technology column. I analyzed these ratings using a one-way repeated-measures ANOVA to test whether the five judges' mean rating under one dimension differed significantly from their mean rating under the other dimension. If the difference was statistically significant (p < 0.05), I kept the question; otherwise, I removed the item from the pool. This threshold is quite conservative given that there were only five judges.

Step 4: Face Validity Assessment

Simultaneously, I conducted eight cognitive interviews (also known as process tracing or verbal protocol analysis) with a group of participants who were representative of the study population. Participants were, on average, 26.5 years old (SD = 3.47, range 22-31). Men constituted 37.5% of the participants and women 62.5%; four participants identified as Asian, two as Middle Eastern, one as White, and one as African. In this process, I handed the thirty-six-item questionnaire to the interviewees and asked them to verbalize the mental process entailed in reading the items while giving their answers.
This method helps assess the face validity of the items and allowed me to flag problematic and ambiguous items for further analysis (Beatty & Willis, 2007; Kuusela & Pallab, 2000). While participants expressed their thought processes, the interviewer did not engage with them; if a question raised serious ambiguity, the interviewer asked several probing questions at the end of the session. The interviews were transcribed, and problematic questions were flagged. After further analysis of the cognitive interviews, one additional question was removed, and several others underwent major or minor edits in wording and structure. The modified questions were sent to the security experts for a second round of the expert rating task. After the second round of expert ratings, I identified twenty-five items for the next round of validation. Table A.2 shows the results of the one-way repeated-measures ANOVA for the items kept for the rest of the development phase.

Item  Dimension  F(1,4)    p
1     T          72.3      < .01
2     T          73.1      < .01
3     T          45.0      < .01
4     T          42.7      < .01
5     T          193       < .01
6     T          216       < .01
7     T          216       < .01
8     T          32.1      < .01
9     T          73.1      < .01
10    T          72.2      < .01
11    T          193       < .01
12    T          72.2      < .01
13    B          1.62e+33  < .01
14    B          81.0      < .01
15    B          73        < .01
16    B          72        < .01
17    B          72.3      < .01
18    B          12.5      < .05
19    B          361       < .01
20    B          216       < .01
21    B          16        < .05
22    B          72.2      < .01
23    B          81.0      < .01
24    B          216       < .01
25    B          72.2      < .01
Table A.2 (Appendix A) Expert Rating Task Results (T: Security Technology; B: Best Practices)

Step 5: Construct Validity Assessment (by Non-Experts)

As the final step in the validity assessment of the items, I conducted an online card-sorting exercise among fellow Ph.D. students at the business school (n=8). Card sorting helps determine convergent validity within a construct and discriminant validity against items from other domains (Moore & Benbasat, 1991).
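The screening test behind Table A.2 can be sketched in a few lines. With only two within-judge conditions (target vs. non-target dimension), the one-way repeated-measures ANOVA F(1, n-1) equals the squared paired-samples t statistic, so it can be computed directly from the rating differences. This is an illustrative sketch, not the thesis's code; the five judges' Likert ratings below are invented.

```python
import math

def rm_anova_f(target, nontarget):
    """F(1, n-1) for a two-condition repeated-measures ANOVA,
    computed as the squared paired-samples t statistic on the
    per-judge rating differences."""
    d = [a - b for a, b in zip(target, nontarget)]
    n = len(d)
    mean_d = sum(d) / n
    var_d = sum((x - mean_d) ** 2 for x in d) / (n - 1)  # sample variance
    t = mean_d / math.sqrt(var_d / n)
    return t * t

# Hypothetical 1-5 Likert ratings by five judges for one item
best_practices = [5, 4, 5, 5, 4]  # target dimension
technology = [1, 2, 1, 1, 2]      # non-target dimension
print(round(rm_anova_f(best_practices, technology), 2))  # 42.67
```

An item would be retained when the resulting F(1,4) exceeds the critical value for p < 0.05 (about 7.71), as in the thesis's screening rule.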
Each judge was presented with two columns labeled with the two dimensions of objective security knowledge (i.e., Security Technology and Best Security Practices), their definitions, and the twenty-five items from the previous step in randomized order. Judges were instructed to place each item under the label to which it was most conceptually related. The hit ratio (i.e., correct item placements over total placements across all dimensions) for twenty-four items ranged between 88% and 100%, above the generally accepted threshold of 80% (Cenfetelli et al., 2008; Moore & Benbasat, 1991). One item had a hit ratio of 56% under its target group and was subsequently dropped from the item pool.

Step 6: Pilot Survey Administration

In the final stage, I administered the remaining twenty-four-item questionnaire via TurkPrime (n=165). Men constituted 64% of the participants and women 36%. 70% of the participants identified as White/Caucasian, 13% as Asian, 9% as Black/African American, 5% as Hispanic/Latino, and 4% as other. Participants were compensated USD 3 for answering the questionnaire.

Step 6.1: Item-Difficulty Index

First, I needed to control for the difficulty of the final scales and ensure that they fell within a medium difficulty range, following prior studies (Wang et al., 2016). I calculated the item-difficulty index: item difficulty (p) is the proportion of individuals who answered an item correctly out of the total responses (Adkins, 1960; Boateng et al., 2018; DeVellis, 2016). This number gives an approximate objective assessment of the difficulty of the items based on the pilot study.
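The item-difficulty index described above is a simple proportion. As a minimal sketch (not the thesis's code), the snippet below computes p for Technology Q1 using the option frequencies reported in Table A.4; treating option d as the keyed answer is my assumption for illustration.

```python
def item_difficulty(frequencies, key, n):
    """Item difficulty p: share of respondents choosing the keyed answer."""
    return frequencies[key] / n

# Option frequencies for Technology Q1 from Table A.4 (n = 161);
# option d assumed to be the keyed (correct) answer.
q1 = {"a": 4, "b": 22, "c": 8, "d": 127}
p = item_difficulty(q1, key="d", n=161)
print(round(p, 2))  # 0.79
```

Averaging p over a scale's items yields the "Average Objective Difficulty" column of Table A.3 (.64 and .67 for the two final scales).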
Afterward, I compared the average item difficulty (p) of the final ten items obtained from the pilot study with the mean subjective rating that the five judges gave these items in the second phase and found the objective difficulty from the pilot study to be close to the judges' subjective difficulty assessment. After developing the difficulty index, for each scale I kept the ten questions that produced an equal average difficulty and a split-half reliability above .70.

Scale                    # of items  Average Objective Difficulty (Survey, n=161)  Average Subjective Difficulty (Experts, n=5)  Split-Half Reliability
Security Technology      10          .64                                           .60                                           .72
Best Security Practices  10          .67                                           .61                                           .73
Table A.3 (Appendix A) Average Scale Difficulty

Step 6.2: Split-Half Reliability

For the reliability analysis of the quizzes, I used the split-half method (Aggarwal et al., 2015): each quiz was divided into two parts, and the correlation between the scores on the two parts was calculated. The split-half reliabilities of the final Security Technology quiz and the Best Security Practices quiz were .72 and .73, respectively, above the acceptable threshold of .70 (Bollen, 1989). The omission of four responses did not significantly affect these results; with those four participants included, the split-half reliabilities were again .72 and .73 for the Security Technology and Best Security Practices quizzes, respectively.

Step 6.3: Item Distractor Efficiency Analysis

"Incorrect options" in MCQs are known as distractors, and the quality of a multiple-choice question depends heavily on them (Nunnally & Bernstein, 1994; Tversky, 1964). The main purpose of item distractor efficiency analysis is to find non-functional distractors: options within a question that have very little to no impact on respondents' answers.
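The split-half procedure of Step 6.2 can be sketched as follows: split each quiz into two halves (here, odd and even items, an assumption since the thesis does not state its split), sum each respondent's score per half, and correlate the two half scores. This is an illustrative sketch with invented 0/1 scoring data, not the thesis's code; the reported .72/.73 values are the raw half-score correlations.

```python
import math

def split_half_reliability(score_matrix):
    """Pearson correlation between odd-item and even-item half scores.
    score_matrix: one list of 0/1 item scores per respondent."""
    odd = [sum(items[0::2]) for items in score_matrix]    # items 1, 3, 5, ...
    even = [sum(items[1::2]) for items in score_matrix]   # items 2, 4, 6, ...
    mo, me = sum(odd) / len(odd), sum(even) / len(even)
    num = sum((a - mo) * (b - me) for a, b in zip(odd, even))
    den = math.sqrt(sum((a - mo) ** 2 for a in odd)
                    * sum((b - me) ** 2 for b in even))
    return num / den

# Toy 0/1 scores for five hypothetical respondents on a four-item quiz
scores = [
    [1, 1, 1, 1],
    [1, 1, 1, 0],
    [1, 0, 0, 1],
    [0, 1, 0, 0],
    [0, 0, 0, 0],
]
print(round(split_half_reliability(scores), 2))  # 0.71
```

A Spearman-Brown correction is often applied on top of this half-test correlation; the thesis reports the uncorrected correlation against the .70 threshold.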
A common method for identifying non-functioning distractors is to create a response frequency table; if an option is selected by fewer than 5% of all respondents in a sample, it is considered non-functional and should be either modified or removed from the question (Nunnally & Bernstein, 1994; Rodriguez, 2005). Using this threshold as a guideline, I created the response frequency tables below (bolded choices in the original tables have a response frequency of less than 5%).

Option  Q1   Q2   Q3   Q4   Q5   Q6   Q7   Q8   Q9   Q10
a       4    18   65   17   33   23   9    12   22   27
b       22   9    15   8    59   20   8    30   112  12
c       8    130  4    97   2    93   4    4    25   75
d       127  4    77   39   67   25   140  115  2    47
N       161  161  161  161  161  161  161  161  161  161
Table A.4 (Appendix A) Frequency Distribution of Responses - Technology Questions

Option  Q1   Q2   Q3   Q4   Q5   Q6   Q7   Q8   Q9   Q10
a       3    78   12   4    55   11   9    8    20   1
b       11   25   27   14   27   11   8    140  10   4
c       6    6    21   3    34   116  6    7    7    152
d       141  52   101  140  45   23   138  6    124  4
N       161  161  161  161  161  161  161  161  161  161
Table A.5 (Appendix A) Frequency Distribution of Responses - Best Practices Questions

Step 6.4: Scale Finalization

I noticed that most questions had one option with a response frequency of less than 5% (i.e., a non-functional distractor). This observation is in line with findings from prior literature in psychometrics and educational measurement that the majority of MCQs do not have more than two functioning distractors. A meta-analysis of research in this domain showed that, over the years, MCQs with three options per question have come to be considered the optimal format for knowledge assessment (Rodriguez, 2005): three-option MCQs mathematically have the highest discrimination and power (Tversky, 1964), increase test reliability (Grier, 1976), and maximize information processing by the test taker (Bruno & Dirkzwager, 1995). Observing these patterns, many studies advise test writers to use MCQs with three options over other MCQ formats (Haladyna, Downing, & Rodriguez, 2002; Lord, 1977).
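The 5% rule described above reduces to one comparison per option. As a minimal sketch (not the thesis's code), the snippet below applies it to Technology Q1 from Table A.4:

```python
def non_functional_distractors(frequencies, n, threshold=0.05):
    """Return options chosen by fewer than `threshold` of respondents."""
    return [opt for opt, f in frequencies.items() if f / n < threshold]

# Technology Q1 frequencies from Table A.4 (n = 161)
q1 = {"a": 4, "b": 22, "c": 8, "d": 127}
print(non_functional_distractors(q1, n=161))  # ['a', 'c']
```

Note that a strict < 5% cutoff flags both a (4/161, about 2.5%) and c (8/161, about 4.97%) for this question; the thesis's finalization step (Step 6.4) drops only the single lowest-frequency option per item.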
Given the strong recommendation of the three-option MCQ format in the measurement literature and the presence of one non-functioning distractor in the majority of the questions, I decided to drop the option with the lowest response frequency from each question. Thus, my final scale comprised twenty questions: ten measuring best-practices knowledge and ten measuring technology-related knowledge.

Appendix B (Chapter 2) Security Knowledge Final Items

Security Technology-Related Questions
1. VPN can prevent security threats by _________.
   a) protecting users' online passwords
   b) blocking the download of malicious files
   c) stopping malicious tampering with users' communication over the internet
2. VPN cannot __________.
   a) encrypt users' online traffic
   b) mask users' IP address
   c) secure the device from viruses
3. A firewall can prevent security threats by ________.
   a) monitoring network traffic and permitting/blocking the traffic
   b) removing viruses from the operating system
   c) blocking the download of malicious files from trusted networks
4. A firewall cannot ___________ in order to prevent security threats.
   a) limit computer communications
   b) hide users' connection
   c) control online traffic
5. "Private mode" is a common feature in browsers which can _____.
   a) hide the user's activity from internet service providers
   b) prevent websites from tracking the user's online behavior
   c) remove all cookies after the end of the browsing session
6. A "proxy" cannot ____.
   a) block malicious websites
   b) prevent harmful programs from running on the computer
   c) hide the personal network from the internet
7. Antivirus software cannot prevent security threats by __________.
   a) removing viruses infecting the operating system
   b) blocking the download of malicious files
   c) creating a secure tunnel between the user and the internet
8. Browsers can help prevent security threats by ____.
   a) configuring computer communications
   b) blocking data packets that do not have the authorization to access the network
   c) recognizing dangerous websites
9. "Trojan horse" is a type of malicious software that _____.
   a) can only be embedded in pirated and illegitimate software
   b) can be embedded in both legal, legitimate software and pirated, illegitimate software
   c) does not need to be attached to a file in order to execute
10. "Worm" is a type of malicious software that ______.
   a) can infiltrate the system by damaging the hard drives
   b) can infiltrate the system without attachment to any computer files
   c) can infiltrate the system only after getting attached to a computer file

Best Security Practices-Related Questions

11. It is ____ to use one password for multiple accounts, _____.
   a) safe, as long as the password includes letters and numbers
   b) safe, as long as the password includes special characters (e.g., $, %)
   c) not safe, under any circumstances
12. Which of the following is the most effective approach in mitigating malicious email attacks?
   a) allowing only known contacts to send you emails
   b) being cautious about emails received with a sense of urgency
   c) being cautious about emails received after work hours
13. It _____ to use multiple windows of the same browser for different activities (e.g., shopping, surfing) when connecting online.
   a) is more secure
   b) is less secure
   c) makes no difference
14. Which of the following is the most effective approach in mitigating malicious website attacks?
   a) checking the look and design of the website
   b) checking for https at the beginning of the URL; https ensures the security of the website content
   c) checking the name of the website on the address bar
15. Assuming equal security privileges, using the "guest account" instead of the "administrator account" will ____.
   a) have no impact on the prevention of security threats
   b) negatively impact the prevention of security threats
   c) positively impact the prevention of security threats
16. Updating browser plugins (Adobe, Java) is _____.
   a) only necessary for running the application (showing video or interface), but has no security impact
   b) only necessary for maintaining compatibility with the browser, but has no security impact
   c) necessary both for maintaining compatibility with the browser and for security
17. Which of the following passwords is stronger than the others?
   a) gap3t0$
   b) tag3s1$
   c) trs7gt3$
18. Which of the following passwords is stronger than the others?
   a) 2bbtsty1
   b) 1yugts6
   c) 1jkyaa1
19. If you accidentally visit a malicious website, ________.
   a) you are safe as long as you do not click on anything from the website
   b) you are safe as long as you do not download anything from the website
   c) you are not safe, as it may already be too late to prevent security threats
20. It is ____ to save personal information on institutional computers (e.g., university computer labs), ____.
   a) safe, as long as the computers are owned and controlled by universities
   b) safe, as long as the computers work on a private network
   c) not safe, since the computers are exposed to security vulnerabilities

Table A.6 (Appendix B) Security Knowledge Quiz

Appendix C (Chapter 2) Control Variables Items

Control Variable: Measurement
Gender: What is your gender? Male (1), Female (2)
Age: How old are you?
Under 18 (1), 18–24 (2), 25–34 (3), 35–44 (4), 45–54 (5), 55–64 (6), 65–74 (7), 75–84 (8), 85 or older (9)
IT Education: How many years of education in information technology do you have (including school and professional training)? Less than 1 year (1), 1–2 years (2), 2–3 years (3), 3–4 years (4), 4–5 years (5), 5–6 years (6), More than 6 years (7)
IT Experience: How many years of professional work experience in information technology do you have? Less than 1 year (1), 1–2 years (2), 2–3 years (3), 3–4 years (4), 4–5 years (5), 5–6 years (6), 6–7 years (7), 7–8 years (8), More than 8 years (9)
Prior Security Violation (Anderson & Agarwal, 2010): How often have you personally been affected by an information security violation? Very infrequently (1) … Very frequently (7)
Security News Exposure (Anderson & Agarwal, 2010): How much have you heard or read during the last year about security violations (e.g., threats such as virus attacks and/or unauthorized access to data by hackers)? Far too little (1) … Far too much (7)
Phone OS: What is the operating system of your smartphone? iOS (1), Android (2)
Phone Daily Usage: How many hours during the day do you use your smartphone?
Technology Usage Experience: How many years have you been using technology?
Phone Usage Experience: How many years have you been using a smartphone?
Table A.7 (Appendix C) Control Variables

Appendix D (Chapter 2) Control Variables Descriptive Statistics

Item                              M      SD
Descriptive Norms 1               4.61   1.52
Descriptive Norms 2               3.68   1.48
Descriptive Norms 3               4.14   1.51
Descriptive Norms 4               3.84   1.62
Impulsivity 1                     3.05   1.56
Impulsivity 2                     3.15   1.57
Impulsivity 3                     2.51   1.34
Impulsivity 4                     2.83   1.50
IT Education                       .31   0.99
IT Experience                      .23   0.96
Perceived Threat Severity 1       5.20   1.64
Perceived Threat Severity 2       5.75   1.54
Perceived Threat Severity 3       5.64   1.52
Perceived Threat Vulnerability 1  4.71   1.73
Perceived Threat Vulnerability 2  3.62   1.59
Perceived Threat Vulnerability 3  5.08   1.62
Technology Year Usage            18.85   6.15
Phone Daily Usage                 4.91   3.89
Phone Year Usage                  9.20   2.99
Prior Exposure                    4.51   1.56
Prior Violation                   2.25   1.31
Self-Efficacy 1                   5.54   1.21
Self-Efficacy 2                   5.32   1.16
Self-Efficacy 3                   5.63   1.16
Self-Efficacy 4                   5.16   1.46
Self-Efficacy 5                   4.96   1.45
Social Norms 1                    4.34   1.69
Social Norms 2                    4.51   1.71
Social Norms 3                    4.41   1.60
Table A.8 (Appendix D) Descriptive Statistics – Control Variables

Note.
N=95; *p < .05; **p < .01

Appendix E (Chapter 2) Bivariate Correlations (All Variables)

Each row lists the variable's correlations with variables 1 through k−1, in order.

1. Security level of user's decision: —
2. Objective Security Knowledge: .28**
3. Subjective Security Knowledge: .30**, .50**
4. Default Security Level: .30**, .09, −.05
5. Social Norms: −.23*, .05, −.07, −.01
6. Perceived Threat Severity: .05, .06, −.08, −.05, .25*
7. Perceived Threat Vulnerability: −.01, .06, .01, −.02, .21*, .45**
8. Impulsivity: .04, −.01, .02, −.05, −.11, .01, .13
9. Self-Efficacy: −.05, .16, .22*, .17, .16, .01, −.27**, −.31**
10. Descriptive Norms: −.05, .02, .00, .02, .25*, .18, .10, .09, .19
11. Phone Daily Usage: −.02, −.04, .09, .02, .02, .10, −.04, .15, .07, .16
12. Phone Year Usage: .00, .11, .03, −.07, .04, −.03, −.15, .06, .00, .10, .05
13. Technology Year Usage: .03, .21*, .00, −.03, .21*, .14, .14, −.09, .10, .07, −.14, .28**
14. News Exposure: .10, .32**, .19, .25*, .15, .06, .12, .11, .14, .12, .05, .11, .32**
15. Prior Violation: .01, −.11, .03, −.04, −.08, −.13, .23*, .01, −.35**, .08, .12, .00, .00, .27**
16. Gender: −.03, −.26*, −.34**, .03, .02, .10, .06, −.04, −.09, .01, .09, −.01, .01, −.25*, −.01
17. Operating System: .03, .29**, .14, −.01, .15, −.01, .15, −.05, .15, .07, −.14, −.06, .24*, .24*, −.01, −.19
18. Age: −.10, .03, −.10, .04, .21*, −.03, .16, −.14, .09, .09, −.25*, .16, .67**, .12, −.02, .08, .19
19. IT Experience: −.20, .02, −.11, −.06, −.03, −.06, .06, −.01, −.02, −.07, −.10, −.07, .22*, .01, −.08, −.08, .08, .24*
20. IT Education: −.08, .02, −.11, −.01, −.15, −.11, −.05, .03, −.02, −.11, −.08, .06, .01, −.13, −.17, −.01, −.09, .09, .57**
Table A.9 (Appendix E) Bivariate Correlations

Appendix F (Chapter 2) Collinearity Diagnostics

Table
A.10 (Appendix F) Multicollinearity Check

Model                               B      SE    Beta     t    Sig.  Tolerance   VIF
(Constant)                                3.13           -.46   .65
Objective Security Knowledge       .23    .21    .13    1.10   .28     .59      1.70
Subjective Security Knowledge      .43    .18    .30    2.45   .02     .58      1.71
Default Security Level            3.29    .95    .36    3.45   .00     .82      1.22
Phone Daily Usage                 -.07    .12   -.06    -.57   .57     .81      1.24
Phone Year Usage                  -.06    .16   -.04    -.39   .70     .79      1.26
Technology Year Usage              .12    .09    .20    1.36   .18     .41      2.45
Security News Exposure            -.18    .37   -.06    -.48   .63     .55      1.80
Prior Security Violation          -.02    .44   -.01    -.05   .97     .58      1.74
Gender                             .60    .99    .06     .61   .55     .78      1.29
Operating System                   .37    .98    .04     .37   .71     .77      1.30
Age                               -.48    .57   -.12    -.83   .41     .45      2.22
Perceived Threat Vulnerability    -.36    .45   -.10    -.81   .42     .55      1.84
Social Norms                      -.82    .42   -.21   -1.96   .05     .77      1.29
Perceived Threat Severity          .49    .41    .15    1.21   .23     .60      1.66
Impulsivity                       -.01    .35    .00    -.04   .97     .75      1.34
Self-Efficacy                    -1.05    .65   -.20   -1.61   .11     .55      1.83
Descriptive Norms                  .09    .52    .02     .16   .87     .80      1.24
IT Experience                     -.87    .60   -.18   -1.45   .15     .57      1.76
IT Education                       .15    .57    .03     .26   .80     .58      1.71
a. Dependent Variable: Security level of user's decision
(B = unstandardized coefficient; Beta = standardized coefficient)

Appendix G (Chapter 2) Influential Outliers

Figure A.1 (Appendix G) Cook's D Check

Appendix H (Chapter 3) Phase 1 Questionnaire

Question 1. Can you give me examples of specific decisions that you commonly make?
Question 2. Can you walk me through your decision process for each of the decisions you just mentioned? [The interviewer will write down/record the process of judgment for each decision.]
Question 3. [For this question, the interviewer presents the participants with the following scenarios and asks them how they will make decisions.
The interviewer will not interfere with the participant's verbal explanations.]

o Scenario 1: You just purchased a new laptop and have finally transferred all your information from your previous device to the new one. You would like to install security software on your new device. How would you make such a decision, considering there are many types of security software out there? What factors go into your decision? Say your thought process out loud.

o Scenario 2: Yesterday, you received an email regarding a party on campus. The email contained a brochure attachment that you downloaded but were not able to open. The next day, you see on Facebook that this email was sent to all members of an online group in which you are active. You also find out the email contained computer malware, such as a computer virus. What steps do you take to make sure the virus gets removed from your computer?

Question 4. [Once the participant discusses how she made those decisions, the interviewer will ask follow-up questions (if necessary) to investigate gaps and inconsistencies in the individual's logic.]
Question 5. On a scale of 1 to 7 (1 being not important at all, 7 being very important), how much do these security decisions matter in your day-to-day life? [The interviewer will ask participants to provide a rating for each of the decisions mentioned in Question 1.]

Appendix I (Chapter 3) Phase 2 Questionnaire

Password Creation

1. You are signing up for a new website:
   a. How do you choose a password? [The interviewer will write down the step-by-step process.]
   b. Please tell me the password you selected.
   c. Do you use different passwords for various websites? Do requirements by various sites cause you to change your password? Explain why.
   d. Do you change your passwords regularly? Explain why.
   e. Do your passwords contain a word that can be found in a dictionary?

Security Software

1. Do you have any of the following software installed on your laptop?
   a. Firewall? (Yes/No)
   b.
Antivirus? (Yes/No)
   c. VPN? (Yes/No)
2a. [If the answer to any of 1a-1c is yes] Did you install them yourself, or did they come pre-installed on your laptop? Please explain.
2b. [If the answer to any of 1a-1c is yes] Do you make an effort to make sure your software is always up to date? Please explain.
2c. [If the answer to all of 1a-1c is no] What do you think is the reason why you have never used security software?
3. How do you decide on getting security software? [The interviewer will write down the step-by-step process.]
4. Does the brand of the device you have influence your decision on whether you need an antivirus? Please explain.
5. [If the answer to any of 1a-1c is yes] Do you scan your computer often, or do you just wait until you see an alert?
   a. What type of scan?
   b. How do you decide whether you want to scan or not, assuming you have not gotten a virus that you are aware of? [The interviewer will write down the step-by-step process.]

Account/Device Management

1. Some accounts usually come with pre-defined settings (Login Alert, 2FA, …). In the case of accounts for websites that are important to you [discovered in Q1], do you change these settings, or do you let them be as they are initially set? Why?
2. In the case of accounts for websites that are important to you [discovered in Q2], do you change these settings, or do you let them be as they are initially set? Why?
3. What about mobile apps? Many have pre-defined permissions. Do you change these settings, or do you let them be? Why?
4. Have you ever backed up the data in the accounts you have? Why?
5. Assume you suspect you have gotten a virus. Either your antivirus shows you a message that a file is malicious, or you found out a recent email attachment that you downloaded was malicious. What do you do to eliminate this threat? [The interviewer will write down the step-by-step process.]
6. Assume you suspect that someone has hacked one of the accounts important to you. What do you do to eliminate this threat?
[The interviewer will write down the step-by-step process.]

Web Browsing

1. Are there any best practices you follow when it comes to your online browsing? Explain.
2. Do you use public Wi-Fi for browsing? Explain.
3. Does the brand of the browser you use change the way you follow best security practices?
4. If you see a warning message in a browser that the website you are trying to access is dangerous, what would you do? [The interviewer will write down the step-by-step process.]
5. How do you detect phishing websites? [The interviewer will write down the step-by-step process.]
6. Does the brand of the browser you use change the way you detect phishing websites?

Appendix J (Chapter 3) Data Analysis Stages and Adherence to Main Criteria

It is also important to highlight how I attempted to adhere to the main criteria of qualitative research in this study. Lincoln and Guba (1985) proposed four main criteria (analogous to validity and reliability criteria in quantitative research) that should be satisfied in qualitative research. Accordingly, in addition to following the stages of the Framework Analysis, I integrated techniques and quantitative measurements into parts of the analysis (e.g., indexing) that can help satisfy those four main criteria. Table A.11 shows these four criteria, their objectives, how they were addressed in the study, and in which stage of the framework analysis they were integrated.

Credibility
• "Credibility establishes whether the research findings represent plausible information drawn from the participants' original data and is a correct interpretation of the participants' original views."
• The authors suggest that credibility can be achieved via triangulation, which itself can involve the triangulation of data, methods, or investigators.
\u2022 In this study, I aimed to satisfy this criterion by using triangulation of data and investigators.   \u2022 Data triangulation was achieved in the data collection stage by using three forms of data collection during interviews: process tracing, semi-structured questioning, and think-aloud techniques.   \u2022 Investigator triangulation was achieved in the indexing stage by calculating the Fleiss Kappa from investigators\u2019 rating from the card sorting activity.    Transferability \u201cTransferability is the degree to which the results of qualitative research can be transferred to other contexts or settings with other respondents.\u201d  \u2022 The authors suggest that this can be achieved by providing a thick description of the study. This includes describing the context in which the research was carried out, its setting, sample, sample size, sample strategy, demographic, and related information.    \u2022 The information was included during the familiarization stage of the analysis.   Dependability  &  Confirmability \u201cDependability refers to the stability of findings over time.\u201d   \u201cConfirmability is the degree to which the findings of the research study could be confirmed by other researchers.\u201d  \u2022 The main approach for addressing dependability and confirmability is by keeping an audit trail. This includes storing a complete set of notes on decisions made during the research process, reflective thoughts, sampling, research materials adopted, and the emergence of the findings and information about data management.   \u2022 I attempt to provide a trail of the analysis in all the stages of the analysis from familiarization, coding, indexing until charting, mapping, and interpretation. The major points were all included in the body of the study. Research questions and detailed card sorting results were presented in the chapter appendices.  
Table A.11 (Appendix J) Quality Criteria for Qualitative Research (Korstjens & Moser, 2018; Lincoln & Guba, 1985)

Appendix K (Chapter 3) Card Sorting Results

#  Statement | Anchoring, Availability, Representativeness, Expertise, Brand name, Affect | Assigned Heuristic

1  Let's say my bank account or something that does have my banking information on it. I just did things that I feel like are secure, for example, choosing three to five friends | 0.00, 0.14, 0.00, 0.00, 0.00, 0.86 | Affect
2  [On turning off Bluetooth/location]: I don't know [why I do it]. It's just something I do. I don't think about the consequences. I just, you know, in my mind, I think I feel more safer when my GPS is off. | 0.00, 0.00, 0.00, 0.00, 0.00, 1.00 | Affect
3  [on installing security software] I think my decision is just straightforward. I'll probably just install whatever security software I use beforehand on my new laptop since I'm already like a fan of it. | 0.00, 0.86, 0.00, 0.14, 0.00, 0.00 | Availability
4  [on installing a security software] I think maybe whether I need to pay for that or not. If it's free, I might go for it. Maybe based on convenience. Yeah. And yeah, based on these two things [interviewer: what do you mean by convenience] maybe just like pressing some buttons or downloading a software thought it does not require like lots of procedures. | 0.00, 1.00, 0.00, 0.00, 0.00, 0.00 | Availability
5  I have like a lot of accounts I'm using social media for long years, so I have couple questions that I do not forget. I think that's the main reason I [keep using the same security questions]. | 0.00, 1.00, 0.00, 0.00, 0.00, 0.00 | Availability
7  [on creating passwords] It's easier for me to remember the password and it's also stuff that I'm not putting any other, like extra personal information. So, I feel safe to just like have one password for all of them. [Interviewer: So, would you say that convenience is the biggest reason you would use a normal password across these websites?] Yes. | 0.00, 1.00, 0.00, 0.00, 0.00, 0.14 | Availability
9  [on creating passwords] I just always kind of revolve around those few key words. [interviewer: why would use these?] I would say the more convenient, because sometimes you have a lot of accounts | 0.14, 1.00, 0.00, 0.00, 0.00, 0.00 | Availability
10  [On selecting a new security software] The very first thing I would do, because I know my dad, he's worked a lot and a lot with security things because he's a mechanical engineer and that's always a very common thing for him. The very first thing I would do is definitely consult with him, whether he knows it or not, there's people in his company that do. | 0.43, 0.00, 0.00, 1.00, 0.00, 0.00 | Expertise
12  [On removing viruses] I go to people who are experts in the field, both my partner and my dad worked in computers. So, both of them have ideas of what are good, tried, and true, security programs that I can rely on so typically that's how I would go about it because I would want to avoid downloading something that is in and of itself malware. | 0.29, 0.00, 0.00, 1.00, 0.00, 0.00 | Expertise
13  [On removing viruses] I would reach out to people who I know who are in my network who have experienced with computer. So typically, my dad or my partner, | 0.29, 0.00, 0.00, 1.00, 0.00, 0.00 | Expertise
14  [on removing a virus] I would probably do a Google search. So, I'd like, look at some like reviews and the pros and cons, on various websites. Perhaps I talked to someone at best buy or like electronic retailer. | 1.00, 0.14, 0.00, 0.57, 0.00, 0.00 | Anchoring
15  [on removing viruses] I would either maybe like contact the number I could call for like my software, like a support system or something, you know, like a call center, maybe to walk me through it or if not, I would probably go to somewhere like best buy and see if they have any information. | 0.57, 0.00, 0.00, 1.00, 0.14, 0.00 | Expertise
16  [on creating passwords] I have like kind of a template of a password. I use the templates because I find it easier to remember it. Cause if I use a different password for like every single website, I have so many passwords and so many different logins that I would just always forget my password. And then that's just, that just becomes a hassle. | 0.14, 0.86, 0.00, 0.00, 0.14, 0.00 | Availability
17  [on selecting a security software] I would look online to see which ones are best rated. So, like which ones people recommend and have had the most experience with. I would also probably ask my dad for more information and what he thinks I should get. | 0.86, 0.00, 0.29, 0.71, 0.14, 0.00 | Anchoring
18  [On removing virus] What I would do is I have a MacBook, so I would immediately go to a Mac store and tell them what happened. I would tell them that I just downloaded something, and I might have a virus and I'm not sure how to check it. | 0.29, 0.00, 0.00, 0.86, 0.71, 0.00 | Expertise
19  [On removing virus] I might just call, call the call Apple shop for advice, or search online for like what other people do to like to remove the virus and, | 0.43, 0.00, 0.00, 0.86, 0.71, 0.00 | Expertise
20  [on installing security software] My boyfriend is a computer engineer. I think I will need help from him. I'm not familiar with those applications and installing. | 0.29, 0.00, 0.00, 1.00, 0.00, 0.00 | Expertise
21  [on installing security software] I've never used a security software before, so, uh, what I will do is I will ask my friends that know a lot about tech, and I know that are like really concerned about their security, to know what type of software they use. And I will also look online and look at reviews. | 0.57, 0.00, 0.00, 0.86, 0.00, 0.00 | Expertise
23  [on removing virus] if I'm not able to, to remove the virus myself, I might take it. You like, um, like those tech companies that remove viruses, like the third-party ones | 0.14, 0.00, 0.00, 0.86, 0.00, 0.00 | Expertise
24  My passwords are two parts: The first one is fixed and it's the same in all my accounts, but the second part is different. [interviewer: why would you use such a system?] there is a lot of password and I have to remember it. That's why they have to be easy. | 0.14, 1.00, 0.00, 0.00, 0.00, 0.00 | Availability
25  [On using security software] usually, also I would do some digging around online, like, just some Googling on what the best are, what are what's considered the best, antivirus software. And I would also ask my friends as to what they use and what they recommend and then use all that info to make a decision | 1.00, 0.00, 0.00, 0.43, 0.00, 0.00 | Anchoring
27  [on device usage] I don't use an antivirus. All of my products are Apple, so I feel really like safe on that side. Like I'm not downloading anything that I don't trust and it's hard to get viruses in Apple | 0.00, 0.00, 0.00, 0.00, 1.00, 0.29 | Brand
32  [on whether they get help on security decisions from others] I have a friend. I go to them because I'm not specialized in information technology, so I think, it is necessary to have some advice. I mean, in the beginning you have some advice from our friends in order to proceed in a secure way. [Interviewer: Does trust matter?] It plays a very important role, but I just go for suggestions and eventually it's my own decision and I will be accountable for the result. | 0.71, 0.00, 0.00, 0.86, 0.00, 0.14 | Expertise
34  [on whether they get help on security decisions from others] I have some family members that are more well versed in security. I go to them if I'm unsure about a decision.
Usually I, when it comes to privacy, I, I have a very clear goal or a clear idea of what I want, but if it comes to something that I'm not sure of how a site operates, or I'm not sure about its reputation, I will go to them. [Interviewer: Does trust matter?] Heavily. I mean, I trust my family and I, I know that they have my best interests at heart when they get me recommendations or when they get advice. So, I trust them on that aspect and the friends that I know who are in it are very close friends. So, I also know that they'll give me advice that is best suited to my needs. | 0.43, 0.00, 0.00, 0.86, 0.00, 0.29 | Expertise
38  [on device usage] I guess brand is important in my decisions. I just heard that Apple is generally safer than Windows when it comes to viruses and stuff. | 0.14, 0.00, 0.00, 0.00, 1.00, 0.14 | Brand
40  [on device safety and brands] with the laptops, I know I'd say Apple slightly safer. With PCs, viruses are more common just because there's so many more PCs. | 0.00, 0.00, 0.14, 0.00, 1.00, 0.00 | Brand
42  [on removing a virus] I would probably panic. Power off the device or whatever and probably run a scan, try to fix it that way. I probably do my own thing and won't ask anyone. | 0.00, 0.14, 0.00, 0.00, 0.00, 1.00 | Affect
43  [on security software usage] I haven't gotten anything. Primarily because especially for the antivirus, since it's a Mac and the viruses are significantly less common on them, that would be for, for that reason. Also, I haven't really found any need for it yet. | 0.00, 0.14, 0.00, 0.00, 1.00, 0.14 | Brand
45  [on choosing passwords] Most of the time I choose something that is easy for me to remember, but at the same time is kind of something that I don't think anyone would think of.
I think there's kind of this unconscious process looking as well as like, Oh, this a website is littered with ads. 0.00 0.00 1.00 0.00 0.00 0.29 Representativeness 61 [on secure websites] most of the websites I visit has the connection is secure, The look on their website name. Um, do you know what I mean? always try to, if I am shopping, I do the shop if the website has this secure image and besides that on the website, if there is lots of, lots of commercials and the pop-ups, and I try to avoid these kinds of that websites because every popup and try to link you and other websites. So, I kind of feel like that they're unsafe. I think presence of the lock means the website is secure.   I don't proceed. I just closed the window. 0.14 0.14 1.00 0.00 0.00 0.29 Representativeness 65 [on what is a secure website] Probably just text and kind of how the website is formatted. Right.  just general formatting in 21st century websites, kind of a clean, modern look. I would make it; I would assume it's more legitimate. 0.00 0.14 0.86 0.00 0.00 0.14 Representativeness 66 [on what is a secure website] In the search bar at the top of most search engines, there's a small lock to show if the website's secure or not. If it's not secure that I know that it's not to be fully trusted. And if I see popups starting to come, uh, well I have, certain blockers on my, on my browser. So, they get blocked, and I see notifications that pop ups are being blocked. I I'll know that it's not a website to completely trust things like that. 0.14 0.00 1.00 0.00 0.00 0.14 Representativeness 70 [On getting an antivirus] If I were to get an antivirus, I would go on Google and search \"What is the most powerful antivirus software?\" [interviewer: how do you decide a software is powerful?]  I see how many people have already installed it. Like in Google Play there are some information regarding how many people use it [a software]. I also look at the ratings.  
1.00 0.14 0.43 0.14 0.00 0.00 Anchoring 71  [On safe practices for web browsing] I probably use the incognito browser, but more than that, I don't really, I don't change any habits. I know it's not safer.  I guess like subconsciously I'm aware that it doesn't, but it just makes me feel better to be using [incognito] because it just feels less public to me. 0.00 0.14 0.14 0.00 0.00 1.00 Affect 74 I have a friend. I go to them because I'm not specialized in information technology, so I think, it is necessary to have some advice. I mean, in the beginning you have some advice from our friends in order to proceed in a secure way. 0.14 0.00 0.00 1.00 0.00 0.00 Expertise 77 [on creating passwords] I create password based on Something that is easy to remember.  Maybe a combination of capital letter, a lowercase letter, maybe a special character as well. [For] most of my accounts, I have the same password.  0.00 0.86 0.00 0.00 0.00 0.00 Availability 142  # Statement Anchoring  Availability Represent- ativeness Expertise Brand name Affect Heuristics 89 [on device they are using] A MacBook. Yes,  I feel safer. I think overall, they are superior.  0.00 0.00 0.00 0.00 0.86 0.14 Brand 90 [on whether they get help on security decisions from others] Yes, I have one. Well, I trust this person. He\u2019s very smart. I don't know what else to say. I'm not very good with computers and stuff. He definitely knows more than me. 0.00 0.00 0.00 1.00 0.00 0.14 Expertise 91 [on whether they get help on security decisions from others] I do. My partner is a software developer. [I rely on his opinion]  Maybe a little, yeah. I still make most of my own decisions. I know I can trust my partner over some random website.  0.14 0.14 0.00 0.86 0.00 0.00 Expertise 92 [on whether they get help on security decisions from others] Yes. They're probably the person who  would know the most about it in my circle. So, for a quick answer, quick kind of general topic, I would go to them. 
[Interviewer: Does trust matter?] Yes. Helps with how reliable and useful their advice is.
    Anchoring 0.14 | Availability 0.14 | Representativeness 0.00 | Expertise 0.86 | Brand 0.00 | Affect 0.00 | Heuristic: Expertise

Table A.12 (Appendix K) Hit Ratio Assessment

Heuristic            Fleiss' kappa
Anchoring            κ = .83 (95% CI, .828 to .839), p < .05
Availability         κ = .77 (95% CI, .765 to .776), p < .05
Representativeness   κ = .82 (95% CI, .813 to .824), p < .05
Expertise            κ = .80 (95% CI, .793 to .803), p < .05
Brand                κ = .86 (95% CI, .855 to .865), p < .05
Affect               κ = .82 (95% CI, .813 to .824), p < .05
Table A.13 (Appendix K) Fleiss' Kappa Between the Study Investigators

Appendix L (Chapter 4) Password Entropy Post Hoc Comparison (Main Effects)
Table A.14 (Appendix L) Main Effect Pairwise Comparison (Password Entropy)

Post Hoc Comparisons - Expertise
Comparison                                                 Mean Diff    SE    df     t    p(Tukey)
High Construal Expertise - Low Construal Expertise             .03     .96   397    .03   1.00
High Construal Expertise - No Expertise                       1.95     .95   397   2.05    .10
Low Construal Expertise - No Expertise                        1.92     .95   397   2.02    .11

Post Hoc Comparisons - Availability
Comparison                                                 Mean Diff    SE    df     t    p(Tukey)
High Construal Availability - Low Construal Availability     -8.64     .97   397  -8.91   < .01**
High Construal Availability - No Availability                 -.01     .95   397   -.01   1.00
Low Construal Availability - No Availability                  8.63     .95   397   9.14   < .01**

Post Hoc Comparisons - Representativeness
Comparison                                                             Mean Diff    SE    df     t     p(Tukey)
High Construal Representativeness - Low Construal Representativeness     -6.40     .96   397   -6.67   < .01**
High Construal Representativeness - No Representativeness                 3.14     .95   397    3.31   < .01**
Low Construal Representativeness - No Representativeness                  9.54     .95   397   10.02   < .01**

Appendix M (Chapter 4) Security Setting Post Hoc Comparison (Main Effects)

Post Hoc Comparisons - Expertise
Comparison                                                 Mean Diff    SE    df     t    p(Tukey)
High Construal Expertise - Low Construal Expertise             .92     .15   397   6.27   < .01**
High Construal Expertise - No Expertise                       1.33     .15   397   9.18   < .01**
Low Construal Expertise - No Expertise                         .42     .14   397   2.88   < .05*

Post Hoc Comparisons - Availability
Comparison                                                 Mean Diff    SE    df     t    p(Tukey)
High Construal Availability - Low Construal Availability     -1.12     .15   397  -7.56   < .01**
High Construal Availability - No Availability                  .28     .14   397   1.92    .136
Low Construal Availability - No Availability                  1.39     .14   397   9.67   < .01**

Post Hoc Comparisons - Representativeness
Comparison                                                             Mean Diff    SE    df     t    p(Tukey)
High Construal Representativeness - Low Construal Representativeness      -.84     .15   397   -5.75   < .01**
High Construal Representativeness - No Representativeness                  .15     .14   397    1.06    .537
Low Construal Representativeness - No Representativeness                   .99     .15   397    6.86   < .01**
Table A.15 (Appendix M) Main Effect Pairwise Comparison (Settings Security Level)
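The agreement statistics in Table A.13 follow Fleiss' standard chance-corrected formulation. The investigators' raw rating matrices are not reproduced in this excerpt, so the sketch below only illustrates the computation on hypothetical counts; the function name and example data are the author's of this note, not the thesis's.

```python
from typing import List

def fleiss_kappa(counts: List[List[int]]) -> float:
    """Fleiss' kappa for N subjects rated into k categories.

    counts[i][j] = number of raters who assigned subject i to category j.
    Every row must sum to the same number of raters n.
    """
    N = len(counts)            # number of subjects (statements)
    n = sum(counts[0])         # raters per subject
    k = len(counts[0])         # number of categories (heuristics)

    # Mean per-subject observed agreement P_bar
    P_bar = sum(
        (sum(c * c for c in row) - n) / (n * (n - 1)) for row in counts
    ) / N

    # Chance agreement P_e from the marginal category proportions p_j
    p = [sum(row[j] for row in counts) / (N * n) for j in range(k)]
    P_e = sum(pj * pj for pj in p)

    return (P_bar - P_e) / (1 - P_e)

# Hypothetical example: 3 raters, 2 statements, 2 categories,
# all raters agree on every statement -> kappa = 1.0
print(fleiss_kappa([[3, 0], [0, 3]]))   # 1.0
```

Agreement below chance yields a negative kappa, e.g. `fleiss_kappa([[2, 1], [1, 2]])` returns -1/3.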