COMPETENCY-BASED ASSESSMENT IN CLINICAL HIGH-FIDELITY SIMULATION: A SURVEY OF METHODS USED IN UNDERGRADUATE NURSING

by

Peterson Masigan

B.S.N., The University of British Columbia, 2004

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE IN NURSING in THE FACULTY OF GRADUATE AND POSTDOCTORAL STUDIES

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

October 2015

© Peterson Masigan, 2015

Abstract

This study aimed to describe the current use of competency-based assessment frameworks and tools in nursing programs in British Columbia (BC) that utilize high-fidelity simulation (HFS). HFS is being adopted and used by nursing programs at an increasing rate. Competency-based assessment frameworks and tools offer an effective way to assess student learning and competence when HFS is used as part of teaching. However, the assessment methods currently used by nurse educators who utilize HFS mostly involve measuring students' self-reported competence, confidence, or satisfaction with the learning process. These instruments are typically designed in-house, and many have not been tested for validity or reliability.

A survey study was designed to explore the frameworks and instruments currently used by nurse educators, trends in specialized training for nurse educators who use HFS as a teaching tool, HFS utilization in nursing programs, and challenges experienced when using HFS. An online survey was used to collect data from nurse educators in British Columbia. Findings indicated inconsistent use of competency-based assessment frameworks and tools across nursing programs in BC. Participants reported completing formative assessments after each HFS scenario, but a large majority of participants did not complete any summative assessments when utilizing HFS activities as part of their teaching. Values at the lower end of the range were reported for both the number of nurse educators with specialized training in HFS and the hours of HFS exposure students received in their programs. Reported challenges included students' attitudes towards the realism of HFS scenarios, as well as nurse educators' resistance to implementing best practices for HFS use in education. Further research on developing valid and reliable competency-based assessment frameworks and tools, and on implementing consistent use of these tools in nursing programs, is recommended.

Preface

The author, Peterson Masigan, designed the entire study with review and feedback from the thesis supervisory committee (Dr. Bernie Garrett, Dr. Tarnia Taverner, and Ms. Elsie Tan). The author designed the data collection tool with feedback from his committee, and with the invaluable assistance of Dr. Suzanne Campbell, Dr. Leanne Currie, and Dr. Afshin Khazei. The data collection tool was adapted with permission from Dr. Wendy Nehring and Dr. Felissa Lashley's original research completed in 2004. The author recruited all the participants with the help of the nursing department chairs from the nursing programs involved in the study:

• Dr. Suzanne Campbell – University of British Columbia, Vancouver Campus
• Dr. Tru Freeman – Kwantlen Polytechnic University
• Dr. Noreen Frisch – University of Victoria
• Dr. Kathy Fukuyama – Vancouver Community College
• Dr. Darlaine Jantzen – Camosun College
• Ms. Krista Lussier – Thompson Rivers University
• Ms. Janine Lennox – Langara College
• Dr. Patricia Marck – University of British Columbia, Okanagan Campus
• Ms. Teresa Petrick – Selkirk Community College
• Ms. Linda Pickthall – Douglas College
• Ms. Monique Powell – Okanagan College
• Ms. Kathy Quee – British Columbia Institute of Technology
• Ms. Norma Sherret – College of the Rockies
• Ms. Leslie Sundby – Vancouver Island University

This research study was completed in collaboration with the nursing programs in British Columbia listed above. The author cleaned, coded, and analyzed all of the data. The author conducted the entire study and wrote the thesis from Chapters 1 to 6. The thesis committee reviewed and provided feedback on the author's work after each chapter. This research was approved by the following research ethics boards (certificate numbers):

• University of British Columbia Behavioural Research Ethics Board (H14-00945)
• British Columbia Institute of Technology Research Ethics Board (2015-03)
• Langara College Research Ethics Board (20150327-1)
• Vancouver Community College Research Ethics Board (201504-09)

The author certifies that he is the sole author of this thesis and that no part of this thesis has been published or submitted for publication. The author certifies that, to the best of his knowledge, his thesis does not incorporate, without acknowledgement, any materials previously submitted for a degree or a diploma in any institution of higher education. The author certifies that his thesis does not contain any material previously published or written by another person except where due reference is made in the text. The author declares that this is a true copy of his thesis.

Table of Contents

Abstract
Preface
Table of Contents
List of Tables
List of Figures
List of Abbreviations
Definition of Terms (Glossary)
Acknowledgements
Dedication
Chapter 1: Introduction
  1.1 Significance
  1.2 Problem Statement
  1.3 Statement of Purpose
    1.3.1 Research Questions
  1.4 Conceptual Definitions
  1.5 Conceptual Framework: Lenburg's Competency Outcomes Performance Assessment (COPA) Model
    1.5.1 Core Practice Competencies
    1.5.2 Competency Outcomes
    1.5.3 Interactive, Practice-Focused Learning
    1.5.4 Use of Competency Performance Examinations and Assessments
Chapter 2: Literature Review
  2.1 Clinical Simulation
    2.1.1 Fidelity
    2.1.2 Educational Value
    2.1.3 Resource Implications
    2.1.4 Effectiveness
  2.2 Competence
    2.2.1 Curricular Design and Competence
    2.2.2 Competence in Nursing Education and Practice
  2.3 Education and Assessment
    2.3.1.1 Competency-Based Assessment Tools
    2.3.1.2 Current Assessment Tools in Simulations/Simulated Activities
      2.3.1.2.1 Objective Structured Clinical Examination
      2.3.1.2.2 Simulation Evaluation Instrument
      2.3.1.2.3 Medication Administration Safety Assessment Tool
      2.3.1.2.4 The Clinical Simulation Grading Rubric
      2.3.1.2.5 Lasater Clinical Judgment Rubric
      2.3.1.2.6 Emergency Response Performance Tool
      2.3.1.2.7 Level 2 Synthesis Clinical Evaluation Instrument
      2.3.1.2.8 Clinical Simulation Evaluation Tool
    2.3.2 Gaps in Competency-Based Assessment in HFS in Nursing Education
Chapter 3: Methods
  3.1 Study Design
  3.2 Ethical Considerations
    3.2.1 Data Management
  3.3 Sample
    3.3.1 Inclusion Criteria
    3.3.2 Exclusion Criteria
  3.4 Study Procedures
    3.4.1 Recruitment
    3.4.2 Data Collection Tool Development
    3.4.3 Data Collection
  3.5 Data Analysis
  3.6 Rigor and Validity
Chapter 4: Findings
  4.1 Participant Demographics
  4.2 Simulation Use
    4.2.1 HFS Utilization
    4.2.2 Training in HFS Use
  4.3 Assessment
    4.3.1 Assessment of Student Learning
    4.3.2 Assessment Framework and Tools
      4.3.2.1 Competency-Based Assessment Frameworks and Tools
      4.3.2.2 Other Assessment Frameworks and Tools (Non-CBAT)
  4.4 Assessment Considerations in HFS
    4.4.1 Educational Theories
    4.4.2 Evidence for Assessment Decisions
    4.4.3 Other Considerations
    4.4.4 Challenges
  4.5 Summary
Chapter 5: Discussion
  5.1 Discussion of Findings
    5.1.1 Demographics
    5.1.2 HFS Use
    5.1.3 Assessments
      5.1.3.1 Competency-Based Assessment Frameworks and Tools
      5.1.3.2 Other Assessment Frameworks and Tools (Non-CBAT)
    5.1.4 Challenges
  5.2 Limitations
Chapter 6: Conclusion
  6.1 Summary
  6.2 Recommendations for Future Research
  6.3 Implications for Nursing Education
References
Appendices
  Appendix A: Initial Email Contact Letter
  Appendix B: Letter of Information
  Appendix C: Follow-up Email Contact Letter
  Appendix D: Permission Letter to Adapt Data Collection Tool From Dr. Wendy Nehring
  Appendix E: Online Survey

List of Tables

Table 4.1: Participant Demographics
Table 4.2: Types of Nursing Program
Table 4.3: Competency-Based Assessment Tools, and Validity

List of Figures

Figure 4.1: Years Practicing as Registered Nurse (RN)
Figure 4.2: Years Teaching in Bachelor of Science in Nursing Program
Figure 4.3: Years Using High Fidelity Simulation in Teaching
Figure 4.4: Total Hours of HFS Exposure
Figure 4.5: Clinical Specialty Areas Simulated
Figure 4.6: Number of Faculty with Specialized Training in HFS
Figure 4.7: Frequency of Formative Assessment
Figure 4.8: Frequency of Summative Assessment
Figure 4.9: Origin of Competency-Based Assessment Tool (CBAT), and Validity
Figure 4.10: Origin of Other Assessment Tool, and Validity
Figure 4.11: Evidence for Assessment Decisions
List of Abbreviations

ACLS: Advanced Cardiac Life Support
BC: British Columbia
BREB: Behavioural Research Ethics Board
BSN: Bachelor of Science in Nursing
CBAT: Competency-Based Assessment Tool
CNA: Canadian Nurses Association
COPA: Competency Outcomes Performance Assessment
CRNBC: College of Registered Nurses of British Columbia
CSET: Clinical Simulation Evaluation Tool
ERPT: Emergency Response Performance Tool
HFS: High-Fidelity Simulation
HPS: Human Patient Simulator
LCJR: Lasater Clinical Judgment Rubric
MASAT: Medication Administration Safety Assessment Tool
NCSBN: National Council of State Boards of Nursing
OSCE: Objective Structured Clinical Examination
REB: Research Ethics Board
RN: Registered Nurse
SCE: Simulated Clinical Experience
SEI: Simulation Evaluation Instrument
U.S.: United States
USDOE: United States Department of Education

Definition of Terms (Glossary)

Competency-Based Assessment: Assessment that uses outcome statements worded as practice expectations consistent with actual nursing practice. Competence is determined by looking at the learner's ability to complete the simulation and demonstrate effective practice employing key principles, instead of merely being able to complete sequential steps (Lenburg, Klein, Abdur-Rahman, Spencer, & Boyer, 2009).

Formative Assessment: Assessment that determines the learner's response to a specific learning experience, primarily designed to provide feedback to support the learner's further learning (e.g., after a class) (Bastable, 2014).

High-Fidelity Simulation: Structured learning exercises using a technologically advanced, anatomically accurate, computerized mannequin that mimics physiologic responses – the Human Patient Simulator (HPS). These exercises are situated in an environment that replicates a clinical setting, where educators control the mannequin's responses based on the interventions of the student (National Council of State Boards of Nursing, 2009).

Summative Assessment: Assessment that determines the outcome or effect of the teaching effort, focused on a longer time period than formative assessment and primarily designed to measure student performance/learning, usually terminally (e.g., an exam at the end of a course) (Bastable, 2014).

Acknowledgements

My deepest gratitude goes out to my thesis committee, Dr. Bernie Garrett, Dr. Tarnia Taverner, and Ms. Elsie Tan, who guided me through the entire process of my thesis, and consistently supported me every step of the way. As well, to the faculty at the UBC School of Nursing, and to my classmates, who encouraged me during the last four-and-a-half years of my life, I owe heartfelt thanks. I am a better nurse and person because of my interactions with each and every one of you.

To my family and friends who pushed me every step of the way to stay on task – thank you. Without your help in keeping me accountable, I would have never accomplished this task. Your encouragement and prayers fuelled me to keep going.

Dedication

This body of work is dedicated to my parents – Peter and Tess Masigan – who instilled in me the value of education from a very young age, and always encouraged me to push myself.

Chapter 1: Introduction

High-Fidelity Simulation (HFS) is an education delivery modality that is being used within nursing education at an increasing rate (Akhtar-Danesh, Baxter, Valaitis, Stanyon, & Sproul, 2009; Arnold et al., 2009; Cant & Cooper, 2010; Gore, Hunt, & Raines, 2008; Todd, Manz, Hawkins, Parsons, & Hercinger, 2008).
However, there is a lack of robust evidence within the research literature on the effects of simulation on nursing students' learning outcomes (Arnold et al.; Cant & Cooper; Gore et al.; Herm, Scott, & Copley, 2007; Radhakrishnan, Roche, & Cunningham, 2007). One particular aspect of simulation-based education that continues to challenge nursing educators is the lack of valid and reliable tools for assessing whether students achieve the learning outcomes associated with the simulations they engage in (Attree, 2006; Arnold et al.; Gore et al.; Kardong-Edgren, Adamson, & Fitzgerald, 2010; Radhakrishnan et al.). Given this problem, this study aimed to describe the current methods used by nurse educators to assess their students' competency when utilizing HFS.

1.1 Significance

Evaluating whether students learn the intended outcomes when using a teaching method is a critical part of education, as important as the actual teaching process (DeYoung, 2009). Competency-based student assessment enables the nurse educator to help students identify their learning needs, as well as to make judgments about students' overall ability to practice safely and competently (Goldenberg & Dietrich, 2002). Given the increasing use of HFS within baccalaureate nursing programs as a teaching method (Arnold et al., 2009; Akhtar-Danesh, Baxter, Valaitis, Stanyon, & Sproul, 2009; Cant & Cooper, 2010; Gore, Hunt, & Raines, 2008; Todd, Manz, Hawkins, Parsons, & Hercinger, 2008), as well as the most recent National Council of State Boards of Nursing (NCSBN) study suggesting that up to 50% of traditional clinical time can be effectively substituted with HFS (Hayden, Smiley, Alexander, Kardong-Edgren, & Jeffries, 2014), the importance of objectively assessing the clinical competency of students when using HFS cannot be overstated. Unfortunately, the literature to date on clinical competency-based assessment tools used for HFS within nursing education indicates a reliance on student confidence measures and self-reported satisfaction with the learning process (Attree, 2006; Kardong-Edgren, Adamson, & Fitzgerald, 2010; Todd et al.), while other available instruments lack validation and reliability (Arnold et al.; Attree; Gore et al.; Kardong-Edgren et al.; Radhakrishnan, Roche, & Cunningham, 2007). Initial exploratory research into the competency-based assessment methods used in HFS by nurse educators in baccalaureate nursing programs, to identify current practices, would seem a natural starting point, as little is currently known about the phenomenon (Arnold et al.; Cant & Cooper; Gore et al.; Herm, Scott, & Copley, 2007; Radhakrishnan et al.). Therefore, a study was designed to explore the use of competency-based assessment methods in HFS nursing education. The research undertaken could potentially provide a launching pad for future research to support the development of valid and reliable competency-based assessment instruments that nurse educators can use for HFS.

1.2 Problem Statement

With the trend towards increasing integration of HFS in nursing education, the need to objectively assess student competency becomes more important. Unfortunately, the lack of available research to guide nurse educators, combined with the unavailability of highly valid and reliable instruments that measure objective competency-based learning outcomes rather than self-reported satisfaction and confidence, becomes a real issue.
A description of the current competency-based assessment methods used by nurse educators who integrate HFS into their teaching repertoire may help fuel future research towards developing an objective competency-based assessment instrument that is highly valid and reliable, and that nurse educators will consistently utilize.

1.3 Statement of Purpose

The primary purpose of this study was to explore the methods used by professional nurse educators to assess clinical competence in HFS education.

1.3.1 Research Questions

The central question for this study was: What assessment tools and methods are nurse educators using to assess clinical competence when utilizing HFS? The following additional questions may be further explored through the central question:

i. What educational theories guide the choice of assessment tools?
ii. On what evidence do instructors base their assessment decisions?
iii. What are some challenges associated with objectively assessing clinical competence within the context of HFS?

1.4 Conceptual Definitions

Within the context of this study, professional nurse educators are identified as nurse educators in post-secondary educational institutions that confer a baccalaureate degree in nursing in BC. Learners are defined as students enrolled in baccalaureate nursing programs in British Columbia (BC) where the professional nurse educators practice.

HFS is operationalized to include sophisticated manikins that can portray physiologic processes in a life-like manner, the environment in which the manikin is situated, which replicates clinical practice environments as closely as possible, and simulation scenarios created to correspond with the learning outcomes (NCSBN, 2009). Although there are various definitions of HFS, this will be the working definition for the purpose of this study.

Assessment methods are defined as any form of evaluative process or tool that the nurse educator uses to measure whether the learner has met the required learning outcomes outlined in the HFS scenario (Watson, Stimpson, Topping, & Porock, 2002). In other words, they are concerned with measuring learning, and establishing that learning has occurred.

Lastly, competency-based assessment within the context of the study looks at assessing outcomes using the various assessment elements of the Competency Outcomes Performance Assessment (COPA) model (Lenburg, 1999). For this study, competency-based assessment refers to frameworks or instruments used by nurse educators when assessing students during or after HFS scenarios.

1.5 Conceptual Framework: Lenburg's Competency Outcomes Performance Assessment (COPA) Model

Lenburg's COPA model was the conceptual framework used to guide decisions about whether an assessment tool or method used by nurse educators in the sample population represents objective, competency-based assessment. Lenburg developed the COPA model drawing on her 17 years of experience in nursing education, basing the model on "the philosophy of competency-based, practice-oriented methods and outcomes" (Lenburg, Klein, Abdur-Rahman, Spencer, & Boyer, 2009, p. 312). The COPA model is established on four essential conceptual pillars: the specification of essential core practice competencies; end-result competency outcomes; practice-driven, interactive learning strategies; and objective competency performance examinations in all courses (Lenburg et al., p. 312).
1.5.1 Core Practice Competencies

In the COPA model, Lenburg (1999) defined core practice competencies not as discrete lists of specific tasks, but rather as categories that allow for clustering of various skills that fit under specific kinds, levels, and foci of nursing practice. In her model, she listed eight specific categories of core practice competencies whereby "any nursing knowledge and skills for any course can be clustered under one or more of the eight universal core competencies" (Lenburg, Klein, Abdur-Rahman, Spencer, & Boyer, 2009, p. 313). Specifically, these eight core competencies are:

1. Assessment and Intervention Skills,
2. Communication Skills,
3. Critical Thinking Skills,
4. Human Caring and Relationship Skills,
5. Management Skills,
6. Leadership Skills,
7. Teaching Skills, and
8. Knowledge Integration Skills
(Lenburg, 1999; Lenburg et al., 2009).

A complex clinical HFS scenario could include multiple nursing skills that would encompass all eight core practice competencies listed. A skill as simple as medication administration would require assessment and intervention skills (assessing the client's level of pain and deciding on the type, amount, and route of analgesic), communication skills (correct review of the Medication Administration Record and completing required charting), critical thinking skills (evaluating whether the intervention is effective or additional actions are required), human caring and relationship skills (advocating to the physician on the client's behalf for an appropriate dosage of analgesic if the current one is found to be ineffective), management skills (organization and prioritization related to administering the medication at the correct interval within the clinical scenario), teaching skills (performing timely client teaching related to the medication administered), and knowledge integration skills (understanding the pharmacology of the medication administered). Developing outcome statements that integrate these core competencies would serve to guide the creation, facilitation, and evaluation of HFS scenarios that would best increase the success of student learning (Lenburg et al., 2009).

1.5.2 Competency Outcomes

Lenburg defined outcomes within the context of her model as "statements that are integral to practice and worded as practice expectations… [that] guide interactive learning and assessment consistent with actual nursing practice" (Lenburg et al., 2009, p. 313). Outcomes as the endpoint of learning are a tenet of the COPA model (Lenburg, 1999). This is a shift from the commonly used practice of organizing instructional content around behavioural objectives, challenging historical traditions in education (Bastable & Doody, 2008). Lenburg (1999) argued that the main problem with using objectives to guide teaching is their heavy emphasis on the process of learning (i.e., the "how to" of learning the content) as opposed to the result of learning. Outcomes, on the other hand, rely on current, relevant nursing practice to specify the requisite knowledge and skills that nursing students need in order to practice competently in the current clinical environment (Lenburg et al., 2009). Because outcomes are written as action-oriented "practice expectations" (2009, p. 313), they also serve to aid in objectively measuring a student's learning progress in HFS scenarios.
1.5.3 Interactive, Practice-Focused Learning

Lenburg (1999) argued that the traditional methods of teaching (which include lectures, traditional assignments, and multiple-choice exams) are ineffective ways to facilitate learning, producing students who lack confidence, competence, and currency. The COPA model relies on learner-focused, practice-based methods, and is based on the "philosophy of performance-based, interactive learning [strategies]" (Lenburg et al., 2009, p. 314). A multi-modal partnership between the "teacher and student, student and resources, and student and student" (Lenburg et al., p. 314) is a required component of teaching and learning according to the COPA model, to facilitate effective learning for the students. As such, HFS represents an ideal teaching method for learner- and practice-focused teaching, intersecting the student, the teacher, and the resources in a safe environment (Cant & Cooper, 2009). HFS requires the student to be actively engaged in the learning experience, applying previous knowledge gained in the classroom to perform the nursing skills required within the clinical scenario to care for the "patient" in a competent manner.

1.5.4 Use of Competency Performance Examinations and Assessments

According to Lenburg (1999), one of the COPA model's main strengths is its emphasis on the creation of objective performance assessments that measure a learner's competence in a skill, instead of merely the learner's knowledge. The COPA model accomplishes this through the use of 10 essential psychometric concepts to ground any assessments created: (a) examination (Competency Performance Examinations (CPEs) and Competency Performance Assessments (CPAs)); (b) competencies, skills, abilities; (c) critical elements; (d) objectivity; (e) sampling; (f) acceptability; (g) comparability; (h) consistency; (i) flexibility; and (j) systematized conditions (Lenburg et al., 2009, p. 315). Lenburg posited that these 10 psychometric concepts emphasize the creation of "standardized, objective, and consistent performance assessments of competence in any given situation… [without] the bias, subjectivity, inconsistency, and inaccuracy often found in… [other] evaluation methods" (p. 314) by focusing on principles that must be maintained, rather than linear steps, when identifying the critical elements of the competency outcomes in any given assessment (Lenburg et al., 2009). Guided by this principle, Lenburg pointed out that the COPA model promotes objectivity in the assessment of student learning by requiring the teacher to examine whether the element or behaviour being assessed within a given skill is necessary to demonstrate competence, or is more related to merely knowing about the skill (Lenburg, 1999; Lenburg et al., 2009). This process reinforces objectivity in the purpose and process of assessing whether the learner has achieved the outcomes within a teaching strategy and can demonstrate competence (Lenburg, 1999).
The COPA model's strength lies in its use of eight core practice competencies that apply in both clinical and didactic courses; the use of language that specifies learning outcomes based on current nursing practice in various care settings; education delivery methods that focus on effective learning strategies and the learner; and 10 psychometric concepts as the foundation for performance examinations that "guide development and implementation of standardized, objective, and consistent performance assessment of competence in any given situation" (Lenburg et al., 2009, p. 314). The COPA model's focused approach to competency-based assessment makes it a suitable conceptual framework to use in this study.

This chapter focused upon the background of the problem being studied, its significance to the current state of nursing education, the problem statement, the purpose of the study, the research questions guiding the study, and the conceptual framework of the study. In the next chapter, the literature regarding assessment will be explored, focusing on the current objective assessment methods used in nursing education.

Chapter 2: Literature Review

This study was designed to investigate how baccalaureate nursing students are assessed when using HFS, with a focus upon the use of competency-based assessment tools. In this chapter, a review of the literature is presented, exploring the concept of assessment within the context of HFS. The first section defines simulation as it pertains to nursing education, including working definitions of the three types of simulation (low, medium/mid, and high). As well, the benefits and limitations of simulation (particularly HFS) are discussed. The second section explores the concept of competence, with particular attention to the literature from a nursing practice and education perspective. The third section reviews assessment in the setting of nursing education, outlining how students are assessed during simulated activities. The literature on current trends in student assessment in HFS in nursing education, including a survey of assessment tools currently used in HFS, is presented, with specific emphasis on competency-based assessment tools. The chapter concludes by identifying gaps related to competency-based assessment in HFS in nursing education.

2.1 Clinical Simulation

Simulation provides a model, facsimile, or representation of the real world using technical means and, within the context of healthcare education, has been described as the use of various technologies to "bridge… classroom learning and real-life clinical experiences, offer[ing] scheduled, valuable learning experiences that are difficult to obtain in real life" (Society for Simulation in Healthcare, 2012, para. 2). Respiratory/cardiac arrest resuscitations and perinatal emergencies are typical examples of these difficult-to-obtain real-life experiences that can be readily afforded by HFS.

2.1.1 Fidelity

Within the context of clinical simulation, fidelity is defined as the level of realism or degree of reality that can be exhibited by the simulation – both within the context of the equipment and/or the setting (Arthur, Kable, & Levett-Jones, 2011; Feinstein & Cannon, 2002). Three levels of fidelity (low, medium/mid, and high) are generally used within healthcare education to describe simulation-based activities.
Low-fidelity simulation refers to the use of materials such as injection sponges or manikin arms for learners to practice specific skills such as a subcutaneous injection or intravenous catheter insertion. Medium-fidelity simulation lies somewhere in between low- and high-fidelity simulation, often using manikins with very basic functionality that provide no feedback to the learner. HFS represents the latest technologies, involving the use of computer-controlled, life-like manikins that model many different physiological systems and responses to help students learn about complex conditions, and that respond automatically to certain user actions (Akhtar-Danesh, Baxter, Valaitis, Stanyon, & Sproul, 2009). The sophistication of HFS technologies developed within the last few years has driven the trend we see today of HFS integration within nursing education (Arnold et al., 2009; Akhtar-Danesh et al.; Cant & Cooper, 2010; Gore, Hunt, & Raines, 2008; Todd, Manz, Hawkins, Parsons, & Hercinger, 2008).

2.1.2 Educational Value

The substantial use of simulation seen in other practice professions, particularly aviation and medicine, further supports the use of this education technology in nursing (Gore et al.). Supporters of HFS laud its value as an education tool. Cant & Cooper (2010) stated that the four facets of nursing education – "technical proficiency through psychomotor skills practice and repetition; expert assistance tailored to students' needs; learning within context; and incorporation of affective learning" (p. 4) – are enacted through the use of HFS. Herm, Scott, and Copley (2007) found a relationship between the use of simulation practice and improved performance in the clinical setting. Hoffman, O'Donnell, and Kim (2007) found that HFS empowers learners through creative employment of multiple tools and systems to develop competencies and problem-solving skills. Kuiper, Heinrich, Matthias, Graham, & Bell-Kotwall (2008) proposed that the value of simulation comes from its ability to promote clinical reasoning and "situated cognition," or learning in context. Situated cognition is embedded in the premise of experiential learning, wherein learning results from the learner's cognitive interaction with their physical, social, and cultural environment and context (Bandura, 1971; Kuiper et al.). Simulation, therefore, is seen as a form of scaffolding activity that combines both successes and failures to develop expertise. One of the main benefits of HFS commonly mentioned in the literature is that it provides the chance for students to practice and experience learning opportunities without negative consequences for patient safety and well-being (Arnold et al.; Cant & Cooper; Gore, Hunt, & Raines, 2008; Hoffman et al.; Radhakrishnan, Roche, & Cunningham, 2007).

2.1.3 Resource Implications

Initiatives to integrate HFS in healthcare education have been reflected in government spending in Canada. The Ontario Ministry of Health and Long-Term Care spent $20 million to supply simulation equipment to various colleges and universities in Ontario between 2003 and 2005 (Akhtar-Danesh, Baxter, Valaitis, Stanyon, & Sproul, 2009). In British Columbia, $5 million was pledged by the Ministry of Regional Economic and Skills Development in 2011 to support the replacement and renewal of healthcare education delivery equipment (Mullins, 2011).
However, despite the purported benefits of HFS in nursing education, the literature indicates that there continue to be issues associated with this education delivery method.

For example, simulation, particularly HFS, has been identified as 'resource heavy' by many researchers. Cant & Cooper (2010) found in their review that 8-10 students per group is the maximum threshold for maintaining successful learning outcomes. Compared to the typical classroom sizes found in colleges and universities, this necessitates a higher staff-to-student ratio, increasing demands on staffing budgets. Kardong-Edgren, Adamson, & Fitzgerald (2010) highlighted the financial and temporal costs associated with purchasing and maintaining sophisticated equipment, the specialized training needed to use simulation effectively, and the need to adopt a new paradigm of teaching with HFS. The impact on nursing faculty with regards to the increased workload was further echoed by three other studies (Gore, Hunt, & Raines, 2008; Hoffman, O'Donnell, & Kim, 2007; Radhakrishnan, Roche, & Cunningham, 2007).

2.1.4 Effectiveness

Another major criticism related to HFS is the lack of robust evidence available within the nursing literature to support its effectiveness in nursing education. This is highly attributable to the lack of research evaluating the learning outcomes of learners using simulation (Arnold et al., 2009; Cant & Cooper; Gore et al.; Herm, Scott, & Copley, 2007; Radhakrishnan et al.). Higher-level evidence such as randomized controlled trials, quasi-experimental study designs, and intervention studies is uncommon in educational research, whereas descriptive summaries are numerous, further highlighting the lack of syntheses of current evidence and/or critical reviews (Attree, 2006). The predominant data that exist in the literature regarding learning outcomes are focused on self-reported satisfaction with the learning process and student confidence measures (Attree; Kardong-Edgren et al.; Todd, Manz, Hawkins, Parsons, & Hercinger, 2008). This is potentiated by the lack of valid and reliable instruments to measure objective learning outcomes, with no 'gold standard' tool available for educators to use (Arnold et al.; Attree; Gore et al.; Kardong-Edgren et al.; Radhakrishnan et al.).

2.2 Competence

Competence is defined as the effective application of knowledge and skills in a realistic work setting, including successful problem solving when unexpected situations arise (Blum, Borglund, & Parcells, 2010; Drisko, 2014; Klein, 2006; McAleavy & McAleer, 1991). McAleavy & McAleer (1991) posited that there is "no agreed definition of the term competence" (p. 20), stating that, in a simplistic sense, competence can be defined as a person's ability to successfully function within their work setting. It is expected that a competent person will apply his or her abilities in both familiar and novel tasks related to their job (Lane & Ross, 1998). Competence is not something determined by the amount of information taught and assessed (Lenburg, 1999), but rather an outcome of education and training, producing knowledge-supported ability (Carraccio, Wolfsthal, Englander, Ferentz, & Martin, 2002). Wolf (as cited in Hyland, 1993) further elucidated that competence is truly demonstrated when skills and knowledge are underpinned by understanding. In a sense, competence should answer the 'what', 'how', and 'why' of any given actions performed in the work setting.
In other words, competence is focused on the performance of actual skills in practice.

Early conceptions of competence within the context of vocational training and education were heavily influenced by behaviourism, focused predominantly on the physical performance of tasks and skills (Hyland, 1993; McAleavy & McAleer, 1991; Tarrant, 2000). The shift to competency as the measure of minimum standard, as opposed to the more traditional metrics in education, had its roots in the socio-political setting of the United States in the 1960s and 70s (Thurman & Sanders, 1987). A decline in aptitude test scores and academic grades resulting from "dilution of basic skills development to accommodate the existential movement" produced functionally illiterate adults graduating from higher education settings unprepared for the work force (Thurman & Sanders, p. 164). At the same time, public demand for increased competence from healthcare workers sparked an upsurge of competency-based training in various healthcare programs, such as dietetics, physiotherapy, and medicine (Carraccio et al., 2002). However, this was short-lived due to ill-defined competencies that lacked clear-cut benchmark objectives to describe specific competencies. Cursory curriculum guidelines and a lack of assessment tools to assess competency within the health education system eventually led to a loss of interest in further development until later in the century (Carraccio et al., 2002).

2.2.1 Curricular Design and Competence

The early 2000s saw a re-uptake of competency as the focus for curriculum revisions in some medical programs in the United States, but with an emphasis on including knowledge-based competencies and interpersonal skills, instead of focusing on skills-based competencies alone (Thurman & Sanders, 1987). The Society of Teachers of Family Medicine, among other groups, pushed for a curriculum with an emphasis on competence, including objective, competence-based assessment methods (Carraccio et al., 2002). Canadian medical schools utilized certification as a medium for emphasizing continuing competency, both in undergraduate medical programs and for practitioners (Thurman & Sanders). The Baylor College of Dentistry is one such institution that ensured competency-based curricular review processes were embedded within its program (Carraccio et al., 2002).

The importance of competence, particularly in the professional healthcare setting, cannot be overemphasized. In 2006, the United States Department of Education (USDOE) concluded that educational institutions throughout the country were ill-preparing their graduates to perform entry-level tasks adequately in the workplace after completion of their programs in various disciplines (Drisko, 2014; USDOE, 2006).

As part of the initiative to move nursing programs from hospitals to universities in the late 1980s, Australia attempted to standardize professional nursing competencies throughout the country in order to facilitate this change (Chapman, 1999). However, issues related to standardized terminologies for competence and competencies posed a challenge to this endeavour (Thurman & Sanders, 1987). Critics also voiced concern that this task would push Australian nursing curriculum development in a "teach to the exam" direction, potentially losing important nursing knowledge concepts in favour of a skills-heavy curriculum more conducive to a competency-based format (Chapman).
This serves to illustrate the tensions between skills and theory education in nursing academia.

2.2.2 Competence in Nursing Education and Practice

Competence is a vital part of nursing practice. Competence guides nurses to provide safe nursing care and to practice sound clinical decision-making (Blum, Borglund, & Parcells, 2010; Suliman & Halabi, 2007). The College of Registered Nurses of British Columbia (CRNBC) defines competence within the context of nursing as "the integration and application of knowledge, skills, attitudes and judgments required to perform safely and ethically within an individual's nursing practice or in a designated role and setting and includes both entry-level and continuing competence" (2010, p. 20). The Canadian Nurses Association (CNA) defines competency to include "the knowledge, skills, judgment and attributes required of a Registered Nurse (RN) to practise safely and ethically in a designated role and setting" (CNA, 2007, p. 11). The CNA Code of Ethics for Registered Nurses (2008) lists competence as part of the first of the nursing values and ethical responsibilities for all RNs: "Providing safe, compassionate, competent, and ethical care" (p. i). The significance of competence in nursing education and practice can be observed through the Canadian provincial regulatory licensing bodies' explicit use of competency statements as the benchmark for adequate preparedness of an entry-level registered nurse entering the work force (Association of Registered Nurses of Newfoundland and Labrador, 2013; College & Association of Registered Nurses of Alberta, 2013; College of Nurses of Ontario, 2014; CRNBC, 2015).

The move towards occupational competence as a framework for nursing practice began gaining regard in the 1980s and early 90s, when a shift away from hospital-based training towards university-based preparation was gaining momentum (Gallagher, Smith, & Ousey, 2012). The landscape of nursing has changed drastically in recent years due to the growing body of knowledge, advances in technology, the increasing age and acuity of the population, and the increased focus on safe, quality patient care (Abe, Kawahara, Yamashina, & Tsuboi, 2013; Jeffries, 2005; Zasadny & Bull, 2015). As well, the decreasing availability of clinical placements for nursing students over the last decade has diminished the number of opportunities for students to practice in clinical settings, further highlighting the importance of a competency-based approach, particularly in assessing the readiness of nursing students (Jeffries; Zasadny & Bull). The development of competence in nursing students requires stable and prolonged contact with the clinical environment (Gallagher et al., 2012). However, given the trend of diminishing clinical placement availability for nursing students (Jeffries; Zasadny & Bull), many nursing programs are addressing this by supplementing clinical hours through the use of simulation. The recent NCSBN study findings supported the position that up to half of traditional clinical time can be effectively replaced by HFS (Hayden et al., 2014). Improved competence through the use of simulation-based education has been proposed, given its ability to link theory-driven classroom instruction with practice-based clinical experience in a controlled manner (Abe et al., 2013). Simulation provides learners with direct engagement in the learning activity while providing immediate feedback, encouraging learning (Jeffries) and eventually developing competence.
2.3 Education and Assessment

Assessment is defined as the process of gathering, summarizing, interpreting, and using the data collected to direct an action (Bastable & Doody, 2008). Assessment plays a key role in education because it drives learning (Murray, Gruppen, Catton, Hays, & Woolliscroft, 2000). Within the context of education, assessment allows for measurement of a student's progress in a structured manner (Traynor & Galanouli, 2015) by providing tangible evidence to help the educator make decisions related to the student's success (Attree, 2006). It is concerned with testing what a student knows, or can do. It also allows educators and students to identify the students' learning needs based on their performance, paying particular attention to their strengths, as well as their learning gaps (Traynor & Galanouli).

Outcomes, rather than processes, as the end goals of assessment are an important aspect of effective assessment (USDOE, 2006). Outcomes, within the context of competence assessment, require contextualization in order to be measured objectively. Drisko (2014) posited that a link exists between competence, behaviour, and knowledge, in that competent behaviour is drawn from knowledge and, inversely, a lack of knowledge manifests in behaviours from which a lack of competence can be inferred. Therefore, effective assessment of these outcomes requires observation of behaviours in order to decide whether a student is competent or not.

Successful assessment requires measuring multiple facets of a given competence simultaneously, instead of only one single indicator, and preferably in a setting that best mimics the environment in which the particular competence will be most needed (Drisko, 2014; McClelland, 1973). As well, the assessment should incorporate elements of ambiguity to test the student's ability to problem solve in a non-linear fashion, which is often how real-life situations unfold (Drisko; McClelland). Only when competence is assessed in a way that reflects real-life situations can the assessment be considered valid (Drisko).

2.3.1.1 Competency-Based Assessment Tools

Competency-based assessment methods, which assess students against defined criteria instead of self-reported, subjective parameters, are more desirable for measuring competence because they allow the educator to verify when a student meets an established performance indicator (Carraccio et al., 2002). Their importance in education, particularly in health education, cannot be overstated (Carraccio et al.). Further, one can ascertain the validity of assessment tools when there are direct links between the indicators and measures in the tools and the concepts and constructs being assessed (Drisko, 2014).

Competency-based assessment is now deeply rooted within nursing education (Gallagher et al., 2012). "Given that assessment drives learning, one challenge facing educators today is to find robust and feasible tools of assessing… competencies" (Murray, Gruppen, Catton, Hays, & Woolliscroft, 2000, p. 873). Lasater (2007) pointed out the need for objective measurement of competence through the use of multi-factorial tools that go beyond simple observation of students. This would ensure that students are meeting learning outcomes as a result of their educators' efforts (Murray et al., 2000). However, learners will be less motivated to meet learning outcomes that are not reflected in the assessment tools used by their educators (Murray et al., 2000).
Given this, the importance of using competency-based assessment tools to determine achievement of learning outcomes cannot be overstated. To this end, a brief survey of the literature regarding the assessment tools and instruments currently used in simulation is presented in the next sub-section.

2.3.1.2 Current Assessment Tools in Simulations/Simulated Activities

A multitude of assessment tools for use in assessing HFS learning activities are reported within the literature. However, competency-based assessment tools are not as numerous as tools that assess student satisfaction with the learning process, or confidence (Attree, 2006; Cant & Cooper, 2010; Kardong-Edgren, Adamson, & Fitzgerald, 2010; Nehring & Lashley, 2004; Todd, Manz, Hawkins, Parsons, & Hercinger, 2008). Despite this, many competency-based assessment tools that lend themselves to objective assessment of students in HFS learning activities can be found within the literature. Highlights of these assessment tools are presented in this section.

2.3.1.2.1 Objective Structured Clinical Examination

The Objective Structured Clinical Examination (OSCE) is one of the most common assessment methods that incorporate simulation. It does this through the use of an actor who simulates a health issue that learners would encounter in the real-life practice setting (Traynor & Galanouli, 2015). The OSCE, developed in the 1970s in Scotland as a way to assess the competence of medical students systematically, is considered a competency-based assessment tool because it allows for examination of a student's ability within a confined scenario while replicating actual clinical events (Traynor & Galanouli, 2015). Many variations of the OSCE exist depending on the discipline and the program utilizing the tool, but all follow the same framework: multiple testing stations, each using a standardized patient (usually an actor), with an evaluator (usually a faculty member) observing the students' performance in a limited time context (usually 5 minutes or less per station) (Cant & Cooper, 2010; Carraccio & Englander, 2000; Gibbons et al., 2002; Harden, Stevenson, Downie, & Wilson, 1975; Kak, Burkhalter, & Cooper, 2001). The OSCE was first described in 1975, in a study that compared it against the traditional clinical examination (Carraccio & Englander; Harden et al.). Ninety-nine students were divided into three groups of 33; two-thirds of the students were assessed using the traditional clinical exam while the other third underwent the OSCE. The traditional clinical exam groups' written exam scores did not correlate with their clinical exam scores (γ = 0.17 and γ = 0.21), whereas the OSCE group's exam scores showed a positive correlation (γ = 0.63) (Carraccio & Englander). The OSCE's reliability is dependent on the homogeneity of the tasks being assessed – the wider the range of tasks, the lower the reliability (Carraccio & Englander; Hilliard & Tallett, 1998; Joorabchi, 1991). The OSCE's content validity in some studies was ensured through extensive subject matter expert review (Carraccio & Englander; Joorabchi; Joorabchi & Devries, 1991). One nursing study showed the value of combining the OSCE with HFS in assessing the competence of nursing students against their own self-assessment (Baxter & Norman, 2011). Another nursing study described a program's experience and success integrating the OSCE as part of its standard student assessment (Traynor & Galanouli, 2015).
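To illustrate the kind of criterion comparison reported above, the sketch below computes a Pearson correlation between written-exam and performance-exam scores. The scores are invented for illustration only and do not reproduce the original study's data.

```python
# A minimal sketch of a written-exam vs. performance-exam correlation,
# of the kind reported as validity evidence for the OSCE. Scores are
# hypothetical.
from statistics import mean

written = [62, 70, 55, 81, 74, 68, 90, 59]
performance = [58, 73, 52, 78, 70, 71, 85, 61]

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

print(f"r = {pearson_r(written, performance):.2f}")  # ~0.96, a strong positive correlation
```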
The OSCE's strength lies in the standardization of the assessment, supporting a more objective assessment. However, given the high demands on resources and time, the OSCE is difficult for nursing programs to implement consistently.

2.3.1.2.2 Simulation Evaluation Instrument

Todd, Manz, Hawkins, Parsons, & Hercinger (2008) created the Simulation Evaluation Instrument (SEI), a quantitative assessment tool designed to be used in simulated clinical experiences (SCEs). SCEs involve a group of students working together in a scenario, which poses a specific challenge for assessment tools designed for single-student assessments. The instrument was tested on 72 senior nursing students. Core competencies from the American Association of Colleges of Nursing (AACN) were used to guide the item creation of the tool, resulting in twenty-two behaviours being included in the instrument. Students were assessed using a two-point scale: '1' for demonstrating a competency, and '0' if it was not demonstrated. A passing score of 75% was established to reflect Creighton University School of Nursing's passing grade policy. Content validity of the instrument was tested through review by an expert panel of seven faculty members from various clinical backgrounds with experience in simulation. Inter-rater reliability of the instrument was 0.84 for the assessment, 0.89 for the communication, 0.88 for the critical thinking, and 0.85 for the technical skills sections of the instrument. Coefficients of 0.80 or greater are ideal to ensure inter-rater reliability of an instrument (Polit & Beck, 2012). This instrument would be ideal for assessing a group of students' performance within an HFS activity; however, the design would yield a group grade, potentially masking the weaker performance of some students.

2.3.1.2.3 Medication Administration Safety Assessment Tool

Stanley, Philips, and Galatzan (2014) used the Medication Administration Safety Assessment Tool (MASAT) within an HFS scenario to assess second-year baccalaureate nursing students' competence regarding medication administration to patients with limited English. The MASAT (Goodstone & Goodstone, 2013) was developed for use in both clinical settings and simulated scenarios to measure students' competence regarding medication administration. Goodstone and Goodstone tested the MASAT's content validity prior to the pilot study by surveying subject matter experts (SMEs) (n=10) using a 5-point rating scale, which found a Scale Content Validity Index (S-CVI) of 0.93, exceeding the suggested standard value of 0.90 for establishing S-CVIs (Polit & Beck). The pilot study was conducted on nursing students (n=14) using a medication administration simulation scenario from the National League for Nursing. The pilot study showed a Rater Agreement Index (RAI) of 0.83, which is adequate for group comparisons but below the 0.90 coefficient desirable for individual measures (Polit & Beck). Though the validity of this instrument is high, and the reliability adequate, the MASAT can only be used to assess a very specific competency, and cannot be used as a standalone instrument for most other HFS scenarios.
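As an illustration of how a Scale Content Validity Index such as the MASAT's can be derived, the sketch below follows the general approach described by Polit & Beck: each expert rates an item's relevance, the item-level CVI (I-CVI) is the proportion of experts giving the top ratings, and the S-CVI/Ave is the average of the I-CVIs. The three items, the ratings, and the decision to count ratings of 4 or 5 on the 5-point scale as "relevant" are illustrative assumptions, not Goodstone and Goodstone's data.

```python
# A minimal sketch of an S-CVI/Ave computation from hypothetical
# expert relevance ratings (ten experts, 5-point scale).
from statistics import mean

ratings_by_item = {            # hypothetical ratings from 10 experts
    "item 1": [5, 4, 5, 5, 4, 5, 3, 5, 4, 5],
    "item 2": [4, 4, 5, 5, 5, 4, 5, 4, 2, 5],
    "item 3": [5, 5, 4, 5, 5, 5, 5, 4, 5, 5],
}

def i_cvi(ratings):
    """Proportion of experts rating the item 4 or 5 (i.e., relevant)."""
    return sum(r >= 4 for r in ratings) / len(ratings)

i_cvis = {item: i_cvi(r) for item, r in ratings_by_item.items()}
s_cvi_ave = mean(i_cvis.values())   # average of the item-level CVIs
print(i_cvis)                        # {'item 1': 0.9, 'item 2': 0.9, 'item 3': 1.0}
print(f"S-CVI/Ave = {s_cvi_ave:.2f}")  # S-CVI/Ave = 0.93
```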
2.3.1.2.4 The Clinical Simulation Grading Rubric

Clark (2006) created the Clinical Simulation Grading Rubric© to assess student performance during a simulated obstetric complication (i.e., abruptio placentae). The rubric is used to assess the student over six areas of nursing practice: patient assessment; history gathering; critical thinking; communication; patient teaching; and lab data and diagnostic studies collection. Clark did not provide the method or scores used to evaluate the reliability and validity of the tool, but stated that inter-rater reliability can be easily established for each faculty group using the tool within each program (Clark, 2006). More importantly, although the instrument was originally designed to assess student performance in an obstetric clinical simulation, it can be adapted for any simulation scenario developed within specific nursing programs and courses (Kardong-Edgren et al., 2010). This instrument's strength lies in its objectivity and its adaptability to other types of HFS scenarios, though the lack of information on its validity and reliability should spur further research into determining these properties when the tool is used by nurse educators in their own practice.

2.3.1.2.5 Lasater Clinical Judgment Rubric

Lasater (2007) created the Lasater Clinical Judgment Rubric (LCJR), based on Tanner's Clinical Judgment Model (Tanner, 2006), as a way to quantitatively measure nursing students' clinical judgment when using simulation. Lasater chose a rubric model for her instrument due to its ability to provide a clear description of the expected outcomes, standardize the language of assessment between varying groups of students, and promote critical thinking by setting out the outcome parameters while leaving the students to use their own judgment and knowledge base to meet the outcomes (Lasater). Thirty-nine third-year students enrolled in an acute medical-surgical clinical course were observed in simulations over a period of seven weeks in order to create and improve the LCJR. Students are scored using a four-point scale (beginning, developing, accomplished, and exemplary) over four major dimensions of simulation activity (noticing, interpreting, responding, and reflecting). Each dimension has two to four sub-dimensions, and each scale point has a specific description for each sub-dimension that allows the evaluator to ascertain whether a student is demonstrating beginning, developing, accomplished, or exemplary competence. The LCJR allows for a systematic way to assess students' critical thinking within a simulation (Kardong-Edgren et al., 2010), but it has not been tested for validity or reliability, and has been criticized as complicated to use (Kardong-Edgren et al.).

2.3.1.2.6 Emergency Response Performance Tool

The Emergency Response Performance Tool (ERPT) was developed by Arnold et al. in 2009 as a tool to assess nurses in critical care programs undergoing an emergency response simulation scenario (specifically, ventricular tachycardia arrest resuscitation). The ERPT consists of 20 items adapted from the American Heart Association's (AHA) Basic Cardiac Life Support (BCLS) standards and Advanced Cardiac Life Support (ACLS) algorithm. The instrument was tested on 16 medical-surgical and critical care nurses using the known-groups technique, which assesses an instrument's construct validity by testing it on two (or more) groups with a known difference regarding the critical attribute(s) (Polit & Beck). Content validity of the instrument was attained by drawing items from a validated instrument, and was verified by three nurse experts.
Both the categorical (p = .03) and continuous (p = .02) items related to medication administration showed statistically significant differences between groups, but all other items did not (Polit & Beck). Inter-rater reliability was high for a majority of the categorical and continuous items (Polit & Beck). This instrument's focus may not be appropriate for use with baccalaureate nursing students, given the advanced knowledge required for ACLS, but the study design behind the instrument provides an excellent framework for nurse educators who decide to design their own competency-based assessment tool.

2.3.1.2.7 Level 2 Synthesis Clinical Evaluation Instrument

Herm, Scott, and Copley (2007) developed the NURS 307/310 Level Two Synthesis Clinical Evaluation instrument to assess third-year nursing students undergoing clinical evaluation through a high-fidelity simulation testing scenario in a course within their nursing program. The instrument included 68 items divided into eight key competencies: (a) safety; (b) communication/professional boundaries; (c) perform head-to-toe physical assessment; (d) plan, prioritize, and implement nursing actions; (e) pain assessment; (f) medication administration; (g) documentation; and (h) critical elements. Each item is scored as either 'met' or 'not met' based on the evaluator's observation of the student's performance of each item during the simulation. The instrument's inter-rater reliability was tested, but the authors did not indicate the exact statistical test used, and the tool's validity and reliability were not evaluated prior to use with the students. Interestingly, when the instrument was compared against an older tool used by the authors' nursing program for the same course, students who had received passing grades under the old tool received failing grades based on the new instrument designed for the simulations (Herm, Scott, & Copley, 2007). Nurse educators may not be able to use the tool exactly as written for their own HFS scenarios, given that the instrument was designed for a specific course in a specific nursing program. The tool could, however, be adapted to fit specific HFS scenarios, with caution due to the lack of testing of its validity and reliability.

2.3.1.2.8 Clinical Simulation Evaluation Tool

Radhakrishnan, Roche, and Cunningham (2007) designed the Clinical Simulation Evaluation Tool (CSET) as part of a research study to assess the effectiveness of an education intervention using HFS within their nursing program. The CSET was designed as a way to objectively assess the nursing students in both the control (no additional HFS training prior to evaluation) and intervention (additional HFS training prior to evaluation) groups within the study. Students who demonstrated specific key behaviours throughout the simulation received a point. Key behaviours for each specific simulation scenario were included as items in the tool, subdivided into five categories: (a) safety and communication; (b) assessment and critical thinking; (c) diagnosis and critical thinking; (d) interventions, evaluation and critical thinking; and (e) reflection and critical thinking. The tool was tested on 12 nursing students who had completed their final preceptorship experience. The authors did not indicate whether the tool was tested for reliability or validity.
The instrument can be easily adapted for use by nurse educators in specific HFS scenarios, but care must be taken to ensure that the items within the instrument are revised to reflect the specific key behaviours relevant to the HFS scenario. As well, revisions will affect the instrument's validity and reliability, so nurse educators should determine a revised tool's validity and reliability prior to using the instrument for high-stakes student assessments.

This section presented a brief overview of select assessment tools currently used in simulation activities within the literature. The next section will discuss key gaps regarding simulation assessment, particularly regarding competency-based assessment in HFS in nursing education.

2.3.2 Gaps in Competency-Based Assessment in HFS in Nursing Education

Within the medical profession, many competency-based assessment tools that relate competence assessment to knowledge and specific clinical skills are readily available (Murray, Gruppen, Catton, Hays, & Woolliscroft, 2000). Unfortunately, the same cannot be said for the nursing profession: within the context of nursing practice, there continues to be a deficiency of useful assessment tools for measuring competency (Cowan, Norman, & Coopamah, 2007; Zasadny & Bull, 2015). Tools for some types of competence, such as those related to professionalism, are not as readily available due to a lack of objective, directly observable outcomes related to the competence (Murray et al.). Others, such as those related to cross-cultural competence, place a higher emphasis on teaching the competency instead of assessing it (Murray et al.), further elucidating the complex nature of objective assessment.

Standardizing the assessment of nursing students' competence is a challenge that educators face (Traynor & Galanouli, 2015). This is problematic because it would "not be possible to evaluate the outcome of educational initiatives without valid, reliable and feasible assessment tools" (Murray, Gruppen, Catton, Hays, & Woolliscroft, 2000, p. 873). Studies related to simulation assessment typically involve single-scenario situations or case studies, usually restricted to a very specific clinical context (Arnold et al., 2009; Blum et al., 2010; Clark, 2006; Herm et al., 2007). This complicates the nurse educator's ability to find an existing tool that reflects most, if not all, of the key competencies that exist in their specific HFS scenarios. Further, the assessment tools found in the literature for simulation activities mostly assess qualitative aspects of the students' simulation experience (Nehring & Lashley, 2004; Schoening, Sittner, & Todd, 2006; Todd et al., 2008). This decreases the ability of nurse educators to determine whether HFS activities are effective in helping students meet the learning outcomes set out within their specific nursing courses. Self-reported measures of confidence continue to dominate the available assessment tools in nursing education (Arnold et al., 2009). However, the literature suggests that self-assessment may not be reflective of a student's actual competence (Colthart et al., 2008; Drisko, 2014; Eva & Regehr, 2005), with some studies showing either no correlation between a student's self-assessment of their own competence and the observation of an evaluator, or, worse yet, an inverse relationship (Davis et al., 2006; Gordon, 1991; Kruger & Dunning, 1999).
Specifically, competent participants believed that they were less than capable, whereas weaker participants viewed themselves as highly competent (Baxter & Norman, 2011). This may be due in part to the presupposition that educators who evaluate students possess knowledge and skills about the competence being assessed, gleaned from years of experience (Drisko, 2014), whereas students do not possess the same knowledge and skills about the competence they are being asked to self-assess, and therefore provide inaccurate self-assessment data. Despite this, it has been suggested that nursing schools continue to utilize self-assessment and to emphasize its importance (Baxter & Norman).

This chapter provided an overview of the literature as it relates to the research study. Literature regarding simulation in the context of education was presented, including types of simulation and the benefits and challenges related to simulation use in education. The concept of competence was defined, and its relevance to the current state of clinical education and practice was examined. Finally, assessment and its role in education were outlined, and competency-based assessment tools within the literature were reviewed, including gaps related to competency-based assessment in HFS. The next chapter presents the methods undertaken to execute the research study in order to answer the research questions presented in the previous chapter.

Chapter 3: Methods

This section presents details of the study design, ethical considerations, sampling plan, study procedures, data collection, and data analysis procedures. Information about rigor and validity is also discussed.

3.1 Study Design

A prospective, cross-sectional descriptive study was designed to explore the methods that nurse educators in BC who utilize HFS in their teaching use to assess clinical competence and learning outcomes. Specifically, as an inductive survey exploration of methods in use by nurse educators, a univariate descriptive study was deemed appropriate for the purpose of this research, as it would support quantification of the different assessment tools and approaches in use. Polit & Beck (2012) described a univariate descriptive study as a type of non-experimental quantitative study that looks at the frequency of a condition or behaviour. Univariate descriptive studies are known for being strong in realism, due to their observational nature, compared to experimental studies (Polit & Beck). Given that the aim of this study was to describe the current state of competency-based assessment in HFS, a web-based questionnaire was designed to be administered to practicing nurse educators using simulation in Bachelor of Science in Nursing (BSN) programs in BC. Additionally, several open questions were used to provide some qualitative data for thematic content analysis, to help provide more context to the quantitative data.

3.2 Ethical Considerations

Prior to the data collection phase of the study, all department heads of the BSN nursing programs in BC that were to be included in the sample were contacted by email to request information regarding site-specific ethics approval requirements for recruiting their nursing faculty into the study, as well as to determine whether their site would be interested in participating.
Ethics approval was received from the University of British Columbia (UBC) Behavioural Research Ethics Board (BREB), the Langara College Research Ethics Board (REB), the British Columbia Institute of Technology (BCIT) REB, and the Vancouver Community College (VCC) REB. All other participating institutions were included under the UBC BREB approval, since they did not require site-specific REB approval. Participants were invited to participate by sending an initial email contact letter (Appendix A) and a Letter of Information (Appendix B) to the department heads of nursing programs for dissemination to the faculty.

The Letter of Information explained the study to potential participants, including information regarding privacy and confidentiality, particularly around the study's use of an online survey company that stored its database in the United States. Information regarding the risks and benefits of participating was included in the letter, as well as a statement that participants could withdraw from the study at any time prior to submitting the completed survey, but once it was submitted, were unable to retract their submission due to the anonymous design of the survey. All participants indicated consent to participate by clicking the "I Consent" button on the first page of the survey and submitting the completed survey.

To protect the confidentiality of all participants, no personal identifiers were collected during the study. The online survey website address was provided in the Letter of Information, which allowed participants to access the survey independently without contacting the researcher. However, to provide an incentive for participation, respondents were given an option to enter a draw for a $25.00 gift card after they had completed the survey. They were redirected to a different webpage, not connected to their survey response, where they entered their name and email address, along with their choice of gift card. There was no way to connect their contact information to their survey response. It was also noted that a summary of the research findings would be forwarded to the department heads of all BSN nursing programs in BC for dissemination to their faculty members.

3.2.1 Data Management

Fluid Survey, a Canadian survey company that houses its servers in Canada, was used to deploy the online survey and the prize draw page. Fluid Survey is a subsidiary of Survey Monkey, an American survey company; Fluid Survey data may therefore be stored and processed in the United States (U.S.), and survey data may be subject to U.S. laws. Therefore, the standard wording from the UBC REB website regarding Survey Monkey and the rights of the U.S. government to access data under the Patriot Act was included in the Letter of Information that study participants received and reviewed prior to deciding whether to participate in the study. The electronic database containing study data is protected with an encrypted password, and is stored on a password-protected computer. All study data will be kept in the Principal Investigator's locked office for a minimum of five years after the study is completed, then destroyed.

3.3 Sample

Participants were recruited through a simple targeted non-probability sample from all public post-secondary institutions in British Columbia that offered a baccalaureate nursing program. Participants were practicing nurse educators using HFS in their teaching.
3.3.1 Inclusion Criteria

The following inclusion criteria were used to further screen potential study participants, to whom the survey questionnaire was sent:
a) Nurse educators currently teaching in an undergraduate baccalaureate nursing program in BC,
b) A minimum of one year of experience as a professional educator,
c) Use of HFS within the course they teach.
These inclusion criteria were chosen to ensure the potential sample pool was large enough to capture as many potentially eligible participants as possible, while maintaining the integrity of the data collected.

3.3.2 Exclusion Criteria

The following exclusion criterion was used to exclude potential study participants:
a) No previous experience using HFS as a teaching strategy (prior experience with HFS of at least one term before the study was acceptable).
This exclusion criterion was chosen because some of the nursing programs in BC do not utilize HFS. As well, prior experience of one term using HFS was required to ensure that participants had enough experience using HFS to provide data rich enough in detail to allow for a better description of the current state of HFS utilization in BC. A minimal sketch of how these criteria combine into a single eligibility decision follows.
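The sketch below expresses the screening logic implied by these criteria as a single eligibility function. The parameter names are hypothetical and do not reflect the wording of the actual survey items.

```python
# A minimal sketch of the eligibility screen described above, assuming
# the three inclusion criteria and the exclusion criterion are captured
# as answers on the second survey page. Field names are hypothetical.

def is_eligible(teaches_bsn_in_bc: bool,
                years_as_educator: float,
                uses_hfs_in_course: bool,
                terms_of_hfs_experience: int) -> bool:
    """Apply inclusion criteria (a-c) and the exclusion criterion."""
    meets_inclusion = (teaches_bsn_in_bc
                       and years_as_educator >= 1
                       and uses_hfs_in_course)
    excluded = terms_of_hfs_experience < 1  # no prior HFS experience
    return meets_inclusion and not excluded

# An ineligible respondent would be diverted to a thank-you page:
print(is_eligible(True, 0.5, True, 2))  # False - under one year as an educator
```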
3.4 Study Procedures

3.4.1 Recruitment

In 2012, there were a total of 510 nurse educators working in the 20 post-secondary institutions in British Columbia that offered a bachelor's degree in nursing, either as a stand-alone degree program or in a collaborative agreement with a degree-granting institution (CNA & Canadian Association of Schools of Nursing, 2013). Purposive sampling was used, inviting department heads of the schools of nursing to partake in the study if eligible, and also to distribute study information to their faculty members (Polit & Beck). Eligibility of study participants was screened on the second page of the web-based survey/questionnaire. Any potential participants who did not meet all the inclusion criteria were diverted to a page thanking them for their interest and informing them that they did not meet the eligibility requirements for the study. Consenting occurred through the web-based survey: completing the online survey signified that potential participants consented to be part of the study.

Two weeks after the initial contact letter, a follow-up email (Appendix C) reminding potential participants that they could take part in the study until the end of the recruitment period was sent to the nursing schools. According to Polit & Beck, web-based questionnaires typically garner response rates of less than 50%, ranging from as low as 21% (Kaplowitz, Hadlock, & Levine, 2004) to as high as 40% (Cook, Heath, & Thompson, 2000). To help increase recruitment, an incentive of an optional entry into a prize draw for a $25.00 gift card to Starbucks, Tim Hortons, or iTunes (four cards in total) was offered to study participants who completed the entire survey.

3.4.2 Data Collection Tool Development

One specific area of concern to the validity and rigor of this research study was the use of an unvalidated data collection tool specifically developed for this study. An existing validated questionnaire that addressed the key questions asked in this study was not found after a detailed literature review; therefore a new survey tool was designed. The tool was modeled on Nehring & Lashley's data collection instrument from their 2004 study (Nehring & Lashley, 2004). Content validity of Nehring & Lashley's instrument was attained by first comparing instrument items against the nursing and medical human patient simulator (HPS) literature, then having the items reviewed by nurse faculty who were familiar with HPS (Nehring & Lashley) to establish face validity. Due to the constraints of this being a student thesis project, no reliability tests were completed prior to use of the tool. Written permission was obtained from the original author prior to use of the original questions in the new tool (Appendix D).

To lessen the impact of using a non-validated tool, a modified Delphi approach was used to test the face validity of the data tool prior to deployment. The Delphi approach involves asking a panel of subject matter experts to complete several rounds of surveys regarding a topic within their expertise, with data analysis and summarization occurring after each round until consensus is reached (Polit & Beck). This approach allowed for accessing the expertise of multiple experts in a timely manner, and required fewer resources than the factor analysis and pretesting that would normally be completed when developing a data collection tool (Polit & Beck). The survey was sent to three field experts in HFS, psychometrics, and data collection tool design for feedback regarding the survey's face validity. Two rounds of revisions incorporating feedback from the expert panel were completed prior to deployment of the data tool.

3.4.3 Data Collection

A self-administered questionnaire was chosen to allow for anonymity, eliminate interviewer bias, and minimize cost (Polit & Beck). Data collection was completed at a single time point using an online survey. The online survey (Appendix E) was developed using the following criteria:
- Questions related to program (types of BSN programs offered; hours of HFS use in the program; number of faculty members with specialized training in HFS; types of scenarios in which HFS is utilized)
- Questions related to decision-making pertaining to student assessment (i.e., educational theories guiding choice of assessment tools; evidence used to base assessment decisions on; challenges associated with objectively assessing clinical competence within the context of HFS)
- Questions related to student assessment logistics (i.e., how often students are assessed when utilizing HFS; assessment tools used by educators)
- Basic demographic questions (i.e., gender, age, level of education, years of experience as an RN, years of experience as a nurse educator, years of experience using HFS)
The online survey consisted of 30 questions, comprising both closed-ended and open-ended questions. A mixture of closed- and open-ended questions was chosen to balance decreasing the amount of time needed to complete the questionnaire with allowing for the inclusion of data that might otherwise be omitted, and to provide additional context to the quantitative answers provided (Polit & Beck). The online survey took an average of 22 minutes to complete. Once completed, the data were aggregated by the online survey program and downloaded to the researcher's computer for cleanup, coding, tabulation, and analysis.

3.5 Data Analysis

The research question and the descriptive design of the study made the use of descriptive statistics appropriate for analyzing the quantitative data collected from the participants (Polit & Beck).
In order to make sense of the data within the context of the study design, frequency distributions and measures of central tendency were used to organize the data and demonstrate any trends within the dataset (Polit & Beck). A variability index, specifically the range, was also calculated to elucidate how the scores within the dataset differed (Polit & Beck). The collected data were entered into a Microsoft Excel database and scanned for any outliers or potentially erroneous data. Nominal-level data (e.g., 'yes/no', 'male/female/other') were assigned numerical codes for tabulation purposes (e.g., 'no' was coded as 1, and 'yes' was coded as 2). As well, Fluid Survey auto-coded closed-ended multiple-choice questions (both single- and multiple-answer types) with a '1' for any affirmative replies by the participants. An erroneous value was found in one of the ratio-level questions (year of birth); this answer was deleted and not imputed. Once coded, the frequency distributions, central tendencies, and variability of the data set were analyzed using Excel's data analysis functions. Data analysis included the use of tables to explore the frequency distribution of results, as well as histograms and pie charts to visually represent the data (Polit & Beck).

Answers to open-ended questions were subjected to a content analysis and coded using a concrete category scheme that allowed for grouping of similar answers into discrete, themed conceptual categories for each specific question (Polit & Beck). The thematic content analysis aimed to identify significant repeated responses and familiar themes expressed by the participants, as well as individual responses (Polit & Beck). A minimal sketch of the coding and frequency-distribution step appears below.
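This sketch assumes, purely for illustration, that the cleaned responses live in a Python list; the study itself used Fluid Survey's auto-coding and Excel's data analysis functions, so the responses and the code mapping shown here are illustrative only.

```python
# A minimal sketch of nominal-level coding and a frequency distribution,
# mirroring the steps described above on hypothetical responses.
from collections import Counter

responses = ["yes", "no", "yes", "yes", "no", "yes"]

codes = {"no": 1, "yes": 2}              # numerical codes for nominal data
coded = [codes[r] for r in responses]    # [2, 1, 2, 2, 1, 2]

frequencies = Counter(responses)         # frequency distribution
for value, count in frequencies.items():
    print(f"{value}: {count} ({count / len(responses):.0%})")
# yes: 4 (67%)
# no: 2 (33%)
```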
3.6 Rigor and Validity

The validity of quantitative studies can be greatly strengthened by introducing study design features that mitigate threats to validity, increasing the overall quality of the results and the evidence generated (Polit & Beck). Specific to the design of this research, sample size and heterogeneity were some common threats to validity that were considered (Polit & Beck). With regards to sample size, given that an inductive descriptive quantitative approach was being used, the largest available sample size was targeted (Polit & Beck). By removing a maximum limit for participant enrolment in the study, a larger data set was possible, which would support a richer, broader picture of the current state of competency-based assessment in HFS in nursing education. As well, all baccalaureate nursing programs in BC were initially invited prior to screening in order to maximize the potential sample pool.

An inherent problem with non-experimental study designs is the inability to control for the confounding characteristics that human participants inherently bring into any study (Polit & Beck). Recruitment of a homogenous sample population allows for some mitigation of this threat to validity by providing an alternative approach to controlling these confounding characteristics through selective enrolment of participants who share the same confounding traits (Polit & Beck). Recruiting a homogenous sample population for this study was attempted in a two-step manner. First, nursing programs that did not utilize high-fidelity simulation were excluded from further invitation into the study. This was accomplished by contacting department heads and inquiring whether their program possessed the material and human resources to deliver HFS in their baccalaureate nursing programs. Only faculty members in programs that possessed HFS mannequins and/or labs, as well as faculty with experience utilizing HFS, were included. Second, the invitation letter listed the inclusion and exclusion criteria, which allowed for self-selection of participants, increasing the likelihood of a homogenous sample.

In this chapter an overview of the study design, research methods, and data analysis undertaken to complete this research study was provided. In the next chapter the study findings are presented.

Chapter 4: Findings

In this chapter, the results of the study are offered. The demographics of the study participants and the findings are presented through various formats, including frequency tables, histograms, and pie charts to visually represent the results. Selected responses to open-ended questions are also presented to highlight key findings in the study.

4.1 Participant Demographics

Of the forty-four participants who accessed the online survey and consented to participate, thirty-five met the eligibility criteria, and twenty-five (n=25) completed the entire survey. The data were collected over a period of two months, March and April 2015. Specific demographic data for the twenty-five participants are presented in Table 4.1 and Figures 4.1, 4.2, and 4.3. All study participants taught in public post-secondary institutions in British Columbia.

Both the mean and the median age of the participants was 51 years. The sample was multimodal for age, with 42, 43, 51, 53, 58, 60, and 61 each occurring twice in the dataset. The youngest participant was 31 years old, and the oldest was 61 years old. Data was deleted for one participant due to an incorrect value provided in the survey ('1900' for their birth year). The age distribution of the participants is presented in Table 4.1.

Demographics (n=25): Number of Participants (Percentage)
Gender
  Male: 3 (12%)
  Female: 22 (88%)
Highest Level of Education Completed
  Bachelors (Nursing): 3 (12%)
  Masters (Nursing): 19 (76%)
  Masters (Other): 1 (4%)
  PhD (Nursing): 1 (4%)
  PhD (Other): 1 (4%)
Age (Years)
  30-34: 2 (8%)
  35-39: 2 (8%)
  40-44: 5 (20%)
  45-49: 3 (12%)
  50-54: 5 (20%)
  55-59: 3 (12%)
  60-64: 4 (16%)

Table 4.1: Participant Demographics

The mean number of years practicing as an RN was 24, and the median was 25. The sample was multimodal for 25 and 39 years, both occurring three times in the dataset. Years practicing ranged between 9 and 40 years.

Figure 4.1: Years Practicing as Registered Nurse (RN) [histogram; x-axis: years practicing as RN in five-year bins, 5-9 through 40-44; y-axis: number of participants]

The mean number of years teaching in a BSN program was 11, and the median was nine. The sample was multimodal for three, five, nine, and ten years of teaching, each occurring three times in the dataset. Years of teaching ranged between two and thirty-five years.

Figure 4.2: Years Teaching in Bachelor of Science in Nursing Program [histogram; x-axis: years teaching in five-year bins, 0-4 through 35-39; y-axis: number of participants]

The mean number of years of experience using HFS in teaching was 4.72. The median and the mode for the sample were both five. Years using HFS in teaching ranged from one to ten years.
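As an aside on how the central tendency values reported in this section can be derived, the sketch below computes the mean, median, mode(s), and range for a small set of years-of-experience values. The data are invented and do not reproduce the participants' responses.

```python
# A minimal sketch of the descriptive statistics reported above, on
# hypothetical years-of-HFS-experience values. statistics.multimode
# returns every value tied for the highest frequency, which is how a
# multimodal sample can be identified.
from statistics import mean, median, multimode

years_using_hfs = [1, 2, 3, 5, 5, 5, 6, 7, 9, 10]   # hypothetical data

print(f"mean   = {mean(years_using_hfs):.2f}")        # 5.30
print(f"median = {median(years_using_hfs)}")           # 5.0
print(f"modes  = {multimode(years_using_hfs)}")        # [5]
print(f"range  = {min(years_using_hfs)}-{max(years_using_hfs)}")  # 1-10
```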
Figure 4.3's x-axis is presented in single-year values, as opposed to the five-year ranges seen in the former two figures, in order to better illustrate the variability in the scores (which would be lost if the five-year ranges were maintained). Consistency in this case was sacrificed in order to depict the data more accurately.

Figure 4.3: Years Using High Fidelity Simulation in Teaching [histogram; x-axis: years using HFS in teaching, 1 through 10; y-axis: number of participants]

A large majority of the participants (72%) taught in a 3- or 4-year BSN program, while 32% taught in an advanced standing/accelerated program. Some of the programs (either 3-4 year or advanced/accelerated) also offered alternate entry options for those with a Licensed Practical Nurse (LPN) credential (32%). Other programs offered refresher programs, based on CRNBC criteria, for internationally trained RNs or those whose practicing status had lapsed (12%). One participant (4%) taught in a program partnered with the University of British Columbia Okanagan to deliver the first 2 years of the four-year nursing program.

Types of Nursing Program: Number of Responses
  3 or 4 year BSN program (no minimum college/university prerequisite credits or previous undergraduate degree required): 18
  Advanced Standing/Accelerated BSN programs (minimum college/university prerequisite credits or previous undergraduate degree required): 8
  Bridge-in Licensed Practical Nurse program: 8
  Refresher/Re-Entry program: 3
  Other: 1

Table 4.2: Types of Nursing Program

4.2 Simulation Use

The findings are presented as follows: HFS utilization; training in HFS use; current assessment tools used in HFS (both competency-based and non-competency-based); assessment considerations in HFS (guiding theories, evidence); and considerations and challenges. Any relevant additional comments provided by participants at the end of the survey are also presented.

4.2.1 HFS Utilization

Forty percent of the participants (n=10) did not know the total number of hours that their students were exposed to HFS in their nursing program. One participant provided the combined HFS exposure for both the basic entry BSN program (200 hours) and the bridge-in LPN program (72 hours). For the participants who provided data related to total hours (n=15), the mean total hours of HFS exposure was 47. The median was 25 hours, and the dataset was multimodal for 20 and 24 hours, both occurring twice.

Figure 4.4: Total Hours of HFS Exposure [histogram; x-axis: total hours of HFS exposure; y-axis: number of programs]

Participants were asked to indicate the clinical specialty areas depicted in the HFS scenarios their program utilizes; for this question, participants were able to provide multiple answers. All 25 participants indicated that medical and surgical specialty areas were used in their HFS scenarios. Only two other clinical specialty areas, obstetrics (n=17) and pediatrics (n=15), were indicated by more than half of the participants. Three participants reported 'Other' and specified pharmacology, leadership, operating room, and emergency as other clinical specialty areas/topics used in their program's HFS simulation.

Figure 4.5: Clinical Specialty Areas Simulated [bar chart; x-axis: clinical specialty simulated in HFS; y-axis: number of responses]

4.2.2 Training in HFS Use

The largest proportion of the participants (44%) did not know the number of faculty members within their program who possessed specialized training in the use of HFS for nursing education.
One participant who indicated 'unknown' nonetheless offered a guess of '5-10' faculty members.

Figure 4.6: Number of Faculty with Specialized Training in HFS [bar chart; x-axis: number of faculty with specialized HFS training (0-1, 2-3, 4-5, 6-8, unknown); y-axis: number of responses]

When asked about their own individual training related to the use of HFS as a teaching tool, 28% (n=7) indicated that they had never received any training, and 72% (n=18) had received training. Those with no formal training indicated that they were either "self-taught" or were given some on-the-job introduction, ranging from "a 5 minute walkthrough 5 minutes before the lab" to "peer taught" and "watching more senior faculty." One participant stated that the high fidelity simulation in their program "has been a trial and error project [with] limited training". Participants who had received training related to HFS use as a teaching tool varied in their expertise. One highly experienced participant reported being trained in "programming, running, maintenance/repairs/troubleshooting", whereas most participants had received some formal training through various workshops and conferences. Specific training programs that participants reported participating in were:
• The Simulation Educator Training Course (Norquest College)
• A course at the Institute for Medical Simulation at Harvard University
• The National League for Nursing Leadership for Simulation Educator certification program
• The CAE (formerly Canadian Aviation Electronics) training in healthcare simulation
• A 2-day SimMan course (training provided by the manufacturer)
• A Laerdal Sun Conference (manufacturer-sponsored conference)
• A Post Graduate Certificate in Simulation (program/school not indicated)
No completed training program was reported more than once within the 25 responses from the participants.

4.3 Assessment

4.3.1 Assessment of Student Learning

Formative and summative assessment of student learning was explored. Participants were asked how often they incorporated formative assessments when utilizing HFS. Twenty percent reported 'never', while 68% reported formatively assessing their students after every scenario when running more than one scenario per class. Four percent of the participants performed a formative assessment after every class, while 16% reported 'other'. One participant who answered 'other' stated, "It really depends on who is running the scenario - some faculty never assess; they use the mannequins as task trainers," potentially indicating inconsistencies within their program. Another participant indicated that formative assessment feedback is provided through post-scenario debriefing with the students.

Figure 4.7: Frequency of Formative Assessment [bar chart; x-axis: frequency of formative assessment when using HFS (never; after every scenario, when more than one scenario per class; after every class; once a week; other); y-axis: number of responses]

When asked about summative assessment in HFS activities, a large majority (72%) of participants reported never completing a summative assessment when utilizing HFS activities as part of their teaching. Twenty percent of participants performed summative assessments on their students once per term or semester. One participant reported completing a summative assessment every two months, and another every three months.
Figure 4.8: Frequency of Summative Assessment [bar chart; x-axis: frequency of summative assessment when using HFS; y-axis: number of responses]

4.3.2 Assessment Framework and Tools

4.3.2.1 Competency-Based Assessment Frameworks and Tools

With regards to the actual assessment framework(s) used by the participants, 64% utilized competency-based assessment framework(s), while the rest (36%) did not. Participants were asked about the origin and validity of the competency-based assessment tool(s) that they used. Thirty-eight percent of the participants did not know the origin and validity of the competency-based assessment tool they used in HFS activities. Of those who did know the origin of the tool, 19% reported using tools from the literature that had been tested for validity, and five percent reported using tools from the literature that had not been tested for validity. Fourteen percent of the participants used a tool created within their institution that had not been tested for validity, whereas nine percent reported using tools created within their institution that had been tested for validity. One participant used a validated tool created by another institution, and another participant reported using various assessment tools, both tested and untested for validity, from their own institution as well as other institutions, but did not specify which tools were from their institution and which were validated. Unfortunately, a breakdown of these results into formative and summative categories was not collected due to the data collection tool's design. A visual summary of these findings can be found in Figure 4.9. Please note that the statistics in this pie chart represent the subpopulation of participants who utilized CBATs, and not the entire study population.

Figure 4.9: Origin of Competency-Based Assessment Tool (CBAT), and Validity [pie chart: Validated, Internal 9%; Unvalidated, Internal 14%; Validated, External 5%; Validated, Literature 19%; Unvalidated, Literature 5%; Don't Know 38%; Other 10%]

When asked which specific competency-based assessment tools were being used, only five participants provided names for the specific tools and their validity, found in Table 4.3.

Name of Competency-Based Assessment Tool (Validity)
  In-House Checklist-Based Tool (name unspecified): Unspecified
  Simulation Design Scale © National League for Nursing 2005: Validated
  Simulation Learning System Scenarios: Unspecified
  Sweeney-Clark Simulation Rubric: Validated
  Tool adapted from the CRNBC Standards of Practice: Unknown
Note: 'Unspecified' means that the participant did not indicate in the survey whether the tool is validated. 'Unknown' means that the participant indicated they did not know whether the tool is validated.

Table 4.3: Competency-Based Assessment Tools, and Validity

4.3.2.2 Other Assessment Frameworks and Tools (Non-CBAT)

Participants were also asked about other, non-competency-based tools that they used in HFS activities, and their validity. Similar to the CBAT results, the majority of respondents (56%) did not know the origin and validity of the non-CBAT they used in HFS activities. Twenty-two percent of the respondents reported not using any assessment tools at all, and another 22% reported using non-validated non-CBATs developed within their institution. These findings are summarized in Figure 4.10.
Please note that the statistics in this pie chart represent the subpopulation of participants who responded to the question, and not the entire study population.

Figure 4.10: Origin of Other Assessment Tool, and Validity [pie chart: Unvalidated, Internal 22%; Don't Know 56%; No Assessment Tools Used 22%]

4.4 Assessment Considerations in HFS

4.4.1 Educational Theories

The participants were asked to indicate the educational theories that guided their choice of the assessment tools they used in HFS. Forty percent (n=10) of the participants did not indicate that any educational theories had guided their choice of assessment tools when utilizing HFS to assess their students' learning. Of those who did, 20% (n=5) indicated constructivism as the educational theory that guided their choice. Kolb and Lewin's experiential learning theory was indicated by 16% (n=4) of the participants, while behaviourism was indicated by 12% (n=3). Knowles' andragogy was indicated by 8% (n=2) of the participants as one of the theories that guided their choice. The rest of the theories were indicated only once in the dataset, and include:
• Cognitivism
• Felder & Silverman's Index of Learning Styles
• Fleming & Mills' VARK Learning Styles
• Active Learning Theory
• Critical Thinking Development Theory
• Reflective Practice
• Scientific Discovery Theory
• Humanism
• Transformational Learning Theory
• Boyer's Theory of Teaching and Learning
• Benner's From Novice to Expert
• Narrative Pedagogy
• Schön's Reflection-in-Action/Reflection-on-Action Theory
• Jeffries Simulation Framework
Some of the participants indicated specifically that they did not understand the question, and provided answers indicating teaching methods, practice standards, and guidelines instead of theories.

4.4.2 Evidence for Assessment Decisions

Participants were also asked about the evidence upon which they based their assessment decisions when assessing the competence of a particular student in HFS activities. Twenty percent (n=5) indicated that they did not assess students' competence when utilizing HFS. The largest proportion (40%) of participants utilized direct observation of student technique as their basis for student assessment. Twenty-eight percent (n=7) of the participants in this category also indicated that they combined visual observation with the student's verbal expression during the HFS activity. Verbal expression encompassed both the questions that students asked during the activity and their verbalization of the actions they were performing during the simulation. Student preparation prior to the HFS activity and debriefing participation were each utilized by eight percent (n=2) of participants. Student self-assessment and responsiveness to cues during the HFS activity were other forms of evidence used by the participants in basing their assessment decisions, each mentioned once in the survey. These findings are summarized in Figure 4.11.

Figure 4.11: Evidence for Assessment Decisions [pie chart: Direct Observation 36%; Direct Observation & Student's Verbal Expression 25%; Debriefing Participation 7%; Student Prep Prior to Sim 7%; Responsiveness to Cues 3%; Student Self-Assessment 4%; Not Assessed/Not Applicable 18%]

4.4.3 Other Considerations

Participants were also asked to share any assessment-related considerations that should be taken into account when selecting assessment tools to use in HFS activities.
Twenty percent (n=5) did not indicate any considerations, stating either "unknown" or "we do not utilize assessment tools". Forty-four percent (n=11) of the participants believed the objective or goal of the simulation activity was an important consideration when choosing assessment tools: whether the activity is purely a learning experience or a high-stakes activity that counts toward the student's academic grade; the specific learning outcome(s) for the simulation scenario; and, if the activity is for evaluation, whether the evaluation is formative or summative. Twenty-eight percent (n=7) of the participants indicated the level of the students as an important consideration. Sixteen percent (n=4) stated that the tool's ease of use was an important consideration, and eight percent (n=2) believed that the assessment tool chosen should fit within the nursing program's philosophy and nursing framework, as well as be adaptable (i.e., usable throughout the program instead of in just one term or scenario). One participant shared her specific viewpoint and experience:

  The more planning and preparation that the educator completes prior to the simulation, the more thorough the assessment of the student(s) will be. This preparation would include the expectations of the students to be prepared, the desired skills to be performed in the [simulation], the anticipation of certain events and how to handle those events, [such as when] students miss the health issue and their interventions are inappropriate or even harmful; or anticipating how to handle inappropriate or difficult interactions between students or with the client. Also the educator should have reasonable expectations of the students performance - as a newer educator ([less than] 2 years as educator but 14 years as an acute care RN) I see the frustration that experienced educators have with students to 'get it' quickly (meaning the health issue) and to intervene appropriately and in a timely manner. Additionally it is unreasonable to expect students to know more than they have learned in their nursing program thus far.

Other specific considerations that participants indicated include: simulator limitations; educator training; difficulty of the scenario; Canadian context; the tool's ability to help the educator provide meaningful feedback to the participants; applicability to clinical practice; and the validity and reliability of assessment tools. Each of these considerations was mentioned once in the survey.

4.4.4 Challenges

Participants were asked to share challenges they had encountered in their practice specific to assessing students' clinical competence when using HFS. Of the 25 participants, the most commonly reported experience (28%) was that students did not engage fully in the scenarios because they did not perceive them as real. The second-most prevalent challenge, reported by 24% of the participants, was the poor instructor-to-student ratio during HFS activities. Specific issues related to the poor instructor-to-student ratio included: "[multiple] distractions to deal with" at the same time, which prevents the ability to "attend to all students equally or at key moments in the simulation"; the "sheep effect – where one participant follows others even though they don't agree" (8%); and group activities masking the performance of weaker students (16%).
Inadequate training and preparation of participants to utilize HFS, poorly functioning mannequins, and equipment, space, and time constraints were other challenges raised by the participants (12%). One participant recalled the experience of the "mannequin shutting down in the middle of a scenario". Another participant shared their frustration about students:

  Students who openly discuss their sim with other students who have not yet completed the sim scenario. Despite having several prepared sim 'scenarios' (which can be a lot of work to prepare) and numerous requests for students NOT to discuss sim, this is an ongoing problem.

Student anxiety related to HFS was another challenge, reported by 16% of participants. Participants reported that "the students nervousness can negatively affect their performance in sim," stating it as the "biggest challenge for students." One challenge of interest shared by a participant involved the faculty within their program:

  The faculty are resistant to establishing learning objectives for the simulated learning experience and they also do not put the scenario into a format that can be used by everyone and anyone. This creates several issues including a lack of perceived fairness for the learner being evaluated. Also, the faculty have been using HFS to assess tasks, removing the task or skill from context of the situation, preventing the learner from being assessed for critical thinking, safety, client teaching, assessment or communication. Piecing the learning out is fine, but at some point we needed to see the whole picture. Faculty time is very limited and this has led to resistance to employ best practices for HFS.

One participant stated that they did not see simulation as a valuable assessment tool for clinical competence because "clinical competence is best assessed over time so that it is possible to assess if a change in behaviour has occurred", viewing simulation as a "one-time situation". Unclear guidelines, unclear goals, and a lack of valid and reliable assessment tools were other challenges indicated by participants.

4.5 Summary

The findings of the survey were presented in this chapter, describing the demographics of the participants, their nursing practice and teaching experience, and the types of nursing programs in which they teach. HFS-specific training of program faculty and of individual participants was also presented, as well as the amount of time nursing students are exposed to HFS within their programs and the types of clinical specialty areas depicted. The frequency of assessments, both formative and summative, and the tools used, both competency-based and non-competency-based, were also presented. The educational theories that guide the choice of assessment tools and frameworks, as well as the evidence on which participants base their assessment decisions, were also explored. Finally, assessment-specific considerations and challenges were expounded on, with verbatim data from the participants. In the next chapter, the findings are discussed in the context of the research questions posed by the study, as well as the study's limitations.

Chapter 5: Discussion

This chapter presents the discussion of the study findings within the context of the research question and the literature. The study's limitations are also explored.
5.1 Discussion of Findings

5.1.1 Demographics

The majority of participants in the study were female; only three of the 25 participants were male. This reflected the disproportionate number of males versus females in the nursing profession (Andrews, Stewart, Morgan, & D’Arcy, 2012): males accounted for 6.7% of registered nurses in BC in 2011 (Canadian Institute for Health Information, 2013). However, the number of Masters-trained participants did not reflect the trend in the general nursing population in BC or Canada. Twenty participants were Masters-trained and two were PhD-trained, accounting for 88% of the total participant population in the study. In 2011, the combined percentage of registered nurses with a graduate degree was 2.1% in BC, and 3.9% in Canada (Canadian Institute for Health Information). Though this may appear peculiar at first, this disparity can be explained by the fact that most graduate-trained registered nurses in Canada are employed in administration and education capacities (Canadian Institute for Health Information). This study’s inclusion criteria required participants to be teaching in a baccalaureate nursing program, which would account for the high proportion of graduate-trained participants in the study.

5.1.2 HFS Use

Given the increased uptake of HFS in most institutions in the Lower Mainland of BC, it was somewhat surprising that 40% of the participants reported not knowing the total HFS hours their students were exposed to in their nursing programs. Forty-four percent of the participants reported the total hours of exposure throughout the program as 40 hours or less. Given the expenditure on simulation equipment to equip nursing schools in BC and Canada (Akhtar-Danesh, Baxter, Valaitis, Stanyon, & Sproul, 2009; Mullins, 2011), one would have expected HFS use to be greater than this. However, it may be that participants did not interpret the survey question as total hours in the entire nursing program. Similarly, it was surprising that, given the uptake of HFS as a learning technology by nursing programs, particularly in the Lower Mainland of BC, a majority of participants (52%) reported that their respective programs had five or fewer faculty members with specialized training in HFS use. Each participant who received training in HFS use reported completing a training program or course that differed from those of the other participants. This variety of training programs was an interesting finding given the geographic proximity of the nursing programs in the Lower Mainland, as well as the membership of all Lower Mainland nursing programs in the Lower Mainland Nursing Clinical Education Steering Committee (LMNCESC), an organization that seeks to improve nursing practice education within the Lower Mainland region through collaboration (LMNCESC, 2011). However, given the small sample, it was difficult to ascertain whether this finding is representative of the training programs completed by nurse educators across British Columbia.

5.1.3 Assessments

With regard to formative assessments, a majority of the participants reported incorporating formative assessment of their students after each HFS scenario, especially when more than one scenario was conducted per class. Remarkably, 20% of the participants reported never incorporating any form of formative assessment.
This finding may also be related to the participants having misunderstood the survey question and interpreted the term formative to mean ‘formal’. However, there was no way to clarify this given the data collection tool’s anonymous nature. One participant reported inconsistencies in their program’s use of HFS, with some nurse educators focusing on specific tasks and not incorporating any form of assessment into the scenario. Worse still, 72% of the participants reported that no summative assessments were completed when HFS was utilized as part of their teaching. This is worrisome given the role that assessment plays in making educational decisions, as well as in assessing the effectiveness of the teaching effort (Attree, 2006; Murray et al., 2000; Traynor & Galanouli, 2015). Unfortunately, given the anonymous design of the data collection tool, it was impossible to elicit the reason why a large majority of participants reported not performing summative assessments. However, one could speculate that the lack of available competency-based assessment tools (Arnold et al., 2009; Attree, 2006; Gore, Hunt, & Raines, 2008; Kardong-Edgren et al., 2010; Radhakrishnan, Roche, & Cunningham, 2007) could have played a role in this finding.

5.1.3.1 Competency-Based Assessment Frameworks and Tools

With regard to the assessment frameworks and tools used by the participants, it was promising to see that 64% reported using competency-based assessment frameworks and tools. However, only 8% of the participants were able to report the name of the validated instrument they use. This seemed inconsistent, given that almost half of the participants who reported using competency-based assessment tools did not know their instrument’s origin or validity. Despite the definitions of competency-based assessment tools and of validity given in the data collection tool, participants may simply have answered ‘yes’ without actually knowing what the question was asking. Unfortunately, there was no way to confirm this with the participants given the data collection tool’s anonymous design.

Based on the study data, the apparent inconsistency between the reported use of CBATs and nurse educators’ knowledge about the tools used could be related to the study’s finding regarding the low numbers of nurse educators with specialized training in HFS. The literature suggests a correlation between specialized educational technology training for nurse educators and the effective use of educational technologies such as HFS (NCSBN, 2009; Nguyen, Zierler, & Nguyen, 2011). This is further complicated by the inconsistency in the training programs completed by the participants – particularly those from the same regional area. Given this, increasing the emphasis on, and support for, training more nurse educators in the effective use of HFS could translate to better outcomes related to their own knowledge base on HFS use and assessment, as well as the delivery of knowledge to students. Collaboration between the programs in standardizing on one or two training programs on HFS use could also result in consistency with regard to the use of competency-based assessments, especially in the choice of the instruments used.
Another potential explanation for this inconsistency could be related to the well-documented lack of available CBATs that are validated and reliable (Attree, 2006; Cant & Cooper, 2010; Cowan, Norman, & Coopamah, 2007; Kardong-Edgren, Adamson, & Fitzgerald, 2010; Murray, Gruppen, Catton, Hays, & Woolliscroft, 2000; Nehring & Lashley, 2004; Todd, Manz, Hawkins, Parsons, & Hercinger, 2008; Zasadny & Bull, 2015). The study data support this, with only a small number of participants relying on literature-based validated assessment tools. Again, a joint approach of increased training for nurse educators and collaboration between the different nursing programs could provide the best solution to this issue.

5.1.3.2 Other Assessment Frameworks and Tools (Non-CBAT)

The rest of the participants used non-competency-based assessment frameworks and tools, and none of them provided the names of the tools they used. Given the current evidence in the literature, it would not be surprising to find that these tools were self-reported measurements of satisfaction with the learning process or of student confidence (Attree, 2006; Kardong-Edgren et al., 2010; Todd et al., 2008). However, it was impossible to confirm this with the participants given the anonymous design of the data collection tool.

5.1.4 Challenges

Of note was the challenge reported by one participant with regard to the nurse educators in her program. The participant reported that the nurse educators were resistant to applying best practice principles for HFS in her program due to ‘limited time’. However, it was impossible to clarify what the participant meant by ‘limited time’ due to the anonymous design of the tool. She may have been referring to workload issues; the literature describes the increased workload for nurse educators that is inherent in HFS use (Gore et al., 2008; Hoffmann, O’Donnell, & Kim, 2007; Kardong-Edgren et al., 2010; Radhakrishnan et al., 2007). Given the current literature on this topic, it was surprising that none of the other participants reported limited time or increased workload as a challenge when using HFS in their teaching.

5.2 Limitations

This study was designed to ensure rigor and validity of all findings. However, as with all research studies, there were some limitations (both intrinsic and extrinsic). They are presented in this section, and recommendations on how to address them in future research are discussed in the next section.

One possible limitation of the study is the small sample size, which potentially decreased the representativeness of the findings outside of the local nurse educator population (Polit & Beck, 2012). This is particularly important in descriptive studies, because a large sample size would have allowed for more representative data (Polit & Beck). However, considering that the inclusion and exclusion criteria increased the possibility of a homogeneous population, the small sample size may have been adequate to represent the nurse educator experience in BC (Polit & Beck).
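To put the sample-size limitation in concrete terms, consider the precision of the percentages reported in Chapter 4. The following is a minimal illustrative calculation (not part of the original analysis), assuming simple random sampling and the most conservative proportion estimate, $p = 0.5$:

$$
\text{MOE}_{95\%} = z_{0.975}\sqrt{\frac{p(1-p)}{n}} = 1.96\sqrt{\frac{0.5 \times 0.5}{25}} \approx 0.196
$$

Under these assumptions, each percentage estimated from the sample of 25 carries a margin of error of roughly ±20 percentage points, which underscores why the findings should be generalized beyond this sample with caution.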
Another limitation of the study was the chosen methodology. Univariate quantitative and qualitative descriptive research studies play an important role in describing the current state of a phenomenon, particularly when not much is known (Polit & Beck). However, no relationship between the variables being described can be inferred with this specific approach (Polit & Beck). The study design was an appropriate choice to answer the research questions, but given this characteristic of descriptive studies, further research would be necessary to determine correlations between the variables being described.

The data collection instrument used in the study had limitations regarding its validity and reliability, as well as the breadth of data collected. The instrument’s reliability was never determined prior to deployment, and only the face validity of the instrument was tested prior to use. The face validity was determined using a modified Delphi approach, but no further testing of validity (e.g., content and construct validity) was conducted prior to using the instrument. Perhaps the biggest limitation in the study was related to the breadth of the data collected by the instrument. Care was taken to ensure the anonymity of the participants at the onset of the study, so it was decided in the design phase of the instrument to omit any questions that could identify the participants either individually or institutionally. Although this may have helped ensure participants were candid with their answers because of their anonymity, there were instances during the data cleaning and analysis phases where clarification would have been helpful but was impossible, as there was no way to connect the data to specific participants or to contact them. As well, it was impossible to perform any basic statistical correlations related to institutional affiliation given the anonymous design of the data tool.

This chapter discussed the study findings within the context of the literature and the research questions, as well as the limitations. The next chapter will summarize the research completed and present recommendations for future research and implications of the findings for nursing education.

Chapter 6: Conclusion

This chapter concludes with a summary of the key points related to the research question, as well as future research recommendations regarding this topic and the implications of the findings for nursing education.

6.1 Summary

The central question for this study was:

    What assessment tools and methods are nurse educators using to assess clinical competence when utilizing HFS?

Based on the study outcomes, it is apparent that a majority of nurse educators are utilizing CBATs and frameworks to assess clinical competence in the context of HFS. Inconsistencies exist with regard to the use of existing validated and reliable instruments from the literature versus instruments developed within institutions that have not been tested for validity or reliability. Specific competency-based assessment tools used by nurse educators include the Simulation Design Scale © (National League for Nursing), Simulation Learning System Scenarios, and the Sweeney-Clark Simulation Rubric. However, nurse educators rely mostly on non-validated CBATs created within their own institutions rather than on instruments from the literature.

The following additional questions were also explored through the central question:

i. What educational theories guide the choice of assessment tools?

From the data, it appears that nurse educators are guided by a variety of educational theories, including constructivism, Kolb’s and Lewin’s experiential learning theory, behaviourism, and Knowles’ andragogy. Others consider the objective or goal of the HFS exercise to guide their choice of assessment tool.
However, the data show that some nurse educators do not take any specific educational theories into consideration when deciding on the assessment tool to use. This may be influenced by the fact that some nurse educators do not choose the assessment tool themselves; it is chosen instead by their specific nursing program.

ii. On what evidence do instructors base their assessment decisions?

From the results, it seems that assessment decisions are primarily based on direct observation of students’ performance during the simulation. However, some nurse educators incorporate verbal expressions by the students, combined with direct observation, as part of the assessment data used to determine competence. Other educators base their assessment decisions on the students’ pre-simulation preparation, while others utilize debriefing activities to validate their assessment.

iii. What are some challenges associated with objectively assessing clinical competence within the context of HFS?

From the study findings, the most commonly reported challenge faced by nurse educators when objectively assessing students’ clinical competence within the context of HFS was students’ disengagement related to the perceived lack of realism of the simulation. Another challenge was the poor instructor-to-student ratio (Cant & Cooper, 2010), while others struggled with poorly functioning equipment or ill-equipped learning spaces (Kardong-Edgren, Adamson, & Fitzgerald, 2010). Time constraints and workload issues were other reported challenges that plagued nurse educators when utilizing HFS.

6.2 Recommendations for Future Research

Replication of this study is recommended to gain better insight into the state of competency-based HFS assessment, with a few adjustments based on lessons learned from carrying out this research. One important recommendation, should this study be replicated, is to increase the sample size. This would increase the likelihood of a representative sample, allowing for generalizability of the study findings (Polit & Beck, 2012) and enhancing the possibility that the study data would paint a more complete picture of the state of HFS assessment and the use of competency-based assessment tools.

Another recommendation would be to increase the robustness of the data collection tool: adjust questions for clarity, include personal identifier information (including contact information and institutional affiliation), and add other pertinent research questions. This would allow for clarification of any ambiguous or erroneous data, data aggregation related to specific institutions, and collection of other relevant data that was not gathered in this study.

One interesting area for further research would be to explore faculty attitudes regarding HFS and competency-based assessment in HFS. Although the finding related to resistant nurse educator colleagues was isolated to one participant, it would be interesting to see whether other participants encountered similar experiences, and whether this experience was confined to one institution or shared across institutions.

Ultimately, the literature and this study recommend more research into creating validated and reliable competency-based assessment instruments for HFS, as well as further testing of the validity and reliability of existing instruments found in the literature. Ideally, nurse educators would seek out existing instruments to use with their HFS scenarios, perform validity and reliability testing of the existing instrument within their own institution, and publish the results. If no existing instrument can be found that would work with a nurse educator’s HFS scenario, then developing a new instrument would be necessary. However, it is recommended that the instrument be tested for validity and reliability, and that those findings be shared with the academic community.
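As one concrete illustration of what such reliability testing might involve, the sketch below computes Cronbach’s alpha, a commonly reported internal-consistency statistic, for item-level scores from a competency rubric. This is a minimal, hypothetical example (the rubric, the scores, and the Python implementation are illustrative assumptions, not data or methods from this study):

```python
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Cronbach's alpha for a (students x items) matrix of rubric scores.

    alpha = k / (k - 1) * (1 - sum of item variances / variance of total scores)
    """
    k = item_scores.shape[1]                         # number of rubric items
    item_vars = item_scores.var(axis=0, ddof=1)      # per-item variance across students
    total_var = item_scores.sum(axis=1).var(ddof=1)  # variance of students' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical scores: 6 students rated on a 5-item competency rubric (1-5 scale).
scores = np.array([
    [4, 5, 4, 4, 5],
    [3, 3, 4, 3, 3],
    [5, 5, 5, 4, 5],
    [2, 3, 2, 3, 2],
    [4, 4, 3, 4, 4],
    [3, 2, 3, 3, 3],
])

print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")
```

Values of alpha at or above roughly 0.7 are often taken as acceptable internal consistency, although full reliability testing of a clinical assessment instrument would also typically include inter-rater reliability across educators scoring the same simulated performance.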
6.3 Implications for Nursing Education

This study was designed to describe the current state of HFS use in British Columbia, with a focus on competency-based assessment, particularly the tools used. The findings suggested a high adoption rate of the technology within the province’s nursing programs, but unfortunately a lack of training for nurse educators on the effective use of the technology for teaching and learning. The inconsistencies between the different programs regarding the number of hours of HFS exposure, the training courses undertaken by nurse educators, and the assessment tools used, particularly in the Lower Mainland, should be a starting point for collaboration between nursing programs. A spirit of cooperation between the nursing programs province-wide should help improve the utilization of HFS and the overall delivery of nursing education, both within each specific nursing program and province-wide.

Although the use of competency-based assessment frameworks and tools seems promising according to this study, there remains an inconsistency in the participants’ actual knowledge of the tools they use. This information should trigger nursing programs to address this issue by ensuring that the use of validated, reliable assessment instruments is consistent within their specific program. The chosen instrument(s) should adequately address the learning objectives outlined by their specific courses and HFS scenarios. Moreover, research within each individual nursing program, as well as multi-site collaborations between programs, should be undertaken to test the validity and reliability of existing instruments from the literature and of newly developed instruments from each nursing program. This will only serve to enhance the learning of nursing students and advance the knowledge base of nursing education.

References

Abe, B. Y., Kawahara, C., Yamashina, A., & Tsuboi, R. (2013). Repeated scenario simulation to improve competency in critical care: A new approach for nursing education. American Journal of Critical Care, 22(1), 33–40.

Akhtar-Danesh, N., Baxter, P., Valaitis, R. K., Stanyon, W., & Sproul, S. (2009). Nurse faculty perceptions of simulation use in nursing education. Western Journal of Nursing Research, 31(3), 312–329. http://doi.org/10.1177/0193945908328264

Andrews, M. E., Stewart, N. J., Morgan, D. G., & D’Arcy, C. (2012). More alike than different: A comparison of male and female RNs in rural and remote Canada. Journal of Nursing Management, 20(4), 561–570. http://doi.org/10.1111/j.1365-2834.2011.01195.x

Arnold, J. J., Johnson, L. M., Tucker, S. J., Malec, J. F., Henrickson, S. E., & Dunn, W. F. (2009). Evaluation tools in simulation learning: Performance and self-efficacy in emergency response. Clinical Simulation in Nursing, 5(1), e35–e43. http://doi.org/10.1016/j.ecns.2008.10.003

Arthur, C., Kable, A., & Levett-Jones, T. (2011). Human patient simulation manikins and information communication technology use in Australian schools of nursing: A cross-sectional survey. Clinical Simulation in Nursing, 7(6), e219–e227. http://doi.org/10.1016/j.ecns.2010.03.002
Association of Registered Nurses of Newfoundland and Labrador. (2013). Competencies in the context of entry-level registered nurse practice 2013-18. Retrieved on August 16, 2015 from https://www.arnnl.ca/sites/default/files/RD_Competencies_in_the_Context_of_Entry_Level_Registered_Nurse_Practice.pdf

Attree, M. (2006). Evaluating healthcare education: Issues and methods. Nurse Education in Practice, 6(6), 332–338. http://doi.org/10.1016/j.nepr.2006.07.010

Bandura, A. (1977). Self-efficacy: Toward a unifying theory of behavioral change. Psychological Review, 84(2), 191–215.

Bastable, S. (2014). Nurse as educator: Principles for teaching and learning for nursing practice (4th ed.). Burlington, MA: Jones & Bartlett Learning.

Bastable, S., & Doody, J. A. (2008). Nurse as educator: Principles for teaching and learning for nursing practice (3rd ed.). Sudbury, MA: Jones & Bartlett Publishers.

Baxter, P., & Norman, G. (2011). Self-assessment or self deception? A lack of association between nursing students’ self-assessment and performance. Journal of Advanced Nursing, 67(11), 2406–2413. http://doi.org/10.1111/j.1365-2648.2011.05658.x

Blum, C. A., Borglund, S., & Parcells, D. (2010). High-fidelity nursing simulation: Impact on student self-confidence and clinical competence. International Journal of Nursing Education Scholarship, 7(1), Article 18. http://doi.org/10.2202/1548-923X.2035

Canadian Institute for Health Information. (2013). Regulated nurses: Canadian trends, 2007 to 2011. Retrieved on September 12, 2015 from https://secure.cihi.ca/free_products/Regulated_Nurses_EN.pdf

Canadian Nurses Association. (2007). Framework for the practice of registered nurses in Canada. Retrieved on July 24, 2015 from https://www.cna-aiic.ca/~/media/cna/page-content/pdf-en/framework-for-the-pracice-of-registered-nurses-in-canada.pdf?la=en

Canadian Nurses Association. (2008). Code of ethics for registered nurses. Retrieved on July 24, 2015 from https://www.cna-aiic.ca/~/media/cna/page-content/pdf-fr/code-of-ethics-for-registered-nurses.pdf?la=en

Canadian Nurses Association, Canadian Association of Schools of Nursing. (2013). Registered nurses education in Canada statistics, 2011-2012. Registered nurse workforce, Canadian production: Potential new supply. Retrieved on September 6, 2015 from https://www.cna-aiic.ca/~/media/cna/files/en/nsfs_report_2011-2012_e.pdf

Cant, R. P., & Cooper, S. J. (2010). Simulation-based learning in nurse education: Systematic review. Journal of Advanced Nursing, 66(1), 3–15. http://doi.org/10.1111/j.1365-2648.2009.05240.x

Carraccio, C., & Englander, R. (2000). The objective structured clinical examination. Archives of Pediatrics & Adolescent Medicine, 154(6), 736–741.

Carraccio, C., Wolfsthal, S. D., Englander, R., Ferentz, K., & Martin, C. (2002). Shifting paradigms: From Flexner to competencies. Academic Medicine: Journal of the Association of American Medical Colleges, 77(5), 361–367. http://doi.org/10.1097/00001888-200205000-00003

Chapman, H. (1999). Some important limitations of competency-based education with respect to nurse education: An Australian perspective. Nurse Education Today, 19(2), 129–135.

Clark, M. (2006). Evaluating an obstetric trauma scenario. Clinical Simulation in Nursing, 2(2), e75–e77. http://doi.org/10.1016/j.ecns.2009.05.028
College & Association of Registered Nurses of Alberta. (2013). Entry-to-practice competencies for the registered nurses profession. Retrieved on August 7, 2015 from http://www.nurses.ab.ca/content/dam/carna/pdfs/DocumentList/Standards/RN_EntryPracticeCompetencies_May2013.pdf

College of Nurses of Ontario. (2014). Competencies for entry-level registered nurse practice. Retrieved on August 7, 2015 from http://www.cno.org/Global/docs/reg/41037_EntryToPracitic_final.pdf

College of Registered Nurses of British Columbia. (2010). Competencies required for nurse practitioners in British Columbia. Retrieved on August 7, 2015 from https://www.crnbc.ca/Registration/Lists/RegistrationResources/416CompetenciesNPs.pdf

College of Registered Nurses of British Columbia. (2015). Competencies in the context of entry level registered nurse practice in British Columbia. Retrieved on August 7, 2015 from https://crnbc.ca/Registration/Lists/RegistrationResources/375CompetenciesEntrylevelRN.pdf#search=competencies

Colthart, I., Bagnall, G., Evans, A., Allbutt, H., Haig, A., Illing, J., & McKinstry, B. (2008). The effectiveness of self-assessment on the identification of learner needs, learner activity, and impact on clinical practice: BEME Guide no. 10. Medical Teacher, 30, 124–145.

Cook, C., Heath, F., & Thompson, R. (2000). A meta-analysis of response rates in web- or internet-based surveys. Educational and Psychological Measurement, 60(6), 821–836. doi:10.1177/00131640021970934

Cowan, D. T., Norman, I., & Coopamah, V. P. (2007). Competence in nursing practice: A controversial concept – a focused review of literature. Accident and Emergency Nursing, 15(1), 20–26. http://doi.org/10.1016/j.aaen.2006.11.002

Davis, D. A., Mazmanian, P. E., Fordis, M., Van Harrison, R., Thorpe, K. E., & Perrier, L. (2006). Accuracy of physician self-assessment compared with observed measures of competence: A systematic review. JAMA: The Journal of the American Medical Association, 296, 1094–1102.

DeYoung, S. (2009). Teaching strategies for nurse educators (2nd ed.). Upper Saddle River, NJ: Prentice Hall.

Drisko, J. W. (2014). Competencies and their assessment. Journal of Social Work Education, (June 2015), 37–41. http://doi.org/10.1080/10437797.2014.917927

Eva, K. W., & Regehr, G. (2005). Self-assessment in the health professions: A reformulation and research agenda. Academic Medicine, 80, S46–S54.

Feinstein, A. H., & Cannon, H. M. (2002). Constructs of simulation evaluation. Simulation & Gaming, 33(4), 425–440. http://doi.org/10.1177/1046878102238606

Gallagher, P., Smith, T., & Ousey, K. (2012). Problems with competence assessment as it applies to student nurses. Nurse Education in Practice, 12(6), 301–303. http://doi.org/10.1016/j.nepr.2012.05.014

Gibbons, S. W., Adamo, G., Padden, D., Ricciardi, R., Graziano, M., Levine, E., & Hawkins, R. (2002). Clinical evaluation in advanced practice nursing education: Using standardized patients in health assessment. The Journal of Nursing Education, 41(5), 215–221.

Goldenberg, D., & Dietrich, P. (2002). A humanistic-educative approach to evaluation in nursing education. Nurse Education Today, 22(4), 301–310.

Goodstone, L., & Goodstone, M. (2013). Use of simulation to develop a medication administration safety assessment tool. Clinical Simulation in Nursing, 9(12), e609–e615. http://dx.doi.org/10.1016/j.ecns.2013.04.017

Gordon, M. J. (1991). A review of the validity and accuracy of self-assessments in health professions training. Academic Medicine, 66, 762–769.

Gore, T., Hunt, C. W., & Raines, K. H. (2008). Mock hospital unit simulation: A teaching strategy to promote safe patient care. Clinical Simulation in Nursing, 4(3), 57–64. http://doi.org/10.1016/j.ecns.2008.08.006
Harden, R. M., Stevenson, M., Downie, W. W., & Wilson, G. M. (1975). Assessment of clinical competence using objective structured examination. British Medical Journal, 1(5955), 447–451. http://doi.org/10.1136/bmj.1.5955.447

Hayden, J. K., Smiley, R. A., Alexander, M., Kardong-Edgren, S., & Jeffries, P. R. (2014). The NCSBN national simulation study: A longitudinal, randomized, controlled study replacing clinical hours with simulation in prelicensure nursing education. Journal of Nursing Regulation, 5(2 Suppl), S1–S64. Retrieved on Sep 18, 2015 from https://www.ncsbn.org/JNR_Simulation_Supplement.pdf

Herm, S. M., Scott, K. A., & Copley, D. M. (2007). “Sim”sational revelations. Clinical Simulation in Nursing, 3(1), e25–e30. http://doi.org/10.1016/j.ecns.2009.05.036

Hilliard, R. I., & Tallett, S. E. (1998). The use of an objective structured clinical examination with postgraduate residents in pediatrics. Archives of Pediatrics & Adolescent Medicine, 152(1), 74–78.

Hoffmann, R. L., O’Donnell, J. M., & Kim, Y. (2007). The effects of human patient simulators on basic knowledge in critical care nursing with undergraduate senior baccalaureate nursing students. Simulation in Healthcare: Journal of the Society for Simulation in Healthcare, 2(2), 110–114. http://doi.org/10.1097/SIH.0b013e318033abb5

Jeffries, P. (2005). A framework for designing, implementing, and evaluating simulations used as teaching strategies in nursing. Nursing Education Perspectives, 26(2), 96–103.

Joorabchi, B. (1991). Objective structured clinical examination in a pediatric residency program. American Journal of Diseases of Children, 145(7), 757–762.

Joorabchi, B., & Devries, J. M. (1991). Evaluation of clinical competence: The gap between expectation and performance. Pediatrics, 97(2), 179–184.

Kak, N., Burkhalter, B., & Cooper, M. (2001). Measuring the competences of healthcare providers. Quality Assurance Project, 1(1), 1–28.

Kaplowitz, M. D., Hadlock, T. D., & Levine, R. (2004). A comparison of web and mail survey response rates. The Public Opinion Quarterly, 68(1), 94–101. Retrieved on February 15, 2015 from http://www.jstor.org.ezproxy.library.ubc.ca/stable/3521538

Kardong-Edgren, S., Adamson, K. A., & Fitzgerald, C. (2010). A review of currently published evaluation instruments for human patient simulation. Clinical Simulation in Nursing, 6(1), e25–e35. http://doi.org/10.1016/j.ecns.2009.08.004

Klein, C. J. (2006). Linking competency-based assessment to successful clinical practice. Journal of Nursing Education, 45(9), 379–384.

Kruger, J., & Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing one’s own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77, 1121–1134.

Kuiper, R. A., Heinrich, C., Matthias, A., Graham, M. J., & Bell-Kotwall, L. (2008). Debriefing with the OPT model of clinical reasoning during high fidelity patient simulation. International Journal of Nursing Education Scholarship, 5(1), Article 17.

Lane, D. S., & Ross, V. S. (1998). Defining competencies and performance indicators for physicians in medical management. American Journal of Preventive Medicine, 14(3), 229–236.

Lasater, K. (2007). High-fidelity simulation and the development of clinical judgment: Students’ experiences. Journal of Nursing Education, 46, 269–276.
Lenburg, C. B. (1999). The framework, concepts and methods of the competency outcomes and performance assessment (COPA) model. Online Journal of Issues in Nursing, 4(2), Manuscript 2. Retrieved on December 9, 2014 from http://www.nursingworld.org/MainMenuCategories/ANAMarketplace/ANAPeriodicals/OJIN/TableofContents/Volume41999/No2Sep1999/COPAModel.aspx

Lenburg, C. B., Klein, C., Abdur-Rahman, V., Spencer, T., & Boyer, S. (2009). The COPA model: A comprehensive framework designed to promote quality care and competence for patient safety. Nursing Education Perspectives, 30(5), 313–317.

Lower Mainland Nursing Clinical Education Steering Committee. (2011). Developing a nursing practice education framework for the British Columbia Lower Mainland (Version 4). Vancouver, BC: Author.

McClelland, D. (1973). Testing for competence rather than for intelligence. American Psychologist, 28, 1–14.

Mullins, A. (2011). Provincial funding leads to makeover for nursing education resource centre. Langara News. Retrieved on November 19, 2014 from http://www.langara.bc.ca/news-and-events/langara-news/2011/111220-provincial-funding-leads-to-makeover.html

Murray, E., Gruppen, L., Catton, P., Hays, R., & Woolliscroft, J. O. (2000). The accountability of clinical education: Its definition and assessment. Medical Education, 34, 871–879.

National Council of State Boards of Nursing. (2009). Report of findings from the effect of high-fidelity simulation on nursing students’ knowledge and performance: A pilot study. Chicago: NCSBN.

Nehring, W., & Lashley, F. R. (2004). Current use and opinions regarding human patient simulators in nursing education: An international survey. Nursing Education Perspectives, 25(5), 244–248.

Nguyen, D. N., Zierler, B., & Nguyen, H. Q. (2011). A survey of nursing faculty needs for training in use of new technologies for education and practice. The Journal of Nursing Education, 50(4), 181–189. http://doi.org/10.3928/01484834-20101130-06

Polit, D. F., & Beck, C. T. (2012). Nursing research: Generating and assessing evidence for nursing practice (9th ed.). Philadelphia, PA: Lippincott Williams & Wilkins.

Radhakrishnan, K., Roche, J. P., & Cunningham, H. (2007). Measuring clinical practice parameters with human patient simulation: A pilot study. International Journal of Nursing Education Scholarship, 4(1), Article 8. http://doi.org/10.2202/1548-923X.1307

Schoening, A. M., Sittner, B. J., & Todd, M. J. (2006). Nursing students’ perceptions and the educators’ role. Nurse Educator, 31(6), 253–258.

Society for Simulation in Healthcare. (2012). What is simulation? Retrieved on May 18, 2014 from http://ssih.org/about-simulation

Suliman, W., & Halabi, J. (2007). Critical thinking, self-esteem, and state anxiety of nursing students. Nurse Education Today, 27, 162–168. doi:10.1016/j.nedt.2006.04.008

Tanner, C. A. (2006). Thinking like a nurse: A research-based model of clinical judgment in nursing. The Journal of Nursing Education, 45(6), 204–211.

Tarrant, J. (2000). What is wrong with competence? Journal of Further and Higher Education, 24(1), 77–83. http://doi.org/10.1080/030987700112336

Thurman, G. K., & Sanders, M. K. (1987). Competency-based education versus traditional education: A comparison of effectiveness. Radiologic Technology, 59(2), 164–169.

Todd, M., Manz, J. A., Hawkins, K. S., Parsons, M. E., & Hercinger, M. (2008). The development of a quantitative evaluation tool for simulations in nursing education. International Journal of Nursing Education Scholarship, 5(1), Article 41. http://doi.org/10.2202/1548-923X.1705
Traynor, M., & Galanouli, D. (2015). Have OSCEs come of age in nursing education? British Journal of Nursing, 24(7), 388–392.

U.S. Department of Education, Commission on the Future of Higher Education. (2006). A test of leadership: Charting the future of U.S. higher education. Retrieved on April 21, 2015 from http://www2.ed.gov/about/bdscomm/list/hiedfuture/reports/final-report.pdf

Watson, R., Stimpson, A., Topping, A., & Porock, D. (2002). Clinical competence assessment in nursing: A systematic review of the literature. Journal of Advanced Nursing, 39(5), 421–431. http://doi.org/10.1046/j.1365-2648.2002.02307.x

Zasadny, M. F., & Bull, R. M. (2015). Assessing competence in undergraduate nursing students: The amalgamated students assessment in practice model. Nurse Education in Practice, 15, 126–134.

Appendices

Appendix A: Initial Email Contact Letter

Dear Colleague,

My name is Peterson Masigan, a graduate student completing my thesis for my Master of Science in Nursing degree at the University of British Columbia.

I invite you to participate in a study that aims to explore the methods used by professional nurse educators to assess students’ clinical competence when utilizing High Fidelity Simulation (HFS) as part of their teaching. This study will form the basis for my thesis for a graduate Master of Science in Nursing degree.

The online survey will ask nursing faculty about their experiences with assessment using HFS and will take approximately 20 minutes to complete. The survey will close after April 30, 2015.

There will be no personal identifiers required in the survey. Completion of the survey implies agreement to participate and your understanding of the research study. As a token of appreciation, those who complete the survey will be given an option to enter a draw for a $25.00 gift card of their choice (Starbucks/Tim Hortons/iTunes – 4 gift cards in total). A link at the end of the survey will take you to a different webpage where you can enter your name and email address. There is no way to link this information to your survey response.

Please see the attached information letter for details and the link to the survey. If you are interested in learning more about this study please do contact me, Peterson Masigan, or my supervisory committee chair, Dr. Bernie Garrett. Copies of the completed work will be distributed to your nursing department director/chair for distribution.

Thank you for considering taking part in this study.
Best regards,

Peterson Masigan, BSN, RN
Graduate Student, Master of Science in Nursing
University of British Columbia

Appendix B: Letter of Information

LETTER OF INFORMATION

TITLE: Competency Assessment in Clinical High-Fidelity Simulation: A Survey of Methods Used in Undergraduate Nursing

PRINCIPAL INVESTIGATOR: Bernie Garrett, PhD, RN, School of Nursing, University of British Columbia

CO-INVESTIGATORS: Peterson Masigan, BSN, RN, Graduate Student, Master of Science in Nursing, School of Nursing, University of British Columbia; Tarnia Taverner, RN, PhD, MSc, School of Nursing, University of British Columbia; Elsie Tan, RN, MSN, School of Nursing, University of British Columbia

CONTACT ADDRESS: UBC School of Nursing, T201 2211 Wesbrook Mall, Vancouver, BC, Canada V6T 2B5, 604-822-7417

INTRODUCTION

You are being invited to take part in this study because you are a nursing faculty member in a baccalaureate-granting nursing program in British Columbia who utilizes High-Fidelity Simulation (HFS) as part of your teaching. This study aims to explore the methods used by professional nurse educators to assess students’ clinical competence when utilizing HFS.

This study is being conducted by graduate student Peterson Masigan as part of a thesis for a Master of Science in Nursing degree. The main results may be published at a later date. Only the graduate student and the principal and co-investigators will have access to the raw data gathered in this study.

STUDY PROCEDURES

If you decide to participate in this study:

1. You will complete an online survey. The survey will take approximately 20 minutes to complete. The survey will not gather any personal information that could be used to identify you. The link to the online survey can be found at the end of this letter.

Please note that the online survey will be managed through ‘FluidSurvey’, a subsidiary company of ‘SurveyMonkey’, an American web-survey company. The online survey is hosted on computers located in Canada, but may also be stored and accessed in the USA and as such is subject to U.S. laws – in particular, the US Patriot Act, which allows federal authorities access to the records of Internet service providers. This questionnaire does not ask for any personal identifiers or any information that may be used to identify you personally. However, the web-survey company servers record the incoming IP address of the computer that you use to access the survey, but no connection is made between your data and your computer’s IP address. If you participate in the survey, you understand that your responses to the survey questions may be stored and accessed in the USA. For more information, the security and privacy policy for the web-survey company can be found at the following link: http://www.surveymonkey.com/Monkey_Privacy.aspx

2. Once you complete the survey, you will be invited to enter a draw for a $25.00 gift card of your choice (4 gift cards to be won in total). This is optional. If you do not want to enter the draw, simply close your webpage at the end of the survey. If you choose to enter the draw, you can click the link at the end of the survey, which will take you to another webpage. You will be asked to provide your name and your email address to be entered into the draw. Your name and email address will not be linked to your survey response, and there is no way to connect your survey response to your name and email address.
If you win, the gift card will be mailed to your institution address (as found on your institution’s website).

3. There is no further follow-up after you complete the online survey.

STUDY RESULTS

The results of this study will be reported in a graduate thesis and may also be published in a journal article and books. The main study findings will be emailed to the director of your nursing program for further dissemination to members of your nursing faculty.

POTENTIAL RISKS OF THE STUDY

There are no potential risks associated with participating in this study.

POTENTIAL BENEFITS OF THE STUDY

There are no potential or actual personal benefits associated with participating in this study. However, in the future, information gathered from this study may help improve baccalaureate nursing education when utilizing HFS.

RESEARCH USE AND CONFIDENTIALITY

Your confidentiality will be respected. No personal identifier will be gathered in the survey. If you choose to participate in the gift card draw, your name and email address will be collected, and the information encrypted and kept solely on the computers of the graduate student, Peterson Masigan. All data gathered will be kept on a computer hard disk. The computer, as well as the data file, will both be password protected. Only the research team will have access to the password-protected computer and study data file throughout the study. As a research participant, you will not be identified by name in any reports of the completed study.

PAYMENT

You will not be paid to take part in this study. However, you will have an opportunity to enter a draw to win a $25.00 gift card (Starbucks/Tim Hortons/iTunes) if you complete the survey. You can complete the survey but not enter the draw if you wish. If you would like to enter the draw, please refer to ‘Study Procedures – number 2’ on the second page of this letter for more information.

CONTACT FOR FURTHER INFORMATION ABOUT THE STUDY

If you have any questions or concerns about the study, or the consent process, please contact the principal investigator or one of the co-investigators. The names and telephone numbers are listed at the top of the first page of this letter.

CONTACT FOR CONCERNS OR COMPLAINTS

If you have any concerns or complaints about your rights as a research participant and/or your experiences while participating in this study, contact the Research Participant Complaint Line in the UBC Office of Research Ethics at 604-822-8598, or if long distance, e-mail RSIL@ors.ubc.ca or call toll free 1-877-822-8598.

CONSENT

Your participation in this study is entirely voluntary and you may refuse to participate or withdraw from the study or any element of the activities listed above at any time without jeopardy to your employment.

By completing the online survey, it will be assumed that you consent to participate in this study.

SURVEY LINK

If you would like to participate in this study please complete the survey at the following link: http://fluidsurveys.com/surveys/bernie-k/assessment-methods-in-nursing-simulation/

The survey will be open until April 30, 2015.

Yours faithfully,

Peterson Masigan
Graduate Student

Appendix C: Follow-up Email Contact Letter

Dear Colleague,

My name is Peterson Masigan, a graduate student completing my thesis for my Master of Science in Nursing degree at the University of British Columbia.
Two weeks ago, I sent you a ‘Letter of Information’ requesting your consideration to take part in my study, which aims to explore the methods used by professional nurse educators to assess students’ clinical competence when utilizing High Fidelity Simulation (HFS) as part of their teaching. If you have already participated, I thank you for your support. Please disregard this letter.

If you have not yet participated, I would like to remind you of this opportunity to participate if you are interested. The survey will close after April 30, 2015.

The online survey will ask nursing faculty about their experiences with assessment using HFS and will take approximately 20 minutes to complete.

There will be no personal identifiers required in the survey. Completion of the survey implies agreement to participate and your understanding of the research study. As a token of appreciation, those who complete the survey will be given an option to enter a draw for a $25.00 gift card of their choice (Starbucks/Tim Hortons/iTunes – 4 gift cards in total). A link at the end of the survey will take you to a different webpage where you can enter your name and email address. There is no way to link this information to your survey response.

Please see the attached information letter for details and the link to the survey. If you are interested in learning more about this study please do contact me, Peterson Masigan, or my supervisory committee chair, Dr. Bernie Garrett. Copies of the completed work will be distributed to your nursing department director/chair for distribution.

Thank you for considering taking part in this study.

Best regards,

Peterson Masigan, BSN, RN
Graduate Student, Master of Science in Nursing
University of British Columbia

Appendix D: Permission Letter to Adapt Data Collection Tool From Dr. Wendy Nehring

[The permission letter is reproduced as an image in the original document.]

Appendix E: Online Survey

Assessment Methods in Nursing Simulation

Consent to Participate

Thank you for considering to participate in this study. This study aims to explore the methods used by professional nurse educators to assess students’ clinical competence when utilizing High-Fidelity Simulation as part of their teaching. This study and the collected information are part of a thesis for a graduate degree. This online survey will ask you about your experience with High-Fidelity Simulation as a nursing faculty. This survey will take approximately 20 minutes to complete. There will be no further follow-up after completing this survey. There will be no personal identifier collected in the survey. There is no way to connect your response in the survey to you or your school. You can, at any time, change your mind about completing this survey. Just close the browser window before you reach the end of the survey. Your response will only be collected if you complete the entire survey. Completion of the survey confirms your agreement to participate and your understanding of the research study.

You must click "I Consent" in order to complete the survey
• I Consent

Inclusion & Exclusion Criteria

Are you a Nurse Educator currently teaching in an undergraduate baccalaureate nursing program?
• Yes
• No

Have you been teaching for at least a year as a professional educator?
• Yes
• No

Do you use High-Fidelity Simulation in the course(s) you teach? (Hover on blue text for definition)
• Yes
• No

Do you have previous experience (at least 1 semester/term) using High-Fidelity Simulation as a teaching strategy? (Hover on blue text for definition)
• Yes
• No
Program Use of High-Fidelity Simulation

What type of baccalaureate nursing programs does your institution offer? (Check all that apply)
• 3 or 4 year BSN program (No minimum college/university prerequisite credits or previous undergraduate degree required)
• Advanced Standing/Accelerated BSN programs (minimum college/university prerequisite credits or previous undergraduate degree required)
• Bridge-in Licensed Practical Nurse program
• Refresher/Re-Entry program
• Other, please specify... ______________________

On average, how many hours of High-Fidelity Simulation are baccalaureate nursing students exposed to in your program? If you don't know, please type 'unknown'. (Hover over blue text for definition)
______________________

How many faculty members have specialized training in the use of High-Fidelity Simulation for baccalaureate nursing education? If you don't know, please type 'unknown'. (Hover over blue text for definition)
______________________

What clinical areas of simulation scenarios are utilized in your nursing program? (Check all that apply)
• Population Health
• Extended Care
• Medical
• Surgical
• Obstetrics
• Neonatology
• Pediatrics
• Community Health
• Mental Health
• Critical Care
• Other, please specify... ______________________

Theoretical Context, and Challenges with High-Fidelity Simulation

What educational theories guide your choice of assessment tools?
______________________

Did you have input into the assessment tool(s) selected/used for High-Fidelity Simulation? (Hover over blue text for definition)
• Yes
• No

What assessment-related considerations should be taken into account when selecting assessment tools to use in High-Fidelity Simulation activities? (Hover over blue text for definition)
______________________

What evidence do you use to base your assessment decisions in assessing the competence of a particular student when using High-Fidelity Simulation activities? (Hover over blue text for definition)
______________________

What are some of the challenges you have encountered in assessing student's clinical competence when using High-Fidelity Simulation activities? (Hover over blue text for definition)
______________________

Assessment of Students

How often are students formatively assessed with regard to achieving their learning outcomes when using High-Fidelity Simulation scenarios as a teaching tool? (Hover over blue text for definition)
• Never
• After every scenario (if more than one scenario per class)
• After every class
• Once a week
• Other, please specify... ______________________

How often are students summatively assessed with regard to achieving their learning outcomes when using High-Fidelity Simulation scenarios as a teaching tool? (Hover over blue text for definition)
• Never
• Every two weeks
• Every three weeks
• Every month
• Every two months
• Every three months
• Once per term/semester
• Other, please specify... ______________________

Is your High-Fidelity Simulation assessment framework competency-based? (Hover over blue text for definition)
• Yes
• No

Competency-Based Assessment Tools (Hover on blue text for definition)

Indicate the origin of these assessment tools and if they have been validated (Check all that apply):
• Created within my institution, tested for validity
• Created within my institution, untested for validity
• Created by another institution, tested for validity
• Created by another institution, untested for validity
• From the literature, tested for validity
• From the literature, untested for validity
• Don't know
• Other, please specify... ______________________
Please provide the name(s) of the assessment tools used, and indicate with a ‘V’ after the name if tested for validity, or ‘D’ if validity testing is unknown:
______________________

Other Assessment Tools

Indicate the origin of these assessment tools and if they have been validated (Check all that apply):
• Created within my institution, tested for validity
• Created within my institution, untested for validity
• Created by another institution, tested for validity
• Created by another institution, untested for validity
• From the literature, tested for validity
• From the literature, untested for validity
• Don't know
• Other, please specify... ______________________

Please provide the name(s) of the assessment tools used, and indicate with a ‘V’ after the name if tested for validity, or ‘D’ if validity testing is unknown:
______________________

Educator Demographics

What is your gender?
• Woman
• Man
• Other
• Prefer not to answer

What is your year of birth (YYYY)?
______________________

What is the highest level of education you have completed?
• Associate (Diploma)
• Bachelors (Nursing)
• Bachelors (Other) - please specify ______________________
• Post-Bachelors Certification (Nursing)
• Post-Bachelors Certification (Other) - please specify ______________________
• Masters (Nursing)
• Masters (Other) - please specify ______________________
• PhD (Nursing)
• PhD (Other) - please specify ______________________

How many years have you been practicing as a Registered Nurse (RN)?
______________________

How many years have you been teaching in a BSN program?
______________________

How many years have you been using High Fidelity Simulation in your practice as an educator?
______________________

Please describe your training related to the use of High Fidelity Simulation as a teaching tool.
______________________

Do you have any comments regarding the assessment of competence using High Fidelity Simulation?
______________________

Do you have any feedback/comments about this survey?
______________________
