@prefix vivo: <http://vivoweb.org/ontology/core#> .
@prefix edm: <http://www.europeana.eu/schemas/edm/> .
@prefix ns0: .
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .
vivo:departmentOrSchool "Education, Faculty of"@en, "Educational Studies (EDST), Department of"@en ;
edm:dataProvider "DSpace"@en ;
ns0:degreeCampus "UBCV"@en ;
dcterms:creator "Houlahan, Nora"@en ;
dcterms:issued "2018-10-18T19:39:52Z"@en, "2018"@en ;
vivo:relatedDegree "Doctor of Education - EdD"@en ;
ns0:degreeGrantor "University of British Columbia"@en ;
dcterms:description """Police training is traditionally delivered in a didactic, para-military style that contrasts with modern-day public expectations of patrol-level police officers. The predominant methods of instruction and assessment for police recruits remain lecture-based and memorization-driven. In British Columbia, all municipal, transit, and tribal police recruits are trained at the Justice Institute of British Columbia (JIBC) Police Academy. In 2016, the JIBC Police Academy implemented a recruit-training program that is centred on the development and assessment of the Police Sector Council (PSC) National Framework of Constable Competencies. The core aspects of this program include: integrated delivery of materials focused around common patrol-level calls, application and performance through case-based and scenario-based learning activities, development of individualized training plans with instructors mentoring recruits over the course of training, performance-based assessment exam scenarios, and assessment portfolios at the end of each component of training. This is the first police recruit training program in Canada to directly integrate the PSC competencies. This project used a quantitative approach to evaluate the first component (Block I) of the new training delivery model through surveying recruits and their Field Training Officers (FTOs) from one class trained in the old lecture-based delivery model and two classes trained in the new competency-based delivery model. 
The survey used the PSC constable competencies as the reference point and, for each of the nine core competencies, asked about the recruits’ ability and how well their Block I training prepared them for Block II. Recruits in the lecture-based delivery model rated their ability significantly higher than those from the competency-based delivery model in: adaptability, ethical accountability, organizational awareness, problem solving, risk management, stress tolerance, and teamwork. No significant difference in how FTOs rated recruits in the lecture-based and competency-based delivery models was identified. Analysis of the comments indicates the recruits in the lecture-based delivery model may have a less robust understanding of the role of a patrol-level police officer due to their limited exposure to scenarios and the lack of formative feedback on their performance, and may over-estimate their own ability. The impacts of organizational cynicism and change management are included in the discussion."""@en ;
edm:aggregatedCHO "https://circle.library.ubc.ca/rest/handle/2429/67610?expand=metadata"@en ;
skos:note "EVALUATION OF A COMPETENCY-BASED EDUCATION FRAMEWORK FOR POLICE RECRUIT TRAINING IN BRITISH COLUMBIA by Nora Houlahan B.Sc., The University of Guelph, 2000 M.Sc., The University of British Columbia, 2004 A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF EDUCATION in THE FACULTY OF GRADUATE AND POSTDOCTORAL STUDIES (Educational Leadership and Policy) THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver) October 2018 © Nora Houlahan, 2018
The following individuals certify that they have read, and recommend to the Faculty of Graduate and Postdoctoral Studies for acceptance, the dissertation entitled: Evaluation of a Competency-based Education Framework for Police Recruit Training in 
British Columbia, submitted by Nora Houlahan in partial fulfillment of the requirements for the degree of Doctor of Education in Educational Leadership and Policy. Examining Committee: Don Fisher, Educational Studies, Supervisor; Tom Sork, Educational Studies, Supervisory Committee; Steve Schnitzer, Police Academy, Supervisory Committee; Alison Taylor, Educational Studies, University 
Examiner; Penney Clark, Curriculum & Pedagogy, University Examiner.
Abstract
Police training is traditionally delivered in a didactic, para-military style that contrasts with modern-day public expectations of patrol-level police officers. The predominant methods of instruction and assessment for police recruits remain lecture-based and memorization-driven. In British Columbia, all municipal, transit, and tribal police recruits are trained at the Justice Institute of British Columbia (JIBC) Police Academy. In 2016, the JIBC Police Academy implemented a recruit-training program that is centred on the development and assessment of the Police Sector Council (PSC) National Framework of Constable Competencies. The core aspects of this program include: integrated delivery of materials focused around common patrol-level calls, application and performance through case-based and scenario-based learning activities, development of individualized training plans with instructors mentoring recruits over the course of training, performance-based assessment exam scenarios, and assessment portfolios at the end of each component of training. This is the first police recruit training program in Canada to directly integrate the PSC competencies. 
This project used a quantitative approach to evaluate the first component (Block I) of the new training delivery model through surveying recruits and their Field Training Officers (FTOs) from one class trained in the old lecture-based delivery model and two classes trained in the new competency-based delivery model. The survey used the PSC constable competencies as the reference point and, for each of the nine core competencies, asked about the recruits’ ability and how well their Block I training prepared them for Block II. Recruits in the lecture-based delivery model rated their ability significantly higher than those from the competency-based delivery model in: adaptability, ethical accountability, organizational awareness, problem solving, risk management, stress tolerance, and teamwork. No significant difference in how FTOs rated recruits in the lecture-based and competency-based delivery models was identified. Analysis of the comments indicates the recruits in the lecture-based delivery model may have a less robust understanding of the role of a patrol-level police officer due to their limited exposure to scenarios and the lack of formative feedback on their performance, and may over-estimate their own ability. The impacts of organizational cynicism and change management are included in the discussion.
Lay Summary
In September of 2016, the JIBC Police Academy implemented a new competency-based delivery format for the municipal police recruit training program in British Columbia. In this format, training moved away from a traditional lecture-based and memorization-driven model to one that focuses on application and performance and is aligned with the Police Sector Council national framework of constable competencies. The PSC competencies are the only nationally recognized standard for policing in Canada, and this program marks the first time nationally that the competencies have been integrated into police recruit training. 
This project evaluates the changes in the program delivery model by using surveys to compare recruit ability and preparedness for their field training in one class of recruits from the lecture-based delivery model and two classes of recruits from the competency-based delivery model.
Preface
The identification and design of the research program for this project were conducted entirely by me. I performed all aspects of the research and analysis of the research data. With the permission of my supervisor, I consulted with a graduate student representative from the University of British Columbia (UBC) Department of Statistics Short Term Consulting Service on the statistical analysis of the data. This representative provided advice on the approach to the analysis and the statistical tests to use. This project required ethics approval from both the University of British Columbia and the Justice Institute of British Columbia (JIBC). Ethics approval was obtained from the UBC Behavioural Research Ethics Board under certificate H16-01401 and from the JIBC Ethics Review Committee under certificate JIBBCER2016-10-02-CBEF.
Table of Contents
Abstract ..... iii
Lay Summary ..... v
Preface ..... vi
Table of Contents ..... vii
List of Tables ..... xiv
List of Figures ..... 
xxii
List of Abbreviations ..... xxv
Acknowledgements ..... xxvi
Dedication ..... xxvii
Chapter 1: Introduction ..... 1
1.1 My Perspective ..... 2
1.2 Theoretical Framework ..... 8
1.3 The Context of Police Training in BC ..... 10
1.4 Summary of Recruit Training Delivery Models ..... 14
1.5 Research Question - Program Implementation and Evaluation ..... 16
1.6 Summary ..... 17
Chapter 2: Literature Review ..... 19
2.1 Research on Police Training ..... 19
2.2 Police Competencies in Canada ..... 29
2.3 Competency-Based Education ..... 32
2.3.1 Overview of competency-based education ..... 32
2.3.2 Defining Competency-Based Education Terminology ..... 34
2.3.3 Determining Competencies ..... 36
2.3.4 Elements of Competency-Based Learning ..... 37
2.4 The Learning Process ..... 39
2.5 Assessment of Competencies ..... 44
2.6 Criticisms of Competency-Based Education ..... 50
2.7 Summary ..... 52
Chapter 3: Program Description ..... 54
3.1.1 Recruit Training Program Structure Prior to Delivery Model Changes ..... 54
3.2 Design and Development ..... 58
3.3 Proposed Recruit Training Program Structure Delivery Model Changes ..... 60
3.4 The New Program Structure ..... 65
3.4.1 Block I ..... 69
3.4.1.1 Weekly pre-reading and quizzes ..... 71
3.4.1.2 Classroom case-based application ..... 73
3.4.1.3 Directed study time ..... 75
3.4.1.4 Practical scenarios ..... 76
3.4.1.5 Practical Scenario Acting ..... 76
3.4.1.6 Practical Scenario Self-Assessment and Report Writing ..... 77
3.4.1.7 COPS Days ..... 79
3.4.1.8 Skills Development – Use of Force, Firearms, and Driving ..... 79
3.4.1.9 Assessment ..... 80
3.4.2 Block II ..... 82
3.4.3 Block III ..... 85
3.4.3.1 Pre-reading and quizzes ..... 85
3.4.3.2 Teaching sims ..... 86
3.4.3.3 Longitudinal Cases ..... 86
3.4.3.4 Advanced Operational Policing Skills (AOPS) days ..... 87
3.4.3.5 Mentoring Junior Recruits ..... 87
3.4.3.6 Assessment ..... 88
3.4.4 Block IV ..... 89
3.5 Development ..... 89
3.6 Implementation ..... 92
3.7 Delivered Curriculum ..... 93
3.7.1 Class 152 Case Studies ..... 94
3.7.2 Class 153 Case Studies ..... 95
3.7.3 Practical Scenarios ..... 96
3.7.4 Mentoring ..... 98
3.7.5 Directed Study ..... 100
3.8 Summary ..... 101
Chapter 4: Methodology ..... 102
4.1 Research Design ..... 102
4.1.1 Program Evaluation Framework ..... 102
4.1.2 Evaluation Design and Methodology ..... 105
4.1.2.1 Survey design ..... 110
4.1.2.2 Survey Administration and Timeline ..... 113
4.1.2.3 Statistical Analysis ..... 116
4.1.2.4 Qualitative Data Analysis ..... 117
4.2 Project Narrative ..... 117
4.2.1 Changes to Project Design ..... 120
4.3 Summary ..... 
123
Chapter 5: Results ..... 124
5.1 Descriptive Survey Results ..... 124
5.1.1 Lecture-based delivery model: Class 151 ..... 124
5.1.1.1 Demographic Characteristics of 151 FTOs ..... 126
5.1.2 Competency-based delivery model: Class 152 ..... 128
5.1.2.1 Demographic Characteristics of 152 FTOs ..... 130
5.1.3 Competency-based delivery model: Class 153 ..... 132
5.1.3.1 Demographic Characteristics of 153 FTOs ..... 134
5.1.4 Competency-based delivery model: Exam Assessors ..... 136
5.2 Quantitative Survey Analysis ..... 138
5.2.1 Differences in perception before and after Block II experience ..... 142
5.2.2 Comparison within classes ..... 144
5.2.2.1 Lecture-based delivery model ..... 145
5.2.2.1.1 Recruit characteristics ..... 145
5.2.2.1.2 FTO characteristics ..... 146
5.2.2.1.3 Recruit Characteristics on FTO Responses ..... 147
5.2.2.2 Competency-based delivery model ..... 148
5.2.2.2.1 Recruit characteristics ..... 148
5.2.2.2.2 FTO characteristics ..... 149
5.2.2.2.3 Recruit characteristics on FTO responses ..... 149
5.2.3 Comparison across classes ..... 150
5.2.3.1 Global comparison across classes ..... 151
5.2.3.2 Adaptability ..... 154
5.2.3.3 Ethical Accountability ..... 159
5.2.3.4 Interactive Communication ..... 164
5.2.3.5 Organizational Awareness ..... 168
5.2.3.6 Problem Solving ..... 173
5.2.3.7 Risk Management ..... 178
5.2.3.8 Stress Tolerance ..... 183
5.2.3.9 Teamwork ..... 188
5.2.3.10 Written Skills ..... 193
5.2.4 Analysis of Recruit Responses Compared with FTO responses ..... 198
5.2.4.1 Recruit and FTO Responses – Lecture-based delivery model ..... 199
5.2.4.2 Recruit and FTO Responses – Competency-based delivery model ..... 202
5.2.5 Analysis of Assessor Responses ..... 205
5.2.6 Qualitative Analysis of Survey Comments ..... 210
5.2.6.1 Lecture-based delivery model: Recruit Survey 1 ..... 210
5.2.6.2 Lecture-based delivery model: Recruit Survey 2 ..... 213
5.2.6.3 Lecture-based delivery model: FTO Survey ..... 215
5.2.6.4 Competency-based delivery model: Recruit Survey 1 ..... 216
5.2.6.4.1 Class 152 – Recruit Survey 1 ..... 216
5.2.6.4.2 Class 153 – Recruit Survey 1 ..... 219
5.2.6.5 Competency-based delivery model: Recruit Survey 2 ..... 220
5.2.6.5.1 Class 152 – Recruit Survey 2 ..... 220
5.2.6.5.2 Class 153 – Recruit Survey 2 ..... 222
5.2.6.6 Competency-based delivery model: FTO Survey ..... 223
5.2.6.6.1 Class 152 – FTO Survey ..... 223
5.2.6.6.2 Class 153 – FTO Survey ..... 225
5.2.6.7 Competency-based delivery model: Assessor Survey ..... 226
5.3 Focus Group Analysis ..... 227
5.4 Summary ..... 228
Chapter 6: Discussion ..... 230
6.1 Survey Results ..... 231
6.1.1 Recruit Ability and Preparedness ..... 231
6.1.2 Course Content and Structure ..... 235
6.2 Faculty Development ..... 243
6.3 Organizational Cynicism and Organizational Change ..... 245
6.4 Changes Following Class 152 ..... 253
6.5 Summary ..... 254
Chapter 7: Conclusion ..... 256
7.1 Lessons Learned ..... 257
7.2 Limitations ..... 261
7.3 Recommendations ..... 261
7.3.1 Designing a Major Curriculum Change ..... 262
7.3.2 Implementing Competency-Based Education ..... 264
7.3.3 Conducting Program Evaluation within a Major Curriculum Change ..... 267
7.3.4 Recommendations for Practitioner Research ..... 269
7.4 Conclusion ..... 
270
References ..... 273
Appendices ..... 292
Appendix A - Template Schedule for Competency-Based Delivery Model of Recruit Training ..... 293
A.1 Block I Template Schedule ..... 293
A.2 Block III Template Schedule ..... 306
Appendix B - Surveys ..... 314
B.1 Recruit Survey ..... 315
B.2 FTO Survey ..... 336
B.3 Assessor Survey ..... 357
Appendix C - Consistency Tables: Comparison Within Classes ..... 374
C.1 Lecture-based delivery model - Recruit characteristics ..... 374
C.2 Lecture-based delivery model - FTO characteristics ..... 378
C.3 Lecture-based delivery model - Recruit Characteristics on FTO Responses ..... 382
C.4 Competency-based delivery model - Recruit characteristics ..... 386
C.5 Competency-based delivery model - FTO characteristics ..... 
390
C.6 Competency-based delivery model - Recruit characteristics on FTO responses ..... 395
List of Tables
Table 2-1 Police Sector Council core Constable competencies with proficiency levels 1 and 2 (Police Sector Council, 2011) ..... 32
Table 2-2 Summarization of the stages of adult skill development (Dreyfus, 2004) related to competency in medical practitioners (Carraccio et al., 2005) and the level of supervision required (ten Cate and Scheele, 2007) ..... 44
Table 3-1 Comparison of program elements 10 years before the program change proposal (2005), before change implementation (2015), and in the new delivery model (2016) ..... 65
Table 3-2 Expected progression through proficiency levels 1 and 2 in each of the core Constable competencies ..... 82
Table 4-1 Summary of Kirkpatrick model of program evaluation and modifications from Alliger et al. (1997) and Wang and Wilcox (2006) that influenced the program evaluation design of this study ..... 104
Table 4-2 Summary of program evaluation model from Table 4-1 with data sources from the project design ..... 107
Table 5-1 Class 151 demographic characteristics and survey response rates ..... 125
Table 5-2 Education levels of Class 151 prior to police academy ..... 126
Table 5-3 Previous policing experience of Class 151 prior to police academy ..... 126
Table 5-4 Demographic characteristics for FTO respondents for Class 151 ..... 127
Table 5-5 Characteristics of recruits trained by FTO respondents in Class 151 ..... 128
Table 5-6 Class 152 demographic characteristics and survey response rates ..... 128
Table 5-7 Education levels of Class 152 respondents prior to police academy ..... 129
Table 5-8 Previous policing experience of Class 152 prior to police academy ..... 129
Table 5-9 Demographic characteristics for FTO respondents for Class 152 ..... 131
Table 5-10 Characteristics of recruits trained by FTO respondents in Class 152 ..... 132
Table 5-11 Class 153 demographic characteristics and survey response rates ..... 132
Table 5-12 Education levels of Class 153 respondents prior to police academy ..... 133
Table 5-13 Previous policing experience of Class 153 prior to police academy ..... 134
Table 5-14 Demographic characteristics for FTO respondents for Class 153 ..... 135
Table 5-15 Characteristics of recruits trained by FTO respondents in Class 153 ..... 136
Table 5-16 Demographic characteristics of competency-based exam assessors ..... 137
Table 5-17 Mann-Whitney U test results comparing distribution of responses to Recruit Survey 1 and Recruit Survey 2 between Class 152 and 153 ..... 139
Table 5-18 Mann-Whitney U test results comparing distribution of responses to the FTO survey and the difference between Recruit Survey 1 and the FTO survey between Class 152 and 153 ..... 141
Table 5-19 Differences between recruit perceptions before and after Block II training experience ..... 143
Table 5-20 Global mean ratings for overall ability and overall preparation from Recruit Survey 1 and FTO responses clustered across training delivery methods ..... 152
Table 5-21 Mann-Whitney U Test of overall ability and preparedness from Recruit Survey 1 and FTO survey, grouped across training type ..... 154
Table 5-22 Mean ratings for ability and preparation in the adaptability competency from Recruit Survey 1 and FTO responses clustered across training delivery methods ..... 156
Table 5-23 Mann-Whitney U Test of overall ability and preparedness from Recruit Survey 1 and FTO survey, grouped across training type ..... 158
Table 5-24 Cross-tabulation report from Recruit Survey 1 for ability in the adaptability competency area ..... 158
Table 5-25 Mean ratings for ability and preparation in the ethics competency from Recruit Survey 1 and FTO responses clustered across training delivery methods 
.......................161 Table 5-26 Mann Whitney U Test of overall ability and preparedness from Recruit Survey 1 and FTO survey, grouped across training type ................................................................163 Table 5-27 Cross-tabulation report from Recruit Survey 1 for ability in the ethics competency area...............................................................................................................163 Table 5-28 Mean ratings for ability and preparation in the communication competency from Recruit Survey 1 and FTO responses clustered across training delivery methods ..........166 Table 5-29 Mann Whitney U Test of overall ability and preparedness from Recruit Survey 1 and FTO survey, grouped across training type ................................................................167 Table 5-30 Mean ratings for ability and preparation in the organizational awareness competency from Recruit Survey 1 and FTO responses clustered across training delivery methods ............................................................................................................................169 Table 5-31 Mann Whitney U Test of ability and preparedness for organizational awareness competency from Recruit Survey 1 and FTO survey, grouped across training type .......171 Table 5-32 Cross-tabulation report from Recruit Survey 1 for ability (top) and preparedness (bottom) in the organizational awareness competency area ............................................172 Table 5-33 Mean ratings for ability and preparation in the problem solving competency from Recruit Survey 1 and FTO responses clustered across training delivery methods ..........175 Table 5-34 Mann Whitney U Test of ability and preparedness for problem solving competency from Recruit Survey 1 and FTO survey, grouped across training type .......177 xvii Table 5-35 Cross-tabulation report from Recruit Survey 1 for ability in the problem solving competency 
area...............................................................................................................177 Table 5-36 Mean ratings for ability and preparation in the risk management competency from Recruit Survey 1 and FTO responses clustered across training delivery methods .180 Table 5-37 Mann Whitney U Test of ability and preparedness for risk management competency from Recruit Survey 1 and FTO survey, grouped across training type .......182 Table 5-38 Cross-tabulation report from Recruit Survey 1 for ability in the risk management competency area...............................................................................................................182 Table 5-39 Mean ratings for ability and preparation in the stress tolerance competency from Recruit Survey 1 and FTO responses clustered across training delivery methods ..........185 Table 5-40 Mann Whitney U Test of ability and preparedness for stress tolerance competency from Recruit Survey 1 and FTO survey, grouped across training type .......187 Table 5-41 Cross-tabulation report from Recruit Survey 1 for ability in the stress tolerance competency area...............................................................................................................187 Table 5-42 Mean ratings for ability and preparation in the teamwork competency from Recruit Survey 1 and FTO responses clustered across training delivery methods ..........190 Table 5-43 Mann Whitney U Test of ability and preparedness for teamwork competency from Recruit Survey 1 and FTO survey, grouped across training type ...........................192 Table 5-44 Cross-tabulation report from Recruit Survey 1 for ability in the teamwork competency area...............................................................................................................192 Table 5-45 Mean ratings for ability and preparation in the written skills competency from Recruit Survey 1 and FTO responses clustered across training delivery methods 
..........195 xviii Table 5-46 Mann Whitney U Test of ability and preparedness for written skills competency from Recruit Survey 1 and FTO survey, grouped across training type ...........................197 Table 5-47 Cross-tabulation report from Recruit Survey 1 for ability in the written skills competency area...............................................................................................................197 Table 5-48 Mann Whitney U test results for lecture-based delivery model for ability and preparedness overall and for each of the competencies grouped across recruit/FTO responses ..........................................................................................................................201 Table 5-49 Cross-tabulation analysis of recruit ability in the risk management (top) and stress tolerance (bottom) competency areas grouped by recruit/FTO responses .............202 Table 5-50 Mann Whitney U test results for competency-based delivery model for ability and preparedness overall and for each of the competencies grouped across recruit/FTO responses ..........................................................................................................................205 Table 5-51 Cross-tabulation analysis of recruit preparedness in the adaptability (top) and interactive communication (bottom) competency areas grouped by recruit/FTO responses..........................................................................................................................................205 Table 5-52 Summary of mean and standard deviation of assessors’ ranking of recruits in the competency-based delivery model ...................................................................................207 Table 5-53 Kruskal-Wallis test results for ability and preparedness overall and in each of the competencies grouped across recruit, FTO, or assessor ..................................................209 Table 5-54 Recruit comments and coding from 
Class 151, lecture-based delivery model, Survey 1 ...........................................................................................................................211 Table 5-55 Recruit comments and coding from Class 151, lecture-based delivery model, Survey 2 ...........................................................................................................................213 xix Table 5-56 Recruit comments and coding from Class 151, lecture-based delivery model, FTO survey ......................................................................................................................215 Table 5-57 Recruit comments and coding from Class 152, competency-based delivery model, Survey 1 ...............................................................................................................217 Table 5-58 Recruit comments and coding from Class 153, competency-based delivery model, Survey 1 ...............................................................................................................219 Table 5-59 Recruit comments and coding from Class 152, competency-based delivery model, Survey 2 ...............................................................................................................221 Table 5-60 FTO comments and coding from Class 152, competency-based delivery model, FTO survey ......................................................................................................................224 Table 5-61 Recruit comments and coding from Class 153, competency-based delivery model, FTO survey ..........................................................................................................226 Table 6-1 Summary of changes made to the recruit training program since Classes 152 and 153....................................................................................................................................254 Table C- 1 Mean values and Mann-Whitney U test results grouped across recruit genders .374 
Table C- 2 Mean and Kruskal-Wallis Test values grouped by recruit age range .................375 Table C- 3 Mean and Kruskal-Wallis Test values grouped across recruit post-secondary education level .................................................................................................................376 Table C- 4 Mean values and Mann-Whitney U Test values grouped by recruit previous policing experience ..........................................................................................................377 Table C- 5 Mean and Mann-Whitney U Test values grouped by FTO gender.....................378 Table C- 6 Mean and Kruskal Wallis Test values grouped across FTO years of service .....379 xx Table C- 7 Mean values and Kruskal-Wallis Test values grouped across FTO years as FTO..........................................................................................................................................380 Table C- 8 Mean and Kruskal-Wallis Test values grouped across FTO number of recruits trained ..............................................................................................................................381 Table C- 9 Mean and Mann-Whitney U Test values for FTO responses grouped by recruit gender ...............................................................................................................................382 Table C- 10 Mean and Kruskal-Wallis Test values for FTO responses grouped by recruit age..........................................................................................................................................383 Table C- 11 Mean and Kruskal-Wallis Test values for FTO responses grouped by recruit post-secondary education .................................................................................................384 Table C- 12 Mean and Mann-Whitney U Test values for FTO responses grouped by recruit previous police experience 
...............................................................................................385 Table C- 13 Mean and Mann-Whitney U Test values grouped across recruit gender ..........386 Table C- 14 Mean and Kruskal-Wallis test values grouped across recruit age category .....387 Table C- 15 Mean and Kruskal-Wallis Test values grouped across recruit post-secondary education ..........................................................................................................................388 Table C- 16 Mean and Kruskal-Wallis Test values grouped across recruit previous policing experience ........................................................................................................................389 Table C- 17 Mean and Mann-Whitney U Test values grouped across FTO gender .............390 Table C- 18 Mean and Kruskal-Wallis Test values grouped across FTO age range ............391 Table C- 19 Mean and Kruskal-Wallis Test values grouped across FTO years of service ..392 Table C- 20 Mean and Kruskal-Wallis Test values grouped across FTO years as field trainer..........................................................................................................................................393 xxi Table C- 21 Mean and Kruskal-Wallis Test values grouped across FTO number of recruits trained ..............................................................................................................................394 Table C- 22 Mean and Mann-Whitney U Test FTO responses grouped across recruit gender..........................................................................................................................................395 Table C- 23 Mean and Kruskal-Wallis test values FTO responses grouped across recruit age category ............................................................................................................................396 Table C- 24 Cross-tabulation report of FTO responses grouped across recruit age 
category..........................................................................................................................................397 Table C- 25 Mean and Kruskal-Wallis Test values FTO responses grouped across recruit post-secondary education .................................................................................................398 Table C- 26 Mean and Kruskal-Wallis Test values FTO responses grouped across recruit previous policing experience ...........................................................................................399 xxii List of Figures Figure 1-1 Progression through police recruit training in British Columbia ..........................14 Figure 2-1 Overlay of levels of learning, assessment tools, and ability assessed (Shumway and Harden, 2003) with concepts of reflective practice for learning (Creuss et al., 2005)............................................................................................................................................49 Figure 4-1 Project timeline for recruit survey administration for classes 151 (pre-intervention, lecture-based), 152 and 153 (post-intervention, competency-based) ..............................114 Figure 5-1 Global mean ratings for overall ability (blue) and overall preparation (red) from Recruit Survey 1 clustered across training delivery methods ..........................................151 Figure 5-2 Global mean ratings for overall ability (blue) and overall preparation (red) from FTO survey clustered across training delivery methods ..................................................152 Figure 5-3 Mean ratings for ability in the Adaptability competency area from Recruit Survey 1 (blue) and the FTO survey (red) clustered across delivery method ..............................155 Figure 5-4 Mean ratings for preparation in the Adaptability competency area from Recruit Survey 1 (blue) and the FTO survey (red) clustered across training method ..................155 Figure 
5-5 Mean ratings for ability in the Ethics competency area from Recruit Survey 1 (blue) and the FTO survey (red) clustered across delivery method .................................159 Figure 5-6 Mean ratings for preparation in the Ethics competency area from Recruit Survey 1 (blue) and the FTO survey (red) clustered across training method .................................160 Figure 5-7 Mean ratings for ability in the communication competency area from Recruit Survey 1 (blue) and the FTO survey (red) clustered across delivery method .................165 Figure 5-8 Mean ratings for preparation in the communication competency area from Recruit Survey 1 (blue) and the FTO survey (red) clustered across training method ..................165 xxiii Figure 5-9 Mean ratings for ability in the organizational awareness competency area from Recruit Survey 1 (blue) and the FTO survey (red) clustered across delivery method .....168 Figure 5-10 Mean ratings for preparation in the organizational awareness competency area from Recruit Survey 1 (blue) and the FTO survey (red) clustered across training method..........................................................................................................................................169 Figure 5-11 Mean ratings for ability in the problem solving competency area from Recruit Survey 1 (blue) and the FTO survey (red) clustered across delivery method .................174 Figure 5-12 Mean ratings for preparation in the problem solving competency area from Recruit Survey 1 (blue) and the FTO survey (red) clustered across training method .....174 Figure 5-13 Mean ratings for ability in the risk management competency area from Recruit Survey 1 (blue) and the FTO survey (red) clustered across delivery method .................179 Figure 5-14 Mean ratings for preparation in the risk management competency area from Recruit Survey 1 (blue) and the FTO survey (red) clustered across training method .....179 Figure 5-15 Mean ratings for 
ability in the stress tolerance competency area from Recruit Survey 1 (blue) and the FTO survey (red) clustered across delivery method .................184 Figure 5-16 Mean ratings for preparation in the stress tolerance competency area from Recruit Survey 1 (blue) and the FTO survey (red) clustered across training method .....184 Figure 5-17 Mean ratings for ability in the teamwork competency area from Recruit Survey 1 (blue) and the FTO survey (red) clustered across delivery method ..............................189 Figure 5-18 Mean ratings for preparation in the teamwork competency area from Recruit Survey 1 (blue) and the FTO survey (red) clustered across training method ..................189 Figure 5-19 Mean ratings for ability in the written skills competency area from Recruit Survey 1 (blue) and the FTO survey (red) clustered across delivery method .................194 xxiv Figure 5-20 Mean ratings for preparation in the written skills competency area from Recruit Survey 1 (blue) and the FTO survey (red) clustered across training method ..................194 xxv List of Abbreviations AC Assessment Centre ADDIE Assess, Design, Develop, Implement, Evaluate BC British Columbia BCAMCP British Columbia Association of Municipal Chiefs of Police CBL Case-based learning CBRN Chemical, Biological, Radiological, and Nuclear defense CID Crisis Intervention and De-Escalation CPIC Canadian Police Information Centre CTS Course Training Standard DACUM Develop A CurriculUM EdD Educational Doctorate (Degree) FST Field Sobriety Test FTO Field Training Officer HR Human Resources IRD Immediate Rapid Deployment IRP Immediate Roadside Prohibition JIBC Justice Institute of British Columbia K-12 Kindergarten to Grade 12 LAPD Los Angeles Police Department MDT Mobile Data Terminal MHA Mental Health Act OC Oleoresin capsicum (spray) (pepper spray) PBL Problem Based Learning PBLE Problem Based Learning Exercise POPAT Police Officers Physical Abilities Test PRIME Police Records 
Information Management Environment PSB Policing & Security Branch PSC Police Sector Council SBORT Subject Behaviour Officer Response Training SME Subject Matter Expert SoTL Scholarship of Teaching and Learning STEM Science, Technology, Engineering, and Math UBC University of British Columbia UoF Use of Force xxvi Acknowledgements I would like to extend my thanks to the following people: My supervisor, Dr. Donald Fisher, for taking me on as a stranded EdD student, for guidance with the freedom to do my own thing, and for co-teaching the best class of my EdD program. My committee members, Dr. Tom Sork and Mr. Steve Schnitzer, for being a part of this journey with me. Steve, in particular, for bearing the brunt of the political blows and for not wavering in his support for the new model. Mike Massine, for being my only support and talking me down off a ledge more times than I can count during the development of the new curriculum. Steve Hyde for stepping up during implementation and teaching more than is humanly possible to ensure things went as smoothly as possible. Evan Hilchey, my dear friend who I met on our first day of the EdD program for the never-ending support and commiseration, and for all our backpacking adventures. My family, both here and gone, including my dog Pickles, for continuing to help me keep things in perspective. xxvii Dedication For my dad, Joseph Paul Houlahan (March 25, 1945 - September 5, 2010), whose memory at times was the only thing that kept me going in this program. I love you always daddy. Man In The Arena (AKA Daring Greatly) "...It is not the critic who counts; not the man who points out how the strong man stumbles, or where the doer of deeds could have done them better.
The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood; who strives valiantly; who errs, who comes short again and again, because there is no effort without error and shortcoming; but who does actually strive to do the deeds; who knows the great enthusiasms, the great devotions; who spends himself in a worthy cause; who at the best knows in the end the triumph of high achievement, and who at the worst, if he fails, at least fails while daring greatly, so that his place shall never be with those cold and timid souls who neither know victory nor defeat." - Theodore Roosevelt, 1910 Chapter 1: Introduction The focus of this EdD thesis is the development, implementation, and evaluation of a competency-based training model for police recruits in British Columbia. This evaluation study used quantitative methods to analyze survey responses from police recruits and field training officers (FTOs) and compare recruit ability and preparedness for field training in the old lecture-based delivery model and the new competency-based delivery model. The EdD program is a doctoral program intended for professionals in the field of education who are employed while completing the program. The focus of the program is moving from practice to theory and back to practice. Dissertation topics in this program are required to be related to the candidate’s job and contribute to their professional field of practice. When I began working at the JIBC Police Academy my position was Curriculum Developer, but it has since been reclassified to Program Manager in recognition of the higher level of work I was undertaking. In this role I reviewed the existing recruit training curriculum and delivery model, researched and developed a new proposed competency-based delivery model, developed the curriculum materials to implement the new program, oversaw the implementation of the changes, and carried out the evaluation described in this thesis.
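The core quantitative comparison in this evaluation contrasts distributions of ordinal survey ratings between cohorts using Mann-Whitney U tests (as reported in the Chapter 5 tables). As a minimal sketch of how that statistic is computed, assuming hypothetical 5-point Likert ratings that are not the study's data, the calculation can be illustrated in plain Python:

```python
# Minimal sketch of the Mann-Whitney U statistic used to compare Likert-scale
# survey ratings between two recruit cohorts. All ratings below are
# hypothetical illustrations, not values from the study.

def mann_whitney_u(x, y):
    """Return the smaller of the two U statistics, using average ranks for ties."""
    combined = sorted((value, idx) for idx, value in enumerate(x + y))
    ranks = [0.0] * len(combined)
    i = 0
    while i < len(combined):
        j = i
        # Extend j to cover a block of tied values.
        while j + 1 < len(combined) and combined[j + 1][0] == combined[i][0]:
            j += 1
        avg_rank = (i + j) / 2 + 1  # average 1-based rank for the tied block
        for k in range(i, j + 1):
            ranks[combined[k][1]] = avg_rank
        i = j + 1
    rank_sum_x = sum(ranks[: len(x)])
    u_x = rank_sum_x - len(x) * (len(x) + 1) / 2
    u_y = len(x) * len(y) - u_x
    return min(u_x, u_y)

# Hypothetical 5-point Likert self-ratings for one competency area.
lecture_based = [5, 4, 5, 4, 4, 5]
competency_based = [3, 4, 3, 4, 2, 3]
print(mann_whitney_u(lecture_based, competency_based))  # → 3.0
```

A small U relative to the maximum possible value (here, 6 × 6 = 36) indicates the two rating distributions are well separated; statistical packages add a p-value on top of this statistic. In practice an analysis like this would use a library routine such as `scipy.stats.mannwhitneyu` rather than a hand-rolled function.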
This chapter will expand on my perspective in approaching the program evaluation, situate myself within the research, and describe the theoretical framework I used to approach the development and evaluation of the program. I will provide an overview of policing in BC to provide context for the role of the BC Police Academy within the policing community and outline the structure of recruit training. I will briefly summarize the delivery models for recruit training before and after the implementation of the changes and outline the key questions addressed in this thesis. The chapter will conclude by summarizing how each chapter in this document contributes to the overall program evaluation project. 1.1 My Perspective I began my role as the Curriculum Developer for the BC Police Academy Recruit Training program, located at the Justice Institute of British Columbia (JIBC), in April 2013. Prior to this position, I had worked for eight years as the Problem Based Learning (PBL) Program Manager in the Medical Undergraduate program at the University of British Columbia (UBC). In British Columbia, policing is governed by the Policing & Security Branch (PSB) of the Provincial Government. All municipal, transit, and tribal police recruits in the province are trained at the Police Academy. The Police Academy’s annual operating budget is presented in a letter of agreement between PSB and the Police Academy that outlines key deliverables for the fiscal year in return for funding the academy. My hiring and mandate are both derived from the letter of agreement between the Policing & Security Branch and the Police Academy.
The first responsibility outlined in my job description is to “design and develop defensible competency-based curriculum that is aligned with Police Academy and Institute strategic directions and meets applicable PSB standards.” To this end, the primary focus of my work has been mapping the current Recruit Training curriculum to the Police Sector Council National Framework of Constable Competencies, developing the proposal for change to align the program delivery with competency-based education principles, working with subject matter experts to design the educational materials and lesson plans, overseeing the implementation of the new program, and evaluating the program. This program is unique in the Canadian policing context and has garnered much attention from across the country. This job was my first experience with the policing community. I spent much of my first few months observing recruit training classes to learn what and how recruits were taught before I began to try to map the curriculum to the PSC National Framework of Constable Competencies. During this time I also began to read available literature on police training. One of the things that struck me was the relative paucity of peer-reviewed literature on police training or police education. What little information was available was typically limited to in-service training rather than recruit training. The majority of published literature seems to be on police and their interactions with the community, typically from the perspective of the community. Little research has been published for police themselves to use to enhance their training, particularly at the recruit level. Another revelation was the many similarities I began to uncover between medical education, the field where I had previously worked, and police training.
Both fields are concerned with developing communication skills in their learners, are centred on the development of physical and technical skills, and have national frameworks of competencies around which to structure their curriculum. Medical education has been implementing competency-based education for decades, whereas the concept is relatively new to the policing world. As such, I have relied heavily on the literature from medical education, of which there is an abundance, as I mapped and developed the new delivery model for the Recruit Training program. As the PBL Program Manager in the medical program at UBC, I was immersed in problem-based and case-based learning methodology. I worked with Subject Matter Experts (SMEs) to develop and write case material, trained and provided support and guidance for PBL tutors (facilitators), and tutored PBL groups myself. The goal of using PBL as a methodology is twofold: the foundational content material is learned in a format that relates to real-life application, making storage and retrieval easier and increasing motivation to learn; and the small-group, discussion-based format builds communication, facilitation, and teamwork skills in the learners. I left the medical program at a time when there was a shift in educational philosophy, moving away from the PBL format, with its associated emphasis on communication and facilitation skills, towards more of a case-based learning (CBL) format that focuses primarily on the content material. At the time, and still to this day, I believe that this is the wrong move for medical students’ education. I was surprised when, after observing many classes and the teaching methodology used in the Recruit Training program, I began to feel that CBL might be an appropriate format for the curriculum for police recruits. Communication skills are extremely important for a career in policing, arguably as much if not more so than for a career in medicine.
I will expand on the differences between PBL and CBL, and my reasoning behind the chosen methodology, later in this section. While observing recruit training classes, I generated a curriculum map that mapped the existing curriculum against the PSC Constable Competencies. This analysis was done at a high, discipline outcome-based level and required a re-writing of the discipline outcomes for the program. The existing discipline ‘objectives’ were not reflective of the level of achievement expected of recruits, as they focused mainly on recall ability instead of skill acquisition. I consulted with the discipline instructors and rewrote these objectives into discipline outcomes that provide a ‘bigger picture’ overview of what is expected of recruits. I then proceeded to map the constable competencies to the new discipline outcomes and to use the existing PSC Constable task map as a tool to validate the competency map. The result of this analysis was a confirmation that the Recruit Training program is indeed teaching the necessary content to recruits so that they meet the minimum standards upon graduation. Through the mapping analysis, however, it became apparent to me that the recruit training curriculum could be delivered in a way that would more effectively build and assess competency in the recruits. In keeping with the mandate from PSB, it was clear that the program structure needed to be redesigned to fit a competency-based model centred around the National Framework of Constable Competencies. A case-based delivery format, centred around integration of concepts, fits well with the principles of competency-based education and is the foundation for the new delivery model for recruit training. During this analysis of the curriculum, I observed one particular class where three recruits were struggling to meet the expectations of the program. Throughout their training they were offered very little feedback on their progress or suggestions on how to improve.
The recruits were then told, very close to the end of their training, that the instructors had great concerns about their ability to successfully fulfill the job requirements. In one case, a departmental representative had actually travelled from Vancouver Island to the Police Academy to take steps to terminate the recruit's employment. When the representative arrived and was informed that the recruit had not received any feedback on his performance during his Block III training, the planned job action could not take place. This lack of feedback generated frustration for the recruits when they were finally told of the instructors' concerns, for the program instructors, and for the recruits' hiring departments. Many instructors complained about the lack of time in the program for them to work with recruits who were struggling. It also seemed, surprisingly, that the instructors were not equipped with the knowledge or skill to deliver formative feedback to recruits or to help the recruits set goals for improvement. This lack of time and capacity further cemented the notion that the recruit program needed to move to a competency-based model to ensure dedicated time in the curriculum for recruits to build their abilities, particularly in areas where they were struggling. The new delivery model for a case-based and competency-based Recruit Training program evolved from the class observations, literature reviews, curriculum mapping, exit interviews with graduates, and discussions with instructors. This change required a wholesale change in the educational philosophy of the Police Academy. It was not sufficient to update a few lesson plans and claim that the program was meeting the required standards. After the initial proposal was accepted by PSB, I travelled with the Police Academy Director to each of the police departments to provide an overview of the proposed changes.
This outreach was part of our communication strategy for change management, intended to ensure that all departments were aware of the upcoming changes. The proposed changes to the Recruit Training program were universally well received. The two concerns raised by departments in the consultations were the extension of Block II training and the timing of starting a class in January. The extension of Block II was a concern because of the potential strain on departmental FTO resources and the financial implications of paying recruit salaries for a longer training period. The timing of the January class starts was a concern because of the proximity to the end of the fiscal year. There were no objections to the competency-based approach that was explained in these meetings. The actual development of the curriculum, lesson plans, and associated learning materials took place over the next two and a half years. During this time, the Police Academy was still training recruits under the old delivery model and no additional instructional staff were brought on to help with development. When it became apparent that additional support was needed, a contract instructional designer was hired to help revise the manual readings for the topics. That position has since become a full-time staff position. The development of the materials was an exceptionally challenging process. I was leading a major change in a police training environment as a female civilian with no support. I used strategies that I believed would help with change management, involving the instructors in all aspects of the planning and development so that they felt ownership of the material. Often, however, these sessions degenerated into an interrogation, with me at the front of the room defending the changes to several instructors who were quite vocally opposed to the concept.
These meetings also included the person to whom I reported at the time, who was supposedly in favour of the changes but was conspicuously silent throughout all of them. At times these meetings were completely unproductive. I implemented a strategy of breaking the larger group of instructors into smaller groups to isolate the vocal opponents and accomplish some of the development goals. This time of development was perhaps the most challenging of my career. Had I not been absolutely convinced that this change was needed to bring police recruit training in BC to current standards, I would have folded under the constant harassment and bullying I was subjected to. It is exceptionally difficult for me to remove these experiences from my analysis of the program, even though the work climate has now changed considerably. Despite recognizing the change of climate, I find that I still have emotional scars from the process that sometimes make it hard to work within the context of my daily responsibilities. And while the climate in our office at the Police Academy is significantly improved, there is much work remaining to be done both internally and with the departments. This experience has certainly been significantly more challenging than I anticipated and I have learned a great deal about change management. It is within this overall context that I complete my thesis. The research for my EdD focused on the implementation and evaluation of the effectiveness of these changes. 1.2 Theoretical Framework In approaching the design, implementation, and evaluation of the Police Recruit Training program, I drew on the theoretical framework of constructivism. This is perhaps not surprising, given my background in problem-based learning, as PBL is situated within constructivism (Slavich & Zimbardo, 2012; Stentoft, 2017).
The central tenets of constructivism are that knowledge is actively constructed by learners based on their experiences and that context is an indispensable part of the learning process (Biggs, 1996; Narayan, Rodriguez, Araujo, Shaqlaih, & Moss, 2013; Stentoft, 2017; Thayer-Bacon, 2013). Central to learning is that students are provided with authentic experiences that represent the complexity of real-life events and allow for student-centred learning (Narayan et al., 2013). Social exchange is an essential part of the learning experience so that learners can test their understanding against that of others (Narayan et al., 2013). Allowing learners to interact with the material in different formats or from different perspectives increases their understanding (Narayan et al., 2013). Finally, through reflection, the learner develops a self-awareness of their own thought process and understanding (Narayan et al., 2013). The instructor plays a complex role in providing these authentic learning experiences and facilitating as the learners move through the learning process (Biggs, 1996; Narayan et al., 2013; Stentoft, 2017). Further, within the constructivist perspective, the concept of transformative learning (Alfred, Cherrstrom, & Friday, 2013; Slavich & Zimbardo, 2012) informed my theoretical approach, particularly the description of transformative teaching offered by Slavich and Zimbardo (2012). Transformative learning involves a deep shift in perspective created by cognitive dissonance, or a "disorienting dilemma" (Alfred et al., 2013; Cranton, 2011; Slavich & Zimbardo, 2012), often caused by a major life change (Alfred et al., 2013). This "disorienting dilemma" triggers an examination of previously existing beliefs and perspectives that involves critical reflection, exploring new roles and relationships, acquiring new knowledge and skills, achieving competence in these new roles and, ultimately, integrating these new perspectives, roles, and actions into daily life.
When this integration of a changed action happens, transformative learning has occurred (Alfred et al., 2013; Cranton, 2011; Slavich & Zimbardo, 2012). Biggs (1996) outlines the concept of constructive alignment, which he defines as the combination of constructivism with instructional design practices, whereby the foundational beliefs of constructivism are incorporated into all aspects of the designed program: from objectives to learning activities, to assessment and reporting. Biggs asserts that "attempts to enhance teaching need to address the system as a whole, not simply add 'good' components, such as new curriculum or methods" (p. 350). Similarly, Slavich and Zimbardo (2012) advocate for a whole-system approach to transformative teaching, which they define as an "expressed or unexpressed goal to increase students' mastery of key course concepts while transforming their learning-related attitudes, values, beliefs, and skills" (p. 576). They identify three overarching principles of transformational teaching: facilitating students' mastery of core concepts, facilitating skill development during learning, and promoting reflection to develop attitudes, values, and beliefs that match positive expectations in the chosen field (Slavich & Zimbardo, 2012). The values and beliefs espoused by the constructivist framework, and transformative learning therein, were a guide for the development of the new curriculum delivery model for recruit training. Great care was taken to ensure that real-life, complex learning activities were the backbone of the curriculum, supported by opportunities for self-examination through guided critical reflection, and by individualized support and formative feedback through a mentoring system. What follows is a contextualization of policing in British Columbia as well as research into police training. A general overview of the literature then provides context for the program redesign.
Finally, the chapter concludes with an overview of the program design as well as a summary of my research question for evaluating the program change. 1.3 The Context of Police Training in BC Policing in Canada has three different levels: federal, provincial, and municipal. The federal police are the Royal Canadian Mounted Police (RCMP). Trainees in the RCMP are called cadets. All cadets receive their initial training at a central location in Regina, known as 'Depot'. From there they are deployed to postings across the nation. Provincially, each province is different. Some provinces, such as Ontario and Québec, have their own provincial police force (the Ontario Provincial Police and the Sûreté du Québec, respectively), while others, such as British Columbia, contract with the RCMP to provide provincial policing services. In British Columbia, municipal regions either have their own municipal police force or contract the RCMP to provide this service. In addition to the municipal police forces, residents of the Lower Mainland are also served by the Transit Police Department and members of the Stl'atl'imx First Nation are served by the Stl'atl'imx Tribal Police. All municipal, transit, and tribal police in British Columbia are trained in the Recruit Training program of the Police Academy. The Police Academy is physically housed at the Justice Institute of British Columbia (JIBC) in New Westminster. The BC municipalities and agencies that have their own police services and train at the JI are:
- Victoria
- Oak Bay
- Saanich
- Central Saanich
- West Vancouver
- Vancouver
- Port Moody
- New Westminster
- Delta
- Abbotsford
- Nelson
- Transit Police
- Stl'atl'imx Tribal Police
Policing within a province (provincial and municipal) falls under the jurisdiction of that province's government. Because of this lack of centralized governance, there is no single standard method of training police in Canada.
Recruit/cadet training programs vary in length, residency status, job status (pre-hire or post-hire) of trainees, and even the skills they are able to train. These discrepancies make it difficult to compare training programs in Canada and also make it necessary to outline the specific conditions of a given training program when engaging in discussion or beginning an evaluation study. In the British Columbia context, municipal police recruits are hired by their home departments and sworn in as Recruit Constables when they enter training at the Police Academy. This means that, as recruits, they are governed by the BC Police Act and can be held accountable under this act. This differs from municipalities in Ontario, where recruits are also hired before they attend training at the Ontario Police College (OPC) but are not sworn in until after they complete their training, and from recruits who attend the Atlantic Police College (APC), who are not hired until after their graduation. It also means that, in BC, municipal police recruits are members of their police unions throughout their time as a recruit. Union membership is relevant during training because of the possibility of union grievances should a recruit not meet the expectations required to pass training. Because recruits are hired prior to attending training, the municipal departments have control over the entrance requirements and standards for recruits. Further, each municipality has its own hiring requirements and process, making for a diverse group of recruits who come to the Police Academy. This lack of control over hiring also creates an interesting situation for the Police Academy, where the training institution has no input into who is admitted into the program. The majority of departments have a guideline that suggests a minimum of 2 years of post-secondary education for recruitment, but this is not an absolute requirement.
Recruit classes can consist of students with a range of educational experience, from a minimum number of post-secondary credits (or occasionally no post-secondary education) to advanced degrees and previous careers in law. Class sizes and demographic trends depend entirely on departmental hiring practices, targets, and budgets. Hiring levels can fluctuate dramatically due to community demands: extra classes had to be scheduled prior to the 2010 Olympics so that the Vancouver Police Department (VPD) could have enough trained members before the games began. VPD hiring declined immediately afterwards and scheduled intakes had to be cancelled or run with small class sizes. Similarly, an unexpected budgetary expense may prevent the planned hiring of recruits, or an unpredicted number of retirements may necessitate increased hiring in any given municipal department. Recruitment is typically an extensive process involving multiple interviews, physical fitness assessments, written exams, and background checks. A candidate can be deemed unsuitable at any stage. Because of the many factors involved in police recruiting, the Police Academy often does not have a final number of recruits expected to attend training until 1-2 weeks prior to the start of class. The Recruit Training program is divided into four separate blocks. Recruits progress through the program as a cohort until their graduation from Block III. While recruits are in Blocks I through III, they are considered Recruit Constables and must be either in training at the Police Academy or in their hiring department working under the supervision of a Field Training Officer (FTO). After successful completion of Block III, they graduate from the Recruit Training program as Qualified Municipal Constables. During this time they are able to complete their policing duties independently but are in a probationary period (Block IV) at their home department.
Following completion of the probationary period, they are fully Certified Municipal Constables. Figure 1-1 illustrates this progression from hiring to fully certified municipal constable.

Figure 1-1. Progression through police recruit training in British Columbia: as a Recruit Constable, Block I (13 weeks, Police Academy), Block II (12-17 weeks, field training in home municipal department), and Block III (8 weeks, Police Academy); graduation as a Qualified Municipal Constable; Block IV (1 year, home municipal department); and, finally, Certified Municipal Constable.

1.4 Summary of Recruit Training Delivery Models A full description of the recruit training delivery models is included in Chapter 3: Program Description. This section provides a brief overview of the training delivery model before and after implementation of the changes to provide context for the subsequent sections. When I started my role at the JIBC Police Academy, there had been little change to the delivery model of police training since the introduction of PowerPoint, when lectures on overhead transparencies were converted to lectures on PowerPoint slides. Each 'discipline', such as Legal Studies, Investigation and Patrol, and Traffic Studies, among others, was taught independently of the others, with little to no integration between instructors or topics. The primary delivery model was PowerPoint-based lectures. Occasional simulation days were included, two in Block I and two in Block III, where recruits either participated in the scenario or observed other recruits participating in the scenario. Limited to no formative feedback was provided to recruits, and the underlying philosophy was 'If you don't hear anything, you're doing well'. No time was provided for instructors to work with recruits who were struggling or to conduct remedial training. The sole method of formal evaluation was written exams that mostly relied on recruits regurgitating memorized facts, where they were often required to reproduce the answer verbatim from what was provided in class.
The Police Academy shared the common underlying philosophy in the policing culture that 'adult education' meant telling learners exactly what was going to be on the exam. After observing a multitude of sessions, it was clear that the focus on rote memorization, limited opportunities to apply concepts to practice, lack of formative feedback, and inability of instructors to provide help to recruits who needed it together meant that training was not delivered as effectively as it could be. The training delivery model needed to be modified to ensure that recruits were leaving the Police Academy with the best training possible to prepare them to serve their communities. The new delivery model aligns the recruit training curriculum with the Police Sector Council National Framework of Constable Competencies, as mandated by the BC Provincial Government. This framework is the only nationally accepted standard for the requirements of police officers in Canada and was developed through extensive research and collaboration with stakeholders in the Canadian policing community. Topics that were previously taught as separate disciplines are now integrated. The curriculum is structured around the most common patrol-level calls, and material is learned in the context in which it is needed to respond to these calls. The new delivery model uses refined readings, significantly reduced to focus on core 'need-to-know' information, and associated quizzes to ensure recruits have a foundational understanding of the key concepts prior to arriving in the classroom. The knowledge gained through the readings and quizzes is then applied through case-based exercises where instructors monitor recruit progress and understanding and clarify misconceptions. Recruits then have the opportunity to apply what they have learned in practical scenarios.
They receive formative feedback on their performance in these scenarios, watch recordings and self-assess their performance, and set related training goals for the upcoming weeks. Recruits are assigned an instructor mentor who follows their progression through recruit training and provides guidance and feedback throughout, while also ensuring the recruits are held responsible for their learning. Time is built into the curriculum during which recruits can work, with instructor guidance, on their individual training plans in the areas where they most need improvement. Recruits are examined by both written and practical scenario exams and complete an overall assessment portfolio at the end of each block. The training was designed specifically to address issues observed in the old delivery model, feedback from past classes of recruits, and recommendations from the literature. 1.5 Research Question - Program Implementation and Evaluation The focus of this project was the implementation and evaluation of this new curriculum delivery model for Police Recruit Training in BC. The program evaluation addressed the question: what are the effects of introducing a competency-based education framework on police recruit preparedness for field training? This question was addressed using surveys administered to recruits and field trainers for one class trained in the old delivery model and two classes trained in the new delivery model. A secondary question that arose from this primary evaluation question is whether there is a difference in recruit perceptions of their ability or preparedness for field training (Block II) between the end of their Block I training, when they may not have any knowledge of the requirements of patrol work, and after they have had some Block II training and have experienced the realities of patrol-level policing.
Surveys were administered to recruits at the end of their Block I training and after approximately 10 weeks of field training to address this question. I selected Block II training for this evaluation because recruits work closely with a Field Training Officer (FTO) during Block II. The FTO is an experienced police officer who can provide an objective evaluation of a recruit's ability and readiness for the road. The program evaluation design compared this FTO evaluation to the recruits' self-evaluations. This additional survey data source and comparison addressed another secondary question: whether there were differences between the recruits' perceptions of their ability and preparedness and the perceptions of their FTOs. Upon graduation, recruits in most departments work independently, so Block II is the only opportunity to compare recruit self-evaluations with those of a more experienced police officer. Any evaluation of the effects of the program post-graduation would rely mainly on the recruits' own perceptions and would lack the objective assessment of an experienced officer. Because of this lack of comparative data, the program evaluation was limited to Block I and how it prepared recruits for their Block II field training experience. 1.6 Summary This project is a program evaluation study using survey data to compare ability and preparedness in recruits from one class trained using the lecture-based delivery model and two classes trained using the new competency-based delivery model. Chapter 2 will review key areas of the literature that informed the new delivery model. Chapter 3 will outline the lecture-based delivery model of police training and describe the competency-based delivery model in detail. Chapter 4 will outline the methodology of the study and required changes in project design, and situate the study within the current political context of policing in BC. Chapter 5 will present the findings of the study.
Chapter 6 will discuss possible interpretations of the findings and the significance of organizational cynicism and organizational change to this study. Lastly, Chapter 7 will conclude with lessons learned and recommendations. Chapter 2: Literature Review After mapping the Recruit Training curriculum to the Police Sector Council Constable Competencies and realizing that a major change in the philosophy and design of the program was required, a literature review was conducted to ensure the proposal for the new program was based on evidence. That review encompassed research in police training, in competency-based education, and in assessment. This chapter will provide an overview of research from each of these areas that formed the foundation of the proposal for the new program, along with research that has been published since that proposal was written. 2.1 Research on Police Training Research on policing from a Canadian perspective is scarce (Huey, 2016; Huey & Bennell, 2017). A comprehensive review of the literature revealed 218 research articles on Canadian policing published between 2000 and 2015 (Huey & Bennell, 2017). While the majority of published research on policing in Canada might not specifically address training, areas of research often lead to recommendations for future training. The work of Rick Parent, from Simon Fraser University in Burnaby, BC, is one such example. Parent examined the police use of deadly force in Canada in comparison to that in the United States (Parent, 2006; Parent, 2007; Parent, 2011). The homicide rate in the United States is approximately threefold higher than that in Canada, and the rate of murders of police officers is also considerably higher. Parent (2006) concluded that while the circumstances surrounding the use of lethal force do not differ between the United States and Canada, the frequency differs considerably.
This higher rate of violent crime, combined with the higher rate of murders of police officers, leads to an increase in both perceived threat and calculated risk, which may result in American police using lethal force more frequently than their Canadian counterparts (Parent, 2006). Further, an analysis of the thirty lethal force incidents in British Columbia from 2000-2009 revealed that approximately 25% involved subjects with a known history of mental illness or suicidal behaviour (Parent, 2011), and it is estimated that approximately one third of police shootings involve someone in a crisis caused by mental health issues, emotional stress, or substance use (Parent, 2007). To address the unique circumstances surrounding a person in crisis, particularly one with a mental illness, Parent recommends training for both new and current police officers in recognizing the signs of mental illness, as well as Crisis Intervention Training, as is in place in some American jurisdictions. Officers who have considerable specialized training in de-escalation have been shown to decrease the arrest rates of people with mental illness as well as decrease the rates of police injuries and the need for specialized emergency response units (Parent, 2007; Parent, 2011). A study in British Columbia comparing people with mental illness and the general public found that 60% of people with mental illness had some contact with police in the preceding year, compared to 40% of the general public (Desmarais et al., 2014). The study found that people with mental illness were not just more likely to commit crimes than the general public but also more likely to be the victims of crimes (Desmarais et al., 2014). People with mental illness also rated the police significantly lower on aspects of procedural justice, such as being fair and approachable, than did the general public (Desmarais et al., 2014).
These findings led to the recommendation that police must be trained to develop skills, including de-escalation, to better interact with people with mental illness (Desmarais et al., 2014). Following training recommendations such as these, the BC Crisis Intervention and De-Escalation (CID) program is now a mandatory training program for all front-line police officers and front-line supervisors in BC. The initial implementation of the training was completed in 2015. CID training is now a mandatory part of recruit training at the JIBC Police Academy. Recruit training also involves significant components on recognizing and interacting with people with mental illnesses. It would be interesting to replicate the study conducted by Desmarais et al. (2014) following the completion of the CID training initiative. Additionally, local research from Simon Fraser University on young offenders' recidivism decisions is not directed specifically at police but provides valuable insights into the thought processes of young offenders that could help police when interacting with them (Corrado, Cohen, Glackman, & Odgers, 2003). The finding that the majority of the 400 incarcerated youth from the Greater Vancouver Region were not motivated by cost-benefit decisions, punishment, or re-integration (Corrado et al., 2003) suggests that strategies to combat youth crime should focus on approaches outside of these motivators. The literature on Canadian police recruit training is scarce (Huey, 2016; Huey & Bennell, 2017; Robertson, 2012). The majority of research focuses on in-service training on specific topics, such as ethics or use of force, or on evaluation of departmental initiatives. Huey (2016) found there were no peer-reviewed articles on Canadian police training, let alone recruit training, published between 2000 and 2015. The majority of the available literature is from the United States, which has a very different approach to policing than Canada.
In general, there is much more gun violence in the US (Parent, 2006) and police are trained in a much more militaristic fashion. Policing in Canada, where there is comparatively little gun violence, tends to emphasize communication skills and de-escalation techniques. Policing in Canada, as in other British Commonwealth countries, is founded on Peel's principle that the police are the public and the public are the police. Canadian police most often exercise their authority through officer presence and persuasion techniques (Robertson, 2012). Further, Canadian police are frequently sworn in using oaths that include a duty to uphold the principles of the Canadian Charter of Rights and Freedoms, which means recognizing their obligation to all members of society, especially those who are members of marginalized populations (Robertson, 2012). These differences make it difficult to draw comparisons between policing cultures and training in the Canadian and American contexts. Additionally, because of the lack of standardization of police training within Canada, it can be difficult to draw direct comparisons between provinces. There is, however, a small amount of literature focusing on adult learning in the police training context. Again, most of this body of work is from the United States, but because it focuses on adult learning theory rather than state-specific training practices, it is more applicable to a discussion of police training in the Canadian context. Despite the lack of available literature, there is a general recognition that police training should be evidence-based, following best practices from both theory and research (Kratcoski, 2016). There remains a debate within the policing community about the differences between police training and police education, and which is most appropriate at a given stage of training (Cordner & Shain, 2016; Haberfield, 2013; Kratcoski, 2016; Oliva & Compton, 2010; Paterson, 2016; White & Heslop, 2012).
Traditional police training is seen as teaching how to do policing, or how to perform a certain task in a specific way, and is frequently para-military, lecture-based, and concerned with conveying a large amount of information and frequent 'war stories' (Haberfield, 2013; Kratcoski, 2016; Paterson, 2016). This conception of training is at odds with the evolving role of police, particularly in light of the current community policing paradigm, the procedural justice focus on individuals and communication, and the continued globalization of policing (Cordner & Shain, 2016; Oliva & Compton, 2010; Otwin, 2005; Paterson, 2016). Police education, on the other hand, is seen as encouraging critical thinking, problem solving, and using values-based thinking to come up with alternative approaches (Haberfield, 2013; Paterson, 2016). Typically, this type of education is associated with a higher education institution, such as a university, and is obtained prior to attending police training (Cordner & Shain, 2016; Paterson, 2016). Interestingly, unlike in other professions such as teaching or nursing, where credentialing or certification is directly tied to higher education, police education has historically been marginalized by both the police training academies and the police profession itself (White & Heslop, 2012). The issue of whether or not a university education better prepares people to enter the policing profession is a matter of separate debate, and most departments in BC currently require a minimum of two years of post-secondary education as part of their selection criteria.
Despite the perceived tension between police training and police education, there currently seems to be a general recognition that, in order to meet the community-based demands of policing, police training needs to involve components from both training and education models, and should follow the general principles of adult learning to be most effective (Cordner & Shain, 2016; Golden & Seehafer, 2009; Haberfield, 2013; Hundersmarck, 2009; Kratcoski, 2016; Mugford, Corey, & Bennell, 2013; Oliva & Compton, 2010). Research from the Police Research Lab, located at Carleton University in Ottawa, Ontario, Canada, has applied cognitive load theory to the use of simulator-based training in use of force (Bennell, Jones, & Corey, 2007) and to police training in general (Mugford et al., 2013). Cognitive load theory posits that working memory can hold an extremely limited number of “elements,” or new pieces of information, whereas long-term memory can hold a virtually unlimited number of elements (Bennell et al., 2007; Mugford et al., 2013; van Merrienboer, Clark, & de Croock, 2002). The working memory actively integrates new information into schemas that serve to group new information so that it can be understood and easily accessed (Bennell et al., 2007; Mugford et al., 2013; van Merrienboer et al., 2002). Schemas are stored in the long-term memory and are processed by the working memory as one element, thereby reducing the burden on working memory (Bennell et al., 2007; Mugford et al., 2013; van Merrienboer et al., 2002). With sufficient practice, schemas can become automated, or performed without conscious thought, further reducing the burden on working memory (Bennell et al., 2007; Mugford et al., 2013; van Merrienboer et al., 2002). According to cognitive load theory, the primary goal of training is to promote the acquisition and automation of schemas (Mugford et al., 2013). 
Additionally, cognitive load theory describes three forms of cognitive load: intrinsic load, extraneous load, and germane load. These three types of load are additive in the working memory, and training should be designed to ensure the additive effects do not exceed the working memory capacity of the learners, as frequently happens with traditional training methods (Mugford et al., 2013). Intrinsic load is a function of the complexity of the material to be learned and can be managed by providing simple examples at the start of training, providing worked examples, and dividing complex material into a series of steps before moving to integration of the complete concepts (Bennell et al., 2007; Mugford et al., 2013; van Merrienboer et al., 2002). Extraneous load is a function of the complexity of the training activity and can be managed by providing simple and clear instructions, ensuring there is no unintentional redundancy in training material, and integrating sources of information (Bennell et al., 2007; Mugford et al., 2013; van Merrienboer et al., 2002). Lastly, germane load is a function of training design but, unlike extraneous load, germane load is directly relevant to schema formation and automation. Germane load involves the incorporation of variety into training, both in terms of the variety of situations encountered and in terms of the variety of examples within a specific type of situation (Bennell et al., 2007; Mugford et al., 2013; van Merrienboer et al., 2002). By applying cognitive load theory to police training, components of training such as use of force simulator training and e-learning activities can be structured to manage intrinsic and extraneous loads while maximizing germane load, thereby increasing the effectiveness of police training (Bennell et al., 2007; Mugford et al., 2013). 
Despite anecdotal evidence that many use of force trainers were unknowingly applying concepts from cognitive load theory to their training structure, this application is inconsistent and requires more investigation (Bennell et al., 2007). Similarly, with the increasing prevalence of e-learning strategies for police training initiatives, care must be taken to design training that does not unintentionally exceed the working memory capacity of police officers. The ease with which multimedia and various resources can be integrated into e-learning means the likelihood of increasing extraneous load is high if cognitive load theory is not incorporated into the training design (Mugford et al., 2013). Within the context of the adoption of a community oriented policing philosophy across much of the United States in the 1990s and 2000s, there was an interest in examining how the theories of andragogy applied to police training (Birzer & Tannehill, 2001; Birzer, 2003a). Recognizing that the majority of policing activities involve interacting with the public by providing information, assistance, aid to the injured, and mediation, Birzer and Tannehill (2001) critiqued the prevalence of the behaviourist approach to teaching in police training academies. They determined that this approach may be appropriate for skills such as shooting or force options but is much less effective for topics such as interpersonal communication, cultural diversity, problem solving, or conflict resolution (Birzer & Tannehill, 2001). The suggestion from the application of andragogy is that training should be interactive, participatory, and experiential, providing trainees practice applying the skills they are developing through problem solving, case study, and simulation activities (Birzer, 2003a). 
Hundersmarck (2009) followed a small number of cadets (the equivalent of recruits in a US context) through their initial classroom training and then into their field training component to determine how knowledge and skills gained at the academy were transferred to the practical setting of field training. Through observing the academic portion of training, it was determined that the classroom time was mostly spent in didactic lectures, with less than three percent of time focused on scenarios and application of learning (Hundersmarck, 2009). The cadets, however, believed that the scenarios held much higher relevance than the lectures to their future role as police officers, and they referred only to the scenario component of training during their field training component (Hundersmarck, 2009). Hundersmarck also noted, however, that the police culture, and the attitude of the field training officers that the academy was not relevant to how things are “really” done, may have made the cadets reluctant to explicitly draw on anything they had learned in their classroom training (Hundersmarck, 2009). Several examples of innovations in police training have embraced the principles of adult learning. First, in the Idaho Peace Officer Standards and Training (POST) academy, Werth (2011) developed and implemented a Problem Based Learning Exercise (PBLE) that spanned the entire ten weeks of recruit training to allow recruits extended time to apply their learning and develop higher-level critical thinking and investigative skills. The exercise began with a dispatched scenario, and recruits followed up the investigation over the remaining weeks through simulated interviews and phone calls, investigations, evidence gathering, and case presentations to staff (Werth, 2011). To evaluate the effectiveness of the exercise, a total of ten academy classes were surveyed on how they believed the exercise developed their mechanical and non-mechanical competencies. 
The majority of the 413 respondents indicated that the PBLE helped develop their problem-solving, decision-making, communication, and multi-tasking skills. Werth did note, however, that some students, instructors, and administrative staff resisted the concept of this type of self-directed learning and that implementation required a culture shift within the organization (Werth, 2011). Similarly, in examining the existing culture around training in the Los Angeles Police Department (LAPD), it was recognized that the military-based training culture did not represent the mindset expected of recruits once they had graduated and begun serving their community (Pannell, 2012; Pannell, 2016). The redesign of the LAPD police academy training included a focus on thinking through reasoning and articulating actions, team teaching from integrated teams of instructors, individual development and remediation, debriefs focusing on the whole person instead of just the tactical actions taken, and developing critical thinking skills to apply to novel scenarios (Pannell, 2012; Pannell, 2016). The results from this change indicate that the recruits appreciate understanding why they are taking the actions they take, and the field trainers and administrative staff feel the recruits perform better than those trained in the previous model. In implementing these changes, the LAPD recognized the need to overhaul the entire training program, including educational philosophy and culture, rather than simply adding on an additional training component. A requirement for the success of such an initiative is the training and preparedness of the instructors themselves, and their willingness to be involved in such a cultural shift. 
The importance of training instructors in a new methodology to better employ the concepts of andragogy is also highlighted by Birzer, who describes how the Chicago Police Department designed a new training module for community oriented policing that was centred around the principles of adult education but ultimately ended up being delivered in a lecture-based format because of the comfort level of the instructors (Birzer, 2003a). In analyzing the teaching practices and preferences of police instructors at an agency that trains police instructors, McCoy (2006) discovered that, while the majority of instructors scored very high in a teacher-centred style of instruction, a deeper analysis revealed their preference was to be student-centred; they simply lacked the knowledge and skills to implement a student-centred methodology. In addition to teacher skill and preference, one reason commonly given for teaching in a purely didactic style, and one that is perhaps unique to the policing community, is the concern over liability issues and the coverage of content. Course topics are frequently seen as items on a list of check boxes to indicate that a recruit has taken the relevant training. McCoy (2006) points out that the real liability issue should be whether or not a police officer can apply what they have learned; it is when they cannot apply the learning that liability for the instructors and training institution should increase (McCoy, 2006). The preparation and training of future instructors should focus on developing their skills to teach using a method based on the principles of adult education, where learners need to demonstrate their ability to perform the necessary skills, not simply sit in the training room (Birzer & Tannehill, 2001; McCoy, 2006). 
Fittingly, Oliva and Compton (2010) examined a small number of police officers to determine their preferences with respect to the teaching style of their instructors. Overall, there was a self-reported preference for adult learning techniques, particularly highlighting that training should be engaging, practical, efficient, and allow time for interaction with the other learners. To meet the preferences of both the trainees and the instructors in a policing context, Birzer (2003) suggested what he called a “mission-oriented” approach focusing on the skills and knowledge police need to perform the duties of their job on a daily basis. This type of training is best known now as competency-based education, where competencies are determined based on a job analysis and a trainee’s performance is measured according to how well they meet these job competencies. In Canada, a set of national police competencies has been developed through extensive collaboration facilitated by the Police Sector Council (PSC).

2.2 Police Competencies in Canada

The Police Sector Council (PSC) was a not-for-profit agency, funded through the Government of Canada Sector Council Program, that brought together experts in policing and police training from across the country. Representatives included training organizations, municipalities, chiefs of police, military police, and the RCMP. The PSC conducted research projects around perceptions of police, skills perishability, and human resources (HR) challenges and solutions for the policing context. Competency-Based Management arose from the exploration of HR solutions, and from 2008-2010 the PSC undertook extensive collaboration to identify the core competencies, and associated tasks, for each rank of police officer (Police Sector Council, 2011). These competencies are now known as the PSC National Framework of Competencies. 
The goal of this process was to facilitate standardization of policing, police promotion, and police training across Canada. Unfortunately, the PSC lost funding in 2012, at a crucial point for the incorporation of competency-based practices into police training. At this stage, many agencies had begun using competency-based management HR practices for evaluation and promotion, and many had expressed interest in extending this framework to their training programs. Without the PSC as a guiding force, however, departments focused their attention on HR practices and the momentum for curricular change was lost. In British Columbia, policing is governed by the Policing & Security Branch (PSB) of the provincial government. PSB was represented on the Police Sector Council when the National Framework of Competencies was developed and has been instrumental in the plans to adopt a competency-based framework in the Recruit Training program in BC. The Police Academy’s annual operating budget is presented in a letter of agreement between PSB and the Police Academy. In 2013, this letter of agreement stipulated that the Police Academy must hire a Curriculum Developer to map the Recruit Training curriculum to the PSC National Framework Constable Competencies and generate a Course Training Standard (CTS) for the program. This mapping would ensure that Recruit Training in BC was producing graduates who met the competencies for the rank of Constable, that the program was teaching all necessary concepts, and that the program was not teaching unnecessary material. Table 2-1 summarizes the nine core constable-level competencies: adaptability, ethical accountability and responsibility, interactive communication, organizational awareness, problem solving, risk management, stress tolerance, teamwork, and written skills. Each competency has five associated proficiency levels that increase progressively in difficulty. 
The minimum expectation for the rank of Constable in Canada is proficiency level 2, as described in Table 2-1. Recruits are expected to move through proficiency level 1 to proficiency level 2 at various points in their training and to reach level 2 by graduation. Because of the advocacy of the BC Policing & Security Branch, BC is currently at the forefront of mapping recruit training curriculum to the Constable competencies and of extending competency-based principles into the recruit training program. As such, the next section will focus on the general principles of competency-based education.

Competency: Proficiency Level 1 | Proficiency Level 2
Adaptability: Recognizes the need to adapt to change | Modifies own behaviour or approach to adapt to a situation
Ethical Accountability and Responsibility: Embraces high standards of conduct and ethics | Handles ethical dilemmas effectively
Interactive Communication: Presents information clearly | Fosters two-way communication
Organizational Awareness: Understands formal policing structure | Understands informal policing structure and culture
Problem Solving: Identifies basic problems | Solves basic problems
Risk Management: Participates in the management of situations and calls | Manages a limited range of situations and calls with minimal guidance
Stress Tolerance: Works effectively with standard situations | Works effectively in the face of occasional disruptions
Teamwork: Participates as a team member | Fosters teamwork
Written Skills: Conveys basic information | Selects and structures information
Decision Making: Makes decisions based on existing rules | Makes decisions by interpreting rules

Table 2-1 Police Sector Council core Constable competencies with proficiency levels 1 and 2 (Police Sector Council, 2011)

2.3 Competency-Based Education

The following sections provide an overview of the literature on competency-based education, terminology, and elements of competency-based learning. 
2.3.1 Overview of competency-based education

Traditional, lecture-driven curricula are taught as content-heavy, isolated components, where memory-based assessment practices make it difficult to determine if graduates are competent in the requirements of their intended practice (Frank et al., 2010; Smith & Dollase, 1999). Typically, students see traditional classroom learning as a mostly arbitrary sequence of unrelated content. Competency-based education provides the framework to learn how concepts interconnect (Black & Wiliam, 1998; Fraser & Greenhalgh, 2001). Learning concepts and skills in their real-life context, highlighting relationships, enhances motivation, learning, and the accessibility of stored information in adult learners (Black & Wiliam, 1998; Bowen, 2006; Fraser & Greenhalgh, 2001). Professions that face increased accountability and scrutiny need to demonstrate ability in their graduates, which can be difficult to do using traditional assessment methods (Frank et al., 2010). Competency-based education addresses these deficiencies in traditional curricula by focusing on the end product of observable behaviours that reflect the learners’ knowledge, skills, and attitudes (KSA) (Albanese, Mejicano, Mullan, Kokotailo, & Gruppen, 2008; Frank et al., 2010; Hodge & Harris, 2012; Mansfield, 1989; Shumway & Harden, 2003; Smith & Dollase, 1999). Many professions, such as medicine (Frank & Danoff, 2007), education (Darling-Hammond, 2006), and policing (Police Sector Council, 2011), as well as trades such as automotive repair (Hodge & Harris, 2012), have defined sets of key competencies required of practitioners. These competencies are not only sets of abilities; they are also political statements of societal values (Albanese et al., 2008) and a framework for curriculum development and reform (Hodge & Harris, 2012; Tuxworth, 1989). 
The transition to competency-based education in many of these fields, particularly medical education, has been ongoing for many years. This introduction to competency-based education will focus on the learning process, the elements of competency-based programs, and the assessment of competencies. It will draw heavily on the literature from medical education, but this literature is readily translatable to the police context. With increasing accountability to regulators and to the public, physicians have seen an increased need to assess graduates in ways that address values as well as the social and community context (Frank et al., 2010; Smith, Goldman, Dollase, & Taylor, 2007), and police face the same, if not higher, levels of scrutiny. Competencies, and thus curriculum, must be able to adjust to changes in societal values and needs in both of these professions (Davis & Harden, 2003a; Epstein & Hundert, 2002), and practitioners in both fields must be able to manage ambiguous problems, tolerate uncertainty, and make quick decisions with limited information (Epstein & Hundert, 2002). A competency-based curriculum, as explored below, provides the framework to respond to the needs of society while ensuring graduates meet the required standards to practice and meet high public expectations.

2.3.2 Defining Competency-Based Education Terminology

As competency-based and outcome-based education have evolved over the years, there has been much debate as to the intent and meaning of the various terms used in describing the curriculum (R. M. Harden, 2002; ten Cate & Scheele, 2007). For clarity, it is important to define the set of terms that will be used in this paper. Drawing on the work in medical education (Albanese et al., 2008; Davis & Harden, 2003a; R. M. Harden, 2002; Rethans et al., 2002), I am working with the definitions outlined below. 
First, it is important to distinguish between outcomes and competencies, as both can be used in developing a competency-based curriculum. Both describe broad characterizations of the knowledge, skills, and attitudes important for graduates to possess (Albanese et al., 2008; Davis & Harden, 2003a). Outcomes are developed by linking the expectations of learners to the skills and abilities of a practicing professional. They are statements describing what the program wants graduates to have (Albanese et al., 2008). Competencies, on the other hand, are statements describing the knowledge, skills, and attitudes that graduates need to have to ensure that they have the basic abilities to practice on their own (Albanese et al., 2008; Fraser & Greenhalgh, 2001). A program can therefore have both competencies, describing the minimum standard to meet qualifications, and outcomes, describing how the program aspires for its graduates to go beyond meeting the basic qualifications. In describing competency-based education, these two terms are sometimes used interchangeably when describing the need to document a learner’s progress through the expected competencies or outcomes designated by the program. Another pair of terms that needs defining is competency-based assessment and performance-based assessment. Again, both of these elements can be used effectively in a competency-based education program. Competency-based assessment is a measure of what the learner can do in a controlled representation of professional practice, whereas performance-based assessment is a measure of what the learner can do in actual professional practice (Davis & Harden, 2003a; Rethans et al., 2002). 
Lastly, the concept of professional competence, or what graduates aspire to, has been defined by Epstein and Hundert (2002) as “…the habitual and judicious use of communication, knowledge, technical skills, clinical reasoning, emotions, values, and reflection in daily practice for the benefit of the individual and community being served” (p. 226). They also describe the ability to manage ambiguity and uncertainty, and to make decisions with limited information, as central to professional competence (Epstein & Hundert, 2002). While this definition was written to describe a practicing physician, the elements and expectations are similar to those of an active police officer. Removing the word clinical from the definition renders it applicable to the policing context. Indeed, this is an apt description of the day-to-day requirements of an on-duty police officer.

2.3.3 Determining Competencies

A crucial component of a competency-based education approach is the accurate and thoughtful definition of competencies. In professional fields or vocational training, determining these competencies requires an occupational analysis to identify the key competencies and tasks required to perform successfully in the occupation. The process of determining the key competencies for an occupation is problematic, as it privileges the experiences of those involved in the determination and is potentially open to influence from political, economic, media, or other such factors (Jansen, 1998; Schwarz & Cavener, 1994). One robust method of occupational analysis that is frequently used across multiple professions is DACUM, which stands for Developing A CurriculUM. DACUM is a model for the development and management of competencies that was developed in Canada in the late 1960s (Canadian Vocational Association, 2013; Wyrostek & Downey, 2017). 
DACUM relies on three assumptions: expert workers are the best at describing and defining their job; a job can be defined by precisely describing the tasks performed by expert workers; and all job tasks require enablers, such as the use of knowledge, skills, tools, and positive worker behaviours, to be done correctly (Canadian Vocational Association, 2013; DeOnna, 2002; Norton, 1998; Norton, 2009; Wyrostek & Downey, 2017). Following the initial DACUM process, the resultant competency profile and task list is validated by surveying a larger group of subject matter experts (DeOnna, 2002). The competency profile and task list can then be used as a guide for curriculum design to ensure what is taught is aligned with what needs to be taught (DeOnna, 2002). Most frequently, the DACUM process is used in conjunction with an instructional design model such as ADDIE (Wyrostek & Downey, 2017), as discussed in Section 3.3. The competencies, once defined, act as the guide for both curriculum and assessment in a competency-based education program.

2.3.4 Elements of Competency-Based Learning

The hallmark of a competency-based education program is that progress is gauged by achievement of the competencies instead of by time spent in the program or the process of teaching (Albanese et al., 2008; R. Harden, Crosby, Davis, Howie, & Struthers, 2000; Hodge & Harris, 2012; Leung, 2002; Smith & Dollase, 1999). Additionally, the assessment practices are closely aligned with the competencies and are viewed as part of a continuous learning experience (Albanese et al., 2008; Ben-David, 1999; Davis & Harden, 2003a; R. Harden et al., 2000; Hodge & Harris, 2012; Dąbrowski & Wiśniewski, 2011). To this end, the program is structured to facilitate the development and achievement of the competencies by the learners (Hodge & Harris, 2012), and the expected progression through the competencies is documented such that each learner can gauge their own progress (Davis & Harden, 2003b; R. M. Harden, 2007; Dąbrowski & Wiśniewski, 2011; Smith et al., 2007). With competency-based education, the learner takes responsibility for their own progression down this path, and the responsibility for learning is shared by learners and instructors (Frank et al., 2010; Hodge & Harris, 2012). As learners demonstrate their progress through the competencies, the quality of feedback received from instructors is one of the most important factors for increasing performance (Brightwell & Grant, 2013). In its purest form, competency-based education can be a difficult and unsustainable proposition: learners are free to progress through the program at their own pace, and this open-ended time frame can be very resource intensive and problematic (Hodge & Harris, 2012). Frequently, programs adapt to scheduling requirements such that the total time frame of the program is fixed but learners have more freedom as to how their curricular time is used to ensure that they meet the required competencies (Davis & Harden, 2003b; Hodge & Harris, 2012). Even this change is sometimes a difficult adjustment for learners and instructors, and a unified team approach is required to support the program and ensure success (Davis & Harden, 2003b). One major advantage of a competency-based education program is its ability to be responsive to societal expectations and needs (Davis & Harden, 2003b; Fraser & Greenhalgh, 2001; Hodge & Harris, 2012), as competencies are also political statements about what is valued by society, the profession, and the institution (Albanese et al., 2008; Mansfield, 1989; Tuxworth, 1989). Development of the required competencies is a continuous process integrated with the instruction, assessment, and governance of the program, leading to an inclusive and holistic educational experience (Tuxworth, 1989). An extensive and thorough curriculum map is essential to the development and implementation of a competency-based education program. 
The curriculum map needs to include both information from the task analysis and information about where specified competencies can be attained, because a focus only on the specific tasks can lead to a reductionist assessment (Cox, 2011; Tuxworth, 1989). This map also facilitates discussion of educational principles and curriculum implementation, allowing the program to respond to change as necessary (Davis & Harden, 2003b; Tuxworth, 1989). As a learner moves through the curriculum, starting with basic cases, Harden (2007) identified four dimensions along which the learner can progress to meet the expected competencies. The problems encountered by the learner can increase in breadth, difficulty, utility and application, and proficiency. Each of these dimensions is necessary to fully achieve a competency, and each needs to be structured into the curriculum to support learning.

2.4 The Learning Process

Context plays an important role in adult education: adults need to know why they are asked to learn information and to immediately see the relevance of what they are learning (Birzer & Tannehill, 2001; Birzer, 2003a; Fraser & Greenhalgh, 2001). Adult learning is influenced not just by the content, but also by the context and by personal influences (Carraccio, Benson, Nixon, & Derstine, 2008; Schenck & Cruickshank, 2015). The context, or way, in which information is stored makes it more or less readily available when needed (Bowen, 2006). As such, many competency-based learning activities are structured around ‘authentic tasks,’ where learning occurs in a context that includes most of the cognitive demands of real-world situations (Koens, Mann, Custers, & ten Cate, 2005). In situations where a complex task is learned, the physical context (i.e., the surrounding space) is of little importance, but the semantic, or cognitive, context plays a larger role in skill acquisition (Koens et al., 2005). 
One common curriculum strategy to structure authentic tasks into the learning process is case-based learning, where the learning of theoretical information and skills is integrated into case presentations (Barrows, 1986; Carraccio et al., 2008). Case-based learning can be thought of as part of a continuum with problem-based learning, with the amount of guidance provided to learners decreasing, and the amount of self-directed learning increasing, as one moves from case-based to problem-based learning (Aditomo, Goodyear, Bliuc, & Ellis, 2013; Barrows, 1986). Each of these techniques provides engaging and meaningful learning, and should be selected as appropriate for the desired learning outcomes (Aditomo et al., 2013; Barrows, 1986). Developing curricular material based on real-life situations helps learners adapt their learning to the new situations they face and actively build their knowledge, because it places them in the key role of decision maker for real-life problems (Aditomo et al., 2013; Dunlosky, Rawson, Marsh, Nathan, & Willingham, 2013; Fraser & Greenhalgh, 2001; Nkhoma, Sriratanaviriyakul, & Quang, 2017; Vander Kooi & Bierlein Palmer, 2014). Real problems and cases also highlight the interdisciplinary nature of critical thinking and problem solving, and facilitate the integration of theory into practice (Stentoft, 2017). More experience with cases fosters the recognition of common elements, or patterns, enabling practitioners to make faster decisions and focus attention on other important aspects of the situation (Carraccio et al., 2008). Further, working in small groups to solve cases can also build teamwork and communication skills, increase participation, and increase motivation and engagement with the material (Jones, 2006; Nkhoma et al., 2017). 
Well-crafted cases that model desired behaviours and approaches to professional expectations can also help develop the learners’ tacit knowledge about their future role (Aditomo et al., 2013). In addition to providing the opportunity to learn through case-based exercises, authentic tasks can be incorporated into the curriculum through scenario-based exercises, through practicums, and through experiential education. Scenarios should be as realistic as possible, and they require a great deal of preparation and organization to ensure that they provide opportunities for students to meet the educational objectives (Werth, 2011). Such experiential activities have the potential to be truly transformative learning experiences for students, so they must be used thoughtfully and intentionally within the curriculum to avoid diminishing the impact of the experience (Sakofs, 2001). The experiences should be chosen so that they fit with the intention and goals of other curricular activities and so that students can be fully engaged. Incorporating action and reflection into the experience helps ensure that students are able to understand their learning experience and develop the critical thinking skills necessary to examine their performance and experiences (Estes, 2004). Dreyfus (2004) outlined five progressive stages for adult skill acquisition. Carraccio et al. (2008) expanded on these stages to include descriptions and strategies for case-based learning to achieve competency in medical practitioners. An alternative progression proposed by ten Cate and Scheele (2007) approaches the achievement of competencies in terms of the level of supervision learners require as they build their skill levels:
• Has knowledge
• Act under full supervision
• Act under moderate supervision
• Act independently
• Act as a supervisor or instructor
These levels can be correlated to the various stages discussed by Dreyfus (2004) and Carraccio et al. (2008). 
Table 2-2 below summarizes their descriptions (note that the use of the term competency here is not related to its use in ‘competency-based learning’). The key point ten Cate and Scheele (2007) add to the discussion of adult skill development summarized in Table 2-2 is that the level of proficiency can be correlated to the level of supervision an individual requires to perform. This global type of evaluation, based on how much the candidate can be trusted to work independently, can be easier for expert-level practitioners to make than a determination based on lengthy checklists (ten Cate & Scheele, 2007). Regardless of how the progression of learning is described, it is evident that these models reflect the progressive acquisition of knowledge, skills, and attitudes, with each stage building upon the previous ones. It is important that learners at the beginning stages are first exposed to common problems to anchor the learning in their memory (Bowen, 2006). Once learners are comfortable with the common problems, it is easier for them to compare new concepts, building and elaborating on their learning (Bowen, 2006). Indeed, competency-based education aims to build abilities in a constructivist manner by incorporating previous learning into later stages of the curriculum and by focusing development and assessment on observable abilities (Frank et al., 2010). The goal of competency-based education is to promote conceptualization instead of memorization (Bowen, 2006). Care must be taken when designing a competency-based framework not to shift from a constructivist to a behaviourist framework, thereby reducing the competencies into long lists of individual tasks that do not reflect the complexity of the real world (Birzer, 2003b; Cox, 2011; Leung, 2002).
Novice
  Dreyfus (2004): No context; follows basic rules; no emotional attachment.
  Carraccio et al. (2008): Teaching does not guarantee learning, particularly in this stage; use teaching methods that integrate theory and practice to help learners build connections. Strategies for case-based learning: highlight meaningful information in the case; eliminate irrelevant information; highlight discriminating features and their importance.
  ten Cate and Scheele (2007): Has knowledge.

Advanced beginner
  Dreyfus (2004): Experience and some understanding of context enables recognition of clear and easy examples; uses situational and non-situational cues; learning is detached and analytical; learning relies on instructions and given examples.
  Carraccio et al. (2008): Work from common to uncommon cases; help with formulating and verbalizing their assessments and plans; use team structure and near-peer coaching.
  ten Cate and Scheele (2007): Act under full supervision.

Competency
  Dreyfus (2004): Able to differentiate between important and not-important information; chooses a perspective; takes responsibility for choices regardless of success; invested in the outcome because actively making decisions; deeper learning occurs from mistakes because of both cognitive and emotional involvement.
  Carraccio et al. (2008): Balance supervision with autonomy; hold accountable for their decisions.
  ten Cate and Scheele (2007): Act under moderate supervision.

Proficiency
  Dreyfus (2004): Decision making influenced by success and failure in the previous stage; situational discrimination; sees the bigger picture but does not have enough experience to act automatically.
  Carraccio et al. (2008): Situational discriminators and pattern recognition predominate over rules; learn to know limitations and use additional resources when needed; mentoring by an expert.
  ten Cate and Scheele (2007): Act independently.

Expertise
  Dreyfus (2004): Quickly appraises the situation and takes action; able to see more subtle differences or cues.
  Carraccio et al. (2008): Pattern recognition saves time and resources for use in more complex problems; progressive problem solving to move beyond the comfort zone; keep cases interesting and complex to ensure learners are challenged.
  ten Cate and Scheele (2007): Act as a supervisor or instructor.

Master
  Dreyfus (2004): N/A.
  Carraccio et al. (2008): Sensitivity to the big picture within context and culture.
  ten Cate and Scheele (2007): Act as a supervisor or instructor.

Table 2-2 Summarization of the stages of adult skill development (Dreyfus, 2004) related to competency in medical practitioners (Carraccio et al., 2008) and the level of supervision required (ten Cate and Scheele, 2007)

2.5 Assessment of Competencies

A competency-based curriculum is greater than a list of required competencies; it is an integrated approach to skills and assessment (Davis & Harden, 2003b). Assessment is so central to the shift from a traditional curriculum based on memorization to a competency-based curriculum that a failure to also change assessment practices will result in little to no actual change in the curriculum (Shumway & Harden, 2003). To ensure that learners are able to plan and navigate their way through the competency-based program, it is essential that the learning and assessment activities are clearly mapped to the competencies (Davis & Harden, 2003a; R. Harden et al., 2000). In addition to focusing on the performance of tasks, assessment in a competency-based program must also reflect the integrated nature of the program, such that assessment activities are integrated as well (Ben-David, 1999; Davis & Harden, 2003a; R. Harden et al., 2000). “Good assessment is a form of learning and should provide guidance and support to address learning needs” (Epstein & Hundert, 2002). The purpose of the assessment should be clear to the learner, to address the need of adult learners to know why they are being asked to do something (Davis & Harden, 2003a).
To keep the learners motivated and aware of the expectations of them, the educational philosophy of the program should be overtly stated and the assessment practices should remain congruent across the duration of the program (Ben-David, 1999; Hodge & Harris, 2012). By following the progression of a learner across the program, their development can be tracked, providing a holistic longitudinal assessment of their progress (Carraccio et al., 2008; Shumway & Harden, 2003). Traditional assessment tools are not always able to assess the complexities of a competency-based program (Frank et al., 2010; Leung, 2002; Shumway & Harden, 2003). As competencies extend into the areas of values, beliefs, communication, and teamwork, assessment tools such as reflection, self-assessment, feedback, and portfolios may be better able to assess the required components (R. M. Harden, 2007; Shumway & Harden, 2003). Many students see traditional learning systems as a random sequence of events, but structuring the curriculum and assessment to convey the bigger picture enables learners to become more involved and situates assessments as a tool for learning (Black & Wiliam, 1998). Formative assessment provides the backbone of competency-based education, giving students information on how they are progressing and teaching them the necessary skills of reflection and self-assessment (Black & Wiliam, 1998). For formative assessment to be effective, it should address a reference standard, provide information about the students’ performance in relation to that standard, and include strategies to close the gap between the observed performance and the standard (Black & Wiliam, 1998; Price, Handley, Millar, & O'Donovan, 2010; Rust, 2002; Sadler, 1989).
Students may be critical of feedback if it is too general or if it does not contain explicit instructions on how to improve (Hepplestone & Chikwa, 2014; Morris & Chikwa, 2016; Price et al., 2010; Scott, Shields, Gardner, Hancock, & Nutt, 2011), and they should be able to directly see the relevance of the feedback to their future performance (Black & Wiliam, 1998; Rust, 2002). Opportunities for formative feedback should be structured progressively throughout the program since a student may be reluctant to seek out feedback on their own (Black & Wiliam, 1998; Hepplestone & Chikwa, 2014). Despite a preference for individualized written feedback (Hepplestone & Chikwa, 2014; Morris & Chikwa, 2016), providing model answers or exemplars either before or after a learning event may increase the students’ performance on a later test (Gibbs & Taylor, 2016; Hendry, White, & Herbert, 2016; Huxham, 2007). This increase in performance may be due to the students’ increased engagement with the feedback and with comparing the expectations to their own performance (Black & Wiliam, 1998; Gibbs & Taylor, 2016; Hendry et al., 2016; Huxham, 2007; Rust, 2002; Sadler, 1989; Sadler, 2010). Throughout this process, however, it is essential that the students’ interpretations and perceptions of the feedback be monitored to ensure their understanding is aligned with what the instructor intended (Black & Wiliam, 1998; Price et al., 2010; Sadler, 1998; Sadler, 2010). Providing opportunities for students to actively engage with the feedback, the exemplar or standard, and the instructor providing the feedback, to ensure it is effectively interpreted and used, will help develop the students’ self-assessment skills, which will lead to improved self-monitoring and performance (Sadler, 1989; Sadler, 2010). The developmental progression through the program should be structured into both the learning activities and the assessment design of the program (Rust, 2002; Sadler, 1998; Smith et al., 2007).
This is true for both skill-based competencies and competencies involving social and community contexts (Smith et al., 2007). For competencies that involve recognizing one’s own biases, beliefs, and values, self-reflection is a powerful tool when it focuses on articulating values, recognizing the importance of issues, and recognizing the various cognitive, affective, personal, and professional elements of the learning experience (Smith et al., 2007). Cruess et al. (2008) describe three elements of reflection to guide learning: reflection “in action”, where learners debrief what they did in the moment; reflection “on action”, where learners discuss the effect of their actions on all parties involved; and reflection “for action”, where learners relate an activity to their future action (Cruess, Cruess, & Steinert, 2008). Progression through a competency may occur along several different axes as well (R. M. Harden, 2007). Progression to increased breadth helps the learner apply their existing abilities to new topics or new contexts. Progression to increased difficulty helps the learner apply their existing abilities to more complex, multifactorial problems that may also include a combination of social issues beyond what is already learned. Progression to increased utility and application helps learners move from a theoretical understanding to the application of existing knowledge. Finally, progression to increased proficiency helps learners improve their existing skills, knowledge, and attitudes such that they are able to perform tasks faster, to a higher standard, with fewer errors, and independently (R. M. Harden, 2007). All of these elements should be structured into the curriculum and assessment so that the learners have a clear picture of where they are going and can set goals and plan how to get there (Black & Wiliam, 1998; R. M. Harden, 2007).
Care must be taken when assessing the progression of learning to provide a strong mentoring framework (Epstein & Hundert, 2002), to provide positive reinforcement through achievement of competence (Albanese et al., 2008), and to avoid reducing the competencies into a list of tasks that eliminates the complexity of the real world (Cox, 2011; Leung, 2002). Knowledge base is just one element of a given competency, so learners can achieve an acceptable knowledge base but still not meet the competency because they are lacking in skills or attitude (Smith & Dollase, 1999). A balanced assessment plan must be valued by all stakeholders, both learners and instructors, for it to be successful in the program (Challis, 2000). Shumway and Harden (2003) provide a summary of different levels of learning, corresponding assessment tools, and developmental aspects of a competency that each can measure (Figure 2-1). This information, when combined with the work of Cruess et al. (2005), illustrates the essential role of reflection as both a teaching and an assessment component. As reflection is based on performance, it contributes to the higher-level competencies described by Shumway and Harden (2003). The highest level, “doing”, as seen in Figure 2-1, is developed through reflection that is forward looking: reflecting “on” and “for” action. At this highest level, assessment tools such as assessment portfolios and observation can be used to assess attitudes, decision-making ability, and proficiency in the role. Thus, developing forward-looking reflective practice should facilitate learner development to proficiency in “doing”, or being in their role, which can be holistically evaluated by observation and assessment portfolios.
Lastly, the scale used to assess performance is important for ensuring valid and reliable assessment of learners as they progress in their competencies (Crossley, Johnson, Booth, & Wade, 2011; Frank et al., 2010; Regehr, Regehr, Bogo, & Power, 2007; ten Cate, 2006; ten Cate & Scheele, 2007). Checklists that break an activity down into different dimensions of performance are frequently difficult for assessors to use because assessors form an overall impression of performance but then must break this impression down into the identified dimensions and generate a separate rating for each dimension of the performance (Regehr et al., 2007). Assessors are much better, and their assessments more reliable, when they provide an overall, or global, rating of performance (Regehr et al., 2007; ten Cate, 2006; ten Cate & Scheele, 2007). Additionally, Crossley et al. (2011) suggest that rating scales should not be tied solely to expectations at different stages of training because assessors might not be familiar with the stages of training or associated expectations.

Figure 2-1 Overlay of levels of learning, assessment tools, and ability assessed (Shumway and Harden, 2003) with concepts of reflective practice for learning (Cruess et al., 2005)
Incorporating the ability to perform independently into assessments increased reliability, and including behavioural descriptors of this developing independence increased inter-rater reliability and differentiation among learners (Crossley et al., 2011). Ten Cate (2006) presents these descriptors as a level of trust that the assessor has in the learner to perform independently, and Frank et al. (2010) agree that contextual and developmental descriptors regarding the level of supervision required at a given stage are essential for accurate assessment.

2.6 Criticisms of Competency-Based Education

Competency-based education is not without criticism from a variety of fields, such as public education (Jansen, 1998; Schwarz & Cavener, 1994; Spady & Mitchell, 1977), medical education (Morcke, Dornan, & Eika, 2013; Swing, 2010; Talbot, 2004; ten Cate, 2006), and nurse education (Chapman, 1999). Perhaps the most common criticism of competency-based education is its positivist, behaviourist foundation, which critics claim excludes personal values, reflection, responsibility, and other elements of learning that are similarly difficult to measure (Chapman, 1999; Jansen, 1998; Morcke et al., 2013; Schwarz & Cavener, 1994; Talbot, 2004). Critics cite a reductionist approach to instruction and assessment that focuses on identified tasks as check-lists of achievements as a potential pitfall of a competency-based education approach (Chapman, 1999; Jansen, 1998; Schwarz & Cavener, 1994; Talbot, 2004). Indeed, as noted in Section 2.4, The Learning Process, the approach to designing the curriculum and assessment must consciously maintain a constructivist approach to avoid reducing the competencies into long lists of individual tasks that do not reflect the complexity of the real world (Birzer, 2003b; Cox, 2011; Leung, 2002).
To avoid a focus on only the easily measurable tasks, the curriculum design must also integrate learning processes that facilitate the development of higher-order learning, such as integration of skills and reflection (Swing, 2010). Another frequent criticism of competency-based education is the imbalance of power exposed in the development of competencies. While some occupational analysis approaches, such as DACUM (Developing A CurriculUM), rely on expert workers to develop the competency profile of a particular position (Canadian Vocational Association, 2013; Norton, 1998; Norton, 2009; Wyrostek & Downey, 2017), the development and selection of competencies privileges the individuals involved and is open to influence from political, business, media, or other outside factors (Chapman, 1999; Jansen, 1998; Schwarz & Cavener, 1994). In this light, competency-based education may be seen as a system of control and regulation of students through prescriptive competencies decided on by those in power (Chapman, 1999; Schwarz & Cavener, 1994). Additional criticisms of competency-based education include a potential misalignment between the expectations of the educational institution and those of the profession (Chapman, 1999; Talbot, 2004), an increase in administrative burden for teachers tracking students who progress at different rates (Jansen, 1998; Schwarz & Cavener, 1994), the difficulty in generating meaningful assessment activities (Chapman, 1999; Jansen, 1998), and a general resistance when the change to competency-based education is mandated by government, institutions, or accreditation bodies (Chapman, 1999; Jansen, 1998; Schwarz & Cavener, 1994; Talbot, 2004). Approaching any educational intervention as a universal solution is problematic.
Where a competency-based education approach may be less appropriate for a high-school English class (Schwarz & Cavener, 1994), it may be more appropriate for trades or medical education, as outlined in the preceding sections. Despite criticisms of competency-based education, many educational institutions, public education systems, and occupational training programs have continued to focus on a competency-based approach with varying levels of success. In British Columbia, the mandate that police recruit training be aligned with the PSC Constable Competencies was a natural progression into a competency-based framework for training. In developing the curriculum for the competency-based program, caution was taken to avoid a reductionist behaviourist approach, to design assessment activities that were authentic and meaningful and allowed for flexibility in performance, and to include learning activities such as frequent feedback and reflection to facilitate deep and transformative learning.

2.7 Summary

This chapter has reviewed the literature on police training, with its focus on increasing critical thinking and applied skills. It has discussed the development of the Police Sector Council Constable Competencies, which form the framework for the BC Police Recruit Training program. It has reviewed literature on competency-based education and assessment, as well as identified some common criticisms of competency-based education. The following chapter will describe how these elements of competency-based education were conceptualized in the design of the BC Police Recruit Training.

Chapter 3: Program Description

This chapter will summarize the structure of the Recruit Training Program prior to the implementation of the changes to the program, the initial proposal for the changes, the design and development process, and the current structure after implementation of the changes.
Where relevant, the description of the current (new) program will draw on areas of the literature not already covered in Chapter 2.

3.1.1 Recruit Training Program Structure Prior to Delivery Model Changes

Several structural issues with the delivery of recruit training needed to be addressed with the move to a new model. Previously, class sizes were typically restricted to 24 recruits. This limit was due to the capacity of the on-site firing range, where recruits learn to shoot and qualify on their departmental issued firearms. Recruits typically rotated through a Firearms training schedule with half of the class in the range while the other half was at the driving track or participating in a full-day practical session. If departmental hiring activity increased such that recruitment numbers exceeded the maximum capacity at the Police Academy, some recruits may have had to be deferred to a later class. Alternatively, a class of 36 recruits could be accommodated with major adjustments to the schedule. Typically there were four class starts per year, in October, November, March, and April. One important implication of this method of scheduling was that, on occasion, three or four classes were on campus at the same time. The overlap, however, was unpredictable and did not occur at regular places in the recruits’ schedules. This varying overlap of classes, without additional instructional resources, meant that curriculum topics were scheduled based on the instructors’ availability rather than a logical progression of topics or educational principles. The classroom component of the Recruit Training program (Blocks I and III) was divided into different courses, or disciplines, each of which was taught independently.
The disciplines were:
• Investigation and Patrol
• Legal Studies
• Use of Force
• Traffic Studies
• Firearms
• Driver Training
• Dress and Deportment (Drill)
• Physical Training
The term discipline, in this context, is used with intent instead of the original designation as a course. This distinction was made shortly after I began the mapping of the curriculum. Referring to each course as a discipline was a conscious effort to begin to break down the siloed nature of each topic and start to see the skills taught in the program as an integrated set of competencies to be mastered. While this distinction started as purely semantic and a first step in changing thinking, the concept of an integrated curriculum is central to the new delivery model. Specifically, the term “discipline” was chosen over “subject” to reflect the ongoing study necessary to stay current in each of the areas. Laws, tactics, technology, and knowledge of human physiology are constantly changing, and instructors must maintain their currency in their areas through continual study, discussion, and learning. This necessity of ongoing study seemed more suited to the term discipline. Within the previous structure, each discipline had a designated number of hours in the curriculum and each topic within a discipline had a designated amount of time for teaching, resulting in a highly structured program. The program delivery was primarily lecture-based, with several days per block (Blocks I and III) devoted to practical application. Each block also had two “Simulation Days” where actors were hired to play members of the public and recruits responded to simulated calls. On a typical simulation day there were six different call stations. Depending on the class size, recruits were in groups of two to six. Typically, one pair of recruits per group would get to respond to a call and the rest of the recruits in the group would observe.
During simulation days, police officers from the municipal departments came to the Police Academy to act as assessors for the recruits. After each call, the assessors debriefed with the recruits to provide both verbal and written feedback. The feedback was considered formative because the recruits had very little experience with practical application and the simulation days were not designed as examinations. The summative assessment for the recruits was through written exams that tested recall and a limited amount of application through scenario-based questions. A grade of 70% or higher was required to pass in Block I and a grade of 80% or higher was required to pass in Block III. Recruits who did not pass a written exam were required to re-take the exam after supplemental instruction. A failure of a written exam also resulted in either half or one demerit point assigned to the recruit. All demerits were communicated back to the home department, and if a recruit accrued three demerit points over the course of any Block of training they could be dismissed by their home department. For Block II of training, recruits work in their hiring department under the supervision of a Field Training Officer (FTO). Block II is where the majority of the consolidation of learning happens for recruits, as they are exposed to the practical aspects of policing. Block II is a crucial part of the training. Despite its importance, the Police Academy has no input into the members who are chosen as FTOs. These members are appointed by their municipal departments. A three-day Field Trainer course is offered by the Police Academy and recommended for future FTOs. The course is not mandatory, however, and departments will sometimes use an untrained member as an FTO if they do not have enough people trained or if their field trainers have transferred out of patrol to specialty units.
The lack of input into the selection of an FTO makes standardization of expectations extremely difficult. Previously, the FTO was required to document the recruit’s progress in what was called a “Block II Book”. The FTO documented the recruit’s actions at a given call, noting what was done well and what was not done well. The recruit was supposed to discuss the content of the book with their FTO on an ongoing basis throughout Block II and sign off on weekly evaluations. Very little, if any, communication came back to the Police Academy as to how a recruit was progressing through Block II. At the end of the block, when recruits returned to the Police Academy for Block III, they brought their completed Block II book with them and it was reviewed by an instructor. This review was often the first time the staff at the Police Academy were made aware of how a recruit was performing on the job. The lack of communication between department, FTO, recruit, and Police Academy made preparing for Block III training difficult, as the instructors could not anticipate what training issues might need to be addressed. Additionally, the previous assessment scale the FTO used to rate their recruit was based on where the FTO believed a recruit should be at that particular point in their training. These expectations, of course, varied from FTO to FTO and were extremely difficult to standardize. Because of this structure, the marks that a recruit received in their Block II assessments often did not show any progression or development over the course of their training. If a recruit was consistently where their FTO expected they should be, then they would get a constant grade across the entire Block II. The marking scheme was also problematic if a recruit had two different FTOs over the course of their Block II training, as happened in some departments. The two FTOs would have different expectations of where the recruit “should be” in their training.
This could result in the recruit’s marks suddenly changing drastically as a reflection of the new expectations. Feedback in the previous program was limited to exam grades and written feedback from assessors on simulation days (Blocks I and III) and from the FTO (Block II). In the blocks at the Police Academy there was very little, if any, opportunity for a recruit to obtain specific formative feedback from an instructor. This lack of formative feedback was a deficiency that was directly addressed in the new delivery model.

3.2 Design and Development

After observing many of the days of training for recruit training classes throughout 2013 and into 2014, and mapping the curriculum to the Police Sector Council Constable Competencies, I proposed that a new delivery model for the curriculum could significantly improve the quality of training over what was then the current, primarily didactic, delivery model. I recognized that successfully changing the delivery model required a systematic change at all levels of the Police Academy delivery. As Biggs (1996) notes: “attempts to enhance teaching need to address the system as a whole, not simply add ‘good’ components, such as new curriculum or methods” (p. 350). Indeed, the structure of a program, from content to delivery to evaluation, is not just a reflection of the material to be learned, but also a reflection of the professional values of the organization (Glazier, Bolick, & Stutts, 2017; Pannell, 2016). Constructive alignment uses constructivism as a framework throughout the design and development of a program, ensuring the learning opportunities are structured to promote development and achievement of the performances to be assessed and that the objectives, teaching activities, and assessment strategies are all structured at an appropriately high cognitive level based on the final expectations of the program (Biggs, 1996). This was the strategy adopted for designing and developing the delivery model.
The literature was reviewed to determine how the delivery of the program would be structured. Following the review of the literature, a generic template schedule was created for both Block I and Block III that incorporated all of the new elements described in Section 3.4, The New Program Structure. This template schedule is included in Appendix A - Template Schedule for Competency-Based Delivery Model of Recruit Training. The content from all disciplines in the program was broken down into topics, and these topics were mapped to the new template schedules according to where recruits would require them to respond to the type of call for that particular week of the program. This created a spiral structure for the topics, where they are introduced early in the program and progressively revisited and built on over the course of training (Haberfield, 2013). It also ensured that no content was lost in the transition between delivery models. Once the proposal and template schedules were created, they were approved by the Director of the Police Academy and by the Policing & Security Branch of the Provincial Government. Following approval, a meeting was set up at each of the departments, where the Director of the Police Academy and I met with the training departments and other senior staff of each Police Department to give them an overview of the proposed changes and obtain agreement on the plans to move forward.

3.3 Proposed Recruit Training Program Structure Delivery Model Changes

The proposed changes were to shift the Recruit Training program from the original delivery model of mostly lecture-based theory with sporadic practical, or “simulation”, days to a competency-based framework in keeping with the recently developed Police Sector Council National Framework of Constable Competencies.
The primary delivery model was proposed as a case-based method, designed following the ADDIE model of instructional design, which is common practice at JIBC and is well suited to competency-based education development (Wyrostek & Downey, 2017). The proposed changes were based on assumptions about learning that are founded in competency-based learning theory, as outlined in Chapter 2:. Adult learning  Learning is enhanced when it occurs in the context in which it will be applied  Adult learners are motivated when they can see the relevance and application of what they are learning  Adult learners can take ownership of their learning process and work to meet agreed upon goals. 61 Developmental progression of learning  Learners enter the program with different strengths and experiences based on their background. They will, therefore, achieve milestones at different rates. Time and support need to be provided throughout the learning process.  Each week of the program should have time that is preserved for the learners to work on developing their competencies and meeting their learning goals. Instructors should be available during this time to provide assistance.  Recruits are actively involved in goal setting and creating their training plans as they progress through each component of the curriculum. Integration  Theory should be taught in an integrated, case-based manner.  Theory based exams should be integrated across all disciplines.  Simulation based exams should be integrated across all disciplines. Framework  The National Use of Force and the Conflict Intervention and De-escalation Models should be the framework for articulation across all areas of the program  Leadership and mentorship competencies should be developed and fostered within the program 62 Assessment  Competence can be demonstrated through portfolio-based assessment where the learner collects evidence that they have reached an acceptable level of ability. 
- Portfolios should demonstrate progression, not just competence.
- Simulation days should be integrated, practical exams. Recruits need ample opportunity to practice application before these exams, so practical sessions should exist every week. Assessments from the practical sessions and simulation exams will be used as evidence of meeting competencies in the portfolio.
- Developing portfolios to demonstrate achievement of competencies will better prepare recruits for the departmental promotion process.

Feedback
- Specific, focused feedback is central to progression of learning and development of skills. Time needs to be dedicated to providing this sort of feedback throughout the learning process.
- Instructors will serve as facilitators and coaches, providing formative feedback to recruits and helping them devise their personalized training plans.
- Self-assessment builds the skills and framework for developing into a reflective practitioner.
- Peer feedback builds effective communication skills and also promotes reflective learning.
- If learners are asked to self-assess and/or provide feedback, then they also need to be taught how to do these things.

Cultural competence
- Practical sessions should reflect the current realities of policing in the communities recruits serve.
- Recruits will serve as actors for practical scenarios. Each scenario will have a brief information write-up about the people portrayed in the simulation and relevant information about any marginalized groups represented (i.e., important statistics, special considerations, etc.). Following practical sessions, each actor will be required to briefly present what they learned in that role to the rest of the class. This will also build presentation and communication skills in recruits.

The proposal for the new delivery method included a drastic reduction of the time spent in lecture in favour of using that time for case-based and practical application.
Table 3-1 shows a comparison of the time allotted to various elements of the program ten years prior to the proposal (2005), the year before implementation of the new model (2015), and the first class in the new curriculum (2016). Minimal change in the structure of training occurred between 2005 and 2015 and, if anything, the hours spent in lectures had increased. The observed decrease in driving time between 2005 and 2015 was to accommodate additional mandated training in the program, such as Crisis Intervention and De-Escalation (CID), without increasing the length of training. Prior to the new model, no time was spent in case-based application, nor in receiving feedback from instructors or working on individualized training plans. With the introduction of the new delivery model, there is dedicated time in the curriculum for all of these activities.

Table 3-1 Comparison of program elements 10 years before the program change proposal (2005), before change implementation (2015), and in the new delivery model (2016). Values are listed as Class 99 (2005) / Class 148 (2015) / Class 152 (2016).

Hours in simulations or practical application
  Block I: 35 / 42 / 89
  Block III: 58 / 52 / 92
Hours in PRIME training
  Block I: 21 (3 days straight) / 28 (4 days straight) / 23 (integrated across curriculum)
  Block III: 7 / N/A / 3 (integrated)
Hours driving
  Block I: 56 / 28 / 28
  Block III: N/A / 7 / 7
Hours in firearms
  Block I: 56 (all indoor) / 58 (all indoor) / 66 (7 outdoor)
  Block III: 14 (7 indoor, 7 outdoor) / 14 (7 indoor, 7 outdoor) / 14 (7 indoor, 7 outdoor)
Hours in Use of Force
  Block I: 38 (5 days straight) / 40 / 36
  Block III: 24 / 30 / 19
Hours in PT
  Block I: 27 / 23 / 19
  Block III: 28 / 19 / 18
Hours in Drill
  Block I: 12 / 11 / 10
  Block III: 12 / 10 / 9
Hours in written exams
  Block I: 10 / 13 / 2 exam days combining written and practical
  Block III: 6 / 6 / 2 exam days combining written and practical
Hours in practical exams
  Block I: N/A / N/A / 2 exam days combining written and practical
  Block III: N/A / N/A / 2 exam days combining written and practical
Hours in lecture
  Block I: 154 / 174 / 37
  Block III: 82 / 90 / 7
Hours in case-based application
  Block I: N/A / N/A / 36
  Block III: N/A / N/A / 28
Hours for diversity projects
  Block I: N/A / N/A / N/A
  Block III: N/A / 7 / 6
Hours for CID training
  Block I: N/A / N/A / 7
  Block III: N/A / 7 / integrated
Hours for directed study (to work on individualized training plan skill development)
  Block I: N/A / N/A / 40
  Block III: N/A / N/A / 16
Hours receiving feedback from instructors and developing individualized training plans
  Block I: N/A / N/A / 13
  Block III: N/A / N/A / 7

3.4 The New Program Structure

The structure of the new program remains divided into four blocks. The duration of Blocks I, III, and IV remains unchanged. The duration of Block II was extended from the previous 12-17 weeks to 18-21 weeks, schedule dependent, to allow for consistent overlap between the senior (Block III) and junior (Block I) recruit classes. Several components of the program span all of recruit training: the longitudinal themes and the mentorship program. Four longitudinal themes are interwoven throughout the program: ethics, professional communication, officer wellness, and fair and impartial policing. Each of these themes represents values the recruit program tries to cultivate across training and is best addressed on an ongoing basis rather than in a short stand-alone session. Recruits are introduced to the concepts of ethics and fair and impartial policing in one of their introductory sessions in Week 1. For ethics, they talk about the importance of ethical standards and accountability as well as potential sources of unethical behaviour. After that, concepts related to ethics are integrated into a variety of case and scenario components. For fair and impartial policing, they are introduced to the concept of implicit bias and its potential negative consequences for police investigations.
As a regular component of their debrief on scenarios, recruits are then asked to identify any actions or experiences that were personal triggers for them, to help identify sources of potential bias. They are also asked to identify any strategies they used to ensure their investigations were fair and impartial. This continued reflection is intended to increase recruits’ self-awareness with respect to implicit bias (Ossa Parra, Gutiérrez, & Aldana, 2015). Also as a component of their debrief, recruits are asked to identify strategies they used to maintain their composure. This question is intended as a self-assessment, or check-in, with respect to officer wellness. In their training plans, recruits use goal-setting strategies that are monitored by their mentors. They also participate in a session on visualization and learn about tactical breathing. These are all identified as strategies to promote skills that maintain officer wellness over the course of a career in policing. Recruits also participate in the Road to Mental Readiness training, which is proprietary training originally developed by the Department of National Defence and then modified for the police context by the Mental Health Commission of Canada. Lastly, the professional communication theme is integrated across all aspects of training. Recruits formally focus on communication skills during a full training day early in Block I, where they respond to nine different calls, all with a focus on communication. This day, COPS I – Effective Communication, uses actors in the scenarios and includes calls involving subjects with mental health concerns, PTSD, autism, and abuse. The recruits are given feedback on their ability to communicate effectively, empathetically, and professionally during the scenario. Later in Block I, recruits participate in another full training day on Crisis Intervention and De-escalation. They are introduced to this model on Effective Communication day but expand on it here.
The day includes learning the BC CID model as an effective approach for de-escalating a situation with a person in crisis, particularly a mental health crisis. In addition to these full days of training, recruits participate as actors in scenarios for other recruits. This experience is intended both to transfer tacit knowledge about policing and to let recruits experience the impact of the different communication strategies used by their peers. Through this experience, they should develop a better understanding of what it is like to be the subject of a police investigation and how their actions can impact a member of the public. After each scenario day they are prompted to reflect on what they learned by participating as an actor. This critical reflection should bring to the forefront any implicit knowledge gained through the experience (Ossa Parra et al., 2015). Through their frequent scenario practice, recruits continually receive feedback on their communication and their ability to build rapport. Assessment of their communication skills is also incorporated into their practical exam scenarios. Finally, recruits’ scenarios are recorded and they view their handling of each call. Watching the recordings helps recruits appreciate how their actions were perceived by the subjects in the call and also illustrates how their perceptions in the moment may differ from what they see on the video or what the subjects experienced. The mentoring program is another aspect that is integrated throughout recruit training. In the second week of training, recruits are assigned a member of the instructional staff who follows their progress throughout Blocks I through III. The mentors follow an integrated mentoring structure that combines the pastoral, professional, and curriculum models of mentoring (Livingstone & Naismith, 2017).
From the pastoral model of mentoring, this structure takes the continuity of a single mentor who meets individually with recruits for the entire training program; from the professional model, the referral to specific support services or other instructors as needed; and from the curriculum model, the integration of mentoring into curriculum class time (Livingstone & Naismith, 2017). Under this model, the mentor provides the recruit with developmental, formative feedback, reviews their scenario debrief forms and weekly training plans, and has regular individual meetings with their recruit. This strong relationship is intended to help the recruits develop their self-monitoring skills by facilitating an honest assessment of their strengths and weaknesses, by providing support and guidance in developing an individualized training plan that uses directed study time to close the gap between their current performance and their performance goals, and by monitoring the interpretation and incorporation of feedback (Black & Wiliam, 1998; Price et al., 2010; Slavich & Zimbardo, 2012). In educational programs, feedback is frequently provided to students, but how the students use this feedback is seldom monitored (Price et al., 2010; Sadler, 2010). Similarly, reflection is often promoted, but little attention is paid to the nature of the reflection and to whether the students are engaging in surface reflection or critical reflection (Alfred et al., 2013). The structure of the mentorship program in Recruit Training is designed to monitor the recruits’ incorporation of feedback and support their continued growth and development throughout the program. The mentorship program is designed to provide support and feedback for all recruits in the program, not just those who are weaker, as each recruit meets with their mentor regardless of their perceived strength in the program.
The mentors also hold their recruits accountable for their performance and convey some of the tacit organizational culture aspects of policing through their interactions with the recruits. Success of this mentorship model requires a clear understanding of the role of the mentor by staff, instructors, and students (Livingstone & Naismith, 2017).

3.4.1 Block I

In Block I, the focus of training is building the basic skills required of a patrol-level police officer through exposure and repetition. During the 13 weeks of Block I at the Police Academy, each week is organized around one (or two) general types of call(s). The calls were selected by determining the most frequent calls encountered by patrol-level police officers. The in-class material is presented in an integrated, interdisciplinary manner wherever possible. Interdisciplinary learning is aligned with the constructivist perspective by focusing on the relationships between concepts and how the learner constructs their knowledge (Stentoft, 2017). The general structure of a week in Block I of the new program consists of the following elements:
- A pre-reading quiz on the theory necessary for the week.
- Application of theory in class through case presentations. The pre-reading content will not be repeated in lectures.
- Just-in-time information about specific skills relevant to the case (call) topic, such as filling out specific forms, reading specific documents, etc.
- Practice applying the theory through practical sessions.
- Reflection on the practice, as well as writing reports.
- Ongoing formative feedback and support provided by an assigned mentor who is a member of the instructional staff.

The design of the week progresses from the basic level of understanding (assessed by the pre-week quiz) to the ability to apply and synthesize the material (assessed during practical sessions and reflection).
Over the course of the week, the program provides opportunities for recruits to both acquire and consolidate knowledge at the surface and deep levels, as well as to transfer this knowledge to new situations (Hattie & Donoghue, 2016). Hattie and Donoghue (2016) differentiate between acquiring knowledge, which is a function of short-term memory, and consolidating knowledge, which is a function of long-term memory. They also advocate for a conscious selection of learning activities to promote each of these processes. As the recruits move through the week, they have the opportunity to acquire and consolidate surface-level knowledge through the pre-reading and completion of the associated quiz, which align with learning strategies identified as facilitating surface-level learning (Hattie & Donoghue, 2016). As the week progresses, and recruits apply their new knowledge to case studies through small group discussions with peers and clarify their understandings through interactions with instructors during directed study, they have the opportunity to elaborate on what they have learned, organize the knowledge around real-life contexts and problems, question their own understanding, verbalize their decisions, and engage in critical thinking and collaborative learning, all of which are strategies to acquire and consolidate deep learning (Hattie & Donoghue, 2016). Finally, they have the opportunity to transfer their knowledge and understanding to new situations through the scenarios, where they are able to choose which strategies they will use, evaluate their choices, and receive feedback on their performance (Hattie & Donoghue, 2016).

3.4.1.1 Weekly pre-reading and quizzes

Because the focus of classroom time is on the application of concepts, the recruits are required to come into each week with a basic level of knowledge relevant to that week’s call.
This is accomplished through pre-reading of manual chapters and successful completion of a knowledge-based quiz online before the start of the week. Recruits may complete the reading and quiz at any point before the start of the week, and so have flexibility in when they complete the work. The quiz is completed electronically, on the recruits’ own time, through the JIBC learning management system (LMS), and they must score 100% on the quiz to be considered as having successfully completed the pre-work. They can take the quiz as many times as necessary to achieve this grade and are free to work in groups and with their course reading material. Once a quiz is submitted, the students receive immediate feedback through the LMS that indicates the correct answer to all questions. Failure to complete the quiz at the required level results in the recruit being assigned one demerit; recruits are allowed six demerits in one block of training before they are sent back to their home department for re-evaluation of suitability. In preparation for delivery of the new program model, the existing discipline manual chapters were re-written and significantly reduced in volume to focus on the core concepts. This revision resulted in the overall length of all combined manuals dropping by seven hundred pages. In Block I the recruits have approximately one thousand pages of reading spread out over the 13 weeks. The structure of the program, with pre-reading followed by in-class application, borrows from recent models of the “flipped” classroom, whereby students are exposed to course content, typically as short video clips, as homework and can then focus their classroom time on application and inquiry. This approach was developed to address the issue of content overload in the curriculum combined with the need to foster critical thinking and decision-making skills in students (Bristol, 2014; Burke & Fedorek, 2017; Heijstra & Sigurdardottir, 2017).
The approach is now used in classrooms in both K-12 and post-secondary systems, although it is becoming particularly common in STEM classrooms. Key to the success of the flipped classroom approach is engaging the students to complete the pre-class work and ensuring that classroom time is actually used for application and not for attempting to incorporate additional content (Braun, Ritter, & Vasko, 2014; Burke & Fedorek, 2017; Heijstra & Sigurdardottir, 2017). In the Recruit Training program, weekly pre-reading quizzes were incorporated to ensure that recruits completed the pre-class work and came into the classroom with a base level of knowledge from which to start applying the concepts to cases and scenarios. The use of quizzes also draws on the testing effect: testing with immediate feedback has been demonstrated to enhance recall and retention (Agarwal, Finley, Rose, & Roediger, 2017; Butler, Karpicke, & Roediger III, 2008; Dunlosky et al., 2013; Fazio, Huelser, Johnson, & Marsh, 2010; Karpicke & Roediger III, 2008; Roediger III & Karpicke, 2006b; Wiklund-Hornqvist, Andersson, Jonsson, & Nyberg, 2017). The testing effect, or retrieval practice, shows an increase in long-term recall over studying alone in laboratory settings (Agarwal et al., 2017; Karpicke & Roediger III, 2008; Wiklund-Hornqvist et al., 2017) and also in educational settings (Holmes, 2015; Holmes, 2017; Roediger III & Karpicke, 2006a; Roediger & Butler, 2011; Wiklund-Hörnqvist, Jonsson, & Nyberg, 2014). Continuous assessment, either through online tests, in-class low-stakes tests, or assignments, has been demonstrated to increase student engagement and understanding (Holmes, 2015; Holmes, 2017; Trotter, 2006; Wiklund-Hörnqvist et al., 2014) and time spent engaged with the curricular material (Holmes, 2017).
The mechanism of action of the testing effect is believed to be effortful retrieval, whereby retrieval practice, or recalling information to answer a test question, strengthens the knowledge and the accessibility of this knowledge in the brain (Roediger III & Karpicke, 2006a; Roediger & Butler, 2011). Associated with this mechanism is the concept of desirable difficulties, whereby students learn more from successfully completing something that is more difficult, such as answering a test question, than from simply being told the correct answer (Fazio et al., 2010; Roediger III & Karpicke, 2006a).

3.4.1.2 Classroom case-based application

Once the recruits have successfully completed the weekly reading and quiz, their time in class is focused on progressively more complex application. The content from the readings is not re-delivered in lectures during class time, which is important for the success of a flipped classroom model (Braun et al., 2014). The cases were developed based on actual calls instructors had taken and are structured with prompting questions to draw out legal, patrol, tactical, officer safety, investigative, and traffic-related topics, as relevant. Additionally, care was taken in writing the cases to ensure that a variety of ethnicities, genders, and socioeconomic backgrounds were represented as victims, suspects, and witnesses, to avoid conveying any implicit bias to the recruits. In a given case study session, recruits work as a group to complete a number of different cases designed to cover the required topics. Some cases are designed as short, single-page vignettes and others as longer progressive-release cases, where recruits get more and more information about their investigation as they work through the case.
Recruits work in small groups of six and discuss the answers to each of the prompting questions before moving on to the next sheet or the next case. They pace their own discussions and learning, so they are able to spend more time on one question if a member of the group is struggling with that aspect of the case. Through the application of their knowledge from the pre-reading and the process of explaining why they would choose a particular course of action, the recruits should develop a deeper understanding of the material (Aditomo et al., 2013; Dunlosky et al., 2013). The process of recalling information to apply it to the case studies aligns with the concept of desirable difficulties, as described in the section on the testing effect: students derive more long-term benefit from learning activities that require them to struggle with the material, and that may be slower initially, than from simply being told the answer (Fazio et al., 2010). A number of instructors specializing in different disciplines are present to monitor the recruits’ progress and understanding and to help answer questions if the recruits are struggling with a particular concept. Typically each instructor is responsible for monitoring two groups throughout the session. Once the groups have worked through the case sheets, there is a debrief with the whole class that touches on key issues they discussed in their groups. This gives the instructors the opportunity to ensure there is consistent understanding across the class. The recruits post their answers to the prompting questions on the learning management system so that they are available for study purposes after the class. The recruit groups are shuffled each week so that recruits are always working with a different group of people and hear a variety of perspectives over the course of the Block.
The groups are balanced based on recruit performance so that there is peer support for recruits who may be struggling with the application of concepts.

3.4.1.3 Directed study time

Almost every week in the Block I schedule has directed study time for recruits to work on the areas where they most need to improve in the program. Recruits complete a weekly training plan that outlines their strengths and their areas for improvement in the program, sets training goals, and outlines how they plan to use their directed study time to achieve these goals. This training plan is uploaded to the learning management system before the start of the week and is reviewed and approved by their mentor. During directed study time, the Use of Force instructors are available in the gym, the Firearms instructors are available in the range, and other instructors are available in the classroom to assist recruits with their learning. Recruits can move between activities and classrooms, so they can work on several different areas in one directed study period. They are also able to take radios and practice additional scenarios with each other. In the classroom, they are instructed to find other recruits who would like to work on the same topic so that they are engaged in small group discussions while the instructors circulate to monitor discussions and answer questions. Directed study time is active and participatory learning, and recruits are discouraged from using it to complete their reading. Following directed study, recruits submit a simple form that outlines how they actually used their time, in case there were discrepancies between what they planned and what actually happened.

3.4.1.4 Practical scenarios

As the week progresses, the recruits apply what they have learned in their reading and case studies to practical scenarios.
In some weeks this is done through practical scenario days and in other weeks through Core Operational Policing Skills (COPS) days, which are full days of training tailored to a specific topic or skill. For the practical scenario days, the recruits have the opportunity to practice responding to the type of call for that particular week. Depending on the week and the call, they may respond as a single officer or as a partnership. Each scenario has specific learning objectives, legal knowledge questions that are asked of the recruits, and an associated checklist feedback form that details the expected response to the call. Each recruit is able to participate in several different calls during the day, so they practice responding to a variety of situations. Their performance is recorded for formative feedback purposes. This recording is retained only by the recruit; the Police Academy neither watches nor keeps a copy of the recording. Where the Block I scenarios overlap with Block III training, select Block III recruits are chosen as the “lead recruit” to run the scenario and provide feedback to the Block I recruits. Each lead recruit has a team of Block III recruits who are actors, filmers, or dispatchers for the Block I recruit scenarios. An instructor is also present to monitor the feedback provided to the Block I recruit.

3.4.1.5 Practical Scenario Acting

In addition to participating in practical scenarios by taking calls, the Block I recruits also participate in Block III scenarios as actors and by filming the scenarios taken by the senior recruits. This participation allows junior recruits to learn from their senior peers some tacit knowledge about how to respond to calls, communicate effectively, and act like a police officer.
Tacit knowledge is informal knowledge that is often difficult to articulate and is acquired through experience as expertise develops (Collins, 2001; Farrar & Trorey, 2008; Matthew, Cianciolo, & Sternberg, 2005; Matthew & Sternberg, 2009; Sternberg, 1999; Sternberg & Hedlund, 2002; Taylor et al., 2013). The ability to use tacit knowledge to solve problems is often considered a hallmark of an expert (Matthew et al., 2005; Matthew & Sternberg, 2009) but can be present at all stages of expertise development as learners begin to acquire tacit knowledge through their own experiences (Farrar & Trorey, 2008). The scenarios are structured in such a way that the Block I recruits will observe a relevant scenario before they have to perform the same skill themselves (e.g., seeing a senior recruit respond to a Mental Health Act call in the week before they learn about the Mental Health Act). Reflection can help to develop tacit knowledge by making explicit what has been implicitly learned (Matthew et al., 2005; Matthew & Sternberg, 2009; Taylor et al., 2013). To aid in this development process, recruits are asked to reflect on what they learned by being an actor in the scenarios as a regular part of their scenario debrief self-assessment, as described in the following section.

3.4.1.6 Practical Scenario Self-Assessment and Report Writing

The day after the practical scenarios, recruits have a “debrief” period built into the curriculum where they are tasked with watching the videos from their scenarios, completing a self-assessment form, reviewing the feedback from the assessor, and working on their report writing skills. Students may struggle to incorporate feedback not because they are uninterested but because they may not understand the feedback or their view of their performance may be skewed by what they intended to do (Sadler, 2010).
Watching the video of their scenarios and comparing it with the assessment form and the verbal feedback they received from the assessor is designed to help recruits align their perceptions of their performance with the perceptions of the more experienced assessors. The debrief form includes self-assessment questions that ask recruits to identify their strengths and areas for improvement, what they learned from participating as an actor, the most valuable piece of feedback they received, the strategies they used to maintain their composure throughout the call, any personal triggers they may have encountered, and any strategies they used to ensure their investigation was fair and impartial, and to map the calls to the PSC constable competencies. These structured reflective questions cover content reflection, process reflection, and premise reflection to stimulate the transformative learning process (Alfred et al., 2013; Ossa Parra et al., 2015). The form is completed and submitted to the learning management system, where it is reviewed by their mentor. During the debrief time, the recruits also have one hour to work on a report based on one of the scenarios they completed the previous day. This report writing time uses dedicated computers that are connected to a PRIME training server, so recruits are able to practice using the software they will be using while on patrol. The reports focus on the language and content expectations and on the proper use of the PRIME software. An instructor reviews each report and provides feedback to the recruit. During the debrief time, recruits also meet individually with their mentor to review their performance to date and to discuss their training plan, ensuring that their use of directed study time reflects their actual strengths and weaknesses in the program.
3.4.1.7 COPS Days

Core Operational Policing Skills (COPS) days are full days of training that focus on a specific topic or skill, such as effective communication, basic investigations, containment and searching, or high-risk vehicle stops. Many of these full days of training were incorporated into the old delivery model, but several key adjustments were made. Effective communication was moved to early in the Block I schedule to introduce the importance of communication techniques throughout all police encounters. Crisis Intervention and De-escalation was moved from Block III into Block I so that recruits have those essential skills before they start their field training in Block II. Also, an outdoor range day was incorporated into Block I training so that recruits have experience moving and shooting with their own pistol before field training. Additional new COPS days include a scenario day to practice calls and a use of force qualification day.

3.4.1.8 Skills Development – Use of Force, Firearms, and Driving

Some basic physical skills are required of all police officers. Use of Force, or force options training, includes soft and hard physical control as well as intermediate weapons such as batons and OC spray. While some of the concepts involved in Use of Force training, such as the legal aspects of when force is allowed, are integrated into training, the acquisition of the core physical skills remains a separate component of the curriculum. Similarly, firearms training is taught exclusively by firearms instructors on the firearms range, and driving is taught separately by specially trained instructors at the driving track.

3.4.1.9 Assessment

Central to assessing whether recruits have reached the required level of the competencies is the ability to test them using practical, real-life, authentic assessment tools. In Block I, recruits have a Progress Assessment exam in Week 5 of their training and a Final Exam in Week 12 of training.
During this day, recruits complete five written exam stations and four practical scenarios. The written exam stations are a mix of multiple choice exams and practical exercises, such as completing a ticket or release documentation based on a written description of an event. The written exam stations are tailored to various levels of understanding, including basic memorization, critical thinking, scenario-based questions, and completion of a real-life task. The variety of exam questions reflects the diverse knowledge and application requirements of a patrol level officer (Brady, 2005). The practical scenarios are two stations taken as a single police officer and two stations taken as a partnership. The scenarios are based on the types of simulations that the recruits have practiced, and received formative feedback on, during their practical scenario days and reflect the complex performance expectations of real-life police work, which is an essential component of authentic assessment (Narayan et al., 2013). The practical exam stations are assessed by external police officers (or retired police officers) who have been trained in assessment and who use standard rubrics developed for the scenarios. The recruits respond to the call, and the scenario is then followed by a short, five-minute oral exam in which the assessor asks them to articulate the grounds for their actions. If a recruit fails a written station or a scenario, they are assigned a demerit and remediation is planned with their mentor, either in directed study time or by reviewing the rubric and re-doing the scenario. Thus the exam scenarios are both assessments and learning opportunities. Recruits are capped at four demerits on a given exam day so that no recruit can fail out of training based on one day's performance, as they are only allowed to accrue six demerits in one block of training.
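The interaction of the per-day cap and the per-block limit described above can be sketched as a small illustrative calculation (this code is not part of the Police Academy's systems; the function and variable names are hypothetical):

```python
# Illustrative sketch of the demerit rules: at most four demerits may be
# assigned on a single exam day, and a recruit may accrue at most six
# demerits in one block of training before failing out.

DAY_CAP = 4      # maximum demerits assignable on one exam day
BLOCK_CAP = 6    # maximum demerits allowed in one block

def record_exam_day(block_total, failed_stations):
    """Return the new block total after one exam day.

    failed_stations is the number of written stations or scenarios
    the recruit failed that day; it is capped at DAY_CAP so that no
    single day can push a recruit out of training on its own.
    """
    day_demerits = min(failed_stations, DAY_CAP)
    return block_total + day_demerits

# A recruit who fails five stations on one day receives only four demerits,
# so even a very poor single day leaves them under the block limit of six.
total = record_exam_day(0, 5)
print(total, total > BLOCK_CAP)  # → 4 False
```

Because `DAY_CAP` is strictly less than `BLOCK_CAP`, failing out always requires poor performance across more than one exam day, which matches the stated intent of the policy.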
In addition to the exam days, recruits must also complete an "Application for Advancement", or assessment portfolio, that outlines evidence that they have reached the required level of each of the competencies for that stage of their training. Assessment portfolios are considered a form of authentic assessment because they allow students to demonstrate their progression in learning and their critical reflection, using concrete examples to show that they have achieved the required level (Narayan et al., 2013). In Block I, the evidence consists mainly of feedback from scenario days and rubrics from exam scenarios. The recruits must upload all of their documentation and then complete a one to two page summary for each of the competencies that outlines how the evidence shows their progression of skill and that they have reached the required minimum level in each of the competencies. Table 2-1 summarizes each of the nine core Constable competencies and proficiency levels 1 and 2. Recruits are required to demonstrate that they have reached level 2 proficiency in each of the competencies by graduation at the end of Block III, but may progress through the competencies at different rates over the three blocks of training. In order to map the predicted progression of a recruit through training, the competency descriptions and behavioural indicators were used to identify where there were learning opportunities for recruits to develop in each of the competencies. This information is summarized in Table 3-2 and shows that by the end of Block I training, recruits are expected to be at level 1 proficiency in: organizational awareness, problem solving, risk management, stress tolerance, and written skills. They are expected to be at level 2 proficiency in the remaining competencies: adaptability, ethical accountability, interactive communication, and teamwork.
The expected progression through proficiency levels is combined with the global judgements from ten Cate and Scheele (2007) outlined in Table 2-2, indicating that a recruit is expected to be able to act under full supervision by the end of Block I, under moderate supervision by the end of Block II, and independently by the end of Block III.

Competency                                 | Block I: Act under full supervision | Block II: Act under moderate supervision | Block III: Act independently
Adaptability                               | Level 2 proficiency | Level 2 proficiency | Level 2 proficiency
Ethical Accountability and Responsibility  | Level 2 proficiency | Level 2 proficiency | Level 2 proficiency
Interactive Communication                  | Level 2 proficiency | Level 2 proficiency | Level 2 proficiency
Organizational Awareness                   | Level 1 proficiency | Level 2 proficiency | Level 2 proficiency
Problem Solving                            | Level 1 proficiency | Level 2 proficiency | Level 2 proficiency
Risk Management                            | Level 1 proficiency | Level 2 proficiency | Level 2 proficiency
Stress Tolerance                           | Level 1 proficiency | Level 2 proficiency | Level 2 proficiency
Teamwork                                   | Level 2 proficiency | Level 2 proficiency | Level 2 proficiency
Written Skills                             | Level 1 proficiency | Level 2 proficiency | Level 2 proficiency

Table 3-2 Expected progression through proficiency levels 1 and 2 in each of the core Constable competencies

3.4.2 Block II

Block II training is the field training component that happens in the recruits' home department, under the supervision of a Field Training Officer (FTO), who is a specially trained, experienced member of patrol. The FTO is responsible for providing feedback to the recruit, documenting their progress, and assessing their performance. Documentation is completed and returned to the Police Academy at the end of Block II, when the recruit returns for Block III. During Block II, the focus is on applying the skills learned in Block I to a real policing environment and progressing in development of the core competencies.
In the new delivery model, Block II was lengthened to 18-21 weeks, depending on scheduling, to allow for a consistent overlap of Block III and Block I. Structure was also introduced into the Block II experience. Previously, much of how field training unfolded was left to the discretion of the FTO. This lack of structure led to a great deal of inconsistency in experience for Block II recruits: some recruits would be driving the police car on their first shift, while others might not be driving until their ninth week. The structure introduced into Block II is intended to provide guidance for the FTOs and to increase the consistency of approach for recruits in training. Also, standardized rubrics were introduced to assess recruits' performance as they move through Block II. Block II is now divided into Phase I and Phase II. Phase I is a short introductory session where the recruits focus on their legal knowledge, their officer safety and officer presence, and learning to use the computer software in a live environment instead of a training environment. Phase I lasts a minimum of one "work period" (one work period is four shifts) and a maximum of three work periods. During this time the recruit does not drive the police car and focuses on becoming comfortable with the basic skills. Once the recruit has successfully completed Phase I by consistently Meeting Expectations or Exceeding Expectations on the rubric, they move into Phase II. In Phase II, the recruit takes progressively more responsibility from their FTO. The assessment criteria continue to assess the basics of legal knowledge, officer safety, and officer presence, but now also include the PSC constable competencies. After approximately fourteen weeks, the previous length of Block II, there is a check-in to ensure that the recruit is meeting or exceeding expectations.
If not, the extra time in Block II should be used to provide extra support to the recruit to ensure that they meet the core competencies and to avoid backtrooping¹ the recruit. If the recruit is progressing as expected, the extra time can be used to meet the department-specific tasks and objectives that are included in the required competencies. Also, in Block II, recruits complete a "Diversity Project" where they work in small groups of three recruits and identify an underserved or minority population in the community they have been hired to serve. Recruits meet with members of their community and interview them about their lives and their experience with police. When they return for Block III, recruits deliver a presentation based on this project. Typical project topics include Indigenous populations, sex trade workers, vulnerable youth, and homeless populations. This Diversity Project has been a successful component of Block II training for many years. New documentation was also introduced into Block II. Recruits now complete a short monthly summary of their performance and submit it to the Police Academy learning management system. This documentation is monitored by the recruits' mentors to ensure that they are progressing as expected and to initiate conversations with the department to provide additional support if needed. Recruits also complete an Application for Advancement assessment portfolio at the end of Block II. The majority of evidence in this assessment portfolio is taken from calls the recruit has encountered during their field training.

¹ Backtrooping refers to the practice of holding back a recruit who is not passing Block II so that they have more time to spend on the road with an FTO. Typically, a recruit who is backtrooped will have their Block II extended and will start Block III with the next class.
3.4.3 Block III

Block III training is designed to build on the experience in Blocks I and II by introducing advanced patrol and investigative topics and by developing mentorship abilities in the senior recruit class. Overall, the general goal in Block III is to minimize time spent in the classroom and maximize the time spent focused on practical applications. Unlike Block I, the structure of Block III mixes the types of calls in both case studies and practical scenarios, to be more reflective of an actual patrol shift. The longitudinal themes from Block I continue to be integrated into training in Block III as the recruits move to independence in each of the core competencies. Several key new components of Block III training include teaching sims, longitudinal cases, and mentorship.

3.4.3.1 Pre-reading and quizzes

The pre-reading in Block III is not prepared in manual chapters as it is in Block I. In Block III, recruits are directed to read specific sections of the Criminal Code of Canada or of different Provincial Acts. Structuring the advanced reading in this way is designed to build the recruits' ability to read and interpret various pieces of legislation. Similarly, for the majority of the weeks in Block III, the pre-reading is not associated with content knowledge quizzes; recruits must simply acknowledge that they have completed the required reading and are ready to discuss and apply it in class. Recruits are still able to clarify any confusion during directed study time.

3.4.3.2 Teaching sims

In the previous Block III, there were a large number of guest speakers from specialty units who came to address the recruits. While these sessions were interesting, they were not relevant to all of the recruits because many of the smaller departments do not have these specialty units.
In redesigning the curriculum, these guest speakers were engaged as subject matter experts to help develop "teaching sims", where recruits respond to a short scenario, followed by a longer debrief where the subject matter expert or another instructor guides the recruits through what a patrol member would need to know when responding to this type of call. The teaching sims occur on two separate days, with one day focusing on vulnerable populations and including teaching sims on the Youth Criminal Justice Act (YCJA), missing persons, hate crimes, elder abuse, sex assault, and child abuse. The remaining teaching sims include: internet investigations, credit card fraud and investigations, cell phone investigations, prohibited weapons, source handling, and criminal harassment. Through these teaching sims, the recruits gain hands-on experience in responding to these advanced types of calls.

3.4.3.3 Longitudinal Cases

Longitudinal cases are investigations that carry over multiple weeks of training. The cases are delivered through a computer-based simulation program that uses a combination of video, photo, and text injects and assigns recruits specific tasks, questions, or assignments to progress through the investigations. One of the longitudinal cases is a continuation of the sex assault teaching sim. Recruits work on the investigations in small groups and their answers are monitored by instructors. As assignments, they must complete tasks like writing an operational plan, writing several different kinds of warrants, and writing an arrest plan. These assignments are reviewed by an instructor who provides the recruits feedback on their work. Inspired by Werth's work in police training (Werth, 2009; Werth, 2011), the longitudinal cases are designed to help the recruits build their advanced investigation skills as well as manage a case load of ongoing investigations.
3.4.3.4 Advanced Operational Policing Skills (AOPS) days

Similar to the COPS days in Block I, the AOPS days are full days of training that build on specific advanced topics. Some of the AOPS days remain unchanged from the old curriculum, and new additions include a final Use of Force sign-off day, the teaching sims days, and a new advanced outdoor range day.

3.4.3.5 Mentoring Junior Recruits

It is important to prepare recruits to take on leadership roles within the communities they serve, so the Block III curriculum looks to build leadership skills in the recruits through structured mentoring of the junior Block I recruits. Select Block III recruits are chosen to be a "Lead Recruit" for Block I scenarios. A Lead Recruit is responsible for a team of Block III recruits, whom they assign to roles as actors, filmers, and dispatch for the Block I scenarios. The Lead Recruit is responsible for ensuring the scenarios are set up properly, for running the scenarios, and for providing performance feedback to the Block I recruits. At the start of Block III, all recruits receive specific training in how to give feedback to help them develop this skill. After this session, recruits who are selected as Lead Recruits provide feedback to the Block I recruits. The Lead Recruits are provided with the instructor guide for the scenarios they will be running in advance so that they can review the material and prepare for their role. Providing feedback to peers can also help solidify a student's own understanding of their performance (McCarthy, 2017). An instructor is present to monitor the Lead Recruit's performance and to provide them feedback on their leadership skills and the feedback they delivered, but the instructor does not intervene in the scenario or the feedback unless the feedback given includes something unlawful. The Lead Recruits are changed each week to provide the opportunity to as many recruits as possible.
This experience is also an opportunity for the Block III recruits who are not selected as Lead Recruits to develop their teamwork skills by working together to support their peer who has the responsibility of running the scenario. At the end of Block III, the recruits who were selected as Lead Recruits are provided with a letter of commendation that is forwarded to their home departments in recognition of their demonstrated leadership for the junior recruits.

3.4.3.6 Assessment

As in Block I, there are two exam days in Block III: an entrance exam during the first week of Block III and a final exam during week seven. The exam days follow the same format as in Block I, with five written stations and four practical scenarios assessed using standardized rubrics. Recruits also complete an Application for Graduation assessment portfolio that indicates how they have achieved the required level in each of the core competencies and are able to work at an independent level. The majority of evidence in this Application for Graduation is calls from Block II, supplemented by feedback and exam rubrics from Block III training.

3.4.4 Block IV

Block IV is the probationary period after graduation from the Police Academy but before full certification as a Certified Police Constable. To date, the structure of Block IV is unchanged under the new delivery model.

3.5 Development

The development of the curriculum began with familiarizing the cohort of instructors with the new model and its underlying educational philosophy. A series of meetings was set with the instructional cohort where each session in each week of the template schedules was reviewed and the topics that would fit into each session were identified. Also, through this process, elements that were already present in the curriculum, such as guest speakers, were questioned as to their contribution to the overall learning of the recruits.
Some sessions were identified as fun but not contributing to overall development, or as not helpful, and were discarded from the program and replaced with learning activities designed to build skill and ability. Other sessions were identified as important for learning and development and were retained in the program. Similarly, some topics that were taught in Block III, such as CID, were identified as crucial for success during Block II and moved into Block I training. Other topics, such as familiarization with Indigenous issues, were identified as building on the foundational principles of professional communication, and moved from Block I to Block III. Cranton (2011) discusses using transformative learning and critical theory as a framework for the scholarship of teaching and learning (SoTL) to question the underlying assumptions, beliefs, norms, and values of the discipline. The process of developing the new curriculum followed this framework by continuing to question each element of the curriculum and why it was or should be included. This process was transformational for some instructors as they became more comfortable with the underlying philosophy of the new delivery model and more familiar with how it aligned with their own values as instructors. After each week was reviewed and the topics assigned to various teaching sessions, development meetings were held where instructors were divided into smaller groups, usually comprised of one instructor expert from each of the various disciplines, and the content of the sessions was created. As a group, we would discuss the key learning objectives for the session and the instructors would be asked to identify calls they had responded to that included each of the objectives. Based on their identified calls, each small group would work to develop one case or scenario (depending on the session in development) using a blank template provided to help structure their thoughts.
Their completed templates were then collected, edited, and structured to shape them into case exercises, scenarios, or lesson plans. Development was an iterative process, with the instructors sketching out the basic information and me compiling it into a case format and returning to the instructors with questions, until the final version was completed. Instructor guides with student material and notes on key points, as well as instructions on how to run the session, were created for each learning activity. Checklists with comment sections were developed to provide recruits written feedback on their scenario performance. Other lessons that are not in the case or scenario format were completed using a similar process with small groups of instructors. Exam scenarios were developed through a similar scenario development process, with the addition of a standardized rubric for assessment. The rubrics were developed as a draft and then brought to a group meeting of the instructional cohort, where we reviewed the wording of each element in the rubric to ensure it captured the desired intent of the scenario. Rubrics were designed to allow for flexibility in approach to achieve the desired outcomes in the scenarios. Also during this time, a contractor was hired to edit and revise the existing manuals, as they are central to providing recruits with sufficient knowledge before the start of the week. The manuals were completely rewritten to focus on core concepts and, through this editing process, a total of over seven hundred pages was removed from the Block I reading material. The development process began in 2015 with a very ambitious timeframe for implementation in September of 2015. In the summer of 2015, it was decided that neither the material nor the instructors were sufficiently ready for implementation, and the start of the new delivery model was ultimately delayed until September 2016.
The extension of Block II to facilitate three classes of 36 with set start dates, however, was implemented in September 2015, one year before implementation of the curriculum delivery changes. Development of the material, including new lesson plans for almost every session in the program as well as new exams and scenario exams, continued using the instructional cohort as subject matter experts (SMEs). The development was at times difficult because the instructors were both teaching and developing curriculum at the same time. Using the instructors as SMEs was part of a change management strategy to have the instructors invested in the new model by feeling ownership over its content because they were integral to its development. The strategy was only somewhat successful, however, because there was no additional time to work on the development activities.

3.6 Implementation

Implementation of the new Block I delivery model started in September 2016. Implementation was based on a phased approach, whereby recruits who started in the old delivery model completed all three Blocks of training according to that model. That meant instructors were teaching Block III using the old delivery model and Block I using the new delivery model. During the first class of the new model, I sat in and observed all classes to ensure the lesson plans were being followed, to make any last-minute or on-the-spot adjustments as needed, and to make notes on things that needed to be modified for the next class. Also during this time, the new structure for Block II was created, along with a training course for existing FTOs. The Block III curriculum was also developed during the first and second classes in the new Block I program, from September 2016 until May 2017. The first class through the new delivery model started training in September 2016 and graduated in June 2017. This class did not experience the full delivery model, however, as the senior class was still in the old delivery model.
The structure of the practical scenarios had to be modified from their planned structure, with Block III recruits mentoring Block I recruits, to a format where the Block I recruits acted in scenarios for each other and the instructors provided feedback directly to the Block I recruits. This modified version of the practical scenarios was used for the first two classes through the new program. The first class to experience the full version of the new delivery model started training in May 2017 and graduated in March 2018. After each class, there have been modifications to the curriculum material to adjust the learning opportunities and increase the effectiveness of the program. Exam rubrics were validated by examining the reasons for recruit performance: for the first two classes, any aspect of the rubric where 20% or more of the class did not meet or exceed expectations was struck, and the evaluation criteria were examined to determine whether the issue was a flaw in the rubric or a flaw in the program. Necessary adjustments were made to either the lesson plans or the wording of the rubrics. Adjustments were also made to documentation processes, such as requiring recruits to complete one self-assessment debrief form per scenario day instead of one per scenario. There were also changes made to the naming of certain documents or activities to make them more "police friendly". The self-assessment forms were renamed "practical scenario debrief forms" and the assessment portfolio was renamed the "Application for Advancement/Graduation".

3.7 Delivered Curriculum

The preceding sections have described the design, development, and implementation of the new curriculum delivery model for Police Recruit training in British Columbia. Inevitably, however, when a program is delivered for the first time, there will be components of the program that are not delivered as they were designed.
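The 20% validation threshold described above amounts to a simple flagging rule over per-item rubric results. The following sketch illustrates that rule only; the data structure and function names are hypothetical and not taken from the Police Academy's actual process:

```python
# Illustrative sketch of the rubric validation rule: flag any rubric item
# where 20% or more of a class did not meet or exceed expectations, so
# that the item (or the related lesson plan) can be reviewed.

def flag_rubric_items(results, threshold=0.20):
    """results maps each rubric item name to a list of booleans,
    one per recruit: True if the recruit met or exceeded expectations
    on that item. Returns the items whose failure rate is at or
    above the threshold."""
    flagged = []
    for item, outcomes in results.items():
        failure_rate = outcomes.count(False) / len(outcomes)
        if failure_rate >= threshold:
            flagged.append(item)
    return flagged

# Hypothetical class of 36 recruits on a two-item rubric:
class_results = {
    "articulates legal grounds": [True] * 30 + [False] * 6,   # ~17% failed
    "conducts safe approach":    [True] * 25 + [False] * 11,  # ~31% failed
}
print(flag_rubric_items(class_results))  # → ['conducts safe approach']
```

A flagged item would then be examined, as the text describes, to decide whether the rubric wording or the program itself was at fault.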
This lack of alignment between design and delivery can be due to a variety of factors, including faculty development and faculty comfort level with their various new roles, unforeseen administrative and coordinating requirements, student confusion or lack of understanding of new tasks, and organizational resistance to change. These influencing factors are discussed in Sections 6.2 through 6.4. The program that is evaluated is the program that is delivered, not the program that is designed, so it is important to note where discrepancies between the design and delivery occurred. Many key components of the curriculum delivery model took at least one offering before they ran as designed, with the largest and most impactful differences coming in case studies, scenarios, mentoring, and directed study. As the evaluation was conducted during Block II training, only Block I is discussed in the following sections.

3.7.1 Class 152 Case Studies

The intent of the case study sessions is to provide an opportunity to apply the knowledge learned through the pre-reading during small group discussion, facilitated by an instructor, at the beginning of the week before applying the knowledge to scenarios. In the case study sessions, each instructor is responsible for monitoring two groups of recruits to ensure that all group members are participating and that the group members all have a strong understanding of the legal, patrol, investigative, and/or traffic concepts that they are discussing. This small group facilitation style of teaching was new for many, if not all, of the instructors, and many struggled with the new format. Each case study session has an accompanying "Instructor Guide" that includes the student material and the answers to the questions the recruits are asked, so that the instructors can effectively prepare for their teaching. It also includes directions for the instructors in terms of group facilitation.
These directions include posing recruit questions back to their groups before answering them, monitoring participation, and communicating back to the lead instructor, who conducts the debrief of the cases. For instructors who were used to lecturing and being able to tell multiple stories throughout their lectures, there were many challenging aspects to this new format. Instructors struggled with their role and with the direction not to directly answer recruit questions before exploring the knowledge of the group. Many instructors interpreted these directions as meaning that they were not to be involved in the groups' discussions at all and thus did not interact with or monitor the groups they were assigned to monitor. Some instructors preferred to group together and discuss unrelated topics at the front of the room while the recruits worked, or to manage emails rather than engage with the groups. The instructors also struggled with discipline and keeping recruits on task during these sessions. Some recruits did not want to engage in the group discussions and would frequently side-track the group, preventing the recruits from fully completing the case activities. Unfortunately, instructors did not correct this behaviour, and it became a larger issue as the block progressed. Lastly, some instructors seemed to actively involve themselves in this disruptive activity by spending the case discussion time talking to recruits about completely unrelated matters such as sports teams or scotch. The first offering of the case studies component of the curriculum did not meet its intended goals for all of the recruits. To address this issue for subsequent classes, a small core group of instructors who understood and were comfortable with the concept of case studies was assigned to each case study session to ensure consistency in delivery and classroom management.
The assigning of core instructors has significantly improved the delivery of case study sessions and brought the delivery closer to the design.

3.7.2 Class 153 Case Studies

While the issues with classroom management and delivery of the case study sessions were significantly improved by assigning a core group of instructors, the case study sessions for Class 153 were still not delivered as designed. The difference for this class, and other classes of 48 recruits, is in the timing of the case study sessions with respect to the other educational activities. With a class of 48 recruits, facility and scheduling issues arise that impact the delivery of the program as designed. The only classroom that can fit a class of 48 recruits at the JIBC is the lecture theatre, which is not conducive to small group work because the seats are fixed in place. Consequently, any session that was designed for the full class needs to be delivered either twice, with half the class in each session, or in two separate classrooms with different groups of instructors. Also, the larger class size means that there is one additional group of 12 recruits that must attend the firing range to learn to shoot. This extra group of 12 recruits meant that some parts of the curriculum had to be changed from whole class sessions to groups of 12 in the firearms rotation and delivered four times. The sessions that were moved into the rotation were Use of Force in the morning and case studies in the afternoon. This change meant that some recruits were participating in the case study sessions at the end of the week, after they had already completed the components of the curriculum where they practically applied their skills.
This change from the program's intended order (learning through pre-reading, applying through case studies, and then applying through practical scenarios) has the potential to reduce the effectiveness of the sessions because recruits have not tested their understanding of the material before they attempt to use it in practice. This shift from the design of the program is a necessity with any class above 36 recruits and will remain an ongoing concern.

3.7.3 Practical Scenarios

The practical scenarios were designed as the application and integration of the knowledge gained through pre-reading, case studies, and other relevant sessions in the week. Their design included integrating all of these skills and knowledge into both the scenario and the debrief components. The full program design includes the senior Block III recruits running scenarios for the junior Block I recruits and also the junior recruits participating in the Block III scenarios as actors and filmers. As the first two iterations of the new curriculum delivery model did not have a senior Block III class, the scenarios were run by instructors using Block I recruits as actors. This necessary modification from the design resulted in multiple repetitions of the same scenario with the same group of recruits so that each recruit could participate. Although instructors were told to layer the feedback so that each recruit could improve incrementally over the others, some recruits found the repetition excessive and not helpful. Additionally, some instructors struggled to hold back feedback from the first scenario, resulting in a very long debrief that delayed the rest of the scenarios and did not leave much room for an increase in performance by the last recruits. Further, many instructors struggled with the new concept of integrating the legal aspects into the scenarios.
Directions in the instructor guides said to ask recruits, before the scenarios started, what the essential elements of that type of offence were, and to have the recruits look them up if they did not know. There was also a series of legal questions to ask in the debrief, to reinforce the legal concepts and ensure a thorough understanding. This integration of legal concepts was a new approach to scenarios, as debriefs in the past had mostly focused on the tactical aspects. Many instructors did not realize the importance of incorporating the legal components into the scenarios for the first class, which resulted in an under-emphasis on the importance of knowing legal authorities. For the second class, and all subsequent classes, an increased emphasis was placed on pre-briefing instructors before each scenario day to ensure they asked all the required legal-related questions. Lastly, each of the scenarios is highly scripted in terms of actions and outcomes. This is another large change from the old delivery model, where both instructors and actors could improvise and alter the scenarios. In the new delivery model, the scripting of the scenarios ensures that all of the relevant points that recruits need to practice that week are integrated into the scenario and that components recruits have not yet been taught are not introduced. This scripting and inability to improvise was a particularly challenging aspect for some instructors, who insisted on 'ramping up' scenarios as the day progressed instead of maintaining the script and allowing recruits to apply their new skills. Often key learning points were missed or glossed over because the instructor wanted a more entertaining scenario. Changes to the scenarios also impacted the recruits' subsequent report writing assignments, which were based on the scripted scenarios. In the following classes, it was emphasized in the instructor briefings that the scenarios were not to be altered.
The first class through the new delivery model did not experience the scenario application as designed, because it lacked full integration of legal concepts and the scenarios frequently did not follow the intended script. While these issues were remedied in later classes, the program that Class 152 evaluated did not have the scenarios delivered completely as designed.

3.7.4 Mentoring

The mentoring program was designed to involve all instructors who taught in the Recruit Training program. This design was intended both to share the workload of mentoring recruits and to fully involve instructors in the program. Because mentoring includes reviewing training plans and other documentation, it requires a certain level of computer literacy. The intent of the mentoring was to have each instructor assigned a small number of recruits. Recruits would complete their weekly training plans by Sunday night, and instructors would review them on Monday and give the recruits feedback on anything that needed to be changed for their directed study plans. At the weekly instructor meetings on Tuesdays, mentors would discuss what their recruits planned to do during directed study so that all instructors would have a sense of areas where recruits needed additional help. All instructors would then be available during directed study, ready to help recruits. In practice, the instructors had varying levels of comfort with the computer learning management system, and some were unable to monitor their recruits' training plans. Others did not schedule time to review the plans and did not keep up with their recruits' submissions. Still others were teaching and not available to meet with their recruits during the allotted time for face-to-face meetings. This discrepancy between design and delivery resulted in some recruits having active and supportive relationships with their mentors and other recruits having little to no interaction with theirs.
The recruits who had sporadic interaction with their mentors felt isolated and did not value the mentoring component of the program as much as the recruits who did have active relationships with their mentors. After several classes, it was decided that a small group of instructors would share mentoring responsibilities, and this approach has brought the mentoring component of the program into alignment with its design. For the classes involved in this study, however, the mentoring component was not aligned with its intended design and did not consistently provide the accountability or support it was intended to.

3.7.5 Directed Study

Directed study was perhaps the most difficult component of the program to implement. Both instructors and recruits struggled with the purpose and delivery of this component of the curriculum. The delivery of directed study is closely linked to the mentoring component, whereby the mentors need to be aware of their recruits' plans and communicate those plans to the other instructors. Instructors in the first offerings of the delivery model, but particularly in the first class, seemed unclear that they could and should be telling their recruits how to use their directed study time if a recruit had been observed struggling with a particular component or concept in the program. Because of this lack of certainty, recruits were left to do whatever they wanted during directed study time without feedback from the instructors. The intent of the classroom portion of directed study was that recruits would self-organize and work in small groups on different areas, with instructors circulating to answer questions. Instructors really struggled with this drop-in concept. Some instructors refused to talk to a group of recruits if another instructor was talking to a different group at the same time in that room.
Other instructors would not circulate and would simply sit at the front of the classroom waiting to be approached. Worse, some instructors would sit in the cafeteria having coffee while the recruits worked. Often one instructor would start lecturing, and all recruits would then focus on the lecture in case they missed something that was said. Recruits were confused by the process and often ended up working on their own, doing their pre-reading for the next week, which was not the intent of directed study. Instructors tried to use the time to schedule additional sessions. They also tried to remove the drop-in component by scheduling review periods, essentially changing directed study into a series of lectures. This component of the delivery model was the most frustrating to implement and remained unaligned with the design for the longest period of time. Approximately one year after implementation with the first class, a small group of core directed study instructors was assigned to the classroom component, and this approach seems to have helped align the delivery with the design, as these instructors ensure that the time is used as intended. When the classes involved in this evaluation were in the program, however, directed study time was not delivered as it was designed.

3.8 Summary

This chapter outlined the structure of Recruit Training before and after the changes to the delivery model. It discussed the design process as well as the key components of the new delivery model, including educational theory that was not discussed in Chapter 2. The chapter ended with a discussion of the implementation and delivery of the curriculum for the first classes, noting the areas where the delivery differed significantly from the design. These differences are crucial in analyzing the results of the evaluation because it is the delivered program that was evaluated, not the program as it was designed.
The following chapters will outline the design of the study, present the results of the quantitative and qualitative analyses, and discuss the significance and implications of these results.

Chapter 4: Methodology

This chapter outlines the process of design, development, implementation, and evaluation of the new delivery model for Police Recruit training in BC that took place over the course of 2014-2018. The project was intended as a quantitative evaluation of a program change using 'pre-intervention' and 'post-intervention' surveys of recruits and their field training officers. The quantitative analysis was intended to be supplemented with qualitative data from survey comments and from focus groups. In the course of carrying out the project, however, it expanded from the initial quantitative analysis into organizational and cultural change management, which will be included in the discussion. The research design section of this chapter will outline the intended project design, including the timeline and methodology. The project narrative section will outline my perspective throughout my EdD program and in this project and detail the changes made to the project design as the study progressed.

4.1 Research Design

This section outlines the theoretical framework for program evaluation, the data sources, the timeline for survey administration, and the specific analytical methods used to analyze the quantitative and qualitative results.

4.1.1 Program Evaluation Framework

From the many different models for program evaluation, it is important to select a method that aligns with the information to be gathered and the decisions to be made (Bresciani, 2006). Whether measured through direct or indirect methods, the information gathered should directly relate to the determination of whether the program is effectively meeting its goals (Bresciani, 2006).
One common evaluation framework centres on Kirkpatrick's four-level framework of evaluation (Alliger & Janak, 1989; D. L. Kirkpatrick, 1977; D. L. Kirkpatrick & Kirkpatrick, 2006; J. Kirkpatrick & Kirkpatrick, 2016). Although not without controversy (Holton & Kirkpatrick, 1996), this framework has been extensively used in educational development to evaluate educational and training programs (Alliger & Janak, 1989; J. Kirkpatrick & Kirkpatrick, 2016). It consists of four levels of program evaluation: the reaction of the learners, learning during the program, behaviour change through applying the learning on the job, and results in terms of the organizational impact of the training (Alliger & Janak, 1989; Alliger, Tannenbaum, Bennett, Traver, & Shotland, 1997; Grohmann & Kauffeld, 2013; D. L. Kirkpatrick & Kirkpatrick, 2006; J. Kirkpatrick & Kirkpatrick, 2016). Criticisms of the Kirkpatrick model include its assumptions that the levels are hierarchical and that each level leads to the next through positive correlation (Alliger & Janak, 1989; E. F. Holton, Bates, Noe, & Ruona, 2000; Holton & Kirkpatrick, 1996). These assumptions are not necessarily supported by the research; in particular, there is no demonstrated correlation between learner reaction and learning or transfer of learning to the job environment (Alliger & Janak, 1989; Alliger et al., 1997; Grohmann & Kauffeld, 2013; E. F. Holton et al., 2000; Holton & Kirkpatrick, 1996), although learner reactions are the most frequently measured level of the framework (Grohmann & Kauffeld, 2013). Alternative models, such as those championed by Holton, divide training into outcomes with different levels of primary and secondary influences, all of which can be measured to evaluate a program (E. F. Holton et al., 2000; Holton & Kirkpatrick, 1996).
Another approach uses domains of learning - cognitive, psychomotor, and affective - as a conceptual framework for program evaluation (Kraiger, Ford, & Salas, 1993). In order to obtain the highest possible response rate, and for the broad dissemination of results, the evaluation model must be uncomplicated and evaluation surveys must be concise, obtaining only the information relevant to the evaluation (Bresciani, 2006; Grohmann & Kauffeld, 2013). When viewed as a framework and a tool to standardize language, the Kirkpatrick model can provide this familiar and widely accepted approach (Grohmann & Kauffeld, 2013; Wang & Wilcox, 2006). Alliger et al. (1997) and Wang and Wilcox (2006) propose modifications to the Kirkpatrick model that influenced my approach and are outlined in Table 4-1.

Kirkpatrick (2006)   Wang and Wilcox (2006)              Alliger et al. (1997)
Reactions            Short term outcomes:                Reactions:
                       Reactions of learners               Affective reactions
                                                           Utility judgements
Learning               Learning by participants          Learning:
                                                           Immediate knowledge
                                                           Knowledge retention
                                                           Behaviour/skill demonstration
Behaviour            Long term outcomes:                 Transfer
                       Behaviour on the job
Results                Organizational impact and        Results
                       return on investment

Table 4-1 Summary of the Kirkpatrick model of program evaluation and the modifications from Alliger et al. (1997) and Wang and Wilcox (2006) that influenced the program evaluation design of this study

Grohmann and Kauffeld (2013) demonstrated that the grouping into short term and long term outcomes by Wang and Wilcox (2006) was supported by statistical analysis. They emphasize that the evaluation of training should include both short term and long term outcomes to encompass both reactions and transfer to practice (Grohmann & Kauffeld, 2013). There should also be sufficient time to allow learners the opportunity to use their new skills in practice before evaluation of the long term outcomes (Grohmann & Kauffeld, 2013; Wang & Wilcox, 2006).
In order to evaluate the program changes, comparing the effectiveness of the lecture-based and competency-based training programs, both short and long term outcomes, as indicated in Table 4-1, were measured through recruit reactions, learning, and behaviour/transfer. As shown in Table 4-1, Alliger et al. (1997) indicate that reactions can be measured by affective reactions and by utility judgements. In the survey design, recruits were not asked whether they enjoyed the program, but rather how well they thought it prepared them for their responsibilities, thus targeting the utility judgement aspect of their reactions. All three domains of learning indicated in Table 4-1 were evaluated in the project design to thoroughly cover recruit short-term learning. Additionally, to incorporate longer term outcomes, or behaviour/transfer of skills, the second survey administration and the FTO survey examined recruit ability and performance on the job during Block II. Table 4-2 indicates the data source that measured each of these elements of program evaluation. The project design incorporated all aspects of the program evaluation models outlined in Table 4-1 except the results level, which equates to organizational impact.

4.1.2 Evaluation Design and Methodology

To address the primary research question about the effects of introducing a competency-based education framework on police recruit preparedness for field training, two groups of recruits were used for the analysis. Class 151 was trained using the traditional didactic, lecture-based curriculum delivery model. Classes 152 and 153 were trained using the new competency-based training framework. The initial project design limited the analysis to Classes 151 and 152, but when the analysis of the results was taking place it was realized that Class 153 could be included in the study within the proposed timeframe, and this modification was made to the project design and the associated ethics applications.
For the purpose of this study, it was not possible to directly compare performance during Block I on exams or tests because all of the measures of learning changed with the changes to the curriculum delivery. The analysis instead required surveys in which recruits self-reported their learning, combined with data collection from field trainers. The initial project design included only data collection from recruits and field training officers. After the project began, it was recognized that the exam assessors in the competency-based model could be an additional data source, and a survey and focus group of the competency-based exam assessors were added to the project design and the associated ethics approvals. Table 4-2 summarizes the planned sources of data and the contribution of each to the project evaluation design, as described below:

Kirkpatrick   Wang and Wilcox (2006)       Alliger et al. (1997)             Data source
Reactions     Short term outcomes:         Affective reactions               *not directly questioned
                Reactions of learners      Utility judgements                Recruit survey 1; Recruit survey 2
Learning        Learning by participants   Immediate knowledge               Recruit survey 1; Assessor survey and focus group
                                           Knowledge retention               Recruit survey 1; Assessor survey and focus group
                                           Behaviour/skill demonstration     Recruit survey 1; Assessor survey and focus group
Behaviour     Long term outcomes:          Transfer                          Recruit survey 2; FTO survey and focus group
                Behaviour on the job

Table 4-2 Summary of the program evaluation model from Table 4-1 with data sources from the project design

• Recruit Survey 1: Recruits' self-assessment of their perceived readiness for Block II training and the perceived utility of Block I in this preparation. This survey was administered before recruits left the Police Academy at the end of Block I. At this point in their training, recruits may not have a full picture of what the day-to-day operations of a patrol member entail.
Although many recruits are new to policing and may not have a firm grasp of all of the job requirements, the perceived utility of the training will influence their motivation to learn and participate during Block I. This survey measured short term outcomes, including the reactions of learners through utility judgements and the learning by participants through immediate knowledge, knowledge retention, and behaviour/skill demonstration, as shown in Table 4-2.

• Recruit Survey 2: Recruits' self-assessment of their performance on the job, their ability to transfer their skills and knowledge from Block I to Block II training, and the perceived utility of Block I training. This survey was administered at the mid-point of Block II training (approximately 10 weeks into Block II), when recruits had a better understanding of the requirements of a patrol member but could still recall their Block I training and attribute their ability and skills to training rather than FTO influence. This survey measured the short term outcome of reactions of learners through utility judgements and the long term outcome of behaviour, or transfer to job performance, as shown in Table 4-2.

• FTO survey: An assessment from the recruits' Field Training Officer of the recruits' ability to transfer their skills and knowledge from Block I training to Block II job performance. This survey was administered at the same time as the second recruit survey. The FTO survey measured the long term outcome of behaviour, or transfer to job performance, as shown in Table 4-2.

• FTO Focus Group: Field Training Officers who had trained recruits in both the old and the new curriculum delivery models could provide a valuable source of comparative data in a qualitative focus group setting.
While FTO selection is at the discretion of the departments and outside the control of this study, any FTOs who had trained recruits in the previous delivery model (Class 151 or earlier) and in the new delivery model (Classes 152 or 153) were invited to participate in a focus group to further investigate the differences in recruit preparedness for Block II field training. The initial project design asked departments to use the same FTOs to train recruits in Class 151 and in Class 152 so a direct comparison could be made. Among the 36 recruits in each of these classes, only one FTO trained recruits in both, so the project design was modified to include FTOs who trained recruits in Class 152 and had trained recruits in any class in the lecture-based delivery model. When Class 153 was added to the project design, this group was expanded to include FTOs who trained a recruit in Class 153 and a recruit in any class in the lecture-based delivery model. With each change, the associated ethics approval was amended to reflect the expanded inclusion criteria for the FTO group. The FTO focus group measured the long term outcome of behaviour, or transfer to job performance, as shown in Table 4-2.

• Assessor Survey and Focus Group: Exam day assessors in the new curriculum delivery model are current or retired police officers who were trained to assess recruits in the Assessment Centre (AC). The AC was a screening tool used by departments prior to hiring candidates. Potential candidates were sent to participate in a day-long scenario-based assessment of their potential for development as a police officer. Assessors for the AC were required to successfully complete a training course that involved standardizing expectations and documenting the performance of candidates.
The AC program was cancelled by the provincial government in 2016, and this pool of highly trained police officers was recruited to act as impartial assessors in the new delivery model. While they do not have direct experience with recruit training in the old delivery model, they were familiar with the skill level of incoming candidates through their involvement in the AC. At the time of the initial project design, this group was not included as a data source because it had not yet been determined who would assess the exams in the new delivery model. Once this decision was made, the group of assessors was identified as a data source and the necessary changes to the project design and associated ethics approvals were completed. This group was sent a survey to determine their sense of the level of preparedness of the Block I recruits and was invited to attend a focus group. The survey and focus group for the Block I exam assessors addressed the short term outcome of learning by participants through their observations of the recruits' immediate knowledge, knowledge retention, and behaviour/skill demonstration, as outlined in Table 4-2.

The combination of recruit self-reporting in surveys 1 and 2 and evaluation from field trainers was intended to enhance the reliability of the evidence base for the program evaluation (Braverman, 2013). To allow comparison across time points, recruits were administered the same survey at both data collection points. This strategy was designed to enable an analysis of the utility of Block I training before and after recruits have practical job experience.

4.1.2.1 Survey design

Validity of a construct relies on multiple sources of evidence to demonstrate that the construct measures what it purports to measure in its inferences and assumptions (Cohen, Manion, & Morrison, 2011; Cook, Brydges, Ginsburg, & Hatala, 2015; Downing, 2003; Kane, 2013).
Validity is not a property inherent to a measure itself; it is contextually dependent on the interpretations and intended use of that measure or construct (Cohen et al., 2011; Cook & Hatala, 2016; Kane, 2013). The amount of evidence required to support a claim of validity depends on the impact of the claims made from the evaluation: the more serious, or severe, the claims, the greater the amount of evidence required to support the validity argument of that evaluation (Kane, 2013). Kane's framework for assessment validity relies on four categories of evidence and can be applied to tests designed for program evaluation (Kane, 2013). These four categories are: scoring, which includes assumptions and choices about the scoring criteria and response options; generalization, or reliability, which includes evidence that scores on the test are reflective of performance across the whole domain the test samples; extrapolation, which includes evidence that performance on the test can be extrapolated to performance in real life; and the consequences or implications, which include the intended use of and decisions made from the test (Cook et al., 2015; Kane, 2013). Evidence from each of these four categories can be taken together to determine the validity of a particular evaluation for a particular use at a particular point in time (Kane, 2013). In measuring the recruits' perceived ability to apply their Block I learning to their Block II field training, it was important to use a measurement scale that would adequately represent the complexities of patrol level police work. The Police Sector Council National Framework of Constable Competencies was selected as the appropriate construct to measure patrol level ability due to the unique depth of research and collaboration from the Canadian policing community that went into generating the competencies (Police Sector Council, 2011).
The primary assumption in using these competencies as a reference point to measure patrol level ability is that they are an accurate representation of the requirements at the constable level of policing in BC. Further, multiple departments in BC, such as Abbotsford and Victoria, use the competencies for their HR management and promotion. This adoption by police departments for performance-related evaluation and promotion decisions provides strong evidence that the competencies are an accurate and valid representation of requirements. This evidence supports the use of the PSC competencies to assess recruit preparedness in both the scoring and extrapolation inferences of Kane's framework. For the scoring inference, the wording of the questions uses the language and definitions from the PSC competencies. For the extrapolation inference, as discussed, the adoption of the competencies by departments indicates that they are representative of real world performance. For the implication inference, the consequences for stakeholders in responding to this survey were negligible and, provided the metric was an accurate representation of policing, any metric could have been used. The generalization inference, which looks at how well the items on the evaluation represent all of the possible attributes to be measured, is again supported by the high level of collaboration that went into the formation of the PSC competencies. Nothing in the literature provides a better description of the knowledge, skills, and attitudes required of contemporary police in Canada. Another factor to be considered in the validity argument is the choice of scale anchors for the responses. In the survey, there were two types of questions related to each competency: the first asked about perceived ability in a particular competency, and the second asked how well Block I training prepared recruits for that particular competency.
Following the work of ten Cate and Scheele (2007) and Crossley et al. (2011), as described in the Introduction section on assessment, the anchors chosen related to the amount of supervision a recruit required to perform the demands of each competency. Recruits were also asked to rate the utility of Block I training in preparing them to meet each competency level. The amount of supervision required was selected for the anchor points because of evidence in the literature demonstrating increased reliability when developing independence is used as a marker of skills assessment (Crossley et al., 2011; Frank et al., 2010; Regehr et al., 2007; ten Cate, 2006; ten Cate & Scheele, 2007). Because of the large amount of time, resources, and research involved in developing the Police Sector Council Competencies, because it is currently the only national framework for policing competencies, and because alignment of the recruit training curriculum with the PSC National Constable Competencies was mandated by the BC Provincial Government, the surveys were designed using these Constable Competencies. The surveys were not piloted because feedback during a pilot on the competencies would not have changed a survey design that used the nationally accepted and BC government mandated measure of the role of a police Constable. The surveys for the recruits, FTOs, and assessors all used the PSC competencies as the assessment construct and all used the same anchor points in the questions to facilitate comparison between groups. The surveys for each group can be found in Appendix B - Surveys. In addition to the PSC competencies, each survey also collected demographic information such as age, gender, education level, and previous policing experience and, for the FTOs and assessors, how many years they had been police officers and FTOs, and which exams they assessed, respectively.
The last section of the survey was an open comment section to collect qualitative data from the respondents.

4.1.2.2 Survey Administration and Timeline

Figure 4-1 outlines the project timeline for the administration of surveys to compare the recruits' ability and preparedness for Block II training before the intervention (Class 151, lecture-based delivery model) and after the intervention (Classes 152 and 153, competency-based delivery model).

Figure 4-1 Project timeline for recruit survey administration for Classes 151 (pre-intervention, lecture-based), 152 and 153 (post-intervention, competency-based)

The surveys were administered through the UBC survey tool Fluid Surveys. A consent form was included on the first page of each survey. In addition, a letter outlining the purpose of the research and the consent process was sent to each department's Training Officer to provide to FTOs. The first survey completed by Block I recruits was sent on the Friday of Week 12 of their classes at the Police Academy (Figure 4-1). Recruits then had the weekend and their last week of class to complete the survey; it closed following their last day of classes in Block I. The second recruit survey, during Block II, was sent after approximately 10 weeks of Block II training. This timing was chosen, rather than the end of Block II, so that Block I training would be fresh in their minds and they would have had only one field trainer at that point in their training. The Block II survey remained open for two weeks, to account for different departmental shift schedules and to allow sufficient time for completing the survey while on duty. At the same time as recruits were sent the Block II survey, their FTOs were sent the FTO survey, which remained open for the same duration as the recruit survey.
For the purposes of tracking survey responses, the recruit and FTO surveys were collected based on class number and recruit last name. Once the surveys were collected, each recruit was assigned a unique identifier that replaced all references to their name in both the recruit and FTO surveys. This method of anonymizing the data was chosen to make it easier for both recruits and FTOs to complete the surveys during Block II, as they did not need to remember an identifier code. It was anticipated that the majority of these surveys would be completed on shift, when there was not a lot of time to search for identifier codes. The assessor surveys were sent out on a timeline independent of either class of recruits. This survey was distributed in February of 2018, using UBC's Qualtrics survey program, and remained open for three weeks to accommodate several assessors who were on vacation but wanted to complete the survey when they returned. The assessor surveys did not collect any identifying information, as they did not need to be correlated back to a specific recruit.

4.1.2.3 Statistical Analysis

The data collected from the surveys were assumed to be non-normal due to the small sample size and the unknown characteristics of the population (Cohen et al., 2011). Because of the non-normal distribution, non-parametric statistical tests were used to analyze the collected data. While there is some controversy in the literature around the acceptability of using parametric statistical tests on non-normal data, it is generally accepted that non-parametric tests are the most appropriate form of analysis for populations where the data cannot be assumed to have a normal distribution (Cohen et al., 2011). Although non-parametric tests are generally less powerful than their parametric counterparts, they make no assumptions about the population studied (Cohen et al., 2011), so they were the most appropriate for analysis in this project.
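Non-parametric tests of this kind operate on ranks rather than raw scores. As an illustration of the mechanics, the Mann-Whitney U statistic used later in this analysis can be computed from the rank sums of two pooled, independent groups. This is only a sketch: the actual analysis was run in SPSS, and the supervision-anchored ratings below are invented for demonstration.

```python
def average_ranks(values):
    """Assign 1-based ranks to values, averaging the ranks of ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Extend j over a block of tied values.
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        tie_rank = (i + j) / 2 + 1  # average position of the tied block
        for k in range(i, j + 1):
            ranks[order[k]] = tie_rank
        i = j + 1
    return ranks

def mann_whitney_u(group_a, group_b):
    """U statistic for two independent samples of ordinal ratings.
    The statistic alone indicates only that a difference may exist;
    its direction must still be inspected, e.g. via cross-tabulation."""
    pooled = list(group_a) + list(group_b)
    ranks = average_ranks(pooled)
    rank_sum_a = sum(ranks[:len(group_a)])
    u_a = rank_sum_a - len(group_a) * (len(group_a) + 1) / 2
    u_b = len(group_a) * len(group_b) - u_a
    return min(u_a, u_b)

# Hypothetical 1-5 ratings for one competency from two independent classes:
lecture_based = [4, 5, 4, 4, 3]
competency_based = [3, 3, 4, 2, 3]
u = mann_whitney_u(lecture_based, competency_based)
```

In practice a statistics package also converts U to a p-value against its sampling distribution; the sketch stops at the statistic itself, which is the quantity the rank-based comparison rests on.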
The survey data were analyzed using IBM SPSS version 25. Data were exported from Fluid Surveys into an SPSS file, where they were anonymized and coded for analysis. The Wilcoxon Signed Rank Test was used to compare responses between Survey 1 and Survey 2 within the same class. This test is used for related samples, where the same group answers the same question at two different points in time (Cohen et al., 2011). To compare differences between the class and the FTO scores, and between the lecture-based and competency-based programs, the Mann-Whitney test was used. This test is used for two independent samples and indicates merely that a difference is present; to determine the source or direction of the difference, it is necessary to also run a cross-tabulation report (Cohen et al., 2011). The Mann-Whitney test was also used to investigate any differences in reporting due to gender, and the Kruskal-Wallis test was used to investigate any differences in reporting due to previous police experience, education level, and FTO experience. The assessor survey data were analyzed using descriptive statistics, but not compared to other data sets because they were not in reference to a specific class or a specific recruit.

4.1.2.4 Qualitative Data Analysis

The project design planned for qualitative analysis of focus group transcripts and of narrative comments from the various surveys. These data were to be analyzed using NVivo version 11 and a grounded theory methodology (Cohen et al., 2011), which builds themes and interconnections as they emerge from the data. As discussed in Section 4.2 Project Narrative, the focus groups either did not occur or were too small to be representative and were not included for analysis.

4.2 Project Narrative

I began my EdD while employed as the PBL Program Manager for the MD Undergraduate Program in the Faculty of Medicine at UBC.
During my time in this role, I completed my coursework and comprehensive exam and developed a project that focused on peer feedback in a PBL program and its effect on developing communication skills in medical students. When I changed positions and moved to the role at the JIBC Police Academy, I no longer had access to the medical student population and PBL program that were the basis of my EdD project. When I started my role at the JIBC Police Academy, I took a leave from my EdD studies to learn my new position. During this time, I observed recruit training classes, read literature on policing and the little I could find on police training, and discovered that there were many similarities between medical education and police training. These similarities have been noted by others, particularly as a movement developed to include PBL-based exercises in police training (Weinblatt, 1999). My experience in my previous position, and some of the research I had conducted in developing my previous EdD project, were helpful in beginning the research and development of the proposal to change the structure of police recruit training at the JIBC. The foundations of the PBL program lie in a constructivist framework and transferred well to the new project. Although I decided a pure PBL approach was too open-ended to be accepted as the foundation of the recruit-training delivery model, the case-based method was much more appropriate for this level of training and fit well into the framework of the new program model. As noted in Section 1.1 My Perspective, the development process was challenging. While development was ongoing, historical questions about recruit training led the BC Association of Municipal Chiefs of Police (BCAMCP) to commission two retired police officers to conduct a review to identify training needs. This project was not intended as a curriculum review, but rather as a gap analysis of training needs.
During the fall of 2016, while the first offering of the new Block I was underway, the reviewers attended a day session at the JIBC with the Director of the Police Academy and the President of the JIBC to discuss several matters pertaining to the review. I presented an overview of the new delivery model, drawing on evidence from the literature on PBL and case-based learning, competency-based learning, and the concept of a ‘flipped classroom’ to increase time for application. Although the review was not categorized as a curriculum review, the draft report contained substantial criticism of the new delivery model. In response to the initial review, the BC Provincial Government commissioned its own review of the Police Academy governance, hiring a retired police officer and former Police Academy Director to conduct it. This final report has been submitted to the government, and an executive summary has been released to the Police Academy Director. The focus of this review was to provide models to fund and manage the JIBC Police Academy during the current climate of decreasing resources and increasing demands. Further to the initial training needs review, the BCAMCP commissioned the same reviewers to travel across Canada to conduct an analysis of recruit training. They visited many departmental training facilities that train in-service members, as well as the Atlantic Police Academy and the Seattle Police Department. This review has been submitted to the BCAMCP, who commissioned it, but at the time of writing its contents have not been released. Lastly, an additional curriculum review commissioned by the BC Provincial Government is planned for 2018. This review will examine the current curriculum delivery model, in response to questions or misunderstandings that have arisen about the new delivery model.
The timing of these reviews, coinciding with the development and implementation of the first classes in the new delivery model, has complicated both the implementation and the findings of the reviews. Obtaining an accurate projection of staffing requirements for the JIBC Police Academy is difficult because development for the new program increased instructor demands beyond what is typically expected. Further, after the first class in the new delivery model, the Police Academy increased its maximum class size to accommodate an increase in departmental hiring. Because of this increase, alterations to the schedule have put an increased demand on instructional resources. Additionally, the coincidence of these reviews with the implementation of the new curriculum delivery model created a climate of speculation that cannot be separated from the program evaluation results. The impact of this climate on the change management process is discussed in Section 6.3 Organizational Cynicism and Organizational Change.

4.2.1 Changes to Project Design

As noted in Section 4.1.2.1 Survey Design, several changes were made to the project design as the research unfolded. These changes included the addition of Class 153 to the project, the expansion of the FTO inclusion criteria to FTOs for Class 152 or Class 153 who had trained a recruit in any class in the old curriculum model, and the addition of the exam assessor group to the project design. As survey results were collected and response rates observed, there were also changes to the proposed analysis methodology. The survey design included both the PSC Constable Competencies and the Constable Task List, despite the task list containing several categories that do not apply to police recruit training because they must be trained in a department-specific manner.
After collection of the survey data and preliminary analysis of the quantitative and qualitative information from the surveys, it was determined that the task category data did not add any insight to the program evaluation, so the analysis focused solely on the Constable Competencies. The response rates for Class 153 were exceptionally low for both recruits (Table 5-11 Class 153 demographic characteristics and survey response rates) and FTOs (Table 5-14 Demographic characteristics for FTO respondents for Class 153). To determine if the data collected from Class 152 and Class 153 could be grouped into one post-intervention, competency-based training group, statistical analysis was carried out to determine if there were statistically significant differences between Recruit Survey 1 from Class 152 and Class 153 and between the FTO survey from Class 152 and Class 153. This analysis is presented in Section 5.2 Quantitative Survey Analysis and demonstrates no statistically significant differences in how Classes 152 and 153 responded to Recruit Survey 1 or Recruit Survey 2, nor in the difference between how recruits responded to Surveys 1 and 2 (R2-R1), in the global ability and preparedness ratings. No statistically significant difference between how the FTOs for Classes 152 and 153 responded in the global ability and preparedness ratings was observed. Because no statistically significant difference was observed, it was determined that the two classes could be grouped as one “competency-based delivery model” group for the purposes of further analysis. Additionally, the response rates to Recruit Survey 1 were much higher than the response rates to Recruit Survey 2 (see Table 5-1 Class 151 demographic characteristics and survey response rates, Table 5-6 Class 152 demographic characteristics and survey response rates, and Table 5-11 Class 153 demographic characteristics and survey response rates).
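Because of this attrition between Survey 1 and Survey 2, a related-samples test can only use recruits who completed both surveys. A minimal sketch of that pairing step, with invented identifiers and ratings (the study’s own analysis ran in SPSS), might look like:

```python
from scipy.stats import wilcoxon

# Invented overall-ability ratings keyed by anonymized recruit ID; as in
# the study, some recruits answered Survey 1 but not Survey 2.
survey1 = {"R001": 4, "R002": 3, "R003": 5, "R004": 4, "R005": 3, "R006": 4}
survey2 = {"R001": 3, "R003": 4, "R004": 5, "R005": 3, "R006": 5}

# Keep only recruits present in both surveys, in a stable order.
paired_ids = sorted(survey1.keys() & survey2.keys())
before = [survey1[r] for r in paired_ids]
after = [survey2[r] for r in paired_ids]

# Wilcoxon signed rank test on the matched pairs only.
stat, p = wilcoxon(before, after)
```

Dropping unmatched respondents shrinks the effective sample further, which is why the low Survey 2 response rates made this kind of analysis difficult.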
Recruit Survey 1 was administered while the recruits were still in Block I at the JIBC, whereas Survey 2 was administered when the recruits were working as patrol officers during Block II. Although police response rates to survey research are typically exceptionally low, often less than 10% (Huey, Blaskovits, Bennell, Kalyal, & Walker, 2017), low response rates when starting with a small sample size, as in this study, make analysis difficult. The initial project design called for Recruit Survey 2 to be compared with the FTO survey, since Survey 2 was administered at the same timepoint as the FTO survey, after recruits had experienced 10 weeks of patrol work. To determine whether Survey 1 could instead be compared against the FTO survey, statistical analysis was carried out to identify any statistically significant differences in recruit answers between Survey 1, prior to field experience, and Survey 2, after field experience. The Wilcoxon signed rank test was used to test for a statistically significant difference between these two surveys for each of the lecture-based and competency-based delivery model groups. The results of this analysis are included in Section 5.2.1 Differences in perception before and after Block II experience. The analysis determined no statistically significant differences for any of the classes between Recruit Survey 1 and Recruit Survey 2, so Recruit Survey 1 was used for further analysis. The planned focus groups presented further challenges to data collection because of an extremely low interest and participation rate. For the competency-based exam assessors, a focus group invitation was sent to all eligible assessors (n=28) and two available dates and times were offered.
Three assessors responded that they were available and interested in participating in the focus group (a 10.7% response rate), but in the two to three days leading up to the scheduled focus group, two of these assessors indicated they had double booked themselves and had to withdraw their participation. I contacted the remaining scheduled assessor and informed them of the situation, and they opted not to participate as the only person involved. As such, no focus group was conducted with the assessors. Similar difficulties were encountered with the FTO focus group. The invitation for the focus group was sent to all 82 field trainers (36 from Class 152 and 46 from Class 153) and two possible dates and times were offered. Of these 82, the exact number who were eligible for participation, based on the inclusion criteria of training a recruit in Class 152 or 153 and training a recruit in any previous recruit class, is unknown. Of all the FTOs who were invited, three responded that they would like to participate. If it is assumed that all 82 recipients were eligible, that is a 3.6% response rate. Additionally, all three respondents were from the same department. Despite the small sample size, the “focus group” was conducted and was rather productive. Because the sample size was not representative of the group, however, the transcript was not fully analyzed and several key observations are only mentioned in Section 5.3 Focus Group Analysis.

4.3 Summary

The research design was intended to be a quantitative evaluation of pre-intervention and post-intervention classes after a foundational change to police recruit training in BC. As complications with data collection emerged, some changes had to be made to the initial project design, including grouping the competency-based classes into one group for analysis and using Recruit Survey 1 for the majority of the analysis.
Additionally, low interest in the focus groups resulted in either cancelling that portion of the program evaluation design or including only minimal results. The next chapter outlines the results from the quantitative and qualitative data analysis.

Chapter 5: Results

This chapter outlines the project results, starting with descriptive information from each of the data groups (recruit classes, FTOs, and assessors). Following the descriptive characteristics, analysis was carried out to determine if Class 152 and Class 153 could be grouped into one competency-based delivery model group for analysis. Recruit Survey 1 was analyzed against Recruit Survey 2 to determine if there were differences in recruit perceptions before and after exposure to practical experience and to determine if Recruit Survey 1 could be used for subsequent analysis. Following these tests, the survey data were analyzed within groups, to determine if any recruit or FTO demographic characteristics influenced the results, and between classes, to answer the primary research question comparing recruit ability and preparedness between the lecture-based and competency-based delivery models. Qualitative analysis of the comments from each of the surveys is presented, followed lastly by several comments on the discussion with the three FTOs who agreed to participate in the focus group.

5.1 Descriptive Survey Results

The following section outlines the descriptive characteristics of each of the survey administrations, including response rate, gender, and experience level of the recruits and FTOs.

5.1.1 Lecture-based delivery model: Class 151

The response rate for Class 151 was very high through both administrations of the survey. The class itself was composed of 62.9% male and 37.1% female recruits. Of these recruits, 31.4% were 20-24 years old, 42.9% were 25-29, 14.3% were 30-34, and 11.4% were 35-39. No recruits were in the upper age category.
Table 5-1 outlines the percentages and age demographics for each of the survey administrations for Class 151.

                        Male  Female  20-24  25-29  30-34  35-39  40-44  Did not  Total
                                                                         respond
Class statistics  n       22      13     11     15      5      4      0      N/A     35
                  %     62.9    37.1   31.4   42.9   14.3   11.4      0      N/A    N/A
Survey 1          n       22      13     11     15      5      4    N/A        0     35
respondents       %     62.9    37.1   31.4   42.9   14.3   11.4    N/A        0   100%
Survey 2          n       18       8      7     11      5      3    N/A        9     26
respondents       %     69.2    30.8   26.9   42.3   19.2   11.5    N/A     25.7  74.3%

Table 5-1 Class 151 demographic characteristics and survey response rates

The departments that had recruits in Class 151 were: Abbotsford (n=3, 8.6%), Delta (n=4, 11.4%), New Westminster (n=2, 5.7%), Port Moody (n=1, 2.9%), Saanich (n=4, 11.4%), Transit (n=5, 14.3%), Vancouver (n=13, 37.1%), and Victoria (n=3, 8.6%). Table 5-2 presents the recruits’ reported education levels prior to starting at the Police Academy. An option for no post-secondary education was not included in the selection options, as departmental recruiting policy states that applicants must have at least some post-secondary experience. The most common level of previous education was a university degree, with 37.1% of the class having earned one before starting at the Police Academy. Responses in the “Other” category included JIBC Fire Academy, JIBC Paramedic training, and trades certification.

Education              Frequency  Percent
Some college                   7     20.0
College diploma                2      5.7
Some university                4     11.4
Undergraduate degree          13     37.1
Graduate degree                3      8.6
Other                          5     14.3
Did not respond                1      2.9

Table 5-2 Education levels of Class 151 prior to police academy

Within the class, 25 recruits (71.4%) had no previous policing experience and 10 recruits (28.6%) indicated they had some previous policing experience. Table 5-3 indicates the types of previous policing experience reported by the recruits.
Experience                                                    Frequency  Percent
No previous police related experience                                25     71.4
Community Safety Officer, jail guard, auxiliary/reserve
constable, international police officer                              10     28.6
Traffic authority, Canadian Border Services Agency,
corrections, civilian staff at police department, dispatch            0        0
Volunteer (Community safety office)                                   0        0

Table 5-3 Previous policing experience of Class 151 prior to police academy

5.1.1.1 Demographic Characteristics of 151 FTOs

The FTO survey was sent to FTOs for all 35 recruits in Class 151. Fifteen (15) FTOs responded, for a response rate of 42.9%. The most common characteristics for the FTO respondents for Class 151 were that they were in the age range of 35-39 years (40.0%), had 5-9 years of service (60.0%), had been an FTO for four years or less (86.7%), and had trained four or fewer recruits (80.0%). Table 5-4 outlines the full demographic characteristics for the FTO respondents for Class 151.

Demographic                     Frequency  Percent
Gender
  Male                                 12     80.0
  Female                                3     20.0
Age Range
  25-29                                 1      6.7
  30-34                                 5     33.3
  35-39                                 6     40.0
  40-44                                 2     13.3
  45-49                                 1      6.7
Years of Service
  0-4                                   2     13.3
  5-9                                   9     60.0
  10-14                                 2     13.3
  15-19                                 2     13.3
Years as FTO
  0-4                                  13     86.7
  5-9                                   0        0
  10-14                                 2     13.3
Number of Recruits Trained
  0-4                                  12     80.0
  5-9                                   1      6.7
  10-14                                 2     13.3

Table 5-4 Demographic characteristics for FTO respondents for Class 151

The demographic characteristics of the recruits who were trained by the FTOs who responded to the survey are presented in Table 5-5. Two of the FTO responses were unable to be matched to a recruit, so information for 13 recruits is provided in the table. One FTO was not on the list of FTOs provided by the department and so could not be matched to a recruit; likely this FTO was substituting for a regular FTO who was on leave and was forwarded the survey. The other FTO did not provide a name and so was unable to be matched to a recruit in the class.
Demographic                     Frequency  Percent
Recruit Gender
  Male                                  7     46.2
  Female                                6     53.8
Recruit Age Range
  20-24                                 5     38.5
  25-29                                 7     53.8
  30-34                                 1      7.7
  35-39                                 0        0
  40-44                                 0        0
Recruit Previous Education
  Some college                          3     23.1
  College diploma                       0        0
  Some university                       0        0
  University degree                     7     53.8
  Graduate degree                       1      7.7
  Other                                 2     15.4
Recruit Previous Police Experience
  Yes                                   5     38.5
  No                                    8     61.5

Table 5-5 Characteristics of recruits trained by FTO respondents in Class 151

5.1.2 Competency-based delivery model: Class 152

                        Male  Female  20-24  25-29  30-34  35-39  40-44  Did not  Total
                                                                         respond
Class statistics  n       24      12     13     13     10      0      0      N/A     36
                  %     66.7    33.3   36.1   36.1   27.8    N/A    N/A      N/A    N/A
Survey 1          n       22       7      7     13      9    N/A    N/A        7     29
respondents       %     75.9    24.1   24.1   44.8   31.0    N/A    N/A     19.4  80.6%
Survey 2          n        8       1      1      5      3    N/A    N/A       27      9
respondents       %     88.9    11.1   11.1   55.6   33.3    N/A    N/A     75.0  25.0%

Table 5-6 Class 152 demographic characteristics and survey response rates

The departments that had recruits in Class 152 were: Abbotsford (n=2, 5.6%), Central Saanich (n=1, 2.8%), Delta (n=3, 8.3%), Nelson (n=1, 2.8%), New Westminster (n=2, 5.6%), Saanich (n=4, 11.1%), Vancouver (n=20, 55.6%), Victoria (n=2, 5.6%), and West Vancouver (n=1, 2.8%). Table 5-7 presents the recruits’ reported education levels prior to starting at the Police Academy for the 29 recruits who completed Survey 1. The most common level of previous education was a university degree, with 55.2% of the respondents having earned one before starting at the Police Academy. Responses in the “Other” category included UK A-levels and a university certificate.

Education              Frequency  Percent
Some college                   0        0
College diploma                4     13.8
Some university                4     13.8
Undergraduate degree          16     55.2
Graduate degree                3     10.3
Other                          2      6.9
Did not respond                0        0

Table 5-7 Education levels of Class 152 respondents prior to police academy

Within the respondents, 12 recruits (41.4%) had no previous policing experience and 17 recruits (58.6%) indicated they had some previous policing experience.
Table 5-8 indicates the types of previous policing experience reported by the recruits.

Experience                                                    Frequency  Percent
No previous police related experience                                12     41.4
Community Safety Officer, jail guard, auxiliary/reserve
constable, international police officer                              13     44.8
Traffic authority, Canadian Border Services Agency,
corrections, civilian staff at police department, dispatch            3     10.3
Volunteer (Community safety office)                                   1      3.4

Table 5-8 Previous policing experience of Class 152 prior to police academy

5.1.2.1 Demographic Characteristics of 152 FTOs

The FTO survey was sent to FTOs for all 36 recruits in Class 152. Eleven (11) FTOs responded, for a response rate of 30.6%. Equal numbers of FTO respondents for Class 152 were in the age ranges of 30-34, 35-39, and 40-44 years (27.3% each). The other most common characteristics were the same as for the group of FTOs for Class 151: the responding FTOs had five to nine years of service (45.5%), had been an FTO for four years or less (63.6%), and had trained four or fewer recruits (63.6%). Table 5-9 outlines the full demographic characteristics for the FTO respondents for Class 152. Despite a request to the departments that FTOs who had trained recruits in Class 151 also be used to train recruits from Class 152, to help with the evaluation project, only one FTO trained recruits in both Class 151 and Class 152.

Demographic                     Frequency  Percent
Gender
  Male                                  9     81.8
  Female                                2     18.2
Age Range
  25-29                                 0        0
  30-34                                 3     27.3
  35-39                                 3     27.3
  40-44                                 3     27.3
  45-49                                 1      9.1
  50-54                                 1      9.1
Years of Service
  0-4                                   0        0
  5-9                                   5     45.5
  10-14                                 4     36.4
  15-19                                 1      9.1
  20-24                                 1      9.1
Years as FTO
  0-4                                   7     63.6
  5-9                                   1      9.1
  10-14                                 2     18.2
  15-19                                 1      9.1
Number of Recruits Trained
  0-4                                   7     63.6
  5-9                                   2     18.2
  10-14                                 2     18.2

Table 5-9 Demographic characteristics for FTO respondents for Class 152

The demographic characteristics of the recruits who were trained by the FTOs who responded to the survey are presented in Table 5-10.
Four of the FTO responses were unable to be matched to a recruit, so information for seven recruits is provided in the table. Two FTOs completed the survey but the recruit they were training did not, and two FTOs completed the survey but did not provide a name and so could not be matched with their recruit.

Demographic                     Frequency  Percent
Recruit Gender
  Male                                  6     85.7
  Female                                1     14.3
Recruit Age Range
  20-24                                 1     14.3
  25-29                                 2     28.6
  30-34                                 4     57.1
  35-39                                 0        0
  40-44                                 0        0
Recruit Previous Education
  Some college                          0        0
  College diploma                       1     14.3
  Some university                       1     14.3
  University degree                     4     57.1
  Graduate degree                       1     14.3
  Other                                 0        0
Recruit Previous Police Experience
  Yes                                   4     57.1
  No                                    3     42.9

Table 5-10 Characteristics of recruits trained by FTO respondents in Class 152

5.1.3 Competency-based delivery model: Class 153

Due to an increase in hiring by departments, primarily Vancouver, the class size for Class 153 was increased from the maximum of 36 that the program was designed for to 48 recruits.

                        Male  Female  20-24  25-29  30-34  35-39  40-44  Did not  Total
                                                                         respond
Class statistics  n       32      16     11     22     10      1      4      N/A     48
                  %     66.7    33.3   22.9   45.8   20.8    2.1    8.3      N/A    N/A
Survey 1          n       14       6      3     12      3      1      1       28     20
respondents       %     70.0    30.0   15.0   60.0   15.0    5.0    5.0     58.3  41.7%
Survey 2          n        6       1      1      2      2      1      1       41      7
respondents       %     85.7    14.3   14.3   28.6   28.6   14.3   14.3     85.4  14.6%

Table 5-11 Class 153 demographic characteristics and survey response rates

The departments that had recruits in Class 153 were: Abbotsford (n=3, 6.3%), Central Saanich (n=1, 2.1%), Delta (n=2, 4.2%), New Westminster (n=4, 8.3%), Saanich (n=2, 4.2%), Transit (n=5, 10.4%), Vancouver (n=26, 54.2%), Victoria (n=3, 6.3%), and West Vancouver (n=2, 4.2%). Table 5-12 presents the recruits’ reported education levels prior to starting at the Police Academy for the 20 recruits who completed Survey 1.
The most common levels of previous education were a college diploma and an undergraduate degree, with 40.0% of the respondents having earned a college diploma and 40.0% an undergraduate degree before starting at the Police Academy.

Education              Frequency  Percent
Some college                   0        0
College diploma                8     40.0
Some university                3     15.0
Undergraduate degree           8     40.0
Graduate degree                1      5.0
Other                          0        0
Did not respond                0        0

Table 5-12 Education levels of Class 153 respondents prior to police academy

Within the respondents, 15 recruits (75.0%) had no previous policing experience and five recruits (25.0%) indicated they had some previous policing experience. Table 5-13 indicates the types of previous policing experience reported by the recruits.

Experience                                                    Frequency  Percent
No previous police related experience                                15     75.0
Community Safety Officer, jail guard, auxiliary/reserve
constable, international police officer                               3     15.0
Traffic authority, Canadian Border Services Agency,
corrections, civilian staff at police department, dispatch            1      5.0
Volunteer (Community safety office)                                   0        0
Experience not described                                              1      5.0

Table 5-13 Previous policing experience of Class 153 prior to police academy

5.1.3.1 Demographic Characteristics of 153 FTOs

The FTO survey was sent to FTOs for all 48 recruits in Class 153. Nine FTOs responded, for a response rate of 18.8%. The most common characteristics for the FTO respondents for Class 153 were that they were in the age range of 45-49 years (44.4%), had 10-14 years of service (55.6%), had been an FTO for four years or less (66.7%), and had trained four or fewer recruits (66.7%). Table 5-14 outlines the full demographic characteristics for the FTO respondents for Class 153.
Demographic                     Frequency  Percent
Gender
  Male                                  7     77.8
  Female                                2     22.2
Age Range
  25-29                                 0        0
  30-34                                 2     22.2
  35-39                                 1     11.1
  40-44                                 2     22.2
  45-49                                 4     44.4
Years of Service
  0-4                                   0        0
  5-9                                   1     11.1
  10-14                                 5     55.6
  15-19                                 2     22.2
  20-24                                 1     11.1
Years as FTO
  0-4                                   6     66.7
  5-9                                   1     11.1
  10-14                                 2     22.2
Number of Recruits Trained
  0-4                                   6     66.7
  5-9                                   1     11.1
  10-14                                 1     11.1
  15-19                                 1     11.1

Table 5-14 Demographic characteristics for FTO respondents for Class 153

The demographic characteristics of the recruits who were trained by the FTOs who responded to the survey are presented in Table 5-15. Seven of the FTO responses were unable to be matched to a recruit, so information for two recruits is provided in the table. Four FTOs completed the survey but the recruit they trained did not, while three FTOs completed the survey but did not provide a name and so were unable to be matched to a recruit in the class.

Demographic                     Frequency  Percent
Recruit Gender
  Male                                  2      100
  Female                                0        0
Recruit Age Range
  20-24                                 0        0
  25-29                                 1     50.0
  30-34                                 1     50.0
  35-39                                 0        0
  40-44                                 0        0
Recruit Previous Education
  Some college                          0        0
  College diploma                       0        0
  Some university                       0        0
  University degree                     2      100
  Graduate degree                       0        0
  Other                                 0        0
Recruit Previous Police Experience
  Yes                                   1     50.0
  No                                    1     50.0

Table 5-15 Characteristics of recruits trained by FTO respondents in Class 153

5.1.4 Competency-based delivery model: Exam Assessors

The assessor survey was sent to the 28 assessors who had acted as exam assessors for Classes 152 or 153. Seventeen (17) assessors responded, for an apparent response rate of 60.7%. Of these responses, however, seven were not completed, dropping the actual response rate to 35.7%. The most common characteristics for the assessor respondents were that they were female (60%), in the age range of 50-54 years (60%), had 10-14 years of service or 20-24 years of service (30% each), and had been an assessment centre assessor for 5-9 years before the program was shut down (70%).
Of the group, five also had experience as an FTO (50%), and of those, the most common length of time as an FTO was 0-4 years (50% of respondents, as only four people indicated how long they had been an FTO). The majority of respondents (60%) had assessed both the Week 5 Progress Assessment and the Week 12 Final exams in Block I for either Class 152 or Class 153.

Demographic                             Frequency  Percent
Gender
  Male                                          4     40.0
  Female                                        6     60.0
Age Range
  25-29                                         0        0
  30-34                                         1     10.0
  35-39                                         0        0
  40-44                                         1     10.0
  45-49                                         2     20.0
  50-54                                         6     60.0
Years of Service
  0-4                                           0        0
  5-9                                           0        0
  10-14                                         3     30.0
  15-19                                         2     20.0
  20-24                                         3     30.0
  25-29                                         1     10.0
  30-34                                         1     10.0
Years as an Assessment Centre assessor
  0-4                                           2     20.0
  5-9                                           7     70.0
  10-14                                         0        0
  15-19                                         1     10.0
FTO Experience
  Yes                                           5     50.0
  No                                            5     50.0
Years as an FTO
  0-4                                           2     50.0
  5-9                                           0        0
  10-14                                         1     25.0
  15-19                                         0        0
  20-24                                         1     25.0
Exam Assessed
  Week 5                                        2     20.0
  Week 12                                       2     20.0
  Both                                          6     60.0

Table 5-16 Demographic characteristics of competency-based exam assessors

5.2 Quantitative Survey Analysis

Data analysis was complicated by the low response rates, particularly for Survey 2, the FTO survey, and Class 153. To facilitate analysis, the two competency-based delivery model data sets, Class 152 and Class 153, were grouped for analysis. Before grouping, the data were analyzed across class categories for Classes 152 and 153 to ensure that there were no significant differences and that the data could be grouped. To determine if there were significant differences, I used the two global questions related to overall ability and overall preparation for Block II. Additionally, since I was interested in determining if there was a difference in how recruits responded to these questions before and after they had experienced what it was like to actually work as a patrol level police officer, I created a new column that measured the differences between Survey 2 and Survey 1 (R2-R1). This column was also analyzed to ensure there were no significant differences between Class 152 and Class 153 before their data were grouped.
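The retain/reject logic applied to each of these comparisons follows the standard pattern sketched below; the helper name and the ratings are illustrative assumptions (the actual tests were run in SPSS), shown only to make the decision rule concrete.

```python
from scipy.stats import mannwhitneyu

ALPHA = 0.05  # significance level used throughout the analysis

def compare_classes(ratings_152, ratings_153, question):
    """Mann-Whitney U test of one survey question between two independent
    classes, returning the retain/reject decision at ALPHA."""
    _, p = mannwhitneyu(ratings_152, ratings_153, alternative="two-sided")
    decision = ("Retain the null hypothesis" if p > ALPHA
                else "Reject the null hypothesis")
    return {"question": question, "p": p, "decision": decision}

# Invented Likert-style ratings for the overall-ability question; the two
# distributions are nearly identical, so the null should be retained.
result = compare_classes([4, 3, 4, 5, 4, 3], [3, 4, 4, 3, 4, 4],
                         "Overall ability, Recruit Survey 1")
```

Retaining the null hypothesis across every question of interest is what licenses pooling Class 152 and Class 153 into one group.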
The data were coded as belonging to either Class 152 or Class 153 and the responses across classes were compared for the following questions:

1. Overall, please rate your general ability to perform as a Recruit Constable in Block II – Survey administration 1
2. Overall, please rate your general ability to perform as a Recruit Constable in Block II – Survey administration 2
3. The difference in overall ability reported between Survey 2 and Survey 1 (R2-R1)
4. How well do you feel your Block I training prepared you to meet the expectations as a Recruit Constable in Block II – Survey administration 1
5. How well do you feel your Block I training prepared you to meet the expectations as a Recruit Constable in Block II – Survey administration 2
6. The difference in overall training preparation reported between Survey 2 and Survey 1 (R2-R1)

In each of these cases, the null hypothesis was no difference in the distribution of responses between Class 152 and Class 153. The Mann-Whitney U test was used to test this null hypothesis because the samples being compared were independent. Analysis was carried out to a significance level of p=0.05. Table 5-17 indicates the results of this analysis for each of the six comparisons listed above. In each of the tests, the null hypothesis was retained, indicating no difference in the distribution between classes and that the class data could be grouped.

1. The distribution of overall ability responses from Recruit Survey 1 is the same across Class 152 and Class 153
   Test: Independent Samples Mann-Whitney U Test; Sig.: 0.164; Decision: Retain the null hypothesis
2. The distribution of overall ability responses from Recruit Survey 2 is the same across Class 152 and Class 153
   Test: Independent Samples Mann-Whitney U Test; Sig.: 0.181¹; Decision: Retain the null hypothesis
3. The distribution of the difference between Recruit Survey 1 and Recruit Survey 2 (R2-R1) overall ability responses is the same across Class 152 and Class 153
   Test: Independent Samples Mann-Whitney U Test; Sig.: 0.224¹; Decision: Retain the null hypothesis
4. The distribution of overall preparedness responses from Recruit Survey 1 is the same across Class 152 and Class 153
   Test: Independent Samples Mann-Whitney U Test; Sig.: 0.104; Decision: Retain the null hypothesis
5. The distribution of overall preparedness responses from Recruit Survey 2 is the same across Class 152 and Class 153
   Test: Independent Samples Mann-Whitney U Test; Sig.: 0.388¹; Decision: Retain the null hypothesis
6. The distribution of the difference between Recruit Survey 1 and Recruit Survey 2 (R2-R1) overall preparedness responses is the same across Class 152 and Class 153
   Test: Independent Samples Mann-Whitney U Test; Sig.: 0.864¹; Decision: Retain the null hypothesis

Asymptotic significances are displayed. The significance level is 0.05.
¹ Exact significance is displayed for this test.

Table 5-17 Mann Whitney U test results comparing distribution of responses to Recruit Survey 1 and Recruit Survey 2 between Class 152 and 153

To further test if it was acceptable to group the data from Class 152 and Class 153, I analyzed the responses of the FTOs and the difference between recruit and FTO responses. As the recruit response rate for Survey 2 was much lower than for Survey 1, the FTO data were compared with the data from Survey 1 to increase the number of comparison points for analysis. The following four questions were examined:
Overall, please rate the general ability of your recruit to perform as a Recruit Constable in Block II – FTO survey 2. The difference in overall ability reported between FTO and recruit Survey 1 (FTO-R1) 3. How well do you feel your recruit’s Block I training prepared them to meet the expectations as a Recruit Constable in Block II – FTO survey 4. The difference in overall training preparation reported between FTO and recruit Survey 1 (FTO-R1) In each of these cases, the null hypothesis was no difference in the distribution of responses between Class 152 and Class 153. The Mann Whitney U test was used to test this null hypothesis because the samples being compared were independent. Analysis was carried out to a significance level of p=0.05. # Null Hypothesis Test Sig. Decision 1 The distribution of overall ability responses from the FTO survey is the same across Class 152 and Class 153 Independent Samples: Mann-Whitney U Test 0.6671 Retain the null hypothesis 2 The distribution of the difference between FTO survey and Recruit Survey 1 (FTO-R1) overall ability responses is the same across Class 152 and Class 153 Independent Samples: Mann-Whitney U Test 1.0001 Retain the null hypothesis 3 The distribution of overall preparedness responses from Recruit Survey 1 is the same across Class 152 and Class 153 Independent Samples: Mann-Whitney U Test 0.8891 Retain the null hypothesis 141 4 The distribution of the difference between FTO survey and Recruit Survey 1 (FTO-R1) overall preparedness responses is the same across Class 152 and Class 153 Independent Samples: Mann-Whitney U Test 0.5001 Retain the null hypothesis Asymptotic significances are displayed. The significance level is 0.05 1 Exact significance is displayed for this test Table 5-18Table 5-18 indicates the results of this analysis for each of the four comparisons listed above. 
In each of the tests, the null hypothesis was retained, indicating no difference in the distribution between classes and the class data could be grouped. # Null Hypothesis Test Sig. Decision 1 The distribution of overall ability responses from the FTO survey is the same across Class 152 and Class 153 Independent Samples: Mann-Whitney U Test 0.6671 Retain the null hypothesis 2 The distribution of the difference between FTO survey and Recruit Survey 1 (FTO-R1) overall ability responses is the same across Class 152 and Class 153 Independent Samples: Mann-Whitney U Test 1.0001 Retain the null hypothesis 3 The distribution of overall preparedness responses from Recruit Survey 1 is the same across Class 152 and Class 153 Independent Samples: Mann-Whitney U Test 0.8891 Retain the null hypothesis 4 The distribution of the difference between FTO survey and Recruit Survey 1 (FTO-R1) overall preparedness responses is the same across Class 152 and Class 153 Independent Samples: Mann-Whitney U Test 0.5001 Retain the null hypothesis Asymptotic significances are displayed. The significance level is 0.05 1 Exact significance is displayed for this test Table 5-18 Mann Whitney U test results comparing distribution of responses to the FTO survey and the difference between Recruit Survey 1 and the FTO survey between Class 152 and 153 Since the null hypothesis was retained in each of the 10 areas analyzed, indicating no difference in the distribution of responses between Class 152 and Class 153, it was deemed 142 acceptable to group the data from the two classes for further analysis. The data was combined into a “competency-based delivery model” grouping. 5.2.1 Differences in perception before and after Block II experience One research question asked if a difference in how recruits perceived their ability and their preparation before and after they had experience working as a patrol officer during Block II was observed. 
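A before-and-after comparison of ordinal survey ratings calls for a paired non-parametric test. As a minimal sketch (the ratings below are hypothetical, not the study’s data), SciPy’s Wilcoxon signed-rank test can be applied as follows:

```python
# Hypothetical paired ratings on the survey's 1-5 ability scale
# (1 = "has knowledge" ... 5 = "act as a supervisor or instructor").
# These numbers are illustrative only, not the study's data.
from scipy.stats import wilcoxon

before_block2 = [3, 2, 3, 4, 2, 3, 3, 2, 4, 3]  # Survey 1, one rating per recruit
after_block2  = [3, 3, 3, 4, 3, 2, 3, 3, 4, 3]  # Survey 2, same recruits, same order

# H0: the median of the paired differences (Survey 1 - Survey 2) is zero.
stat, p = wilcoxon(before_block2, after_block2)
print(f"W = {stat}, p = {p:.3f}")
if p >= 0.05:
    print("Retain the null hypothesis: no significant change after Block II")
```

Zero differences and ties are common with Likert-style data; SciPy discards zero differences by default, mirroring the standard Wilcoxon procedure.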
To address this question, the Wilcoxon Signed Rank Test was used to compare the answers provided by each recruit in Survey 1, administered before the start of Block II training, and Survey 2, administered after ten weeks of Block II training. The Wilcoxon Signed Rank Test is used to compare related samples, such as the same group at two different points in time, as is the case with this sample (Cohen et al., 2011). The overall ability and overall preparation for Block II questions were used to compare the differences between these two surveys. In each case, the null hypothesis stated no difference in the answers before and after the Block II experience. Phrased another way, the null hypothesis was that the median of differences between Survey 1 and Survey 2 was zero. The Wilcoxon Signed Rank Test was used to test this null hypothesis for each of the following survey groupings:

1. Lecture-based group: answer to the question “Overall, please rate your general ability to perform as a Recruit Constable in Block II.” (n=26)
2. Lecture-based group: answer to the question “How well do you feel your Block I training prepared you to meet the expectations of a Recruit Constable in Block II?” (n=26)
3. Competency-based group: answer to the question “Overall, please rate your general ability to perform as a Recruit Constable in Block II.” (n=15; 16 recruits responded to Survey 2, but one did not provide a name and so could not be matched to their Survey 1 response)
4. Competency-based group: answer to the question “How well do you feel your Block I training prepared you to meet the expectations of a Recruit Constable in Block II?” (n=15, for the same reason)

Table 5-19 indicates that, in each case, the null hypothesis was retained, indicating no significant difference between the recruits’ perceptions of their ability or preparation before Block II began and after ten weeks of Block II training.

#  Null hypothesis: the median of the differences between                  Test                        Sig.    Decision
   Recruit Survey 1 and Recruit Survey 2 equals 0
1  Overall ability, lecture-based delivery model recruits                  Wilcoxon Signed Rank Test   0.282   Retain the null hypothesis
2  Overall preparedness, lecture-based delivery model recruits             Wilcoxon Signed Rank Test   0.713   Retain the null hypothesis
3  Overall ability, competency-based delivery model recruits               Wilcoxon Signed Rank Test   0.366   Retain the null hypothesis
4  Overall preparedness, competency-based delivery model recruits          Wilcoxon Signed Rank Test   0.180   Retain the null hypothesis

Asymptotic significances are displayed; the significance level is 0.05.

Table 5-19 Differences between recruit perceptions before and after Block II training experience

5.2.2 Comparison within classes

To determine whether any factors influenced recruit or FTO responses within each group, analysis was conducted using the overall rankings from the Recruit Survey 1 responses and the FTO responses from both the lecture-based and competency-based delivery models. To calculate means for each category, the following values were assigned to responses:

For the overall ability question: 1 = “has knowledge”, 2 = “act under full supervision”, 3 = “act under moderate supervision”, 4 = “act independently”, and 5 = “act as a supervisor or instructor”.

For the overall preparedness question: 1 = “extremely poorly prepared”, 2 = “poorly prepared”, 3 = “well prepared”, 4 = “extremely well prepared”, and 0 = “N/A”. The N/A response was not excluded from the analysis because each of the competencies has been determined to be core to policing at the constable level; a rating of N/A therefore represents a complete failure on the part of recruit training to meet the basic requirements, and this response should also be captured in the analysis.

As outlined in the previous section, the responses from Recruit Survey 1 were used for analysis to increase the sample size. Because recruits completed this survey prior to meeting and interacting with their FTO, I did not analyze the effect of FTO characteristics, such as FTO gender or years of service, on recruit responses. For ease of presentation, the tables for these analyses are included in Appendix C - Consistency Tables instead of directly in the text.

5.2.2.1 Lecture-based delivery model

Analyses were conducted to determine whether any recruit characteristics or FTO characteristics influenced the responses. The following sections present the results from recruits in Class 151, who were trained in the lecture-based model.
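The coding scheme and test selection described above can be sketched as follows (the response lists are hypothetical, and the dictionary name is mine, not the survey instrument’s):

```python
# Hypothetical sketch: code ordinal responses to numbers, compare group
# means descriptively, then test the distributions with Mann-Whitney U
# (two categories) or Kruskal-Wallis (three or more categories).
from statistics import mean
from scipy.stats import kruskal, mannwhitneyu

ABILITY_SCORES = {
    "has knowledge": 1,
    "act under full supervision": 2,
    "act under moderate supervision": 3,
    "act independently": 4,
    "act as a supervisor or instructor": 5,
}

# A two-category characteristic (e.g. previous policing experience: yes/no).
experienced = [ABILITY_SCORES[r] for r in
               ["act independently", "act under moderate supervision",
                "act under moderate supervision"]]
no_experience = [ABILITY_SCORES[r] for r in
                 ["act under moderate supervision",
                  "act under full supervision",
                  "act under moderate supervision"]]

print(mean(experienced), mean(no_experience))   # descriptive comparison
u_stat, p_two = mannwhitneyu(experienced, no_experience)

# A characteristic with more than two categories (e.g. age ranges).
h_stat, p_many = kruskal([2, 3, 3], [3, 3, 4], [3, 4, 4])
print(f"Mann-Whitney p = {p_two:.3f}, Kruskal-Wallis p = {p_many:.3f}")
```

Both tests compare distributions of ranks rather than means, which is why they suit ordinal Likert-style data better than a t-test or ANOVA would.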
5.2.2.1.1 Recruit characteristics

Within the lecture-based delivery model responses, recruit responses to the overall ability and overall preparedness questions were analyzed across recruit gender, age range, post-secondary education level, and previous policing experience.

The results for recruit gender are presented in Table C-1. The mean values indicate that female recruits scored themselves lower in both overall ability and overall preparedness, but the Mann-Whitney U Test retained the null hypothesis that the distribution across genders was equal, so the observed difference between genders was not significant.

The results for recruit age range are presented in Table C-2. As there were more than two categories of response, the Kruskal-Wallis test was used instead of the Mann-Whitney U Test. Although differences are observed in the means of the age range categories for the two global questions, these differences were not large enough to reject the null hypothesis of no difference in distribution across age range categories. The null hypothesis was retained, indicating recruit age was not a factor in responses to the global questions.

The results for recruit post-secondary education level are presented in Table C-3. Again, although differences are observed in the means of the education level categories for the two global questions, these differences were not large enough to reject the null hypothesis of no difference in distribution across education level categories. The null hypothesis was retained, indicating that recruit education level was not a factor in recruit responses to the global questions.

Lastly, the results for recruit previous policing experience are presented in Table C-4. Here there were only two categories for analysis, as the recruits in this particular class either had no previous policing experience or fell into the category that included community safety police, jail guards, auxiliary/reserve constables, and international police. Although there are observed differences in the mean values for each of the global questions, with the recruits who had previous experience rating themselves slightly higher than their inexperienced classmates, the difference was not enough to reject the null hypothesis. The null hypothesis of no difference in distribution across previous policing experience categories was retained, indicating that previous policing experience was not a factor in recruit responses to the global questions.

5.2.2.1.2 FTO characteristics

To examine whether FTO characteristics had any influence on how the FTOs rated their Block II recruits, the ratings were grouped and analyzed across FTO gender, FTO years of service, FTO years as an FTO, and the number of recruits an FTO had trained.

Table C-5 shows the results of the mean comparison and Mann-Whitney U Test across FTO gender. Although the results indicate that female FTOs rated their recruits lower in overall ability but marginally higher in preparedness, the differences were not large enough to reject the null hypothesis that the distribution of responses was the same across FTO gender categories. Table C-6 shows the results of the mean comparison and Kruskal-Wallis Test across FTO years of service. The results indicate that more experienced FTOs tended to rate their recruits’ ability slightly higher, but the differences were not large enough to reject the null hypothesis that the distribution of responses was the same across FTO years of service. No trend emerged in how the FTOs rated their recruits’ preparedness, and the null hypothesis was again retained.
Table C-7 shows the results of the mean comparison and Kruskal-Wallis Test across FTO years as a field trainer. The differences in this category were not sufficient to reject the null hypothesis, indicating no significant difference in the distribution of responses based on FTO years as a field trainer. Table C-8 shows the results of the mean comparison and Kruskal-Wallis Test across the number of recruits an FTO had trained. The differences in this category were also not sufficient to reject the null hypothesis, indicating no significant difference in the distribution of responses based on the number of recruits an FTO has trained.

5.2.2.1.3 Recruit characteristics on FTO responses

In addition to the potential influence of FTO characteristics on FTO responses, the potential influence of recruit characteristics on FTO responses was also examined. FTO responses were analyzed across recruit gender, age range, post-secondary education level, and previous policing experience.

Table C-9 shows the means and Mann-Whitney U Test values for FTO responses grouped across recruit gender. Although female recruits were rated lower in both overall ability and overall preparation than their male classmates, this difference is not large enough to reject the null hypothesis that the distribution of responses is the same. Table C-10 shows the means and Kruskal-Wallis Test values for FTO responses grouped against recruit age. The differences in mean ratings are minimal and the null hypothesis that the distribution across categories is equal is retained. Table C-11 shows the means and Kruskal-Wallis Test values for FTO responses grouped across recruit post-secondary education. Again, the differences in mean ratings are minimal and the null hypothesis is retained. Lastly, Table C-12 shows the means and Mann-Whitney U Test values for FTO responses grouped by recruit previous policing experience. For this class, the recruits fell into one of two categories of previous policing experience, as indicated in Section 5.2.2.1.1, so the Mann-Whitney U Test was used. Recruit previous police experience did not influence the ratings, and the null hypothesis that the distribution of responses across categories of experience was equal was retained.

5.2.2.2 Competency-based delivery model

As with the lecture-based delivery model, analyses were conducted to determine whether any recruit characteristics or FTO characteristics influenced the responses.

5.2.2.2.1 Recruit characteristics

Recruit responses to the overall ability and overall preparedness questions were analyzed when grouped against recruit gender, age range, post-secondary education, and previous police experience. In each case, the null hypothesis tested was that the distribution of responses was equal across the categories examined. In all four of these areas, the null hypotheses were retained, indicating that none of the recruit characteristics had a significant impact on recruits’ overall sense of their own ability or on how prepared they believed they were. Table C-13 through Table C-16 show the recruits’ responses grouped across gender, age category, post-secondary experience, and previous police experience, respectively.

5.2.2.2.2 FTO characteristics

FTO responses were examined across the trainer characteristics of gender, age category, years of service, years as an FTO, and number of recruits trained, as represented in Table C-17 through Table C-21. The null hypothesis for each of the tests was that the distribution of responses was equal across the groupings. In all cases the null hypotheses were retained, indicating that no FTO characteristics significantly influenced how the FTOs rated their recruits. Interestingly, in the lecture-based delivery model, female FTOs rated their recruits’ ability lower than the male FTOs (although not significantly), whereas in the competency-based model, female FTOs rated their recruits’ overall ability higher than did the male FTOs (although also not significantly).

5.2.2.2.3 Recruit characteristics on FTO responses

To determine whether any characteristics of recruits influenced how their FTOs rated them, I carried out an analysis of FTO responses grouped across recruit gender (Table C-22), recruit age category (Table C-23), recruit post-secondary education (Table C-25), and recruit previous police experience (Table C-26). In each case, the null hypothesis tested was that the distribution of responses was equal across all categories examined. The sample size was extremely low, as only nine FTO responses could be matched to recruit responses.

Recruit gender, recruit post-secondary education, and recruit previous policing experience showed no influence on FTO responses, and the null hypotheses were all retained in these cases. For recruit age category, however, the Kruskal-Wallis Test scores indicated that the null hypothesis was to be rejected, meaning that recruit age category did have a significant influence on the FTO rating of the recruit. Cross-tabulation reports were run for both the overall ability and the overall preparedness questions to identify the source of the difference, as non-parametric tests do not provide this information. The cross-tabulation reports shown in Table C-24 indicate that, for the overall ability question, recruits in the age range of 25-29 years (n=3) were ranked below their peers, with one recruit ranked as “has knowledge” and the remaining two ranked as “act under full supervision”. All recruits in the other age categories (n=6) were ranked as “act under moderate supervision” (n=5) or “act independently” (n=1). For the overall preparedness question, recruits in both the 20-24 (n=1) and 25-29 (n=3) age ranges were ranked as “poorly prepared”, whereas their classmates who were 30-34 were ranked as “well prepared” (n=4) or “extremely well prepared” (n=1).
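The cross-tabulation step can be sketched with pandas (the records below are hypothetical and merely mirror the pattern described for Table C-24):

```python
# Hypothetical sketch of a cross-tabulation report: after a significant
# Kruskal-Wallis result, a crosstab of counts shows which category drives
# the difference, which the non-parametric test itself cannot reveal.
import pandas as pd

records = pd.DataFrame({
    "recruit_age_range": ["25-29", "25-29", "25-29",
                          "30-34", "30-34", "30-34"],
    "fto_ability_rating": ["has knowledge",
                           "act under full supervision",
                           "act under full supervision",
                           "act under moderate supervision",
                           "act under moderate supervision",
                           "act independently"],
})

# Rows are age ranges, columns are FTO ratings, cells are counts.
report = pd.crosstab(records["recruit_age_range"],
                     records["fto_ability_rating"])
print(report)
```

Reading down a column (or across a row) of such a report makes the concentration of low ratings in one age band immediately visible.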
5.2.3 Comparison across classes

To answer the most central question, responses were compared between the lecture-based and competency-based groups, using Recruit Survey 1 for analysis. The FTO responses were also compared across both groups, to answer the question of whether there was a difference in how FTOs described their recruits’ ability and preparedness. This analysis was carried out descriptively by comparing means across classes and statistically using the Mann-Whitney U Test and, where a statistically significant difference was detected, carrying out a cross-tabulation analysis to determine the source of the detected difference. This was conducted for the global scores as well as for each of the Police Sector Council Constable competencies.

5.2.3.1 Global comparison across classes

Figure 5-1 and Figure 5-2 show the mean rankings for overall ability and preparedness from Recruit Survey 1 and from the FTO survey respectively. Table 5-20 shows the values represented in each of these figures, including the sample size, mean value, and standard deviation. For the overall ability question, recruits in the competency-based delivery model ranked themselves slightly higher than recruits in the lecture-based delivery model, with means of 2.71 and 2.69 respectively. For the overall question about how well Block I prepared them for Block II, however, recruits in the lecture-based model ranked themselves higher (3.14) than those in the competency-based model (3.04). The FTOs ranked recruits in the lecture-based model higher in both global ability and overall preparation (means of 2.83 and 3.00) than the recruits from the competency-based delivery model (means of 2.67 for both ability and preparedness).
Figure 5-1 Global mean ratings for overall ability (blue) and overall preparation (red) from Recruit Survey 1 clustered across training delivery methods

Figure 5-2 Global mean ratings for overall ability (blue) and overall preparation (red) from FTO survey clustered across training delivery methods

                                 Overall ability      Overall ability   Overall preparation   Overall preparation
                                 (Recruit Survey 1)   (FTO survey)      (Recruit Survey 1)    (FTO survey)
Lecture-based       Mean         2.69                 2.83              3.14                  3.00
                    N            35                   12                35                    10
                    Std. Dev.    .676                 .718              .430                  .667
Competency-based    Mean         2.71                 2.67              3.04                  2.67
                    N            49                   9                 49                    9
                    Std. Dev.    .645                 .866              .498                  .707
Total               Mean         2.70                 2.76              3.08                  2.84
                    N            84                   21                84                    19
                    Std. Dev.    .655                 .768              .471                  .688

Table 5-20 Global mean ratings for overall ability and overall preparation from Recruit Survey 1 and FTO responses clustered across training delivery methods

The Mann-Whitney U Test was used to test the null hypothesis of no difference in distribution of responses across the class delivery models:

1. No difference in distribution of responses across lecture-based and competency-based delivery models for overall ability ratings from Recruit Survey 1
2. No difference in distribution of responses across lecture-based and competency-based delivery models for overall ability ratings from the FTO survey
3. No difference in distribution of responses across lecture-based and competency-based delivery models for overall preparation ratings from Recruit Survey 1
4. No difference in distribution of responses across lecture-based and competency-based delivery models for overall preparation ratings from the FTO survey

Table 5-21 shows the results of this analysis, indicating that, for each of the questions, the null hypothesis is retained. The observed differences, then, are not statistically significant.

#  Null hypothesis: the distribution of the following responses is the     Test              Sig.     Decision
   same across lecture-based and competency-based delivery models
1  Recruit responses, overall ability question                             Mann-Whitney U    0.652    Retain the null hypothesis
2  Recruit responses, overall preparedness question                        Mann-Whitney U    0.808*   Retain the null hypothesis
3  FTO responses, overall recruit ability question                         Mann-Whitney U    0.149    Retain the null hypothesis
4  FTO responses, overall recruit preparedness question                    Mann-Whitney U    0.356*   Retain the null hypothesis

Asymptotic significances are displayed; the significance level is 0.05. * Exact significance is displayed for this test.

Table 5-21 Mann-Whitney U Test of overall ability and preparedness from Recruit Survey 1 and FTO survey, grouped across training type

The following sections examine recruit rankings for each of the core Constable Competencies: adaptability, ethical accountability, interactive communication, organizational awareness, problem solving, risk management, stress tolerance, teamwork, and written skills.

5.2.3.2 Adaptability

Figure 5-3 and Figure 5-4 show the mean rankings for ability and preparedness for the adaptability competency from Recruit Survey 1 and from the FTO survey respectively. Table 5-22 shows the values represented in each of these figures, including the sample size, mean value, and standard deviation.
For the ability question in the adaptability competency, recruits in the competency-based delivery model ranked themselves considerably lower than recruits in the lecture-based delivery model, with means of 2.88 and 3.26 respectively. For the question about how well Block I prepared them for Block II with respect to the adaptability competency, recruits in the lecture-based model ranked their preparedness slightly lower (2.71) than those in the competency-based model (2.94). The FTOs ranked recruits in the lecture-based model slightly higher in ability for this competency (means of 2.85 for lecture-based and 2.78 for competency-based) but slightly lower for preparedness (means of 2.38 for lecture-based and 2.44 for competency-based).

Figure 5-3 Mean ratings for ability in the Adaptability competency area from Recruit Survey 1 (blue) and the FTO survey (red) clustered across delivery method

Figure 5-4 Mean ratings for preparation in the Adaptability competency area from Recruit Survey 1 (blue) and the FTO survey (red) clustered across training method

                                 Ability              Ability           Preparedness          Preparedness
                                 (Recruit Survey 1)   (FTO survey)      (Recruit Survey 1)    (FTO survey)
Lecture-based       Mean         3.26                 2.85              2.71                  2.38
                    N            35                   13                35                    13
                    Std. Dev.    .852                 .899              1.045                 1.121
Competency-based    Mean         2.88                 2.78              2.94                  2.44
                    N            49                   9                 49                    9
                    Std. Dev.    .634                 .972              .317                  1.014
Total               Mean         3.04                 2.82              2.85                  2.41
                    N            84                   22                84                    22
                    Std. Dev.    .752                 .907              .720                  1.054

Table 5-22 Mean ratings for ability and preparation in the adaptability competency from Recruit Survey 1 and FTO responses clustered across training delivery methods

The Mann-Whitney U Test was used to test the null hypothesis of no difference in distribution of responses across the class delivery models:

1. No difference in distribution of responses across lecture-based and competency-based delivery models for ability in the adaptability competency area for ratings from Recruit Survey 1
2. No difference in distribution of responses across lecture-based and competency-based delivery models for ability in the adaptability competency area for ratings from the FTO survey
3. No difference in distribution of responses across lecture-based and competency-based delivery models for preparation in the adaptability competency area for ratings from Recruit Survey 1
4. No difference in distribution of responses across lecture-based and competency-based delivery models for preparation in the adaptability competency area for ratings from the FTO survey

Table 5-23 shows the results of this analysis. For each of the FTO questions, and for the recruits’ responses about how prepared they were in this competency area, the null hypothesis is retained: there are no statistically significant differences in how the FTOs rated the recruits, or in how the recruits rated their own preparedness, across delivery methods. The null hypothesis is rejected, however, in analysis 1, indicating a statistically significant difference between how the recruits in the lecture-based delivery model and those in the competency-based delivery model rated themselves for their ability in the adaptability competency area. The means of the class data indicate that the competency-based recruits rated themselves lower than those from the lecture-based program. Table 5-24 presents a cross-tabulation report on the recruit ability question to examine the breakdown of responses. The cross-tabulation report shows that the large majority of the recruits in the competency-based delivery model scored themselves as able to “act under moderate supervision” (n=35), and the next most frequent category was “act under full supervision” (n=7). This distribution is in contrast to the lecture-based delivery model, where recruits most frequently scored themselves as able to “act independently” (n=16), followed by “act under moderate supervision” (n=14). Therefore, the reason for rejecting the null hypothesis in this case was that the recruits in the competency-based delivery model more frequently scored their ability in the adaptability competency area lower than did the recruits in the lecture-based delivery model.

#  Null hypothesis: the distribution of the following responses is the     Test              Sig.     Decision
   same across lecture-based and competency-based delivery models
1  Recruit responses, Adaptability ability question                        Mann-Whitney U    0.005    Reject the null hypothesis
2  FTO responses, Adaptability ability question                            Mann-Whitney U    0.896*   Retain the null hypothesis
3  Recruit responses, Adaptability preparedness question                   Mann-Whitney U    0.908    Retain the null hypothesis
4  FTO responses, Adaptability preparedness question                       Mann-Whitney U    1.000*   Retain the null hypothesis

Asymptotic significances are displayed; the significance level is 0.05. * Exact significance is displayed for this test.

Table 5-23 Mann-Whitney U Test of ability and preparedness in the adaptability competency from Recruit Survey 1 and FTO survey, grouped across training type

Cross-tabulation report              Lecture   Competency-based   Total
Adaptability ability (recruit)
  has knowledge                      2         2                  4
  act under full supervision         3         7                  10
  act under moderate supervision     14        35                 49
  act independently                  16        5                  21
Total                                35        49                 84

Table 5-24 Cross-tabulation report from Recruit Survey 1 for ability in the adaptability competency area

5.2.3.3 Ethical Accountability

Figure 5-5 and Figure 5-6 show the mean rankings for ability and preparation for the ethical accountability and responsibility competency from Recruit Survey 1 and from the FTO survey respectively. Table 5-25 shows the values represented in each of these figures, including the sample size, mean value, and standard deviation. For the ability question in this competency, recruits in the competency-based delivery model ranked themselves considerably lower than recruits in the lecture-based delivery model, with means of 3.10 and 3.51 respectively. For the question about how well Block I prepared them for Block II with respect to the ethical accountability competency, recruits in the lecture-based model ranked their preparedness slightly higher (3.00) than those in the competency-based model (2.90). The FTOs ranked recruits in the lecture-based model considerably higher in both ability and preparedness for this competency, with means in ability of 3.62 for lecture-based and 2.89 for competency-based, and means in preparedness of 3.08 for lecture-based and 2.67 for competency-based.
Figure 5-5 Mean ratings for ability in the Ethics competency area from Recruit Survey 1 (blue) and the FTO survey (red) clustered across delivery method

Figure 5-6 Mean ratings for preparation in the Ethics competency area from Recruit Survey 1 (blue) and the FTO survey (red) clustered across training method

Ethics competency area | Ability – Recruit Survey 1 | Ability – FTO survey | Preparedness – Recruit Survey 1 | Preparedness – FTO survey
Lecture-based: Mean | 3.51 | 3.62 | 3.00 | 3.08
Lecture-based: N | 35 | 13 | 35 | 13
Lecture-based: Std. Deviation | .919 | .506 | .907 | 1.038
Competency-based: Mean | 3.10 | 2.89 | 2.90 | 2.67
Competency-based: N | 49 | 9 | 49 | 9
Competency-based: Std. Deviation | .848 | 1.269 | .653 | 1.118
Total: Mean | 3.27 | 3.32 | 2.94 | 2.91
Total: N | 84 | 22 | 84 | 22
Total: Std. Deviation | .896 | .945 | .766 | 1.065
Table 5-25 Mean ratings for ability and preparation in the ethics competency from Recruit Survey 1 and FTO responses clustered across training delivery methods

The Mann-Whitney U Test was used to test the null hypothesis of no difference in distribution of responses across the class delivery models:
1. No difference in distribution of responses across lecture-based and competency-based delivery models for ability in the ethics competency area for ratings from Recruit Survey 1
2. No difference in distribution of responses across lecture-based and competency-based delivery models for ability in the ethics competency area for ratings from the FTO survey
3. No difference in distribution of responses across lecture-based and competency-based delivery models for preparation in the ethics competency area for ratings from Recruit Survey 1
4. No difference in distribution of responses across lecture-based and competency-based delivery models for preparation in the ethics competency area for ratings from the FTO survey

Table 5-26 shows the results of this analysis, indicating that, for each of the FTO questions and for the recruits' responses about how prepared they were in this competency area, the null hypothesis is retained: there are no statistically significant differences in how the FTOs rated the recruits or in how the recruits rated themselves across delivery methods. The null hypothesis is rejected, however, in analysis 1, indicating that there is a statistically significant difference between how the recruits in the lecture-based delivery model and those in the competency-based delivery model rated themselves for their ability in the ethical accountability competency area. The means of the class data indicate that the competency-based recruits rated themselves lower than those from the lecture-based program. Table 5-27 shows the results of a cross-tabulation report on the recruit ability question to examine the breakdown of responses.

Null Hypothesis | Test | Sig. | Decision
1 The distribution of responses from recruits for the Ethical Accountability ability question is the same across lecture-based and competency-based delivery models | Independent samples: Mann-Whitney U Test | 0.004 | Reject the null hypothesis
2 The distribution of responses from FTOs for the Ethical Accountability ability question is the same across lecture-based and competency-based delivery models | Independent samples: Mann-Whitney U Test | 0.262¹ | Retain the null hypothesis
3 The distribution of responses from recruits for the Ethical Accountability preparedness question is the same across lecture-based and competency-based delivery models | Independent samples: Mann-Whitney U Test | 0.168 | Retain the null hypothesis
4 The distribution of responses from FTOs for the Ethical Accountability preparedness question is the same across lecture-based and competency-based delivery models | Independent samples: Mann-Whitney U Test | 0.292¹ | Retain the null hypothesis
Asymptotic significances are displayed. The significance level is 0.05. ¹ Exact significance is displayed for this test.
Table 5-26 Mann-Whitney U Test of ability and preparedness for ethical accountability competency from Recruit Survey 1 and FTO survey, grouped across training type

Cross-tabulation Report
Ethical Accountability ability – recruit | Lecture-based | Competency-based | Total
has knowledge | 3 | 3 | 6
act under full supervision | 1 | 6 | 7
act under moderate supervision | 6 | 23 | 29
act independently | 25 | 17 | 42
Total | 35 | 49 | 84
Table 5-27 Cross-tabulation report from Recruit Survey 1 for ability in the ethics competency area

The cross-tabulation report shows that the large majority of the recruits in the competency-based delivery model scored themselves as able to "act under moderate supervision" (n=23), and the next most frequent category was "act independently" (n=17).
This distribution is in contrast to the lecture-based delivery model, where recruits most frequently scored themselves as able to "act independently" (n=25), followed by "act under moderate supervision" (n=6). Therefore, the reason for rejecting the null hypothesis in this case was that the recruits in the competency-based delivery model more frequently scored their ability in the ethical accountability and responsibility competency area lower than did the recruits in the lecture-based delivery model.

5.2.3.4 Interactive Communication

Figure 5-7 and Figure 5-8 show the mean rankings for ability and preparedness for the interactive communication competency from Recruit Survey 1 and from the FTO survey respectively. Table 5-28 shows the values represented in each of these figures, including the sample size, mean value, and standard deviation. For the question about ability in the interactive communication competency, recruits in the competency-based delivery model ranked themselves lower than recruits in the lecture-based delivery model, with means of 3.00 and 3.26 respectively. For the question about how well Block I prepared them for Block II with respect to the interactive communication competency, the recruits in the lecture-based model ranked their preparedness lower (2.74) than those in the competency-based model (2.96). The FTOs ranked recruits in the lecture-based model considerably higher in both ability and preparedness for this competency, with means in ability of 3.08 for lecture-based and 2.56 for competency-based, and means in preparedness of 2.92 for lecture-based and 2.44 for competency-based.
Figure 5-7 Mean ratings for ability in the communication competency area from Recruit Survey 1 (blue) and the FTO survey (red) clustered across delivery method

Figure 5-8 Mean ratings for preparation in the communication competency area from Recruit Survey 1 (blue) and the FTO survey (red) clustered across training method

Communication competency area | Ability – Recruit Survey 1 | Ability – FTO survey | Preparedness – Recruit Survey 1 | Preparedness – FTO survey
Lecture-based: Mean | 3.26 | 3.08 | 2.74 | 2.92
Lecture-based: N | 35 | 13 | 35 | 13
Lecture-based: Std. Deviation | .701 | .862 | .980 | .494
Competency-based: Mean | 3.00 | 2.56 | 2.96 | 2.44
Competency-based: N | 49 | 9 | 49 | 9
Competency-based: Std. Deviation | .645 | 1.130 | .406 | 1.014
Total: Mean | 3.11 | 2.86 | 2.87 | 2.73
Total: N | 84 | 22 | 84 | 22
Total: Std. Deviation | .677 | .990 | .708 | .767
Table 5-28 Mean ratings for ability and preparation in the communication competency from Recruit Survey 1 and FTO responses clustered across training delivery methods

The Mann-Whitney U Test was used to test the null hypothesis of no difference in distribution of responses across the class delivery models:
1. No difference in distribution of responses across lecture-based and competency-based delivery models for ability in the interactive communication competency area for ratings from Recruit Survey 1
2. No difference in distribution of responses across lecture-based and competency-based delivery models for ability in the interactive communication competency area for ratings from the FTO survey
3. No difference in distribution of responses across lecture-based and competency-based delivery models for preparation in the interactive communication competency area for ratings from Recruit Survey 1
4. No difference in distribution of responses across lecture-based and competency-based delivery models for preparation in the interactive communication competency area for ratings from the FTO survey

Table 5-29 shows the results of this analysis, indicating that, for each of the questions, the null hypothesis is retained and there are no statistically significant differences in the distribution of responses across the classes.

Null Hypothesis | Test | Sig. | Decision
1 The distribution of responses from recruits for the Interactive Communication ability question is the same across lecture-based and competency-based delivery models | Independent samples: Mann-Whitney U Test | 0.052 | Retain the null hypothesis
2 The distribution of responses from FTOs for the Interactive Communication ability question is the same across lecture-based and competency-based delivery models | Independent samples: Mann-Whitney U Test | 0.292¹ | Retain the null hypothesis
3 The distribution of responses from recruits for the Interactive Communication preparedness question is the same across lecture-based and competency-based delivery models | Independent samples: Mann-Whitney U Test | 0.572 | Retain the null hypothesis
4 The distribution of responses from FTOs for the Interactive Communication preparedness question is the same across lecture-based and competency-based delivery models | Independent samples: Mann-Whitney U Test | 0.357¹ | Retain the null hypothesis
Asymptotic significances are displayed. The significance level is 0.05. ¹ Exact significance is displayed for this test.
Table 5-29 Mann-Whitney U Test of ability and preparedness for interactive communication competency from Recruit Survey 1 and FTO survey, grouped across training type

5.2.3.5 Organizational Awareness

Figure 5-9 and Figure 5-10 show the mean rankings for ability and preparedness for the organizational awareness competency from Recruit Survey 1 and from the FTO survey respectively.
Table 5-30 shows the values represented in each of these figures, including the sample size, mean value, and standard deviation. For the question about ability in the organizational awareness competency, recruits in the competency-based delivery model ranked themselves considerably lower than recruits in the lecture-based delivery model, with means of 2.39 and 3.06 respectively. For the question about how well Block I prepared them for Block II with respect to the organizational awareness competency, the recruits in the lecture-based model ranked their preparedness higher (2.63) than those in the competency-based model (2.35). The FTOs ranked recruits in the lecture-based model slightly higher in ability and approximately the same in preparedness for this competency, with means in ability of 2.77 for lecture-based and 2.67 for competency-based, and means in preparedness of 2.46 for lecture-based and 2.44 for competency-based.

Figure 5-9 Mean ratings for ability in the organizational awareness competency area from Recruit Survey 1 (blue) and the FTO survey (red) clustered across delivery method

Figure 5-10 Mean ratings for preparation in the organizational awareness competency area from Recruit Survey 1 (blue) and the FTO survey (red) clustered across training method

Organizational Awareness competency area | Ability – Recruit Survey 1 | Ability – FTO survey | Preparedness – Recruit Survey 1 | Preparedness – FTO survey
Lecture-based: Mean | 3.06 | 2.77 | 2.63 | 2.46
Lecture-based: N | 35 | 13 | 35 | 13
Lecture-based: Std. Deviation | .873 | 1.092 | 1.165 | .877
Competency-based: Mean | 2.39 | 2.67 | 2.35 | 2.44
Competency-based: N | 49 | 9 | 49 | 9
Competency-based: Std. Deviation | .759 | 1.225 | .879 | 1.014
Total: Mean | 2.67 | 2.73 | 2.46 | 2.45
Total: N | 84 | 22 | 84 | 22
Total: Std. Deviation | .869 | 1.120 | 1.011 | .912
Table 5-30 Mean ratings for ability and preparation in the organizational awareness competency from Recruit Survey 1 and FTO responses clustered across training delivery methods

The Mann-Whitney U Test was used to test the null hypothesis of no difference in distribution of responses across the class delivery models:
1. No difference in distribution of responses across lecture-based and competency-based delivery models for ability in the organizational awareness competency area for ratings from Recruit Survey 1
2. No difference in distribution of responses across lecture-based and competency-based delivery models for ability in the organizational awareness competency area for ratings from the FTO survey
3. No difference in distribution of responses across lecture-based and competency-based delivery models for preparation in the organizational awareness competency area for ratings from Recruit Survey 1
4. No difference in distribution of responses across lecture-based and competency-based delivery models for preparation in the organizational awareness competency area for ratings from the FTO survey

Table 5-31 shows the results of this analysis, indicating that, for the two FTO questions, the null hypothesis is retained and there are no statistically significant differences in how the FTOs rated the recruits across delivery methods. The null hypothesis is rejected, however, for the recruit questions, indicating a statistically significant difference between how the recruits in the lecture-based delivery model and those in the competency-based delivery model rated themselves for their ability and preparedness in the organizational awareness competency area. The means of the class data indicate that the competency-based recruits rated themselves lower than those from the lecture-based program.
Table 5-32 shows the results of a cross-tabulation report on the recruit ability and preparedness questions to examine the breakdown of responses.

Null Hypothesis | Test | Sig. | Decision
1 The distribution of responses from recruits for the Organizational Awareness ability question is the same across lecture-based and competency-based delivery models | Independent samples: Mann-Whitney U Test | 0.000 | Reject the null hypothesis
2 The distribution of responses from FTOs for the Organizational Awareness ability question is the same across lecture-based and competency-based delivery models | Independent samples: Mann-Whitney U Test | 0.896¹ | Retain the null hypothesis
3 The distribution of responses from recruits for the Organizational Awareness preparedness question is the same across lecture-based and competency-based delivery models | Independent samples: Mann-Whitney U Test | 0.000 | Reject the null hypothesis
4 The distribution of responses from FTOs for the Organizational Awareness preparedness question is the same across lecture-based and competency-based delivery models | Independent samples: Mann-Whitney U Test | 0.695¹ | Retain the null hypothesis
Asymptotic significances are displayed. The significance level is 0.05. ¹ Exact significance is displayed for this test.
Table 5-31 Mann-Whitney U Test of ability and preparedness for organizational awareness competency from Recruit Survey 1 and FTO survey, grouped across training type

Cross-tabulation Report
Organizational Awareness ability – recruit | Lecture-based | Competency-based | Total
has knowledge | 3 | 8 | 11
act under full supervision | 3 | 14 | 17
act under moderate supervision | 18 | 27 | 45
act independently | 11 | 0 | 11
Total | 35 | 49 | 84
Organizational Awareness preparedness – recruit | Lecture-based | Competency-based | Total
N/A | 5 | 3 | 8
extremely poorly prepared | 0 | 4 | 4
poorly prepared | 2 | 15 | 17
well prepared | 24 | 27 | 51
extremely well prepared | 4 | 0 | 4
Total | 35 | 49 | 84
Table 5-32 Cross-tabulation report from Recruit Survey 1 for ability (top) and preparedness (bottom) in the organizational awareness competency area

The cross-tabulation report for ability shows that the large majority of the recruits in the competency-based delivery model scored themselves as able to "act under moderate supervision" (n=27), and the next most frequent category was "act under full supervision" (n=14). No recruits in the competency-based model stated that they believed they were able to "act independently" in this competency area. This distribution is in contrast to the lecture-based delivery model, where recruits also most frequently scored themselves as able to "act under moderate supervision" (n=18), but the next most frequent category was "act independently" (n=11). The reason for rejecting the null hypothesis in the ability category of the organizational awareness competency was that the recruits in the competency-based delivery model more frequently scored their ability lower than did the recruits in the lecture-based delivery model.
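The cross-tabulation reports in this chapter are simple tallies of ordinal responses per group with a total column. As an illustrative sketch only (the function and group names here are hypothetical, and the original analysis was produced with statistical software), the counts in the ability panel of Table 5-32 can be reproduced from raw response lists like this:

```python
from collections import Counter

# Ordinal response levels used on the recruit ability scale.
LEVELS = ["has knowledge", "full supervision",
          "moderate supervision", "independently"]

def cross_tab(groups):
    """Tally each response level per group and append a Total row,
    mirroring the layout of the cross-tabulation reports above."""
    rows = {g: [Counter(r)[lv] for lv in LEVELS] for g, r in groups.items()}
    # Column-wise totals across the groups (computed before "Total" exists).
    rows["Total"] = [sum(col) for col in zip(*rows.values())]
    return rows

# Responses reconstructed from the organizational awareness ability
# counts in Table 5-32 (lecture: 3/3/18/11; competency-based: 8/14/27/0).
lecture = (["has knowledge"] * 3 + ["full supervision"] * 3
           + ["moderate supervision"] * 18 + ["independently"] * 11)
competency = (["has knowledge"] * 8 + ["full supervision"] * 14
              + ["moderate supervision"] * 27)

tab = cross_tab({"Lecture-based": lecture, "Competency-based": competency})
print(tab["Total"])  # → [11, 17, 45, 11]
```

The total row recovers the table's right-hand column (11, 17, 45, 11), and the per-group rows match the published counts.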
As in the responses about the recruits' perceptions of their ability in organizational awareness, the cross-tabulation report for how well recruits believed their Block I training prepared them for Block II in the organizational awareness competency indicated that recruits in the lecture-based model typically ranked themselves higher than those from the competency-based model. The most frequent category selected by recruits from the lecture-based model was "well prepared" (n=24), followed by "N/A" (n=5) and "extremely well prepared" (n=4). For the competency-based model, the most frequent category selected by recruits was also "well prepared" (n=27), followed by "poorly prepared" (n=15), "extremely poorly prepared" (n=4), and "N/A" (n=3). No recruits in the competency-based model selected "extremely well prepared". While the majority of recruits in both models selected "well prepared", recruits in the competency-based model also selected "poorly prepared" and "extremely poorly prepared", resulting in the rejection of the null hypothesis and the observed significantly lower ratings in preparedness from recruits in the competency-based model.

5.2.3.6 Problem Solving

Figure 5-11 and Figure 5-12 show the mean rankings for ability and preparation for the problem solving competency from Recruit Survey 1 and from the FTO survey respectively. Table 5-33 shows the values represented in each of these figures, including the sample size, mean value, and standard deviation. For the question about ability in the problem solving competency, recruits in the competency-based delivery model ranked themselves considerably lower than recruits in the lecture-based delivery model, with means of 2.84 and 3.49 respectively. For the question about how well Block I prepared them for Block II with respect to the problem solving competency, the recruits in the lecture-based model ranked their preparedness slightly lower (2.74) than those in the competency-based model (2.84).
The FTOs ranked recruits in the lecture-based model higher in both ability and preparedness, with means in ability of 3.23 for lecture-based and 2.84 for competency-based, and means in preparedness of 3.00 for lecture-based and 2.56 for competency-based.

Figure 5-11 Mean ratings for ability in the problem solving competency area from Recruit Survey 1 (blue) and the FTO survey (red) clustered across delivery method

Figure 5-12 Mean ratings for preparation in the problem solving competency area from Recruit Survey 1 (blue) and the FTO survey (red) clustered across training method

Problem Solving competency area | Ability – Recruit Survey 1 | Ability – FTO survey | Preparedness – Recruit Survey 1 | Preparedness – FTO survey
Lecture-based: Mean | 3.49 | 3.23 | 2.74 | 3.00
Lecture-based: N | 35 | 13 | 35 | 13
Lecture-based: Std. Deviation | .702 | .599 | 1.039 | .408
Competency-based: Mean | 2.84 | 2.89 | 2.84 | 2.56
Competency-based: N | 49 | 9 | 49 | 9
Competency-based: Std. Deviation | .624 | 1.054 | .514 | 1.014
Total: Mean | 3.11 | 3.09 | 2.80 | 2.82
Total: N | 84 | 22 | 84 | 22
Total: Std. Deviation | .728 | .811 | .773 | .733
Table 5-33 Mean ratings for ability and preparation in the problem solving competency from Recruit Survey 1 and FTO responses clustered across training delivery methods

The Mann-Whitney U Test was used to test the null hypothesis of no difference in distribution of responses across the class delivery models:
1. No difference in distribution of responses across lecture-based and competency-based delivery models for ability in the problem solving competency area for ratings from Recruit Survey 1
2. No difference in distribution of responses across lecture-based and competency-based delivery models for ability in the problem solving competency area for ratings from the FTO survey
3. No difference in distribution of responses across lecture-based and competency-based delivery models for preparation in the problem solving competency area for ratings from Recruit Survey 1
4. No difference in distribution of responses across lecture-based and competency-based delivery models for preparation in the problem solving competency area for ratings from the FTO survey

Table 5-34 shows the results of this analysis, indicating that, for the two FTO questions and for the recruit preparedness question, the null hypothesis is retained and there are no statistically significant differences in the ratings across delivery methods. The null hypothesis is rejected, however, for the recruit question about ability, indicating that there is a statistically significant difference between how the recruits in the lecture-based delivery model and those in the competency-based delivery model rated themselves for their ability in the problem solving competency area. The means of the class data indicate that the competency-based recruits rated themselves lower than those from the lecture-based program. Table 5-35 shows the results of a cross-tabulation report on the recruit ability question to examine the breakdown of responses.

Null Hypothesis | Test | Sig. | Decision
1 The distribution of responses from recruits for the Problem Solving ability question is the same across lecture-based and competency-based delivery models | Independent samples: Mann-Whitney U Test | 0.000 | Reject the null hypothesis
2 The distribution of responses from FTOs for the Problem Solving ability question is the same across lecture-based and competency-based delivery models | Independent samples: Mann-Whitney U Test | 0.556¹ | Retain the null hypothesis
3 The distribution of responses from recruits for the Problem Solving preparedness question is the same across lecture-based and competency-based delivery models | Independent samples: Mann-Whitney U Test | 0.375 | Retain the null hypothesis
4 The distribution of responses from FTOs for the Problem Solving preparedness question is the same across lecture-based and competency-based delivery models | Independent samples: Mann-Whitney U Test | 0.431¹ | Retain the null hypothesis
Asymptotic significances are displayed. The significance level is 0.05. ¹ Exact significance is displayed for this test.
Table 5-34 Mann-Whitney U Test of ability and preparedness for problem solving competency from Recruit Survey 1 and FTO survey, grouped across training type

Cross-tabulation Report
Problem Solving ability – recruit | Lecture-based | Competency-based | Total
has knowledge | 1 | 2 | 3
act under full supervision | 1 | 8 | 9
act under moderate supervision | 13 | 35 | 48
act independently | 20 | 4 | 24
Total | 35 | 49 | 84
Table 5-35 Cross-tabulation report from Recruit Survey 1 for ability in the problem solving competency area

The cross-tabulation report for ability shows that the large majority of the recruits in the competency-based delivery model scored themselves as able to "act under moderate supervision" (n=35), and the next most frequent category was "act under full supervision" (n=8). Four recruits from the competency-based model scored themselves as able to "act independently" in this competency area.
This distribution is in contrast to the lecture-based delivery model, where recruits most frequently scored themselves as able to "act independently" (n=20), followed by "act under moderate supervision" (n=13). The reason for rejecting the null hypothesis in the ability category of the problem solving competency, therefore, was that the recruits in the competency-based delivery model more frequently scored their ability lower than did the recruits in the lecture-based delivery model.

5.2.3.7 Risk Management

Figure 5-13 and Figure 5-14 show the mean rankings for ability and preparation for the risk management competency from Recruit Survey 1 and from the FTO survey respectively. Table 5-36 shows the values represented in each of these figures, including the sample size, mean value, and standard deviation. For the question about ability in the risk management competency, recruits in the competency-based delivery model ranked themselves lower than recruits in the lecture-based delivery model, with means of 2.75 and 3.17 respectively. For the question about how well Block I prepared them for Block II with respect to the risk management competency, the recruits in the lecture-based model ranked their preparedness slightly lower (2.89) than those in the competency-based model (2.96). The FTOs ranked recruits in the lecture-based model approximately the same as those in the competency-based model in ability (means of 2.46 and 2.44 respectively) and ranked recruits in the lecture-based model as less prepared than recruits from the competency-based model (means of 2.46 and 2.67 respectively).
Figure 5-13 Mean ratings for ability in the risk management competency area from Recruit Survey 1 (blue) and the FTO survey (red) clustered across delivery method

Figure 5-14 Mean ratings for preparation in the risk management competency area from Recruit Survey 1 (blue) and the FTO survey (red) clustered across training method

Risk Management competency area | Ability – Recruit Survey 1 | Ability – FTO survey | Preparedness – Recruit Survey 1 | Preparedness – FTO survey
Lecture-based: Mean | 3.17 | 2.46 | 2.89 | 2.46
Lecture-based: N | 35 | 13 | 35 | 13
Lecture-based: Std. Deviation | .785 | .967 | .796 | .877
Competency-based: Mean | 2.75 | 2.44 | 2.96 | 2.67
Competency-based: N | 48 | 9 | 49 | 9
Competency-based: Std. Deviation | .601 | .882 | .576 | .707
Total: Mean | 2.93 | 2.45 | 2.93 | 2.55
Total: N | 83 | 22 | 84 | 22
Total: Std. Deviation | .712 | .912 | .673 | .800
Table 5-36 Mean ratings for ability and preparation in the risk management competency from Recruit Survey 1 and FTO responses clustered across training delivery methods

The Mann-Whitney U Test was used to test the null hypothesis of no difference in distribution of responses across the class delivery models:
1. No difference in distribution of responses across lecture-based and competency-based delivery models for ability in the risk management competency area for ratings from Recruit Survey 1
2. No difference in distribution of responses across lecture-based and competency-based delivery models for ability in the risk management competency area for ratings from the FTO survey
3. No difference in distribution of responses across lecture-based and competency-based delivery models for preparation in the risk management competency area for ratings from Recruit Survey 1
4. No difference in distribution of responses across lecture-based and competency-based delivery models for preparation in the risk management competency area for ratings from the FTO survey

Table 5-37 shows the results of this analysis, indicating that, for the two FTO questions and for the recruit preparedness question, the null hypothesis is retained and there are no statistically significant differences in the ratings across delivery methods. The null hypothesis is rejected, however, for the recruit question about ability, indicating that there is a statistically significant difference between how the recruits in the lecture-based delivery model and those in the competency-based delivery model rated themselves for their ability in the risk management competency area. The means of the class data indicate that the competency-based recruits rated themselves lower than those from the lecture-based program. Table 5-38 shows the results of a cross-tabulation report on the recruit ability question to examine the breakdown of responses.

Null Hypothesis | Test | Sig. | Decision
1 The distribution of responses from recruits for the Risk Management ability question is the same across lecture-based and competency-based delivery models | Independent samples: Mann-Whitney U Test | 0.004 | Reject the null hypothesis
2 The distribution of responses from FTOs for the Risk Management ability question is the same across lecture-based and competency-based delivery models | Independent samples: Mann-Whitney U Test | 0.845¹ | Retain the null hypothesis
3 The distribution of responses from recruits for the Risk Management preparedness question is the same across lecture-based and competency-based delivery models | Independent samples: Mann-Whitney U Test | 0.972 | Retain the null hypothesis
4 The distribution of responses from FTOs for the Risk Management preparedness question is the same across lecture-based and competency-based delivery models | Independent samples: Mann-Whitney U Test | 0.556¹ | Retain the null hypothesis
Asymptotic significances are displayed. The significance level is 0.05. ¹ Exact significance is displayed for this test.
Table 5-37 Mann-Whitney U Test of ability and preparedness for risk management competency from Recruit Survey 1 and FTO survey, grouped across training type

Cross-tabulation Report
Risk Management ability – recruit | Lecture-based | Competency-based | Total
has knowledge | 1 | 3 | 4
act under full supervision | 5 | 7 | 12
act under moderate supervision | 16 | 37 | 53
act independently | 13 | 1 | 14
Total | 35 | 48 | 83
Table 5-38 Cross-tabulation report from Recruit Survey 1 for ability in the risk management competency area

The cross-tabulation report for ability shows that the large majority of the recruits in the competency-based delivery model scored themselves as able to "act under moderate supervision" (n=37), and the next most frequent category was "act under full supervision" (n=7).
Only one recruit from the competency-based model scored themselves as able to "act independently", and three recruits scored themselves as "has knowledge" in this competency area. Similarly, the most frequent scoring in the lecture-based delivery model was "act under moderate supervision" (n=16), but this was followed by "act independently" (n=13), "act under full supervision" (n=5), and "has knowledge" (n=1). The distribution of scoring of the recruits from the lecture-based model was again weighted towards the more independent end of the ability scale. The reason for rejecting the null hypothesis in the ability category of the risk management competency, then, was that the recruits in the competency-based delivery model more frequently scored their ability lower than did the recruits in the lecture-based delivery model.

5.2.3.8 Stress Tolerance

Figure 5-15 and Figure 5-16 show the mean rankings for ability and preparation for the stress tolerance competency from Recruit Survey 1 and from the FTO survey respectively. Table 5-39 shows the values represented in each of these figures, including the sample size, mean value, and standard deviation. For the question about ability in the stress tolerance competency, recruits in the competency-based delivery model ranked themselves considerably lower than recruits in the lecture-based delivery model, with means of 2.86 and 3.43 respectively. For the question about how well Block I prepared them for Block II with respect to the stress tolerance competency, the recruits in the lecture-based model ranked their preparedness approximately the same (2.94) as those in the competency-based model did (2.96). The FTOs ranked recruits in the lecture-based model approximately the same as those in the competency-based model in both ability and preparedness, with means of 2.77 for lecture-based in both categories and 2.78 for competency-based in both categories.
Figure 5-15 Mean ratings for ability in the stress tolerance competency area from Recruit Survey 1 (blue) and the FTO survey (red) clustered across delivery method

Figure 5-16 Mean ratings for preparation in the stress tolerance competency area from Recruit Survey 1 (blue) and the FTO survey (red) clustered across training method

Stress Tolerance competency area | Ability – Recruit Survey 1 | Ability – FTO survey | Preparedness – Recruit Survey 1 | Preparedness – FTO survey
Lecture-based: Mean | 3.43 | 2.77 | 2.94 | 2.77
Lecture-based: N | 35 | 13 | 35 | 13
Lecture-based: Std. Deviation | .739 | 1.013 | .838 | .599
Competency-based: Mean | 2.86 | 2.78 | 2.96 | 2.78
Competency-based: N | 49 | 9 | 49 | 9
Competency-based: Std. Deviation | .677 | .833 | .644 | .441
Total: Mean | 3.10 | 2.77 | 2.95 | 2.77
Total: N | 84 | 22 | 84 | 22
Total: Std. Deviation | .754 | .922 | .727 | .528
Table 5-39 Mean ratings for ability and preparation in the stress tolerance competency from Recruit Survey 1 and FTO responses clustered across training delivery methods

The Mann-Whitney U Test was used to test the null hypothesis of no difference in distribution of responses across the class delivery models:
1. No difference in distribution of responses across lecture-based and competency-based delivery models for ability in the stress tolerance competency area for ratings from Recruit Survey 1
2. No difference in distribution of responses across lecture-based and competency-based delivery models for ability in the stress tolerance competency area for ratings from the FTO survey
3. No difference in distribution of responses across lecture-based and competency-based delivery models for preparation in the stress tolerance competency area for ratings from Recruit Survey 1
4. No difference in distribution of responses across lecture-based and competency-based delivery models for preparation in the stress tolerance competency area for ratings from the FTO survey

Table 5-40 shows the results of this analysis, indicating that, for the two FTO questions and for the recruit preparedness question, the null hypothesis is retained and there are no statistically significant differences in the ratings across delivery methods. The null hypothesis is rejected, however, for the recruit question about ability, indicating that there is a statistically significant difference between how the recruits in the lecture-based delivery model and those in the competency-based delivery model rated themselves for their ability in the stress tolerance competency area. The means of the class data indicate that the competency-based recruits rated themselves lower than those from the lecture-based program. Table 5-41 shows the results of a cross-tabulation report on the recruit ability question to examine the breakdown of responses.

Null Hypothesis | Test | Sig. | Decision
1 The distribution of responses from recruits for the Stress Tolerance ability question is the same across lecture-based and competency-based delivery models | Independent samples: Mann-Whitney U Test | 0.000 | Reject the null hypothesis
2 The distribution of responses from FTOs for the Stress Tolerance ability question is the same across lecture-based and competency-based delivery models | Independent samples: Mann-Whitney U Test | 0.896¹ | Retain the null hypothesis
3 The distribution of responses from recruits for the Stress Tolerance preparedness question is the same across lecture-based and competency-based delivery models | Independent samples: Mann-Whitney U Test | 0.727 | Retain the null hypothesis
4 The distribution of responses from FTOs for the Stress Tolerance preparedness question is the same across lecture-based and competency-based delivery models | Independent samples: Mann-Whitney U Test | 0.845¹ | Retain the null hypothesis
Asymptotic significances are displayed. The significance level is 0.05. ¹ Exact significance is displayed for this test.
Table 5-40 Mann-Whitney U Test of ability and preparedness for stress tolerance competency from Recruit Survey 1 and FTO survey, grouped across training type

Cross-tabulation Report
Stress Tolerance ability – recruit | Lecture-based | Competency-based | Total
has knowledge | 1 | 2 | 3
act under full supervision | 2 | 9 | 11
act under moderate supervision | 13 | 32 | 45
act independently | 19 | 6 | 25
Total | 35 | 49 | 84
Table 5-41 Cross-tabulation report from Recruit Survey 1 for ability in the stress tolerance competency area

The cross-tabulation report for ability shows that the large majority of the recruits in the competency-based delivery model scored themselves as able to "act under moderate supervision" (n=32), and the next most frequent categories were "act under full supervision" (n=9) and "act independently" (n=6).
The most frequent scoring in the lecture-based delivery model was "act independently" (n=19), followed by "act under moderate supervision" (n=13) and "act under full supervision" (n=2). The distribution of scores from the lecture-based recruits was again weighted towards the more independent end of the ability scale. The null hypothesis was therefore rejected for the ability category of the stress tolerance competency because recruits in the competency-based delivery model more frequently scored their ability lower than did recruits in the lecture-based delivery model.

5.2.3.9 Teamwork

Figure 5-17 and Figure 5-18 show the mean rankings for ability and preparation for the teamwork competency from Recruit Survey 1 and from the FTO survey respectively. Table 5-42 shows the values represented in each of these figures, including the sample size, mean value, and standard deviation. For the question about ability in the teamwork competency, recruits in the competency-based delivery model ranked themselves considerably lower than recruits in the lecture-based delivery model, with means of 3.12 and 3.63 respectively, although both classes still ranked themselves highly. For the question about how well Block I prepared them for Block II with respect to the teamwork competency, recruits in the lecture-based model ranked their preparedness slightly lower (3.00) than those in the competency-based model did (3.16). The FTOs ranked recruits in the lecture-based model higher than those in the competency-based model in both ability and preparedness, with means in ability of 3.38 for lecture-based and 3.00 for competency-based, and means in preparedness of 3.15 for lecture-based and 2.67 for competency-based.
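The two-sample comparisons applied throughout this section can be illustrated in code. The following stdlib-only Python sketch computes the Mann-Whitney U statistic with the large-sample normal approximation (tie correction omitted for brevity); the ratings are hypothetical, not the study's survey data:

```python
# Minimal sketch of the Mann-Whitney U test used throughout this section.
# Uses the normal approximation and OMITS the tie correction; the ratings
# below are hypothetical and are NOT the study's survey data.
from statistics import NormalDist

def mann_whitney_u(x, y):
    """Return (U, two-sided p) for two independent samples of ordinal ratings."""
    combined = sorted((value, idx) for idx, value in enumerate(x + y))
    ranks = [0.0] * len(combined)
    i = 0
    while i < len(combined):                      # assign mid-ranks over ties
        j = i
        while j < len(combined) and combined[j][0] == combined[i][0]:
            j += 1
        for k in range(i, j):
            ranks[combined[k][1]] = (i + j + 1) / 2
        i = j
    n1, n2 = len(x), len(y)
    u1 = sum(ranks[:n1]) - n1 * (n1 + 1) / 2      # U for the first sample
    u = min(u1, n1 * n2 - u1)                     # smaller of U1, U2
    mu = n1 * n2 / 2
    sigma = (n1 * n2 * (n1 + n2 + 1) / 12) ** 0.5
    return u, 2 * NormalDist().cdf((u - mu) / sigma)

# Hypothetical 1-5 ordinal self-ratings for two delivery models.
lecture = [4, 4, 3, 4, 3, 4, 4, 2, 4, 3]
competency = [3, 2, 3, 3, 2, 3, 3, 3, 2, 3]
u, p = mann_whitney_u(lecture, competency)
print(f"U = {u}, p = {p:.3f}:",
      "Reject" if p < 0.05 else "Retain", "the null hypothesis")
```

The decision rule mirrors the tables in this chapter: reject the null hypothesis of identical distributions when p falls below the 0.05 significance level.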
Figure 5-17 Mean ratings for ability in the teamwork competency area from Recruit Survey 1 (blue) and the FTO survey (red) clustered across delivery method

Figure 5-18 Mean ratings for preparation in the teamwork competency area from Recruit Survey 1 (blue) and the FTO survey (red) clustered across delivery method

                              Ability       Ability   Preparedness   Preparedness
Teamwork                      (Recruit 1)   (FTO)     (Recruit 1)    (FTO)
Lecture-based      Mean       3.63          3.38      3.00           3.15
                   N          35            13        35             13
                   Std. Dev.  .731          .650      1.029          .376
Competency-based   Mean       3.12          3.00      3.16           2.67
                   N          49            9         49             9
                   Std. Dev.  .696          1.118     .657           1.000
Total              Mean       3.33          3.23      3.10           2.95
                   N          84            22        84             22
                   Std. Dev.  .750          .869      .830           .722

Table 5-42 Mean ratings for ability and preparation in the teamwork competency from Recruit Survey 1 and FTO responses clustered across training delivery methods

The Mann-Whitney U Test was used to test the null hypothesis of no difference in the distribution of responses across the class delivery models:
1. No difference in distribution of responses across lecture-based and competency-based delivery models for ability in the teamwork competency area for ratings from Recruit Survey 1
2. No difference in distribution of responses across lecture-based and competency-based delivery models for ability in the teamwork competency area for ratings from the FTO survey
3. No difference in distribution of responses across lecture-based and competency-based delivery models for preparation in the teamwork competency area for ratings from Recruit Survey 1
4.
No difference in distribution of responses across lecture-based and competency-based delivery models for preparation in the teamwork competency area for ratings from the FTO survey

Table 5-43 shows the results of this analysis, indicating that, for the two FTO questions and for the recruit preparedness question, the null hypothesis is retained and there are no statistically significant differences in the ratings across delivery methods. The null hypothesis is rejected, however, for the recruit question about ability, indicating a statistically significant difference between how recruits in the lecture-based and competency-based delivery models rated their own ability in the teamwork competency area. The class means indicate that the competency-based recruits rated themselves lower than those from the lecture-based program. Table 5-44 presents a cross-tabulation report on the recruit ability question to examine the breakdown of responses.

All tests are independent-samples Mann-Whitney U Tests of the null hypothesis that the distribution of responses is the same across lecture-based and competency-based delivery models.

Null Hypothesis                              Sig.     Decision
1 Recruit responses, Teamwork ability        0.000    Reject the null hypothesis
2 FTO responses, Teamwork ability            0.556¹   Retain the null hypothesis
3 Recruit responses, Teamwork preparedness   0.818    Retain the null hypothesis
4 FTO responses, Teamwork preparedness       0.357¹   Retain the null hypothesis
Asymptotic significances are displayed. The significance level is 0.05.
¹ Exact significance is displayed for this test.

Table 5-43 Mann-Whitney U Test of ability and preparedness for teamwork competency from Recruit Survey 1 and FTO survey, grouped across training type

Teamwork ability – recruit                Lecture-based   Competency-based   Total
has knowledge                             1               2                  3
act under full supervision                1               3                  4
act under moderate supervision            9               31                 40
act independently                         23              13                 36
act as a supervisor or instructor         1               0                  1
Total                                     35              49                 84

Table 5-44 Cross-tabulation report from Recruit Survey 1 for ability in the teamwork competency area

The cross-tabulation report for ability shows that the large majority of recruits in the competency-based delivery model scored themselves as able to "act under moderate supervision" (n=31); the next most frequent category was "act independently" (n=13), followed by "act under full supervision" (n=3) and "has knowledge" (n=2). The most frequent scoring in the lecture-based delivery model was "act independently" (n=23), followed by "act under moderate supervision" (n=9), with one recruit each scoring themselves as able to "act as a supervisor or instructor", "act under full supervision", and "has knowledge". The distribution of scores from the lecture-based recruits was again weighted towards the more independent end of the ability scale and included one recruit who indicated they were able to act as a supervisor or instructor in this competency area. The null hypothesis was therefore rejected for the ability category of the teamwork competency because recruits in the competency-based delivery model more frequently scored their ability lower than did recruits in the lecture-based delivery model.

5.2.3.10 Written Skills

Figure 5-19 and Figure 5-20 show the mean rankings for ability and preparation for the written skills competency from Recruit Survey 1 and from the FTO survey respectively.
Table 5-45 shows the values represented in each of these figures, including the sample size, mean value, and standard deviation. For the question about ability in the written skills competency, recruits in the competency-based delivery model ranked themselves lower than recruits in the lecture-based delivery model, with means of 2.73 and 3.03 respectively. For the question about how well Block I prepared them for Block II with respect to the written skills competency, the recruits in the competency-based model again ranked their preparedness lower (2.61) than those in the lecture-based model did (2.94). The FTOs ranked recruits in the lecture-based model higher than those in the competency-based model in both ability and preparedness, with means in ability of 3.00 for lecture-based and 2.56 for competency-based, and means in preparedness of 2.85 for lecture-based and 2.33 for competency-based.

Figure 5-19 Mean ratings for ability in the written skills competency area from Recruit Survey 1 (blue) and the FTO survey (red) clustered across delivery method

Figure 5-20 Mean ratings for preparation in the written skills competency area from Recruit Survey 1 (blue) and the FTO survey (red) clustered across delivery method

                              Ability       Ability   Preparedness   Preparedness
Written Skills                (Recruit 1)   (FTO)     (Recruit 1)    (FTO)
Lecture-based      Mean       3.03          3.00      2.94           2.85
                   N          35            13        35             13
                   Std. Dev.  .747          .816      .416           .801
Competency-based   Mean       2.73          2.56      2.61           2.33
                   N          49            9         49             9
                   Std. Dev.  .670          1.236     .571           1.118
Total              Mean       2.86          2.82      2.75           2.64
                   N          84            22        84             22
                   Std. Dev.  .714          1.006     .535           .953

Table 5-45 Mean ratings for ability and preparation in the written skills competency from Recruit Survey 1 and FTO responses clustered across training delivery methods

The Mann-Whitney U Test was used to test the null hypothesis of no difference in the distribution of responses across the class delivery models:
1. No difference in distribution of responses across lecture-based and competency-based delivery models for ability in the written skills competency area for ratings from Recruit Survey 1
2. No difference in distribution of responses across lecture-based and competency-based delivery models for ability in the written skills competency area for ratings from the FTO survey
3. No difference in distribution of responses across lecture-based and competency-based delivery models for preparation in the written skills competency area for ratings from Recruit Survey 1
4. No difference in distribution of responses across lecture-based and competency-based delivery models for preparation in the written skills competency area for ratings from the FTO survey

Table 5-46 shows the results of this analysis, indicating that, for the two FTO questions and for the recruit ability question, the null hypothesis is retained and there are no statistically significant differences in the ratings across delivery methods. The null hypothesis is rejected, however, for the recruit question about preparedness, indicating a statistically significant difference between how recruits in the lecture-based and competency-based delivery models rated their preparedness in the written skills competency area. The class means indicate that the competency-based recruits rated themselves lower than those from the lecture-based program. Table 5-47 presents a cross-tabulation report on the recruit preparedness question to examine the breakdown of responses.

All tests are independent-samples Mann-Whitney U Tests of the null hypothesis that the distribution of responses is the same across lecture-based and competency-based delivery models.

Null Hypothesis                                    Sig.     Decision
1 Recruit responses, Written Skills ability        0.060    Retain the null hypothesis
2 FTO responses, Written Skills ability            0.431¹   Retain the null hypothesis
3 Recruit responses, Written Skills preparedness   0.005    Reject the null hypothesis
4 FTO responses, Written Skills preparedness       0.357¹   Retain the null hypothesis
Asymptotic significances are displayed. The significance level is 0.05.
¹ Exact significance is displayed for this test.

Table 5-46 Mann-Whitney U Test of ability and preparedness for written skills competency from Recruit Survey 1 and FTO survey, grouped across training type

Written Skills preparedness – recruit     Lecture-based   Competency-based   Total
extremely poorly prepared                 0               1                  1
poorly prepared                           4               18                 22
well prepared                             29              29                 58
extremely well prepared                   2               1                  3
Total                                     35              49                 84

Table 5-47 Cross-tabulation report from Recruit Survey 1 for preparedness in the written skills competency area

The cross-tabulation report for preparedness shows that the large majority of recruits in both delivery models rated themselves as "well prepared" in the written skills competency (n=29 in each class). In the competency-based delivery model, recruits also scored themselves as "poorly prepared" (n=18), with one recruit each selecting "extremely well prepared" and "extremely poorly prepared".
In the lecture-based delivery model, in addition to the "well prepared" rankings, recruits also scored themselves as "poorly prepared" (n=4) and "extremely well prepared" (n=2). No recruits in the lecture-based model ranked themselves as "extremely poorly prepared". The null hypothesis was therefore rejected for the preparedness category of the written skills competency because recruits in the competency-based delivery model more frequently scored their preparedness lower than did recruits in the lecture-based delivery model.

5.2.4 Analysis of Recruit Responses Compared with FTO Responses

The analysis of recruit responses for the competency areas indicated a general trend: recruits in the lecture-based training rated their ability higher than recruits in the competency-based training. To explore this observation, recruit responses to Recruit Survey 1 were compared with FTO responses for the global ratings and for each of the competency areas to determine if there were any significant differences between recruit and FTO ratings. This analysis was completed separately for the lecture-based and competency-based delivery models.

5.2.4.1 Recruit and FTO Responses – Lecture-based Delivery Model

Table 5-48 shows the results of the Mann-Whitney U Test for any significant difference between recruit and FTO rankings on the ability and preparedness questions, both globally and for each of the competencies. In each case, the null hypothesis tested was that the distribution was the same between recruit and FTO responses for that particular question. In all cases except ability in the risk management competency and ability in the stress tolerance competency, the null hypothesis was retained. In these two competencies, a significant difference between how the recruits rated themselves and how their FTOs rated them was observed.
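The cross-tabulation reports used to drill into significant differences can be generated directly from coded responses. A minimal sketch using only Python's standard library, with hypothetical (group, rating) pairs rather than the study's data:

```python
# Cross-tabulate ordinal ratings by rater group, mirroring the layout of the
# cross-tabulation reports in this chapter. The response pairs below are
# hypothetical and are NOT the study's survey data.
from collections import Counter

LEVELS = ["has knowledge", "act under full supervision",
          "act under moderate supervision", "act independently"]

responses = [  # (rater group, assigned ability level)
    ("recruit", "act independently"), ("recruit", "act independently"),
    ("recruit", "act under moderate supervision"),
    ("recruit", "act under full supervision"),
    ("FTO", "act under moderate supervision"),
    ("FTO", "act under moderate supervision"), ("FTO", "has knowledge"),
]

counts = Counter(responses)
print(f"{'Ability level':34} {'recruit':>8} {'FTO':>5} {'Total':>6}")
for level in LEVELS:
    r, f = counts[("recruit", level)], counts[("FTO", level)]
    print(f"{level:34} {r:>8} {f:>5} {r + f:>6}")
```

Each row then reports the per-group counts and marginal total, the same structure as the tables in this section.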
Table 5-49 shows a cross-tabulation analysis for these two areas to determine the source of the difference. In both competency areas, large numbers of recruits rated themselves as able to "act independently" (n=13, or 37%, for risk management and n=19, or 54%, for stress tolerance), compared to the FTO ratings, where one FTO (8%) rated their recruit as able to "act independently" in risk management and three FTOs (25%) rated their recruits as able to "act independently" in stress tolerance. In the lecture-based delivery model, it appears that recruits were over-estimating their ability in the risk management and stress tolerance competencies when compared to their FTO ratings.

All tests are independent-samples Mann-Whitney U Tests. For the lecture-based delivery model, the null hypothesis in each row is that the distribution of responses for that question is the same across recruit and FTO.

Question                                   Sig.     Decision
Global ability                             0.656    Retain the null hypothesis
Global preparedness                        0.861¹   Retain the null hypothesis
Adaptability ability                       0.132    Retain the null hypothesis
Adaptability preparedness                  0.285    Retain the null hypothesis
Ethical Accountability ability             0.952    Retain the null hypothesis
Ethical Accountability preparedness        0.267    Retain the null hypothesis
Interactive Communication ability          0.614    Retain the null hypothesis
Interactive Communication preparedness     0.904    Retain the null hypothesis
Organizational Awareness ability           0.426    Retain the null hypothesis
Organizational Awareness preparedness      0.143    Retain the null hypothesis
Problem Solving ability                    0.179    Retain the null hypothesis
Problem Solving preparedness               0.422    Retain the null hypothesis
Risk Management ability                    0.038    Reject the null hypothesis
Risk Management preparedness               0.058    Retain the null hypothesis
Stress Tolerance ability                   0.049    Reject the null hypothesis
Stress Tolerance preparedness              0.355    Retain the null hypothesis
Teamwork ability                           0.245    Retain the null hypothesis
Teamwork preparedness                      0.903    Retain the null hypothesis
Written Skills ability                     0.968    Retain the null hypothesis
Written Skills preparedness                0.816    Retain the null hypothesis
Asymptotic significances are displayed. The significance level is 0.05.
¹ Exact significance is displayed for this test.

Table 5-48 Mann-Whitney U Test results for lecture-based delivery model for ability and preparedness overall and for each of the competencies, grouped across recruit/FTO responses

Risk Management – ability                 Recruit   FTO   Total
has knowledge                             1         3     4
act under full supervision                5         1     6
act under moderate supervision            16        7     23
act independently                         13        1     14
Total                                     35        12    47

Stress Tolerance – ability                Recruit   FTO   Total
has knowledge                             1         2     3
act under full supervision                2         1     3
act under moderate supervision            13        6     19
act independently                         19        3     22
Total                                     35        12    47

Table 5-49 Cross-tabulation analysis of recruit ability in the risk management (top) and stress tolerance (bottom) competency areas, grouped by recruit/FTO responses

5.2.4.2 Recruit and FTO Responses – Competency-based Delivery Model

Table 5-50 shows the results of the Mann-Whitney U Test for any significant difference between recruit and FTO rankings on the ability and preparedness questions, both globally and for each of the competencies. In each case, the null hypothesis tested was that the distribution was the same between recruit and FTO responses for that particular question. In all cases except preparedness in the adaptability and interactive communication competencies, the null hypothesis was retained. In these two competencies, a significant difference between how prepared the recruits rated themselves and how prepared the FTOs rated them was observed. Table 5-51 shows a cross-tabulation analysis for these two areas to determine the source of the difference. In each of the categories, one of the FTOs indicated that the Block I training was N/A for preparedness in these two competency areas.
In the adaptability competency area, one recruit indicated that they were "extremely well prepared", and three recruits indicated the same for the interactive communication competency area. The majority of both recruits and FTOs indicated that the recruits were "well prepared" in each of these two competency areas. The small sample size for the FTOs, combined with the decision of one FTO to indicate N/A, appears to be the source of the significant differences seen between recruit and FTO responses.

All tests are independent-samples Mann-Whitney U Tests. For the competency-based delivery model, the null hypothesis in each row is that the distribution of responses for that question is the same across recruit and FTO.

Question                                   Sig.     Decision
Global ability                             0.837    Retain the null hypothesis
Global preparedness                        0.102    Retain the null hypothesis
Adaptability ability                       0.806    Retain the null hypothesis
Adaptability preparedness                  0.030    Reject the null hypothesis
Ethical Accountability ability             0.845    Retain the null hypothesis
Ethical Accountability preparedness        0.722    Retain the null hypothesis
Interactive Communication ability          0.226    Retain the null hypothesis
Interactive Communication preparedness     0.049    Reject the null hypothesis
Organizational Awareness ability           0.408    Retain the null hypothesis
Organizational Awareness preparedness      0.578    Retain the null hypothesis
Problem Solving ability                    0.684    Retain the null hypothesis
Problem Solving preparedness               0.389    Retain the null hypothesis
Risk Management ability                    0.131    Retain the null hypothesis
Risk Management preparedness               0.137    Retain the null hypothesis
Stress Tolerance ability                   0.598    Retain the null hypothesis
Stress Tolerance preparedness              0.144    Retain the null hypothesis
Teamwork ability                           0.971    Retain the null hypothesis
Teamwork preparedness                      0.072    Retain the null hypothesis
Written Skills ability                     0.602    Retain the null hypothesis
Written Skills preparedness                0.519    Retain the null hypothesis
Asymptotic significances are displayed.
The significance level is 0.05.

Table 5-50 Mann-Whitney U Test results for competency-based delivery model for ability and preparedness overall and for each of the competencies, grouped across recruit/FTO responses

Adaptability – preparedness                Recruit   FTO   Total
N/A                                        0         1     1
poorly prepared                            4         2     6
well prepared                              44        6     50
extremely well prepared                    1         0     1
Total                                      49        9     58

Interactive Communication – preparedness   Recruit   FTO   Total
N/A                                        0         1     1
poorly prepared                            5         2     7
well prepared                              41        6     47
extremely well prepared                    3         0     3
Total                                      49        9     58

Table 5-51 Cross-tabulation analysis of recruit preparedness in the adaptability (top) and interactive communication (bottom) competency areas, grouped by recruit/FTO responses

5.2.5 Analysis of Assessor Responses

The assessors for exams in the competency-based delivery model were also asked about their impressions of the ability and preparedness of Block I recruits. Although this group did not work with the lecture-based group in the same capacity, they did have exposure to previous incoming recruits in their roles as assessors for the Assessment Centre, so they would be familiar with the level of incoming recruits prior to training in the lecture-based delivery model. Because of the high stakes of the Assessment Centre, this group of current and retired police officers is very experienced, with many holding senior ranks in their various departments. Given this level of experience, it was thought that they might have valuable insights into the performance demonstrated by the recruits in the competency-based model. This group has no interaction with the recruits during the training process; they are brought in as impartial exam assessors to evaluate the recruits' performance on exam-day scenarios using standardized rubrics. The assessors were sent the survey as a precursor to focus group registration and participation.
The mean values of the assessors' rankings of the recruits' ability and preparedness, overall and for each of the competencies, are presented in Table 5-52.

Competency                                  N (valid)   N (missing)   Mean   Std. Deviation
Overall – ability                           10          0             2.80   0.422
Overall – preparedness                      10          0             3.00   0.000
Adaptability – ability                      10          0             2.60   0.669
Adaptability – preparedness                 10          0             2.90   0.316
Ethical accountability – ability            10          0             2.80   0.789
Ethical accountability – preparedness       10          0             3.00   0.471
Interactive communication – ability         10          0             2.80   0.422
Interactive communication – preparedness    10          0             3.00   0.000
Organizational awareness – ability          10          0             2.50   0.850
Organizational awareness – preparedness     9           1             2.67   0.500
Problem solving – ability                   10          0             2.70   0.483
Problem solving – preparedness              10          0             2.80   0.422
Risk management – ability                   10          0             2.70   0.483
Risk management – preparedness              10          0             2.90   0.316
Stress tolerance – ability                  10          0             2.70   0.483
Stress tolerance – preparedness             10          0             3.00   0.000
Teamwork – ability                          10          0             3.00   0.000
Teamwork – preparedness                     10          0             3.00   0.000
Written skills – ability                    9           1             2.67   0.500
Written skills – preparedness               9           1             2.67   0.500

Table 5-52 Summary of mean and standard deviation of assessors' rankings of recruits in the competency-based delivery model

To identify any difference between how the assessors ranked the recruits' ability and preparedness in the competency-based program and the rankings of the FTOs and the recruits themselves, the assessor scores were compared with the Recruit Survey 1 and FTO survey responses for the overall global scores as well as for each of the competencies. The Kruskal-Wallis test was used to determine if a difference in the distribution of responses across the three groups was observed. In all cases, the null hypothesis tested was no difference in the distribution of responses between the three groups. Table 5-53 shows the results of this analysis.
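The three-group Kruskal-Wallis comparisons follow the standard rank-based procedure: with three groups the H statistic is referred to a chi-square distribution with two degrees of freedom, whose survival function is simply exp(-H/2). A stdlib-only Python sketch (tie correction omitted for brevity; the ratings are hypothetical, not the study's data):

```python
# Minimal sketch of the Kruskal-Wallis H test used to compare recruit, FTO,
# and assessor ratings. Tie correction is omitted; the ratings below are
# hypothetical and are NOT the study's survey data.
from math import exp

def kruskal_wallis_3(groups):
    """Return (H, p) for exactly three independent samples of ordinal ratings."""
    assert len(groups) == 3
    pooled = sorted(v for g in groups for v in g)
    # mid-rank of each distinct value (averaged over ties)
    rank = {v: pooled.index(v) + 1 + (pooled.count(v) - 1) / 2
            for v in set(pooled)}
    n = len(pooled)
    h = 12 / (n * (n + 1)) * sum(
        sum(rank[v] for v in g) ** 2 / len(g) for g in groups
    ) - 3 * (n + 1)
    return h, exp(-h / 2)  # chi-square survival function with df = 2

recruits = [3, 3, 2, 3, 4, 3]
ftos = [3, 2, 3, 3]
assessors = [3, 3, 2]
h, p = kruskal_wallis_3([recruits, ftos, assessors])
print(f"H = {h:.3f}, p = {p:.3f}:",
      "Reject" if p < 0.05 else "Retain", "the null hypothesis")
```

As in the tables, the null hypothesis of identical distributions across the three rater groups is retained whenever p is at or above the 0.05 significance level.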
In all cases, the null hypothesis was retained, indicating no statistically significant difference in how the assessors, FTOs, and recruits rated the recruits in ability and preparedness in any of the competencies. In each row of Table 5-53, the null hypothesis tested was that, for the competency-based delivery model, the distribution of responses for the given question is the same across recruit, FTO, and assessor; each was evaluated with an independent-samples Kruskal-Wallis test.

Question                                      Sig.    Decision
Global - ability                              0.957   Retain the null hypothesis
Global - preparedness                         0.135   Retain the null hypothesis
Adaptability - ability                        0.521   Retain the null hypothesis
Adaptability - preparedness                   0.085   Retain the null hypothesis
Ethical Accountability - ability              0.521   Retain the null hypothesis
Ethical Accountability - preparedness         0.857   Retain the null hypothesis
Interactive Communication - ability           0.319   Retain the null hypothesis
Interactive Communication - preparedness      0.086   Retain the null hypothesis
Organizational Awareness - ability            0.623   Retain the null hypothesis
Organizational Awareness - preparedness       0.623   Retain the null hypothesis
Problem Solving - ability                     0.640   Retain the null hypothesis
Problem Solving - preparedness                0.630   Retain the null hypothesis
Risk Management - ability                     0.291   Retain the null hypothesis
Risk Management - preparedness                0.276   Retain the null hypothesis
Stress Tolerance - ability                    0.648   Retain the null hypothesis
Stress Tolerance - preparedness               0.278   Retain the null hypothesis
Teamwork - ability                            0.658   Retain the null hypothesis
Teamwork - preparedness                       0.087   Retain the null hypothesis
Written Skills - ability                      0.800   Retain the null hypothesis
Written Skills - preparedness                 0.769   Retain the null hypothesis
Asymptotic significances are displayed. The significance level is 0.05.
Table 5-53 Kruskal-Wallis test results for ability and preparedness overall and in each of the competencies grouped across recruit, FTO, or assessor

5.2.6 Qualitative Analysis of Survey Comments
This section reviews the narrative comments for Recruit Survey 1, Recruit Survey 2, and the FTO surveys for each of the delivery models, as well as the comments from the assessor survey. Despite the low number of respondents who provided comments, some valuable insights emerged. Although the recruits who responded to Recruit Survey 2 had already responded to Recruit Survey 1, the comments they offered bring a different perspective, coming after their experience applying their training on the road as patrol-level police officers, so the comments from that survey are also included.
Although the survey results did not show a statistical difference between the quantitative results for Surveys 1 and 2, the qualitative analysis did bring forth new information from Survey 2.

5.2.6.1 Lecture-based delivery model: Recruit Survey 1
Of the 35 respondents to the survey, 14 provided comments for analysis. Table 5-54 summarizes the coding of the comments.

Instructors — 4 references: Three of the comments in this node were positive statements about instructor ability; one statement was critical of a specific instructor.
Survey — 4 references: Comments about the survey itself, including timing and the competencies.
  Timing — 3 references: Comments indicating the survey would be better timed if administered after some Block II experience, when recruits had a better understanding of the job requirements.
Training structure — 7 references: Comments addressing the program structure in Block I, including course content and assessments.
  Additional content — 4 references: Additional content or material desired by recruits, including report writing.
  More application — 2 references: Recruits indicated they would have liked more opportunity to apply their knowledge in more scenarios.
Well prepared — 6 references: Recruits indicated that they felt prepared for Block II.
  Hesitant — 3 references: As a sub-set of the well-prepared comments, some recruits indicated that they were still hesitant about some of the material.
Table 5-54 Recruit comments and coding from Class 151, lecture-based delivery model, Survey 1

Overall, the comments from this survey were positive, but there was a mixture of positive and critical comments in all of the major themes. Comments on the structure of the training included both positive, such as:

I think it is amazing how much I was exposed to over just a 3 month period. All my anxieties were addressed and I feel extremely confident in starting block 2.
I think the simulation days were the most useful and I only wish we had some informal sessions in addition to the more formal ones.

and critical, such as:

Too many exams in one week with added physical activities on top of that. The work load was almost too much and too rushed. We had to memorize and not apply. Could have been spaced out better over the three months. Time utilization.

Both of these comments reflect the overall program feedback that had been received in previous verbal program debrief sessions and that formed part of the basis for the redesign of the curriculum. Additionally, comments desiring more training on report writing, in particular, were very common in program debrief sessions. Interestingly, six of the recruits said that they believed they were well prepared for Block II but also specified areas where they would have liked more training to be better prepared. As noted, these areas included report writing, transporting prisoners, and court testimony. One comment reflecting this sentiment is:

While I feel confident about heading into Block II, I feel like you can never really tell how prepared you are until you hit the streets. Overall, I would say I feel prepared. However there are some things that we only touched on once, or very briefly, and some more time would be beneficial. This includes report writing, ensuring good health of a subject (first aid, etc., if needed), and dealing with people who have been arrested (there was a lot of talk about bringing someone back to cells, but that’s as far as we got).

Of the three topic areas this respondent listed, two are department-specific and difficult to train at the JIBC Police Academy, where we serve 13 different departments. Each department has its own first aid certification requirements, so the duty to provide care is discussed in general terms.
Similarly, each department has its own procedures for transporting and booking detainees, with some departments using wagons driven by a police officer and others transporting in the back of patrol vehicles, so specifics around this topic are typically left for Block II. Three of the comments indicated that the survey would be better timed after Block II experience because the recruits were unsure of what their role on the road would look like. Overall, the comments were favourable towards their training at the JIBC Police Academy.

5.2.6.2 Lecture-based delivery model: Recruit Survey 2
Of the 20 respondents to the survey, 10 provided comments for analysis. Table 5-55 summarizes the contents of the responses.

Block II Application — 3 references: Comments indicating that recruits learned the most by being on the road in Block II.
Structure — 9 references: Comments on the structure of training in Block I and course content.
  Content — 6 references: Comments about the course content in Block I; with one exception, the comments were critical.
    Articulation — 1 reference: Comment from a recruit who struggled with articulation.
    Computer — 2 references: Comments about the inadequacy of CPIC/MRE/PRIME training.
    Report writing — 3 references: Comments from recruits wanting more report writing in Block I.
  Simulation time — 3 references: Comments indicating a need for more simulations and application during Block I.
Survey — 2 references: Comments specific to the survey structure, wanting more options for answers or struggling to link the competencies to Block I training.
Table 5-55 Recruit comments and coding from Class 151, lecture-based delivery model, Survey 2

The comments in this survey were considerably more critical than those in Survey 1. Three recruits noted that they learned much from their FTO:

I have learned a lot being on the road at my own department as most of the JI training is based on what Vancouver does.
and

In the beginning of my block 2, I had some challenges regarding report writing and articulation, however I managed to improve significantly after spending 2 months with my field trainer. The toughest part of report writing was to figure out what is important and what is not as important to mention in my reports.

Although it is not specifically mentioned, this last comment also targets legal articulation, as report writing is centred on establishing how the essential elements of an offence were met. Computer training was mentioned as a deficit in the program, which had been an ongoing struggle with the four days in a row of PRIME training without any link to practical application. Also, it is not possible for the JIBC Police Academy to connect to the live CPIC environment because of network security concerns about its location within a public institution. The structure of the curriculum was also noted in terms of the way time was used in traffic studies and the desire for more practical scenarios. One recruit commented:

I lack a lot of knowledge in Traffic Enforcement that should have been taught in the academy. Need more instruction on what to do and why, not practice filling out the forms.

Several recruits mentioned that they would have benefited from more simulation training in Block I:

I think we should have been reading at home and doing simulation training throughout Block 1 which I believe is the model that you moved to so that’s great.

Overall, as expected, after exposure to the realities of patrol-level police work, the recruits identified areas in Block I training that would have better prepared them for Block II. Their comments are again consistent with the informal verbal program debrief sessions that were conducted in the lecture-based delivery model.

5.2.6.3 Lecture-based delivery model: FTO Survey
Of the 15 respondents to the survey, nine provided comments for analysis. Table 5-56 summarizes the comments.
Course content — 4 references: FTOs expressed opinions on course content in Block I.
  Articulation — 1 reference: Comment about a recruit lacking legal knowledge.
  Report writing — 2 references: Comments that more report writing is required in Block I.
Program structure — 1 reference: Comment that Block I should be more scenario-based to better prepare recruits to interact with subjects.
Recruit with previous experience — 2 references: Recruits hired with previous policing experience were strong performers.
Survey — 2 references: Difficulty determining whether the competencies are personality traits or related to training.
Well prepared — 3 references: FTOs believed their recruit was well prepared for Block II.
Table 5-56 FTO comments and coding from Class 151, lecture-based delivery model, FTO survey

The comments on course content focused on the FTOs' impressions that their recruits did not have much experience writing reports during their Block I training. One FTO also commented on their recruit's legal understanding:

There seemed to be less knowledge on essential elements and what the recruit needed to satisfy a criminal offence in speaking to a complainant or victim. Well prepared overall.

Another FTO indicated that their recruit struggled with subject interactions:

I feel the most beneficial change to the block 1 program at the JI would be subject interaction. Have more scenario-based training that gets the recruit implementing the knowledge (use of force, legal, investigation/patrol). I felt the academy did not fully prepare the recruit for this. Further, the recruit did not appear to have much experience writing reports (detailing a proper synopsis and narrative).

These comments are consistent with the anecdotal feedback the program had been receiving from FTOs for many years under the lecture-based delivery model.

5.2.6.4 Competency-based delivery model: Recruit Survey 1
The next sections outline the comments from the recruit and FTO surveys from Class 152 and Class 153.
They are presented in order of the surveys, so Survey 1 will be discussed for both classes, followed by Survey 2 and then the FTO survey.

5.2.6.4.1 Class 152 – Recruit Survey 1
Of the 29 respondents to the survey, 10 provided comments for analysis. In contrast to the surveys from Class 151, the recruit comments on course content typically occurred separately from their comments on program structure, and so “Content” was coded as a separate node instead of a sub-node under “Structure”.

Competencies — 2 references: Recruit comments that specifically mentioned the competencies: one overall well-prepared comment and one identifying specific competencies that were difficult to teach in Block I.
Content — 6 references: Recruits identified specific areas where they would have liked more exposure in Block I, including report writing, radio use, legal knowledge, and Use of Force.
  Legal — 3 references: Three recruits would have liked more time spent discussing legal topics.
  Report writing — 1 reference: One recruit would have liked more time working on report writing.
Effort — 2 references: Recruit comments on the amount of time they spent learning and understanding material outside of the classroom.
Prepared — 6 references: Recruit comments identifying that they felt prepared for Block II.
Structure — 7 references: Recruit comments on specific elements of the structure of the new program.
  Directed study — 2 references: Recruits recognized the purpose of directed study time but thought there was too much of it or that it was not spent in a meaningful manner with instructor help.
  Dislike quizzes — 3 references: Recruits critical of the weekly quizzes intended to ensure they understood the pre-reading, particularly the requirement to achieve 100% on the quiz prior to the start of the week.
  Dislike reflection — 6 references: Recruits did not appreciate the self-reflection required in the program, thought it was too frequent or a waste of time, or felt it lacked meaningful instructor feedback.
  Pre-reading — 5 references: Recruits indicated there were too many readings or disliked being required to complete the readings.
  Scenarios — 6 references: Recruits benefited from the scenarios and believed they were prepared for Block II because of this practical experience.
  Wanted lectures — 6 references: Recruits would have preferred more lectures in the program or a lecture-style review of the material.
Survey — 1 reference: One recruit said the questions on the survey were too closed-ended.
Table 5-57 Recruit comments and coding from Class 152, competency-based delivery model, Survey 1

Overall, the comments from Survey 1 in Class 152 were critical of specific aspects of the structure of the new delivery model. Recruits did not enjoy the amount of pre-reading, the weekly quizzes on which they had to achieve 100% before the start of the week, the directed study time, or the reflection that was built into the program. No mention was made of the case studies in the recruit comments. One theme was that recruits wanted lectures, particularly on legal concepts:

I did not get too much from the self-assessments and I would have enjoyed the information from our readings reviewed and discussed by an instructor by putting the material into context.

and

I would like to see a little more time spent having the law explained to us, rather than reading it from copious amounts of readings…

The recruits did not seem to recognize that the case studies were intended as the application and review of their reading knowledge prior to applying it in scenarios, and that the case-study debriefs were the time with the instructor to review any misunderstandings. With respect to content, recruits wanted more focus on report writing and on the legal elements of the curriculum.
Recruits had positive comments on how prepared they believed they were for Block II training and on the practical scenarios:

I enjoyed the general layout of the curriculum for Block 1 and truly believe that the practical sessions have been the greatest preparation tool for our Block 2.

and

I fell that the increased amount of simulations and scenario based exercises has helped prepare me for block two far more than had I gone through the old curriculum. I am entering block two confident that I know what I am doing and that I will be successful.

Within the program structure area, there were two interesting comments that indicated a difference in pedagogical philosophy:

I am much more of a proponent of a teaching-based learning environment, whether it be in-class lecture style, scenario based interactive learning, or theoretical group discussion led by an instructor. As someone who has been in this training for the past 13 weeks, I believe I am lacking in experience-based, instructor-led teaching. I look forward to Block II for much of this type of training and teaching.

and

I think that much of my knowledge of policing has come from prior experience in a policing environment, rather than Block I of training. I believe that this is the case because of the lack of teaching in Block I. We had much learning, but little teaching (emphasis added).

The first comment is slightly puzzling in that the basis of the program was scenario-based interactive learning and theoretical group discussion. The second comment is interesting because of its recognition of the amount of learning that occurred, paired with the apparent opinion that learning is only relevant if it happens in a lecture-based environment. Interestingly, although six recruits indicated they believed they were well prepared for Block II (the same number as in Class 151), their comments did not show the hesitancy seen in the previous class.
5.2.6.4.2 Class 153 – Recruit Survey 1
Of the 20 respondents to the survey, three provided comments for analysis. The response rate for this class was exceptionally low, but their comments are nonetheless presented in Table 5-58.

Program Structure — 2 references: Comments on specific components of the program or the structure of coverage.
  Application for Advancement — 1 reference: Disliked the self-assessment and thought assessment was not standardized between mentors.
  Introductory Coverage — 1 reference: Some topics were covered in an introductory fashion, such as radio use, CPIC, and detainee transport.
Survey – constable tasks — 1 reference: Recruit found it difficult to comment on the constable tasks as they were not a specific focus and some were covered only briefly in training.
Well prepared — 1 reference: Positive comment about the program and preparedness for Block II.
Table 5-58 Recruit comments and coding from Class 153, competency-based delivery model, Survey 1

It is difficult to draw conclusions from such a small sampling of comments. As in the previous survey from Class 152, there was a mixture of positive and critical comments. The critical comments were about specific aspects of the structure of the curriculum:

I felt frustrated in regards to the end of block portfolio self-assessment for the constable competencies – articulating why you believe you met these requirements. It was not standardized between mentors and it did nothing to help my learning or preparedness for block.

and the positive comments were about the recruits’ preparedness for Block II:

Excellent program that has prepared me well for Block II of training. As someone who entered the program with no previous police experience and an incredibly small amount of policing knowledge I appreciated how the learning build upon concepts as we went.
5.2.6.5 Competency-based delivery model: Recruit Survey 2
The number of recruit responses to Survey 2, administered during Block II training, was quite low, making it difficult to draw conclusions from the comments. Nevertheless, the comments are presented because they do indicate that some recruits gained new insight after they had experienced the realities of patrol-level police work during Block II.

5.2.6.5.1 Class 152 – Recruit Survey 2
Of the nine respondents to the survey, four provided comments for analysis. The analysis of their responses is presented in Table 5-59. As in Survey 1, comments from Class 152 mentioned that they would like more lectures, particularly on legal concepts. Interestingly, one recruit suggested a ride-along part way through Block I to help provide context for the material being learned. This comment is particularly interesting because the original proposal for curriculum change included a “Pre-Block I” ride-along week in which recruits would complete a minimum of three ride-alongs in their home department, with specific tasks and types of calls to observe, to provide context for their upcoming training. This part of the proposal was not approved by the departments because of union opposition and a perception that it would not be needed for a recruit who had some form of related experience, so it was not included in the final version of the new delivery model.

Content — 4 references: Comments on course content in the training that could have been increased.
  Legal — 2 references: Recruits would have liked more focus on legal concepts in Block I.
  Report writing — 2 references: Recruits would have liked more focus on report writing and MDT/PRIME/CPIC in Block I.
Learning styles — 1 reference: A recruit commented there was not enough teaching geared towards someone with an “audible” learning style.
Prepared — 1 reference: A recruit commented that they were prepared overall.
Ride-along — 1 reference: A recruit suggested a ride-along part way through Block I to provide context.
Structure — 2 references: Long comments focused on different specific aspects of the structure of the new delivery model.
  Case studies — 2 references: Unfavourable comments about case studies.
  Dislike quizzes — 1 reference: One recruit indicated they did not try to understand the material, they just tried to pass the quiz.
  Pre-reading — 3 references: Two recruits commented (one recruit twice) that they did not like the reading because it was self-directed.
  Scenarios — 4 references: Two recruits had positive comments about the scenarios and one recruit had two negative comments about scenarios.
  Wanted lecture — 3 references: Two recruits commented that they wanted more lectures; one indicated the reason was to hear the stories of the instructors.
Table 5-59 Recruit comments and coding from Class 152, competency-based delivery model, Survey 2

Although case studies were not mentioned in Class 152 Survey 1, they were mentioned unfavourably by two recruits in Survey 2:

The ‘case study’ element is good in theory, but I found too often we would get off topic and peers in the class would turn it into an opportunity to talk for the sake of talking rather than letting the instructors ‘teach’..if that makes sense.

Overall, the comment that perhaps best illustrates the change in philosophy with the new delivery model, while also touching on the tensions of introducing this change and the expectations of FTOs, is the following:

On the whole, I do think we are prepared when it comes to knowing what to do and how to react in any given situation, which ultimately is what matters most. I’ve yet to go to a call where I didn’t know what do to, albeit if I couldn’t understand the legal grounds/ recite the essential elements verbatim/ articulate why I did what I did in legal terms.
This comment is particularly interesting because it equates understanding and articulating the legal aspects of policing with reciting elements and authorities verbatim. Recitation is frequently an expectation of FTOs, even though the focus in the new delivery model is on explaining and understanding. Recruits in Block II must navigate this tension, which was particularly evident with the first class through the new program.

5.2.6.5.2 Class 153 – Recruit Survey 2
Of the seven respondents to the survey, two provided comments for analysis. One of these recruits was positive and one was exceptionally negative. The recruit who included the negative comments did not provide any identifier, so it could not be determined whether their FTO also responded to the survey. The recruit who provided the positive comment said the program was “solid” in preparing recruits for Block II. The recruit who provided the negative comments indicated they were frustrated with the competencies, spent too much time doing the documentation in Block II, disliked doing the application for advancement to demonstrate they were ready to move on from Block II, were unhappy that they were required to know municipal bylaws, did not feel that they would gain anything from being involved in Block I scenarios when they returned for Block III, and said that they had paid:

…a lot of money to go to the JIBC to learn, and I feel I have had to teach myself everything…

A table of frequencies is not provided as there were only two comments to analyze.

5.2.6.6 Competency-based delivery model: FTO Survey
The field trainers from both classes in the competency-based program were surveyed. Again, response rates were rather low, but the results of the analysis of the comments they provided are presented below.

5.2.6.6.1 Class 152 – FTO Survey
Of the 11 respondents to the survey, seven provided comments for analysis. Table 5-60 shows the analysis of their comments.
Block II Book — 3 references: Negative comments about the changes to documentation in Block II.
  Competencies — 2 references: Found the competencies challenging to apply.
  Documentation – negative — 4 references: FTOs did not like the new documentation, found it repetitive, and said it took too much time.
  Reflection – negative — 1 reference: Did not like that the recruit had a self-assessment component in the Block II documentation.
Course content — 3 references: Comments that the recruits are not prepared in many areas from Block I training, including report writing, PRIME/CPIC, legal articulation, and Use of Force.
  Legal — 2 references: Two FTOs thought their recruits needed a better legal understanding.
  PRIME and CPIC — 1 reference: One FTO thought that PRIME and CPIC training was inadequate.
  Report Writing — 2 references: Two FTOs thought their recruits’ report writing ability was lower than in previous classes.
  Scenarios — 1 reference: One FTO commented that scenario-based training is essential.
  Use of Force — 1 reference: One FTO thought Use of Force training was inadequate.
FTO Update Course — 1 reference: One FTO commented that the FTO update course did not provide enough information to already-trained FTOs.
Program Structure — 1 reference: One FTO commented that recruits were not being told what was essential to know, appreciated the scenario-based training, and wanted recruits to learn legal studies through anecdotal stories relayed by instructors.
Survey — 2 references: Two FTOs commented on the structure of the survey and indicated some of the constable tasks were not applicable.
Table 5-60 FTO comments and coding from Class 152, competency-based delivery model, FTO survey

While the FTOs from Class 151 provided some negative comments about the Block I training, their comments were worded in a polite and professional manner. The comments from the FTOs for Class 152 were vitriolic; the majority were directed towards the changes to the Block II book and the documentation process.
The designer/architect of the new program in regards to class 152 has placed far too many tasks and requirements on the recruit and his or her respective field trainer during block II. The extra and redundant amount of self reflection, weekly and monthly documentation needs to be scaled back specifically. My recruit has spent far too much time writing about his own actions and thoughts…

and

The recruit book was poorly laid out and no additional information on how it was supposed to be filled out was given. From speaking to other field trainers, there appears to be no consistency in the completion of the documentation. The rubric was not applicable in most cases and shows a complete lack of understanding of patrol based police work and the role recruits take on while in block 2. Based on the new program I will not be volunteering to take any further recruits.

FTOs noted some concerns about recruits’ ability to articulate legal matters and write reports, and attributed these shortcomings to the new delivery model. Overall, the only comment that could be construed as even mildly positive was:

I do believe that scenario based training is essential, especially in a field such as law enforcement, but…

Clearly, the seven respondents who provided comments were rather unhappy with the program, in particular with the changes to Block II documentation.

5.2.6.6.2 Class 153 – FTO Survey
Of the nine respondents to the survey, four provided comments for analysis. The comments from these four respondents were much more consistent with those of the FTOs from Class 151 in terms of tone. While they still expressed dissatisfaction with recruits self-assessing, the comments were worded in a much less angry manner.

Block II Documentation — 2 references: Negative comments about the frequency and content of the documentation.
  Self-assessment — 2 references: Negative comments about self-assessment.
New FTO — 1 reference: One FTO was new and could not compare their recruit to recruits trained under the previous model.
Recruit with previous police experience — 1 reference: One FTO had a recruit with previous police experience who performed well.
Recruiting — 1 reference: One FTO commented that there are issues with recruiting that impact recruit performance, and that the hiring of unsuitable people cannot be fixed by training.
Table 5-61 FTO comments and coding from Class 153, competency-based delivery model, FTO survey

One FTO who commented negatively about the self-assessments clearly believed that this was not an appropriate task for a recruit:

Recruits applying self-assessment when they are learning to be police officers is a cart before the horse approach. Focus on the core skills and assessment by senior officers is a beneficial model. Asking a new member to assess themselves is difficult when they don’t know anything or very little of what the job entails.

The negative comments from these four FTOs were limited to the new documentation procedures for Block II.

5.2.6.7 Competency-based delivery model: Assessor Survey
Of the 10 respondents to the survey, three provided comments for analysis. All three comments stated that too much time had elapsed between their assessment of recruits in Class 152 and Class 153 and the administration of the focus groups. The gap of approximately one year was due to the time constraints around developing and delivering the Block III component of training. Understandably, the assessors, many of whom had also assessed exams for classes subsequent to Class 153, had a difficult time recalling the specifics of these two classes.

5.3 Focus Group Analysis
The transcript from the FTO focus group was not formally analyzed, as the turnout of only three field training officers, all from one department, did not offer representation of the group.
In addition to the small numbers, one of the field trainers who participated is a very experienced member who has been field training for many years and who is routinely assigned recruits who are known to be struggling in their training. A couple of points of interest, however, did emerge from the discussion. When the questions centred on recruit preparedness for Block II training, the FTOs seemed adamant that recruit legal knowledge had suffered with the new delivery model. As examples, they cited a lack of recruit knowledge of criminal harassment charges and of municipal bylaws. Neither of these topics was included in the lecture-based delivery model, but when this was pointed out, the FTOs insisted that they had been. It seemed that the FTOs had difficulty differentiating between the topics they had learned in Block I versus Block III when they attended the academy. The FTOs were also clear in their belief that their role was not to “teach” the recruits legal knowledge but rather to expose the recruits to as many calls as possible. There was acknowledgement, brought up by two of the field trainers, that some of the general skills recruits lacked were already lacking when they started training and that the deficiencies were a product of recruiting, not of training at the Police Academy. In particular, the department that these three FTOs were from had undergone a hiring surge, taking on large numbers of recruits after a couple of years of a hiring freeze. Further, the most animated discussions centred on the Block II documentation in the Block II book. This component of the discussion ended up taking an extra hour beyond the scheduled time and resulted in some suggestions to modify the Block II book that have since been implemented, including adding examples of how to document a call and adding space for comments after each criterion on the rubrics.
While the discussion with these FTOs was informative, it does not constitute a valid representation of opinions for research purposes.

5.4 Summary

The small sample size, particularly for the FTO survey, made analysis of the responses difficult. Overall, the quantitative analysis of the survey results for the Core Constable Competencies indicated that Block I recruits in the lecture-based delivery model rated their ability significantly higher than did recruits from the competency-based delivery model in the competency areas of adaptability, ethical accountability, organizational awareness, problem solving, risk management, stress tolerance, and teamwork. Recruits in the lecture-based delivery model rated their preparedness in the written skills competency area significantly higher than did recruits from the competency-based delivery model. No significant differences emerged in how the FTOs rated their recruits between delivery models for any of the competencies, in either recruits’ ability or recruits’ preparedness for Block II. Recruits in the lecture-based delivery model rated their ability in the risk management and stress tolerance competency areas significantly higher than their FTOs rated them. No significant differences in ability ratings between recruits and FTOs emerged in any of the competency areas for the competency-based delivery model.

Qualitative analysis of the comments section of each of the surveys indicated that recruits from the lecture-based delivery model believed they were well prepared but were also hesitant about their Block II training, whereas this same hesitancy was not articulated by the recruits who were trained in the competency-based delivery model. Several recruits from the lecture-based delivery model indicated they would have benefited from an increase in application through scenarios and a decreased focus on memorization.
In contrast, recruits in the competency-based model indicated they would have preferred more lectures, although they did see benefit in the scenario-based application. Many of the recruits’ comments in the competency-based program centred on dissatisfaction with the amount of work required to complete the readings and associated quizzes, as well as a distaste for the self-assessment components of the curriculum. The FTOs identified report writing, CPIC/PRIME training, and legal articulation as lacking in both the lecture-based and competency-based curricula. FTOs from the lecture-based model indicated their recruits would benefit from more scenarios, whereas FTOs from the competency-based model indicated their recruits would benefit from more lectures. The FTO comments about the new delivery model focused on their dissatisfaction with the new documentation process for Block II and the amount of self-assessment required of recruits. The significance of both the quantitative and qualitative results will be discussed in the following chapter.

Chapter 6: Discussion

Increasingly, the traditional methods of para-military, didactic police instruction are no longer regarded as sufficient to meet the demands of patrol-level policing (Cleveland & Saville, 2007; Hundersmarck, 2009; Pannell, 2016; Vander Kooi & Bierlein Palmer, 2014; Werth, 2011). Further, these traditional training methods may promote a mindset that contradicts public expectations of how police should behave (Pannell, 2016). Police training should focus on developing critical thinking skills and the ability to adapt to ever-changing environments (Cleveland & Saville, 2007; Pannell, 2016; Werth, 2011). To accomplish this change, the underlying philosophy of police training must be changed, rather than merely introducing more training (Pannell, 2016).
Indeed, the underlying messages conveyed by how material is taught may be more powerful than the actual curriculum content, so it is essential that training practice aligns with behavioural expectations (Glazier et al., 2017). This thesis was intended to be an evaluation of a transformational change in municipal police recruit training in BC, following the design, development, implementation, and evaluation of a new competency-based delivery model implemented at the JIBC Police Academy. Through the process, however, it became apparent that many factors other than the design and quality of the training were influencing the evaluation. Indeed, as Pannell (2016) notes in relation to attempts to revise police training: “The formidable social forces of formal and informal cultures have derailed many good ideas from becoming successful” (p. 5). This chapter will discuss the results obtained from the program evaluation as well as other factors that influenced the implementation and evaluation of the program, including faculty development and organizational and cultural change within a police training environment. Following that, there will be an outline of the changes that have been made since the first two classes in the new delivery model and recommendations as the program moves forward.

6.1 Survey Results

Although low response rates to surveys make analysis difficult, police survey research typically has extremely low response rates of less than ten percent (Huey et al., 2017). The response rates seen in this study, particularly for the surveys administered during Block II training to recruits and FTOs, are therefore not unusual, although the low response rates do limit the conclusions that can be drawn from the findings, particularly with respect to analyzing FTO perceptions.
As the assessor survey results were not statistically significantly different from either the recruit or the FTO results in any area, and the assessor comments were minimal, they are not included in the discussion.

6.1.1 Recruit Ability and Preparedness

From the quantitative survey results, no statistically significant difference between delivery models was identified in the global ability and global preparedness for Block II ratings from either recruits or FTOs. Within the individual competency areas, however, recruits in the lecture-based delivery model rated their ability statistically significantly higher than did recruits in the competency-based model in all of the competencies except interactive communication and written skills. The only statistically significant difference for recruits in how well Block I prepared them for Block II was in the area of written skills, where recruits in the lecture-based model believed they were better prepared than did recruits in the competency-based model. No statistically significant differences emerged in how FTOs rated their recruits between delivery models, despite an overall trend toward lower ratings from FTOs in the competency-based delivery model. The observation that recruits self-assessed as having a higher ability when taught with the lecture-based model raised the question of the accuracy of their self-assessments. The qualitative comments from the surveys in the lecture-based model contained some indication of hesitancy in how recruits discussed their readiness for Block II. Of the six recruits who said they believed they were well prepared, half did so with a qualifying comment on areas where they thought they were underprepared. With the competency-based delivery model, this same hesitancy was not seen in the comments about readiness for Block II; the six recruits who said they believed they were well prepared for Block II did so without qualifying statements.
Further, when commenting on the timing of the survey, two recruits from the lecture-based delivery model commented that the survey would be better timed after they had experienced Block II and knew what to expect from their role as a patrol-level police officer. No recruits from the competency-based model commented that they were unsure of what to expect when on the road with their FTO. One possible interpretation of these results is that recruits who were taught in the lecture-based model and did not have previous policing experience did not have an accurate sense of what to expect in Block II. Their limited exposure to scenarios and to practical application during Block I may have left them guessing as to what to expect when interacting with subjects. This conclusion is supported by the FTO comments from two trainers in the lecture-based model that their recruits were unprepared for interacting with subjects and that they required more application during their Block I training. Another possible explanation for these results is that the recruits trained in the competency-based model had a much better understanding of their actual ability. Whereas recruits in the lecture-based model participated in an extremely small number of scenarios and received extremely limited performance feedback, usually focused on what they did well, recruits from the competency-based model participated in many more scenarios and received formative feedback both from the instructor leading the scenarios and from watching their performance on video the following day. Further, recruits in the competency-based delivery model had performance-based exams in which they had to complete a scenario and were graded on a pass/fail basis based on their actions.
Because of this repeated exposure to scenarios and the associated formative and summative feedback built into the competency-based model, it is possible the recruits in the competency-based delivery model had a much more accurate sense of their actual ability than did the recruits in the lecture-based model. This conclusion is only partially supported by the quantitative survey results. To investigate this hypothesis, recruit scores were analyzed against FTO scores for both the lecture-based and competency-based delivery models. There were no areas in which recruits in the competency-based model rated their ability statistically significantly differently than their FTOs rated them. For the lecture-based delivery model, recruits’ assessment of their own ability was higher than their FTOs’ assessment, and statistically significantly so in the risk management and stress tolerance competency areas. Although the low number of FTO responses makes this analysis difficult, if the FTO scores are taken as an accurate representation of recruit ability, then the recruits from the lecture-based model over-estimated their ability in two of the competency areas, whereas recruits in the competency-based model had an accurate sense of their ability. A statistically significant difference also emerged in how well recruits believed Block I prepared them for Block II in the written skills competency area: recruits from the competency-based delivery model indicated they believed they were less well prepared than did recruits from the lecture-based model. This observation was very interesting, as the competency-based model has an increased focus on report writing using the PRIME training environment. Further, the reports written in the competency-based model are based on the notes recruits took during their scenarios.
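The recruit-versus-FTO comparison described above can be illustrated with a small sketch. This is purely illustrative: the thesis does not name the statistical test used, the Mann-Whitney U statistic shown here is simply one common nonparametric choice for comparing two independent groups of ordinal Likert ratings, and all rating values in the example are invented rather than study data.

```python
# Illustrative only: the test choice is an assumption and the ratings are
# invented, not data from this study.

def tie_averaged_ranks(values):
    """Map each distinct value to its average rank (1-based, ties averaged)."""
    ordered = sorted(values)
    rank_of = {}
    i = 0
    while i < len(ordered):
        j = i
        while j < len(ordered) and ordered[j] == ordered[i]:
            j += 1
        rank_of[ordered[i]] = (i + 1 + j) / 2  # average of ranks i+1 .. j
        i = j
    return rank_of

def mann_whitney_u(group_a, group_b):
    """Return the Mann-Whitney U statistic for group_a versus group_b."""
    rank_of = tie_averaged_ranks(list(group_a) + list(group_b))
    rank_sum_a = sum(rank_of[v] for v in group_a)
    n_a = len(group_a)
    return rank_sum_a - n_a * (n_a + 1) / 2

# Hypothetical 5-point Likert ratings of one competency area
recruit_self = [5, 5, 4, 5, 4]   # recruits rating their own ability
fto_rating = [3, 3, 4, 2, 3]     # FTOs rating the same recruits

u = mann_whitney_u(recruit_self, fto_rating)
print(f"U = {u}")  # a U near n_a * n_b (25 here) means recruit ratings sit higher
```

In practice the U statistic would be compared against a critical value or converted to a p-value; here a U close to its maximum of 25 mirrors the pattern reported above, in which lecture-based recruits rated their own ability higher than their FTOs rated them.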
One possible explanation for this difference is that the report writing exercises in the new delivery model have focused too much on the specific technical aspects of the PRIME software, to the detriment of overall writing ability and how to structure a report. This is one area of the curriculum that may need to be addressed to regain a balance between writing and technical computer skills. Another possible explanation is that, because the reports in the competency-based delivery model actually use the notes recruits took during scenarios, the recruits may have a heightened awareness of the complexities of taking notes during calls and translating those notes into reports after the fact. While recruits in the lecture-based model were shown how to set up their notebooks and were required to take notes on their day-to-day classroom activities, there was little focus during training on taking notes during and after a call, because there were few scenario opportunities to do so. When recruits in this model were shown how to write a report, it was done using provided material that contained all the required information, so recruits in the lecture-based model may not have developed an accurate appreciation of their own notetaking and report writing skills. Because of this lack of practical experience in taking notes during a call and then using those notes to write a report, recruits in the lecture-based delivery model may have over-estimated their preparedness to actually write a report during Block II. When the recruit and FTO scores for how well Block I training prepared recruits for Block II were compared for both classes, there were no statistically significant differences between the recruits and FTOs for the lecture-based delivery model. A statistically significant difference for the competency-based model was observed for two competencies: adaptability and interactive communication.
The distribution of responses was significantly different for these two areas, but interpretation is difficult because of the low response rates and because one of the FTOs selected N/A for how well Block I prepared their recruit in all of the competencies. These two competencies stood out because some recruits indicated that Block I had made them “Extremely well prepared” (one recruit for adaptability and three recruits for interactive communication). The combination of low FTO response rates, one FTO choosing N/A for competency preparedness, and some recruits feeling they were extremely well prepared led to the observed differences.

6.1.2 Course Content and Structure

The qualitative comments from the FTOs in both the lecture-based and competency-based models indicate recruits struggled with report writing and legal articulation. These two topics were consistent between delivery models, although the comments made by the FTOs for the first class in the new delivery model were worded with considerable hostility and anger. An interpretation of the FTO comments from this class will be discussed in a later section on organizational change. For recruits in the lecture-based delivery model, comments on course content before they started Block II indicated they wanted more application of the concepts and more focus on report writing. After they had experienced Block II training, the comments still included a desire for more application through scenarios and more focus on report writing, but also addressed legal articulation and CPIC/PRIME/MDT training. The same concepts were identified by recruits in the competency-based program. Recruits in this model indicated they wanted more focus on legal concepts and articulation, report writing, radio use, CPIC/PRIME/MRE, and detainee transport.
Interestingly, the comments on areas in which they would have benefitted from more training did not change between Survey 1 and Survey 2 for the competency-based model, supporting the earlier hypothesis that recruits in the competency-based model have a better understanding of the expectations of a patrol-level police officer. Of the topics identified, CPIC/PRIME/MRE training has been an ongoing theme in recruit training. The ability to deliver this training is limited by security concerns arising from the Police Academy being housed at the JIBC. Because the JIBC is a public institution and cannot provide the same level of internet security as a police department, recruits in training do not have access to CPIC or the live PRIME environment, because those systems contain real citizen data. Improvements have been made with the competency-based model, where recruits have access to a PRIME training environment and write their reports using the same software they will use on the road, but it will never exactly replicate what they will experience in Block II, and so some of that training, out of necessity, will be left to the FTOs. Legal articulation is a continual source of anxiety for new recruits, as their ability to articulate the legal grounds for their decisions and actions is central to their role as police officers, both in guiding their actions and in writing the reports in which they recommend charges. Recruits in the competency-based delivery model indicated that they felt less confident with their legal articulation. Several factors could account for these comments. During the first class implementation, instructors involved in the delivery of the curriculum faced a large learning curve. Many instructors were not asking the legal-related questions associated with the scenarios and outlined in the instructor guide. The instructors may not have fully appreciated the implications of not following the lesson plans and felt overwhelmed by the changes they were experiencing.
Since the first class, there has been an increased emphasis on ensuring that legal-related questions are used during the scenarios so that the recruits have a clear understanding of, and are able to articulate, their legal grounds. Additional articulation questions have also been included in the oral exam portion of each of the exam scenarios to ensure recruits have a solid legal foundation. Additionally, part of the discomfort around the ability to articulate may stem from philosophical differences over what is required to effectively articulate grounds. Many FTOs, trained in a para-military system where rote memorization and regurgitation were the focus, believe that this recitation is the only effective way to articulate legal concepts. The philosophy of the new delivery model focuses on understanding and application rather than rote memorization, and on an integration of legal learning across all Blocks of training. FTOs who expect their recruits to recite legal concepts may be upset by this change, and their perceptions and educational beliefs may be relayed to their recruits. Recruits in the lecture-based delivery model indicated that they would have benefitted from more scenario-based application. This observation was consistent across both surveys and was also noted by FTOs. Conversely, recruits in the competency-based delivery model indicated that they wanted more lecture, although they greatly valued the experience they gained from the scenarios. The desire for more lectures was also noted by the FTOs for the competency-based model recruits. These contrasting results are similar to those obtained by Vander Kooi and Bierlein Palmer (2014) in comparing lecture-based training to PBL-based training in a policing environment. They found that the lecture-based group found their lectures boring and valued the hands-on application, group work, and the small amount of scenario-based training they received.
Their PBL group also liked hands-on work but disliked the ill-structured problems associated with PBL because of the struggle involved in making meaning of the content (Vander Kooi & Bierlein Palmer, 2014). Indeed, it is harder work for both teachers and students to teach and learn in a PBL-based environment because it is no longer a passive delivery and receipt of information, and this increase in difficulty may cause anxiety and dissatisfaction among learners (Cleveland & Saville, 2007; Werth, 2011). Despite this potential dissatisfaction, however, police interact and communicate with groups of people on a daily basis, so incorporating collaborative group work and opportunities to practice this communication and interaction is essential to effective police training (Cleveland & Saville, 2007). Student perceptions of the amount of effort required to construct meaning from the course material may negatively influence their satisfaction with their training, despite facilitating a deeper level of learning, critical thinking, and understanding. Additionally, recruits in the competency-based delivery model indicated dissatisfaction with the pre-readings, weekly quizzes, case-based exercises, directed study, self-assessment, and assessment portfolios. This dissatisfaction is echoed in the end-of-program evaluations recruits complete in the week of their graduation from Block III. Prior to the new delivery model, program evaluation was conducted as an informal group discussion in the classroom; in the new delivery model, recruits complete an extensive survey that outlines each component of the curriculum and its intended purpose and asks recruits how successful it was in achieving that purpose. Unfortunately, this same level of data does not exist for the lecture-based delivery model, and a technical issue prevented data collection from Class 153.
The program evaluation survey for Class 152, however, indicated that recruits thought the amount of reading in the pre-readings was excessive. Interestingly, in preparing for implementation, the Recruit Training manuals were extensively edited and rewritten, removing almost 800 pages of reading from the material by focusing on what was essential for basic patrol-level policing. The difference between delivery models, however, is that recruits in the lecture-based model were not held accountable for reading the material in the manuals, as it was repeated in the lectures. In the new delivery model, recruits must complete the reading so that they have a solid foundation for application during the week. The quizzes are in place to ensure the reading is completed and the recruits understand the basics of the material. Comments from the evaluation survey indicate that recruits do not like being required to achieve 100% on the quizzes and would prefer a lesser standard. They complain that they must take the quizzes multiple times to answer all the questions correctly. In each class, however, there have been recruits who scored 100% on the quizzes on their first attempt. Further, there is some educational benefit, through the testing effect, to repeated testing. Anecdotally, the recruits in the first classes in the new delivery model were quite vocal about their dislike of the reading and associated quizzes, but the complaints have subsided in more recent classes as the change in expectations has become better established. The case-based exercises are intended as a review of the reading and a table-top application of concepts before participating in the practical scenarios. The recruit comment from Class 152 Survey 2 reflects one possible reason for the recruits’ overall dissatisfaction with the case-based exercises: that the time was not used effectively.
Indeed, in observing these sessions through the first delivery, I often saw recruits socializing and instructors ignoring the recruits and talking amongst themselves, or talking to the recruits about unrelated topics like scotch. As Werth (2011) noted, there can be resistance from both students and instructors to self-directed learning, and the shift in curriculum delivery requires an associated cultural shift. For the case studies, different instructors were assigned to these sessions for subsequent classes and the sessions have been proceeding much more as intended. Directed study has been the most difficult element of the curriculum to implement. It has been an ongoing struggle to explain to the instructors how the sessions should run, and they have struggled with the lack of structure. These struggles were definitely noticed by the recruits in their end-of-program evaluations. Recruits mentioned that no instructors were available for help, that they spent most of their directed study time doing the readings, and that they thought the directed study time could be greatly reduced. Finally, after more than a year of implementation and four classes starting in the “new” delivery model, we believe that we have achieved what directed study was intended to do. The apparent key to the success of the directed study time was assigning specific instructors to be present for each directed study session, and ensuring that these instructors were ambassadors for how directed study time should be used. For previous classes, the schedule called for “all available instructors” to be in directed study time, but instructors would deem themselves unavailable because of administrative requirements or because they wanted to go and have coffee. Werth (2011) noted that self-directed learning and scenario-based training require a high level of organization and preparation on the part of the organization and the instructors, and I believe that we have now finally achieved that balance.
When the current Block I class, Class 156, completes their end-of-program evaluation, we will be able to see whether their perceptions of the value of directed study time have increased from previous classes. Lastly, recruits noted dissatisfaction with the self-assessment and assessment portfolio components of the curriculum. The criticism from Class 152 with respect to this element of their Block I training is entirely valid in terms of the volume of the self-assessments. For the first iteration of Block I, the recruits were required to complete one self-assessment form for each practical scenario they completed during the program. It was quickly realized that this level of documentation was unreasonable, and it was changed for Class 153 so that recruits were required to complete only one form per scenario day but were prompted to reflect on all of the scenarios in which they participated. Comments from the end-of-program evaluation vary widely on how useful recruits found these self-assessments. Some recruits who engaged in the exercise indicated that they derived a lot of learning from the reflection, while other recruits approached the exercise at a surface level and did not benefit from the experience. Also, during the first offering of the curriculum, the instructors did not have access to the self-assessments until the end of the Block and so were unable to review and comment on them, although they did have access to the recruits’ training plans. The concept of reflection is central to transformative learning (Alfred et al., 2013) and was intentionally integrated into the curriculum through the video watching, written reflection, and mentor meetings. While the opportunity is offered for recruits to engage deeply with their learning, we cannot force this upon them.
However, there is one additional reason for the resistance to the self-assessment and the assessment portfolios that we discovered on implementation, particularly after the recruits began interacting with their FTOs: the terminology used in the curriculum design. The educational terms “self-assessment” and “assessment portfolio” were not well received by the policing culture and served to emphasize that the curriculum was designed by a civilian. In response to this unforeseen source of resistance, these elements of the curriculum have been rebranded as “scenario debriefs” and “application for advancement”. This language is more consistent with police terminology and seems to create less resistance. The FTOs for the competency-based delivery model were particularly unhappy with the changes to the Block II documentation process, as indicated in their survey comments. A part of this issue may have been the “self-assessment” terminology used in the first version of the Block II book, but a large part of it seems to have been a response to the change itself. Many struggles ensued in trying to communicate the changes effectively to the FTOs in a timely manner. The new documentation process, Block II book, and associated FTO refresher online course were all developed during the first implementation of Block I with Class 152. As there are no provincial standards or requirements for the training of FTOs, the Police Academy relied solely on the goodwill of departments to ensure their FTOs took the new online course and were up to date on the new program and its requirements. As indicated in the survey comments and in the comments from the three FTOs in the focus group, the online course did not provide adequate instruction on the required documentation. While each documentation piece was explained, no examples were provided because none existed yet.
Further, FTOs did not understand the purpose of each documentation component and believed they were redundant. Without the ability to communicate face-to-face with the FTOs and review the new process, the transition was extremely difficult. The struggles were typical of those associated with practicum components of training programs, such as the trainers not being aware of what is taught in the classroom (Hundersmarck, 2009) and focusing on things like speed and completing the job instead of facilitating integration and learning (Dettlaff & Wallace, 2003). Trainers may also feel they do not have any input into the classroom teaching, which can lead to a disassociation between theory and practice (Lyter & Smith, 2004). Often it is assumed that the transfer of knowledge from the classroom to the practicum will simply happen, but a structured framework is essential for successful integration (Dettlaff & Wallace, 2003; Hundersmarck, 2009). The new Block II structure and manual, with Phase I and Phase II, attempted to provide this structure. The monthly documentation was intended to be a brief summary to increase communication with the Police Academy and provide a mechanism to identify and support recruits who were struggling. FTOs who did not understand the reasons for the changes, however, were vocal about their dislike of the new program. FTO attitudes can influence the recruits' perceptions of their academy learning (Hundersmarck, 2009), which to some degree seemed to happen with the new delivery model. A later section will focus on the various aspects of organizational and cultural change associated with the new delivery model.

6.2 Faculty Development

Instructors experienced a considerable learning curve with the introduction of the new delivery model. Adequate faculty development is essential to the success of any new program, as instructors must understand the reasons for the change and the associated methodology (Cleveland & Saville, 2007).
The tension with this program lay in the intersection of instructors delivering the old model of curriculum while at the same time working on development for the new model and finding time for faculty development activities. Initially the plan was to include several weeks of faculty development activities immediately before implementation to familiarize instructors with the course structure and overall content, but as the deadlines for instructors completing their development tasks were delayed, the associated faculty development time was diminished. Further, suggestions that instructors engage in activities to promote "generalist" instructors and ensure that everyone was up to date on the current best practices in all discipline areas were not well received. Some short faculty development exercises were carried out to prepare instructors for the case-based sessions, in which instructors watched videos of groups of recruits interacting and discussed the behaviours they observed, but overall very little faculty development was conducted. For this reason, it took several classes for the instructors to become comfortable with the expectations in the new model. Indeed, instructors held the misconception that, because the "flipped classroom" has recruits pre-reading and applying knowledge in class, instructors would have a less active role in the classroom. This misconception is consistent with other "flipped classroom" observations, but there remains an active role for the instructor, and the clarity of the instructions provided is essential for student success (Heijstra & Sigurdardottir, 2017). Similarly, in a police training environment applying PBL for the first time, Vander Kooi (2014) felt that the instructors' learning curve was a limiting factor in their evaluation study because the learning environment was not as intended while instructors learned their new role.
In applying a scenario-based PBL activity to a police training environment, Werth (2011) similarly found that instructional staff struggled to resist the urge to direct while still being available to help during the self-directed learning. To this end, instructional staff required a greater time commitment for preparation, organization, and oversight (Werth, 2011). This is a large adjustment for instructional staff who may have little to no training in teaching. It may not be the effectiveness of the program, but rather the staff attitudes and lack of understanding, that become the biggest barriers to success for innovations in police training (Cleveland & Saville, 2007). The development and implementation of this new model of police training required a substantial cultural shift on the part of the instructors, recruits, FTOs, and departments, and this shift will be the focus of the next section of this chapter.

6.3 Organizational Cynicism and Organizational Change

I believe that one of the major contributing factors to the development and implementation of the new delivery model was organizational cynicism. Organizational cynicism involves a series of behaviours, affect, and beliefs that are negative and directed towards an organization (Dean Jr., Brandes, & Dharwadkar, 1998; Stanley, Meyer, & Topolnytsky, 2005). It can be categorized by its focus: among others, an occupational focus, such as police cynicism, or an organizational change focus, where the cynicism is directed towards a specific change (Dean Jr. et al., 1998). Cynicism is centred on the belief that others, particularly those in management or those driving the organizational change, lack integrity (Stanley et al., 2005). This change-specific cynicism is a predictor of people's intentions to resist the introduced change because of disbelief in the motives of the individuals leading the change (Stanley et al., 2005).
This cynicism strongly influenced the organizational change process as we worked through the development and implementation of the curriculum. In particular, the cynicism was directed at me, as the civilian who was leading the change efforts within the police training context. This cynicism was evident from the initial introduction of the planned changes, when certain instructors would question my motives for implementing the change, stating that we were only changing so that I would have something to do for my thesis. They were also quite vocal about how they did not report to a civilian and worked only for the police. This cynicism manifested in a refusal to engage in the assigned development activities, and in continued resistance and challenges to the curriculum model during the development meetings. Stanley et al. (2005) note that resistance based on this type of cynicism is extremely hard to address, as the cynics do not respond to facts, figures, or logical arguments. Two instructors were the most vocally cynical during the development phase and actively resisted the change. When their secondments ended and they returned to their home department, the overall atmosphere in the group changed, although there remained some underlying cynicism. The expressed attitudes were not limited to the instructors, however, as evidenced by the FTO comments and recruit instructor evaluations. Several comments in the FTO survey from the FTOs in the competency-based delivery model indicated that the person who developed the new model had no understanding of policing. This attitude implied that the civilian outsider had questionable motives for driving the change. Further, this cynicism extended to recruits and was conveyed either by their FTOs or by others within their home departments. Two recruits from different classes chose to fill out an end of program instructor evaluation form and target me personally when I was not an instructor to be evaluated.
They both explicitly stated that they believed I lacked integrity in my motives for the curriculum change, and they questioned my role within the Police Academy:

Ms Houlahan is seems very intelligent but does not seem to understand that this is a police academy, not her thesis experiment. A huge part of our jobs as police officers is discretion which Ms Houlahan does not seem to understand. It also does not make sense that she can overrule decisions of Sgts. If a Sgt decides that a demerit should not apply, they made that decision based on knowledge of the recruit and of the situation. Ms Houlahan does not understand what it is like to go through the curriculum she has developed. As well, dress and deportment are important at the academy, we are professionals. However, Ms Houlahan does not seem to be held to any sort of professional standard. Directed study is a waste of time. We need more driving, firearms + use of force. I could do most of the curriculum as an online course. The self-assessments, directed study docs, + training plans were way too much. That time could have been way better spent. Even the time it took to upload them to the portfolio when we had already submitted them to blackboard.

and

…I feel strongly about the fact that we are being used as guinea pigs for Nora's program. Our job is such a high risk that using us as an experiment makes me feel like it is a blatant disregard for our careers and lives. I also find it hard to respect the program when it is so completely run by someone who has no experience in policing, Nora. I respect the fact that Nora is learned, but I do not understand how she has carte blanche in regards to our learning, when she has no policing experience herself. For example, we have had people questioning exams and she appears to have final say. I feel as if that should not rest with her, but should rest with someone with real policing experience, such as Inspector McCartney.
Policing is rarely black and white, therefore there are different ways of achieving the same results…

Their cynicism is conveyed through the statements about being my "guinea pigs" for my school program, about me putting their lives at risk, and about not feeling able to respect the program because of my involvement. These two comments were particularly surprising because the recruits went out of their way to make their cynicism known and to direct their barbs at me on a personal level. This expressed cynicism from instructors, FTOs, and recruits has added a complex layer to the already difficult management of organizational and cultural change. Structuring the new competency-based delivery model through a framework of constructivism and transformative learning involved questioning the underlying assumptions, norms, beliefs, and values of the JIBC Police Academy, the recruit training program, and by extension the police departments whom we serve. This questioning extended to the policies, procedures, and curriculum components that supported recruit training. Challenging all of these aspects was central to incorporating transformative learning theory (Cranton, 2011). The goal of the project was to redesign and evaluate the entire Recruit Training program, not merely to add an additional component to training. In changing the underlying assumptions on which the program was founded, a shift in the norms, beliefs, and values, or the organizational culture, was also required (Pannell, 2016; Werth, 2011). A pedagogical philosophical change of this nature requires explicit support from senior management in order to be successful (Vander Kooi & Bierlein Palmer, 2014). Cleveland and Saville (2007) observed that a double standard may be applied whereby critics demand evidence and statistical proof of the success of the change but do not apply the same rigour to the previous program.
In the case of this project, the cultural change did not just encompass the instructors at the Police Academy but also extended to each of the departments and their field training officers. When a major change is introduced, there is often an initial period of resistance and reduced productivity before any improvements are observed (Elrod & Tippett, 2002). The duration of this decrease may be extended by cynics actively resisting or sabotaging the change effort, or by prematurely abandoning the change effort in the face of criticism or resistance (Elrod & Tippett, 2002). Organizational change of the magnitude of this project is a gradual process and requires long-term commitment and dedication to the end goal in order to move the change along past the initial resistance (Wilkinson et al., 2017). Because of the complexity of the change and the different organizational groups impacted by it, the change curve was experienced by different groups at different times and for different durations. The first group to move through the change curve was the instructor group, which has already been discussed. Their overt resistance to the new model changed significantly with the departure of two key instructors, but it was not until after the implementation of the first classes, when they became more comfortable with the new expectations, that the remaining resistance started to subside. The change management process was further complicated by an increase in hiring by one of the departments, requiring class size to expand beyond maximum capacity and modifications to the intended structure of the new delivery model. With the larger class sizes there is an associated drain on instructor resources, as sessions must be held at additional times, which increases instructor fatigue and may contribute to cynicism.
Since that first implementation, and despite the increase in class size, there has been a continued upturn in both instructor attitudes and instructor engagement with the new curriculum model as they discover their new roles and become comfortable with the organization of the new format. Some aspects of the curriculum, such as directed study, took longer for the instructors to visualize, but the trajectory of the change management has been upward, particularly since the start of Class 154. This was the first class in which the full model ran with overlapping Block III and Block I classes both in the new model, and the instructors were able to see how the junior and senior classes interacted. However, the requirement to be responsive to departmental hiring surges has prevented the program from being fully offered as designed, with two overlapping classes of 36 recruits. As part of the change management strategy with the departments, individual departmental consultations were conducted after approval from the provincial government was received and before development of the curriculum began. In these meetings an overview of the new program goals and structure, as well as the template schedule, was provided to departmental representatives. With verbal confirmation from all departments, development proceeded. During the development process, however, a growing movement to criticize the Recruit Training program began within the departments. This movement was coincidentally timed with the return of the two instructors who had actively resisted the change to their home department. The movement involved the senior management of the police departments and grew into the commissioning of several reports by both the BCAMCP and the Provincial Government to review the Police Academy.
While the stated goal of these reviews was not the curriculum, at least one report dedicated a significant amount of space to the new curriculum model and asserted that a lecture-based model was the most effective means of training police officers. The results of all of these reviews have been released to the internal stakeholders who commissioned the reports but not to the Police Academy, so it is not possible at this stage to comment on their final recommendations. Anecdotally, however, the cynicism displayed within one department in particular was evident when rumours of how poorly the graduates of the new model were performing were widely circulating. These rumours were circulating at a time when Class 152, the first class in the new model, was part way through their Block I training, so they had not yet been on the road in Block II, let alone graduated. The departmental organizational change management process is ongoing and will require particularly strong determination and dedication to move past the cynicism that has been on display. When recruits in Class 152 arrived for their Block I training, many had already completed some training at their home departments and had learned about the new model from people not involved in the Police Academy. There are several sessions in the first two weeks of recruit training that address the pedagogical philosophy of the new delivery model, explaining why sessions are structured in certain ways and the goals of each of the activities in the program. This information is broken into multiple sessions in an attempt to reduce overload, and the recruits are introduced to the purpose of each component of the program as they encounter it. I delivered these sessions, as the person who was most familiar with the educational literature and the reasons for the change. I suspect that the introduction of program components by a civilian contributed to a change-oriented cynicism that had begun in the home departments for many of the recruits.
The learning curve of the instructors also impacted the experiences of these first classes through the new curriculum, and the perception of disorganization or disengagement likely contributed to cynicism and dissatisfaction, particularly in the first class. As we progress through multiple offerings, with curriculum components being refined and instructors becoming more comfortable with their roles, the recruit reaction to the new delivery model appears to be improving. Although the end of program evaluations do not directly address recruit preparedness for Block II, which was the focus of this study, they do provide insight into the overall satisfaction with and perceptions of the program. To date, these evaluations have been collected for Classes 152 and 154 (as mentioned earlier, a technical problem prevented evaluation data collection from Class 153). The evaluations of the program components in the new delivery model appear to be more positive from Class 154 than they were from Class 152. More data from graduating classes are needed to confirm this trend, but it is a positive indication that the change is gaining better acceptance. The last group that experienced the cultural change was the FTO group. This group was particularly difficult, as there is no oversight of the FTO group by the Police Academy. As discussed earlier, the FTO preparation and communication prior to the first implementation of the new Block II was perceived as lacking, and the program was not well received by the FTOs for Class 152. With Class 152, much of the new Block II documentation was not submitted, or was submitted extremely late after multiple requests from the Police Academy. Recruits reported that their FTOs were refusing to sign required documentation because they thought it was redundant or because they did not think there was educational value in the recruits self-assessing.
This battle put recruits in an awkward position between their FTO and the Police Academy, and undoubtedly the FTO attitudes transferred to some of the recruits. Since the first class, the wording in the Block II book has been changed to remove reference to self-assessments, and outreach has been done to some departments (and offered repeatedly to others) for face-to-face sessions with FTOs to review the new procedures. Further, as more FTOs are trained in the new FTO Training Course that reflects the new program and new documentation, the message of expectations and acceptance is spreading. If the recruit submission of their required Block II documentation can be used as a gauge of FTO acceptance of the change, then significant gains have been made. The recruits in the current Block II (Class 155) are almost all submitting their documentation complete and on time. A large part of reducing the resistance to change, and shortening that initial dip in productivity, is responsiveness to feedback on the program. The next section outlines the changes that have been made to the program since the classes in this study, Classes 152 and 153, completed Block I.

6.4 Changes Following Class 152

Many of the changes since the first implementation of the new curriculum delivery model have already been mentioned in various sections. This last section of the discussion gathers them into one area and outlines the concerns each change was meant to address.

Concern: Education-specific language
Change: Self-assessments are now called "Scenario Debriefs" and the assessment portfolios are now called "Application for Advancement" (Blocks I and II) or "Application for Graduation" (Block III).

Concern: Weight of the Assessment Portfolio
Change: The assessment portfolio was initially intended to be the final determining factor in whether or not a recruit moved on to the next block; thus, failing the portfolio would mean failing the block. On implementation, it was clear that there was no appetite to remove a recruit from training based on their written portfolio, so recruits are now assigned a demerit if they fail any component of the portfolio/application for advancement.

Concern: Excessive documentation following scenarios
Change: Recruits now complete one debrief form per scenario day instead of one form per scenario.

Concern: Ineffective use of case study time
Change: Different instructors are assigned to these sessions to ensure the time is used appropriately and effectively.

Concern: Recruits' legal knowledge and ability to articulate their grounds
Change: As this is an essential component of policing, several steps were taken to address this perceived issue: ensuring the legal-related questions were asked and discussed before and after scenarios; including questions about the basic powers of arrest and common law authorities on every weekly quiz subsequent to the concepts' introduction; and including extra articulation questions about basic police authorities, in addition to essential elements, in the oral exam portion of every scenario exam station.

Concern: Weekly quizzes are too complicated and/or require too many attempts to achieve 100%
Change: Plans are underway to review and rewrite many of the quiz questions to make them easier to understand. In the meantime, the quiz questions will be divided into separate topic areas so recruits will have multiple short quizzes to complete each week but will only need to re-do the topic areas where they did not achieve 100%, not the quizzes for all of the reading material.

Concern: Directed study time excessive and instructors not available
Change: Specific instructors have been assigned to directed study time to provide continuity of expectations and support during this time. A process has been implemented whereby instructor mentors review their recruits' training plans and report back to the core directed study instructors what the recruits will be working on, so the instructors can prepare adequate support.

Concern: Inconsistent involvement and support by instructor mentors
Change: Initially every instructor was assigned a small number of recruits to mentor, but in practice not all instructors were comfortable in this role or with the use of the Learning Management System required to review the recruits' documentation. Select instructors have been identified as excellent mentors, and these instructors now have the primary role of mentoring recruits through the program. This change provides continuity to the recruits and ensures expectations of the recruits are more consistent.

Concern: Unclear expectations in Block II documentation
Change: Educational language has been removed from the Block II book; examples of well-documented calls and other required documentation have been provided; and comment fields have been added below each criterion row in the rubric.

Table 6-1 Summary of changes made to the recruit training program since Classes 152 and 153

6.5 Summary

The overall results of this study indicate that the cultural change and associated change management process was much more complicated than originally anticipated and is an ongoing process. Evaluation of the effectiveness of the program after only two offerings does not provide an entire picture of the program results because of a multitude of factors, including instructor learning curves, recruit attitudes, technical hurdles, communication with FTOs, and departmental politics. Nonetheless, analysis of the survey responses from classes before and after the program delivery change indicates that recruits in the competency-based model may have a more accurate understanding of the role of a patrol-level police officer and a more accurate sense of their own ability in this role at the start of their Block II training.
Recruits in the lecture-based delivery model may have been over-estimating their own ability at this same time point due to a lack of exposure to practical scenarios and a general lack of both formative and summative feedback on their performance.

Chapter 7: Conclusion

With the adoption of the Police Sector Council National Framework of Competencies, many police departments across Canada have been moving towards competency-based processes for Human Resources management purposes such as performance review and promotional competitions. Recruit training has fallen behind in this area and remains focused on traditional didactic methods of instruction and evaluation in a para-military setting. This project evaluated the development and implementation of a competency-based delivery model for municipal police recruits in British Columbia. The hypothesis was that the competency-based delivery model, with its focus on application, performance, and individualized feedback and training plans, would better prepare the recruits for their Block II training experience, where they worked as Recruit Constables on patrol with their field training officers. Recruits and FTOs from one class prior to the change and two classes after the change were surveyed about the recruits' ability and how well Block I prepared them for the requirements of Block II, using the PSC National Framework of Constable Competencies as a measure. While the study was intended as a quantitative evaluation project, it grew to encompass cultural and organizational change management through the reactions to change from instructors, recruits, FTOs, and departmental senior management. Results from the analysis indicate that recruits in the competency-based delivery model scored themselves significantly lower in ability in eight of the nine core Constable competencies but may have a more realistic view of the requirements of patrol-level police work and of their own ability.
Culturally, the change in training philosophy was very difficult for some individuals and organizations to accept, and the change management process is ongoing in many areas.

7.1 Lessons Learned

In a project of this scale there are bound to be many lessons learned, as the first implementation will never run perfectly. To this end, I believe the most significant lesson learned is that when such a foundational philosophical shift is introduced, the resistance to change needs to be anticipated, and evaluation strategies need to be structured longitudinally to account for the initial decrease in satisfaction and productivity. The program evaluation survey, implemented with the curriculum change, will provide the data to continue program evaluation on a long-term basis. It is essential that the integrity of this data collection be maintained to ensure that it can be used to draw valid conclusions. The evaluation design of this project was based on the desire to not rely solely on recruit self-reporting. The field training officers in Block II were the best situated to provide this comparative data because many recruits immediately move to single-person units after graduation from Block III. In reality, the low response rate typical of police survey research made it difficult to draw conclusions from the data. The harsh resistance to the change in documentation from the FTOs was also not anticipated and appears to have influenced the evaluation results. In an ideal world, the new documentation process for Block II would have been completed well in advance, leaving plenty of time to communicate the changes to the FTOs. There would have been examples of how to document calls and of what the new monthly documentation should include and exclude. Face-to-face sessions would have been conducted at each department to introduce the changes and the rationale for them.
In reality, working within an extremely tight budget with no additional support for development and delivery, it was simply not possible to have the Block II material completed earlier. Further, with no control over the departmental FTOs, face-to-face training sessions were not possible, as departments were not willing to call their members in off the road for the training. The online update course was constructed as well as possible in a very short timeline to provide information about what the recruits learned in Block I, the structure of the new program, the PSC National Constable Competencies, and the documentation process. It is apparent that these efforts were insufficient to meet the needs of the FTOs. Knowing the time- and training-related constraints, perhaps a support system for recruits could have been put in place to prepare them for the resistance they might experience from their FTOs and to provide them with strategies to deal with this resistance when it was encountered. Faculty development is another key area where the implementation was lacking. As mentioned, the faculty development time was mostly consumed by extended deadlines for instructors to complete their development work. It is evident, however, that sufficient training is essential for instructors and senior administrators of a new program to understand not just the reasons for the change but also what their role will look like in the new program. Initial plans for faculty development for instructors in the recruit training program included practice case-study sessions with undergraduate students. Unfortunately, time constraints significantly limited the amount of faculty development that was conducted. With minimal faculty development opportunities, it took several classes in the new curriculum model before instructors were comfortable with their new roles and responsibilities. As noted by Werth (2011), organization is essential for the effective delivery of a new program of this type.
Recruit schedules must be specific, and instructors must have a clear and complete understanding of each session in order to provide direction to the recruits. Especially on days with practical scenarios, when there are many moving parts, it is essential that all instructors understand what is happening. Appointing a "lead instructor" for each of the sessions has been extremely helpful in coordinating movements and ensuring sessions run as smoothly as possible. During the first implementation I observed all sessions that were delivered to the recruits. This activity was extremely helpful in identifying areas that needed to be modified in lesson plans, ensuring all required materials were covered (or making note if they were not), and observing instructor and recruit behaviour during the sessions. Although it was extremely time consuming, particularly while also developing the new Block II and Block III material, it was essential to the effective delivery of the program. I was able to make adjustments in real time to support recruit needs that had not been considered, to advise instructors on impromptu modifications to lesson plans if sessions did not go as planned, to ensure adequate integration of concepts as planned, and to observe instructor behaviour that accounted for recruit resistance to certain sessions. It is highly recommended that a person be assigned a role dedicated to overseeing and observing at least the first offering of a new program when changes are made on this scale. Shortly after the change process was underway, one of the departments announced a major increase in hiring, which translated into an increase in the number of recruits in each class that went beyond maximum capacity. This expansion has undoubtedly influenced the delivery of the new program, as some sessions can no longer be delivered at the optimal time during the week and must be delivered up to four times instead of once.
There are additional costs, both financial (facility rental, ammunition, and driving track time) and in terms of instructor resources and burnout from teaching additional sessions. Further, because the class size has varied, the program has not yet been able to run as designed with two overlapping classes of 36 recruits. The variation in class sizes requires a high level of administrative support, as schedules must be customized for each variation of overlapping class sizes. What was intended to provide consistency and reduce support has grown into a time-intensive exercise. Strong administrative support is required for a dynamic program such as this to ensure that instructions, schedules, due dates, and communications are always accurate and current.

Lastly, it is important that there be continual and visible support from senior management to facilitate the change management process. During the development of this curriculum this visible support was lacking, and I believe it had a severe negative impact on the change management process with the instructors. Throughout the introduction of the change and all of the subsequent development meetings, the Program Director at the time chose to be involved as a participant with the instructors, leaving me at the front of the room on my own, reviewing the changes and addressing questions. While this arrangement made sense, as I had conducted all of the research and design of the program and so was most familiar with it, it was also problematic in that I was a civilian leading a room full of police officers through a change that I was telling them they needed to make. The Program Director did not step in to address questions about the requirement for change or to set boundaries for acceptable behaviour when the instructors became heated in their resistance.
Further, the Program Director entertained complaints from the instructors in private about not wanting to change and about the reason for the change being my school program, without any rebuttal about how inappropriate this behaviour was on the part of the instructors. I believe what was required throughout this process was the oversight of the Director of the Police Academy. I believe that his presence at regular intervals throughout the process would have helped the instructors realize that the change was a necessity and that the expectation was that they were to cooperate with the development process.

7.2 Limitations

There are several important limitations of this study. Firstly, the analysis is specific to the training design for the JIBC Police Academy Recruit Training Program and is not generalizable to other programs. Further, while the JIBC Police Academy Recruit Training Program trains all municipal, tribal, and transit police in British Columbia, there are a significant number of Royal Canadian Mounted Police in BC who are trained at RCMP Depot in Regina. As this study is specific to JIBC Police Academy trained recruits, the findings are not applicable to the RCMP. Secondly, the small sample sizes combined with low response rates, particularly from the field training officers, make it difficult to draw conclusions from the data, as the samples may not be representative. Additionally, the difficulty recruiting participants for focus group discussions adversely impacted the ability to contextualize or explain the findings. Lastly, the timing of the analysis, during the most turbulent time of change implementation, likely affected both the response rates and the findings.

7.3 Recommendations

The recommendations resulting from this study focus on three major areas: designing a wholesale change in curriculum delivery such as this, conducting program evaluations within a major curriculum change, and engaging in research on your own practice.
7.3.1 Designing a Major Curriculum Change

The recommendations stemming from this study include both aspects that worked well and those that could be improved. Overall, there are six recommendations regarding designing and delivering a major curriculum change:

1. Conduct a thorough review of literature extending to fields of practice outside of your own to provide a solid foundation of evidence and a "big picture" goal when working through the curriculum design and development processes. Because of the research into the program design, I remained confident in the foundational elements of the program throughout the resistance to change. This preparatory work is also useful in the midst of the multiple reviews that are currently underway. Adhering to this "big picture" provided a benchmark that helped develop and refine the program structure and individual learning activities.

2. Have a designated group of instructors who are responsible for the curriculum development as subject matter experts. In this project, all instructors were involved in development while simultaneously teaching the old delivery model. This approach was used because of extremely limited resources, but it was also hoped that it would act as a change management strategy by instilling a sense of ownership among the instructors. In reality, asking instructors to take on both roles was confusing and time-consuming for them. Having a dedicated group of designers who could focus on the development of the material for the new delivery model would have significantly decreased the amount of time required to prepare the new curriculum and the amount of resentment felt by the instructors about their increased workload.

3. Clearly demonstrate united and ongoing support for the change from management at the beginning of and throughout the development process. This explicit support is essential for successful delivery of a new program model.
This support needs to be continually visible to alleviate fears on the part of instructors and to outline what is and is not acceptable behaviour during the change process.

4. Ensure that there are sufficient administrative resources available to support the change. For the first few iterations of the new program there will undoubtedly be many last-minute changes that require urgent attention in uploading to the course website or photocopying material. Administrative support personnel who understand the new delivery model and are comfortable with ambiguity are an invaluable component of the change. Many aspects of record keeping and administrative tracking will also change with the new delivery model; a self-motivated and proactive administrative support person will help ensure the success of the transition.

5. Schedule frequent, varied, and realistic faculty development exercises to help instructors understand their new role. The importance of adequate faculty development in preparing for a major curriculum change cannot be overstated. Faculty development should include not just the reasons for the change but also authentic opportunities for instructors to practice their skills and become comfortable with the new expectations.

6. Lastly, have one person who is able to observe all classes and take immediate action to fix unexpected problems, tweak delivery instructions, find and update resources on short notice, respond to student needs as they emerge, and update lesson plans as required. This resource was crucial for the effective delivery of the new program in Recruit Training. I would strongly recommend this approach to anyone implementing a large curriculum change. By being in the classroom, I was able to react to student concerns and make necessary changes, as well as identify areas in lesson plans that had not been delivered as designed and would impact later lessons in the curriculum.
This presence and reactivity ensured that strategies were put in place to bring the curriculum that was delivered closer to the curriculum that was designed.

7.3.2 Implementing Competency-Based Education

As discussed in Section 2.6, common criticisms of competency-based education include the possibility of a positivistic, reductionist approach to education at the expense of values and holistic abilities that are more difficult to measure in a standardized way (Chapman, 1999; Jansen, 1998; Schwarz & Cavener, 1994; Talbot, 2004). Another criticism is that the increase in administrative documentation can be an overwhelming burden on instructors (Jansen, 1998; Schwarz & Cavener, 1994). Lastly, the development of the competencies themselves is an act of power and privilege, which means competency-based education may be susceptible to political or other influences (Chapman, 1999; Jansen, 1998; Schwarz & Cavener, 1994). From my experiences implementing this competency-based program, I have several recommendations to help address these concerns.

1. Ensure a rigorous process that engages experts, such as the DACUM process, is used to develop the competencies. While the PSC did not explicitly use DACUM in its competency profile development, a similar consultation with police officers of various ranks formed the basis of each of the rank profiles. Once the initial competency profile is established, a wider audience can be involved in validation of the competency and task list. This involvement is crucial for acceptance of the competencies and any resulting competency-based education program. A program relying on a set of competencies generated and imposed by external managers or administrators will almost certainly be met with resistance.

2.
Develop a mentorship model whereby tasks such as monitoring student progress, providing formative feedback, helping students set and meet goals, and reporting back to the larger teaching cohort are shared among a core group of instructors. These mentor instructors should be skilled at delivering constructive feedback and at holding students accountable for their performance in a supportive manner. The mentoring component of the Recruit Training program is arguably the most valuable part of the program because the mentors help recruits through reflection on and synthesis of their learning. This approach addresses the criticism of administrative burden and also helps to avoid some of the reductionist pitfalls, because mentors provide holistic feedback to recruits in addition to the task-focused feedback they receive from scenario debriefs.

3. Set aside protected time, such as directed-study time, where students are able to work on the areas where they are struggling. Do not compromise and allow scheduled sessions to be incorporated into this protected time, even if it takes several iterations to figure out the best way for instructors to prepare. The ability to use this time to work on individualized training plans is crucial, in my opinion, to the effective implementation of competency-based education programs. For the first classes through our program, I asked all instructors to be present during directed study so that everyone was available to help recruits. This caused confusion among instructors who did not understand the purpose of the time and did not engage with students. Instructors felt they were not contributing, and students felt a valuable resource was wasted. Having a small group of instructors who understand the purpose of this protected time, and assigning them primary responsibility to work with students on their individual plans, will help ensure the success of directed-study time.

4.
Design rubrics and assessment criteria that allow for different ways to achieve the required ends, or different successful outcomes. This was a challenging and painstaking task, as each line of the exam rubrics was scrutinized to ensure it allowed for appropriate flexibility. As exam scenarios were enacted for the first time and unexpected actions or outcomes were encountered, the rubrics were adjusted as appropriate. This methodical process to ensure realistic flexibility helped address some of the typical concerns about competency-based education taking a positivistic approach.

5. Allow opportunity for assessors to provide feedback outside the standardized assessment rubric. The Recruit Training program addressed the critique of competency-based education focusing on reductionist, task-based assessments in part by including a tick-box asking if remediation is required beyond what is captured in the rubric. This tick-box is followed by a comment field. It allows assessors to comment on holistic aspects of the "craft" or "art" of policing that may not be captured in the rubrics. After each exam, the mentor reviews the rubrics with the recruit, and anything captured in this area is discussed with as much weight as a failure to perform an expected task.

6. Have a clear understanding of how each component of the curriculum design contributes to the overall goal of the competency-based framework. During development and implementation, compromises will undoubtedly be required. Knowing what elements of the curriculum design are central to the overall goal helps guide decisions about what can and cannot be compromised. Sometimes a small compromise in a less-crucial area will allow for the full implementation of an element central to the program design. An example of this compromise in the Recruit Training program is the reduction in value of the "Application for Advancement" (Assessment Portfolios).
The Applications for Advancement were intended to be the final assessment that decided if a recruit could progress to the next Block of their training. Failing an Application for Advancement meant failing training and potentially returning to the home department. When it became clear that the implications of the Applications for Advancement were not fully understood by instructors and management in Recruit Training, I compromised on the design so that it is now an evaluation exercise that carries the same weight as an exam station. While I would prefer that the Applications be implemented as designed, it was a compromise I felt could be made without compromising the integrity of the whole program.

7.3.3 Conducting Program Evaluation within a Major Curriculum Change

After five classes starting the new Block I and three classes now graduating from the full program, the overwhelming opinion of the instructors and staff at the Police Academy is that the recruits are better prepared in terms of their ability when they complete Block I and when they graduate from the program. This observation is not reflected in the findings of this study. One possible reason for this discrepancy is that only the first two classes in the new delivery model were included in the evaluation. The key recommendation is that program evaluation continue, on an ongoing basis, using the same scale and criteria as the end-of-program evaluation for Classes 152 and 154. These data will be integral in determining whether the upwards trend in recruit evaluations continues and follows the change management curve. While the design of this study sought FTO corroboration of recruit self-reporting, the low response rates and the variability among FTOs across departments make ongoing surveying of FTOs impractical. If the FTOs are removed from the evaluation design, then the evaluation, relying on recruit self-reporting, can focus on the entire program (Blocks I, II, and III) and not just on Block I of training.
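The kind of between-class comparison recommended here can be sketched in a few lines of code. This is a minimal illustration only, not the analysis used in this study: the 5-point self-ratings are invented, and a two-sided permutation test on the difference in mean ratings stands in for whatever significance test the ongoing evaluation adopts.

```python
import random

def permutation_test(group_a, group_b, n_perm=10_000, seed=42):
    """Two-sided permutation test on the difference in mean ratings.

    Returns the observed mean difference (a - b) and an approximate p-value:
    the fraction of random relabellings at least as extreme as the observed one.
    """
    rng = random.Random(seed)
    observed = sum(group_a) / len(group_a) - sum(group_b) / len(group_b)
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # random relabelling of class membership
        diff = sum(pooled[:n_a]) / n_a - sum(pooled[n_a:]) / (len(pooled) - n_a)
        if abs(diff) >= abs(observed):
            extreme += 1
    return observed, extreme / n_perm

# Hypothetical 5-point self-ratings for one competency from two classes.
lecture_based = [5, 4, 5, 4, 4, 5, 4, 5, 4, 4]
competency_based = [4, 3, 4, 3, 4, 3, 4, 4, 3, 3]

diff, p = permutation_test(lecture_based, competency_based)
print(f"mean difference: {diff:.2f}, approximate p-value: {p:.3f}")
```

A permutation test makes no distributional assumptions, which suits small classes and ordinal Likert ratings better than a t-test; with real data, the per-competency ratings from each cohort would simply replace the invented lists.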
The recommendations for others conducting a program evaluation of a major curriculum change are:

1. Expect initial dissatisfaction and decreases in performance and motivation in both instructor and student groups. This initial decrease should be accounted for in the program evaluation design, ensuring the duration of the evaluation extends past this initial dip.

2. Be responsive to unexpected needs of instructors and students in the program, and take steps after each offering to bring the delivered program closer to the designed program. These changes can be to the content of the sessions, the structure of the sessions, or the guidance provided to the instructors. After each class, document the changes that have been made so they can be compared with the evaluation data.

3. Wherever possible, provide dedicated time for participants to complete the program evaluation. Response rates are likely to be much higher when there is time in class to complete the evaluation compared to asking participants to complete the evaluation on their own time.

7.3.4 Recommendations for Practitioner Research

For anyone who is embarking on a similar research project that involves researching their own practice, I have several recommendations from my experience in this project.

1. Have the evaluation process designed and ready to deploy before you begin development of the curriculum. There was a very long period of time when my days and nights were consumed, seven days a week, with developing and implementing the curriculum. There simply was not time to create the surveys or learn new survey software during this process. An evaluation process that is pre-designed can then be modified to account for issues or unexpected questions that arise during implementation.

2. Incorporate a reflective component into your evaluation.
Program evaluation is traditionally a quantitative exercise that uses qualitative methods such as survey comment fields, focus groups, and interviews to supplement the quantitative data. Typical data sources are students and instructors. The stories that are not heard are those of the developers. Even if this reflective component is not included in the final evaluation report, it might provide valuable insight to help interpret evaluation data.

3. Ensure you have robust support systems in place. Change is a difficult process and can bring about a lot of anxiety and unexpected behaviour. If you are seen as the source of the change, the resistance might be directed personally at you, as it was at me in this case.

4. Value your work. Through the resistance to change, through the dismal response rates, through the organizational cynicism, and through the unexpected events that may be triggered by the change process, be sure to value the research that you are conducting and fall back on the knowledge that it will ultimately benefit the program. Find people who value the research and are interested in how it relates to your practice and theirs, and reach out to them when you need support.

7.4 Conclusion

While the pre-treatment/post-treatment and FTO-recruit comparison design of this study was interesting, a variety of factors, including organizational culture change and the low response rates typical of police survey studies, negatively affected the study. The results indicate that recruits in the competency-based delivery model rate their ability in eight of the nine competencies as significantly lower than recruits from the lecture-based delivery model, but this difference may be due to a better understanding of role requirements and of their own ability among recruits in the competency-based program. Continued program evaluation, relying on end-of-program recruit surveys, is strongly recommended to evaluate the program on a long-term basis.
Continued flexibility and slight modifications to program components, as needed, are also recommended to ensure that the program continues to meet the needs of dynamic patrol-level police work.

The change to a competency-based delivery model for Recruit Training has also had unintended consequences for police training in BC. Through the external reviews and the changes to Recruit Training, the issue of inadequate funding for police training has been brought to the foreground. While the reviews approach their tasks from different viewpoints, the general consensus is that police training delivered through the JIBC Police Academy has been dramatically underfunded over the years, as the annual grant from the Provincial Government has decreased and costs such as seconded police salaries, ammunition, and vehicle maintenance have all continued to rise. If the discussions around the actual cost of police recruit training and the value brought by the various educational activities in the new competency-based delivery model can result in an increase in funding to deliver recruit training, that would be an extremely exciting unanticipated outcome.

Further, as the design of the competency-based delivery model is discussed at a variety of policing-related conferences by myself and JIBC Police Academy management, there has been a great deal of interest from other agencies across Canada and in the United States. In 2017 a delegation from the New Mexico State Police visited the JIBC Police Academy to learn about our new delivery model and was extremely interested in many components of the training, including the mentorship program, the regular scenario practice, and the Block II field component. While they felt that a wholesale change to their delivery was not immediately possible, they have implemented some components of the program, such as the mentorship model, into their training.
The process of designing, implementing, and evaluating the competency-based delivery model for police recruit training in BC was an illuminating experience. Exploring the literature and designing the program was exciting. The change management process was challenging, as was the first implementation, but there has been increased acceptance, enjoyment, and appreciation from instructors who have now become comfortable with the delivery or who are new to the program. While the results require more evidence from ongoing program evaluation surveys, the interest from outside agencies continues to mount. The reviews have initiated a discussion around funding and training, bringing the importance of quality police training in BC to the forefront for the first time in a very long time.

Overall, despite the disappointing results from the initial program evaluation presented in this study, I believe that the transition to the competency-based model for police recruit training will result in a positive impact on recruit training. As the rubrics for exam scenarios were developed to allow for flexibility in the path taken to achieve a solution to the scenario, so too must we allow for flexibility in the path to improving police recruit training. The results of this study did not indicate an immediate strong positive impact on recruit ability and performance, but this may simply be an unexpected path to success that we now need to account for in how we define our expectations.

References

Aditomo, A., Goodyear, P., Bliuc, A., & Ellis, R. A. (2013). Inquiry-based learning in higher education: Principal forms, educational objectives, and disciplinary variations. Studies in Higher Education, 38(9), 1239-1258. https://doi.org/10.1080/03075079.2011.616584

Agarwal, P. K., Finley, J.
R., Rose, N. S., & Roediger, H. L. (2017). Benefits from retrieval practice are greater for students with lower working memory capacity. Memory, 25(6), 764-771. https://doi.org/10.1080/09658211.2016.1220579

Albanese, M. A., Mejicano, G., Mullan, P., Kokotailo, P., & Gruppen, L. (2008). Defining characteristics of educational competencies. Medical Education, 42(3), 248-255. https://doi.org/10.1111/j.1365-2923.2007.02996.x

Alfred, M. V., Cherrstrom, C. A., & Friday, A. R. (2013). Transformative learning theory. In B. J. Irby, G. Brown, R. Lara-Alecio & S. Jackson (Eds.), The handbook of educational theories (pp. 133-147). Charlotte, North Carolina, USA: Information Age Publishing.

Alliger, G. M., & Janak, E. A. (1989). Kirkpatrick's levels of training criteria: Thirty years later. Personnel Psychology, 42(2), 331-342.

Alliger, G. M., Tannenbaum, S. I., Bennett, W., Traver, H., & Shotland, A. (1997). A meta-analysis of the relations among training criteria. Personnel Psychology, 50(2), 341-358. https://doi.org/10.1111/j.1744-6570.1997.tb00911.x

Barrows, H. S. (1986). A taxonomy of problem-based learning methods. Medical Education, 20(6), 481-486.

Ben-David, M. F. (1999). AMEE guide no. 14: Outcome-based education: Part 3-assessment in outcome-based education. Medical Teacher, 21(1), 23-25.

Bennell, C., Jones, N. J., & Corey, S. (2007). Does use-of-force simulation training in Canadian police agencies incorporate principles of effective training? Psychology, Public Policy, and Law, 13(1), 35.

Biggs, J. (1996). Enhancing teaching through constructive alignment.
Higher Education, 32(3), 347.

Birzer, M. L. (2003a). The theory of andragogy applied to police training. Policing: An International Journal of Police Strategies & Management, 26(1), 29-42. https://doi.org/10.1108/13639510310460288

Birzer, M. L. (2003b). The theory of andragogy applied to police training. Policing, 26(1), 29-42. https://doi.org/10.1108/13639510310460288

Birzer, M. L., & Tannehill, R. (2001). A more effective training approach for contemporary policing. Police Quarterly, 4(2), 233.

Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education: Principles, Policy & Practice, 5(1), 7.

Bowen, J. L. (2006). Educational strategies to promote clinical diagnostic reasoning. New England Journal of Medicine, 355(21), 2217-2225. https://doi.org/10.1056/NEJMra054782

Brady, A. (2005). Assessment of learning with multiple-choice questions. https://doi.org/10.1016/j.nepr.2004.12.005

Braun, I., Ritter, S., & Vasko, M. (2014). Inverted classroom by topic - A study in mathematics for electrical engineering students. International Journal of Engineering Pedagogy (iJEP), 4(3), 11-17.
https://doi.org/10.3991/ijep.v4i3.3299

Braverman, M. T. (2013). Negotiating measurement: Methodological and interpersonal considerations in the choice and interpretation of instruments. American Journal of Evaluation, 34(1), 99-114. https://doi.org/10.1177/1098214012460565

Bresciani. (2006). Outcomes-based academic and co-curricular program review.

Brightwell, A., & Grant, J. (2013). Competency-based training: Who benefits? Postgraduate Medical Journal, 89(1048), 107-110. https://doi.org/10.1136/postgradmedj-2012-130881

Bristol, T. J. (2014). Educate, excite, engage. Teaching and Learning in Nursing, 9, 43-46.

Burke, A. S., & Fedorek, B. (2017). Does "flipping" promote engagement?: A comparison of a traditional, online, and flipped class. Active Learning in Higher Education, 18(1), 11-24. https://doi.org/10.1177/1469787417693487

Butler, A. C., Karpicke, J. D., & Roediger III, H. L. (2008). Correcting a metacognitive error: Feedback increases retention of low-confidence correct responses. Journal of Experimental Psychology: Learning, Memory & Cognition, 34(4), 918-928. https://doi.org/10.1037/0278-7393.34.4.918

Canadian Vocational Association. (2013). CVA DACUM model presentation.

Carraccio, C. L., Benson, B. J., Nixon, L. J., & Derstine, P. L. (2008). From the educational bench to the clinical bedside: Translating the Dreyfus developmental model to the learning of clinical skills.
Academic Medicine: Journal of the Association of American Medical Colleges, 83(8), 761-767. https://doi.org/10.1097/ACM.0b013e31817eb632

Challis, M. (2000). AMEE medical education guide no. 19: Personal learning plans. Medical Teacher, 22(3), 225-236. https://doi.org/10.1080/01421590050006160

Chapman, H. (1999). Some important limitations of competency-based education with respect to nurse education: An Australian perspective. Nurse Education Today, 19(2), 129-135.

Cleveland, G., & Saville, G. (2007). Police PBL: Blueprint for the 21st century. US Department of Justice.

Cohen, L., Manion, L., & Morrison, K. (2011). Research methods in education (7th ed.). London: Routledge.

Collins, H. M. (2001). What is tacit knowledge? In T. R. Schatzki, K. K. Cetina & E. Von Savigny (Eds.), The practice turn in contemporary theory (pp. 115-127). New York, NY: Routledge.

Cook, D. A., Brydges, R., Ginsburg, S., & Hatala, R. (2015). A contemporary approach to validity arguments: A practical guide to Kane's framework. Medical Education, 49(6), 560-575. https://doi.org/10.1111/medu.12678

Cook, D. A., & Hatala, R. (2016). Validation of educational assessments: A primer for simulation and beyond. Advances in Simulation, 1(31), 1.

Cordner, G., & Shain, C. C. (2016). Chapter 3: The changing landscape of police education and training. In P. C. Kratcoski, & M. Edelbacher (Eds.), Collaborative policing: Police, academics, professionals, and communities working together for education, training, and program implementation (pp. 51). Boca Raton, FL, USA: CRC Press.

Corrado, R. R., Cohen, I.
M., Glackman, W., & Odgers, C. (2003). Serious and violent young offenders' decisions to recidivate: An assessment of five sentencing models. Crime & Delinquency, 49(2), 179-200.

Cox, D. (2011). Educating police for uncertain times: The Australian experience and the case for a 'normative' approach. Journal of Policing, Intelligence, and Counter Terrorism, 6(1), 3.

Cranton, P. (2011). A transformative perspective on the scholarship of teaching and learning. Higher Education Research & Development, 30(1), 75-86. https://doi.org/10.1080/07294360.2011.536974

Crossley, J., Johnson, G., Booth, J., & Wade, W. (2011). Good questions, good answers: Construct alignment improves the performance of workplace-based assessment scales. Medical Education, 45(6), 560-569. https://doi.org/10.1111/j.1365-2923.2010.03913.x

Cruess, S. R., Cruess, R. L., & Steinert, Y. (2008). Role modelling: Making the most of a powerful teaching strategy.

Darling-Hammond, L. (2006). Assessing teacher education. Journal of Teacher Education, 57(2), 120-138. https://doi.org/10.1177/0022487105283796

Davis, M. H., & Harden, R. M. (2003a). Competency-based assessment: Making it a reality. Medical Teacher, 25(6), 565.

Davis, M. H., & Harden, R. M. (2003b). Planning and implementing an undergraduate medical curriculum: The lessons learned. Medical Teacher, 25(6), 596-608. https://doi.org/10.1080/0142159032000144383

Dean Jr., J. W., Brandes, P., & Dharwadkar, R. (1998). Organizational cynicism.
Academy of Management Review, 23(2), 341-352. Retrieved from http://ezproxy.library.ubc.ca/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=bth&AN=533230&site=ehost-live&scope=site DeOnna, J. (2002). DACUM: A versatile competency-based framework for staff development. Journal for Nurses in Professional Development, 18(1), 5-11. Desmarais, S. L., Livingston, J. D., Greaves, C. L., Johnson, K. L., Verdun-Jones, S., Parent, R., & Brink, J. (2014). Police perceptions and contact among people with mental illnesses: Comparisons with a general population survey. Psychology, Public Policy, and Law, 20(4), 431. Dettlaff, A. J., & Wallace, G. (2003). Promoting integration of theory and practice in field education: An instructional tool for field instructors and field educators. Clinical Supervisor, 21(2), 145-160. Retrieved from http://ezproxy.library.ubc.ca/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=a9h&AN=11922062&site=ehost-live&scope=site Downing, S. M. (2003). Validity: On the meaningful interpretation of assessment data. Medical Education, 37(9), 830. 10.1046/j.1365-2923.2003.01594.x Retrieved from http://ezproxy.library.ubc.ca/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=a9h&AN=10691802&site=ehost-live&scope=site 278 Dreyfus, S. E. (2004). The five-stage model of adult skill acquisition. Bulletin of Science, Technology & Society, 24(3), 177-181. Dunlosky, J., Rawson, K. A., Marsh, E. J., Nathan, M. J., & Willingham, D. T. (2013). Improving students’ learning with effective learning techniques: Promising directions from cognitive and educational psychology. Psychological Science in the Public Interest, 14(1), 4-58. 10.1177/1529100612453266 Retrieved from http://ezproxy.library.ubc.ca/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=a9h&AN=84675751&site=ehost-live&scope=site Elrod, P. D., & Tippett, D. D. (2002). The \"death valley\" of change. 
Journal of Organizational Change Management, 15(3), 273-291. 10.1108/09534810210429309 Retrieved from http://www.emeraldinsight.com/doi/abs/10.1108/09534810210429309
Epstein, R. M., & Hundert, E. M. (2002). Defining and assessing professional competence. JAMA: The Journal of the American Medical Association, 287(2), 226-235. Retrieved from http://search.ebscohost.com/login.aspx?direct=true&db=mnh&AN=11779266&site=ehost-live&scope=site
Estes, C. A. (2004). Promoting student-centered learning in experiential education. Journal of Experiential Education, 27(2), 141-160. Retrieved from http://ezproxy.library.ubc.ca/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=a9h&AN=16513639&site=ehost-live&scope=site
Farrar, N., & Trorey, G. (2008). Maxims, tacit knowledge and learning: Developing expertise in dry stone walling. Journal of Vocational Education & Training, 60(1), 35-48. 10.1080/13636820701828812 Retrieved from http://ezproxy.library.ubc.ca/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=a9h&AN=29435046&site=ehost-live&scope=site
Fazio, L. K., Huelser, B. J., Johnson, A., & Marsh, E. J. (2010). Receiving right/wrong feedback: Consequences for learning. Memory, 18(3), 335-350. 10.1080/09658211003652491 Retrieved from http://ezproxy.library.ubc.ca/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=a9h&AN=49261568&site=ehost-live&scope=site
Frank, J. R., & Danoff, D. (2007). The CanMEDS initiative: Implementing an outcomes-based framework of physician competencies. Medical Teacher, 29(7), 642-647. 10.1080/01421590701746983 Retrieved from http://search.ebscohost.com/login.aspx?direct=true&db=aph&AN=28723251&login.asp&site=ehost-live&scope=site
Frank, J. R., Snell, L. S., Ten Cate, O., Holmboe, E. S., Carraccio, C., Swing, S. R., . . . Harris, K. A. (2010). Competency-based medical education: Theory to practice. Medical Teacher, 32(8), 638-645. 10.3109/0142159X.2010.501190 Retrieved from http://search.ebscohost.com/login.aspx?direct=true&db=mnh&AN=20662574&site=ehost-live&scope=site
Fraser, S. W., & Greenhalgh, T. (2001). Coping with complexity: Educating for capability. BMJ: British Medical Journal (International Edition), 323(7316), 799. Retrieved from http://search.ebscohost.com/login.aspx?direct=true&db=aph&AN=5286589&login.asp&site=ehost-live&scope=site
Gibbs, J. C., & Taylor, J. D. (2016). Comparing student self-assessment to individualized instructor feedback. Active Learning in Higher Education, 17(2), 111-123. 10.1177/1469787416637466 Retrieved from http://ezproxy.library.ubc.ca/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=a9h&AN=115886207&site=ehost-live&scope=site
Glazier, J., Bolick, C., & Stutts, C. (2017). Unstable ground: Unearthing the realities of experiential education in teacher education. Journal of Experiential Education, 40(3), 231-248. 10.1177/1053825917712734 Retrieved from http://ezproxy.library.ubc.ca/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=a9h&AN=124605608&site=ehost-live&scope=site
Golden, T. L., & Seehafer, P. E. (2009). Delivering training material in a practical way. FBI Law Enforcement Bulletin, 78(2), 21-24. Retrieved from http://ezproxy.library.ubc.ca/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=a9h&AN=37562597&site=ehost-live&scope=site
Grohmann, A., & Kauffeld, S. (2013). Evaluating training programs: Development and correlates of the questionnaire for professional training evaluation. International Journal of Training and Development, 17(2), 135-155. Retrieved from http://search.ebscohost.com/login.aspx?direct=true&db=eric&AN=EJ1001426&login.asp&site=ehost-live&scope=site http://dx.doi.org/10.1111/ijtd.12005
Haberfield, M. (2013). Critical issues in police training (3rd ed.). Upper Saddle River, New Jersey: Prentice Hall.
Harden, R. M. (2002). Learning outcomes and instructional objectives: Is there a difference? Medical Teacher, 24(2), 151-155. 10.1080/0142159022020687 Retrieved from http://search.ebscohost.com/login.aspx?direct=true&db=aph&AN=6410466&login.asp&site=ehost-live&scope=site
Harden, R., Crosby, J., Davis, M. H., Howie, P. W., & Struthers, A. D. (2000). Task-based learning: The answer to integration and problem-based learning in the clinical years. Medical Education, 34(5), 391-397. Retrieved from http://search.ebscohost.com/login.aspx?direct=true&db=mnh&AN=10760125&site=ehost-live&scope=site
Harden, R. M. (2007). Learning outcomes as a tool to assess progression. Medical Teacher, 29(7), 678-682. 10.1080/01421590701729955 Retrieved from http://search.ebscohost.com/login.aspx?direct=true&db=mnh&AN=18236255&site=ehost-live&scope=site
Hattie, J. A. C., & Donoghue, G. M. (2016). Learning strategies: A synthesis and conceptual model. Npj Science of Learning, 1, 16013. Retrieved from http://dx.doi.org/10.1038/npjscilearn.2016.13
Heijstra, T. M., & Sigurdardottir, M. S. (2017). The flipped classroom: Does viewing the recordings matter? Active Learning in Higher Education, 1469787417723217. 10.1177/1469787417723217 Retrieved from https://doi.org/10.1177/1469787417723217
Hendry, G. D., White, P., & Herbert, C. (2016). Providing exemplar-based 'feedforward' before an assessment: The role of teacher explanation. Active Learning in Higher Education, 17(2), 99-109. 10.1177/1469787416637479 Retrieved from http://ezproxy.library.ubc.ca/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=a9h&AN=115886208&site=ehost-live&scope=site
Hepplestone, S., & Chikwa, G. (2014). Understanding how students process and use feedback to support their learning. Practitioner Research in Higher Education, 8(1), 41-53. Retrieved from http://ezproxy.library.ubc.ca/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=eric&AN=EJ1130316&login.asp&site=ehost-live&scope=site
Hodge, S., & Harris, R.
(2012). Discipline, governmentality and 25 years of competency-based training. Studies in the Education of Adults, 44(2), 155-170. Retrieved from http://search.ebscohost.com/login.aspx?direct=true&db=aph&AN=89530940&login.asp&site=ehost-live&scope=site
Holmes, N. (2015). Student perceptions of their learning and engagement in response to the use of a continuous e-assessment in an undergraduate module. Assessment & Evaluation in Higher Education, 40(1), 1-14. 10.1080/02602938.2014.881978 Retrieved from http://ezproxy.library.ubc.ca/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=a9h&AN=99907660&site=ehost-live&scope=site
Holmes, N. (2017). Engaging with assessment: Increasing student engagement through continuous assessment. Active Learning in Higher Education, 1469787417723230. 10.1177/1469787417723230 Retrieved from https://doi.org/10.1177/1469787417723230
Holton, E. F., Bates, R. A., Noe, R. A., & Ruona, W. E. A. (2000). Development of a generalized learning transfer system inventory [and] invited reaction: Development of a generalized learning transfer system inventory. Human Resource Development Quarterly, 11(4), 65. Retrieved from http://search.ebscohost.com/login.aspx?direct=true&db=eric&AN=EJ619057&login.asp&site=ehost-live&scope=site
Holton, E. F., III, & Kirkpatrick, D. L. (1996). The flawed four-level evaluation model [and] invited reaction: Reaction to Holton article [and] final word: Response to reaction to Holton article. Human Resource Development Quarterly, 7(1), 5-29. Retrieved from http://search.ebscohost.com/login.aspx?direct=true&db=eric&AN=EJ521126&login.asp&site=ehost-live&scope=site
Huey, L. (2016). What one might expect: A scoping review of the Canadian policing research literature. Sociology Publications, Article 36. Retrieved from http://ir.lib.uwo.ca/sociologypub/36
Huey, L., & Bennell, C. (2017). Replication and reproduction in Canadian policing research: A note. Canadian Journal of Criminology and Criminal Justice, 59(1), 123-138.
Huey, L., Blaskovits, B., Bennell, C., Kalyal, H. J., & Walker, T. (2017). To what extent do Canadian police professionals believe that their agencies are 'targeting, testing, and tracking' new policing strategies and programs? Police Practice & Research, 18(6), 544-555. 10.1080/15614263.2017.1363968 Retrieved from http://ezproxy.library.ubc.ca/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=a9h&AN=125437614&site=ehost-live&scope=site
Hundersmarck, S. (2009). Police recruit training: Facilitating learning between the academy and field training. FBI Law Enforcement Bulletin, 78(8), 26.
Huxham, M. (2007). Fast and effective feedback: Are model answers the answer? Assessment & Evaluation in Higher Education, 32(6), 601-611. 10.1080/02602930601116946 Retrieved from http://ezproxy.library.ubc.ca/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=a9h&AN=26946246&site=ehost-live&scope=site
Jansen, J. D. (1998). Curriculum reform in South Africa: A critical analysis of outcomes-based education [1]. Cambridge Journal of Education, 28(3), 321. Retrieved from http://ezproxy.library.ubc.ca/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=a9h&AN=1364016&site=ehost-live&scope=site
Jones, P. R. (2006). Using groups in criminal justice courses: Some new twists on a traditional pedagogical tool. Journal of Criminal Justice Education, 17(1), 87-102. 10.1080/10511250500335643 Retrieved from http://www.tandfonline.com/doi/abs/10.1080/10511250500335643
Kane, M. T. (2013). Validating the interpretations and uses of test scores. Journal of Educational Measurement, 50(1), 1-73. 10.1111/jedm.12000 Retrieved from http://ezproxy.library.ubc.ca/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=a9h&AN=86048945&site=ehost-live&scope=site
Karpicke, J. D., & Roediger III, H. L. (2008). The critical importance of retrieval for learning. Science, 319(5865), 966-968. Retrieved from http://ezproxy.library.ubc.ca/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=a9h&AN=31185805&site=ehost-live&scope=site
Kirkpatrick, D. L. (1977). Evaluating training programs: Evidence vs. proof. Training and Development Journal. Retrieved from http://ezproxy.library.ubc.ca/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=eric&AN=EJ169223&login.asp&site=ehost-live&scope=site
Kirkpatrick, D. L., & Kirkpatrick, J. D. (2006). Evaluating training programs (3rd ed.). San Francisco: Berrett-Koehler.
Kirkpatrick, J., & Kirkpatrick, W. K. (2016). Kirkpatrick's four levels of training evaluation. Place of publication not identified: Association for Talent Development. Retrieved from http://www.books24x7.com/marc.asp?bookid=117523
Koens, F., Mann, K. V., Custers, E. J. F. M., & ten Cate, O. T. J. (2005). Analysing the concept of context in medical education. Medical Education, 39(12), 1243-1249. Retrieved from http://search.ebscohost.com/login.aspx?direct=true&db=mnh&AN=16313584&site=ehost-live&scope=site
Kraiger, K., Ford, J. K., & Salas, E. (1993). Application of cognitive, skill-based, and affective theories of learning outcomes to new methods of training evaluation. Journal of Applied Psychology, 78(2), 311-328.
10.1037/0021-9010.78.2.311
Kratcoski, P. (2016). Introduction: Police-academic and professional practitioner collaboration in research, education, training, and programming. In P. C. Kratcoski, & M. Edelbacher (Eds.), Collaborative policing: Police, academics, professionals, and communities working together for education, training, and program implementation (pp. 5). Boca Raton, FL, USA: CRC Press.
Leung, W. (2002). Competency based medical training: Review. BMJ (Clinical Research Ed.), 325(7366), 693-696. Retrieved from http://search.ebscohost.com/login.aspx?direct=true&db=mnh&AN=12351364&site=ehost-live&scope=site
Livingstone, N., & Naismith, N. (2017). Faculty and undergraduate student perceptions of an integrated mentoring approach. Active Learning in Higher Education, 1469787417723233. 10.1177/1469787417723233 Retrieved from https://doi-org.ezproxy.library.ubc.ca/10.1177/1469787417723233
Lyter, S. C., & Smith, S. H. (2004). Connecting the dots from curriculum to practicum: Implications for empowerment and integration in field education. 10.1300/J001v23n02_03 Retrieved from http://ezproxy.library.ubc.ca/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=a9h&AN=21109017&site=ehost-live&scope=site
Mansfield, B. (1989). Competence and standards. In J. W. Burke (Ed.), Competency based education and training (pp. 26-38). Bristol, PA: The Falmer Press.
Matthew, C. T., Cianciolo, A. T., & Sternberg, R. J. (2005). Developing effective military leaders: Facilitating the acquisition of experience-based tacit knowledge (Technical report). United States Army Research Institute for the Behavioral and Social Sciences.
Matthew, C. T., & Sternberg, R. J. (2009). Developing experience-based (tacit) knowledge through reflection. Learning & Individual Differences, 19(4), 530-540. 10.1016/j.lindif.2009.07.001 Retrieved from http://ezproxy.library.ubc.ca/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=a9h&AN=44846190&site=ehost-live&scope=site
McCarthy, J. (2017). Enhancing feedback in higher education: Students' attitudes towards online and in-class formative assessment feedback models. Active Learning in Higher Education, 18(2), 127-141. 10.1177/1469787417707615 Retrieved from http://ezproxy.library.ubc.ca/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=a9h&AN=123770003&site=ehost-live&scope=site
McCoy, M. R. (2006). Teaching style and the application of adult learning principles by police instructors. Policing: An International Journal of Police Strategies & Management, 29(1), 77-91. 10.1108/13639510610648494 Retrieved from http://dx.doi.org/10.1108/13639510610648494
Dąbrowski, M., & Wiśniewski, J. (2011). Translating key competences into the school curriculum: Lessons from the Polish experience. European Journal of Education, 46(3), 323-334. 10.1111/j.1465-3435.2011.01483.x Retrieved from http://www.jstor.org/stable/41231583
Morcke, A. M., Dornan, T., & Eika, B. (2013). Outcome (competency) based education: An exploration of its origins, theoretical basis, and empirical evidence. Advances in Health Sciences Education, 18(4), 851-863.
Morris, C., & Chikwa, G. (2016). Audio versus written feedback: Exploring learners' preference and the impact of feedback format on students' academic performance. Active Learning in Higher Education, 17(2), 125-137. 10.1177/1469787416637482 Retrieved from http://ezproxy.library.ubc.ca/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=a9h&AN=115886210&site=ehost-live&scope=site
Mugford, R., Corey, S., & Bennell, C. (2013). Improving police training from a cognitive load perspective. Policing, 36(2), 312-337. 10.1108/13639511311329723 Retrieved from http://search.ebscohost.com/login.aspx?direct=true&db=a9h&AN=88133873&site=ehost-live&scope=site
Narayan, R., Rodriguez, C., Araujo, J., Shaqlaih, A., & Moss, G. (2013). Constructivism - constructivist learning theory. In B. J. Irby, G. Brown, R. Lara-Alecio & S. Jackson (Eds.), The handbook of educational theories (pp. 169-183). Charlotte, North Carolina, USA: Information Age Publishing.
Nkhoma, M., Sriratanaviriyakul, N., & Quang, H. L. (2017). Using case method to enrich students' learning outcomes. Active Learning in Higher Education, 18(1), 37-50. 10.1177/1469787417693501 Retrieved from http://ezproxy.library.ubc.ca/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=a9h&AN=122718585&site=ehost-live&scope=site
Norton, R. E. (1998). Quality instruction for the high performance workplace: DACUM. Retrieved from http://ezproxy.library.ubc.ca/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=eric&AN=ED419155&login.asp&site=ehost-live&scope=site
Norton, R. E. (2009). Competency-based education via the DACUM and SCID process: An overview. Columbus, OH: Center on Education and Training for Employment, the Ohio State University.
Oliva, J. R., & Compton, M. T. (2010).
What do police officers value in the classroom?: A qualitative study of the classroom social environment in law enforcement education. Policing: An International Journal of Police Strategies & Management, 33(2), 321-338. 10.1108/13639511011044911 Retrieved from http://dx.doi.org/10.1108/13639511011044911
Ossa Parra, M., Gutiérrez, R., & Aldana, M. F. (2015). Engaging in critically reflective teaching: From theory to practice in pursuit of transformative learning. Reflective Practice, 16(1), 16-30. 10.1080/14623943.2014.944141 Retrieved from http://ezproxy.library.ubc.ca/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=a9h&AN=100751175&site=ehost-live&scope=site
Marenin, O. (2005). Building a global police studies community. Police Quarterly, 8(1), 99-136. 10.1177/1098611104267329 Retrieved from http://ezproxy.library.ubc.ca/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=a9h&AN=16166926&site=ehost-live&scope=site
Pannell, L. (2012). Changing the training paradigm for a more resilient police department: Los Angeles Police Department (LAPD). International Police Training Journal, (3), 18.
Pannell, L. (2016). Changing training paradigm for a more resilient police force. California Board of Psychology, 8, 5.
Parent, R. (2006). The police use of deadly force: International comparisons. The Police Journal, 79(3), 230-237.
Parent, R. (2007). Crisis intervention: The police response to vulnerable individuals. The Police Journal, 80(2), 109-116.
Parent, R. (2011). The police use of deadly force in British Columbia: Mental illness and crisis intervention. Journal of Police Crisis Negotiations, 11(1), 57-71.
Paterson, C. (2016). Chapter 7: Higher education, police training, and police reform: A review of police-academic educational collaborations. In P. C. Kratcoski, & M. Edelbacher (Eds.), Collaborative policing: Police, academics, professionals, and communities working together for education, training, and program implementation (pp. 119). Boca Raton, FL, USA: CRC Press.
Police Sector Council. (2011). Police leadership education and training: Aligning programs and courses with leadership competencies. Ottawa, Ontario, Canada: Government of Canada.
Price, M., Handley, K., Millar, J., & O'Donovan, B. (2010). Feedback: All that effort, but what is the effect? Assessment & Evaluation in Higher Education, 35(3), 277-289. 10.1080/02602930903541007 Retrieved from http://ezproxy.library.ubc.ca/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=a9h&AN=49754276&site=ehost-live&scope=site
Regehr, G., Regehr, C., Bogo, M., & Power, R. (2007). Can we build a better mousetrap? Improving the measures of practice performance in the field practicum. Journal of Social Work Education, 43(2), 327-343. 10.5175/JSWE.2007.200600607 Retrieved from http://www.jstor.org/stable/23044269
Rethans, J., Norcini, J. J., Barón-Maldonado, M., Blackmore, D., Jolly, B. C., LaDuca, T., . . . Southgate, L. H. (2002). The relationship between competence and performance: Implications for assessing practice performance. Medical Education, 36(10), 901-909. Retrieved from http://search.ebscohost.com/login.aspx?direct=true&db=mnh&AN=12390456&site=ehost-live&scope=site
Robertson, N. (2012). Policing: Fundamental principles in a Canadian context. Canadian Public Administration, 55(3), 343-363. 10.1111/j.1754-7121.2012.00227.x Retrieved from http://ezproxy.library.ubc.ca/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=a9h&AN=79823897&site=ehost-live&scope=site
Roediger III, H. L., & Karpicke, J. D. (2006a). The power of testing memory: Basic research and implications for educational practice. Perspectives on Psychological Science, 1(3), 181-210. 10.1111/j.1745-6916.2006.00012.x Retrieved from http://ezproxy.library.ubc.ca/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=a9h&AN=22146006&site=ehost-live&scope=site
Roediger III, H. L., & Karpicke, J. D. (2006b). Test-enhanced learning. Psychological Science (0956-7976), 17(3), 249-255. 10.1111/j.1467-9280.2006.01693.x Retrieved from http://ezproxy.library.ubc.ca/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=a9h&AN=19826521&site=ehost-live&scope=site
Roediger, H. L., & Butler, A. C. (2011). The critical role of retrieval practice in long-term retention. Trends in Cognitive Sciences, 15(1), 20-27. 10.1016/j.tics.2010.09.003 Retrieved from http://ezproxy.library.ubc.ca/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=a9h&AN=57165258&site=ehost-live&scope=site
Rust, C. (2002). The impact of assessment on student learning: How can the research literature practically help to inform the development of departmental assessment strategies and learner-centred assessment practices? Active Learning in Higher Education, 3(2), 145-158. Retrieved from http://ezproxy.library.ubc.ca/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=a9h&AN=10105761&site=ehost-live&scope=site
Sadler, D. R. (1989). Formative assessment and the design of instructional systems. Instructional Science, 18(2), 119-144. Retrieved from http://ezproxy.library.ubc.ca/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=eric&AN=EJ400981&login.asp&site=ehost-live&scope=site
Sadler, D. R. (1998). Formative assessment: Revisiting the territory. Assessment in Education: Principles, Policy & Practice, 5(1), 77. Retrieved from http://ezproxy.library.ubc.ca/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=a9h&AN=725612&site=ehost-live&scope=site
Sadler, D. R. (2010). Beyond feedback: Developing student capability in complex appraisal. Assessment & Evaluation in Higher Education, 35(5), 535-550. 10.1080/02602930903541015 Retrieved from http://ezproxy.library.ubc.ca/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=a9h&AN=53155625&site=ehost-live&scope=site
Sakofs, M. (2001). Perspectives. Journal of Experiential Education, 24(1), 5.
Retrieved from http://ezproxy.library.ubc.ca/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=a9h&AN=5930221&site=ehost-live&scope=site
Schenck, J., & Cruickshank, J. (2015). Evolving Kolb: Experiential education in the age of neuroscience. Journal of Experiential Education, 38(1), 73-95. 10.1177/1053825914547153 Retrieved from http://search.ebscohost.com/login.aspx?direct=true&db=a9h&AN=100948154&site=ehost-live&scope=site
Schwarz, G., & Cavener, L. A. (1994). Outcome-based education and curriculum change: Advocacy, practice, and critique. Journal of Curriculum & Supervision, 9(4), 326-338. Retrieved from http://ezproxy.library.ubc.ca/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=a9h&AN=9502160066&site=ehost-live&scope=site
Scott, J., Shields, C., Gardner, J., Hancock, A., & Nutt, A. (2011). Student engagement with feedback. Bioscience Education, 18. Retrieved from http://ezproxy.library.ubc.ca/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=eric&AN=EJ959261&login.asp&site=ehost-live&scope=site http://dx.doi.org/10.3108/beej.18.5SE
Shumway, J. M., & Harden, R. M. (2003). AMEE guide no. 25: The assessment of learning outcomes for the competent and reflective physician. Medical Teacher, 25(6), 569-584. Retrieved from http://search.ebscohost.com/login.aspx?direct=true&db=mnh&AN=15369904&site=ehost-live&scope=site
Slavich, G., & Zimbardo, P. (2012). Transformational teaching: Theoretical underpinnings, basic principles, and core methods. Educational Psychology Review, 24(4), 569-608. 10.1007/s10648-012-9199-6 Retrieved from http://ezproxy.library.ubc.ca/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=a9h&AN=82730917&site=ehost-live&scope=site
Smith, S. R., & Dollase, R. (1999). AMEE guide no. 14: Outcome-based education: Part 2--planning, implementing and evaluating a competency-based curriculum. Medical Teacher, 21(1), 15. Retrieved from http://search.ebscohost.com/login.aspx?direct=true&db=aph&AN=1707162&login.asp&site=ehost-live&scope=site
Smith, S. R., Goldman, R. E., Dollase, R. H., & Taylor, J. S. (2007). Assessing medical students for non-traditional competencies. Medical Teacher, 29(7), 711-716. Retrieved from http://search.ebscohost.com/login.aspx?direct=true&db=mnh&AN=17963141&site=ehost-live&scope=site
Spady, W. G., & Mitchell, D. E. (1977). Competency based education: Organizational issues and implications. Educational Researcher, 6(2), 9-15.
Stanley, D. J., Meyer, J. P., & Topolnytsky, L. (2005). Employee cynicism and resistance to organizational change. Journal of Business and Psychology, 19(4), 429-459. 10.1007/s10869-005-4518-2 Retrieved from https://doi.org/10.1007/s10869-005-4518-2
Stentoft, D. (2017). From saying to doing interdisciplinary learning: Is problem-based learning the answer? Active Learning in Higher Education, 18(1), 51-61. 10.1177/1469787417693510 Retrieved from http://ezproxy.library.ubc.ca/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=a9h&AN=122718587&site=ehost-live&scope=site
Sternberg, R. J. (1999). The theory of successful intelligence. Review of General Psychology, 3(4), 292-316. 10.1037/1089-2680.3.4.292 Retrieved from http://ezproxy.library.ubc.ca/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=pdh&AN=1999-01689-003&site=ehost-live&scope=site
Sternberg, R. J., & Hedlund, J. (2002). Practical intelligence, g, and work psychology. Human Performance, 15(1-2), 143-160.
Swing, S. R. (2010). Perspectives on competency-based medical education from the learning sciences. Medical Teacher, 32(8), 663-668. 10.3109/0142159X.2010.500705 Retrieved from http://ezproxy.library.ubc.ca/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=a9h&AN=52488324&site=ehost-live&scope=site
Talbot, M. (2004). Monkey see, monkey do: A critique of the competency model in graduate medical education. Medical Education, 38(6), 587-592. 10.1046/j.1365-2923.2004.01794.x Retrieved from http://ezproxy.library.ubc.ca/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=a9h&AN=13245694&site=ehost-live&scope=site
Taylor, T. Z., Elison-Bowers, P., Werth, E., Bell, E., Carbajal, J., Lamm, K. B., & Velazquez, E. (2013). A police officer's tacit knowledge inventory (POTKI): Establishing construct validity and exploring applications. Police Practice & Research, 14(6), 478-490. 10.1080/15614263.2013.802847 Retrieved from http://ezproxy.library.ubc.ca/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=a9h&AN=91949855&site=ehost-live&scope=site
ten Cate, O. (2006). Trust, competence, and the supervisor's role in postgraduate training. BMJ (Clinical Research Ed.), 333(7571), 748-751. Retrieved from http://search.ebscohost.com/login.aspx?direct=true&db=mnh&AN=17023469&site=ehost-live&scope=site
ten Cate, O., & Scheele, F. (2007). Competency-based postgraduate training: Can we bridge the gap between theory and clinical practice? Academic Medicine: Journal of the Association of American Medical Colleges, 82(6), 542-547. Retrieved from http://search.ebscohost.com/login.aspx?direct=true&db=mnh&AN=17525536&site=ehost-live&scope=site
Thayer-Bacon, B. J. (2013). Epistemology and education. In B. J. Irby, G. Brown, R. Lara-Alecio & S. Jackson (Eds.), The handbook of educational theories (pp. 17-27). Charlotte, North Carolina, USA: Information Age Publishing.
Trotter, E. (2006). Student perceptions of continuous summative assessment. Assessment & Evaluation in Higher Education, 31(5), 505-521. 10.1080/02602930600679506 Retrieved from http://ezproxy.library.ubc.ca/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=a9h&AN=20855315&site=ehost-live&scope=site
Tuxworth, E. (1989). Competence based education and training: Background and origins. In J. W. Burke (Ed.), Competency based education and training (pp. 10-25). Bristol, PA: The Falmer Press.
van Merriënboer, J. J. G., Clark, R. E., & de Croock, M. B. M. (2002). Blueprints for complex learning: The 4C/ID-model. Educational Technology Research and Development, 50(2), 39-64. Retrieved from http://search.ebscohost.com/login.aspx?direct=true&db=eric&AN=EJ652497&login.asp&site=ehost-live&scope=site
Vander Kooi, G. P., & Bierlein Palmer, L. (2014). Problem-based learning for police academy students: Comparison of those receiving such instruction with those in traditional programs. Journal of Criminal Justice Education, 25(2), 175-195. 10.1080/10511253.2014.882368 Retrieved from http://ezproxy.library.ubc.ca/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=a9h&AN=96223135&site=ehost-live&scope=site
Wang, G. G., & Wilcox, D. (2006). Training evaluation: Knowing more than is practiced. Advances in Developing Human Resources, 8(4), 528-539. 10.1177/1523422306293007
Weinblatt, R. B. (1999). New police training philosophy. United States: Law and Order. Retrieved from http://libproxy.jibc.ca:2048/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=edsbl&AN=RN066233091&site=eds-live&scope=site
Werth, E. P. (2009). Student perception of learning through a problem-based learning exercise: An exploratory study. Policing: An International Journal of Police Strategies & Management, 32(1), 21-37. 10.1108/13639510910937094 Retrieved from http://dx.doi.org/10.1108/13639510910937094
Werth, E. P. (2011). Scenario training in police academies: Developing students' higher-level thinking skills. Police Practice & Research, 12(4), 325-340. 10.1080/15614263.2011.563970 Retrieved from http://search.ebscohost.com/login.aspx?direct=true&db=aph&AN=65247373&login.asp&site=ehost-live&scope=site
White, D., & Heslop, R. (2012). Educating, legitimising or accessorising? Alternative conceptions of professional training in UK higher education: A comparative study of teacher, nurse and police officer educators. Police Practice & Research, 13(4), 342-356. 10.1080/15614263.2012.673290 Retrieved from http://search.ebscohost.com/login.aspx?direct=true&db=aph&AN=77959624&login.asp&site=ehost-live&scope=site
Wiklund-Hörnqvist, C., Andersson, M., Jonsson, B., & Nyberg, L. (2017). Neural activations associated with feedback and retrieval success. Npj Science of Learning, 2(1), 12. 10.1038/s41539-017-0013-6 Retrieved from https://doi.org/10.1038/s41539-017-0013-6
Wiklund-Hörnqvist, C., Jonsson, B., & Nyberg, L. (2014). Strengthening concept learning by repeated testing. Scandinavian Journal of Psychology, 55(1), 10-16. 10.1111/sjop.12093 Retrieved from http://ezproxy.library.ubc.ca/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=a9h&AN=93630838&site=ehost-live&scope=site
Wilkinson, K., Boyd, K., Pearson, M., Farrimond, H., Lang, I. A., Fleischer, D., . . . Rappert, B. (2017). Making sense of evidence: Using research training to promote organisational change. Police Practice and Research, 1-19. 10.1080/15614263.2017.1405266 Retrieved from https://doi.org/10.1080/15614263.2017.1405266
Wyrostek, W., & Downey, S. (2017). Compatibility of common instructional models with the DACUM process. Adult Learning, 28(2), 69-75.
10.1177/1045159516669702 Retrieved from http://ezproxy.library.ubc.ca/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=a9h&AN=122565721&site=ehost-live&scope=site

Appendices

The following Appendices are included, as discussed in the body of the thesis:

Appendix A - Template Schedule for Competency-Based Delivery Model of Recruit Training

A.1 Block I Template Schedule
[Schedule pages not reproduced in this extract.]

A.2 Block III Template Schedule
[Schedule pages not reproduced in this extract.]

Appendix B - Surveys

This appendix contains the surveys used for recruits, field trainers, and assessors. The same recruit survey was used for survey administrations 1 and 2 for all classes. The same FTO survey was used for field trainers for all classes.

B.1 Recruit Survey
[Survey instrument not reproduced in this extract.]

B.2 FTO Survey
[Survey instrument not reproduced in this extract.]

B.3 Assessor Survey
[Survey instrument not reproduced in this extract.]

Appendix C - Consistency Tables: Comparison Within Classes

These tables are from Section 5.2.2 - Comparison within classes. They are presented in an appendix instead of in the text for ease of reading.

C.1 Lecture-based delivery model - Recruit characteristics

Recruit gender        N    Overall ability M (SD)    Overall preparedness M (SD)
Female                13   2.38 (0.768)              3.00 (0.000)
Male                  22   2.86 (0.560)              3.23 (0.528)
Total                 35   2.69 (0.676)              3.14 (0.430)
Mann-Whitney U tests (exact significance): ability p = 0.121, preparedness p = 0.389. Retain both null hypotheses that the distribution of responses is the same across recruit genders (significance level 0.05).
Table C-1. Mean values and Mann-Whitney U test results grouped by recruit gender

Recruit age range     N    Overall ability M (SD)    Overall preparedness M (SD)
20-24                 11   2.64 (0.674)              3.09 (0.302)
25-29                 15   2.73 (0.704)              3.13 (0.516)
30-34                 5    2.20 (0.447)              3.20 (0.447)
35-39                 4    3.25 (0.500)              3.25 (0.500)
Total                 35   2.69 (0.676)              3.14 (0.430)
Kruskal-Wallis tests (asymptotic significance): ability p = 0.069, preparedness p = 0.738. Retain both null hypotheses that the distribution of responses is the same across recruit age ranges (significance level 0.05).
Table C-2. Mean and Kruskal-Wallis test values grouped by recruit age range

Recruit post-secondary education   N    Overall ability M (SD)    Overall preparedness M (SD)
Some college                       7    2.86 (0.378)              3.00 (0.000)
College diploma                    2    2.50 (0.707)              3.50 (0.707)
Some university                    4    2.50 (1.000)              3.00 (0.000)
Undergraduate degree               13   2.54 (0.776)              3.23 (0.599)
Graduate degree                    3    3.33 (0.577)              3.33 (0.577)
Other                              5    2.80 (0.447)              3.00 (0.000)
Total                              34   2.71 (0.676)              3.15 (0.436)
Kruskal-Wallis tests (asymptotic significance): ability p = 0.473, preparedness p = 0.309. Retain both null hypotheses that the distribution of responses is the same across recruit post-secondary education (significance level 0.05).
Table C-3. Mean and Kruskal-Wallis test values grouped by recruit post-secondary education level

Recruit previous police experience                           N    Overall ability M (SD)    Overall preparedness M (SD)
No experience                                                25   2.64 (0.757)              3.08 (0.277)
CSP, jail guard, auxiliary/reserve or international police   10   2.80 (0.422)              3.30 (0.675)
Total                                                        35   2.69 (0.676)              3.14 (0.430)
Mann-Whitney U tests (exact significance): ability p = 0.627, preparedness p = 0.577. Retain both null hypotheses that the distribution of responses is the same across recruit previous police experience (significance level 0.05).
Table C-4. Mean values and Mann-Whitney U test values grouped by recruit previous policing experience

C.2 Lecture-based delivery model - FTO characteristics

FTO gender            N    Overall recruit ability M (SD)    Overall recruit preparedness M (SD)
Female                3    2.33 (1.155)                      3.00 (1.000)
Male                  12   2.92 (0.793)                      2.92 (0.515)
Total                 15   2.80 (0.862)                      2.93 (0.594)
Mann-Whitney U tests (exact significance): ability p = 0.536, preparedness p = 0.945. Retain both null hypotheses that the distribution of responses is the same across FTO genders (significance level 0.05).
Table C-5. Mean and Mann-Whitney U test values grouped by FTO gender

FTO years of policing   N    Overall recruit ability M (SD)    Overall recruit preparedness M (SD)
0-4                     2    2.50 (0.707)                      3.50 (0.707)
5-9                     9    2.67 (1.000)                      2.78 (0.441)
10-14                   2    3.50 (0.707)                      3.50 (0.707)
15-19                   2    3.00 (0.000)                      2.50 (0.707)
Total                   15   2.80 (0.862)                      2.93 (0.594)
Kruskal-Wallis tests (asymptotic significance): ability p = 0.561, preparedness p = 0.159. Retain both null hypotheses that the distribution of responses is the same across FTO years of service (significance level 0.05).
Table C-6. Mean and Kruskal-Wallis test values grouped by FTO years of service

FTO years as field trainer   N    Overall recruit ability M (SD)    Overall recruit preparedness M (SD)
0-4                          13   2.77 (0.927)                      3.00 (0.577)
10-14                        2    3.00 (0.000)                      2.50 (0.707)
Total                        15   2.80 (0.862)                      2.93 (0.594)
Mann-Whitney U tests (exact significance): ability p = 0.800, preparedness p = 0.381. Retain both null hypotheses that the distribution of responses is the same across FTO years as a field trainer (significance level 0.05).
Table C-7. Mean and Mann-Whitney U test values grouped by FTO years as a field trainer

FTO - number of recruits trained   N    Overall recruit ability M (SD)    Overall recruit preparedness M (SD)
0-4                                12   2.83 (0.937)                      3.08 (0.515)
5-9                                1    2.00 (-)                          2.00 (-)
10-14                              2    3.00 (0.000)                      2.50 (0.707)
Total                              15   2.80 (0.862)                      2.93 (0.594)
Kruskal-Wallis tests (asymptotic significance): ability p = 0.522, preparedness p = 0.107. Retain both null hypotheses that the distribution of responses is the same across FTO number of recruits trained.
The significance level is 0.05.
Table C-8. Mean and Kruskal-Wallis test values grouped by FTO number of recruits trained

C.3 Lecture-based delivery model - Recruit characteristics on FTO responses

Recruit gender   Overall recruit ability M (SD), n    Overall recruit preparedness M (SD), n
Female           2.60 (0.894), n = 5                  2.67 (0.577), n = 3
Male             3.00 (0.577), n = 7                  3.14 (0.690), n = 7
Total            2.83 (0.718), n = 12                 3.00 (0.667), n = 10
Mann-Whitney U tests (exact significance): ability p = 0.343, preparedness p = 0.383. Retain both null hypotheses that the distribution of FTO responses is the same across recruit genders (significance level 0.05).
Table C-9. Mean and Mann-Whitney U test values for FTO responses grouped by recruit gender

Recruit age range   Overall recruit ability M (SD), n    Overall recruit preparedness M (SD), n
20-24               2.80 (0.837), n = 5                  3.00 (0.816), n = 4
25-29               2.67 (0.516), n = 6                  3.00 (0.707), n = 5
30-34               4.00 (-), n = 1                      3.00 (-), n = 1
Total               2.83 (0.718), n = 12                 3.00 (0.667), n = 10
Kruskal-Wallis tests (asymptotic significance): ability p = 0.279, preparedness p = 1.000. Retain both null hypotheses that the distribution of FTO responses is the same across recruit age ranges (significance level 0.05).
Table C-10. Mean and Kruskal-Wallis test values for FTO responses grouped by recruit age

Recruit post-secondary education   Overall recruit ability M (SD), n    Overall recruit preparedness M (SD), n
Some college                       2.00 (0.000), n = 2                  3.00 (-), n = 1
Undergraduate degree               3.14 (0.690), n = 7                  3.17 (0.753), n = 6
Graduate degree                    3.00 (-), n = 1                      3.00 (-), n = 1
Other                              2.50 (0.707), n = 2                  2.50 (0.707), n = 2
Total                              2.83 (0.718), n = 12                 3.00 (0.667), n = 10
Kruskal-Wallis tests (asymptotic significance): ability p = 0.191, preparedness p = 0.682. Retain both null hypotheses that the distribution of FTO responses is the same across recruit post-secondary education (significance level 0.05).
Table C-11. Mean and Kruskal-Wallis test values for FTO responses grouped by recruit post-secondary education

Recruit previous police experience                           Overall recruit ability M (SD), n    Overall recruit preparedness M (SD), n
No experience                                                2.71 (0.756), n = 7                  3.00 (0.707), n = 5
CSP, jail guard, auxiliary/reserve or international police   3.00 (0.707), n = 5                  3.00 (0.707), n = 5
Total                                                        2.83 (0.718), n = 12                 3.00 (0.667), n = 10
Mann-Whitney U tests (exact significance): ability p = 0.530, preparedness p = 1.000. Retain both null hypotheses that the distribution of FTO responses is the same across recruit previous police experience (significance level 0.05).
Table C-12. Mean and Mann-Whitney U test values for FTO responses grouped by recruit previous police experience

C.4 Competency-based delivery model - Recruit characteristics

Recruit gender        N    Overall ability M (SD)    Overall preparedness M (SD)
Female                13   2.62 (0.768)              3.00 (0.000)
Male                  36   2.75 (0.604)              3.06 (0.583)
Total                 49   2.71 (0.645)              3.04 (0.498)
Mann-Whitney U tests (asymptotic significance): ability p = 0.718, preparedness p = 1.000. Retain both null hypotheses that the distribution of responses is the same across recruit gender (significance level 0.05).
Table C-13. Mean and Mann-Whitney U test values grouped by recruit gender

Recruit age range     N    Overall ability M (SD)    Overall preparedness M (SD)
20-24                 10   2.80 (0.422)              3.10 (0.738)
25-29                 25   2.68 (0.748)              3.04 (0.539)
30-34                 12   2.75 (0.622)              3.00 (0.000)
35-39                 1    3.00 (-)                  3.00 (-)
40-44                 1    2.00 (-)                  3.00 (-)
Total                 49   2.71 (0.645)              3.04 (0.498)
Kruskal-Wallis tests (asymptotic significance): ability p = 0.586, preparedness p = 1.000. Retain both null hypotheses that the distribution of responses is the same across recruit age range (significance level 0.05).
Table C-14. Mean and Kruskal-Wallis test values grouped by recruit age category

Recruit post-secondary education   N    Overall ability M (SD)    Overall preparedness M (SD)
College diploma                    12   2.50 (0.905)              3.00 (0.426)
Some university                    7    2.86 (0.378)              3.00 (0.000)
Undergraduate degree               24   2.79 (0.509)              3.08 (0.654)
Graduate degree                    4    2.50 (1.000)              3.00 (0.000)
Other                              2    3.00 (0.000)              3.00 (0.000)
Total                              49   2.71 (0.645)              3.04 (0.498)
Kruskal-Wallis tests (asymptotic significance): ability p = 0.683, preparedness p = 1.000. Retain both null hypotheses that the distribution of responses is the same across recruit post-secondary education (significance level 0.05).
Table C-15. Mean and Kruskal-Wallis test values grouped by recruit post-secondary education

Recruit previous police experience                           N    Overall ability M (SD)    Overall preparedness M (SD)
No previous police experience                                12   2.83 (0.577)              2.92 (0.289)
CSP/jail guard/auxiliary or reserve/international police     13   2.77 (0.439)              3.00 (0.707)
Traffic authority/CBSA/corrections/dispatch                  3    3.00 (0.000)              3.00 (0.000)
Volunteer                                                    1    3.00 (-)                  3.00 (-)
Total                                                        29   2.83 (0.468)              2.97 (0.499)
Kruskal-Wallis tests (asymptotic significance): ability p = 0.664, preparedness p = 0.981. Retain both null hypotheses that the distribution of responses is the same across recruit previous police experience (significance level 0.05).
Table C-16. Mean and Kruskal-Wallis test values grouped by recruit previous policing experience

C.5 Competency-based delivery model - FTO characteristics

FTO gender            N    Overall recruit ability M (SD)    Overall recruit preparedness M (SD)
Female                4    3.00 (0.816)                      2.75 (0.957)
Male                  16   2.75 (0.683)                      2.81 (0.403)
Total                 20   2.80 (0.696)                      2.80 (0.523)
Mann-Whitney U tests (exact significance): ability p = 0.682, preparedness p = 0.750. Retain both null hypotheses that the distribution of responses is the same across FTO gender.
The significance level is 0.05.
Table C-17. Mean and Mann-Whitney U test values grouped by FTO gender

FTO age range         N    Overall recruit ability M (SD)    Overall recruit preparedness M (SD)
30-34                 5    2.80 (0.447)                      2.80 (0.447)
35-39                 4    2.50 (1.000)                      2.75 (0.500)
40-44                 5    2.80 (0.837)                      3.00 (0.707)
45-49                 5    3.20 (0.447)                      2.80 (0.447)
50-54                 1    2.00 (-)                          2.00 (-)
Total                 20   2.80 (0.696)                      2.80 (0.523)
Kruskal-Wallis tests (asymptotic significance): ability p = 0.402, preparedness p = 0.542. Retain both null hypotheses that the distribution of responses is the same across FTO age range (significance level 0.05).
Table C-18. Mean and Kruskal-Wallis test values grouped by FTO age range

FTO years of policing   N    Overall recruit ability M (SD)    Overall recruit preparedness M (SD)
5-9                     6    2.67 (0.516)                      2.83 (0.408)
10-14                   9    2.78 (0.833)                      2.78 (0.441)
15-19                   3    2.67 (0.577)                      3.00 (1.000)
5                       2    3.50 (0.707)                      2.50 (0.707)
Total                   20   2.80 (0.696)                      2.80 (0.523)
Kruskal-Wallis tests (asymptotic significance): ability p = 0.471, preparedness p = 0.811. Retain both null hypotheses that the distribution of responses is the same across FTO years of policing (significance level 0.05).
Table C-19. Mean and Kruskal-Wallis test values grouped by FTO years of service

FTO years as field trainer   N    Overall recruit ability M (SD)    Overall recruit preparedness M (SD)
0-4                          13   2.85 (0.801)                      2.77 (0.439)
5-9                          2    3.00 (0.000)                      3.00 (0.000)
10-14                        4    2.50 (0.577)                      3.00 (0.816)
4                            1    3.00 (-)                          2.00 (-)
Total                        20   2.80 (0.696)                      2.80 (0.523)
Kruskal-Wallis tests (asymptotic significance): ability p = 0.659, preparedness p = 0.351. Retain both null hypotheses that the distribution of responses is the same across FTO years as field trainer (significance level 0.05).
Table C-20. Mean and Kruskal-Wallis test values grouped by FTO years as field trainer

FTO - number of recruits trained   N    Overall recruit ability M (SD)    Overall recruit preparedness M (SD)
0-4                                13   2.92 (0.641)                      2.85 (0.376)
5-9                                3    2.33 (1.155)                      3.00 (1.000)
10-14                              3    2.67 (0.577)                      2.33 (0.577)
4                                  1    3.00 (-)                          3.00 (-)
Total                              20   2.80 (0.696)                      2.80 (0.523)
Kruskal-Wallis tests (asymptotic significance): ability p = 0.775, preparedness p = 0.379. Retain both null hypotheses that the distribution of responses is the same across FTO number of recruits trained (significance level 0.05).
Table C-21. Mean and Kruskal-Wallis test values grouped by FTO number of recruits trained

C.6 Competency-based delivery model - Recruit characteristics on FTO responses

Recruit gender        N    Overall recruit ability M (SD)    Overall recruit preparedness M (SD)
Female                1    3.00 (-)                          3.00 (-)
Male                  8    2.63 (0.916)                      2.62 (0.744)
Total                 9    2.67 (0.866)                      2.67 (0.707)
Mann-Whitney U tests (exact significance): ability p = 0.889, preparedness p = 0.667. Retain both null hypotheses that the distribution of FTO responses is the same across recruit gender (significance level 0.05).
Table C-22. Mean and Mann-Whitney U test values for FTO responses grouped by recruit gender

Recruit age range     N    Overall recruit ability M (SD)    Overall recruit preparedness M (SD)
20-24                 1    3.00 (-)                          2.00 (-)
25-29                 3    1.67 (0.577)                      2.00 (0.000)
30-34                 5    3.20 (0.447)                      3.20 (0.447)
Total                 9    2.67 (0.866)                      2.67 (0.707)
Kruskal-Wallis tests (asymptotic significance): ability p = 0.037, preparedness p = 0.027. Reject both null hypotheses that the distribution of FTO responses is the same across recruit age range (significance level 0.05).
Table C-23. Mean and Kruskal-Wallis test values for FTO responses grouped by recruit age category

Cross-tabulation reports

FTO response to the overall recruit ability question, by recruit age range:
                                 20-24   25-29   30-34   Total
Has knowledge                    0       1       0       1
Act under full supervision       0       2       0       2
Act under moderate supervision   1       0       4       5
Act independently                0       0       1       1
Total                            1       3       5       9

FTO response to the overall recruit preparedness question, by recruit age range:
                                 20-24   25-29   30-34   Total
Poorly prepared                  1       3       0       4
Well prepared                    0       0       4       4
Extremely well prepared          0       0       1       1
Total                            1       3       5       9
Table C-24. Cross-tabulation report of FTO responses grouped by recruit age category

Recruit post-secondary education   N    Overall recruit ability M (SD)    Overall recruit preparedness M (SD)
College diploma                    1    1.00 (-)                          2.00 (-)
Some university                    1    3.00 (-)                          3.00 (-)
Undergraduate degree               6    2.83 (0.753)                      2.50 (0.548)
Graduate degree                    1    3.00 (-)                          4.00 (-)
Total                              9    2.67 (0.866)                      2.67 (0.707)
Kruskal-Wallis tests (asymptotic significance): ability p = 0.389, preparedness p = 0.245. Retain both null hypotheses that the distribution of FTO responses is the same across recruit post-secondary education (significance level 0.05).
Table C-25. Mean and Kruskal-Wallis test values for FTO responses grouped by recruit post-secondary education

Recruit previous police experience                           N    Overall recruit ability M (SD)    Overall recruit preparedness M (SD)
No previous police experience                                3    2.67 (0.577)                      2.67 (0.577)
CSP/jail guard/auxiliary or reserve/international police     4    2.50 (1.000)                      2.75 (0.957)
Total                                                        7    2.57 (0.787)                      2.71 (0.756)
Kruskal-Wallis tests (asymptotic significance): ability p = 1.000, preparedness p = 1.000. Retain both null hypotheses that the distribution of FTO responses is the same across recruit previous police experience (significance level 0.05).
Table C-26. Mean and Kruskal-Wallis test values for FTO responses grouped by recruit previous policing experience "@en ; edm:hasType "Thesis/Dissertation"@en ; vivo:dateIssued "2018-11"@en ; edm:isShownAt "10.14288/1.0372878"@en ; dcterms:language "eng"@en ; ns0:degreeDiscipline "Educational Leadership and Policy"@en ; edm:provider "Vancouver : University of British Columbia Library"@en ; dcterms:publisher "University of British Columbia"@en ; dcterms:rights "Attribution-NonCommercial-NoDerivatives 4.0 International"@* ; ns0:rightsURI "http://creativecommons.org/licenses/by-nc-nd/4.0/"@* ; ns0:scholarLevel "Graduate"@en ; dcterms:title "Evaluation of a competency-based education framework for police recruit training in British Columbia"@en ; dcterms:type "Text"@en ; ns0:identifierURI "http://hdl.handle.net/2429/67610"@en .