UBC Theses and Dissertations

Evaluation of a competency-based education framework for police recruit training in British Columbia Houlahan, Nora 2018


EVALUATION OF A COMPETENCY-BASED EDUCATION FRAMEWORK FOR POLICE RECRUIT TRAINING IN BRITISH COLUMBIA

by

Nora Houlahan

B.Sc., The University of Guelph, 2000
M.Sc., The University of British Columbia, 2004

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF

DOCTOR OF EDUCATION

in

THE FACULTY OF GRADUATE AND POSTDOCTORAL STUDIES
(Educational Leadership and Policy)

THE UNIVERSITY OF BRITISH COLUMBIA
(Vancouver)

October 2018

© Nora Houlahan, 2018

The following individuals certify that they have read, and recommend to the Faculty of Graduate and Postdoctoral Studies for acceptance, the dissertation entitled:

Evaluation of a Competency-based Education Framework for Police Recruit Training in British Columbia

submitted by Nora Houlahan in partial fulfillment of the requirements for the degree of Doctor of Education in Educational Leadership and Policy.

Examining Committee:

Don Fisher, Educational Studies (Supervisor)
Tom Sork, Educational Studies (Supervisory Committee)
Steve Schnitzer, Police Academy (Supervisory Committee)
Alison Taylor, Educational Studies (University Examiner)
Penney Clark, Curriculum & Pedagogy (University Examiner)

Abstract

Police training is traditionally delivered in a didactic, para-military style that contrasts with modern-day public expectations of patrol-level police officers. The predominant methods of instruction and assessment for police recruits remain lecture-based and memorization-driven. In British Columbia, all municipal, transit, and tribal police recruits are trained at the Justice Institute of British Columbia (JIBC) Police Academy.
In 2016, the JIBC Police Academy implemented a recruit-training program that is centred on the development and assessment of the Police Sector Council (PSC) National Framework of Constable Competencies. The core aspects of this program include: integrated delivery of materials focused around common patrol-level calls, application and performance through case-based and scenario-based learning activities, development of individualized training plans with instructors mentoring recruits over the course of training, performance-based assessment exam scenarios, and assessment portfolios at the end of each component of training. This is the first police recruit training program in Canada to directly integrate the PSC competencies.

This project used a quantitative approach to evaluate the first component (Block I) of the new training delivery model by surveying recruits and their Field Training Officers (FTOs) from one class trained in the old lecture-based delivery model and two classes trained in the new competency-based delivery model. The survey used the PSC constable competencies as the reference point and, for each of the nine core competencies, asked about the recruits' ability and how well their Block I training prepared them for Block II. Recruits in the lecture-based delivery model rated their ability significantly higher than those from the competency-based delivery model in: adaptability, ethical accountability, organizational awareness, problem solving, risk management, stress tolerance, and teamwork. No significant difference was identified in how FTOs rated recruits in the lecture-based and competency-based delivery models. Analysis of the comments indicates that recruits in the lecture-based delivery model may have a less robust understanding of the role of a patrol-level police officer, due to their limited exposure to scenarios and the lack of formative feedback on their performance, and may over-estimate their own ability.
The impacts of organizational cynicism and change management are included in the discussion.

Lay Summary

In September 2016, the JIBC Police Academy implemented a new competency-based delivery format for the municipal police recruit training program in British Columbia. In this format, training moved away from a traditional lecture-based and memorization-driven model to one that focuses on application and performance and is aligned with the Police Sector Council (PSC) national framework of constable competencies. The PSC competencies are the only nationally recognized standard for policing in Canada, and this program marks the first time nationally that the competencies have been integrated into police recruit training. This project evaluates the changes in the program delivery model by using surveys to compare recruit ability and preparedness for field training in one class of recruits from the lecture-based delivery model and two classes of recruits from the competency-based delivery model.

Preface

The identification and design of the research program for this project was conducted entirely by me. I performed all aspects of the research and analysis of the research data. With the permission of my supervisor, I consulted with a graduate student representative from the University of British Columbia (UBC) Department of Statistics Short Term Consulting Service on the statistical analysis of the data. This representative provided advice on the approach to the analysis and the statistical tests to use.

This project required ethics approval from both the University of British Columbia and the Justice Institute of British Columbia (JIBC). Ethics approval was obtained from the UBC Behavioural Research Ethics Board under certificate H16-01401 and from the JIBC Ethics Review Committee under certificate JIBBCER2016-10-02-CBEF.
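The statistical analysis mentioned above refers to the survey comparisons reported in Chapter 5, where Mann Whitney U tests compare the distributions of Likert-scale ratings across classes. As an illustration only (not code from the thesis), the following is a minimal pure-Python sketch of a two-sided Mann-Whitney U test using the normal approximation with tie and continuity corrections; the sample ratings at the end are hypothetical.

```python
import math
from collections import Counter
from statistics import NormalDist

def mann_whitney_u(sample_a, sample_b):
    """Two-sided Mann-Whitney U test (normal approximation with
    tie and continuity corrections). Returns (U, p_value)."""
    pooled = sample_a + sample_b
    order = sorted(range(len(pooled)), key=lambda i: pooled[i])
    # Assign ranks; tied values share the mean of their ranks.
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(pooled):
        j = i
        while j + 1 < len(pooled) and pooled[order[j + 1]] == pooled[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1  # ranks are 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    n1, n2, n = len(sample_a), len(sample_b), len(pooled)
    u1 = sum(ranks[:n1]) - n1 * (n1 + 1) / 2
    u = min(u1, n1 * n2 - u1)  # report the smaller U statistic
    mu = n1 * n2 / 2
    ties = Counter(pooled).values()
    tie_term = sum(t ** 3 - t for t in ties) / (n * (n - 1))
    sigma = math.sqrt(n1 * n2 / 12 * (n + 1 - tie_term))
    z = (u - mu + 0.5) / sigma  # continuity correction
    return u, min(2 * NormalDist().cdf(z), 1.0)

# Hypothetical 5-point ratings from two recruit classes:
u, p = mann_whitney_u([4, 5, 4, 5, 3], [3, 3, 2, 4, 3])
```

A rank-based test such as this is a reasonable choice for ordinal survey data because it compares distributions without assuming the rating scale is interval-valued; with the small class sizes in this study, an exact-permutation variant would be more precise than the normal approximation sketched here.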
Table of Contents

Abstract
Lay Summary
Preface
Table of Contents
List of Tables
List of Figures
List of Abbreviations
Acknowledgements
Dedication

Chapter 1: Introduction
1.1 My Perspective
1.2 Theoretical Framework
1.3 The Context of Police Training in BC
1.4 Summary of Recruit Training Delivery Models
1.5 Research Question - Program Implementation and Evaluation
1.6 Summary

Chapter 2: Literature Review
2.1 Research on Police Training
2.2 Police Competencies in Canada
2.3 Competency-Based Education
2.3.1 Overview of competency-based education
2.3.2 Defining Competency-Based Education Terminology
2.3.3 Determining Competencies
2.3.4 Elements of Competency-Based Learning
2.4 The Learning Process
2.5 Assessment of Competencies
2.6 Criticisms of Competency-Based Education
2.7 Summary

Chapter 3: Program Description
3.1.1 Recruit Training Program Structure Prior to Delivery Model Changes
3.2 Design and Development
3.3 Proposed Recruit Training Program Structure Delivery Model Changes
3.4 The New Program Structure
3.4.1 Block I
3.4.1.1 Weekly pre-reading and quizzes
3.4.1.2 Classroom case-based application
3.4.1.3 Directed study time
3.4.1.4 Practical scenarios
3.4.1.5 Practical Scenario Acting
3.4.1.6 Practical Scenario Self-Assessment and Report Writing
3.4.1.7 COPS Days
3.4.1.8 Skills Development – Use of Force, Firearms, and Driving
3.4.1.9 Assessment
3.4.2 Block II
3.4.3 Block III
3.4.3.1 Pre-reading and quizzes
3.4.3.2 Teaching sims
3.4.3.3 Longitudinal Cases
3.4.3.4 Advanced Operational Policing Skills (AOPS) days
3.4.3.5 Mentoring Junior Recruits
3.4.3.6 Assessment
3.4.4 Block IV
3.5 Development
3.6 Implementation
3.7 Delivered Curriculum
3.7.1 Class 152 Case Studies
3.7.2 Class 153 Case Studies
3.7.3 Practical Scenarios
3.7.4 Mentoring
3.7.5 Directed Study
3.8 Summary

Chapter 4: Methodology
4.1 Research Design
4.1.1 Program Evaluation Framework
4.1.2 Evaluation Design and Methodology
4.1.2.1 Survey design
4.1.2.2 Survey Administration and Timeline
4.1.2.3 Statistical Analysis
4.1.2.4 Qualitative Data Analysis
4.2 Project Narrative
4.2.1 Changes to Project Design
4.3 Summary

Chapter 5: Results
5.1 Descriptive Survey Results
5.1.1 Lecture-based delivery model: Class 151
5.1.1.1 Demographic Characteristics of 151 FTOs
5.1.2 Competency-based delivery model: Class 152
5.1.2.1 Demographic Characteristics of 152 FTOs
5.1.3 Competency-based delivery model: Class 153
5.1.3.1 Demographic Characteristics of 153 FTOs
5.1.4 Competency-based delivery model: Exam Assessors
5.2 Quantitative Survey Analysis
5.2.1 Differences in perception before and after Block II experience
5.2.2 Comparison within classes
5.2.2.1 Lecture-based delivery model
5.2.2.1.1 Recruit characteristics
5.2.2.1.2 FTO characteristics
5.2.2.1.3 Recruit Characteristics on FTO Responses
5.2.2.2 Competency-based delivery model
5.2.2.2.1 Recruit characteristics
5.2.2.2.2 FTO characteristics
5.2.2.2.3 Recruit characteristics on FTO responses
5.2.3 Comparison across classes
5.2.3.1 Global comparison across classes
5.2.3.2 Adaptability
5.2.3.3 Ethical Accountability
5.2.3.4 Interactive Communication
5.2.3.5 Organizational Awareness
5.2.3.6 Problem Solving
5.2.3.7 Risk Management
5.2.3.8 Stress Tolerance
5.2.3.9 Teamwork
5.2.3.10 Written Skills
5.2.4 Analysis of Recruit Responses Compared with FTO Responses
5.2.4.1 Recruit and FTO Responses – Lecture-based delivery model
5.2.4.2 Recruit and FTO Responses – Competency-based delivery model
5.2.5 Analysis of Assessor Responses
5.2.6 Qualitative Analysis of Survey Comments
5.2.6.1 Lecture-based delivery model: Recruit Survey 1
5.2.6.2 Lecture-based delivery model: Recruit Survey 2
5.2.6.3 Lecture-based delivery model: FTO Survey
5.2.6.4 Competency-based delivery model: Recruit Survey 1
5.2.6.4.1 Class 152 – Recruit Survey 1
5.2.6.4.2 Class 153 – Recruit Survey 1
5.2.6.5 Competency-based delivery model: Recruit Survey 2
5.2.6.5.1 Class 152 – Recruit Survey 2
5.2.6.5.2 Class 153 – Recruit Survey 2
5.2.6.6 Competency-based delivery model: FTO Survey
5.2.6.6.1 Class 152 – FTO Survey
5.2.6.6.2 Class 153 – FTO Survey
5.2.6.7 Competency-based delivery model: Assessor Survey
5.3 Focus Group Analysis
5.4 Summary

Chapter 6: Discussion
6.1 Survey Results
6.1.1 Recruit Ability and Preparedness
6.1.2 Course Content and Structure
6.2 Faculty Development
6.3 Organizational Cynicism and Organizational Change
6.4 Changes Following Class 152
6.5 Summary

Chapter 7: Conclusion
7.1 Lessons Learned
7.2 Limitations
7.3 Recommendations
7.3.1 Designing a Major Curriculum Change
7.3.2 Implementing Competency-Based Education
7.3.3 Conducting Program Evaluation within a Major Curriculum Change
7.3.4 Recommendations for Practitioner Research
7.4 Conclusion

References

Appendices
Appendix A - Template Schedule for Competency-Based Delivery Model of Recruit Training
A.1 Block I Template Schedule
A.2 Block III Template Schedule
Appendix B - Surveys
B.1 Recruit Survey
B.2 FTO Survey
B.3 Assessor Survey
Appendix C - Consistency Tables: Comparison Within Classes
C.1 Lecture-based delivery model - Recruit characteristics
C.2 Lecture-based delivery model - FTO characteristics
C.3 Lecture-based delivery model - Recruit Characteristics on FTO Responses
C.4 Competency-based delivery model - Recruit characteristics
C.5 Competency-based delivery model - FTO characteristics
C.6 Competency-based delivery model - Recruit characteristics on FTO responses

List of Tables

Table 2-1 Police Sector Council core Constable competencies with proficiency levels 1 and 2 (Police Sector Council, 2011)
Table 2-2 Summarization of the stages of adult skill development (Dreyfus, 2004) related to competency in medical practitioners (Carraccio et al., 2005) and the level of supervision required (ten Cate and Scheele, 2007)
Table 3-1 Comparison of program elements 10 years before the program change proposal (2005), before change implementation (2015), and in the new delivery model (2016)
Table 3-2 Expected progression through proficiency levels 1 and 2 in each of the core Constable competencies
Table 4-1 Summary of the Kirkpatrick model of program evaluation and modifications from Alliger et al. (1997) and Wang and Wilcox (2006) that influenced the program evaluation design of this study
Table 4-2 Summary of the program evaluation model from Table 4-1 with data sources from the project design
Table 5-1 Class 151 demographic characteristics and survey response rates
Table 5-2 Education levels of Class 151 prior to police academy
Table 5-3 Previous policing experience of Class 151 prior to police academy
Table 5-4 Demographic characteristics for FTO respondents for Class 151
Table 5-5 Characteristics of recruits trained by FTO respondents in Class 151
Table 5-6 Class 152 demographic characteristics and survey response rates
Table 5-7 Education levels of Class 152 respondents prior to police academy
Table 5-8 Previous policing experience of Class 152 prior to police academy
Table 5-9 Demographic characteristics for FTO respondents for Class 152
Table 5-10 Characteristics of recruits trained by FTO respondents in Class 152
Table 5-11 Class 153 demographic characteristics and survey response rates
Table 5-12 Education levels of Class 153 respondents prior to police academy
Table 5-13 Previous policing experience of Class 153 prior to police academy
Table 5-14 Demographic characteristics for FTO respondents for Class 153
Table 5-15 Characteristics of recruits trained by FTO respondents in Class 153
Table 5-16 Demographic characteristics of competency-based exam assessors
Table 5-17 Mann Whitney U test results comparing distribution of responses to Recruit Survey 1 and Recruit Survey 2 between Class 152 and 153
Table 5-18 Mann Whitney U test results comparing distribution of responses to the FTO survey and the difference between Recruit Survey 1 and the FTO survey between Class 152 and 153
Table 5-19 Differences between recruit perceptions before and after Block II training experience
Table 5-20 Global mean ratings for overall ability and overall preparation from Recruit Survey 1 and FTO responses clustered across training delivery methods
Table 5-21 Mann Whitney U Test of overall ability and preparedness from Recruit Survey 1 and FTO survey, grouped across training type
Table 5-22 Mean ratings for ability and preparation in the adaptability competency from Recruit Survey 1 and FTO responses clustered across training delivery methods
Table 5-23 Mann Whitney U Test of overall ability and preparedness from Recruit Survey 1 and FTO survey, grouped across training type
Table 5-24 Cross-tabulation report from Recruit Survey 1 for ability in the adaptability competency area
Table 5-25 Mean ratings for ability and preparation in the ethics competency from Recruit Survey 1 and FTO responses clustered across training delivery methods
Table 5-26 Mann Whitney U Test of overall ability and preparedness from Recruit Survey 1 and FTO survey, grouped across training type
Table 5-27 Cross-tabulation report from Recruit Survey 1 for ability in the ethics competency area
Table 5-28 Mean ratings for ability and preparation in the communication competency from Recruit Survey 1 and FTO responses clustered across training delivery methods
Table 5-29 Mann Whitney U Test of overall ability and preparedness from Recruit Survey 1 and FTO survey, grouped across training type
Table 5-30 Mean ratings for ability and preparation in the organizational awareness competency from Recruit Survey 1 and FTO responses clustered across training delivery methods
Table 5-31 Mann Whitney U Test of ability and preparedness for organizational awareness competency from Recruit Survey 1 and FTO survey, grouped across training type
Table 5-32 Cross-tabulation report from Recruit Survey 1 for ability (top) and preparedness (bottom) in the organizational awareness competency area
Table 5-33 Mean ratings for ability and preparation in the problem solving competency from Recruit Survey 1 and FTO responses clustered across training delivery methods
Table 5-34 Mann Whitney U Test of ability and preparedness for problem solving competency from Recruit Survey 1 and FTO survey, grouped across training type
Table 5-35 Cross-tabulation report from Recruit Survey 1 for ability in the problem solving competency area
Table 5-36 Mean ratings for ability and preparation in the risk management competency from Recruit Survey 1 and FTO responses clustered across training delivery methods
Table 5-37 Mann Whitney U Test of ability and preparedness for risk management competency from Recruit Survey 1 and FTO survey, grouped across training type
Table 5-38 Cross-tabulation report from Recruit Survey 1 for ability in the risk management competency area
Table 5-39 Mean ratings for ability and preparation in the stress tolerance competency from Recruit Survey 1 and FTO responses clustered across training delivery methods
Table 5-40 Mann Whitney U Test of ability and preparedness for stress tolerance competency from Recruit Survey 1 and FTO survey, grouped across training type
Table 5-41 Cross-tabulation report from Recruit Survey 1 for ability in the stress tolerance competency area
Table 5-42 Mean ratings for ability and preparation in the teamwork competency from Recruit Survey 1 and FTO responses clustered across training delivery methods
Table 5-43 Mann Whitney U Test of ability and preparedness for teamwork competency from Recruit Survey 1 and FTO survey, grouped across training type
Table 5-44 Cross-tabulation report from Recruit Survey 1 for ability in the teamwork competency area
Table 5-45 Mean ratings for ability and preparation in the written skills competency from Recruit Survey 1 and FTO responses clustered across training delivery
methods ..........195 xviii  Table 5-46 Mann Whitney U Test of ability and preparedness for written skills competency from Recruit Survey 1 and FTO survey, grouped across training type ...........................197 Table 5-47  Cross-tabulation report from Recruit Survey 1 for ability in the written skills competency area...............................................................................................................197 Table 5-48  Mann Whitney U test results for lecture-based delivery model for ability and preparedness overall and for each of the competencies grouped across recruit/FTO responses ..........................................................................................................................201 Table 5-49  Cross-tabulation analysis of recruit ability in the risk management (top) and stress tolerance (bottom) competency areas grouped by recruit/FTO responses .............202 Table 5-50  Mann Whitney U test results for competency-based delivery model for ability and preparedness overall and for each of the competencies grouped across recruit/FTO responses ..........................................................................................................................205 Table 5-51  Cross-tabulation analysis of recruit preparedness in the adaptability (top) and interactive communication (bottom) competency areas grouped by recruit/FTO responses..........................................................................................................................................205  Table 5-52 Summary of mean and standard deviation of assessors’ ranking of recruits in the competency-based delivery model ...................................................................................207  Table 5-53  Kruskal-Wallis test results for ability and preparedness overall and in each of the competencies grouped across recruit, FTO, or assessor ..................................................209 Table 5-54  Recruit comments 
and coding from Class 151, lecture-based delivery model, Survey 1 ...........................................................................................................................211  Table 5-55  Recruit comments and coding from Class 151, lecture-based delivery model, Survey 2 ...........................................................................................................................213  xix  Table 5-56  Recruit comments and coding from Class 151, lecture-based delivery model, FTO survey ......................................................................................................................215  Table 5-57  Recruit comments and coding from Class 152, competency-based delivery model, Survey 1 ...............................................................................................................217 Table 5-58  Recruit comments and coding from Class 153, competency-based delivery model, Survey 1 ...............................................................................................................219 Table 5-59  Recruit comments and coding from Class 152, competency-based delivery model, Survey 2 ...............................................................................................................221 Table 5-60  FTO comments and coding from Class 152, competency-based delivery model, FTO survey ......................................................................................................................224  Table 5-61  Recruit comments and coding from Class 153, competency-based delivery model, FTO survey ..........................................................................................................226  Table 6-1  Summary of changes made to the recruit training program since Classes 152 and 153....................................................................................................................................254 Table C- 1 Mean values and Mann-Whitney U test results 
grouped across recruit genders .374 Table C- 2  Mean and Kruskal-Wallis Test values grouped by recruit age range .................375 Table C- 3  Mean and Kruskal-Wallis Test values grouped across recruit post-secondary education level .................................................................................................................376 Table C- 4  Mean values and Mann-Whitney U Test values grouped by recruit previous policing experience ..........................................................................................................377 Table C- 5  Mean and Mann-Whitney U Test values grouped by FTO gender.....................378 Table C- 6  Mean and Kruskal Wallis Test values grouped across FTO years of service .....379 xx  Table C- 7  Mean values and Kruskal-Wallis Test values grouped across FTO years as FTO..........................................................................................................................................380  Table C- 8  Mean and Kruskal-Wallis Test values grouped across FTO number of recruits trained ..............................................................................................................................381 Table C- 9  Mean and Mann-Whitney U Test values for FTO responses grouped by recruit gender ...............................................................................................................................382  Table C- 10  Mean and Kruskal-Wallis Test values for FTO responses grouped by recruit age..........................................................................................................................................383  Table C- 11  Mean and Kruskal-Wallis Test values for FTO responses grouped by recruit post-secondary education .................................................................................................384  Table C- 12  Mean and Mann-Whitney U Test values for FTO responses grouped by recruit previous police experience 
...............................................................................................385  Table C- 13  Mean and Mann-Whitney U Test values grouped across recruit gender ..........386 Table C- 14  Mean and Kruskal-Wallis test values grouped across recruit age category .....387 Table C- 15  Mean and Kruskal-Wallis Test values grouped across recruit post-secondary education ..........................................................................................................................388  Table C- 16  Mean and Kruskal-Wallis Test values grouped across recruit previous policing experience ........................................................................................................................389  Table C- 17  Mean and Mann-Whitney U Test values grouped across FTO gender .............390 Table C- 18  Mean and Kruskal-Wallis Test values grouped across FTO age range ............391 Table C- 19  Mean and Kruskal-Wallis Test values grouped across FTO years of service ..392 Table C- 20  Mean and Kruskal-Wallis Test values grouped across FTO years as field trainer..........................................................................................................................................393  xxi  Table C- 21  Mean and Kruskal-Wallis Test values grouped across FTO number of recruits trained ..............................................................................................................................394 Table C- 22  Mean and Mann-Whitney U Test FTO responses grouped across recruit gender..........................................................................................................................................395  Table C- 23  Mean and Kruskal-Wallis test values FTO responses grouped across recruit age category ............................................................................................................................396 Table C- 24  Cross-tabulation report of FTO responses grouped 
across recruit age category..........................................................................................................................................397  Table C- 25  Mean and Kruskal-Wallis Test values FTO responses grouped across recruit post-secondary education .................................................................................................398  Table C- 26  Mean and Kruskal-Wallis Test values FTO responses grouped across recruit previous policing experience ...........................................................................................399  xxii  List of Figures Figure 1-1  Progression through police recruit training in British Columbia ..........................14 Figure 2-1  Overlay of levels of learning, assessment tools, and ability assessed (Shumway and Harden, 2003) with concepts of reflective practice for learning (Creuss et al., 2005)............................................................................................................................................49  Figure 4-1 Project timeline for recruit survey administration for classes 151 (pre-intervention, lecture-based), 152 and 153 (post-intervention, competency-based) ..............................114 Figure 5-1 Global mean ratings for overall ability (blue) and overall preparation (red) from Recruit Survey 1 clustered across training delivery methods ..........................................151 Figure 5-2 Global mean ratings for overall ability (blue) and overall preparation (red) from FTO survey clustered across training delivery methods ..................................................152 Figure 5-3 Mean ratings for ability in the Adaptability competency area from Recruit Survey 1 (blue) and the FTO survey (red) clustered across delivery method ..............................155 Figure 5-4 Mean ratings for preparation in the Adaptability competency area from Recruit Survey 1 (blue) and the FTO survey (red) clustered across training method 
..................155 Figure 5-5 Mean ratings for ability in the Ethics competency area from Recruit Survey 1 (blue) and the FTO survey (red) clustered across delivery method .................................159 Figure 5-6 Mean ratings for preparation in the Ethics competency area from Recruit Survey 1 (blue) and the FTO survey (red) clustered across training method .................................160 Figure 5-7 Mean ratings for ability in the communication competency area from Recruit Survey 1 (blue) and the FTO survey (red) clustered across delivery method .................165 Figure 5-8 Mean ratings for preparation in the communication competency area from Recruit Survey 1 (blue) and the FTO survey (red) clustered across training method ..................165 xxiii  Figure 5-9  Mean ratings for ability in the organizational awareness competency area from Recruit Survey 1 (blue) and the FTO survey (red) clustered across delivery method .....168 Figure 5-10 Mean ratings for preparation in the organizational awareness competency area from Recruit Survey 1 (blue) and the FTO survey (red) clustered across training method..........................................................................................................................................169  Figure 5-11 Mean ratings for ability in the problem solving competency area from Recruit Survey 1 (blue) and the FTO survey (red) clustered across delivery method .................174 Figure 5-12 Mean ratings for preparation in the problem solving competency area from Recruit Survey 1 (blue) and the FTO survey (red) clustered across training method .....174 Figure 5-13 Mean ratings for ability in the risk management competency area from Recruit Survey 1 (blue) and the FTO survey (red) clustered across delivery method .................179 Figure 5-14  Mean ratings for preparation in the risk management competency area from Recruit Survey 1 (blue) and the FTO survey (red) clustered across training method .....179 
Figure 5-15  Mean ratings for ability in the stress tolerance competency area from Recruit Survey 1 (blue) and the FTO survey (red) clustered across delivery method ..... 184
Figure 5-16  Mean ratings for preparation in the stress tolerance competency area from Recruit Survey 1 (blue) and the FTO survey (red) clustered across training method ..... 184
Figure 5-17  Mean ratings for ability in the teamwork competency area from Recruit Survey 1 (blue) and the FTO survey (red) clustered across delivery method ..... 189
Figure 5-18  Mean ratings for preparation in the teamwork competency area from Recruit Survey 1 (blue) and the FTO survey (red) clustered across training method ..... 189
Figure 5-19  Mean ratings for ability in the written skills competency area from Recruit Survey 1 (blue) and the FTO survey (red) clustered across delivery method ..... 194
Figure 5-20  Mean ratings for preparation in the written skills competency area from Recruit Survey 1 (blue) and the FTO survey (red) clustered across training method ..... 194

List of Abbreviations

AC      Assessment Centre
ADDIE   Assess, Design, Develop, Implement, Evaluate
BC      British Columbia
BCAMCP  British Columbia Association of Municipal Chiefs of Police
CBL     Case-based learning
CBRN    Chemical, Biological, Radiological, and Nuclear defense
CID     Crisis Intervention and De-Escalation
CPIC    Canadian Police Information Centre
CTS     Course Training Standard
DACUM   Develop A CurriculUM
EdD     Educational Doctorate (Degree)
FST     Field Sobriety Test
FTO     Field Training Officer
HR      Human Resources
IRD     Immediate Rapid Deployment
IRP     Immediate Roadside Prohibition
JIBC    Justice Institute of British Columbia
K-12    Kindergarten to Grade 12
LAPD    Los Angeles Police Department
MDT     Mobile Data Terminal
MHA     Mental Health Act
OC      Oleoresin capsicum spray (pepper spray)
PBL     Problem Based Learning
PBLE    Problem Based Learning Exercise
POPAT   Police Officers Physical Abilities Test
PRIME   Police Records Information Management Environment
PSB     Policing & Security Branch
PSC     Police Sector Council
SBORT   Subject Behaviour Officer Response Training
SME     Subject Matter Expert
SoTL    Scholarship of Teaching and Learning
STEM    Science, Technology, Engineering, and Math
UBC     University of British Columbia
UoF     Use of Force

Acknowledgements

I would like to extend my thanks to the following people:

My supervisor, Dr. Donald Fisher, for taking me on as a stranded EdD student, for guidance with the freedom to do my own thing, and for co-teaching the best class of my EdD program.

My committee members, Dr. Tom Sork and Mr. Steve Schnitzer, for being a part of this journey with me.  Steve, in particular, for bearing the brunt of the political blows and for not wavering in his support for the new model.

Mike Massine, for being my only support and talking me down off a ledge more times than I can count during the development of the new curriculum.

Steve Hyde, for stepping up during implementation and teaching more than is humanly possible to ensure things went as smoothly as possible.

Evan Hilchey, my dear friend who I met on our first day of the EdD program, for the never-ending support and commiseration, and for all our backpacking adventures.

My family, both here and gone, including my dog Pickles, for continuing to help me keep things in perspective.

Dedication

For my dad, Joseph Paul Houlahan (March 25, 1945 - September 5, 2010), whose memory at times was the only thing that kept me going in this program.  I love you always daddy.

Man In The Arena (AKA Daring Greatly)

"...It is not the critic who counts; not the man who points out how the strong man stumbles, or where the doer of deeds could have done them better.
The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood; who strives valiantly; who errs, who comes short again and again, because there is no effort without error and shortcoming; but who does actually strive to do the deeds; who knows great enthusiasms, the great devotions; who spends himself in a worthy cause; who at the best knows in the end the triumph of high achievement, and who at the worst, if he fails, at least fails while daring greatly, so that his place shall never be with those cold and timid souls who neither know victory nor defeat." - Theodore Roosevelt, 1910

Chapter 1: Introduction

The focus of this EdD thesis is the development, implementation, and evaluation of a competency-based training model for police recruits in British Columbia.  This evaluation study used quantitative methods to analyze survey responses from police recruits and field training officers (FTOs) and to compare recruit ability and preparedness for field training under the old lecture-based delivery model and the new competency-based delivery model.

The EdD program is a doctoral program intended for professionals in the field of education who are employed while completing the program.  The program emphasizes moving from practice to theory and back to practice.  Dissertation topics in this program are required to relate to the candidate's job and to contribute to their professional field of practice.  When I began working at the JIBC Police Academy, my position was Curriculum Developer; it has since been reclassified to Program Manager in recognition of the higher level of work I was undertaking.  In this role I reviewed the existing recruit training curriculum and delivery model, researched and developed a new proposed competency-based delivery model, developed the curriculum materials to implement the new program, oversaw the implementation of the changes, and carried out the evaluation described in this thesis.
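The comparisons of recruit and FTO survey ratings in this study rely largely on the Mann-Whitney U test, a non-parametric test suited to ordinal (Likert-type) data.  As a minimal illustration of where the U statistic comes from, the sketch below computes it in plain Python; the ratings shown are hypothetical and are not data from this study.

```python
def mann_whitney_u(x, y):
    """Mann-Whitney U statistic for sample x against sample y (midranks for ties)."""
    pooled = sorted((value, idx) for idx, value in enumerate(x + y))
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(pooled):
        j = i
        while j + 1 < len(pooled) and pooled[j + 1][0] == pooled[i][0]:
            j += 1                      # extend over a run of tied values
        midrank = (i + j) / 2 + 1       # average rank assigned to the tied run
        for k in range(i, j + 1):
            ranks[pooled[k][1]] = midrank
        i = j + 1
    rank_sum_x = sum(ranks[: len(x)])   # rank sum of the first sample
    return rank_sum_x - len(x) * (len(x) + 1) / 2

# Hypothetical 5-point ratings of overall ability under two delivery models:
lecture_based = [3, 3, 2, 4, 3, 3, 2, 4]
competency_based = [4, 5, 4, 3, 4, 5, 4, 4]
print(mann_whitney_u(lecture_based, competency_based))  # → 9.0
```

In practice, analyses like those listed above would be run in statistical software (for example, `scipy.stats.mannwhitneyu`), which also supplies p-values; note that software conventions differ on whether U for the first sample or min(U1, U2) is reported.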
This chapter will expand on my perspective in approaching the program evaluation, to situate myself within the research, and will describe the theoretical framework I used to approach the development and evaluation of the program.  I will provide an overview of policing in BC to give context for the role of the BC Police Academy within the policing community and outline the structure of recruit training.  I will briefly summarize the delivery models for recruit training before and after the implementation of the changes and outline the key questions addressed in this thesis.  The chapter will conclude by summarizing how each chapter in this document contributes to the overall program evaluation project.

1.1 My Perspective

I began my role as the Curriculum Developer for the BC Police Academy Recruit Training program, located at the Justice Institute of British Columbia (JIBC), in April 2013.  Prior to this position, I had worked for eight years as the Problem Based Learning (PBL) Program Manager in the Medical Undergraduate program at the University of British Columbia (UBC).

In British Columbia, policing is governed by the Policing & Security Branch (PSB) of the Provincial Government.  All municipal, transit, and tribal police recruits in the province are trained at the Police Academy.  The Police Academy's annual operating budget is set out in a letter of agreement between PSB and the Police Academy that outlines key deliverables for the fiscal year in return for funding the academy.  My hiring and mandate are both derived from the letter of agreement between the Policing & Security Branch and the Police Academy.
The first responsibility outlined in my job description is to “design and develop defensible competency-based curriculum that is aligned with Police Academy and Institute strategic directions and meets applicable PSB standards.”  To this end, the primary focus of my work has been mapping the current Recruit Training curriculum to the Police Sector Council National Framework of Constable Competencies, developing the proposal for change to align the program delivery with competency-based education principles, working with subject matter experts to design the educational materials and lesson plans, overseeing the implementation of the new program, and evaluating the program.  This program is unique in the Canadian policing context and has garnered much attention from across the country.

This job was my first experience with the policing community.  I spent much of my first few months observing recruit training classes to learn what and how recruits were taught before I began to map the curriculum to the PSC National Framework of Constable Competencies.  During this time I also began to read the available literature on police training.  One of the things that struck me was the relative paucity of peer-reviewed literature on police training or police education.  What little information was available was typically limited to in-service training rather than recruit training.  The majority of published literature is on police and their interactions with the community, typically from the perspective of the community.  Little research has been published for police themselves to use to enhance their training, particularly at the recruit level.  Another revelation was the many similarities I began to uncover between medical education, the field where I had previously worked, and police training.
Both fields are concerned with developing communication skills in their learners, both are centred around the development of physical and technical skills, and both have national frameworks of competencies around which to structure their curricula.  Medical education has been implementing competency-based education for decades, whereas the concept is relatively new to the policing world.  As such, I have relied heavily on the literature from medical education, of which there is an abundance, as I mapped and developed the new delivery model for the Recruit Training program.

As the PBL Program Manager in the medical program at UBC, I was immersed in problem-based and case-based learning methodology.  I worked with Subject Matter Experts (SMEs) to develop and write case material, trained and provided support and guidance for PBL tutors (facilitators), and tutored PBL groups myself.  The goal of using PBL as a methodology is twofold: the foundational content material is learned in a format that relates to real-life application, making storage and retrieval easier and increasing motivation to learn; and the small-group, discussion-based format builds communication, facilitation, and teamwork skills in the learners.  I left the medical program at a time when there was a shift in educational philosophy, moving away from the PBL format, with its associated emphasis on communication and facilitation skills, towards more of a case-based learning (CBL) format that focuses primarily on the content material.  At the time, and still to this day, I believed that this was the wrong move for medical students' education.  I was surprised when, after observing many classes and the teaching methodology used in the Recruit Training program, I began to feel that CBL might be an appropriate format for the curriculum for police recruits.  Communication skills are extremely important for a career in policing, arguably as much if not more so than for a career in medicine.
I will expand on the differences between PBL and CBL, and my reasoning behind the chosen methodology, later in this section.

While observing recruit training classes, I generated a curriculum map of the existing curriculum against the PSC Constable Competencies.  This analysis was done at a high, discipline outcome-based level and required a rewriting of the discipline outcomes for the program.  The existing discipline ‘objectives’ were not reflective of the level of achievement expected of recruits, as they focused mainly on recall ability instead of skill acquisition.  I consulted with the discipline instructors and rewrote these objectives into discipline outcomes that provide a ‘bigger picture’ overview of what is expected of recruits.  I then proceeded to map the constable competencies to the new discipline outcomes and to use the existing PSC Constable task map as a tool to validate the competency map.  The result of this analysis was confirmation that the Recruit Training program is indeed teaching the necessary content so that recruits meet the minimum standards upon graduation.  Through the mapping analysis, however, it became apparent to me that the recruit training curriculum could be delivered in a way that would more effectively build and assess competency in the recruits.  In keeping with the mandate from PSB, it was clear that the program structure needed to be redesigned to fit a competency-based model centred around the National Framework of Constable Competencies.  A case-based delivery format, centred around integration of concepts, fits well with the principles of competency-based education and is the foundation for the new delivery model for recruit training.

During this analysis of the curriculum, I observed one particular class in which three recruits were struggling to meet the expectations of the program.
Throughout their training they were offered very little feedback on their progress or suggestions on how to improve.  The recruits were then told, very close to the end of their training, that the instructors had great concerns about their ability to successfully fulfill the job requirements.  In one case, a departmental representative had actually travelled from Vancouver Island to the Police Academy to take steps to terminate the recruit's employment.  When the representative arrived and was informed that the recruit had not received any feedback on his performance during his Block III training, the planned job action could not take place.  This lack of feedback generated frustration for the recruits when they were finally told of the instructors' concerns, for the program instructors, and for the recruits' hiring departments.  Many instructors complained about the lack of time in the program for them to work with recruits who were struggling.  It also seemed, surprisingly, that the instructors were not equipped with the knowledge or skill to deliver formative feedback to recruits or to help the recruits set goals for improvement.  This lack of time and capacity further cemented the notion that the recruit program needed to move to a competency-based model to ensure dedicated time in the curriculum for recruits to build their abilities, particularly in areas where they were struggling.

The new delivery model for a case-based and competency-based Recruit Training program evolved from the class observations, literature reviews, curriculum mapping, exit interviews with graduates, and discussions with instructors.  This change required a wholesale shift in the educational philosophy of the Police Academy.  It was not sufficient to update a few lesson plans and claim that the program was meeting the required standards.
After the initial proposal was accepted by PSB, I travelled with the Police Academy Director to each of the police departments to provide an overview of the proposed changes.  This outreach was part of our communication strategy for change management, to ensure that all departments were aware of the upcoming changes.  The proposed changes to the Recruit Training program were universally well received.  The two concerns raised by departments in the consultations were the extension of Block II training and the timing of starting a class in January.  The extension of Block II was a concern because of the potential strain on departmental FTO resources and the financial implications of paying recruit salaries for a longer training period.  The timing of the January class starts was a concern because of the proximity to the end of the fiscal year.  There were no objections to the competency-based approach explained in these meetings.

The actual development of the curriculum, lesson plans, and associated learning materials took place over the next two and a half years.  During this time, the Police Academy was still training recruits under the old delivery model, and no additional instructional staff were brought on to help with development.  When it became apparent that additional support was needed, a contract instructional designer was hired to help revise the manual readings for the topics.  That position is now a full-time staff member.

The development of the materials was an exceptionally challenging process.  I was leading a major change in a police training environment as a female civilian with no support.  I used strategies that I believed would help with change management, involving the instructors in all aspects of the planning and development so that they felt ownership of the material.
Often, however, these sessions degenerated into an interrogation, with me at the front of the room defending the changes to several instructors who were quite vocally opposed to the concept.  These meetings also included my supervisor at the time, who was supposedly in favour of the changes but was conspicuously silent throughout.  At times these meetings were completely unproductive.  I implemented a strategy of breaking the larger group of instructors into smaller groups to isolate the vocal opponents and accomplish some of the development goals.  This time of development was perhaps the most challenging time of my career.  Had I not been absolutely convinced that this change was needed to bring police recruit training in BC to current standards, I would have folded under the constant harassment and bullying I was subjected to.  It is exceptionally difficult for me to remove these experiences from my analysis of the program, even though the work climate has now changed considerably.  Despite recognizing the change of climate, I find that I still have emotional scars from the process that sometimes make it hard to work within the context of my daily responsibilities.  And while the climate in our office at the Police Academy is significantly improved, there is much work remaining to be done both internally and with the departments.  This experience has certainly been significantly more challenging than I anticipated and I have learned a great deal about change management.  It is within this overall context that I complete my thesis.  The research for my EdD focused on the implementation and evaluation of the effectiveness of these changes.

1.2 Theoretical Framework

In approaching the design, implementation, and evaluation of the Police Recruit training program, I drew on the theoretical framework of constructivism.
This is perhaps not surprising, given my background in problem-based learning, as PBL is situated within constructivism (Slavich & Zimbardo, 2012; Stentoft, 2017).  The central tenets of constructivism are that knowledge is actively constructed by learners based on their experiences and that context is an indispensable part of the learning process (Biggs, 1996; Narayan, Rodriguez, Araujo, Shaqlaih, & Moss, 2013; Stentoft, 2017; Thayer-Bacon, 2013).  Central to learning is that students are provided with authentic experiences that represent the complexity of real life events and allow for student-centred learning (Narayan et al., 2013).  Social exchange is an essential part of the learning experience so learners can test their understanding against that of others (Narayan et al., 2013).  Allowing learners to interact with the material in different formats or from different perspectives will increase their understanding (Narayan et al., 2013).  Finally, through reflection, the learner develops a self-awareness of their own thought process and understanding (Narayan et al., 2013).  The instructor plays a complex role in providing these authentic learning experiences and facilitating as the learners move through the learning process (Biggs, 1996; Narayan et al., 2013; Stentoft, 2017).

Further, within the constructivist perspective, the concept of transformative learning (Alfred, Cherrstrom, & Friday, 2013; Slavich & Zimbardo, 2012) informed my theoretical approach, particularly the description of transformative teaching offered by Slavich and Zimbardo (2012).  Transformative learning involves a deep shift in perspective created by cognitive dissonance, or a “disorienting dilemma” (Alfred et al., 2013; Cranton, 2011; Slavich & Zimbardo, 2012), often caused by a major life change (Alfred et al., 2013).
This “disorienting dilemma” triggers an examination of previously existing beliefs and perspectives that involves critical reflection, exploring new roles and relationships, acquiring new knowledge and skills, achieving competence in these new roles and, ultimately, integrating these new perspectives, roles, and actions into daily life.  When this integration of a changed action happens, transformative learning has occurred (Alfred et al., 2013; Cranton, 2011; Slavich & Zimbardo, 2012).

Biggs (1996) outlines the concept of constructive alignment, which he defines as the combination of constructivism with instructional design practices whereby the foundational beliefs of constructivism are incorporated into all aspects of the designed program: from objectives to learning activities, to assessment and reporting.  Biggs asserts that “attempts to enhance teaching need to address the system as a whole, not simply add ‘good’ components, such as new curriculum or methods” (p. 350).  Similarly, Slavich and Zimbardo (2012) advocate for a whole system approach to transformative teaching, which they define as an “expressed or unexpressed goal to increase students’ mastery of key course concepts while transforming their learning-related attitudes, values, beliefs, and skills” (p. 576).  They identify three overarching principles of transformational teaching: facilitating students’ mastery of core concepts, facilitating skill development during learning, and promoting reflection to develop attitudes, values, and beliefs that match positive expectations in the chosen field (Slavich & Zimbardo, 2012).  The values and beliefs espoused by the constructivist framework, and transformative learning therein, were a guide for the development of the new curriculum delivery model for recruit training.
Great care was taken to ensure that real-life, complex learning activities were the backbone of the curriculum, supported by opportunities for self-examination through guided critical reflection, and individualized support and formative feedback through a mentoring system.

What follows is a contextualization of policing in British Columbia as well as research into police training.  Then a general overview of the literature provides context to the program redesign.  Finally, the chapter concludes with an overview of the program design as well as a summary of my research question for evaluating the program change.

1.3 The Context of Police Training in BC

Policing in Canada has three different levels: federal, provincial, and municipal.  The federal police are the Royal Canadian Mounted Police (RCMP).  Trainees in the RCMP are called cadets.  All cadets have their initial training at a central location in Regina, known as ‘Depot’.  From here they are deployed to postings across the nation.  Provincially, each province is different.  Some provinces, such as Ontario and Québec, have their own provincial police force (Ontario Provincial Police and Sûreté du Québec, respectively) while others, such as British Columbia, contract with the RCMP to provide provincial policing services.

In British Columbia, municipal regions either have their own municipal police force or contract the RCMP to provide this service.  In addition to the municipal police forces, residents of the Lower Mainland are also served by the Transit Police Department and members of the Stl’atl’imx First Nation are served by the Stl’atl’imx Tribal Police.  All municipal, transit, and tribal police in British Columbia are trained in the Recruit Training program of the Police Academy.  The Police Academy is physically housed at the Justice Institute of British Columbia (JIBC) in New Westminster.
The BC municipalities that have their own police services and train at the JI are:

- Victoria
- Oak Bay
- Saanich
- Central Saanich
- West Vancouver
- Vancouver
- Port Moody
- New Westminster
- Delta
- Abbotsford
- Nelson
- Transit Police
- Stl’atl’imx Tribal Police

Policing within a province (provincial and municipal) falls under the jurisdiction of the Provincial Government for that province.  Because of this lack of centralized governance, there is not one standard method of training police in Canada.  Recruit/cadet training programs vary in length, residency status, job status (pre-hire or post-hire) of trainees, and even the skills they are able to train.  These discrepancies make it difficult to compare training programs in Canada and also make it necessary to outline the specific conditions of a given training program when engaging in discussion or beginning an evaluation study.

In the British Columbia context, municipal police recruits are hired by their home departments and sworn in as Recruit Constables when they enter training at the Police Academy.  This means that, as recruits, they are governed by the BC Police Act and can be held accountable under this act.  This differs from municipalities in Ontario, where recruits are also hired before they attend training at the Ontario Police College (OPC) but are not sworn in until after they complete their training, and from recruits who attend the Atlantic Police College (APC), who are not hired until after their graduation.  It also means that, in BC, municipal police recruits are members of their police unions throughout their time as a recruit.  Union membership is relevant during training because of the possibility of union grievances should a recruit not meet the expectations required to pass training.

Because recruits are hired prior to attending training, the municipal departments have control over the entrance requirements and standards for recruits.
Further, each municipality has its own hiring requirements and process, making for a diverse group of recruits who come to the Police Academy.  This lack of control over hiring also creates an interesting situation for the Police Academy, where the training institution has no input into who is admitted into the program.  The majority of departments have a guideline that suggests a minimum of two years of post-secondary education for recruitment, but this is not an absolute requirement.  Recruit classes can consist of students with a range of educational experience, from a minimum number of post-secondary credits (or occasionally no post-secondary education) to advanced degrees and prior careers in law.  Class sizes and demographic trends depend entirely on departmental hiring practices, targets, and budgets.  Hiring levels can fluctuate dramatically due to community demands; for example, extra classes had to be scheduled prior to the 2010 Olympics so that the Vancouver Police Department (VPD) could have enough trained members before the games began.  VPD hiring declined immediately afterwards and scheduled intakes had to be cancelled or run with small class sizes.  Similarly, an unexpected budgetary expense may prevent the planned hiring of recruits or an unpredicted number of retirements may necessitate increased hiring in any given municipal department.  Recruitment is typically an extensive process involving multiple interviews, physical fitness assessments, written exams, and background checks.  A candidate can be deemed unsuitable at any stage.  Because of the many factors involved in police recruiting, the Police Academy often does not have a final number of recruits expected to attend training until 1-2 weeks prior to the start of class.

The Recruit Training program is divided into four separate blocks.  Recruits progress through the program as a cohort until their graduation from Block III.
While recruits are in Blocks I through III, they are considered Recruit Constables and must be either in training at the Police Academy or in their hiring department working under the supervision of a Field Training Officer (FTO).  After successful completion of Block III, they graduate from the Recruit Training program as Qualified Municipal Constables.  During this time they are able to complete their policing duties independently but are in a probationary period (Block IV) at their home department.  Following completion of the probationary period, they are fully Certified Municipal Constables.  Figure 1-1 illustrates this progression from hiring to fully certified municipal constable.

1.4 Summary of Recruit Training Delivery Models

A full description of the recruit training delivery models is included in Chapter 3: Program Description.  This section provides a brief overview of the training delivery model before and after implementation of the changes to provide context for the subsequent sections.

When I started my role at the JIBC Police Academy, there had been little change to the delivery model of police training since the introduction of PowerPoint, when lectures on overhead transparencies were converted to lectures on PowerPoint slides.  Each ‘discipline’, such as Legal Studies, Investigation and Patrol, and Traffic Studies, was taught independently of the others with little to no integration between instructors or topics.  The primary delivery model was PowerPoint-based lectures.  Occasional simulation days were included, two in Block I and two in Block III, where recruits either participated in a scenario or observed other recruits participating.
[Figure 1-1: Progression through police recruit training in British Columbia.  Recruit Constable: Block I (13 weeks, Police Academy), Block II (12-17 weeks, field training in home municipal department), Block III (8 weeks, Police Academy).  Qualified Municipal Constable: Block IV (1 year, home municipal department).  Certified Municipal Constable upon completion of Block IV.]

Limited to no formative feedback was provided to recruits and the underlying philosophy was ‘If you don’t hear anything, you’re doing well’.  No time was provided for instructors to work with recruits who were struggling or to conduct remedial training.  The sole method of formal evaluation was written exams that relied mostly on recruits regurgitating memorized facts, often requiring them to reproduce answers verbatim from what was provided in class.  The Police Academy shared the common underlying philosophy in the policing culture that “adult education” meant telling learners exactly what was going to be on the exam.  After observing a multitude of sessions, it was clear that the focus on rote memorization, limited opportunities to apply concepts to practice, lack of formative feedback, and inability of instructors to provide help to recruits who needed it, together meant that training was not delivered as effectively as it could be.  The training delivery model needed to be modified to ensure that recruits were leaving the Police Academy with the best training possible to prepare them to serve their communities.

The new delivery model aligns the recruit training curriculum with the Police Sector Council National Framework of Constable Competencies, as mandated by the BC Provincial Government.  This framework is the only nationally accepted standard for the requirements of police officers in Canada and was developed through extensive research and collaboration with stakeholders in the Canadian policing community.  Topics that were previously taught as separate disciplines are now integrated.
The curriculum is structured around the most common patrol level calls and material is learned in the context in which it is needed to respond to these calls.  The new delivery model uses refined readings that have been significantly reduced to focus on core “need-to-know” information, with associated quizzes to ensure recruits have a foundational understanding of the key concepts prior to arriving in the classroom.  The knowledge gained through the readings and quizzes is then applied through case-based exercises, where instructors monitor recruit progress and understanding and work to clarify misconceptions.  Recruits then have the opportunity to apply what they have learned to practical scenarios.  They receive formative feedback on their performance in these scenarios, watch recordings and self-assess their performance, and set related training goals for the upcoming weeks.  Recruits are assigned an instructor mentor who follows their progression through recruit training and provides guidance and feedback throughout, while also ensuring the recruits are held responsible for their learning.  Time is built into the curriculum where recruits can work on their individual training plans to improve, with instructor guidance, in the areas where they most need improvement.  Recruits are examined by both written and practical scenario exams and complete an overall assessment portfolio at the end of each block.  The training was designed specifically to address issues observed in the old delivery model, feedback from past classes of recruits, and recommendations from the literature.

1.5 Research Question - Program Implementation and Evaluation

The focus of this project was the implementation and evaluation of this new curriculum delivery model for Police Recruit Training in BC.  The program evaluation addressed the question: what are the effects of introducing a competency-based education framework on police recruit preparedness for field training?
This question was addressed using surveys administered to recruits and field trainers for one class trained in the old delivery model and two classes trained in the new delivery model.

A secondary question that arose from this primary evaluation question is whether there is a difference in recruit perceptions of their ability or preparedness for field training (Block II) between the end of their Block I training, when they may not have any knowledge of the requirements of patrol work, and after they have had some Block II training and have experienced the realities of patrol level policing.  Surveys were administered to recruits at the end of their Block I training and after approximately 10 weeks of field training to address this question.

I selected Block II training for this evaluation because recruits work closely with a Field Training Officer (FTO) during Block II.  The FTO is an experienced police officer and can provide an objective evaluation of a recruit’s ability and readiness for the road.  The program evaluation design compared this FTO evaluation to the recruits’ self-evaluations.  This additional survey data source and comparison addressed another secondary question about whether there were differences between the recruits’ perceptions of their ability and preparedness and the perceptions of their FTOs.  Upon graduation, officers in most departments work individually, so Block II is the only opportunity to compare recruit self-evaluations with those of a more experienced police officer.  Any evaluation of the effects of the program post-graduation would rely mainly on the recruits’ own perceptions and would lack the objective assessment of an experienced officer.  Because of this lack of comparative data, the program evaluation was limited to Block I and how it prepared recruits for their Block II field training experience.
1.6 Summary

This project is a program evaluation study using survey data to compare recruit ability and preparedness in recruits from one class trained using the lecture-based delivery model and two classes trained using the new competency-based delivery model.  Chapter 2 will review key areas of the literature that informed the new delivery model.  Chapter 3 will outline the lecture-based delivery model of police training and describe the competency-based delivery model in detail.  Chapter 4 will outline the methodology of the study and required changes in project design, and situate the study within the current political context of policing in BC.  Chapter 5 will present the findings of the study.  Chapter 6 will discuss possible interpretations of the findings and the significance of organizational cynicism and organizational change to this study.  Lastly, Chapter 7 will conclude with lessons learned and recommendations.

Chapter 2: Literature Review

After mapping the Recruit Training curriculum to the Police Sector Council Constable competencies and realizing that a major change in the philosophy and design of the program was required, a literature review was conducted to ensure the proposal for the new program was based on evidence.  That review encompassed research on police training, competency-based education, and assessment.  This chapter will provide an overview of research from each of these areas that formed the foundation of the proposal for the new program and that has been published since that proposal was written.

2.1 Research on Police Training

Research on policing from a Canadian perspective is scarce (Huey, 2016; Huey & Bennell, 2017).  A comprehensive review of the literature revealed 218 research articles on Canadian policing published between 2000 and 2015 (Huey & Bennell, 2017).
While the majority of published research on policing in Canada might not specifically address training, often areas of research lead to recommendations for future training.  The work of Rick Parent, from Simon Fraser University in Burnaby, BC, is one such example.

Parent examined the police use of deadly force in Canada in comparison to that in the United States (Parent, 2006; Parent, 2007; Parent, 2011).  The homicide rate in the United States is approximately threefold higher than that in Canada, and the rate at which police officers are murdered is also considerably higher.  Parent (2006) concluded that while the circumstances surrounding the use of lethal force do not differ between the United States and Canada, the frequency differs considerably.  This higher rate of violent crime, combined with the higher rate of murders of police officers, leads to an increase in both perceived threat and calculated risk, which may result in American police using lethal force more frequently than their Canadian counterparts (Parent, 2006).  Further, an analysis of the thirty lethal force incidents in British Columbia from 2000-2009 revealed that approximately 25% involved subjects with a known history of mental illness or suicidal behaviour (Parent, 2011) and it is estimated that approximately one third of police shootings involve someone in a crisis caused by mental health issues, emotional stress, or substance use (Parent, 2007).  To address the unique circumstances surrounding a person in crisis, particularly with a mental illness, Parent recommends training for both new and current police officers in recognizing the signs of mental illness and Crisis Intervention Training, as is in place in some American jurisdictions.
Officers who have considerable specialized training in de-escalation have been shown to decrease the arrest rates of people with mental illness as well as decrease the rates of police injuries and the need for specialized emergency response units (Parent, 2007; Parent, 2011).

A study in British Columbia comparing people with mental illness and the general public found that 60% of people with mental illness had some contact with police in the preceding year compared to 40% of the general public (Desmarais et al., 2014).  The study found that people with mental illnesses were not just more likely to commit crimes than the general public but also more likely to be the victims of crimes (Desmarais et al., 2014).  People with mental illness also rated the police significantly lower on aspects regarding procedural justice, such as being fair and approachable, than did the general public (Desmarais et al., 2014).  These findings led to the recommendation that police must be trained to develop skills to better interact with people with mental illness, including de-escalation (Desmarais et al., 2014).

Following training recommendations such as these, the BC Crisis Intervention and De-Escalation (CID) program is now a mandatory training program for all front-line police officers and front-line supervisors in BC.  The initial implementation of the training was completed in 2015.  CID training is now a mandatory part of recruit training at the JIBC Police Academy.  Recruit training also involves significant components on recognizing and interacting with people with mental illnesses.  It would be interesting to replicate the study conducted by Desmarais et al. (2014) following the completion of the CID training initiative.
Additionally, local research from Simon Fraser University on young offenders’ recidivism decisions is not directed specifically at police but provides valuable insights into the thought processes of young offenders that could help police when interacting with them (Corrado, Cohen, Glackman, & Odgers, 2003).  The finding that the majority of the 400 incarcerated youth from the Greater Vancouver Region were not motivated by cost-benefit decisions, punishment, or re-integration (Corrado et al., 2003) suggests that strategies to combat youth crime should focus on approaches outside of these motivators.

The literature on Canadian police recruit training is scarce (Huey, 2016; Huey & Bennell, 2017; Robertson, 2012).  The majority of research focuses on in-service training on specific topics, such as ethics or use of force, or on evaluation of departmental initiatives.  Huey (2016) found there were no peer-reviewed articles on Canadian police training, let alone recruit training, published between 2000 and 2015.  The majority of the available literature is from the United States, which has a very different approach to policing than Canada.  In general, there is much more gun violence in the US (Parent, 2006) and police are trained in a much more militaristic fashion.  Policing in Canada, where there is comparatively little gun violence, tends to emphasize communication skills and de-escalation techniques.  Policing in Canada, as in other British Commonwealth countries, is founded on Peel’s principle that the police are the public and the public are the police.  Canadian police most often exercise their authority through use of officer presence and persuasion techniques (Robertson, 2012).
Further, Canadian police are frequently sworn in using oaths that include a duty to uphold the principles of the Canadian Charter of Rights and Freedoms, which means recognizing their obligation to all members of society, especially those who are members of marginalized populations (Robertson, 2012).  These differences make it difficult to draw comparisons between policing cultures and training in the Canadian and American contexts.  Additionally, because of the lack of standardization of police training within Canada, it can be difficult to draw direct comparisons between provinces.

There is, however, a small amount of literature focusing on adult learning in the police training context.  Again, most of this body of work is from the United States, but because it focuses on adult learning theory rather than state-specific training practices, it is more applicable to a discussion of police training in the Canadian context.  Despite the lack of available literature, there is a general recognition that police training should be evidence-based, following best practices from both theory and research (Kratcoski, 2016).

There remains a debate within the policing community about the differences between police training and police education, and which is most appropriate at a given stage of training (Cordner & Shain, 2016; Haberfield, 2013; Kratcoski, 2016; Oliva & Compton, 2010; Paterson, 2016; White & Heslop, 2012).  Traditional police training is seen as teaching how to do policing, or how to perform a certain task a specific way, and is frequently para-military, lecture-based, and concerned with conveying a large amount of information and frequent “war stories” (Haberfield, 2013; Kratcoski, 2016; Paterson, 2016).
This conception of training is at odds with the evolving role of police, particularly in light of the current community policing paradigm, the procedural justice focus on individuals and communication, and the continued globalization of policing (Cordner & Shain, 2016; Marenin, 2005; Oliva & Compton, 2010; Paterson, 2016).  Police education, on the other hand, is seen as encouraging critical thinking, problem solving, and using values-based thinking to come up with alternative approaches (Haberfield, 2013; Paterson, 2016).  Typically, this type of education is associated with a higher education institution, such as a university, and is obtained prior to attending police training (Cordner & Shain, 2016; Paterson, 2016).  Interestingly, unlike other professions such as teaching or nursing, where credentialing or certification are directly tied to higher education, police education historically has been marginalized by both the police training academies and the police profession itself (White & Heslop, 2012).  The issue of whether a university education better prepares people to enter the policing profession is a separate debate, and most departments in BC currently require a minimum of two years of post-secondary education as part of their selection criteria.  Despite the perceived tension between police training and police education, there now seems to be a general recognition that in order to meet the community-based demands of policing, police training needs to involve components from both training and education models, and should follow the general principles of adult learning to be most effective (Cordner & Shain, 2016; Golden & Seehafer, 2009; Haberfield, 2013; Hundersmarck, 2009; Kratcoski, 2016; Mugford, Corey, & Bennell, 2013; Oliva & Compton, 2010).
Research from the Police Research Lab, located at Carleton University in Ottawa, Ontario, Canada, has applied cognitive load theory to simulator-based training in use of force (Bennell, Jones, & Corey, 2007) and to police training in general (Mugford et al., 2013).  Cognitive load theory posits that working memory can hold an extremely limited number of “elements”, or new pieces of information, whereas long term memory can hold a virtually unlimited number of elements (Bennell et al., 2007; Mugford et al., 2013; van Merriënboer, Clark, & de Croock, 2002).  The working memory actively integrates new information into schemas that serve to group new information so that it can be understood and easily accessed (Bennell et al., 2007; Mugford et al., 2013; van Merriënboer et al., 2002).  Schemas are stored in the long term memory and are processed by the working memory as one element, thereby reducing the burden on working memory (Bennell et al., 2007; Mugford et al., 2013; van Merriënboer et al., 2002).  With sufficient practice, schemas can become automated, or performed without conscious thought, further reducing the burden on working memory (Bennell et al., 2007; Mugford et al., 2013; van Merriënboer et al., 2002).  According to cognitive load theory, the primary goal of training is to promote the acquisition and automation of schemas (Mugford et al., 2013).  Additionally, cognitive load theory describes three forms of cognitive load: intrinsic load, extraneous load, and germane load.  These three types of load are additive in the working memory, and training should be designed to ensure the additive effects do not exceed the working memory capacity of the learners, as traditional training methods typically do (Mugford et al., 2013).
Intrinsic load is a function of the complexity of the material to be learned and can be managed by providing simple examples at the start of training, providing worked examples, and dividing complex material into a series of steps before moving to integration of the complete concepts (Bennell et al., 2007; Mugford et al., 2013; van Merriënboer et al., 2002).  Extraneous load is a function of the complexity of the training activity and can be managed by providing simple and clear instructions, ensuring there is no unintentional redundancy in training material, and integrating sources of information (Bennell et al., 2007; Mugford et al., 2013; van Merriënboer et al., 2002).  Lastly, germane load is a function of training design but, unlike extraneous load, germane load is directly relevant to schema formation and automation.  Germane load involves the incorporation of variety into training, both in terms of the variety of situations encountered and in terms of the variety of examples within a specific type of situation (Bennell et al., 2007; Mugford et al., 2013; van Merriënboer et al., 2002).  By applying cognitive load theory to police training, components of training such as use of force simulator training and e-learning activities can be structured to manage intrinsic and extraneous loads while maximizing germane load, thereby increasing the effectiveness of police training (Bennell et al., 2007; Mugford et al., 2013).  Despite anecdotal evidence that many use of force trainers were unknowingly applying concepts from cognitive load theory to their training structure, this application is inconsistent and requires more investigation (Bennell et al., 2007).  Similarly, with the increasing prevalence of e-learning strategies for police training initiatives, care must be taken to design training that does not unintentionally exceed the working memory capacity of police officers.
The ease with which multimedia and various resources can be integrated into e-learning means the likelihood of increasing extraneous load is high if cognitive load theory is not incorporated into the training design (Mugford et al., 2013).   Within the context of the adoption of a community oriented policing philosophy across much of the United States in the 1990s and 2000s, there was an interest in examining how the theories of andragogy applied to police training (Birzer & Tannehill, 2001; Birzer, 2003a).  Recognizing that the majority of policing activities involve interacting with the public by providing information, assistance, aid to the injured, and mediation, Birzer and Tannehill (2001) critiqued the prevalence of the behaviourist approach to teaching in police training academies.  They determined that this approach may be appropriate for skills such as shooting or force options but is much less effective for topics such as interpersonal communication, cultural diversity, problem solving, or conflict resolution (Birzer & Tannehill, 2001).  The suggestion from the application of andragogy is that training should be interactive, participatory, and experiential, providing trainees practice applying the skills they are developing through problem solving, case study, and simulation activities (Birzer, 2003a).   Hundersmarck (2009) followed a small number of cadets (the equivalent of recruits in a US context) through their initial classroom training and then into their field training component to determine how knowledge and skills gained at the academy were transferred to the practical setting of field training.  Through observing the academic portion of training, it was determined that classroom time was mostly spent in didactic lectures, with less than three percent of time focused on scenarios and application of learning (Hundersmarck, 2009).
The cadets, however, believed that the scenarios held much higher relevance than the lectures to their future role as a police officer and only referred to the scenario component of training during their field training component (Hundersmarck, 2009).  Hundersmarck also noted, however, that the police culture and the attitude of the field training officers toward the academy as not relevant to how things are “really” done may have made the cadets reluctant to explicitly draw on anything they had learned in their classroom training (Hundersmarck, 2009).  Several examples of innovations in police training have embraced the principles of adult learning.  First, in the Idaho Peace Officer Standards and Training (POST) academy, Werth (2011) developed and implemented a Problem Based Learning Exercise (PBLE) that spanned the entire ten weeks of recruit training, giving recruits extended time to apply their learning and develop higher level critical thinking and investigative skills.  The exercise began with a dispatched scenario, with recruits following up on the investigation over the remaining weeks through simulated interviews and phone calls, investigations, evidence gathering, and case presentations to staff (Werth, 2011).  To evaluate the effectiveness of the exercise, a total of ten academy classes were surveyed on how they believed the exercise developed their mechanical and non-mechanical competencies.  The majority of the 413 respondents indicated that the PBLE helped develop their problem-solving, decision-making, communication, and multi-tasking skills.  Werth did note, however, that some students, instructors, and administrative staff resisted the concept of this type of self-directed learning and that implementation required a culture shift within the organization (Werth, 2011).
Similarly, in examining the existing culture around training in the Los Angeles Police Department (LAPD), it was recognized that the military-based training culture did not represent the mindset expected of recruits once they had graduated and begun serving their community (Pannell, 2012; Pannell, 2016).  The redesign of the LAPD police academy training included a focus on thinking through reasoning and articulating actions, team teaching from integrated teams of instructors, individual development and remediation, debriefs focusing on the whole person instead of just the tactical actions taken, and developing critical thinking skills to apply to novel scenarios (Pannell, 2012; Pannell, 2016).  The results from this change indicate that the recruits appreciate understanding why they take the actions they take, and that the field trainers and administrative staff feel the recruits are better prepared than those who trained in the previous model.  In implementing these changes, the LAPD recognized the need to overhaul the entire training program, including its educational philosophy and culture, rather than simply adding an additional training component.  A requirement for the success of such an initiative is the training and preparedness of the instructors themselves, and their willingness to be involved in such a cultural shift.   The importance of training instructors in a new methodology to better employ the concepts of andragogy is also highlighted by Birzer (2003a), who describes how the Chicago Police Department designed a new training module for community oriented policing that was centred around the principles of adult education but ultimately ended up being delivered in a lecture-based format because of the comfort level of the instructors.
In analyzing the teaching practices and preferences of police instructors at an agency that trains police instructors, McCoy (2006) discovered that, while the majority of instructors scored very high in a teacher-centred style of instruction, a deeper analysis revealed that their preference was to be student-centred; they simply lacked the knowledge and skills to implement a student-centred methodology.  In addition to teacher skill and preference, one reason commonly given for teaching in a purely didactic style, and one perhaps unique to the policing community, is the concern over liability issues and the coverage of content.  Course topics are frequently seen as items on a list of check boxes indicating that a recruit has taken the relevant training.  McCoy (2006) points out that the real liability issue should be whether or not a police officer can apply what they have learned; it is when they cannot apply the learning that an increase in liability for the instructors and training institution should be found (McCoy, 2006).  The preparation and training of future instructors should focus on developing their skills to teach using a method based on the principles of adult education, where learners need to demonstrate their ability to perform the necessary skills, not simply sit in the training room (Birzer & Tannehill, 2001; McCoy, 2006).  Fittingly, Oliva and Compton (2010) examined a small number of police officers to determine their preferences with respect to the teaching style of their instructors.  Overall, there was a self-reported preference for adult learning techniques, particularly highlighting how the training should be engaging, practical, efficient, and allow time for interaction with the other learners.
To meet the preferences of both the trainees and the instructors in a policing context, Birzer (2003a) suggested what he called a “mission-oriented” approach focusing on the skills and knowledge police need to perform the duties of their job on a daily basis.  This type of training is now best known as competency-based education, where competencies are determined based on a job analysis and a trainee’s performance is measured according to how well they meet these job competencies.  In Canada, a set of national police competencies has been developed through extensive collaboration facilitated by the Police Sector Council (PSC).

2.2 Police Competencies in Canada

The Police Sector Council (PSC) was a not-for-profit agency funded through the Government of Canada Sector Council Program that brought together experts in policing and police training from across the country.  Representatives included training organizations, municipalities, Chiefs of Police, military police, and the RCMP.  The PSC conducted research projects on perceptions of police, skills perishability, and human resources (HR) challenges and solutions for the policing context.  Competency-Based Management arose from the exploration of HR solutions, and from 2008 to 2010 the PSC undertook extensive collaboration to identify the core competencies, and associated tasks, for each rank of police officer (Police Sector Council, 2011).  These competencies are now known as the PSC National Framework of Competencies.  The goal of this process was to facilitate standardization of policing, police promotion, and police training across Canada.  Unfortunately, the PSC lost funding in 2012, at a crucial point for the incorporation of competency-based practices into police training.  At this stage many agencies had begun using competency-based management HR practices for evaluation and promotion, and many had expressed interest in extending this framework to their training programs.
Without the PSC as a guiding force, however, departments focused their attention on HR practices and the momentum for curricular change was lost.

In British Columbia, policing is governed by the Policing & Security Branch (PSB) of the provincial government.  PSB was represented on the Police Sector Council when the National Framework of Competencies was developed and has been instrumental in the plans to adopt a competency-based framework in the Recruit Training program in BC.  The Police Academy’s annual operating budget is presented in a letter of agreement between PSB and the Police Academy.  In 2013 this letter of agreement stipulated that the Police Academy must hire a Curriculum Developer to map the Recruit Training curriculum to the PSC National Framework Constable competencies and generate a Course Training Standard (CTS) for the program.  This mapping would ensure that Recruit Training in BC was producing graduates who met the competencies for the rank of Constable, that the program was teaching all necessary concepts, and that the program was not teaching unnecessary material.  Table 2-1 summarizes the ten core Constable-level competencies: adaptability, ethical accountability and responsibility, interactive communication, organizational awareness, problem solving, risk management, stress tolerance, teamwork, written skills, and decision making.  Each competency has five associated proficiency levels that increase progressively in difficulty.  The minimum expectation for the rank of Constable in Canada is proficiency level 2, as described in Table 2-1.  Recruits are expected to move through proficiency level 1 to proficiency level 2 at various points in their training and to reach level 2 by graduation.  Because of the advocacy from the BC Policing & Security Branch, BC is currently at the forefront of mapping recruit training curriculum to the Constable competencies and of extending competency-based principles into the recruit training program.
As such, the next section will focus on the general principles of competency-based education.

| Competency | Proficiency Level 1 | Proficiency Level 2 |
| --- | --- | --- |
| Adaptability | Recognizes the need to adapt to change | Modifies own behaviour or approach to adapt to a situation |
| Ethical Accountability and Responsibility | Embraces high standards of conduct and ethics | Handles ethical dilemmas effectively |
| Interactive Communication | Presents information clearly | Fosters two-way communication |
| Organizational Awareness | Understands formal policing structure | Understands informal policing structure and culture |
| Problem Solving | Identifies basic problems | Solves basic problems |
| Risk Management | Participates in the management of situations and calls | Manages a limited range of situations and calls with minimal guidance |
| Stress Tolerance | Works effectively with standard situations | Works effectively in the face of occasional disruptions |
| Teamwork | Participates as a team member | Fosters teamwork |
| Written Skills | Conveys basic information | Selects and structures information |
| Decision Making | Makes decisions based on existing rules | Makes decisions by interpreting rules |

Table 2-1  Police Sector Council core Constable competencies with proficiency levels 1 and 2 (Police Sector Council, 2011)

2.3 Competency-Based Education

The following sections provide an overview of the literature on competency-based education, terminology, and elements of competency-based learning.

2.3.1 Overview of competency-based education

Traditional, lecture-driven curricula are taught as content-heavy isolated components where memory-based assessment practices make it difficult to determine if graduates are competent in the requirements of their intended practice (Frank et al., 2010; Smith & Dollase, 1999).  Typically, students see traditional classroom learning as a mostly arbitrary sequence of unrelated content.
Competency-based education provides the framework to learn how concepts interconnect (Black & Wiliam, 1998; Fraser & Greenhalgh, 2001).  Learning concepts and skills in their real-life context, highlighting relationships, enhances motivation, learning, and the accessibility of stored information in adult learners (Black & Wiliam, 1998; Bowen, 2006; Fraser & Greenhalgh, 2001).  Professions that face increasing accountability and scrutiny need to demonstrate ability in their graduates, which can be difficult to do using traditional assessment methods (Frank et al., 2010).  Competency-based education addresses these deficiencies in traditional curricula by focusing on the end product of observable behaviours that reflect the learners’ knowledge, skills, and attitudes (KSA) (Albanese, Mejicano, Mullan, Kokotailo, & Gruppen, 2008; Frank et al., 2010; Hodge & Harris, 2012; Mansfield, 1989; Shumway & Harden, 2003; Smith & Dollase, 1999).  Many professions, such as medicine (Frank & Danoff, 2007), education (Darling-Hammond, 2006), and policing (Police Sector Council, 2011), as well as trades such as automotive repair (Hodge & Harris, 2012), have defined sets of key competencies required of practitioners.  These competencies are not only sets of abilities; they are also political statements of societal values (Albanese et al., 2008) and a framework for curriculum development and reform (Hodge & Harris, 2012; Tuxworth, 1989).  The transition to competency-based education in many of these fields, particularly medical education, has been ongoing for many years.   This introduction to competency-based education will focus on the learning process, the elements of competency-based programs, and the assessment of competencies.  It will draw heavily on the literature from medical education, but it is easily translatable to the police context.
With increased accountability to regulators and to the public, physicians have seen an increased need to assess graduates in ways that address values as well as the social and community context (Frank et al., 2010; Smith, Goldman, Dollase, & Taylor, 2007), and police face the same, if not higher, levels of scrutiny.  Competencies, and thus curriculum, must be able to adjust to changes in societal values and needs in both of these professions (Davis & Harden, 2003a; Epstein & Hundert, 2002), and practitioners in both fields must be able to manage ambiguous problems, tolerate uncertainty, and make quick decisions with limited information (Epstein & Hundert, 2002).  A competency-based curriculum, as explored below, provides the framework to respond to the needs of society while ensuring graduates meet the required standards to practice and meet high public expectations.

2.3.2 Defining Competency-Based Education Terminology

As competency-based and outcome-based education has evolved over the years, there has been much debate as to the intent and meaning of the various terms used in describing the curriculum (R. M. Harden, 2002; ten Cate & Scheele, 2007).  For clarity, it is important to define the set of terms that will be used in this paper.  Drawing on the work in medical education (Albanese et al., 2008; Davis & Harden, 2003a; R. M. Harden, 2002; Rethans et al., 2002), I am working with the definitions outlined below.   First, it is important to distinguish between outcomes and competencies, as both can be used in developing a competency-based curriculum.  Both describe broad characterizations of the knowledge, skills, and attitudes important for graduates to possess (Albanese et al., 2008; Davis & Harden, 2003a).  Outcomes are developed by linking the expectations of learners to the skills and abilities of a practicing professional.  They are statements describing what the program wants graduates to have (Albanese et al., 2008).
Competencies, on the other hand, are statements describing the knowledge, skills, and attitudes that graduates need to have to ensure that they have the basic abilities to practice on their own (Albanese et al., 2008; Fraser & Greenhalgh, 2001).  A program can therefore have both competencies, describing the minimum standard to meet qualifications, and outcomes, describing how the program aspires to have its graduates go beyond the basic qualifications.  In describing competency-based education, these two terms are sometimes used interchangeably when describing the need to document a learner’s progress through the expected competencies or outcomes designated by the program.   Another pair of terms that needs defining is competency-based assessment and performance-based assessment.  Again, both of these elements can be used effectively in a competency-based education program.  Competency-based assessment is a measure of what the learner can do in a controlled representation of professional practice, whereas performance-based assessment is a measure of what the learner can do in actual professional practice (Davis & Harden, 2003a; Rethans et al., 2002).  Lastly, the concept of professional competence, or what graduates aspire to, has been defined by Epstein and Hundert (2002) as “…the habitual and judicious use of communication, knowledge, technical skills, clinical reasoning, emotions, values, and reflection in daily practice for the benefit of the individual and community being served” (p. 226).  They also describe the ability to manage ambiguity and uncertainty and to make decisions with limited information as central to professional competence (Epstein & Hundert, 2002).  While this definition was written to describe a practicing physician, the elements and expectations are similar to those for an active police officer.  Removing the word clinical from the definition renders it applicable to the policing context.
Indeed, this is an apt description of the day-to-day requirements of an on-duty police officer.

2.3.3 Determining Competencies

A crucial component of a competency-based education approach is the accurate and thoughtful definition of competencies.  In professional fields or vocational training, determining these competencies requires an occupational analysis to identify the key competencies and tasks required to perform successfully in the occupation.  The process of determining the key competencies for an occupation is problematic, as it privileges the experiences of those involved in the determination and is potentially open to influence from political, economic, media, or other such factors (Jansen, 1998; Schwarz & Cavener, 1994).  One robust method of occupational analysis that is frequently used across multiple professions is DACUM, which stands for Developing A CurriculUM.  DACUM is a model for the development and management of competencies that was developed in Canada in the late 1960s (Canadian Vocational Association, 2013; Wyrostek & Downey, 2017).  DACUM relies on three assumptions: expert workers are the best at describing and defining their job; a job can be defined by precisely describing the tasks performed by expert workers; and all job tasks require enablers such as the use of knowledge, skills, tools, and positive worker behaviours to be done correctly (Canadian Vocational Association, 2013; DeOnna, 2002; Norton, 1998; Norton, 2009; Wyrostek & Downey, 2017).  Following the initial DACUM process, the resultant competency profile and task list is validated by surveying a larger group of subject matter experts (DeOnna, 2002).  The competency profile and task list can then be used as a guide for curriculum design, ensuring that what is taught is aligned with what needs to be taught (DeOnna, 2002).
Most frequently, the DACUM process is used in conjunction with an instructional design model such as ADDIE (Wyrostek & Downey, 2017), as discussed in Section 3.3.  The competencies, once defined, act as the guide for both curriculum and assessment in a competency-based education program.

2.3.4 Elements of Competency-Based Learning

The hallmark of a competency-based education program is that progress is gauged by achievement of the competencies instead of by time spent in the program or the process of teaching (Albanese et al., 2008; R. Harden, Crosby, Davis, Howie, & Struthers, 2000; Hodge & Harris, 2012; Leung, 2002; Smith & Dollase, 1999).  Additionally, the assessment practices are closely aligned with the competencies and are viewed as part of a continuous learning experience (Albanese et al., 2008; Ben-David, 1999; Davis & Harden, 2003a; R. Harden et al., 2000; Hodge & Harris, 2012; Dąbrowski & Wiśniewski, 2011).  To this end, the program is structured to facilitate the development and achievement of the competencies by the learners (Hodge & Harris, 2012), and the expected progression through the competencies is documented such that each learner can gauge their own progress (Davis & Harden, 2003b; R. M. Harden, 2007; Dąbrowski & Wiśniewski, 2011; Smith et al., 2007).  With competency-based education, the learner takes responsibility for their own progression down this path, and the responsibility for learning is shared by learners and instructors (Frank et al., 2010; Hodge & Harris, 2012).  As the learners demonstrate their progress through the competencies, the quality of feedback received from instructors is one of the most important factors for increasing performance (Brightwell & Grant, 2013).
In its purest form, competency-based education can be a difficult and unsustainable proposition: learners are free to progress through the program at their own pace, and this open-ended time frame can be very resource intensive and problematic (Hodge & Harris, 2012).  Frequently, programs adapt to scheduling requirements such that the total time frame of the program is fixed, but learners have more freedom as to how their curricular time is used to ensure that they meet the required competencies (Davis & Harden, 2003b; Hodge & Harris, 2012).  Even this change is sometimes a difficult adjustment for learners and instructors, and a unified team approach is required to support the program and ensure success (Davis & Harden, 2003b).  One major advantage of a competency-based education program is its ability to be responsive to societal expectations and needs (Davis & Harden, 2003b; Fraser & Greenhalgh, 2001; Hodge & Harris, 2012), as competencies are also political statements about what is valued by society, the profession, and the institution (Albanese et al., 2008; Mansfield, 1989; Tuxworth, 1989).  Development of the required competencies is a continuous process integrated with the instruction, assessment, and governance of the program, leading to an inclusive and holistic educational experience (Tuxworth, 1989).  An extensive and thorough curriculum map is essential to the development and implementation of a competency-based education program.  The curriculum map needs to include both information from the task analysis and information about where specified competencies can be attained, because a focus only on the specific tasks can lead to a reductionist assessment (Cox, 2011; Tuxworth, 1989).  This map also facilitates discussion of educational principles and curriculum implementation, allowing the program to respond to change as necessary (Davis & Harden, 2003b; Tuxworth, 1989).
As a learner moves through the curriculum, starting with basic cases, Harden (2007) identified four dimensions along which the learner can progress to meet the expected competencies.  The problems encountered by the learner can increase in breadth, difficulty, utility and application, and proficiency.  Each of these dimensions is necessary to fully achieve a competency and needs to be structured into the curriculum to support learning.

2.4 The Learning Process

Context plays an important role in adult education: adults need to know why they are asked to learn information and to immediately see the relevance of what they are learning (Birzer & Tannehill, 2001; Birzer, 2003a; Fraser & Greenhalgh, 2001).  Adult learning is influenced not just by the content, but also by the context and by personal influences (Carraccio, Benson, Nixon, & Derstine, 2008; Schenck & Cruickshank, 2015).  The context, or way, in which information is stored makes it more or less readily available when needed (Bowen, 2006).  As such, many competency-based learning activities are structured around ‘authentic tasks,’ where learning occurs in a context that includes most of the cognitive demands of real world situations (Koens, Mann, Custers, & ten Cate, 2005).  In situations where a complex task is learned, the physical context (i.e., the surrounding space) is of little importance, but the semantic, or cognitive, context plays a larger role in skill acquisition (Koens et al., 2005).   One common curriculum strategy for structuring authentic tasks into the learning process is case-based learning, where the learning of theoretical information and skills is integrated into case presentations (Barrows, 1986; Carraccio et al., 2008).
Case-based learning can be thought of as part of a continuum with problem-based learning, with the amount of guidance provided to learners decreasing and the amount of self-directed learning increasing as one moves from case-based to problem-based learning (Aditomo, Goodyear, Bliuc, & Ellis, 2013; Barrows, 1986).  Each of these techniques provides engaging and meaningful learning, and should be selected as appropriate for the desired learning outcomes (Aditomo et al., 2013; Barrows, 1986).  Developing curricular material based on real-life situations helps learners adapt their learning to the new situations they face and actively build their knowledge, because it places them in the key role of decision maker for real-life problems (Aditomo et al., 2013; Dunlosky, Rawson, Marsh, Nathan, & Willingham, 2013; Fraser & Greenhalgh, 2001; Nkhoma, Sriratanaviriyakul, & Quang, 2017; Vander Kooi & Bierlein Palmer, 2014).  Real problems and cases also highlight the interdisciplinary nature of critical thinking and problem solving, and facilitate the integration of theory into practice (Stentoft, 2017).  More experience with cases fosters the recognition of common elements, or patterns, enabling practitioners to make faster decisions and focus attention on other important aspects of the situation (Carraccio et al., 2008).  Further, working in small groups to solve cases can also build teamwork and communication skills, increase participation, and increase motivation and engagement with the material (Jones, 2006; Nkhoma et al., 2017).  Well-crafted cases that model desired behaviours and approaches to professional expectations can also help develop the learners’ tacit knowledge about their future role (Aditomo et al., 2013).  In addition to case-based exercises, authentic tasks can be incorporated into the curriculum through scenario-based exercises, practicums, and experiential education.
Scenarios should be as realistic as possible, and they require a great deal of preparation and organization to ensure that they provide opportunities for the students to meet the educational objectives (Werth, 2011).  Such experiential activities have the potential to be truly transformative learning experiences for the students, so they must be used thoughtfully and intentionally within the curriculum to avoid diminishing the impact of the experience (Sakofs, 2001).  The experiences should be chosen so that they fit with the intention and goals of other curricular activities and so that the students can be fully engaged.  Incorporating action and reflection into the experience helps ensure that the students are able to understand their learning experience and develop the critical thinking skills necessary to examine their performance and experiences (Estes, 2004).  Dreyfus (2004) outlined five progressive stages of adult skill acquisition.  Carraccio et al. (2008) expanded on these stages to include descriptions and strategies for case-based learning to achieve competency in medical practitioners.  An alternative progression proposed by ten Cate and Scheele (2007) approaches the achievement of competencies in terms of the level of supervision learners require as they build their skill levels:

• Has knowledge
• Act under full supervision
• Act under moderate supervision
• Act independently
• Act as a supervisor or instructor

These levels can be correlated to the various stages discussed by Dreyfus (2004) and Carraccio et al. (2008).  Table 2-2 below summarizes their descriptions (note that the use of the term competency here is not related to its use in ‘competency-based learning’).   The key point ten Cate and Scheele (2007) add to the discussion of adult skill development summarized in Table 2-2 is that the level of proficiency can be correlated to the level of supervision an individual requires to perform.
This global type of evaluation, based on how much the candidate can be trusted to work independently, can be easier for expert-level practitioners to make than a determination based on lengthy checklists (ten Cate & Scheele, 2007).  Regardless of how the progression of learning is described, it is evident that these models reflect a movement of knowledge, skill, and attitude acquisition that builds upon previous stages.  It is important that learners at the beginning stages are first exposed to common problems to anchor the learning in their memory (Bowen, 2006).  Once learners are comfortable with the common problems, it is easier for them to compare new concepts, building and elaborating on their learning (Bowen, 2006).  Indeed, competency-based education aims to build abilities in a constructivist manner by incorporating previous learning into later stages of the curriculum and by focusing development and assessment on observable abilities (Frank et al., 2010).  The goal of competency-based education is to promote conceptualization instead of memorization (Bowen, 2006).  Care must be taken when designing a competency-based framework not to shift from a constructivist to a behaviourist approach, thereby reducing the competencies to long lists of individual tasks that do not reflect the complexity of the real world (Birzer, 2003b; Cox, 2011; Leung, 2002).
| Stage | Dreyfus (2004) | Carraccio et al. (2008) | ten Cate and Scheele (2007) |
| --- | --- | --- | --- |
| Novice | No context; follows basic rules; no emotional attachment | Teaching does not guarantee learning, particularly in this stage; use teaching methods that integrate theory and practice to help learners build connections. Strategies for case-based learning: highlight meaningful information in the case; eliminate irrelevant information; highlight discriminating features and their importance | Has knowledge |
| Advanced beginner | Experience and some understanding of context enable recognition of clear and easy examples; uses situational and non-situational cues; learning is detached and analytical; learning relies on instructions and given examples | Work from common to uncommon cases; help with formulating and verbalizing their assessments and plans; use team structure and near-peer coaching | Act under full supervision |
| Competency | Able to differentiate between important and unimportant information; chooses a perspective; takes responsibility for choices regardless of success; invested in the outcome because actively making decisions; deeper learning occurs from mistakes because of both cognitive and emotional involvement | Balance supervision with autonomy; hold learners accountable for their decisions | Act under moderate supervision |
| Proficiency | Decision making influenced by success and failure in the previous stage; situational discrimination; sees the bigger picture but lacks the experience to act automatically | Situational discriminators and pattern recognition predominate over rules; learn to know limitations and use additional resources when needed; mentoring by an expert | Act independently |
| Expertise | Quickly appraises the situation and takes action; able to see more subtle differences or cues; pattern recognition saves time and resources for use in more complex problems | Progressive problem solving to move beyond the comfort zone; keep cases interesting and complex to ensure learners are challenged | Act as a supervisor or instructor |
| Master | N/A | Sensitivity to the big picture within context and culture | Act as a supervisor or instructor |

Table 2-2  Summary of the stages of adult skill development (Dreyfus, 2004) related to competency in medical practitioners (Carraccio et al., 2008) and the level of supervision required (ten Cate and Scheele, 2007)

2.5 Assessment of Competencies

A competency-based curriculum is greater than a list of required competencies; it is an integrated approach to skills and assessment (Davis & Harden, 2003b).  Assessment is so central to the shift from a traditional curriculum based on memorization to a competency-based curriculum that a failure to also change assessment practices will result in little to no actual change in the curriculum (Shumway & Harden, 2003).  To ensure that learners are able to plan and navigate their way through the competency-based program, it is essential that the learning and assessment activities are clearly mapped to the competencies (Davis & Harden, 2003a; R. Harden et al., 2000).  In addition to focusing on the performance of tasks, assessment in a competency-based program must also reflect the integrated nature of the program, such that assessment activities are integrated as well (Ben-David, 1999; Davis & Harden, 2003a; R. Harden et al., 2000).  “Good assessment is a form of learning and should provide guidance and support to address learning needs” (Epstein & Hundert, 2002).  The purpose of the assessment should be clear to the learner, addressing the need of adult learners to know why they are being asked to do something (Davis & Harden, 2003a).
To keep the learners motivated and aware of the expectations of them, the educational philosophy of the program should be overtly stated and the assessment practices should remain congruent across the duration of the program (Ben-David, 1999; Hodge & Harris, 2012).  By following the progression of a learner across the program, their development can be tracked, providing a holistic longitudinal assessment of their progress (Carraccio et al., 2008; Shumway & Harden, 2003).  Traditional assessment tools are not always able to assess the complexities of a competency-based program (Frank et al., 2010; Leung, 2002; Shumway & Harden, 2003).  As competencies extend into the areas of values, beliefs, communication, and teamwork, assessment tools such as reflection, self-assessment, feedback, and portfolios may be better able to assess the required components (R. M. Harden, 2007; Shumway & Harden, 2003).  Many students see traditional learning systems as a random sequence of events, but structuring the curriculum and assessment to convey the bigger picture enables learners to become more involved and situates assessment as a tool for learning (Black & Wiliam, 1998).  Formative assessment provides the backbone of competency-based education, giving students information on how they are progressing and teaching them the necessary skills of reflection and self-assessment (Black & Wiliam, 1998).  For formative assessment to be effective, it should address a reference standard, provide information about the student’s performance in relation to that standard, and include strategies to close the gap between the observed performance and the standard (Black & Wiliam, 1998; Price, Handley, Millar, & O'Donovan, 2010; Rust, 2002; Sadler, 1989).
Students may be critical of feedback if it is too general or if it does not contain explicit instructions on how to improve (Hepplestone & Chikwa, 2014; Morris & Chikwa, 2016; Price et al., 2010; Scott, Shields, Gardner, Hancock, & Nutt, 2011), and they should be able to directly see the relevance of the feedback to their future performance (Black & Wiliam, 1998; Rust, 2002).  Opportunities for formative feedback should be structured progressively throughout the program, since a student may be reluctant to seek out feedback on their own (Black & Wiliam, 1998; Hepplestone & Chikwa, 2014).  Despite a preference for individualized written feedback (Hepplestone & Chikwa, 2014; Morris & Chikwa, 2016), providing model answers or exemplars either before or after a learning event may increase the students’ performance on a later test (Gibbs & Taylor, 2016; Hendry, White, & Herbert, 2016; Huxham, 2007).  This increase in performance may be due to the students’ increased engagement with the feedback and with comparing the expectations against their own performance (Black & Wiliam, 1998; Gibbs & Taylor, 2016; Hendry et al., 2016; Huxham, 2007; Rust, 2002; Sadler, 1989; Sadler, 2010).  Through this process, however, it is essential that the students’ interpretations and perceptions of the feedback be monitored to ensure their understanding is aligned with what the instructor intended (Black & Wiliam, 1998; Price et al., 2010; Sadler, 1998; Sadler, 2010).  Providing opportunities for students to actively engage with the feedback, the exemplar/standard, and the instructor providing the feedback, so that the feedback is effectively interpreted and used, will help develop the students’ self-assessment skills, leading to improved self-monitoring and performance (Sadler, 1989; Sadler, 2010).  The developmental progression through the program should be structured into both the learning activities and the assessment design of the program (Rust, 2002; Sadler, 1998; Smith et al., 2007).
This is true for both skill-based competencies and competencies involving social and community contexts (Smith et al., 2007).  For competencies involving recognizing one’s own biases, beliefs, and values, self-reflection is a powerful tool when it focuses on articulating values, recognizing the importance of issues, and recognizing the various cognitive, affective, personal, and professional elements of the learning experience (Smith et al., 2007).  Cruess et al. (2008) describe three elements of reflection to guide learning: reflection “in action”, where learners debrief what they did in the moment; reflection “on action”, where learners discuss the effect of their actions on all parties involved; and reflection “for action”, where learners relate an activity to their future action (Cruess, Cruess, & Steinert, 2008).  Progression through a competency may occur along several different axes as well (R. M. Harden, 2007).  Progression to increased breadth helps the learner apply their existing abilities to new topics or new contexts.  Progression to increased difficulty helps the learner apply their existing abilities to more complex, multifactorial problems that may also include a combination of social issues beyond what is already learned.  Progression to increased utility and application helps learners move from a theoretical understanding to the application of existing knowledge.  Finally, progression to increased proficiency helps learners improve their existing skills, knowledge, and attitudes such that they are able to perform tasks faster, to a higher standard, with fewer errors, and independently (R. M. Harden, 2007).  All of these elements should be structured into the curriculum and assessment so that the learners have a clear picture of where they are going and can set goals and plan how to get there (Black & Wiliam, 1998; R. M. Harden, 2007).
Care must be taken when assessing the progression of learning to provide a strong mentoring framework (Epstein & Hundert, 2002), to provide positive reinforcement through achievement of competence (Albanese et al., 2008), and to avoid reducing the competencies into a list of tasks that eliminates the complexity of the real world (Cox, 2011; Leung, 2002).  Knowledge base is just one element of a given competency, so learners can achieve an acceptable knowledge base but still not meet the competency because they are lacking in skills or attitude (Smith & Dollase, 1999).  A balanced assessment plan must be valued by all stakeholders, both learners and instructors, for it to be successful in the program (Challis, 2000).  Shumway and Harden (2003) provide a summary of different levels of learning, corresponding assessment tools, and the developmental aspects of a competency that each can measure (Figure 2-1).  This information, when combined with the work of Cruess et al. (2005), illustrates the essential role of reflection as both a teaching and an assessment component.  As reflection is based on performance, it contributes to the higher-level competencies described by Shumway and Harden (2003).  The highest level, “doing”, as seen in Figure 2-1, is developed through reflection that is forward looking: reflecting “on” and “for” action.  At this highest level, assessment tools such as assessment portfolios and observation can be used to assess attitudes, decision-making ability, and proficiency in the role.  Thus, developing forward-looking reflective practice should facilitate learner development to proficiency in “doing”, or being in their role, which can be holistically evaluated by observation and assessment portfolios.
Lastly, the scale that is used to assess performance is important to ensure valid and reliable assessment of learners as they progress in their competencies (Crossley, Johnson, Booth, & Wade, 2011; Frank et al., 2010; Regehr, Regehr, Bogo, & Power, 2007; ten Cate, 2006; ten Cate & Scheele, 2007).  Checklists that break an activity down into different dimensions of performance are frequently difficult for assessors to use: assessors can formulate an overall impression of a performance, but then have to break that impression down into the identified dimensions and generate a separate rating for each dimension (Regehr et al., 2007).  Assessors are much better, and their assessments more reliable, when they provide an overall, or global, rating of performance (Regehr et al., 2007; ten Cate, 2006; ten Cate & Scheele, 2007).

[Figure 2-1  Overlay of levels of learning, assessment tools, and ability assessed (Shumway & Harden, 2003) with concepts of reflective practice for learning (Cruess et al., 2005).  The figure maps the levels “knows”, “knows how”, “shows how”, and “does” to corresponding assessment tools (written assessment; clinical/practical assessments; observation, portfolios, logs, and peer assessment) and to reflective practice (reflecting “in action”, “on action”, and “for action”).]

Additionally, Crossley et al. (2011) suggest that rating scales should not be tied solely to expectations at different stages of training, because assessors might not be familiar with the stages of training or their associated expectations.
Incorporating the ability to perform independently into assessments increased reliability, and including behavioural descriptors of this developing independence increased inter-rater reliability and differentiation among learners (Crossley et al., 2011).  Ten Cate (2006) presents these descriptors as a level of trust that the assessor has in the learner to perform independently, and Frank et al. (2010) agree that contextual and developmental descriptors regarding the level of supervision required at a given stage are essential for accurate assessment.

2.6 Criticisms of Competency-Based Education

Competency-based education is not without criticism from a variety of fields, such as public education (Jansen, 1998; Schwarz & Cavener, 1994; Spady & Mitchell, 1977), medical education (Morcke, Dornan, & Eika, 2013; Swing, 2010; Talbot, 2004; ten Cate, 2006), and nursing education (Chapman, 1999).  Perhaps the most common criticism of competency-based education is its positivist, behaviourist foundation, which critics claim excludes personal values, reflection, responsibility, and other elements of learning that are similarly difficult to measure (Chapman, 1999; Jansen, 1998; Morcke et al., 2013; Schwarz & Cavener, 1994; Talbot, 2004).  Critics cite a reductionist approach to instruction and assessment that focuses on identified tasks as checklists of achievements as a potential pitfall of a competency-based education approach (Chapman, 1999; Jansen, 1998; Schwarz & Cavener, 1994; Talbot, 2004).  Indeed, as noted in Section 2.4, The Learning Process, the approach to designing the curriculum and assessment must consciously maintain a constructivist approach to avoid reducing the competencies into long lists of individual tasks that do not reflect the complexity of the real world (Birzer, 2003b; Cox, 2011; Leung, 2002).
To avoid a focus on only the easily measurable tasks, the curriculum design must also integrate learning processes that facilitate the development of higher-order learning, such as the integration of skills and reflection (Swing, 2010).  Another frequent criticism of competency-based education is the imbalance of power exposed in the development of competencies.  While some occupational analysis approaches, such as DACUM (Developing A CurriculUM), rely on expert workers to develop the competency profile of a particular position (Canadian Vocational Association, 2013; Norton, 1998; Norton, 2009; Wyrostek & Downey, 2017), the development and selection of competencies privileges the individuals involved and is open to influence from political, business, media, or other outside factors (Chapman, 1999; Jansen, 1998; Schwarz & Cavener, 1994).  In this light, competency-based education may be seen as a system of control and regulation of students through prescriptive competencies decided on by those in power (Chapman, 1999; Schwarz & Cavener, 1994).  Additional criticisms of competency-based education include a potential misalignment between the expectations of the educational institution and those of the profession (Chapman, 1999; Talbot, 2004), an increase in administrative burden for teachers tracking students who progress at different rates (Jansen, 1998; Schwarz & Cavener, 1994), the difficulty in generating meaningful assessment activities (Chapman, 1999; Jansen, 1998), and a general resistance when the change to competency-based education is mandated by government, institutions, or accreditation bodies (Chapman, 1999; Jansen, 1998; Schwarz & Cavener, 1994; Talbot, 2004).  Approaching any educational intervention as a universal solution is problematic.
Where a competency-based education approach may be less appropriate for a high-school English class (Schwarz & Cavener, 1994), it may be more appropriate for trades or medical education, as outlined in the preceding sections.  Despite criticisms of competency-based education, many educational institutions, public education systems, and occupational training programs have continued to focus on a competency-based approach with varying levels of success.  In British Columbia, the mandate that police recruit training be aligned with the PSC Constable Competencies was a natural progression into a competency-based framework for training.  In developing the curriculum for the competency-based program, caution was taken to avoid a reductionist, behaviourist approach; to design assessment activities that were authentic and meaningful and allowed for flexibility in performance; and to include learning activities, such as frequent feedback and reflection, that facilitate deep and transformative learning.

2.7 Summary

This chapter has reviewed the literature on police training, with its focus on an increase in critical thinking and applied skills.  It has discussed the development of the Police Sector Council Constable Competencies, which form the framework for the BC Police Recruit Training program.  It has reviewed literature on competency-based education and assessment, as well as identified some common criticisms of competency-based education.  The following chapter will describe how these elements of competency-based education were conceptualized in the design of the BC Police Recruit Training program.

Chapter 3: Program Description

This chapter will summarize the structure of the Recruit Training Program prior to the implementation of the changes to the program, the initial proposal for the changes, the design and development process, and the current structure after implementation of the changes.
Where relevant, the description of the current (new) program will draw on areas of the literature not already covered in Chapter 2.

3.1.1 Recruit Training Program Structure Prior to Delivery Model Changes

Several structural issues with the delivery of recruit training needed to be addressed with the move to a new model.  Previously, class sizes were typically restricted to 24 recruits.  This limit was due to the capacity of the on-site firing range, where recruits learn to shoot and qualify on their departmentally issued firearms.  Recruits typically rotated through a Firearms training schedule, with half of the class in the range while the other half was at the driving track or participating in a full-day practical session.  If departmental hiring activity increased such that recruitment numbers exceeded the maximum capacity at the Police Academy, some recruits may have had to be deferred to a later class.  Alternatively, a class of 36 recruits could be accommodated with major adjustments to the schedule.  Typically there were four class starts per year, in October, November, March, and April.  One important implication of this method of scheduling was that, on occasion, three or four classes were on campus at the same time.  The overlap, however, was unpredictable and did not occur at regular places in the recruits’ schedules.  This varying overlap of classes, without additional instructional resources, meant that curriculum topics were scheduled based on the instructors’ availability rather than on a logical progression of topics or educational principles.  The classroom component of the Recruit Training program (Blocks I and III) was divided into different courses, or disciplines, each of which was taught independently.
The disciplines were:
- Investigation and Patrol
- Legal Studies
- Use of Force
- Traffic Studies
- Firearms
- Driver Training
- Dress and Deportment (Drill)
- Physical Training

The term discipline, in this context, is used with intent instead of the original designation as a course.  This distinction was made shortly after I began the mapping of the curriculum.  Referring to each course as a discipline was a conscious effort to begin to break down the individual siloed nature of each topic and start to see the skills taught in the program as an integrated set of competencies to be mastered.  While this distinction started as purely semantic and a first step in changing thinking, the concept of an integrated curriculum is central to the new delivery model.  Specifically, the term “discipline” was chosen over “subject” to reflect the ongoing study necessary to stay current in each of the areas.  Laws, tactics, technology, and knowledge of human physiology are constantly changing, and instructors must maintain their currency in their areas by continual study, discussion, and learning.  This necessity of ongoing study seemed more suited to the term discipline.

Within the previous structure, each discipline had a designated number of hours in the curriculum and each topic within a discipline had a designated amount of time for teaching, resulting in a highly structured program.  The program delivery was primarily lecture-based, with several days per block (Blocks I and III) devoted to practical application.  Each block also had two “Simulation Days” where actors were hired to play members of the public and recruits responded to simulated calls.  On a typical Sim day there were six different call stations.  Depending on the class size, recruits were in groups of two to six.  Typically, one pair of recruits per group would get to respond to a call and the rest of the recruits in the group would observe.
During simulation days, police officers from the municipal departments came to the Police Academy to act as assessors for the recruits.  After each call, the assessors debriefed with the recruits to provide both verbal and written feedback.  The feedback was considered formative because the recruits had very little experience with practical application and the simulation days were not designed as examinations.  The summative assessment for the recruits was through written exams that tested recall and a limited amount of application through scenario-based questions.  A grade of 70% or higher was required to pass in Block I, and a grade of 80% or higher was required to pass in Block III.  Recruits who did not pass a written exam were required to re-take the exam after supplemental instruction.  A failure of a written exam also resulted in either half a demerit point or one demerit point assigned to the recruit.  All demerits were communicated back to the home department, and if a recruit accrued three demerit points over the course of any block of training they could be dismissed by their home department.

For Block II of training, recruits work in their hiring department under the supervision of a Field Training Officer (FTO).  Block II is where the majority of the consolidation of learning happens for recruits, as they are exposed to the practical aspects of policing.  Block II is a crucial part of the training.  Despite the importance of this training, the Police Academy has no input into the members who are chosen as FTOs.  These members are appointed by their municipal departments.  A three-day Field Trainer course is offered by the Police Academy and recommended for future FTOs.  The course is not mandatory, however, and departments will sometimes use an untrained member as an FTO if they do not have enough people trained or if their field trainers have transferred out of patrol to specialty units.
The lack of input into the selection of an FTO makes standardization of expectations extremely difficult.  Previously, the FTO was required to document the recruit’s progress in what was called a “Block II Book”.  The FTO documented the recruit’s actions at a given call, noting what was done well and what was not done well.  The recruit was supposed to discuss the content of the book with their FTO on an ongoing basis throughout Block II and sign off on weekly evaluations.  Very little, if any, communication came back to the Police Academy as to how a recruit was progressing through Block II.  At the end of the block, when recruits returned to the Police Academy for Block III, they brought their completed Block II book with them and it was reviewed by an instructor.  This review was often the first time the staff at the Police Academy were made aware of how a recruit was performing on the job.  The lack of communication between department, FTO, recruit, and Police Academy made preparing for Block III training difficult, as the instructors could not anticipate what training issues might need to be addressed.  Additionally, the previous assessment scale the FTO used to rate their recruit was based on where the FTO believed a recruit should be at that particular point in their training.  These expectations, of course, varied from FTO to FTO and were extremely difficult to standardize.  Because of this structure, the marks that a recruit received in their Block II assessments often did not show any progression or development over the course of their training.  If a recruit was consistently where their FTO expected they should be, then they would receive a constant grade across the entire Block II.  The marking scheme was also problematic if a recruit had two different FTOs over the course of their Block II training, as happened in some departments.  The two FTOs would have different expectations of where the recruit “should be” in their training.
This could result in the recruit’s marks suddenly changing drastically as a reflection of the new expectations.  Feedback in the previous program was limited to exam grades and written feedback from assessors on simulation days (Blocks I and III) and from the FTO (Block II).  In the blocks at the Police Academy, there was very little, if any, opportunity for a recruit to obtain specific formative feedback from an instructor.  This lack of formative feedback was a deficiency that was directly addressed in the new delivery model.

3.2 Design and Development

After observing many of the days of training for recruit training classes throughout 2013 and into 2014, and mapping the curriculum to the Police Sector Council Constable Competencies, I proposed that a new delivery model for the curriculum could significantly improve the quality of training over the then-current, primarily didactic, delivery model.  I recognized that successfully changing the delivery model required a systematic change at all levels of the Police Academy delivery.  As Biggs (1996) notes: “attempts to enhance teaching need to address the system as a whole, not simply add ‘good’ components, such as new curriculum or methods” (p. 350).  Indeed, the structure of a program, from content to delivery to evaluation, is not just a reflection of the material to be learned, but also a reflection of the professional values of the organization (Glazier, Bolick, & Stutts, 2017; Pannell, 2016).  Constructive alignment uses constructivism as a framework throughout the design and development of a program, ensuring that the learning opportunities are structured to promote development and achievement of the performances to be assessed and that the objectives, teaching activities, and assessment strategies are all structured at an appropriately high cognitive level based on the final expectations of the program (Biggs, 1996).
This was the strategy adopted for designing and developing the delivery model.  The literature was reviewed to determine how the delivery of the program would be structured.  Following the review of the literature, a generic template schedule was created for both Block I and Block III that incorporated all of the new elements described in Section 3.4, The New Program Structure.  This template schedule is included in Appendix A - Template Schedule for Competency-Based Delivery Model of Recruit Training.  The content from all disciplines in the program was broken down into topics, and these topics were mapped to the new template schedules according to when they would be required for responding to the type of call for that particular week of the program.  This created a spiral structure for the topics, where they are introduced early in the program and progressively revisited and built on over the course of training (Haberfield, 2013).  It also ensured that no content was lost in the transition between delivery models.  Once the proposal and template schedules were created, they were approved by the Director of the Police Academy and by the Policing & Security Branch of the Provincial Government.  Following approval, a meeting was set up at each of the departments, where the Director of the Police Academy and I met with the training departments and other senior staff of each Police Department to give them an overview of the proposed changes and obtain agreement on the plans to move forward.

3.3 Proposed Recruit Training Program Structure Delivery Model Changes

The proposed changes were to shift the Recruit Training program from the original delivery model of mostly lecture-based theory with sporadic practical, or “simulation”, days to a competency-based framework in keeping with the recently developed Police Sector Council National Framework of Constable Competencies.
The primary delivery model was proposed as a case-based method, designed following the ADDIE model of instructional design, which is common practice at JIBC and is well suited to competency-based education development (Wyrostek & Downey, 2017).  The proposed changes were based on assumptions about learning that are founded in competency-based learning theory, as outlined in Chapter 2.

Adult learning
- Learning is enhanced when it occurs in the context in which it will be applied.
- Adult learners are motivated when they can see the relevance and application of what they are learning.
- Adult learners can take ownership of their learning process and work to meet agreed-upon goals.

Developmental progression of learning
- Learners enter the program with different strengths and experiences based on their background.  They will, therefore, achieve milestones at different rates.  Time and support need to be provided throughout the learning process.
- Each week of the program should have time that is preserved for the learners to work on developing their competencies and meeting their learning goals.  Instructors should be available during this time to provide assistance.
- Recruits are actively involved in goal setting and creating their training plans as they progress through each component of the curriculum.

Integration
- Theory should be taught in an integrated, case-based manner.
- Theory-based exams should be integrated across all disciplines.
- Simulation-based exams should be integrated across all disciplines.

Framework
- The National Use of Force and the Crisis Intervention and De-escalation Models should be the framework for articulation across all areas of the program.
- Leadership and mentorship competencies should be developed and fostered within the program.

Assessment
- Competence can be demonstrated through portfolio-based assessment, where the learner collects evidence that they have reached an acceptable level of ability.
- Portfolios should demonstrate progression, not just competence.
- Simulation days should be integrated, practical exams.  Recruits need ample opportunity to practice application before these exams, so practical sessions should exist every week.
- Assessments from the practical sessions and simulation exams will be used as evidence of meeting competencies in the portfolio.
- Developing portfolios to demonstrate achievement of competencies will better prepare recruits for the departmental promotion process.

Feedback
- Specific, focused feedback is central to progression of learning and development of skills.  Time needs to be dedicated to providing this sort of feedback throughout the learning process.
- Instructors will serve as facilitators and coaches, providing formative feedback to recruits and helping them devise their personalized training plans.
- Self-assessment builds the skills and framework for developing into a reflective practitioner.
- Peer feedback builds effective communication skills and also promotes reflective learning.
- If learners are asked to self-assess and/or provide feedback, then they also need to be taught how to do these things.

Cultural competence
- Practical sessions should reflect the current realities of policing in the communities recruits serve.
- Recruits will serve as actors for practical scenarios.  Each scenario will have a brief information write-up about the people portrayed in the simulation and relevant information about any marginalized groups represented (i.e., important statistics, special considerations, etc.).
- Following practical sessions, each actor will be required to briefly present what they learned in that role to the rest of the class.  This will also build presentation and communication skills in recruits.

The proposal for the new delivery method included a drastic reduction of the time spent in lecture in favour of using that time for case-based and practical application.
Table 3-1 shows a comparison of the time allotted to various elements of the program ten years prior to the proposal (2005), the year before implementation of the new model (2015), and the first class in the new curriculum (2016).  Minimal change in the structure of training occurred between 2005 and 2015 and, if anything, the hours spent in lecture had increased.  The observed decrease in driving time between 2005 and 2015 was to accommodate additional mandated training in the program, such as Crisis Intervention and De-Escalation (CID), without increasing the length of training.  Prior to the new model, no time was spent in case-based application, nor in receiving feedback from instructors or working on individualized training plans.  With the introduction of the new delivery model, there is dedicated time in the curriculum for all of these activities.

Program element (hours; values given as Class 99, 2005 / Class 148, 2015 / Class 152, 2016)

Hours in simulations or practical application
  Block I: 35 / 42 / 89
  Block III: 58 / 52 / 92
Hours in PRIME training
  Block I: 21 (3 days straight) / 28 (4 days straight) / 23 (integrated across curriculum)
  Block III: 7 / N/A / 3 (integrated)
Hours driving
  Block I: 56 / 28 / 28
  Block III: N/A / 7 / 7
Hours in firearms
  Block I: 56 (all indoor) / 58 (all indoor) / 66 (7 outdoor)
  Block III: 14 (7 indoor, 7 outdoor) / 14 (7 indoor, 7 outdoor) / 14 (7 indoor, 7 outdoor)
Hours in Use of Force
  Block I: 38 (5 days straight) / 40 / 36
  Block III: 24 / 30 / 19
Hours in PT
  Block I: 27 / 23 / 19
  Block III: 28 / 19 / 18
Hours in Drill
  Block I: 12 / 11 / 10
  Block III: 12 / 10 / 9
Hours in written exams
  Block I: 10 / 13 / 2 exam days combining written and practical
  Block III: 6 / 6 / 2 exam days combining written and practical
Hours in practical exams
  Block I: N/A / N/A / 2 exam days combining written and practical
  Block III: N/A / N/A / 2 exam days combining written and practical
Hours in lecture
  Block I: 154 / 174 / 37
  Block III: 82 / 90 / 7
Hours in case-based application
  Block I: N/A / N/A / 36
  Block III: N/A / N/A / 28
Hours for diversity projects
  Block I: N/A / N/A / N/A
  Block III: N/A / 7 / 6
Hours for CID training
  Block I: N/A / N/A / 7
  Block III: N/A / 7 / integrated
Hours for directed study (to work on individualized training plan skill development)
  Block I: N/A / N/A / 40
  Block III: N/A / N/A / 16
Hours receiving feedback from instructors and developing individualized training plans
  Block I: N/A / N/A / 13
  Block III: N/A / N/A / 7

Table 3-1  Comparison of program elements 10 years before the program change proposal (2005), before change implementation (2015), and in the new delivery model (2016)

3.4 The New Program Structure

The structure of the new program remains divided into four different blocks.  The duration of Blocks I, III, and IV remains unchanged.  The duration of Block II was extended from the previous 12-17 weeks to 18-21 weeks, schedule dependent, to allow for consistent overlap between the senior (Block III) and junior (Block I) recruit classes.  Several components of the program span all of recruit training: the longitudinal themes and the mentorship program.

Four longitudinal themes are interwoven throughout the program: ethics, professional communication, officer wellness, and fair and impartial policing.  Each of these themes represents values the recruit program tries to cultivate across training, and each is best addressed on an ongoing basis rather than in a short stand-alone session.

Recruits are introduced to the concepts of ethics and fair and impartial policing in one of their introductory sessions in Week 1.  For ethics, they talk about the importance of ethical standards and accountability as well as potential sources of unethical behaviour.  After that, concepts related to ethics are integrated into a variety of case and scenario components.  For fair and impartial policing, they are introduced to the concept of implicit bias and its potential negative consequences for police investigations.
As a regular component of their debrief on scenarios, recruits are then asked to identify any actions or experiences that were personal triggers for them, to help identify sources of potential bias. They are also asked to identify any strategies they used to ensure their investigations were fair and impartial. This continued reflection is intended to increase recruits' self-awareness with respect to implicit bias (Ossa Parra, Gutiérrez, & Aldana, 2015).

Also as a component of their debrief, recruits are asked to identify strategies they used to maintain their composure. This question is intended as a self-assessment, or check-in, with respect to officer wellness. In their training plans, recruits use goal-setting strategies that are monitored by their mentors. They also participate in a session on visualization and learn about tactical breathing. These are all identified as strategies to promote skills that maintain officer wellness over the course of a career in policing. Recruits also participate in the Road to Mental Readiness training, which is proprietary training originally developed by the Department of National Defence and then modified for the police context by the Mental Health Commission of Canada.

Lastly, the professional communication theme is integrated across all aspects of training. Recruits formally focus on communication skills during a full training day early in Block I where they respond to nine different calls, all with a focus on communication. This day, COPS I – Effective Communication, uses actors in the scenarios and includes calls involving subjects with mental health concerns, PTSD, autism, and abuse. The recruits are given feedback on their ability to communicate effectively, empathetically, and professionally during the scenario. Later in Block I, recruits participate in another full training day on Crisis Intervention and De-escalation.
They are introduced to this model on Effective Communication day but expand on it here. The day includes learning the BC CID model as an effective approach to de-escalating a situation with a person in crisis, particularly a mental health crisis.

In addition to these full days of training, recruits participate as actors in scenarios for other recruits. This experience is intended both to transfer tacit knowledge about policing and to allow recruits to experience the impact of the different communication strategies used by their peers. Through this experience, they should develop a better understanding of what it is like to be the subject of a police investigation and how their actions can impact a member of the public. After each scenario day they are prompted to reflect on what they learned by participating as an actor. This critical reflection should bring to the forefront any implicit knowledge gained through the experience (Ossa Parra et al., 2015).

Through their frequent scenario practice, recruits continually receive feedback on their communication and their ability to build rapport. Assessment of their communication skills is also incorporated into their practical exam scenarios. Finally, recruits' scenarios are recorded and they view their handling of each call. Watching the recordings helps recruits appreciate how their actions were perceived by the subjects in the call and also illustrates how their perceptions in the moment may differ from what they see on the video or what the subjects experienced.

The mentoring program is another aspect that is integrated throughout recruit training. In the second week of training, recruits are assigned a member of the instructional staff who follows their progress throughout Blocks I through III. This system follows an integrated mentoring structure whereby the mentors follow a model that combines the pastoral, professional, and curriculum models of mentoring (Livingstone & Naismith, 2017).
The model integrates the continuity of the same mentor for the entire training program who meets individually with recruits (from the pastoral model of mentoring), the referral to specific support services or other instructors as needed (from the professional model of mentoring), and the integration of mentoring into curriculum class time (from the curriculum model of mentoring) (Livingstone & Naismith, 2017). Under this model, the mentor provides the recruit with developmental, formative feedback, reviews their scenario debrief forms and weekly training plans, and has regular individual meetings with their recruit. This strong relationship is intended to help recruits develop their self-monitoring skills by facilitating an honest assessment of their strengths and weaknesses, providing support and guidance in developing an individualized training plan to use directed study time to close the gap between their current performance and their performance goals, and monitoring the interpretation and incorporation of feedback (Black & Wiliam, 1998; Price et al., 2010; Slavich & Zimbardo, 2012).

In educational programs, feedback is frequently provided to students, but how the students use this feedback is seldom monitored (Price et al., 2010; Sadler, 2010). Similarly, reflection is often promoted but little attention is paid to the nature of the reflection and whether students are engaging in surface reflection or critical reflection (Alfred et al., 2013). The structure of the mentorship program in Recruit Training is designed to monitor the recruits' incorporation of feedback and support their continued growth and development throughout the program. The mentorship program is designed to provide support and feedback for all recruits in the program, not just those who are weaker: each recruit meets with their mentor regardless of their perceived strength in the program.
The mentors also hold their recruits accountable for their performance and convey some of the tacit organizational culture aspects of policing through their interactions with the recruits. Success of this mentorship model requires a clear understanding of the role of the mentor from the staff, instructors, and students (Livingstone & Naismith, 2017).

3.4.1 Block I

In Block I, the focus of training is building the basic skills required of a patrol-level police officer through exposure and repetition. During the 13 weeks of Block I at the Police Academy, each week is organized around one (or two) general types of call(s). The calls were selected by determining the most frequent calls encountered by patrol-level police officers. The in-class material is presented in an integrated, interdisciplinary manner wherever possible. Interdisciplinary learning is aligned with the constructivist perspective by focusing on the relationships between concepts and how the learner constructs their knowledge (Stentoft, 2017).

The general structure of a week in Block I of the new program consists of the following elements:

- Pre-reading quiz on the theory necessary for the week.
- Application of theory in class through case presentations. The pre-reading content will not be repeated with lectures.
- Just-in-time information about specific skills relevant to the case (call) topic, such as filling out specific forms, reading specific documents, etc.
- Practice putting the theory into action through practical sessions.
- Reflection on the practice as well as writing reports.
- Ongoing formative feedback and support provided by an assigned mentor who is a member of the instructional staff.

The design of the week progresses from the basic level of understanding (assessed by the pre-week quiz) to the ability to apply and synthesize the material (assessed during practical sessions and reflection).
Over the course of the week, the program provides opportunities for recruits to acquire and consolidate knowledge at both the surface and deep levels, as well as to transfer this knowledge to new situations (Hattie & Donoghue, 2016). Hattie and Donoghue (2016) differentiate between acquiring knowledge, which is a function of short-term memory, and consolidating knowledge, which is a function of long-term memory. They also advocate for a conscious selection of learning activities to promote each of these processes. As the recruits move through the week, they have the opportunity to acquire and consolidate surface-level knowledge through the pre-reading and completion of the associated quiz, which align with learning strategies identified as facilitating surface-level learning (Hattie & Donoghue, 2016). As the week progresses, and recruits apply their new knowledge to case studies through small-group discussions with peers and clarify their understandings through interactions with instructors during directed study, they have the opportunity to elaborate on what they have learned, organize the knowledge around real-life contexts and problems, question their own understanding, verbalize their decisions, and engage in critical thinking and collaborative learning, all of which are strategies to acquire and consolidate deep learning (Hattie & Donoghue, 2016). Finally, they have the opportunity to transfer their knowledge and understanding to new situations through the scenarios, where they are able to choose which strategies they will use, evaluate their choices, and receive feedback on their performance (Hattie & Donoghue, 2016).

3.4.1.1 Weekly pre-reading and quizzes

Because the focus of classroom time is on the application of concepts, the recruits are required to come to each week with a basic level of knowledge relevant to that week's call.
This is accomplished through pre-reading of manual chapters and successful completion of a knowledge-based quiz online before the start of the week. Recruits may complete the reading and quiz at any point before the start of the week, so they have flexibility in when they complete the work.

The quiz is completed electronically on their own time, outside of class, through the JIBC learning management system, and recruits must score 100% on the quiz to be considered as having successfully completed the pre-work. They can take the quiz as many times as necessary to achieve this grade and are free to work in groups and with their course reading material. Once a quiz is submitted, the students receive immediate feedback through the LMS that indicates the correct answer to all questions. Failure to complete the quiz at the required level results in the recruit being assigned one demerit; recruits are allowed six demerits in one Block of training before they are sent back to their home department for re-evaluation of suitability.

In preparation for delivery of the new program model, the existing discipline manual chapters were re-written and significantly reduced in volume to focus on the core concepts. This revision reduced the overall length of all combined manuals by seven hundred pages. In Block I, the recruits have approximately one thousand pages of reading spread out over the 13 weeks.

The structure of the program, with pre-reading followed by in-class application, borrows from recent models of the "flipped" classroom, whereby students are exposed to course content, typically as short video clips, as homework and can then focus their classroom time on application and inquiry. This approach has developed to address the issue of content overload in the curriculum combined with the need to foster critical thinking and decision-making skills in students (Bristol, 2014; Burke & Fedorek, 2017; Heijstra & Sigurdardottir, 2017).
The approach is now used in classrooms in both K-12 and post-secondary systems, although it is becoming particularly common in STEM classrooms. Key to the success of the flipped classroom approach is engaging the students to complete the pre-class work and ensuring that classroom time is actually used for application and not for attempting to incorporate additional content (Braun, Ritter, & Vasko, 2014; Burke & Fedorek, 2017; Heijstra & Sigurdardottir, 2017).

In the Recruit Training program, the use of weekly pre-reading quizzes was incorporated to ensure that recruits completed the pre-class work and came into the classroom with a base level of knowledge to start to apply the concepts to cases and scenarios. The use of quizzes also draws on the concept of the testing effect, which has demonstrated that testing with immediate feedback enhances recall and retention (Agarwal, Finley, Rose, & Roediger, 2017; Butler, Karpicke, & Roediger III, 2008; Dunlosky et al., 2013; Fazio, Huelser, Johnson, & Marsh, 2010; Karpicke & Roediger III, 2008; Roediger III & Karpicke, 2006b; Wiklund-Hornqvist, Andersson, Jonsson, & Nyberg, 2017). The testing effect, or retrieval practice, shows an increase in long-term recall over studying alone, both in the laboratory setting (Agarwal et al., 2017; Karpicke & Roediger III, 2008; Wiklund-Hornqvist et al., 2017) and in educational settings (Holmes, 2015; Holmes, 2017; Roediger III & Karpicke, 2006a; Roediger & Butler, 2011; Wiklund-Hörnqvist, Jonsson, & Nyberg, 2014). Continuous assessment, whether through online tests, in-class low-stakes tests, or assignments, has been demonstrated to increase student engagement and understanding (Holmes, 2015; Holmes, 2017; Trotter, 2006; Wiklund-Hörnqvist et al., 2014) and time spent engaged with the curricular material (Holmes, 2017).
The mechanism of action of the testing effect is believed to be effortful retrieval (Roediger III & Karpicke, 2006a; Roediger & Butler, 2011), whereby retrieval practice, or recalling information to answer a test question, strengthens the knowledge and its accessibility in the brain (Roediger III & Karpicke, 2006a; Roediger & Butler, 2011). Associated with this mechanism is the concept of desirable difficulties, whereby students learn more from successfully completing something that is more difficult, such as answering a test question, than from simply being told the correct answer (Fazio et al., 2010; Roediger III & Karpicke, 2006a).

3.4.1.2 Classroom case-based application

Once the recruits have successfully completed the weekly reading and quiz, their time in class is focused on progressively more complex application. The content from the readings is not re-delivered in lectures during class time, which is important for the success of a flipped classroom model (Braun et al., 2014). The cases were developed based on actual calls instructors had taken and are structured with prompting questions to draw out legal, patrol, tactical, officer safety, investigative, and traffic-related topics, as relevant. Additionally, care was taken in writing the cases to ensure that a variety of ethnicities, genders, and socioeconomic backgrounds were represented as victims, suspects, and witnesses, to avoid conveying any implicit bias to the recruits.

In a given case study session, recruits work as a group to complete a number of different cases designed to cover the required topics. Some cases are designed as short, single-page vignettes and others as longer progressive-release cases where recruits receive more and more information about their investigation as they work through the case.
Recruits work in small groups of six and discuss the answers to each of the prompting questions before moving on to the next sheet or the next case. They pace their own discussions and learning, so that they are able to spend more time on one question if a member of the group is struggling with that aspect of the case. Through the application of their knowledge from the pre-reading and the process of explaining why they would choose a particular course of action, the recruits should develop a deeper understanding of the material (Aditomo et al., 2013; Dunlosky et al., 2013). The process of recalling information to apply it to the case studies aligns with the concept of desirable difficulties, as described in the section on the testing effect: students derive more long-term benefit from learning activities that require them to struggle with the material, and that may be slower initially, than they do from simply being told the answer (Fazio et al., 2010). A number of instructors, specializing in different disciplines, are present to monitor the recruits' progress and understanding and to help answer questions if the recruits are struggling with a particular concept. Typically each instructor is responsible for monitoring two groups throughout the session. Once the groups have worked through the case sheets, there is a debrief with the whole class that touches on key issues they discussed in their groups. This gives the instructors the opportunity to ensure there is consistent understanding across the class. The recruits post their answers to the prompting questions on the learning management system so that they are available for study purposes after the class.

The recruit groups are shuffled each week so that they are always working with a different group of people and hear a variety of perspectives over the course of the Block.
The groups are balanced based on recruit performance so that there is peer support for recruits who may be struggling with the application of concepts.

3.4.1.3 Directed study time

Almost every week in the Block I schedule has directed study time for recruits to work on the areas where they most need to improve in the program. Recruits complete a weekly training plan that outlines their strengths and their areas for improvement in the program, sets training goals, and outlines how they plan to use their directed study time to achieve these goals. This training plan is uploaded to the learning management system before the start of the week and is reviewed and approved by their mentor. During directed study time, the Use of Force instructors are available in the gym, the Firearms instructors are available on the range, and other instructors are available in the classroom to assist recruits with their learning. Recruits can move between activities and classrooms, so they can work on several different areas in one directed study period. They are also able to take radios and practice additional scenarios with each other. In the classroom, they are instructed to find other recruits who would like to work on the same topic so that they are engaged in small-group discussions while the instructors circulate to monitor discussions and answer questions. Directed study time is active and participatory learning, and recruits are discouraged from using it to complete their reading. Following directed study, recruits submit a simple form that outlines how they actually used their time in case there were discrepancies between what they planned and what actually happened.

3.4.1.4 Practical scenarios

As the week progresses, the recruits apply what they have learned in their reading and case studies to practical scenarios.
In some weeks this is done through practical scenario days and in other weeks through Core Operational Policing Skills (COPS) days, which are full days of training tailored to a specific topic or skill. For the practical scenario days, the recruits have the opportunity to practice responding to the type of call for that particular week. Depending on the week and the call, they may respond as a single officer or as a partnership. Each scenario has specific learning objectives, legal knowledge questions that are asked of the recruits, and an associated checklist feedback form that details the expected response to the call. Each recruit is able to participate in several different calls during the day, so they practice responding to a variety of situations. Their performance is recorded for formative feedback purposes. This recording is retained only by the recruit; the Police Academy neither watches nor keeps a copy of the recording. Where the Block I scenarios overlap with Block III training, select Block III recruits are chosen as the "lead recruit" to run the scenario and provide feedback to the Block I recruits. Each lead recruit has a team of Block III recruits who are actors, filmers, or dispatchers for the Block I recruit scenarios. An instructor is also present to monitor the feedback provided to the Block I recruit.

3.4.1.5 Practical Scenario Acting

In addition to participating in practical scenarios by taking calls, the Block I recruits also participate in Block III scenarios as actors and by filming the scenarios taken by the senior recruits. This participation allows junior recruits to learn from their senior peers some tacit knowledge about how to respond to calls, communicate effectively, and act like a police officer.
Tacit knowledge is informal knowledge that is often difficult to articulate and is acquired through experience as expertise develops (Collins, 2001; Farrar & Trorey, 2008; Matthew, Cianciolo, & Sternberg, 2005; Matthew & Sternberg, 2009; Sternberg, 1999; Sternberg & Hedlund, 2002; Taylor et al., 2013). The ability to use tacit knowledge to solve problems is often considered a hallmark of an expert (Matthew et al., 2005; Matthew & Sternberg, 2009) but can be present at all stages of expertise development as learners begin to acquire tacit knowledge through their own experiences (Farrar & Trorey, 2008). The scenarios are structured in such a way that the Block I recruits will observe a relevant scenario before they have to perform the same skill themselves (e.g., seeing a senior recruit respond to a Mental Health Act call in the week before they learn about the Mental Health Act). Reflection can help to develop tacit knowledge by making explicit what has been implicitly learned (Matthew et al., 2005; Matthew & Sternberg, 2009; Taylor et al., 2013). To aid in this development process, recruits are asked to reflect on what they learned by being an actor in the scenarios as a regular part of their scenario debrief self-assessment, as described in the following section.

3.4.1.6 Practical Scenario Self-Assessment and Report Writing

The day after the practical scenarios, recruits have a "debrief" period built into the curriculum where they are tasked with watching the videos from their scenarios, completing a self-assessment form, reviewing the feedback from the assessor, and working on their report writing skills. Students may struggle to incorporate feedback not because they are uninterested but because they may not understand the feedback, or because their view of their performance may be skewed by what they intended to do (Sadler, 2010).
Watching the video of their scenarios and comparing it with the assessment form and the verbal feedback they received from the assessor is designed to help recruits align their perceptions of their performance with the perceptions of the more experienced assessors. The debrief form includes self-assessment questions that ask recruits to identify their strengths and areas for improvement, what they learned from participating as an actor, the most valuable piece of feedback they received, the strategies they used to maintain their composure throughout the call, any personal triggers they may have encountered, and any strategies they used to ensure their investigation was fair and impartial, and to map the calls to the PSC constable competencies. These structured reflective questions cover content reflection, process reflection, and premise reflection to stimulate the transformative learning process (Alfred et al., 2013; Ossa Parra et al., 2015). The form is completed and submitted to the learning management system, where it is reviewed by their mentor.

During the debrief time, the recruits also have one hour to work on a report based on one of the scenarios they completed the previous day. This report writing time uses dedicated computers that are connected to a PRIME training server, so recruits are able to practice using the software they will be using while on patrol. The reports focus on the language and content expectations and on the proper use of the PRIME software. An instructor reviews each report and provides feedback to the recruit. During the debrief time, recruits also meet individually with their mentor to review their performance to date and to discuss their training plan to ensure that their directed study time use reflects their actual strengths and weaknesses in the program.
3.4.1.7 COPS Days

Core Operational Policing Skills (COPS) days are full days of training that focus on a specific topic or skill, such as effective communication, basic investigations, containment and searching, or high-risk vehicle stops. Many of these full days of training were incorporated into the old delivery model, but several key adjustments were made. Effective communication was moved to early in the Block I schedule to introduce the importance of communication techniques throughout all police encounters. Crisis Intervention and De-escalation was moved from Block III into Block I so that recruits have those essential skills before they start their field training in Block II. Also, an outdoor range day was incorporated into Block I training so that recruits have experience moving and shooting with their own pistol before field training. Additional new COPS days include a scenario day to practice calls and a use of force qualification day.

3.4.1.8 Skills Development – Use of Force, Firearms, and Driving

Some basic physical skills are required of all police officers. Use of Force, or force options training, includes soft and hard physical control as well as intermediate weapons such as batons and OC spray. While some of the concepts involved in Use of Force training, such as the legal aspects of when force is allowed, are integrated into training, the acquisition of the core physical skills remains a separate component of the curriculum. Similarly, firearms training is taught exclusively by firearms instructors on the firearms range, and driving is taught separately by specially trained instructors at the driving track.

3.4.1.9 Assessment

Central to assessing whether recruits have reached the required level of the competencies is the ability to test recruits using practical, real-life, and authentic assessment tools. In Block I, recruits have a Progress Assessment exam in Week 5 of their training and a Final Exam in Week 12 of training.
On each exam day, recruits complete five written exam stations and four practical scenarios. The written exam stations are a mix of multiple choice exams and practical exercises, such as completing a ticket or release documentation based on a written description of an event. The written exam stations are tailored to various levels of understanding, including basic memorization, critical thinking, scenario-based questions, and completion of a real-life task. The variety of exam questions reflects the diverse knowledge and application requirements of a patrol-level officer (Brady, 2005). The practical scenarios are two stations taken as a single police officer and two stations taken as a partnership. The scenarios are based on the types of simulations that the recruits have practiced, and received formative feedback on, during their practical scenario days, and they reflect the complex performance expectations of real-life police work, which is an essential component of authentic assessment (Narayan et al., 2013). The practical exam stations are assessed by external police officers (or retired police officers) who have been trained in assessment and who use standard rubrics developed for the scenarios. The recruits respond to the call, and the scenario is then followed by a short five-minute oral exam where the assessor asks them to articulate the grounds for their actions.

If a recruit fails a written station or a scenario, they are assigned a demerit and remediation is planned with their mentor, either in directed study time or by reviewing the rubric and re-doing the scenario. Thus the exam scenarios are both assessments and learning opportunities. Recruits are capped at four demerits on a given exam day so that no recruit can fail out of training based on one day's performance, as they are only allowed to accrue six demerits in one block of training.
In addition to the exam days, recruits must also complete an "Application for Advancement", or assessment portfolio, that outlines evidence that they have reached the required level of each of the competencies for that stage of their training. Assessment portfolios are considered a form of authentic assessment because they allow students to demonstrate their progression in learning and their critical reflection while using concrete examples to show that they have achieved the required level (Narayan et al., 2013). In Block I, the evidence consists mainly of feedback from scenario days and rubrics from exam scenarios. The recruits must upload all of their documentation and then complete a one- to two-page summary for each of the competencies that outlines how the evidence shows their progression of skill and that they have reached the required minimum level in each of the competencies. Table 2-1 summarized each of the nine core Constable competencies and proficiency levels 1 and 2. Recruits are required to demonstrate they have reached level 2 proficiency in each of the competencies by graduation at the end of Block III but may progress through the competencies at different rates over the three blocks of training. In order to map the predicted progression of a recruit through training, the competency descriptions and behavioural indicators were used to identify where there were learning opportunities for recruits to develop in each of the competencies. This information is summarized in Table 3-2 and shows that by the end of Block I training, recruits are expected to be at level 1 proficiency in organizational awareness, problem solving, risk management, stress tolerance, and written skills. They are expected to be at level 2 proficiency in the remaining competencies: adaptability, ethical accountability, interactive communication, and teamwork.
The expected progression through proficiency levels is combined with the global judgements from ten Cate and Scheele (2007) outlined in Table 2-2, indicating that a recruit is expected to be able to act under full supervision by the end of Block I, under moderate supervision by the end of Block II, and independently by the end of Block III.

Competency                                  Block I               Block II              Block III
                                            (act under full       (act under moderate   (act
                                            supervision)          supervision)          independently)
-----------------------------------------------------------------------------------------------------
Adaptability                                Level 2 proficiency   Level 2 proficiency   Level 2 proficiency
Ethical Accountability and Responsibility   Level 2 proficiency   Level 2 proficiency   Level 2 proficiency
Interactive Communication                   Level 2 proficiency   Level 2 proficiency   Level 2 proficiency
Organizational Awareness                    Level 1 proficiency   Level 2 proficiency   Level 2 proficiency
Problem Solving                             Level 1 proficiency   Level 2 proficiency   Level 2 proficiency
Risk Management                             Level 1 proficiency   Level 2 proficiency   Level 2 proficiency
Stress Tolerance                            Level 1 proficiency   Level 2 proficiency   Level 2 proficiency
Teamwork                                    Level 2 proficiency   Level 2 proficiency   Level 2 proficiency
Written Skills                              Level 1 proficiency   Level 2 proficiency   Level 2 proficiency

Table 3-2 Expected progression through proficiency levels 1 and 2 in each of the core Constable competencies

3.4.2 Block II

Block II training is the field training component that happens in the recruits’ home department, under the supervision of a Field Training Officer (FTO), who is a specially trained, experienced member of patrol.  The FTO is responsible for providing feedback to the recruit, documenting their progress, and assessing their performance.  Documentation is completed and returned to the Police Academy at the end of Block II when the recruit returns for Block III.  During Block II, the focus is on applying the skills learned in Block I to a real policing environment and progressing in development of the core competencies.
In the new delivery model, Block II was lengthened to 18-21 weeks, depending on scheduling, to allow for a consistent overlap of Block III and Block I.  Structure was also introduced into the Block II experience.  Previously, much of how field training unfolded was left to the discretion of the FTO.  This lack of structure led to a great deal of inconsistency in experience for Block II recruits: some recruits would be driving the police car on their first shift, while others might not drive until their ninth week.  The structure introduced into Block II is intended to provide guidance for the FTOs and increase the consistency of approach for recruits in training.  Standardized rubrics were also introduced to assess recruits’ performance as they move through Block II.  Block II is now divided into Phase I and Phase II.  Phase I is a short introductory session where the recruits focus on their legal knowledge, their officer safety and officer presence, and learning to use the computer software in a live environment instead of a training environment.  Phase I lasts a minimum of one “work period” (one work period is four shifts) and a maximum of three work periods.  During this time the recruit does not drive the police car and focuses on becoming comfortable with the basic skills.  Once the recruit has successfully completed Phase I by consistently Meeting Expectations or Exceeding Expectations on the rubric, they move into Phase II.  In Phase II, the recruit takes progressively more responsibility from their FTO.  The assessment criteria continue to assess the basics of legal knowledge, officer safety, and officer presence, but now also include the PSC constable competencies.  After approximately fourteen weeks, the previous length of Block II, there is a check-in to ensure that the recruit is meeting or exceeding expectations.
If not, the extra time in Block II should be used to provide extra support to the recruit to ensure that they meet the core competencies and avoid backtrooping¹ the recruit.  If the recruit is progressing as expected, the extra time can be used to meet the department-specific tasks and objectives that are included in the required competencies.  Also in Block II, recruits complete a “Diversity Project” where they work in small groups of three and identify an underserved or minority population in the community they have been hired to serve.  Recruits meet with members of their community and interview them about their lives and their experiences with police.  When they return for Block III, recruits deliver a presentation based on this project.  Typical project topics include Indigenous populations, sex trade workers, vulnerable youth, and homeless populations.  This Diversity Project has been a successful component of Block II training for many years.

New documentation was also introduced into Block II.  Recruits now complete a short monthly summary of their performance and submit this on the Police Academy learning management system.  This documentation is monitored by the recruits’ mentors to ensure that they are progressing as expected and to initiate conversations with the department to provide additional support if needed.  Recruits also complete an Application for Advancement assessment portfolio at the end of Block II.  The majority of evidence in this assessment portfolio is taken from calls the recruit has encountered during their field training.

¹ Backtrooping refers to the practice of holding back a recruit who is not passing Block II so they have more time to spend on the road with an FTO.  Typically a recruit who is backtrooped will have their Block II extended and start Block III with the next class.
3.4.3 Block III

Block III training is designed to build on the experience in Blocks I and II by applying advanced patrol and investigative topics and by developing mentorship abilities in the senior recruit class.  Overall, the general goal in Block III is to minimize time spent in the classroom and maximize the time spent focused on practical applications.  Unlike Block I, the structure of Block III mixes the types of calls in both case studies and practical scenarios, to be more reflective of an actual patrol shift.  The longitudinal themes from Block I continue to be integrated into training in Block III as the recruits move to independence in each of the core competencies.  Several key new components of Block III training include teaching sims, longitudinal cases, and mentorship.

3.4.3.1 Pre-reading and quizzes

The pre-reading in Block III is not prepared in manual chapters as it is in Block I.  In Block III, recruits are directed to read specific sections of the Criminal Code of Canada, or of different Provincial Acts.  Structuring the advanced reading in this way is designed to build the recruits’ ability to read and interpret various pieces of legislation.  Similarly, for the majority of the weeks in Block III, the pre-reading is not associated with content knowledge quizzes; recruits must simply acknowledge that they have completed the required reading and are ready to discuss and apply it in class.  Recruits are still able to clarify any confusion during directed study time.

3.4.3.2 Teaching sims

In the previous Block III, there were a large number of guest speakers from specialty units who came to address the recruits.  While these sessions were interesting, they were not relevant to all of the recruits because many of the smaller departments do not have these specialty units.
In redesigning the curriculum, these guest speakers were engaged as subject matter experts to help develop “teaching sims”, in which recruits respond to a short scenario, followed by a longer debrief where the subject matter expert or another instructor guides the recruits through what a patrol member would need to know when responding to this type of call.  The teaching sims occur on two separate days, with one day focusing on vulnerable populations and including teaching sims on the Youth Criminal Justice Act (YCJA), missing persons, hate crimes, elder abuse, sex assault, and child abuse.  The remaining teaching sims include: internet investigations, credit card fraud and investigations, cell phone investigations, prohibited weapons, source handling, and criminal harassment.  Through these teaching sims, the recruits gain hands-on experience in responding to these advanced types of calls.

3.4.3.3 Longitudinal Cases

Longitudinal cases are investigations that carry over multiple weeks of training.  The cases are delivered through a computer-based simulation program that uses a combination of video, photo, and text injects and assigns recruits specific tasks, questions, or assignments to progress through the investigations.  One of the longitudinal cases is a continuation of the sex assault teaching sim.  Recruits work on the investigations in small groups and their answers are monitored by instructors.  As assignments, they must complete tasks like writing an operational plan, writing several different kinds of warrants, and writing an arrest plan.  These assignments are reviewed by an instructor who provides the recruits with feedback on their work.  Inspired by the work of Werth (2009, 2011) in police training, the longitudinal cases are designed to help the recruits build their advanced investigation skills as well as manage a case load of ongoing investigations.
3.4.3.4 Advanced Operational Policing Skills (AOPS) days

Similar to the COPS days in Block I, the AOPS days are full days of training that build on specific advanced topics.  Some of the AOPS days remain unchanged from the old curriculum, and new additions include a final Use of Force sign-off day, the teaching sims days, and a new advanced outdoor range day.

3.4.3.5 Mentoring Junior Recruits

It is important to prepare recruits to take on leadership roles within the communities they serve, so the Block III curriculum looks to build leadership skills in the recruits through structured mentoring of the junior Block I recruits.  Select Block III recruits are chosen to be a “Lead Recruit” for Block I scenarios.  A Lead Recruit is responsible for a team of Block III recruits whom they assign to roles as actors, filmers, and dispatch for the Block I scenarios.  The Lead Recruit is responsible for ensuring the scenarios are set up properly, for running the scenarios, and for providing performance feedback for the Block I recruits.

At the start of Block III, all recruits receive specific training in how to give feedback to help them develop this skill.  After this session, recruits who are selected as Lead Recruits provide feedback to the Block I recruits.  The Lead Recruits are provided with the instructor guide for the scenarios they will be running in advance so that they can review the material and prepare for their role.  Providing feedback to peers can also help solidify a student’s own understanding of their performance (McCarthy, 2017).  An instructor is present to monitor the Lead Recruit’s performance and to provide them feedback on their leadership skills and the feedback they delivered, but the instructor does not intervene in the scenario or feedback unless something unlawful is given as feedback.  The Lead Recruits are changed each week to give the opportunity to as many recruits as possible.
This experience is also an opportunity for the Block III recruits who are not selected as Lead Recruits to develop their teamwork skills by working together to support the peer who has the responsibility of running the scenario.  At the end of Block III, the recruits who were selected as Lead Recruits are provided with a letter of commendation that is forwarded to their home departments in recognition of their demonstrated leadership for the junior recruits.

3.4.3.6 Assessment

As in Block I, there are two exam days in Block III: an entrance exam during the first week of Block III and a final exam during week seven.  The exam days follow the same format as in Block I, with five written stations and four practical scenarios assessed using standardized rubrics.  Recruits also complete an Application for Graduation assessment portfolio that indicates how they have achieved the required level in each of the core competencies and are able to work at an independent level.  The majority of evidence in this Application for Graduation is calls from Block II, supplemented by feedback and exam rubrics from Block III training.

3.4.4 Block IV

Block IV is the probationary period after graduation from the Police Academy but before full certification as a Certified Police Constable.  To date, the structure of Block IV is unchanged with the new delivery model.

3.5 Development

The development of the curriculum began with familiarizing the cohort of instructors with the new model and its underlying educational philosophy.  A series of meetings was held with the instructional cohort in which each session in each week of the template schedules was reviewed and the topics that would fit into each session were identified.  Also, through this process, elements that were already present in the curriculum, such as guest speakers, were questioned as to their contribution to the overall learning of the recruits.
Some sessions were identified as fun but not contributing to overall development, or simply as not helpful, and were discarded from the program and replaced with learning activities designed to build skill and ability.  Other sessions were identified as important for learning and development and were retained in the program.  Similarly, some topics that were taught in Block III, such as CID, were identified as crucial for success during Block II and moved into Block I training.  Other topics, such as familiarization with Indigenous issues, were identified as building on the foundational principles of professional communication, and moved from Block I to Block III.

Cranton (2011) discusses using transformative learning and critical theory as a framework for the scholarship of teaching and learning (SoTL) to question the underlying assumptions, beliefs, norms, and values of the discipline.  The process of starting to develop the new curriculum followed this framework by continuing to question each element of the curriculum and why it was or should be included.  This process was transformational for some instructors as they became more comfortable with the underlying philosophy of the new delivery model and more familiar with how it aligned with their own values as instructors.

After each week was reviewed and the topics assigned to various teaching sessions, development meetings were held where instructors were divided into smaller groups, usually comprised of one instructor expert from each of the various disciplines, and the content of the sessions was created.  As a group we would discuss the key learning objectives for the session, and the instructors would be asked to identify calls they had responded to that included each of the objectives.  Based on their identified calls, each small group would work to develop one case or scenario (depending on the session in development) using a blank template provided to help structure their thoughts.
Their completed templates were then collected, edited, and structured to shape them into case exercises, scenarios, or lesson plans.  Development was an iterative process, with the instructors sketching out the basic information and me compiling it into a case format and returning to the instructors with questions, until the final version was completed.  Instructor guides with student material and notes on key points, as well as instructions on how to run the session, were created for each learning activity.  Checklists with comments sections were developed to provide recruits with written feedback on their scenario performance.  Other lessons that are not in the case or scenario format were completed using a similar process with small groups of instructors.

Exam scenarios were developed through a similar scenario development process, with the addition of a standardized rubric for assessment.  The rubrics were developed as a draft and then brought to a group meeting of the instructional cohort, where we reviewed the wording of each element in the rubric to ensure it captured the desired intent of the scenario.  Rubrics were designed to allow for flexibility in approach to achieve the desired outcomes in the scenarios.

Also during this time, a contractor was hired to edit and revise the existing manuals, as they are central to providing recruits with sufficient knowledge before the start of the week.  The manuals were completely rewritten to focus on core concepts and, through this editing process, a total of over seven hundred pages was removed from the Block I reading material.

The development process began in 2015 with a very ambitious timeframe for implementation in September of 2015.  In the summer of 2015 it was decided that neither the material nor the instructors were sufficiently ready for implementation, and the start of the new delivery model was ultimately delayed until September 2016.
The extension of Block II to facilitate three classes of 36 with set start dates, however, was implemented in September 2015, one year before implementation of the curriculum delivery changes.  Development of the material, including new lesson plans for almost every session in the program as well as new exams and scenario exams, continued using the instructional cohort as subject matter experts (SMEs).  The development was at times difficult because the instructors were both teaching and developing curriculum at the same time.  Using the instructors as SMEs was part of a change management strategy to have the instructors invested in the new model by feeling ownership over its content, because they were integral to its development.  The strategy was only somewhat successful, however, because there was no additional time to work on the development activities.

3.6 Implementation

Implementation of the new Block I delivery model started in September 2016.  Implementation was based on a phased approach, whereby recruits who started in the old delivery model completed all three Blocks of training according to that model.  That meant instructors were teaching Block III using the old delivery model and Block I using the new delivery model.  During the first class of the new model, I sat in and observed all classes to ensure the lesson plans were being followed, to make any last-minute or on-the-spot adjustments as needed, and to make notes on things that needed to be modified for the next class.  Also during this time, the new structure for Block II was created along with a training course for existing FTOs.  The Block III curriculum was also developed during the first and second classes in the new Block I program, from September 2016 until May 2017.  The first class through the new delivery model started training in September 2016 and graduated in June 2017.
This class did not experience the full delivery model, however, as the senior class was still in the old delivery model.  The structure of the practical scenarios had to be modified from the planned structure, in which Block III recruits mentor Block I recruits, to a format where the Block I recruits acted in scenarios for each other and the instructors provided feedback directly to the Block I recruits.  This modified version of the practical scenarios was used for the first two classes through the new program.  The first class to experience the full version of the new delivery model started training in May 2017 and graduated in March 2018.

After each class there have been modifications to the curriculum material to adjust the learning opportunities and increase the effectiveness of the program.  Exam rubrics were validated by examining the reasons for recruit performance: any aspect of the rubric where 20% or more of the class did not meet or exceed expectations was struck for the first two classes, and the evaluation criteria were examined to determine if it was a flaw in the rubric or a flaw in the program.  Necessary adjustments were made to either the lesson plans or the wording of the rubrics.  Adjustments were also made to documentation processes, such as requiring recruits to complete one self-assessment debrief form per scenario day instead of one per scenario.  There were also changes made to the naming of certain documents or activities to make them more “police friendly”.  The self-assessment forms were renamed “practical scenario debrief forms” and the assessment portfolio was renamed the “Application for Advancement/Graduation”.

3.7 Delivered Curriculum

The preceding sections have described the design, development, and implementation of the new curriculum delivery model for Police Recruit training in British Columbia.
Inevitably, when a program is delivered for the first time, there will be components of the program that are not delivered as they were designed.  This lack of alignment between design and delivery can be due to a variety of factors, including faculty development and faculty comfort level with their various new roles, unforeseen administrative and coordinating requirements, student confusion or lack of understanding of new tasks, and organizational resistance to change.  These influencing factors are discussed in Sections 6.2 through 6.4.  The program that is evaluated is the program that is delivered, not the program that is designed, so it is important to note where discrepancies between the design and delivery occurred.  Many key components of the curriculum delivery model took at least one offering before they ran as designed, with the largest and most impactful differences coming in case studies, scenarios, mentoring, and directed study.  As the evaluation was conducted during Block II training, only Block I is discussed in the following sections.

3.7.1 Class 152 Case Studies

The intent of the case study sessions is to provide an opportunity to apply the knowledge learned through the pre-reading during small group discussion, facilitated by an instructor, at the beginning of the week before applying the knowledge to scenarios.  In the case study sessions, each instructor is responsible for monitoring two groups of recruits to ensure that all group members are participating and that they all have a strong understanding of the legal, patrol, investigative, and/or traffic concepts that they are discussing.  This small group facilitation style of teaching was new for many, if not all, of the instructors, and many struggled with the new format.
Each case study session has an accompanying “Instructor Guide” that includes the student material and the answers to the questions the recruits are asked, so that the instructors can effectively prepare for their teaching.  It also includes directions for the instructors in terms of group facilitation.  These directions include posing recruit questions back to their groups before answering them, monitoring participation, and communicating back to the lead instructor, who conducts the debrief of the cases.  For instructors who were used to lecturing and being able to tell multiple stories throughout their lectures, there were many challenging aspects to this new format.  Instructors struggled with their role and with the direction to not directly answer recruit questions before exploring the knowledge of the group.  Many instructors interpreted these directions as meaning that they were not to be involved in the groups’ discussions at all, and thus did not interact with or monitor the groups they were assigned to monitor.  Some instructors preferred to group together and discuss unrelated topics at the front of the room while the recruits worked, or to manage emails rather than engage with the groups.

The instructors also struggled with discipline and keeping recruits on task during these sessions.  Some recruits did not want to engage in the group discussions and would frequently side-track the group, preventing the recruits from fully completing the case activities.  Unfortunately, instructors did not correct this behaviour, and it became a larger issue as the block progressed.  Lastly, some instructors seemed to actively involve themselves in this disruptive activity by spending the case discussion time talking to recruits about completely unrelated matters such as sports teams or scotch.

The first offering of the case studies component of the curriculum did not meet its intended goals for all of the recruits.
To address this issue for subsequent classes, a small core group of instructors who understood and were comfortable with the concept of case studies was assigned to each case study session to ensure consistency in delivery and classroom management.  The assigning of core instructors has significantly improved the delivery of case study sessions and brought the delivery closer to the design.

3.7.2 Class 153 Case Studies

While the issues with classroom management and delivery of the case study sessions were significantly improved by assigning a core group of instructors, the case study sessions for Class 153 were still not delivered as designed.  The difference for this class, and other classes of 48 recruits, is in the timing of the case study sessions with respect to the other educational activities.  With a class of 48 recruits, facility and scheduling issues arise and impact the delivery of the program as designed.  The only classroom that can fit a class of 48 recruits at the JIBC is the lecture theatre, which is not conducive to small group work as the seats are fixed in place.  Consequently, any session designed for the full class needs to be delivered either twice, with half the class in a different session, or in two separate classrooms with a different group of instructors.  Also, the larger class size means that there is one additional group of 12 recruits that must attend the firing range to learn to shoot.  This extra group meant that some parts of the curriculum had to be changed from whole-class sessions to sessions for a group of 12 in the firearms rotation, delivered four times.  The sessions that were moved into the rotation were Use of Force in the morning and case studies in the afternoon.  This change meant that some recruits were participating in the case study sessions at the end of the week, after they had already completed the components of the curriculum where they practically applied their skills.
This change from the program’s intended order of learning through pre-reading, applying through case studies, and then applying through practical scenarios has the potential to reduce the effectiveness of the sessions, because recruits have not tested their understanding of the material before they attempt to use it in practice.  This shift from the design of the program is a necessity for any class above 36 recruits and will remain an ongoing concern.

3.7.3 Practical Scenarios

The practical scenarios were designed as the application and integration of the knowledge gained through pre-reading, case studies, and other relevant sessions in the week.  Their design included integrating all of these skills and knowledge into both the scenario and the debrief components.  The full program design includes the senior Block III recruits running scenarios for the junior Block I recruits, and also the junior recruits participating in the Block III scenarios as actors and filmers.  As the first two iterations of the new curriculum delivery model did not have a senior Block III class, the scenarios were run by instructors using Block I recruits as actors.  This necessary modification from the design resulted in multiple repetitions of the same scenario with the same group of recruits so that each recruit could participate.  Although instructors were told to layer the feedback so that each recruit could improve incrementally over the previous one, some recruits found the repetition excessive and not helpful.  Additionally, some instructors struggled to hold back feedback from the first scenario, resulting in a very long debrief that delayed the rest of the scenarios and did not leave much room for an increase in performance by the last recruits.  Further, many instructors struggled with the new concept of integrating the legal aspect into the scenarios.
The directions in the instructor guides said to ask recruits, before the scenarios started, what the essential elements of that type of offence were, and to have the recruits look it up if they didn’t know.  There was also a series of legal questions to ask in the debrief, to reinforce the legal concepts and ensure a thorough understanding.  This integration of legal concepts was a new approach to scenarios, as debriefs in the past had mostly focused on the tactical aspects.  Many instructors did not realize the importance of incorporating the legal components into the scenarios for the first class.  This resulted in an under-emphasis of the importance of knowing legal authorities.  For the second class, and all further classes, an increased emphasis was placed on pre-briefing instructors before each scenario day to ensure they asked all the required legal-related questions.

Lastly, each of the scenarios is highly scripted in terms of actions and outcomes.  This is another large change from the old delivery model, in which both instructors and actors could improvise and alter the scenarios.  In the new delivery model, the scripting of the scenarios ensures that all of the relevant points that recruits need to practice that week are integrated into the scenario and that components that recruits have not yet been taught are not introduced.  This scripting and inability to improvise was a particularly challenging aspect for some instructors, who insisted on ‘ramping up’ scenarios as the day progressed instead of maintaining the script and allowing recruits to apply their new skills.  Often key learning points were missed or glossed over because the instructor wanted a more entertaining scenario.  Changes to the scenarios also impacted the recruits’ subsequent report writing assignments, which were based on the scripted scenarios.  In the following classes, it was emphasized in the instructor briefings that the scenarios were not to be altered.
The first class through the new delivery model did not experience the scenario application as designed because it lacked full integration of legal concepts and the scenarios frequently did not follow the intended script.  While these issues were remedied in future classes, the program that Class 152 evaluated did not have the scenarios delivered completely as designed.

3.7.4 Mentoring

The mentoring program was designed to involve all instructors who taught in the Recruit Training program.  This design was intended both to share the workload of mentoring recruits and to fully involve instructors in the program.  Because mentoring includes reviewing training plans and other documentation, it requires a certain level of computer literacy.  The intent of the mentoring was to have each instructor assigned a small number of recruits.  Recruits would complete their weekly training plans by Sunday night, and instructors would review them on Monday and give the recruits feedback on anything that needed to be changed for their directed study plans.  At the weekly instructor meetings on Tuesdays, mentors would discuss what their recruits planned to do during directed study so that all instructors would have a sense of areas where recruits needed additional help.  All instructors would then be available during directed study, ready to help recruits.  In practice, the instructors had varying levels of comfort with the computer learning management system, and some were unable to monitor their recruits’ training plans.  Others did not schedule time to review the plans and did not keep up with their recruits’ submissions.  Still others were teaching and not available to meet with their recruits during the allotted time for face-to-face meetings.  This discrepancy between design and delivery resulted in some recruits having active and supportive relationships with their mentors and other recruits having little to no interaction with theirs.
The recruits who had sporadic interaction with their mentors felt isolated and did not value the mentoring component of the program as much as the recruits who had active relationships with their mentors.  After several classes, it was decided that a small group of instructors would share mentoring responsibilities, and this approach has brought the mentoring component of the program into alignment with its design.  For the classes involved in this study, the mentoring component of the program was not aligned with its intended design and did not consistently provide the accountability or support it intended.

3.7.5 Directed Study

Directed study was perhaps the most difficult component of the program to implement.  Both instructors and recruits struggled with the purpose and delivery of this component of the curriculum.  The delivery of directed study is closely linked to the mentoring component, whereby the mentors need to be aware of their recruits' plans and communicate those plans to the other instructors.  Instructors in the first offerings of the delivery model, but particularly in the first class, seemed unclear that they could, and should, direct their recruits' use of directed study time if a recruit had been observed struggling with a particular component or concept in the program.  Because of this lack of certainty, recruits were left to do whatever they wanted during directed study time without feedback from the instructors.

The intent of the classroom portion of directed study was that recruits would self-organize and work in small groups on different areas, with instructors circulating to answer questions.  Instructors really struggled with this drop-in concept.  Some instructors refused to talk to a group of recruits if another instructor was talking to a different group of recruits at the same time in that room.
Other instructors would not circulate to the recruits and would simply sit at the front of the classroom waiting to be approached.  Worse, some instructors would sit in the cafeteria having coffee while the recruits worked.  Often one instructor would start lecturing and all recruits would focus on the lecture in case they missed something that was said.  Recruits were confused by the process and often ended up working on their own, doing their pre-reading for the next week, which was not the intent of directed study.  Instructors tried to use the time to schedule additional sessions.  They also tried to remove the drop-in component by scheduling review periods, effectively changing directed study into a series of lectures.  This component of the delivery model was the most frustrating to implement and remained unaligned with the design for the longest period of time.  Approximately one year after implementation of the first class, a small group of core directed study instructors was assigned to the classroom component, and this approach seems to have helped align the delivery with the design as these instructors ensure that the time is used as intended.  When the classes involved in this evaluation were in the program, however, directed study time was not delivered as it was designed.

3.8 Summary

This chapter outlined the structure of Recruit Training before and after the changes to the delivery model.  It discussed the design process as well as the key components of the new delivery model, including educational theory that was not discussed in Chapter 2.  The chapter ended with a discussion of the implementation and delivery of the curriculum for the first classes.  The areas of the curriculum where the delivery differed significantly from the design were discussed.  These differences are crucial in analyzing the results of the evaluation because it is the delivered program that was evaluated, not the program as it was designed.
The following chapters will outline the design of the study, present the results of the quantitative and qualitative analyses, and discuss the significance and implications of these results.

Chapter 4: Methodology

This chapter outlines the process of design, development, implementation, and evaluation of the new delivery model for Police Recruit training in BC that took place over the course of 2014-2018.  The project was intended as a quantitative evaluation of a program change using ‘pre-intervention’ and ‘post-intervention’ surveys of recruits and their field training officers.  The quantitative analysis was intended to be supplemented with qualitative data from survey comments and from focus groups.  In the course of carrying out the project, however, it expanded from the initial quantitative analysis into organizational and cultural change management, which will be included in the discussion.

The research design section of this chapter will outline the intended project design including timeline and methodology.  The project narrative section of this chapter will outline my perspective throughout my EdD program and in this project and detail changes that were made to the project design as the study progressed.

4.1 Research Design

This section outlines the theoretical framework for program evaluation, the data sources, the timeline for survey administration, and the specific analytical methods to analyze the quantitative and qualitative results.

4.1.1 Program Evaluation Framework

From the many different models for program evaluation, it is important to select a method that aligns with the information to be gathered and the decisions to be made (Bresciani, 2006).  Whether measured through direct or indirect methods, the information gathered should directly relate to the determination of whether the program is effectively meeting its goals (Bresciani, 2006).
One common evaluation framework centres around Kirkpatrick's four-level framework of evaluation (Alliger & Janak, 1989; D. L. Kirkpatrick, 1977; D. L. Kirkpatrick & Kirkpatrick, 2006; J. Kirkpatrick & Kirkpatrick, 2016).  Although not without controversy (Holton & Kirkpatrick, 1996), this framework has been used extensively in educational development to evaluate educational and training programs (Alliger & Janak, 1989; J. Kirkpatrick & Kirkpatrick, 2016).  It consists of four levels of program evaluation: reaction of the learners, learning during the program, behaviour change through applying the learning on the job, and results for the organizational impact of the training (Alliger & Janak, 1989; Alliger, Tannenbaum, Bennett, Traver, & Shotland, 1997; Grohmann & Kauffeld, 2013; D. L. Kirkpatrick & Kirkpatrick, 2006; J. Kirkpatrick & Kirkpatrick, 2016).

Criticisms of the Kirkpatrick model include the view that the levels are hierarchical, with each level resulting in the next through positive correlation (Alliger & Janak, 1989; E. F. Holton, Bates, Noe, & Ruona, 2000; Holton & Kirkpatrick, 1996).  These assumptions are not necessarily supported by the research; in particular, there is no demonstrated correlation between learner reaction and learning or transfer of learning to the job environment (Alliger & Janak, 1989; Alliger et al., 1997; Grohmann & Kauffeld, 2013; E. F. Holton et al., 2000; Holton & Kirkpatrick, 1996), although learner reactions are the most frequently measured level of the framework (Grohmann & Kauffeld, 2013).  Alternative models, such as those championed by Holton, divide training into outcomes with different levels of primary and secondary influences, all of which can be measured to evaluate a program (E. F. Holton et al., 2000; Holton & Kirkpatrick, 1996).
Another approach uses domains of learning – cognitive, psychomotor, and affective – as a conceptual framework for program evaluation (Kraiger, Ford, & Salas, 1993).

In order to obtain the highest possible response rate, and for the broad dissemination of results, the evaluation model must be uncomplicated and evaluation surveys must be concise, collecting only the information relevant to the evaluation (Bresciani, 2006; Grohmann & Kauffeld, 2013).  When viewed as a framework and tool to standardize language, the Kirkpatrick model can provide this familiar and widely accepted approach (Grohmann & Kauffeld, 2013; Wang & Wilcox, 2006).  Alliger et al. (1997) and Wang and Wilcox (2006) propose modifications to the Kirkpatrick model that influenced my approach and are outlined in Table 4-1.

Kirkpatrick (2006) | Wang and Wilcox (2006) | Alliger et al. (1997)
Reactions | Short term outcomes: Reactions of learners | Reactions: Affective reactions; Utility judgements
Learning | Short term outcomes: Learning by participants | Learning: Immediate knowledge; Knowledge retention; Behaviour/skill demonstration
Behaviour | Long term outcomes: Behaviour on the job | Transfer
Results | Long term outcomes: Organizational impact and return on investment | Results

Table 4-1 Summary of Kirkpatrick model of program evaluation and modifications from Alliger et al. (1997) and Wang and Wilcox (2006) that influenced the program evaluation design of this study

Grohmann and Kauffeld (2013) demonstrated that the grouping into short term and long term outcomes by Wang and Wilcox (2006) was supported by statistical analysis.  They emphasize that the evaluation of training should include both short term and long term outcomes to encompass both reactions and transfer to practice (Grohmann & Kauffeld, 2013).  There should also be sufficient time to allow learners the opportunity to use their new skills in practice before evaluation of the long term outcomes (Grohmann & Kauffeld, 2013; Wang & Wilcox, 2006).
In order to evaluate the program changes, comparing the effectiveness of the lecture-based and competency-based training programs, both short and long term outcomes, as indicated in Table 4-1, were measured through recruit reactions, learning, and behaviour/transfer.  As shown in Table 4-1, Alliger et al. (1997) indicate that reactions can be measured by affective reactions and by utility judgements.  In the survey design, recruits were not asked how much they enjoyed the program, but rather how well they thought it prepared them for their responsibilities, thus targeting the utility judgement aspect of their reactions.  All three domains of learning indicated in Table 4-1 were evaluated in the project design to thoroughly cover recruit short-term learning.  Additionally, to incorporate longer term outcomes, or behaviour/transfer of skills, the second survey administration and the FTO survey examined recruit ability and performance on the job during Block II.  Table 4-2 indicates the data source that measured each of these elements of program evaluation.  The project design incorporated all aspects of the program evaluation models outlined in Table 4-1 except the results level that equates to organizational impact.

4.1.2 Evaluation Design and Methodology

To address the primary research question about the effects of introducing a competency-based education framework on police recruit preparedness for field training, two groups of recruits were used for the analysis.  Class 151 was trained using the traditional didactic, lecture-based curriculum delivery model.  Classes 152 and 153 were trained using the new competency-based training framework.  The initial project design limited the analysis to Classes 151 and 152, but when the analysis of the results was taking place it was realized that Class 153 could be included in the study within the proposed timeframe, and this modification was made to the project design and associated ethics applications.
For the purpose of this study, it was not possible to directly compare performance during Block I on exams or tests because all of the measures of learning changed with the changes to the curriculum delivery.  Analysis required administration of surveys in which recruits self-reported their learning, combined with data collection from field trainers.  The initial project design included only data collection from recruits and field training officers.  After the project began, it was recognized that the exam assessors in the competency-based model might serve as an additional data source, and a survey and focus group of the competency-based exam assessors were added to the project design and associated ethics approvals.  Table 4-2 summarizes the planned sources of data and the contribution of each to the project evaluation design, as described below:

Kirkpatrick | Wang and Wilcox (2006) | Alliger et al. (1997) | Data source
Reactions | Short term outcomes: Reactions of learners | Reactions: Affective reactions | *not directly questioned
 | | Utility judgements | Recruit survey 1; Recruit survey 2
Learning | Short term outcomes: Learning by participants | Learning: Immediate knowledge | Recruit survey 1; Assessor survey and focus group
 | | Knowledge retention | Recruit survey 1; Assessor survey and focus group
 | | Behaviour/skill demonstration | Recruit survey 1; Assessor survey and focus group
Behaviour | Long term outcomes: Behaviour on the job | Transfer | Recruit survey 2; FTO survey and focus group

Table 4-2 Summary of program evaluation model from Table 4-1 with data sources from the project design

• Recruit Survey 1:  Recruits' self-assessment of their perceived readiness for Block II training and the perceived utility of Block I in this preparation.  This survey was administered before recruits left the Police Academy at the end of Block I.  At this point in their training, recruits may not have a full picture of what the day to day operations of a patrol member entail.
Although many recruits are new to policing and may not have a firm grasp of all of the job requirements, the perceived utility of the training will influence their motivation to learn and participate during Block I.  This survey measured short term outcomes, including the reactions of learners through utility judgements and the learning by participants through immediate knowledge, knowledge retention, and behaviour/skill demonstration, as shown in Table 4-2.

• Recruit Survey 2:  Recruits' self-assessment of their performance on the job, their ability to transfer their skills and knowledge from Block I to Block II training, and the perceived utility of Block I training.  This survey was administered at the mid-point of Block II training (approximately 10 weeks into Block II), when recruits had a better understanding of the requirements of a patrol member but could still recall their Block I training and attribute their abilities and skills to training rather than FTO influence.  This survey measured the short term outcome of reactions of learners through utility judgements and the long term outcome of behaviour, or transfer to job performance, as shown in Table 4-2.

• FTO survey:  An assessment from the recruits' Field Training Officer of the recruits' ability to transfer their skills and knowledge from Block I training to Block II job performance.  This survey was administered at the same time as the second recruit survey.  The FTO survey measured the long term outcome of behaviour, or transfer to job performance, as shown in Table 4-2.

• FTO Focus Group:  Field Training Officers who have trained recruits in both the old and the new curriculum delivery models could provide a valuable source of comparative data in a qualitative focus group setting.
While FTO selection is at the discretion of the departments and outside the control of this study, any FTOs who had trained recruits in the previous delivery model (Class 151 or earlier) and recruits in the new delivery model (Classes 152 or 153) were invited to participate in a focus group to further investigate the differences in recruit preparedness for Block II field training.  The initial project design asked departments to use the same FTOs to train recruits in Class 151 and in Class 152 so a direct comparison could be made.  Across the 36 recruits in each of these classes, only one FTO trained recruits in both classes, so the project design was modified to include FTOs who trained recruits in Class 152 and had trained recruits in any class in the lecture-based delivery model.  When Class 153 was added to the project design, this group was expanded to include FTOs who trained a recruit in Class 153 and a recruit in any class in the lecture-based delivery model.  The associated ethics approval was amended with each expansion of the inclusion criteria for the FTO group.  The FTO focus group measured the long term outcome of behaviour, or transfer to job performance, as shown in Table 4-2.

• Assessor Survey and Focus Group:  Exam day assessors in the new curriculum delivery model are current or retired police officers who were trained to assess recruits in the Assessment Centre (AC).  The AC was a screening tool used by departments prior to hiring candidates.  Potential candidates were sent to participate in a day-long scenario-based assessment of their potential for development as a police officer.  Assessors for the AC were required to successfully complete a training course that involved standardization of expectations and documentation of candidate performance.
The AC program was cancelled by the provincial government in 2016, and this pool of highly trained police officers was recruited to act as impartial assessors in the new delivery model.  While they do not have direct experience with recruit training in the old delivery model, they were familiar with the skill level of incoming candidates through their involvement in the AC.  At the time of the initial project design, this group was not included as a data source because it had not yet been determined who would assess the exams in the new delivery model.  Once this decision was made, the group of assessors was identified as a data source and the necessary changes to the project design and associated ethics approvals were completed.  This group was sent a survey to determine their sense of the level of preparedness of the Block I recruits and invited to attend a focus group.  The survey and focus group for the Block I exam assessors addressed the short term outcome of learning by participants through their observations of the recruits' immediate knowledge, knowledge retention, and behaviour/skill demonstration, as outlined in Table 4-2.

The combination of recruit self-reporting in surveys 1 and 2, and evaluation from field trainers, was intended to enhance the reliability of the evidence base for the program evaluation (Braverman, 2013).  To allow comparison across time points, the recruits were administered the same survey for both recruit data collections.  This strategy was designed to enable an analysis of the utility of Block I training before and after recruits had practical job experience.

4.1.2.1 Survey design

Validity of a construct relies on multiple sources of evidence to demonstrate that the construct measures what it purports to measure in its inferences and assumptions (Cohen, Manion, & Morrison, 2011; Cook, Brydges, Ginsburg, & Hatala, 2015; Downing, 2003; Kane, 2013).
Validity is not a property inherent to a measure itself; it is contextually dependent on the interpretations and intended use of that measure or construct (Cohen et al., 2011; Cook & Hatala, 2016; Kane, 2013).  The amount of evidence required to support a claim of validity depends on the impact of the claims made from the evaluation.  The more serious, or severe, the claims, the greater the amount of evidence required to support the validity argument of that evaluation (Kane, 2013).  Kane's framework for assessment validity relies on four categories of evidence and can be applied to tests designed for program evaluation (Kane, 2013).  These four categories are: scoring, which includes assumptions and choices about the scoring criteria and response options; generalization, or reliability, which includes evidence that scores on the test are reflective of performance across the test domain; extrapolation, which includes evidence that performance on the test can be extrapolated to performance in real life; and the consequences or implications, which include the intended use of, and decisions made from, the test (Cook et al., 2015; Kane, 2013).  Evidence from each of these four categories can be taken together to determine the validity of a particular evaluation for a particular use at a particular point in time (Kane, 2013).

In measuring the recruits' perceived ability to apply their Block I learning to their Block II field training, it was important to use a measurement scale that would adequately represent the complexities of patrol level police work.  The Police Sector Council National Framework of Constable Competencies was selected as the appropriate construct to measure patrol level ability due to the unique depth of research and collaboration from the Canadian policing community that went into generating the competencies (Police Sector Council, 2011).
The primary assumption in using these competencies as a reference point to measure patrol level ability is that they are an accurate representation of the requirements at the constable level of policing in BC.  Further, multiple departments in BC, such as Abbotsford and Victoria, use the competencies for their HR management and promotion.  This adoption by police departments for performance-related evaluation and promotion decisions provides strong evidence that the competencies are an accurate and valid representation of requirements.  This evidence supports the use of the PSC competencies to assess recruit preparedness in both the scoring and extrapolation inferences of Kane's framework.  For the scoring inference, the wording of the questions uses the language and definitions from the PSC competencies.  For the extrapolation inference, as discussed, the adoption of the competencies by departments indicates that they are representative of real world performance.  For the implication inference, the consequences for stakeholders in responding to this survey were negligible and, provided the metric was an accurate representation of policing, any metric could have been used.  The generalization inference, which considers how well the items on the evaluation represent all of the possible attributes to be measured, is again supported by the high level of collaboration that went into the formation of the PSC competencies.  Nothing in the literature provides a better description of the knowledge, skills, and attitudes required of contemporary police in Canada.

Another factor to be considered in the validity argument is the choice of scale anchors for the responses.  In the survey, there were two types of questions related to each competency.  The first asked about perceived ability in a particular competency and the second asked how well Block I training prepared them for that particular competency.
Following the work of ten Cate and Scheele (2007) and Crossley et al. (2011), as described in the Introduction section on assessment, the anchors chosen related to the amount of supervision a recruit required to perform the demands of each competency.  Recruits were also asked to rate the utility of Block I training in preparing them to meet each competency level.  The amount of supervision required was selected for the anchor points because of evidence in the literature demonstrating increased reliability when using developing independence as a marker of skills assessment (Crossley et al., 2011; Frank et al., 2010; Regehr et al., 2007; ten Cate, 2006; ten Cate & Scheele, 2007).  Because of the large amount of time, resources, and research involved in developing the Police Sector Council Competencies, because it is currently the only national framework for policing competencies, and because alignment of the recruit training curriculum with the PSC National Constable Competencies was mandated by the BC Provincial Government, the surveys were designed using these Constable Competencies.  The surveys were not piloted because feedback during a pilot on the competencies would not have changed a survey design that used the nationally accepted and BC government mandated measure of the role of a police Constable.

The surveys for the recruits, FTOs, and assessors all used the PSC competencies as the assessment construct and all used the same anchor points in the questions to facilitate comparison between groups.  The surveys for each group can be found in Appendix B - Surveys.  In addition to the PSC competencies, each survey also included a collection of demographic information such as age, gender, education level, previous policing experience and, for the FTOs and assessors, how many years they had been police officers and FTOs, and which exams they assessed, respectively.
The last section of the survey was an open comment section to collect qualitative data from the respondents.

4.1.2.2 Survey Administration and Timeline

Figure 4-1 outlines the project timeline for administration of surveys to compare the recruits' ability and preparedness for Block II training before the intervention (Class 151, lecture-based delivery model) and after the intervention (Classes 152 and 153, competency-based delivery model).

Figure 4-1 Project timeline for recruit survey administration for classes 151 (pre-intervention, lecture-based), 152 and 153 (post-intervention, competency-based)

The surveys were administered through the UBC survey tool Fluid Surveys.  A consent form was included on the first page of each survey.  In addition to the first page consent form, a letter was sent to each Department's Training Officer to provide to FTOs.  This letter outlined the purpose of the research and the consent process.

The first survey completed by Block I recruits was sent on the Friday of Week 12 of their classes at the Police Academy (Figure 4-1).  Recruits then had the weekend and their last week of class to complete the survey.  The survey closed following their last day of classes in Block I.  The second survey that recruits completed, during Block II, was sent after approximately 10 weeks of Block II training.  This timing was chosen, rather than the end of Block II, so that Block I training would be fresh in their minds and they would have had only one field trainer at that point in their training.  The survey during Block II remained open for two weeks, to account for different departmental shift schedules and to allow sufficient time for completing the survey while on duty.

At the same time as recruits were sent the survey for completion during Block II, their FTOs were also sent the FTO survey.  This survey was sent out and remained open for the same duration as the recruit survey.
For the purposes of tracking survey responses, the recruit and FTO surveys were collected based on class number and recruit last name.  Once the surveys were collected, each recruit was assigned a unique identifier that replaced all references to their name in both the recruit and FTO surveys.  This method of anonymizing the data was chosen to make it easier for both recruits and FTOs to complete the surveys during Block II, as they did not need to remember an identifier code.  It was anticipated that the majority of these surveys would be completed on shift, when there was not a lot of time to search for identifier codes.

The assessor surveys were sent out on a timeline independent of either class of recruits.  This survey was distributed in February of 2018, using UBC's Qualtrics survey program.  It remained open for three weeks, to accommodate completion by several assessors who were on vacation but wanted to complete the survey when they returned.  The assessor surveys did not collect any identifying information as they did not need to be correlated back to a specific recruit.

4.1.2.3 Statistical Analysis

The data collected from the surveys were assumed to be non-normal due to the small sample size and the unknown characteristics of the population (Cohen et al., 2011).  Because of the non-normal distribution, non-parametric statistical tests were used to analyze the collected data.  While there is some controversy in the literature around the acceptability of using parametric statistical tests on non-normal data, it is generally accepted that non-parametric tests are the most appropriate form of analysis for populations where the data cannot be assumed to be normally distributed (Cohen et al., 2011).  Although non-parametric tests are generally less powerful than their parametric counterparts, they make no assumptions about the population studied (Cohen et al., 2011), so they were the most appropriate for analysis in this project.
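To make the rank-based logic of these non-parametric tests concrete, the following is a minimal pure-Python sketch of the Mann-Whitney U computation, the kind of independent-samples comparison used in this analysis.  The ratings below are invented for illustration only (they are not study data), the analysis itself was conducted in SPSS, and the normal approximation shown omits the tie correction, so it is only indicative for heavily tied Likert-style data:

```python
import math

def average_ranks(values):
    """Rank all observations (1-based), giving tied values their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Extend j to cover the run of values tied with position i.
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j + 2) / 2.0  # average of 1-based positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def mann_whitney_u(group_a, group_b):
    """Mann-Whitney U for two independent samples, with a z-score from the
    normal approximation (no tie correction, illustrative only)."""
    combined = list(group_a) + list(group_b)
    r = average_ranks(combined)
    n1, n2 = len(group_a), len(group_b)
    rank_sum_a = sum(r[:n1])
    u1 = rank_sum_a - n1 * (n1 + 1) / 2.0
    u2 = n1 * n2 - u1
    u = min(u1, u2)
    mu = n1 * n2 / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    return u, (u - mu) / sigma

# Hypothetical supervision-anchor ratings for one competency.
lecture_based = [2, 3, 3, 4, 2, 3]       # e.g., a lecture-based class
competency_based = [4, 4, 5, 3, 4, 5]    # e.g., a competency-based class
u, z = mann_whitney_u(lecture_based, competency_based)
print(u, round(z, 3))  # U = 4.0 for this invented data
```

A statistical package such as SPSS (or scipy in Python) additionally converts U to a p-value; the sketch stops at the test statistic to show what the rank comparison itself involves.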
The survey data were analyzed using IBM SPSS version 25.  Data were exported from Fluid Surveys into an SPSS file, where the data were anonymized and coded for analysis.  The Wilcoxon Signed Rank Test was used to compare responses between survey 1 and survey 2 within the same class.  This test is used for related samples, where the same group answers the same question at two different points in time (Cohen et al., 2011).  To compare differences between the class and the FTO scores, and between the lecture-based and competency-based programs, the Mann-Whitney test was used.  This test is used for two independent samples and indicates merely that a difference is present.  To determine the source, or direction, of the difference, it is necessary to also run a cross-tabulation report (Cohen et al., 2011).  The Mann-Whitney test was also used to investigate any differences in reporting due to gender, and the Kruskal-Wallis test was used to investigate any differences in reporting due to previous police experience, education level, and FTO experience.  The Assessor survey data were analyzed using descriptive statistics, but not compared to other data sets because they were not in reference to a specific class or a specific recruit.

4.1.2.4 Qualitative Data Analysis

The project design planned for qualitative analysis of focus group transcripts and of narrative comments from the various surveys.  These data were to be analyzed using NVivo version 11 and a grounded theory methodology (Cohen et al., 2011), which builds themes and interconnections as they emerge from the data.  As discussed in Section 4.2 Project Narrative, the focus groups either did not occur or were too small to be representative and were not included in the analysis.
4.2 Project Narrative

I began my EdD while employed as the PBL Program Manager for the MD Undergraduate Program in the Faculty of Medicine at UBC.  During my time in this role, I completed my coursework and comprehensive exam, and developed a project that focused on peer feedback in a PBL program and its effect on developing communication skills in medical students.  When I changed positions and moved to the role at the JIBC Police Academy, I no longer had access to the medical student population and PBL program that were the basis of my EdD project.

When I started my role at the JIBC Police Academy, I took a leave from my EdD studies to learn my new position.  During this time, I observed recruit training classes, read the literature on policing and the little I could find on police training, and discovered that there were many similarities between medical education and police training.  These similarities have been noted by others, particularly as a movement developed to include PBL-based exercises in police training (Weinblatt, 1999).  My experience in my previous position, and some of the research I had conducted in developing my previous EdD project, was helpful in beginning the research and development of the proposal to change the structure of police recruit training at the JIBC.  The foundations of the PBL program are in a constructivist framework and transferred well to the new project.  Although I decided a pure PBL approach was too open-ended to be accepted as the foundation of the recruit training delivery model, the case-based method was much more appropriate for this level of training and fit well into the framework of the new program model.  As noted in Section 1.1 My Perspective, the development process was challenging.
While the development process was ongoing, historical questions about recruit training led to the BC Association of Municipal Chiefs of Police (BCAMCP) commissioning two retired police officers to conduct a review to identify training needs.  This project was not intended as a curriculum review, but rather as a gap analysis of training needs.  During the fall of 2016, while the first offering of the new Block I was underway, the reviewers attended a day session at the JIBC with the Director of the Police Academy and the President of the JIBC to discuss several matters pertaining to the review.  I presented an overview of the new delivery model, drawing on evidence from the literature on PBL and case-based learning, competency-based learning, and the concept of a ‘flipped classroom’ to increase time for application.  Although the review was not categorized as a curriculum review, the draft report contained a large amount of criticism of the new delivery model.  In response to the initial review, the BC Provincial Government commissioned its own review of Police Academy governance, hiring a retired police officer and former Police Academy Director to conduct it.  This final report has been submitted to the government and an executive summary has been released to the Police Academy Director.  The focus of this review was to provide models to fund and manage the JIBC Police Academy during the current climate of decreasing resources and increasing demands.  Further to the initial training needs review, the BCAMCP commissioned the same reviewers to travel across Canada to conduct an analysis of recruit training.  They visited many departmental training facilities that train in-service members, as well as the Atlantic Police Academy and the Seattle Police Department.  This review has been submitted to the BCAMCP, but at the time of writing its contents have not been released.
Lastly, an additional curriculum review commissioned by the BC Provincial Government is planned for 2018.  This review will examine the current curriculum delivery model, in response to questions and misunderstandings that have arisen about it.  The timing of these reviews, coinciding with the development and implementation of the first classes through the new delivery model, has complicated both the implementation and the findings of the reviews.  Obtaining an accurate projection of staffing requirements for the JIBC Police Academy is difficult because development for the new program increased instructor demands beyond what is typically expected.  Further, after the first class in the new delivery model, the Police Academy increased its maximum class size to accommodate an increase in departmental hiring.  Because of this increase, alterations to the schedule have put an increased demand on instructional resources.  Additionally, because the reviews coincided with the implementation of the new curriculum delivery model, they created a climate of speculation that cannot be separated from the program evaluation results.  The impact of this climate on the change management process is discussed in Section 6.3 Organizational Cynicism and Organizational Change.

4.2.1 Changes to Project Design
As noted in Section 4.1.2.1 Survey design, there were several changes made to the project design as the research unfolded.  These changes included the addition of Class 153 to the project, the expansion of the FTO inclusion criteria to FTOs for Class 152 or Class 153 who had trained a recruit in any class in the old curriculum model, and the addition of the exam assessor group to the project design.  As survey results were collected and response rates observed, there were also changes to the proposed analysis methodology.
The survey design included both the PCS Constable Competencies and the Constable Task List, despite the task list containing several categories that do not apply to police recruit training because they must be trained in a department-specific manner.  After collection of the survey data and preliminary analysis of the quantitative and qualitative information from the surveys, it was determined that the task category data did not add any insight into the program evaluation, so the analysis focused solely on the Constable Competencies.
The response rates for Class 153 were exceptionally low for both recruits (Table 5-11  Class 153 demographic characteristics and survey response rates) and FTOs (Table 5-14  Demographic characteristics for FTO respondents for Class 153).  To determine if the data collected from Class 152 and Class 153 could be grouped into one post-intervention, competency-based training group, statistical analysis was carried out to determine if there were statistically significant differences between Recruit Survey 1 for Class 152 and Class 153 and between the FTO survey for Class 152 and Class 153.  This analysis is presented in Section 5.2 Quantitative Survey Analysis and demonstrates no statistically significant differences in how Classes 152 and 153 responded to Recruit Survey 1 or Recruit Survey 2, or in the difference between how recruits responded to Surveys 1 and 2 (R2-R1), in the global ability and preparedness ratings.  No statistically significant difference in how the FTOs for Classes 152 and 153 responded in the global ability and preparedness ratings was observed.  Because no statistically significant difference was observed, it was determined that the two classes could be grouped as one “competency-based delivery model” group for the purposes of further analysis.
Additionally, the response rates to Recruit Survey 1 were much higher than the response rates to Recruit Survey 2 (see Table 5-1 Class 151 demographic characteristics and survey response rates, Table 5-6  Class 152 demographic characteristics and survey response rates, and Table 5-11  Class 153 demographic characteristics and survey response rates).  Recruit Survey 1 was administered while the recruits were still in Block I at the JIBC, whereas Survey 2 was administered when the recruits were working as patrol officers during Block II.  Police response rates to survey research are typically exceptionally low, often less than 10% (Huey, Blaskovits, Bennell, Kalyal, & Walker, 2017), but low response rates combined with the small starting sample size in this study made analysis difficult.  The initial project design called for Recruit Survey 2 to be compared with the FTO survey, since Survey 2 was administered at the same timepoint as the FTO survey, after recruits had experienced 10 weeks of patrol work.  To determine if Survey 1 could instead be compared against the FTO survey, statistical analysis was carried out to identify any statistically significant differences in recruit answers between Survey 1, prior to field experience, and Survey 2, after field experience.  The Wilcoxon Signed Rank Test was used to test for a statistically significant difference between these two surveys for each of the lecture-based and competency-based delivery model groups.  The results of this analysis are included in Section 5.2.1 Differences in perception before and after Block II experience.  The analysis determined no statistically significant differences for any of the classes between Recruit Survey 1 and Recruit Survey 2, so Recruit Survey 1 was used for further analysis.
The planned focus groups presented further challenges to data collection because of an extremely low interest and participation rate.
For the competency-based exam assessors, a focus group invitation was sent to all eligible assessors (n=28) and two available dates and times were offered.  Three assessors responded that they were available and interested in participating in the focus group (a 10.7% response rate), but in the two to three days leading up to the scheduled focus group, two of these assessors indicated they had double-booked themselves and had to withdraw their participation.  I contacted the remaining scheduled assessor and informed them of the situation, and they opted not to participate as the only person involved.  As such, no focus group was conducted with the assessors.
Similar difficulties were encountered with the FTO focus group.  The invitation for the focus group was sent to all 82 field trainers (36 from Class 152 and 46 from Class 153) and two possible dates and times were offered.  Of these 82, the exact number who were eligible for participation, based on the inclusion criteria of training a recruit in Class 152 or 153 and training a recruit in any previous recruit class, is unknown.  Of all the FTOs who were invited, three responded that they would like to participate.  If it is assumed that all 82 recipients were eligible, that is a 3.7% response rate.  Additionally, all three respondents were from the same department.  Regardless of the small sample size, the “focus group” was conducted and was rather productive.  Because the sample size was not representative of the group, however, the transcript was not fully analyzed and several key observations are only mentioned in Section 5.3 Focus Group Analysis.

4.3 Summary
The research design was intended to be a quantitative evaluation of pre-intervention and post-intervention classes after a foundational change to police recruit training in BC.
As complications with data collection emerged, some changes had to be made to the initial project design, including grouping the competency-based classes into one group for analysis and using Recruit Survey 1 for the majority of the analysis.  Additionally, low interest in the focus groups resulted in that portion of the program evaluation design being cancelled or its results being included only minimally.  The next chapter outlines the results from the quantitative and qualitative data analysis.

Chapter 5: Results
This chapter outlines the project results, starting with descriptive information from each of the data groups (recruit classes, FTOs, and assessors).  Following the descriptive characteristics, analysis was carried out to determine if Class 152 and Class 153 could be grouped into one competency-based delivery model group for analysis.  Recruit Survey 1 was analyzed against Recruit Survey 2 to determine if there were differences in recruit perceptions before and after exposure to practical experience, and to determine if Recruit Survey 1 could be used for subsequent analysis.  Following these tests, the survey data were analyzed within groups, to determine if any recruit or FTO demographic characteristics influenced the results, and between classes, to answer the primary research question comparing recruit ability and preparedness between the lecture-based and competency-based delivery models.  Qualitative analysis of the comments from each of the surveys is presented, followed lastly by several comments on the discussion with the three FTOs who agreed to participate in the focus group.

5.1 Descriptive Survey Results
The following section outlines the descriptive characteristics of each of the survey administrations, including response rate, gender, and experience level of the recruits and FTOs.

5.1.1 Lecture-based delivery model:  Class 151
The response rate for Class 151 was very high through both administrations of the survey.
The class itself was composed of 62.9% male and 37.1% female recruits.  Of these recruits, 31.4% were 20-24 years old, 42.9% were 25-29, 14.3% were 30-34, and 11.4% were 35-39.  No recruits were in the upper age category.  Table 5-1 outlines the percentages and age demographics for each of the survey administrations for Class 151.

                        Male  Female  20-24  25-29  30-34  35-39  40-44  Did not respond  Total respondents
Class statistics     n    22      13     11     15      5      4      0  N/A                     35
                     %  62.9    37.1   31.4   42.9   14.3   11.4      0  N/A                    N/A
Survey 1 respondents n    22      13     11     15      5      4    N/A  0                       35
                     %  62.9    37.1   31.4   42.9   14.3   11.4    N/A  0                     100%
Survey 2 respondents n    18       8      7     11      5      3    N/A  9                       26
                     %  69.2    30.8   26.9   42.3   19.2   11.5    N/A  25.7                 74.3%
Table 5-1  Class 151 demographic characteristics and survey response rates

The departments that had recruits in Class 151 were:  Abbotsford (n=3, 8.6%), Delta (n=4, 11.4%), New Westminster (n=2, 5.7%), Port Moody (n=1, 2.9%), Saanich (n=4, 11.4%), Transit (n=5, 14.3%), Vancouver (n=13, 37.1%), and Victoria (n=3, 8.6%).
Table 5-2 presents the recruits’ reported education levels prior to starting at the Police Academy.  An option for no post-secondary education was not included because departmental recruiting policy states that applicants must have at least some post-secondary experience.  The most common level of previous education was a university degree, with 37.1% of the class having earned one before starting at the Police Academy.  Responses in the “Other” category included JIBC Fire Academy, JIBC Paramedic training, and trades certification.

Education             Frequency  Percent
Some college                  7     20.0
College diploma               2      5.7
Some university               4     11.4
Undergraduate degree         13     37.1
Graduate degree               3      8.6
Other                         5     14.3
Did not respond               1      2.9
Table 5-2  Education levels of Class 151 prior to police academy

Within the class, 25 recruits (71.4%) had no previous policing experience and 10 recruits (28.6%) indicated they had some previous policing experience.
Table 5-3 indicates the types of previous policing experience reported by the recruits.

Experience                                                           Frequency  Percent
No previous police related experience                                       25     71.4
Community Safety Officer, jail guard, auxiliary/reserve constable,
international police officer                                                10     28.6
Traffic authority, Canadian Border Services Agency, corrections,
civilian staff at police department, dispatch                                0        0
Volunteer (Community safety office)                                          0        0
Table 5-3  Previous policing experience of Class 151 prior to police academy

5.1.1.1 Demographic Characteristics of 151 FTOs
The FTO survey was sent to FTOs for all 35 recruits in Class 151.  Fifteen (15) FTOs responded, for a response rate of 42.9%.  The most common characteristics for the FTO respondents for Class 151 were that they were in the age range of 35-39 years (40.0%), had 5-9 years of service (60.0%), had been an FTO for four years or less (86.7%), and had trained four or fewer recruits (80.0%).  Table 5-4 outlines the full demographic characteristics for the FTO respondents for Class 151.

Demographic                  Frequency  Percent
Gender
  Male                              12     80.0
  Female                             3     20.0
Age Range
  25-29                              1      6.7
  30-34                              5     33.3
  35-39                              6     40.0
  40-44                              2     13.3
  45-49                              1      6.7
Years of Service
  0-4                                2     13.3
  5-9                                9     60.0
  10-14                              2     13.3
  15-19                              2     13.3
Years as FTO
  0-4                               13     86.7
  5-9                                0        0
  10-14                              2     13.3
Number of Recruits Trained
  0-4                               12     80.0
  5-9                                1      6.7
  10-14                              2     13.3
Table 5-4  Demographic characteristics for FTO respondents for Class 151

The demographic characteristics of the recruits who were trained by the FTOs who responded to the survey are presented in Table 5-5.  Two of the FTO responses were unable to be matched to a recruit, so information for 13 recruits is provided in the table.  One FTO was not on the list of FTOs provided by the department and so could not be matched to a recruit; likely this FTO was substituting for a regular FTO who was on leave and was forwarded the survey.  The other FTO did not provide a name and so was unable to be matched to a recruit in the class.
Demographic                    Frequency  Percent
Recruit Gender
  Male                                 7     53.8
  Female                               6     46.2
Recruit Age Range
  20-24                                5     38.5
  25-29                                7     53.8
  30-34                                1      7.7
  35-39                                0        0
  40-44                                0        0
Recruit Previous Education
  Some college                         3     23.1
  College diploma                      0        0
  Some university                      0        0
  University degree                    7     53.8
  Graduate degree                      1      7.7
  Other                                2     15.4
Recruit Previous Police Experience
  Yes                                  5     38.5
  No                                   8     61.5
Table 5-5  Characteristics of recruits trained by FTO respondents in Class 151.

5.1.2 Competency-based delivery model:  Class 152

                        Male  Female  20-24  25-29  30-34  35-39  40-44  Did not respond  Total respondents
Class statistics     n    24      12     13     13     10      0      0  N/A                     36
                     %  66.7    33.3   36.1   36.1   27.8    N/A    N/A  N/A                    N/A
Survey 1 respondents n    22       7      7     13      9    N/A    N/A  7                       29
                     %  75.9    24.1   24.1   44.8   31.0    N/A    N/A  19.4                 80.6%
Survey 2 respondents n     8       1      1      5      3    N/A    N/A  27                       9
                     %  88.9    11.1   11.1   55.6   33.3    N/A    N/A  75.0                 25.0%
Table 5-6  Class 152 demographic characteristics and survey response rates

The departments that had recruits in Class 152 were:  Abbotsford (n=2, 5.6%), Central Saanich (n=1, 2.8%), Delta (n=3, 8.3%), Nelson (n=1, 2.8%), New Westminster (n=2, 5.6%), Saanich (n=4, 11.1%), Vancouver (n=20, 55.6%), Victoria (n=2, 5.6%), and West Vancouver (n=1, 2.8%).
Table 5-7 presents the recruits’ reported education levels prior to starting at the Police Academy for the 29 recruits who completed Survey 1.  The most common level of previous education was a university degree, with 55.2% of the respondents having earned one before starting at the Police Academy.  Responses in the “Other” category included UK A-levels and a university certificate.

Education             Frequency  Percent
Some college                  0        0
College diploma               4     13.8
Some university               4     13.8
Undergraduate degree         16     55.2
Graduate degree               3     10.3
Other                         2      6.9
Did not respond               0        0
Table 5-7  Education levels of Class 152 respondents prior to police academy

Within the respondents, 12 recruits (41.4%) had no previous policing experience and 17 recruits (58.6%) indicated they had some previous policing experience.
Table 5-8 indicates the types of previous policing experience reported by the recruits.

Experience                                                           Frequency  Percent
No previous police related experience                                       12     41.4
Community Safety Officer, jail guard, auxiliary/reserve constable,
international police officer                                                13     44.8
Traffic authority, Canadian Border Services Agency, corrections,
civilian staff at police department, dispatch                                3     10.3
Volunteer (Community safety office)                                          1      3.4
Table 5-8  Previous policing experience of Class 152 prior to police academy

5.1.2.1 Demographic Characteristics of 152 FTOs
The FTO survey was sent to FTOs for all 36 recruits in Class 152.  Eleven (11) FTOs responded, for a response rate of 30.6%.  Equal numbers of FTO respondents for Class 152 were in the age ranges of 30-34, 35-39, and 40-44 years (27.3% each).  The other most common characteristics were the same as for the Class 151 FTOs:  the responding FTOs had five to nine years of service (45.5%), had been an FTO for four years or less (63.6%), and had trained four or fewer recruits (63.6%).  Table 5-9 outlines the full demographic characteristics for the FTO respondents for Class 152.  Despite a request to the departments that FTOs who had trained recruits in Class 151 also be used to train recruits from Class 152, to help with the evaluation project, only one FTO trained recruits in both Class 151 and 152.

Demographic                  Frequency  Percent
Gender
  Male                               9     81.8
  Female                             2     18.2
Age Range
  25-29                              0        0
  30-34                              3     27.3
  35-39                              3     27.3
  40-44                              3     27.3
  45-49                              1      9.1
  50-54                              1      9.1
Years of Service
  0-4                                0        0
  5-9                                5     45.5
  10-14                              4     36.4
  15-19                              1      9.1
  20-24                              1      9.1
Years as FTO
  0-4                                7     63.6
  5-9                                1      9.1
  10-14                              2     18.2
  15-19                              1      9.1
Number of Recruits Trained
  0-4                                7     63.6
  5-9                                2     18.2
  10-14                              2     18.2
Table 5-9  Demographic characteristics for FTO respondents for Class 152

The demographic characteristics of the recruits who were trained by the FTOs who responded to the survey are presented in Table 5-10.
Four of the FTO responses were unable to be matched to a recruit, so information for seven recruits is provided in the table.  Two FTOs completed the survey but the recruit they were training did not, and two FTOs completed the survey but did not provide a name and so could not be matched with a recruit.

Demographic                    Frequency  Percent
Recruit Gender
  Male                                 6     85.7
  Female                               1     14.3
Recruit Age Range
  20-24                                1     14.3
  25-29                                2     28.6
  30-34                                4     57.1
  35-39                                0        0
  40-44                                0        0
Recruit Previous Education
  Some college                         0        0
  College diploma                      1     14.3
  Some university                      1     14.3
  University degree                    4     57.1
  Graduate degree                      1     14.3
  Other                                0        0
Recruit Previous Police Experience
  Yes                                  4     57.1
  No                                   3     42.9
Table 5-10  Characteristics of recruits trained by FTO respondents in Class 152.

5.1.3 Competency-based delivery model:  Class 153
Due to an increase in hiring by departments, primarily Vancouver, the class size for Class 153 was increased from the maximum of 36 recruits that the program was designed for to 48 recruits.

                        Male  Female  20-24  25-29  30-34  35-39  40-44  Did not respond  Total respondents
Class statistics     n    32      16     11     22     10      1      4  N/A                     48
                     %  66.7    33.3   22.9   45.8   20.8    2.1    8.3  N/A                    N/A
Survey 1 respondents n    14       6      3     12      3      1      1  28                      20
                     %  70.0    30.0   15.0   60.0   15.0    5.0    5.0  58.3                 41.7%
Survey 2 respondents n     6       1      1      2      2      1      1  41                       7
                     %  85.7    14.3   14.3   28.6   28.6   14.3   14.3  85.4                 14.6%
Table 5-11  Class 153 demographic characteristics and survey response rates

The departments that had recruits in Class 153 were:  Abbotsford (n=3, 6.3%), Central Saanich (n=1, 2.1%), Delta (n=2, 4.2%), New Westminster (n=4, 8.3%), Saanich (n=2, 4.2%), Transit (n=5, 10.4%), Vancouver (n=26, 54.2%), Victoria (n=3, 6.3%), and West Vancouver (n=2, 4.2%).
Table 5-12 presents the recruits’ reported education levels prior to starting at the Police Academy for the 20 recruits who completed Survey 1.
The most common levels of previous education were a college diploma and an undergraduate degree, with 40.0% of the respondents having earned each before starting at the Police Academy.

Education             Frequency  Percent
Some college                  0        0
College diploma               8     40.0
Some university               3     15.0
Undergraduate degree          8     40.0
Graduate degree               1      5.0
Other                         0        0
Did not respond               0        0
Table 5-12  Education levels of Class 153 respondents prior to police academy

Within the respondents, 15 recruits (75.0%) had no previous policing experience and five recruits (25.0%) indicated they had some previous policing experience.  Table 5-13 indicates the types of previous policing experience reported by the recruits.

Experience                                                           Frequency  Percent
No previous police related experience                                       15     75.0
Community Safety Officer, jail guard, auxiliary/reserve constable,
international police officer                                                 3     15.0
Traffic authority, Canadian Border Services Agency, corrections,
civilian staff at police department, dispatch                                1      5.0
Volunteer (Community safety office)                                          0        0
Experience not described                                                     1      5.0
Table 5-13  Previous policing experience of Class 153 prior to police academy

5.1.3.1 Demographic Characteristics of 153 FTOs
The FTO survey was sent to FTOs for all 48 recruits in Class 153.  Nine FTOs responded, for a response rate of 18.8%.  The most common characteristics for the FTO respondents for Class 153 were that they were in the age range of 45-49 years (44.4%), had 10-14 years of service (55.6%), had been an FTO for four years or less (66.7%), and had trained four or fewer recruits (66.7%).  Table 5-14 outlines the full demographic characteristics for the FTO respondents for Class 153.
Demographic                  Frequency  Percent
Gender
  Male                               7     77.8
  Female                             2     22.2
Age Range
  25-29                              0        0
  30-34                              2     22.2
  35-39                              1     11.1
  40-44                              2     22.2
  45-49                              4     44.4
Years of Service
  0-4                                0        0
  5-9                                1     11.1
  10-14                              5     55.6
  15-19                              2     22.2
  20-24                              1     11.1
Years as FTO
  0-4                                6     66.7
  5-9                                1     11.1
  10-14                              2     22.2
Number of Recruits Trained
  0-4                                6     66.7
  5-9                                1     11.1
  10-14                              1     11.1
  15-19                              1     11.1
Table 5-14  Demographic characteristics for FTO respondents for Class 153

The demographic characteristics of the recruits who were trained by the FTOs who responded to the survey are presented in Table 5-15.  Seven of the FTO responses were unable to be matched to a recruit, so information for two recruits is provided in the table.  Four FTOs completed the survey but the recruit they trained did not, while three FTOs completed the survey but did not provide a name and so were unable to be matched to a recruit in the class.

Demographic                    Frequency  Percent
Recruit Gender
  Male                                 2      100
  Female                               0        0
Recruit Age Range
  20-24                                0        0
  25-29                                1     50.0
  30-34                                1     50.0
  35-39                                0        0
  40-44                                0        0
Recruit Previous Education
  Some college                         0        0
  College diploma                      0        0
  Some university                      0        0
  University degree                    2      100
  Graduate degree                      0        0
  Other                                0        0
Recruit Previous Police Experience
  Yes                                  1     50.0
  No                                   1     50.0
Table 5-15  Characteristics of recruits trained by FTO respondents in Class 153.

5.1.4 Competency-based delivery model:  Exam Assessors
The assessor survey was sent to the 28 assessors who had acted as exam assessors for Classes 152 or 153.  Seventeen (17) assessors responded, for an apparent response rate of 60.7%.  Of these responses, however, seven were not completed, dropping the actual response rate to 35.7%.  The most common characteristics for the assessor respondents were that they were female (60%), were in the age range of 50-54 years (60%), had 10-14 years of service or 20-24 years of service (30% each), and had been an assessment centre assessor for 5-9 years before that program was shut down (70%).
Of the group, five also had experience as an FTO (50%), and of those, the most common length of time as an FTO was 0-4 years (50% of respondents, as only four people indicated how long they had been an FTO).  The majority of respondents (60%) had assessed both the Week 5 Progress Assessment and the Week 12 Final exams in Block I for either Class 152 or Class 153.

Demographic                             Frequency  Percent
Gender
  Male                                          4     40.0
  Female                                        6     60.0
Age Range
  25-29                                         0        0
  30-34                                         1     10.0
  35-39                                         0        0
  40-44                                         1     10.0
  45-49                                         2     20.0
  50-54                                         6     60.0
Years of Service
  0-4                                           0        0
  5-9                                           0        0
  10-14                                         3     30.0
  15-19                                         2     20.0
  20-24                                         3     30.0
  25-29                                         1     10.0
  30-34                                         1     10.0
Years as an Assessment Centre assessor
  0-4                                           2     20.0
  5-9                                           7     70.0
  10-14                                         0        0
  15-19                                         1     10.0
FTO Experience
  Yes                                           5     50.0
  No                                            5     50.0
Years as an FTO
  0-4                                           2     50.0
  5-9                                           0        0
  10-14                                         1     25.0
  15-19                                         0        0
  20-24                                         1     25.0
Exam Assessed
  Week 5                                        2     20.0
  Week 12                                       2     20.0
  Both                                          6     60.0
Table 5-16  Demographic characteristics of competency-based exam assessors

5.2 Quantitative Survey Analysis
Data analysis was complicated by the low response rates, particularly for Survey 2, the FTO survey, and Class 153.  To facilitate analysis, the two competency-based delivery model data sets, Class 152 and Class 153, were grouped for analysis.  Before grouping, the data were analyzed across class categories for Classes 152 and 153 to ensure that there were no significant differences and that the data could be grouped.  To determine if there were significant differences, I used the two global questions related to overall ability and overall preparation for Block II.  Additionally, since I was interested in determining if there was a difference in how recruits responded to these questions before and after they had experienced what it was like to actually work as a patrol-level police officer, I created a new column that measured the difference between Survey 2 and Survey 1 (R2-R1).  This column was also analyzed to ensure there were no significant differences between Classes 152 and 153 before their data were grouped.
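The grouping check described above can be sketched in code.  The following Python snippet (scipy in place of SPSS) illustrates the decision rule applied below: compare a global rating across the two classes with the Mann-Whitney U test and retain the null hypothesis when p > 0.05.  The ratings and sample sizes here are invented placeholders, not the study data.

```python
# Minimal sketch of the Class 152 vs. Class 153 grouping check, assuming
# 5-point ratings; all values below are invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
class_152 = rng.integers(2, 6, size=29)  # e.g., overall-ability ratings
class_153 = rng.integers(2, 6, size=20)

# Two independent samples -> Mann-Whitney U test at alpha = 0.05
u_stat, p_value = stats.mannwhitneyu(class_152, class_153,
                                     alternative="two-sided")
alpha = 0.05
decision = "retain" if p_value > alpha else "reject"
print(f"U={u_stat:.1f}, p={p_value:.3f} -> {decision} the null hypothesis")

# Only if the null is retained for every comparison are the two classes
# pooled into one competency-based group for further analysis.
if decision == "retain":
    pooled = np.concatenate([class_152, class_153])
```

The same pattern is repeated for each of the global ability, preparedness, and difference (R2-R1) comparisons; pooling is justified only when no comparison rejects the null hypothesis.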
The data were coded as belonging to either Class 152 or Class 153 and the responses across classes were compared for the following questions:
1. Overall, please rate your general ability to perform as a Recruit Constable in Block II – Survey administration 1
2. Overall, please rate your general ability to perform as a Recruit Constable in Block II – Survey administration 2
3. The difference in overall ability reported between Survey 2 and Survey 1 (R2-R1)
4. How well do you feel your Block I training prepared you to meet the expectations as a Recruit Constable in Block II – Survey administration 1
5. How well do you feel your Block I training prepared you to meet the expectations as a Recruit Constable in Block II – Survey administration 2
6. The difference in overall training preparation reported between Survey 2 and Survey 1 (R2-R1)
In each of these cases, the null hypothesis was no difference in the distribution of responses between Class 152 and Class 153.  The Mann-Whitney U test was used to test this null hypothesis because the samples being compared were independent.  Analysis was carried out at a significance level of p=0.05.  Table 5-17 indicates the results of this analysis for each of the six comparisons listed above.  In each of the tests, the null hypothesis was retained, indicating no difference in the distribution between classes, and the class data could be grouped.

1. Null hypothesis: The distribution of overall ability responses from Recruit Survey 1 is the same across Class 152 and Class 153.
   Test: Independent Samples Mann-Whitney U Test.  Sig.: 0.164.  Decision: Retain the null hypothesis.
2. Null hypothesis: The distribution of overall ability responses from Recruit Survey 2 is the same across Class 152 and Class 153.
   Test: Independent Samples Mann-Whitney U Test.  Sig.: 0.181¹.  Decision: Retain the null hypothesis.
3. Null hypothesis: The distribution of the difference between Recruit Survey 1 and Recruit Survey 2 (R2-R1) overall ability responses is the same across Class 152 and Class 153.
   Test: Independent Samples Mann-Whitney U Test.  Sig.: 0.224¹.  Decision: Retain the null hypothesis.
4. Null hypothesis: The distribution of overall preparedness responses from Recruit Survey 1 is the same across Class 152 and Class 153.
   Test: Independent Samples Mann-Whitney U Test.  Sig.: 0.104.  Decision: Retain the null hypothesis.
5. Null hypothesis: The distribution of overall preparedness responses from Recruit Survey 2 is the same across Class 152 and Class 153.
   Test: Independent Samples Mann-Whitney U Test.  Sig.: 0.388¹.  Decision: Retain the null hypothesis.
6. Null hypothesis: The distribution of the difference between Recruit Survey 1 and Recruit Survey 2 (R2-R1) overall preparedness responses is the same across Class 152 and Class 153.
   Test: Independent Samples Mann-Whitney U Test.  Sig.: 0.864¹.  Decision: Retain the null hypothesis.
Asymptotic significances are displayed.  The significance level is 0.05.
¹ Exact significance is displayed for this test.
Table 5-17  Mann-Whitney U test results comparing distribution of responses to Recruit Survey 1 and Recruit Survey 2 between Class 152 and 153

To further test if it was acceptable to group the data from Class 152 and Class 153, I analyzed the responses of the FTOs and the difference between recruit and FTO responses.  As the recruit response rate for Survey 2 was much lower than for Survey 1, the data from the FTO survey were compared with the data from Survey 1 to increase the number of comparison points for analysis.  The following four questions were examined:
1.
Overall, please rate the general ability of your recruit to perform as a Recruit Constable in Block II – FTO survey
2. The difference in overall ability reported between the FTO survey and Recruit Survey 1 (FTO-R1)
3. How well do you feel your recruit’s Block I training prepared them to meet the expectations as a Recruit Constable in Block II – FTO survey
4. The difference in overall training preparation reported between the FTO survey and Recruit Survey 1 (FTO-R1)
In each of these cases, the null hypothesis was no difference in the distribution of responses between Class 152 and Class 153.  The Mann-Whitney U test was used to test this null hypothesis because the samples being compared were independent.  Analysis was carried out at a significance level of p=0.05.  Table 5-18 indicates the results of this analysis for each of the four comparisons listed above.
In each of the tests, the null hypothesis was retained, indicating no difference in the distribution between classes, so the class data could be grouped.

1. Null hypothesis: The distribution of overall ability responses from the FTO survey is the same across Class 152 and Class 153.  Test: Independent Samples Mann-Whitney U Test.  Sig.: 0.667¹.  Decision: Retain the null hypothesis.
2. Null hypothesis: The distribution of the difference between FTO survey and Recruit Survey 1 (FTO-R1) overall ability responses is the same across Class 152 and Class 153.  Test: Independent Samples Mann-Whitney U Test.  Sig.: 1.000¹.  Decision: Retain the null hypothesis.
3. Null hypothesis: The distribution of overall preparedness responses from the FTO survey is the same across Class 152 and Class 153.  Test: Independent Samples Mann-Whitney U Test.  Sig.: 0.889¹.  Decision: Retain the null hypothesis.
4. Null hypothesis: The distribution of the difference between FTO survey and Recruit Survey 1 (FTO-R1) overall preparedness responses is the same across Class 152 and Class 153.  Test: Independent Samples Mann-Whitney U Test.  Sig.: 0.500¹.  Decision: Retain the null hypothesis.

Asymptotic significances are displayed.  The significance level is 0.05.  ¹ Exact significance is displayed for this test.

Table 5-18  Mann-Whitney U test results comparing distribution of responses to the FTO survey and the difference between Recruit Survey 1 and the FTO survey between Class 152 and Class 153

Since the null hypothesis was retained in each of the 10 areas analyzed, indicating no difference in the distribution of responses between Class 152 and Class 153, it was deemed acceptable to group the data from the two classes for further analysis.  The data was combined into a "competency-based delivery model" grouping.

5.2.1 Differences in perception before and after Block II experience

One research question asked whether a difference could be observed in how recruits perceived their ability and their preparation before and after they had experience working as a patrol officer during Block II.
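A before-and-after comparison of this kind involves related samples (the same recruits at two points in time), so a paired rank test applies.  A minimal sketch with SciPy's `wilcoxon`, using hypothetical paired ratings rather than the survey data:

```python
from scipy.stats import wilcoxon

# Hypothetical paired ratings for the same recruits before (Survey 1) and
# after (Survey 2) ten weeks of Block II -- illustrative only.
survey_1 = [2, 3, 3, 3, 2, 3, 2, 4, 3, 3, 2, 4, 3, 2]
survey_2 = [3, 3, 4, 2, 3, 3, 3, 3, 4, 3, 3, 3, 4, 3]

# Null hypothesis: the median of the paired differences (Survey 2 - Survey 1)
# is zero.  Pairs with a zero difference are discarded by the default
# zero_method.
stat, p_value = wilcoxon(survey_2, survey_1)

decision = "retain" if p_value >= 0.05 else "reject"
print(f"W = {stat}, p = {p_value:.3f}: {decision} the null hypothesis")
```

The paired structure is what distinguishes this test from the Mann-Whitney U test used for the between-class comparisons.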
To address this question, the Wilcoxon Signed Rank Test was used to compare the answers provided by each recruit in Survey 1, administered before the start of Block II training, and Survey 2, administered after ten weeks of Block II training.  The Wilcoxon Signed Rank Test is used to compare related samples, such as the same group at two different points in time, as is the case with this sample (Cohen et al., 2011).  The overall ability and overall preparation for Block II questions were used to compare the differences between these two surveys.  In each case, the null hypothesis stated that there was no difference in the answers before and after the Block II experience.  Phrased another way, the null hypothesis was that the median of the differences between Survey 1 and Survey 2 was zero.  The Wilcoxon Signed Rank Test was used to test this null hypothesis for each of the following survey groupings:

1. Lecture-based group: answer to the question "Overall, please rate your general ability to perform as a Recruit Constable in Block II." (n=26)
2. Lecture-based group: answer to the question "How well do you feel your Block I training prepared you to meet the expectations of a Recruit Constable in Block II?" (n=26)
3.
Competency-based group: answer to the question "Overall, please rate your general ability to perform as a Recruit Constable in Block II." (n=15; 16 recruits responded to Survey 2, but one did not provide a name and could not be matched to their Survey 1 response)
4. Competency-based group: answer to the question "How well do you feel your Block I training prepared you to meet the expectations of a Recruit Constable in Block II?" (n=15; 16 recruits responded to Survey 2, but one did not provide a name and could not be matched to their Survey 1 response)

Table 5-19 indicates that, in each case, the null hypothesis was retained, indicating no significant difference between the recruits' perceptions of their ability or preparation before Block II began and after ten weeks of Block II training.

1. Null hypothesis: For the lecture-based delivery model recruits, the median of the differences between overall ability from Recruit Survey 1 and Recruit Survey 2 equals 0.  Test: Related Samples Wilcoxon Signed Rank Test.  Sig.: 0.282.  Decision: Retain the null hypothesis.
2. Null hypothesis: For the lecture-based delivery model recruits, the median of the differences between overall preparedness from Recruit Survey 1 and Recruit Survey 2 equals 0.  Test: Related Samples Wilcoxon Signed Rank Test.  Sig.: 0.713.  Decision: Retain the null hypothesis.
3. Null hypothesis: For the competency-based delivery model recruits, the median of the differences between overall ability from Recruit Survey 1 and Recruit Survey 2 equals 0.  Test: Related Samples Wilcoxon Signed Rank Test.  Sig.: 0.366.  Decision: Retain the null hypothesis.
4. Null hypothesis: For the competency-based delivery model recruits, the median of the differences between overall preparedness from Recruit Survey 1 and Recruit Survey 2 equals 0.  Test: Related Samples Wilcoxon Signed Rank Test.  Sig.: 0.180.  Decision: Retain the null hypothesis.

Asymptotic significances are displayed.
The significance level is 0.05.

Table 5-19  Differences between recruit perceptions before and after Block II training experience

5.2.2 Comparison within classes

To determine if there were any factors that influenced recruits' or FTOs' responses within each group, analysis was conducted using the overall rankings from the Recruit Survey 1 responses and the FTO responses from both the lecture-based and competency-based delivery models.  To calculate means for each category, the following values were assigned to responses.  For the overall ability question: 1 = "has knowledge", 2 = "act under full supervision", 3 = "act under moderate supervision", 4 = "act independently", and 5 = "act as a supervisor or instructor".  For the overall preparedness question: 1 = "extremely poorly prepared", 2 = "poorly prepared", 3 = "well prepared", 4 = "extremely well prepared", and 0 = "N/A".  The N/A response was not excluded from the analysis because each of the competencies has been determined to be core to policing at the constable level; a rating of N/A therefore represents a complete failure on the part of recruit training to meet the basic requirements, and this response should also be captured in the analysis.

As outlined in the previous section, the responses from Recruit Survey 1 were used for analysis to increase the sample size.  Because recruits completed this survey prior to meeting and interacting with their FTO, I did not analyze the effect of FTO characteristics, such as FTO gender or years of service, on recruit responses.

For ease of presentation, the tables for these analyses are included in Appendix C - Consistency Tables rather than directly in the text.

5.2.2.1 Lecture-based delivery model

Analyses were conducted to determine if there were any recruit characteristics or FTO characteristics that influenced the responses.  The following sections present the results from recruits in Class 151, who were trained in the lecture-based model.
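The coding scheme above can be sketched in a few lines of Python; the responses shown are hypothetical, not the study data.

```python
from statistics import mean

# Coding scheme from the text: ability on a 1-5 scale, preparedness on a
# 0-4 scale with "N/A" deliberately coded as 0 so it stays in the analysis.
ABILITY_CODES = {
    "has knowledge": 1,
    "act under full supervision": 2,
    "act under moderate supervision": 3,
    "act independently": 4,
    "act as a supervisor or instructor": 5,
}
PREPAREDNESS_CODES = {
    "N/A": 0,
    "extremely poorly prepared": 1,
    "poorly prepared": 2,
    "well prepared": 3,
    "extremely well prepared": 4,
}

# Hypothetical survey responses for one category.
responses = ["act under moderate supervision", "act independently",
             "act under moderate supervision", "act under full supervision"]

coded = [ABILITY_CODES[r] for r in responses]
print(f"mean ability = {mean(coded):.2f}")  # prints: mean ability = 3.00
```

Keeping "N/A" as 0 rather than dropping it pulls the preparedness mean down, which matches the rationale given in the text: an N/A rating on a core competency is treated as a failure of the training, not as missing data.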
5.2.2.1.1 Recruit characteristics

Within the lecture-based delivery model responses, recruit responses to the overall ability and overall preparedness questions were analyzed across recruit gender, age range, post-secondary education level, and previous policing experience.  The results for recruit gender are presented in Table C-1.  The mean values for each gender indicate that females scored themselves lower in both overall ability and overall preparedness, but the Mann-Whitney U Test indicated that the null hypothesis of an equal distribution across genders was to be retained, so the observed difference between genders was not significant.

The results for recruit age range are presented in Table C-2.  As there were more than two categories of response, the Kruskal-Wallis Test was used instead of the Mann-Whitney U Test.  Again, although differences are observed in the means of each of the age range categories for the two global questions, these differences were not large enough to reject the null hypothesis of no difference in distribution across age range categories.  The null hypothesis was retained, indicating recruit age was not a factor in recruit response to the global questions.

The results for recruit post-secondary education level are presented in Table C-3.  Again, although differences are observed in the means of each of the education level categories for the two global questions, these differences were not large enough to reject the null hypothesis of no difference in distribution across education level categories.  The null hypothesis was retained, indicating that recruit education level was not a factor in recruit response to the global questions.

Lastly, the results for recruit previous policing experience are presented in Table C-4.
Here there were only two categories for analysis, as the recruits in this particular class either had no previous policing experience or fell into a single category that included community safety police, jail guards, auxiliary/reserve constables, and international police.  Again, although there are observed differences in the mean values for each of the global questions, with the recruits who had previous experience rating themselves slightly higher than their inexperienced classmates, the difference was not enough to reject the null hypothesis.  The null hypothesis of no difference in distribution across previous policing experience categories was retained, indicating that previous policing experience was not a factor in recruit response to the global questions.

5.2.2.1.2 FTO characteristics

To examine if FTO characteristics had any influence on how the FTOs rated their Block II recruits, the ratings were grouped and analyzed across FTO gender, FTO years of service, FTO years as an FTO, and the number of recruits an FTO had trained.  Table C-5 shows the results of the mean comparison and Mann-Whitney U Test across FTO gender.  Although the results indicate that female FTOs rated their recruits lower in overall ability but marginally higher in preparedness, the differences were not large enough to reject the null hypothesis that the distribution of responses was the same across FTO gender categories.  Table C-6 shows the results of the mean comparison and Kruskal-Wallis Test across FTO years of service.  The results indicate that more experienced FTOs tended to rate their recruits' ability slightly higher, but the differences were not large enough to reject the null hypothesis that the distribution of responses was the same across FTO years of service.  No trend emerged in how the FTOs rated their recruits' preparedness, and the null hypothesis was again retained.
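When a characteristic has more than two categories (such as age range or years of service), the Kruskal-Wallis Test replaces the two-sample Mann-Whitney U Test, as noted above.  A minimal sketch with SciPy, using hypothetical ability codes grouped by age range rather than the survey data:

```python
from scipy.stats import kruskal

# Hypothetical 1-5 ability codes grouped by recruit age range -- three
# groups, so Kruskal-Wallis is used instead of Mann-Whitney U.
age_20_24 = [3, 2, 3, 3]
age_25_29 = [3, 3, 4, 2, 3]
age_30_34 = [4, 3, 3]

# Null hypothesis: the distribution of responses is the same across all
# age-range categories.
h_stat, p_value = kruskal(age_20_24, age_25_29, age_30_34)

decision = "retain" if p_value >= 0.05 else "reject"
print(f"H = {h_stat:.3f}, p = {p_value:.3f}: {decision} the null hypothesis")
```

Like the Mann-Whitney U Test, Kruskal-Wallis is rank-based, so it suits ordinal survey responses; a significant result says only that at least one group differs, which is why the text follows up rejected hypotheses with cross-tabulation reports.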
Table C-7 shows the results of the mean comparison and Kruskal-Wallis Test across FTO years as a field trainer.  The differences in this category were not sufficient to reject the null hypothesis, indicating no significant difference in the distribution of responses based on FTO years as a field trainer.  Table C-8 shows the results of the mean comparison and Kruskal-Wallis Test across the number of recruits an FTO had trained.  The differences in this category were not sufficient to reject the null hypothesis, indicating no significant difference in the distribution of responses based on the number of recruits an FTO has trained.

5.2.2.1.3 Recruit characteristics on FTO responses

In addition to the potential influence of FTO characteristics on FTO responses, the potential influence of recruit characteristics on FTO responses was also examined.  FTO responses were analyzed across recruit gender, age range, post-secondary education level, and previous policing experience.  Table C-9 shows the means and Mann-Whitney U Test values for FTO responses grouped across recruit gender.  Although female recruits were rated lower in both overall ability and overall preparation than their male classmates, this difference is not large enough to reject the null hypothesis that the distribution of responses is the same.  Table C-10 shows the means and Kruskal-Wallis Test values for FTO responses grouped across recruit age.  The differences in mean ratings are minimal and the null hypothesis that the distribution across categories is equal is retained.  Table C-11 shows the means and Kruskal-Wallis Test values for FTO responses grouped across recruit post-secondary education.  Again, the differences in mean ratings are minimal and the null hypothesis that the distribution across categories is equal is retained.  Lastly, Table C-12 shows the means and Mann-Whitney U Test values for FTO responses grouped by recruit previous policing experience.
For this class, the recruits fell into one of two categories of previous policing experience, as indicated in Section 5.2.2.1.1, so the Mann-Whitney U Test was used.  Recruit previous police experience did not influence the ratings, and the null hypothesis that the distribution of responses across categories of experience was equal was retained.

5.2.2.2 Competency-based delivery model

As with the lecture-based delivery model, analyses were conducted to determine if there were any recruit characteristics or FTO characteristics that influenced the responses.

5.2.2.2.1 Recruit characteristics

Recruit responses to the overall ability and overall preparedness questions were analyzed when grouped across recruit gender, age range, post-secondary education, and previous police experience.  In each case, the null hypothesis tested was that the distribution of responses was equal between the categories examined.  In all four of these areas, the null hypotheses were retained, indicating that none of the recruit characteristics had a significant impact on recruits' overall sense of their own ability or on how prepared they believed they were.  Table C-13 through Table C-16 show the recruits' responses grouped across gender, age category, post-secondary experience, and previous police experience, respectively.

5.2.2.2.2 FTO characteristics

FTO responses were examined across the trainer characteristics of gender, age category, years of service, years as an FTO, and number of recruits trained, as represented in Table C-17 through Table C-21.  The null hypothesis for each of the tests was that the distribution of responses was equal across the groupings.  In all cases the null hypotheses were retained, indicating that no FTO characteristics significantly influenced how the FTOs rated their recruits.  Interestingly, in the lecture-based delivery model, female FTOs rated their recruits' ability lower than the male FTOs did (although not significantly).
In the competency-based model, female FTOs rated their recruits' overall ability higher than did the male FTOs (although also not significantly).

5.2.2.2.3 Recruit characteristics on FTO responses

To determine if any characteristics of recruits influenced how their FTOs rated them, I carried out an analysis of FTO responses grouped across recruit gender (Table C-22), recruit age category (Table C-23), recruit post-secondary education (Table C-25), and recruit previous police experience (Table C-26).  In each case, the null hypothesis tested was that the distribution of responses was equal across all categories examined.

The sample size was extremely low, as only nine FTO responses could be matched to recruit responses.  Recruit gender, recruit post-secondary education, and recruit previous policing experience showed no influence on FTO responses, and the null hypotheses were all retained in these cases.  For recruit age category, Kruskal-Wallis Test scores indicated that the null hypothesis was to be rejected, meaning that recruit age category did have a significant influence on the FTO rating of the recruit.  Cross-tabulation reports were run for both the overall ability and the overall preparedness questions to identify the source of the difference, as non-parametric tests do not provide this information.  The cross-tabulation reports shown in Table C-24 indicate that for the overall ability question, recruits in the age range of 25-29 years (n=3) were ranked below their peers, with one recruit ranked as "has knowledge" and the remaining two recruits ranked as "act under full supervision".  All recruits in the other age categories (n=6) were ranked as "act under moderate supervision" (n=5) or "act independently" (n=1).
For the overall preparedness question, recruits in both the 20-24 (n=1) and 25-29 (n=3) age ranges were ranked as "poorly prepared", whereas their classmates aged 30-34 were ranked as "well prepared" (n=4) or "extremely well prepared" (n=1).

5.2.3 Comparison across classes

To answer the most central question, responses were compared between the lecture-based and competency-based groups, using Recruit Survey 1 for analysis.  The FTO responses were also compared across both groups, to determine whether a difference could be observed in how FTOs described their recruits' ability and preparedness.  This analysis was carried out descriptively by comparing means across classes and statistically using the Mann-Whitney U Test and, where a statistically significant difference was detected, carrying out a cross-tabulation analysis to determine the source of the detected difference.  This was conducted for the global scores as well as for each of the Police Sector Council Constable competencies.

5.2.3.1 Global comparison across classes

Figure 5-1 and Figure 5-2 show the mean rankings for overall ability and preparedness from Recruit Survey 1 and from the FTO survey, respectively.  Table 5-20 shows the values represented in each of these figures, including the sample size, mean value, and standard deviation.  For the overall ability question, recruits in the competency-based delivery model ranked themselves slightly higher than recruits in the lecture-based delivery model, with means of 2.71 and 2.69 respectively.  For the overall question about how well Block I prepared them for Block II, however, recruits in the lecture-based model ranked themselves higher (3.14) than those in the competency-based model (3.04).  The FTOs ranked recruits in the lecture-based model higher in both global ability and overall preparation (means of 2.83 and 3.00) than the recruits from the competency-based delivery model (means of 2.67 for both ability and preparedness).
Figure 5-1  Global mean ratings for overall ability (blue) and overall preparation (red) from Recruit Survey 1 clustered across training delivery methods

Figure 5-2  Global mean ratings for overall ability (blue) and overall preparation (red) from FTO survey clustered across training delivery methods

                     Overall ability,    Overall ability,   Overall preparation,   Overall preparation,
                     Recruit Survey 1    FTO survey         Recruit Survey 1       FTO survey
Lecture-based
  Mean               2.69                2.83               3.14                   3.00
  N                  35                  12                 35                     10
  Std. Deviation     .676                .718               .430                   .667
Competency-based
  Mean               2.71                2.67               3.04                   2.67
  N                  49                  9                  49                     9
  Std. Deviation     .645                .866               .498                   .707
Total
  Mean               2.70                2.76               3.08                   2.84
  N                  84                  21                 84                     19
  Std. Deviation     .655                .768               .471                   .688

Table 5-20  Global mean ratings for overall ability and overall preparation from Recruit Survey 1 and FTO responses clustered across training delivery methods

The Mann-Whitney U Test was used to test the null hypothesis of no difference in distribution of responses across the class delivery models:

1. No difference in distribution of responses across lecture-based and competency-based delivery models for overall ability ratings from Recruit Survey 1
2. No difference in distribution of responses across lecture-based and competency-based delivery models for overall ability ratings from the FTO survey
3. No difference in distribution of responses across lecture-based and competency-based delivery models for overall preparation ratings from Recruit Survey 1
4. No difference in distribution of responses across lecture-based and competency-based delivery models for overall preparation ratings from the FTO survey

Table 5-21 shows the results of this analysis, indicating that, for each of the questions, the null hypothesis is retained.  The observed differences, then, are not statistically significant.
1. Null hypothesis: The distribution of responses from recruits for the overall ability question is the same across lecture-based and competency-based delivery models.  Test: Independent Samples Mann-Whitney U Test.  Sig.: 0.652.  Decision: Retain the null hypothesis.
2. Null hypothesis: The distribution of responses from recruits for the overall preparedness question is the same across lecture-based and competency-based delivery models.  Test: Independent Samples Mann-Whitney U Test.  Sig.: 0.808¹.  Decision: Retain the null hypothesis.
3. Null hypothesis: The distribution of responses from FTOs for the overall recruit ability question is the same across lecture-based and competency-based delivery models.  Test: Independent Samples Mann-Whitney U Test.  Sig.: 0.149.  Decision: Retain the null hypothesis.
4. Null hypothesis: The distribution of responses from FTOs for the overall recruit preparedness question is the same across lecture-based and competency-based delivery models.  Test: Independent Samples Mann-Whitney U Test.  Sig.: 0.356¹.  Decision: Retain the null hypothesis.

Asymptotic significances are displayed.  The significance level is 0.05.  ¹ Exact significance is displayed for this test.

Table 5-21  Mann-Whitney U Test of overall ability and preparedness from Recruit Survey 1 and FTO survey, grouped across training type

The following sections examine recruit rankings for each of the core Constable competencies: adaptability, ethical accountability, interactive communication, organizational awareness, problem solving, risk management, stress tolerance, teamwork, and written skills.

5.2.3.2 Adaptability

Figure 5-3 and Figure 5-4 show the mean rankings for ability and preparedness in the adaptability competency from Recruit Survey 1 and from the FTO survey, respectively.  Table 5-22 shows the values represented in each of these figures, including the sample size, mean value, and standard deviation.
For the ability question in the adaptability competency, recruits in the competency-based delivery model ranked themselves considerably lower than recruits in the lecture-based delivery model, with means of 2.88 and 3.26 respectively.  For the question about how well Block I prepared them for Block II with respect to the adaptability competency, the recruits in the lecture-based model ranked their preparedness slightly lower (2.71) than those in the competency-based model (2.94).  The FTOs ranked recruits in the lecture-based model slightly higher in ability for this competency (means of 2.85 for lecture-based and 2.78 for competency-based) but slightly lower in preparedness (means of 2.38 for lecture-based and 2.44 for competency-based).

Figure 5-3  Mean ratings for ability in the Adaptability competency area from Recruit Survey 1 (blue) and the FTO survey (red) clustered across delivery method

Figure 5-4  Mean ratings for preparation in the Adaptability competency area from Recruit Survey 1 (blue) and the FTO survey (red) clustered across training method

                     Ability,            Ability,           Preparedness,          Preparedness,
                     Recruit Survey 1    FTO survey         Recruit Survey 1       FTO survey
Lecture-based
  Mean               3.26                2.85               2.71                   2.38
  N                  35                  13                 35                     13
  Std. Deviation     .852                .899               1.045                  1.121
Competency-based
  Mean               2.88                2.78               2.94                   2.44
  N                  49                  9                  49                     9
  Std. Deviation     .634                .972               .317                   1.014
Total
  Mean               3.04                2.82               2.85                   2.41
  N                  84                  22                 84                     22
  Std. Deviation     .752                .907               .720                   1.054

Table 5-22  Mean ratings for ability and preparation in the adaptability competency from Recruit Survey 1 and FTO responses clustered across training delivery methods

The Mann-Whitney U Test was used to test the null hypothesis of no difference in distribution of responses across the class delivery models:

1.
No difference in distribution of responses across lecture-based and competency-based delivery models for ability in the adaptability competency area for ratings from Recruit Survey 1
2. No difference in distribution of responses across lecture-based and competency-based delivery models for ability in the adaptability competency area for ratings from the FTO survey
3. No difference in distribution of responses across lecture-based and competency-based delivery models for preparation in the adaptability competency area for ratings from Recruit Survey 1
4. No difference in distribution of responses across lecture-based and competency-based delivery models for preparation in the adaptability competency area for ratings from the FTO survey

Table 5-23 shows the results of this analysis, indicating that, for each of the FTO questions and for the recruits' responses about how prepared they were in this competency area, the null hypothesis is retained: there are no statistically significant differences in how the FTOs rated the recruits or in how the recruits rated their own preparation across delivery methods.  The null hypothesis is rejected, however, in analysis 1, indicating a statistically significant difference between how the recruits in the lecture-based delivery model and those in the competency-based delivery model rated themselves for their ability in the adaptability competency area.  The means of the class data indicate that the competency-based recruits rated themselves lower than those from the lecture-based program.  Table 5-24 presents a cross-tabulation report on the recruit ability question to examine the breakdown of responses.  The cross-tabulation report shows that the large majority of the recruits in the competency-based delivery model scored themselves as able to "act under moderate supervision" (n=35) and the next most frequent category was "act under full supervision" (n=7).
This distribution is in contrast to the lecture-based delivery model, where recruits most frequently scored themselves as able to "act independently" (n=16), followed by "act under moderate supervision" (n=14).  Therefore, the reason for rejecting the null hypothesis in this case was that the recruits in the competency-based delivery model more frequently scored their ability in the adaptability competency area lower than did the recruits in the lecture-based delivery model.

1. Null hypothesis: The distribution of responses from recruits for the Adaptability ability question is the same across lecture-based and competency-based delivery models.  Test: Independent Samples Mann-Whitney U Test.  Sig.: 0.005.  Decision: Reject the null hypothesis.
2. Null hypothesis: The distribution of responses from FTOs for the Adaptability ability question is the same across lecture-based and competency-based delivery models.  Test: Independent Samples Mann-Whitney U Test.  Sig.: 0.896¹.  Decision: Retain the null hypothesis.
3. Null hypothesis: The distribution of responses from recruits for the Adaptability preparedness question is the same across lecture-based and competency-based delivery models.  Test: Independent Samples Mann-Whitney U Test.  Sig.: 0.908.  Decision: Retain the null hypothesis.
4. Null hypothesis: The distribution of responses from FTOs for the Adaptability preparedness question is the same across lecture-based and competency-based delivery models.  Test: Independent Samples Mann-Whitney U Test.  Sig.: 1.000¹.  Decision: Retain the null hypothesis.

Asymptotic significances are displayed.
The significance level is 0.05.  ¹ Exact significance is displayed for this test.

Table 5-23  Mann-Whitney U Test of ability and preparedness in the adaptability competency from Recruit Survey 1 and FTO survey, grouped across training type

Cross-tabulation Report
Adaptability ability (recruit)     Lecture   Competency-based   Total
has knowledge                      2         2                  4
act under full supervision         3         7                  10
act under moderate supervision     14        35                 49
act independently                  16        5                  21
Total                              35        49                 84

Table 5-24  Cross-tabulation report from Recruit Survey 1 for ability in the adaptability competency area

5.2.3.3 Ethical Accountability

Figure 5-5 and Figure 5-6 show the mean rankings for ability and preparation in the ethical accountability and responsibility competency from Recruit Survey 1 and from the FTO survey, respectively.  Table 5-25 shows the values represented in each of these figures, including the sample size, mean value, and standard deviation.  For the ability question in the ethical accountability and responsibility competency, recruits in the competency-based delivery model ranked themselves considerably lower than recruits in the lecture-based delivery model, with means of 3.10 and 3.51 respectively.  For the question about how well Block I prepared them for Block II with respect to the ethical accountability competency, the recruits in the lecture-based model ranked their preparedness slightly higher (3.00) than those in the competency-based model (2.90).  The FTOs ranked recruits in the lecture-based model considerably higher in both ability and preparedness for this competency, with means in ability of 3.62 for lecture-based and 2.89 for competency-based, and means in preparedness of 3.08 for lecture-based and 2.67 for competency-based.
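Cross-tabulation reports like Table 5-24 can be generated directly with `pandas.crosstab`; the recruit-level records below are hypothetical, not the study data.

```python
import pandas as pd

# Hypothetical recruit-level records -- delivery model and self-rated
# ability category -- illustrative only.
df = pd.DataFrame({
    "model": ["lecture", "lecture", "lecture", "competency-based",
              "competency-based", "competency-based", "competency-based"],
    "ability": ["act independently", "act under moderate supervision",
                "act independently", "act under moderate supervision",
                "act under moderate supervision", "act under full supervision",
                "act under moderate supervision"],
})

# Counts of each rating category per delivery model, with row and column
# totals added in the margins.
table = pd.crosstab(df["ability"], df["model"], margins=True, margins_name="Total")
print(table)
```

Because the non-parametric tests only flag that a difference exists somewhere, a breakdown of counts like this is what actually locates which rating categories drive a rejected null hypothesis.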
Figure 5-5  Mean ratings for ability in the Ethics competency area from Recruit Survey 1 (blue) and the FTO survey (red) clustered across delivery method

Figure 5-6  Mean ratings for preparation in the Ethics competency area from Recruit Survey 1 (blue) and the FTO survey (red) clustered across training method

                     Ability,            Ability,           Preparedness,          Preparedness,
                     Recruit Survey 1    FTO survey         Recruit Survey 1       FTO survey
Lecture-based
  Mean               3.51                3.62               3.00                   3.08
  N                  35                  13                 35                     13
  Std. Deviation     .919                .506               .907                   1.038
Competency-based
  Mean               3.10                2.89               2.90                   2.67
  N                  49                  9                  49                     9
  Std. Deviation     .848                1.269              .653                   1.118
Total
  Mean               3.27                3.32               2.94                   2.91
  N                  84                  22                 84                     22
  Std. Deviation     .896                .945               .766                   1.065

Table 5-25  Mean ratings for ability and preparation in the ethics competency from Recruit Survey 1 and FTO responses clustered across training delivery methods

The Mann-Whitney U Test was used to test the null hypothesis of no difference in distribution of responses across the class delivery models:

1. No difference in distribution of responses across lecture-based and competency-based delivery models for ability in the ethics competency area for ratings from Recruit Survey 1
2. No difference in distribution of responses across lecture-based and competency-based delivery models for ability in the ethics competency area for ratings from the FTO survey
3. No difference in distribution of responses across lecture-based and competency-based delivery models for preparation in the ethics competency area for ratings from Recruit Survey 1
4.
No difference in distribution of responses across lecture-based and competency-based delivery models for preparation in the ethics competency area for ratings from the FTO survey

Table 5-26 shows the results of this analysis, indicating that, for each of the FTO questions and for the recruits' responses about how prepared they were in this competency area, the null hypothesis is retained: there are no statistically significant differences in how the FTOs rated the recruits or in how the recruits rated their own preparation across delivery methods.  The null hypothesis is rejected, however, in analysis 1, indicating that there is a statistically significant difference between how the recruits in the lecture-based delivery model and those in the competency-based delivery model rated themselves for their ability in the ethical accountability competency area.  The means of the class data indicate that the competency-based recruits rated themselves lower than those from the lecture-based program.  Table 5-27 presents a cross-tabulation report on the recruit ability question to examine the breakdown of responses.
1. Null hypothesis: The distribution of responses from recruits for the Ethical Accountability ability question is the same across lecture-based and competency-based delivery models.  Test: Independent Samples Mann-Whitney U Test.  Sig.: 0.004.  Decision: Reject the null hypothesis.
2. Null hypothesis: The distribution of responses from FTOs for the Ethical Accountability ability question is the same across lecture-based and competency-based delivery models.  Test: Independent Samples Mann-Whitney U Test.  Sig.: 0.262¹.  Decision: Retain the null hypothesis.
3. Null hypothesis: The distribution of responses from recruits for the Ethical Accountability preparedness question is the same across lecture-based and competency-based delivery models.  Test: Independent Samples Mann-Whitney U Test.  Sig.: 0.168.  Decision: Retain the null hypothesis.
4. Null hypothesis: The distribution of responses from FTOs for the Ethical Accountability preparedness question is the same across lecture-based and competency-based delivery models.  Test: Independent Samples Mann-Whitney U Test.  Sig.: 0.292¹.  Decision: Retain the null hypothesis.

Asymptotic significances are displayed.  The significance level is 0.05.  ¹ Exact significance is displayed for this test.

Table 5-26  Mann-Whitney U Test of ability and preparedness in the ethical accountability competency from Recruit Survey 1 and FTO survey, grouped across training type

Cross-tabulation Report
Ethical Accountability ability (recruit)   Lecture   Competency-based   Total
has knowledge                              3         3                  6
act under full supervision                 1         6                  7
act under moderate supervision             6         23                 29
act independently                          25        17                 42
Total                                      35        49                 84

Table 5-27  Cross-tabulation report from Recruit Survey 1 for ability in the ethics competency area

The cross-tabulation report shows that the large majority of the recruits in the competency-based delivery model scored themselves as able to "act under moderate supervision" (n=23), and the next most frequent category was "act independently" (n=17).
This distribution contrasts with the lecture-based delivery model, where recruits most frequently scored themselves as able to “act independently” (n=25), followed by “act under moderate supervision” (n=6). The null hypothesis was therefore rejected because the recruits in the competency-based delivery model more frequently scored their ability in the ethical accountability and responsibility competency area lower than did the recruits in the lecture-based delivery model.

5.2.3.4 Interactive Communication
Figure 5-7 and Figure 5-8 show the mean rankings for ability and preparedness for the interactive communication competency from Recruit Survey 1 and from the FTO survey, respectively. Table 5-28 shows the values represented in each of these figures, including the sample size, mean value, and standard deviation. For the question about ability in the interactive communication competency, recruits in the competency-based delivery model ranked themselves lower than recruits in the lecture-based delivery model, with means of 3.00 and 3.26 respectively. For the question about how well Block I prepared them for Block II with respect to the interactive communication competency, the recruits in the lecture-based model ranked their preparedness lower (2.74) than those in the competency-based model (2.96). The FTOs ranked recruits in the lecture-based model considerably higher in both ability and preparedness for this competency, with means in ability of 3.08 for lecture-based and 2.56 for competency-based, and means in preparedness of 2.92 for lecture-based and 2.44 for competency-based.
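The Mann-Whitney U comparisons used throughout this section can be sketched in a few lines of Python with scipy. The ratings below are hypothetical 1-to-4 ordinal scores invented for illustration, not the survey data reported in this chapter.

```python
from scipy.stats import mannwhitneyu

# Hypothetical 1-to-4 ordinal ability ratings (1 = "has knowledge",
# 4 = "act independently"); the actual survey responses are not reproduced here.
lecture = [4, 4, 3, 4, 3, 4, 2, 4, 3, 4]
competency_based = [3, 3, 2, 3, 4, 3, 3, 2, 3, 3]

# Two-sided test of the null hypothesis that the distribution of
# responses is the same across the two delivery models.
stat, p = mannwhitneyu(lecture, competency_based, alternative="two-sided")

# Decision at the 0.05 significance level used in this chapter.
decision = "reject" if p < 0.05 else "retain"
print(f"U = {stat}, p = {p:.3f}, {decision} the null hypothesis")
```

Because the ratings are ordinal and the two groups are independent, a rank-based test of this kind is a natural choice; with tied ratings, scipy falls back to the asymptotic approximation, which is what the “asymptotic significances” footnotes in the tables refer to.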
Figure 5-7  Mean ratings for ability in the communication competency area from Recruit Survey 1 (blue) and the FTO survey (red), clustered across delivery method

Figure 5-8  Mean ratings for preparation in the communication competency area from Recruit Survey 1 (blue) and the FTO survey (red), clustered across training method

                         Ability     Ability     Preparedness   Preparedness
                         (RS1)       (FTO)       (RS1)          (FTO)
Lecture-based     Mean    3.26        3.08        2.74           2.92
                  N         35          13          35             13
                  SD       .701        .862        .980           .494
Competency-based  Mean    3.00        2.56        2.96           2.44
                  N         49           9          49              9
                  SD       .645       1.130        .406          1.014
Total             Mean    3.11        2.86        2.87           2.73
                  N         84          22          84             22
                  SD       .677        .990        .708           .767
RS1 = Recruit Survey 1; FTO = FTO survey

Table 5-28  Mean ratings for ability and preparation in the communication competency from Recruit Survey 1 and FTO responses, clustered across training delivery methods

The Mann-Whitney U Test was used to test the null hypothesis of no difference in distribution of responses across the class delivery models:
1. No difference in distribution of responses across lecture-based and competency-based delivery models for ability in the interactive communication competency area for ratings from Recruit Survey 1
2. No difference in distribution of responses across lecture-based and competency-based delivery models for ability in the interactive communication competency area for ratings from the FTO survey
3. No difference in distribution of responses across lecture-based and competency-based delivery models for preparation in the interactive communication competency area for ratings from Recruit Survey 1
4. No difference in distribution of responses across lecture-based and competency-based delivery models for preparation in the interactive communication competency area for ratings from the FTO survey

Table 5-29 shows the results of this analysis, indicating that, for each of the questions, the null hypothesis is retained: there are no statistically significant differences in the distribution of responses across the classes.

All tests are independent-samples Mann-Whitney U Tests; each null hypothesis states that the distribution of responses is the same across lecture-based and competency-based delivery models.

1  Recruit responses, Interactive Communication ability question          Sig. 0.052      Retain the null hypothesis
2  FTO responses, Interactive Communication ability question              Sig. 0.292 [1]  Retain the null hypothesis
3  Recruit responses, Interactive Communication preparedness question     Sig. 0.572      Retain the null hypothesis
4  FTO responses, Interactive Communication preparedness question         Sig. 0.357 [1]  Retain the null hypothesis

Asymptotic significances are displayed. The significance level is 0.05.
[1] Exact significance is displayed for this test.

Table 5-29  Mann-Whitney U Test of overall ability and preparedness from Recruit Survey 1 and FTO survey, grouped across training type

5.2.3.5 Organizational Awareness
Figure 5-9 and Figure 5-10 show the mean rankings for ability and preparedness for the organizational awareness competency from Recruit Survey 1 and from the FTO survey, respectively.
Table 5-30 shows the values represented in each of these figures, including the sample size, mean value, and standard deviation. For the question about ability in the organizational awareness competency, recruits in the competency-based delivery model ranked themselves considerably lower than recruits in the lecture-based delivery model, with means of 2.39 and 3.06 respectively. For the question about how well Block I prepared them for Block II with respect to the organizational awareness competency, the recruits in the lecture-based model ranked their preparedness higher (2.63) than those in the competency-based model (2.35). The FTOs ranked recruits in the lecture-based model slightly higher in ability and approximately the same in preparedness for this competency, with means in ability of 2.77 for lecture-based and 2.67 for competency-based, and means in preparedness of 2.46 for lecture-based and 2.44 for competency-based.

Figure 5-9  Mean ratings for ability in the organizational awareness competency area from Recruit Survey 1 (blue) and the FTO survey (red), clustered across delivery method

Figure 5-10  Mean ratings for preparation in the organizational awareness competency area from Recruit Survey 1 (blue) and the FTO survey (red), clustered across training method

                         Ability     Ability     Preparedness   Preparedness
                         (RS1)       (FTO)       (RS1)          (FTO)
Lecture-based     Mean    3.06        2.77        2.63           2.46
                  N         35          13          35             13
                  SD       .873       1.092       1.165           .877
Competency-based  Mean    2.39        2.67        2.35           2.44
                  N         49           9          49              9
                  SD       .759       1.225        .879          1.014
Total             Mean    2.67        2.73        2.46           2.45
                  N         84          22          84             22
                  SD       .869       1.120       1.011           .912
RS1 = Recruit Survey 1; FTO = FTO survey

Table 5-30  Mean ratings for ability and preparation in the organizational awareness competency from Recruit Survey 1 and FTO responses, clustered across training delivery methods

The Mann-Whitney U Test was used to test the null hypothesis of no difference in distribution of responses across the class delivery models:
1. No difference in distribution of responses across lecture-based and competency-based delivery models for ability in the organizational awareness competency area for ratings from Recruit Survey 1
2. No difference in distribution of responses across lecture-based and competency-based delivery models for ability in the organizational awareness competency area for ratings from the FTO survey
3. No difference in distribution of responses across lecture-based and competency-based delivery models for preparation in the organizational awareness competency area for ratings from Recruit Survey 1
4. No difference in distribution of responses across lecture-based and competency-based delivery models for preparation in the organizational awareness competency area for ratings from the FTO survey

Table 5-31 shows the results of this analysis, indicating that, for the two FTO questions, the null hypothesis is retained and there are no statistically significant differences in how the FTOs rated the recruits across delivery methods. The null hypothesis is rejected, however, for both recruit questions, indicating a statistically significant difference between how the recruits in the lecture-based delivery model and those in the competency-based delivery model rated themselves for their ability and preparedness in the organizational awareness competency area. The means of the class data indicate that the competency-based recruits rated themselves lower than those from the lecture-based program.
Table 5-32 presents a cross-tabulation report on the recruit ability and preparedness questions to examine the breakdown of responses.

All tests are independent-samples Mann-Whitney U Tests; each null hypothesis states that the distribution of responses is the same across lecture-based and competency-based delivery models.

1  Recruit responses, Organizational Awareness ability question          Sig. 0.000      Reject the null hypothesis
2  FTO responses, Organizational Awareness ability question              Sig. 0.896 [1]  Retain the null hypothesis
3  Recruit responses, Organizational Awareness preparedness question     Sig. 0.000      Reject the null hypothesis
4  FTO responses, Organizational Awareness preparedness question         Sig. 0.695 [1]  Retain the null hypothesis

Asymptotic significances are displayed. The significance level is 0.05.
[1] Exact significance is displayed for this test.

Table 5-31  Mann-Whitney U Test of ability and preparedness for organizational awareness competency from Recruit Survey 1 and FTO survey, grouped across training type

Organizational Awareness ability (recruit self-rating)

                                  Lecture-based   Competency-based   Total
has knowledge                           3                 8            11
act under full supervision              3                14            17
act under moderate supervision         18                27            45
act independently                      11                 0            11
Total                                  35                49            84

Organizational Awareness preparedness (recruit self-rating)

                                  Lecture-based   Competency-based   Total
N/A                                     5                 3             8
extremely poorly prepared               0                 4             4
poorly prepared                         2                15            17
well prepared                          24                27            51
extremely well prepared                 4                 0             4
Total                                  35                49            84

Table 5-32  Cross-tabulation report from Recruit Survey 1 for ability (top) and preparedness (bottom) in the organizational awareness competency area

The cross-tabulation report for ability shows that the largest group of recruits in the competency-based delivery model scored themselves as able to “act under moderate supervision” (n=27); the next most frequent category was “act under full supervision” (n=14). No recruits in the competency-based model stated that they believed they were able to “act independently” in this competency area. This distribution contrasts with the lecture-based delivery model, where recruits also most frequently scored themselves as able to “act under moderate supervision” (n=18), but the next most frequent category was “act independently” (n=11). The null hypothesis was rejected in the ability category of the organizational awareness competency because the recruits in the competency-based delivery model more frequently scored their ability lower than did the recruits in the lecture-based delivery model.
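Cross-tabulation reports of the kind shown in Table 5-32 can be produced with pandas. The per-recruit records below are hypothetical, invented only to illustrate the layout; the real data come from Recruit Survey 1.

```python
import pandas as pd

# Hypothetical per-recruit records (delivery model and self-rated ability);
# the actual survey responses are not reproduced here.
df = pd.DataFrame({
    "delivery": ["lecture", "lecture", "competency-based", "competency-based",
                 "competency-based", "lecture"],
    "ability":  ["act independently", "act under moderate supervision",
                 "act under moderate supervision", "act under full supervision",
                 "act under moderate supervision", "act independently"],
})

# Counts of each ability rating within each delivery model, with marginal
# totals, in the same layout as the cross-tabulation reports in this chapter.
report = pd.crosstab(df["ability"], df["delivery"],
                     margins=True, margins_name="Total")
print(report)
```

The `margins=True` option adds the “Total” row and column that the reported tables include, so each cell can be read against its group size directly.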
As with the recruits’ perceptions of their ability in organizational awareness, the cross-tabulation report for how well recruits believed their Block I training prepared them for Block II in the organizational awareness competency indicated that recruits in the lecture-based model typically ranked themselves higher than those from the competency-based model. The most frequent category selected by recruits from the lecture-based model was “well prepared” (n=24), followed by N/A (n=5) and “extremely well prepared” (n=4). For the competency-based model, the most frequent category was also “well prepared” (n=27), followed by “poorly prepared” (n=15), “extremely poorly prepared” (n=4), and N/A (n=3). No recruits in the competency-based model selected “extremely well prepared”. While the majority of recruits in both models selected “well prepared”, recruits in the competency-based model also selected “poorly prepared” and “extremely poorly prepared”, resulting in the rejection of the null hypothesis and the observed significantly lower preparedness ratings from recruits in the competency-based model.

5.2.3.6 Problem Solving
Figure 5-11 and Figure 5-12 show the mean rankings for ability and preparation for the problem solving competency from Recruit Survey 1 and from the FTO survey, respectively. Table 5-33 shows the values represented in each of these figures, including the sample size, mean value, and standard deviation. For the question about ability in the problem solving competency, recruits in the competency-based delivery model ranked themselves considerably lower than recruits in the lecture-based delivery model, with means of 2.84 and 3.49 respectively. For the question about how well Block I prepared them for Block II with respect to the problem solving competency, the recruits in the lecture-based model ranked their preparedness slightly lower (2.74) than those in the competency-based model (2.84).
The FTOs ranked recruits in the lecture-based model higher in both ability and preparedness, with means in ability of 3.23 for lecture-based and 2.84 for competency-based, and means in preparedness of 3.00 for lecture-based and 2.56 for competency-based.

Figure 5-11  Mean ratings for ability in the problem solving competency area from Recruit Survey 1 (blue) and the FTO survey (red), clustered across delivery method

Figure 5-12  Mean ratings for preparation in the problem solving competency area from Recruit Survey 1 (blue) and the FTO survey (red), clustered across training method

                         Ability     Ability     Preparedness   Preparedness
                         (RS1)       (FTO)       (RS1)          (FTO)
Lecture-based     Mean    3.49        3.23        2.74           3.00
                  N         35          13          35             13
                  SD       .702        .599       1.039           .408
Competency-based  Mean    2.84        2.89        2.84           2.56
                  N         49           9          49              9
                  SD       .624       1.054        .514          1.014
Total             Mean    3.11        3.09        2.80           2.82
                  N         84          22          84             22
                  SD       .728        .811        .773           .733
RS1 = Recruit Survey 1; FTO = FTO survey

Table 5-33  Mean ratings for ability and preparation in the problem solving competency from Recruit Survey 1 and FTO responses, clustered across training delivery methods

The Mann-Whitney U Test was used to test the null hypothesis of no difference in distribution of responses across the class delivery models:
1. No difference in distribution of responses across lecture-based and competency-based delivery models for ability in the problem solving competency area for ratings from Recruit Survey 1
2. No difference in distribution of responses across lecture-based and competency-based delivery models for ability in the problem solving competency area for ratings from the FTO survey
3. No difference in distribution of responses across lecture-based and competency-based delivery models for preparation in the problem solving competency area for ratings from Recruit Survey 1
4. No difference in distribution of responses across lecture-based and competency-based delivery models for preparation in the problem solving competency area for ratings from the FTO survey

Table 5-34 shows the results of this analysis, indicating that, for the two FTO questions and for the recruit preparedness question, the null hypothesis is retained and there are no statistically significant differences in the ratings across delivery methods. The null hypothesis is rejected, however, for the recruit question about ability, indicating a statistically significant difference between how the recruits in the lecture-based delivery model and those in the competency-based delivery model rated themselves for their ability in the problem solving competency area. The means of the class data indicate that the competency-based recruits rated themselves lower than those from the lecture-based program. Table 5-35 presents a cross-tabulation report on the recruit ability question to examine the breakdown of responses.

All tests are independent-samples Mann-Whitney U Tests; each null hypothesis states that the distribution of responses is the same across lecture-based and competency-based delivery models.

1  Recruit responses, Problem Solving ability question          Sig. 0.000      Reject the null hypothesis
2  FTO responses, Problem Solving ability question              Sig. 0.556 [1]  Retain the null hypothesis
3  Recruit responses, Problem Solving preparedness question     Sig. 0.375      Retain the null hypothesis
4  FTO responses, Problem Solving preparedness question         Sig. 0.431 [1]  Retain the null hypothesis

Asymptotic significances are displayed. The significance level is 0.05.
[1] Exact significance is displayed for this test.

Table 5-34  Mann-Whitney U Test of ability and preparedness for problem solving competency from Recruit Survey 1 and FTO survey, grouped across training type

Problem Solving ability (recruit self-rating)

                                  Lecture-based   Competency-based   Total
has knowledge                           1                 2             3
act under full supervision              1                 8             9
act under moderate supervision         13                35            48
act independently                      20                 4            24
Total                                  35                49            84

Table 5-35  Cross-tabulation report from Recruit Survey 1 for ability in the problem solving competency area

The cross-tabulation report for ability shows that the largest group of recruits in the competency-based delivery model scored themselves as able to “act under moderate supervision” (n=35); the next most frequent category was “act under full supervision” (n=8). Four recruits from the competency-based model scored themselves as able to “act independently” in this competency area.
This distribution contrasts with the lecture-based delivery model, where recruits most frequently scored themselves as able to “act independently” (n=20), followed by “act under moderate supervision” (n=13). The null hypothesis was therefore rejected in the ability category of the problem solving competency because the recruits in the competency-based delivery model more frequently scored their ability lower than did the recruits in the lecture-based delivery model.

5.2.3.7 Risk Management
Figure 5-13 and Figure 5-14 show the mean rankings for ability and preparation for the risk management competency from Recruit Survey 1 and from the FTO survey, respectively. Table 5-36 shows the values represented in each of these figures, including the sample size, mean value, and standard deviation. For the question about ability in the risk management competency, recruits in the competency-based delivery model ranked themselves lower than recruits in the lecture-based delivery model, with means of 2.75 and 3.17 respectively. For the question about how well Block I prepared them for Block II with respect to the risk management competency, the recruits in the lecture-based model ranked their preparedness slightly lower (2.89) than those in the competency-based model (2.96). The FTOs ranked recruits in the lecture-based model approximately the same as those in the competency-based model in ability (means of 2.46 and 2.44 respectively) and ranked recruits in the lecture-based model as less prepared than recruits from the competency-based model (means of 2.46 and 2.67 respectively).
Figure 5-13  Mean ratings for ability in the risk management competency area from Recruit Survey 1 (blue) and the FTO survey (red), clustered across delivery method

Figure 5-14  Mean ratings for preparation in the risk management competency area from Recruit Survey 1 (blue) and the FTO survey (red), clustered across training method

                         Ability     Ability     Preparedness   Preparedness
                         (RS1)       (FTO)       (RS1)          (FTO)
Lecture-based     Mean    3.17        2.46        2.89           2.46
                  N         35          13          35             13
                  SD       .785        .967        .796           .877
Competency-based  Mean    2.75        2.44        2.96           2.67
                  N         48           9          49              9
                  SD       .601        .882        .576           .707
Total             Mean    2.93        2.45        2.93           2.55
                  N         83          22          84             22
                  SD       .712        .912        .673           .800
RS1 = Recruit Survey 1; FTO = FTO survey

Table 5-36  Mean ratings for ability and preparation in the risk management competency from Recruit Survey 1 and FTO responses, clustered across training delivery methods

The Mann-Whitney U Test was used to test the null hypothesis of no difference in distribution of responses across the class delivery models:
1. No difference in distribution of responses across lecture-based and competency-based delivery models for ability in the risk management competency area for ratings from Recruit Survey 1
2. No difference in distribution of responses across lecture-based and competency-based delivery models for ability in the risk management competency area for ratings from the FTO survey
3. No difference in distribution of responses across lecture-based and competency-based delivery models for preparation in the risk management competency area for ratings from Recruit Survey 1
4. No difference in distribution of responses across lecture-based and competency-based delivery models for preparation in the risk management competency area for ratings from the FTO survey

Table 5-37 shows the results of this analysis, indicating that, for the two FTO questions and for the recruit preparedness question, the null hypothesis is retained and there are no statistically significant differences in the ratings across delivery methods. The null hypothesis is rejected, however, for the recruit question about ability, indicating a statistically significant difference between how the recruits in the lecture-based delivery model and those in the competency-based delivery model rated themselves for their ability in the risk management competency area. The means of the class data indicate that the competency-based recruits rated themselves lower than those from the lecture-based program. Table 5-38 presents a cross-tabulation report on the recruit ability question to examine the breakdown of responses.

All tests are independent-samples Mann-Whitney U Tests; each null hypothesis states that the distribution of responses is the same across lecture-based and competency-based delivery models.

1  Recruit responses, Risk Management ability question          Sig. 0.004      Reject the null hypothesis
2  FTO responses, Risk Management ability question              Sig. 0.845 [1]  Retain the null hypothesis
3  Recruit responses, Risk Management preparedness question     Sig. 0.972      Retain the null hypothesis
4  FTO responses, Risk Management preparedness question         Sig. 0.556 [1]  Retain the null hypothesis

Asymptotic significances are displayed. The significance level is 0.05.
[1] Exact significance is displayed for this test.

Table 5-37  Mann-Whitney U Test of ability and preparedness for risk management competency from Recruit Survey 1 and FTO survey, grouped across training type

Risk Management ability (recruit self-rating)

                                  Lecture-based   Competency-based   Total
has knowledge                           1                 3             4
act under full supervision              5                 7            12
act under moderate supervision         16                37            53
act independently                      13                 1            14
Total                                  35                48            83

Table 5-38  Cross-tabulation report from Recruit Survey 1 for ability in the risk management competency area

The cross-tabulation report for ability shows that the largest group of recruits in the competency-based delivery model scored themselves as able to “act under moderate supervision” (n=37); the next most frequent category was “act under full supervision” (n=7).
Only one recruit from the competency-based model scored themselves as able to “act independently”, and three recruits scored themselves as “has knowledge”, in this competency area. Similarly, the most frequent scoring in the lecture-based delivery model was “act under moderate supervision” (n=16), but this was followed by “act independently” (n=13), “act under full supervision” (n=5), and “has knowledge” (n=1). The distribution of scoring of the recruits from the lecture-based model was again weighted towards the more independent end of the ability scale. The null hypothesis was therefore rejected in the ability category of the risk management competency because the recruits in the competency-based delivery model more frequently scored their ability lower than did the recruits in the lecture-based delivery model.

5.2.3.8 Stress Tolerance
Figure 5-15 and Figure 5-16 show the mean rankings for ability and preparation for the stress tolerance competency from Recruit Survey 1 and from the FTO survey, respectively. Table 5-39 shows the values represented in each of these figures, including the sample size, mean value, and standard deviation. For the question about ability in the stress tolerance competency, recruits in the competency-based delivery model ranked themselves considerably lower than recruits in the lecture-based delivery model, with means of 2.86 and 3.43 respectively. For the question about how well Block I prepared them for Block II with respect to the stress tolerance competency, the recruits in the lecture-based model ranked their preparedness (2.94) approximately equal to those in the competency-based model (2.96). The FTOs ranked recruits in the lecture-based model approximately the same as those in the competency-based model in both ability and preparedness, with means of 2.77 for lecture-based and 2.78 for competency-based in both categories.
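Group summaries of the kind reported in Table 5-39 (N, mean, and standard deviation for each delivery model) can be computed with pandas. The ratings below are hypothetical 1-to-4 scores used only to illustrate the computation, not the survey data.

```python
import pandas as pd

# Hypothetical 1-to-4 ratings per recruit; the published tables report
# N, mean, and standard deviation per delivery model in the same way.
df = pd.DataFrame({
    "delivery": ["lecture"] * 4 + ["competency-based"] * 5,
    "rating":   [4, 3, 3, 4, 3, 3, 2, 3, 3],
})

# Count, mean, and sample standard deviation (ddof=1, as SPSS reports) per group.
summary = df.groupby("delivery")["rating"].agg(["count", "mean", "std"])
print(summary.round(3))
```

Pandas uses the sample standard deviation (ddof=1) by default, which matches how statistical packages such as SPSS report the values in these tables.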
Figure 5-15  Mean ratings for ability in the stress tolerance competency area from Recruit Survey 1 (blue) and the FTO survey (red), clustered across delivery method

Figure 5-16  Mean ratings for preparation in the stress tolerance competency area from Recruit Survey 1 (blue) and the FTO survey (red), clustered across training method

                         Ability     Ability     Preparedness   Preparedness
                         (RS1)       (FTO)       (RS1)          (FTO)
Lecture-based     Mean    3.43        2.77        2.94           2.77
                  N         35          13          35             13
                  SD       .739       1.013        .838           .599
Competency-based  Mean    2.86        2.78        2.96           2.78
                  N         49           9          49              9
                  SD       .677        .833        .644           .441
Total             Mean    3.10        2.77        2.95           2.77
                  N         84          22          84             22
                  SD       .754        .922        .727           .528
RS1 = Recruit Survey 1; FTO = FTO survey

Table 5-39  Mean ratings for ability and preparation in the stress tolerance competency from Recruit Survey 1 and FTO responses, clustered across training delivery methods

The Mann-Whitney U Test was used to test the null hypothesis of no difference in distribution of responses across the class delivery models:
1. No difference in distribution of responses across lecture-based and competency-based delivery models for ability in the stress tolerance competency area for ratings from Recruit Survey 1
2. No difference in distribution of responses across lecture-based and competency-based delivery models for ability in the stress tolerance competency area for ratings from the FTO survey
3. No difference in distribution of responses across lecture-based and competency-based delivery models for preparation in the stress tolerance competency area for ratings from Recruit Survey 1
4. No difference in distribution of responses across lecture-based and competency-based delivery models for preparation in the stress tolerance competency area for ratings from the FTO survey

Table 5-40 shows the results of this analysis, indicating that, for the two FTO questions and for the recruit preparedness question, the null hypothesis is retained and there are no statistically significant differences in the ratings across delivery methods. The null hypothesis is rejected, however, for the recruit question about ability, indicating a statistically significant difference between how the recruits in the lecture-based delivery model and those in the competency-based delivery model rated themselves for their ability in the stress tolerance competency area. The means of the class data indicate that the competency-based recruits rated themselves lower than those from the lecture-based program. Table 5-41 presents a cross-tabulation report on the recruit ability question to examine the breakdown of responses.

All tests are independent-samples Mann-Whitney U Tests; each null hypothesis states that the distribution of responses is the same across lecture-based and competency-based delivery models.

1  Recruit responses, Stress Tolerance ability question          Sig. 0.000      Reject the null hypothesis
2  FTO responses, Stress Tolerance ability question              Sig. 0.896 [1]  Retain the null hypothesis
3  Recruit responses, Stress Tolerance preparedness question     Sig. 0.727      Retain the null hypothesis
4  FTO responses, Stress Tolerance preparedness question         Sig. 0.845 [1]  Retain the null hypothesis

Asymptotic significances are displayed. The significance level is 0.05.
[1] Exact significance is displayed for this test.

Table 5-40  Mann-Whitney U Test of ability and preparedness for stress tolerance competency from Recruit Survey 1 and FTO survey, grouped across training type

Stress Tolerance ability (recruit self-rating)

                                  Lecture-based   Competency-based   Total
has knowledge                           1                 2             3
act under full supervision              2                 9            11
act under moderate supervision         13                32            45
act independently                      19                 6            25
Total                                  35                49            84

Table 5-41  Cross-tabulation report from Recruit Survey 1 for ability in the stress tolerance competency area

The cross-tabulation report for ability shows that the largest group of recruits in the competency-based delivery model scored themselves as able to “act under moderate supervision” (n=32), followed by “act under full supervision” (n=9) and “act independently” (n=6).
The most frequent scoring in the lecture-based delivery model was “act independently” (n=19), followed by “act under moderate supervision” (n=13) and “act under full supervision” (n=2). The distribution of scoring of the recruits from the lecture-based model was again weighted towards the more independent end of the ability scale. The null hypothesis was therefore rejected in the ability category of the stress tolerance competency because the recruits in the competency-based delivery model more frequently scored their ability lower than did the recruits in the lecture-based delivery model.

5.2.3.9 Teamwork
Figure 5-17 and Figure 5-18 show the mean rankings for ability and preparation for the teamwork competency from Recruit Survey 1 and from the FTO survey, respectively. Table 5-42 shows the values represented in each of these figures, including the sample size, mean value, and standard deviation. For the question about ability in the teamwork competency, recruits in the competency-based delivery model ranked themselves considerably lower than recruits in the lecture-based delivery model, with means of 3.12 and 3.63 respectively, although this was still a very high ranking from both classes. For the question about how well Block I prepared them for Block II with respect to the teamwork competency, the recruits in the lecture-based model ranked their preparedness slightly lower (3.00) than those in the competency-based model (3.16). The FTOs ranked recruits in the lecture-based model higher than those in the competency-based model in both ability and preparedness, with means in ability of 3.38 for lecture-based and 3.00 for competency-based, and means in preparedness of 3.15 for lecture-based and 2.67 for competency-based.
Figure 5-17  Mean ratings for ability in the teamwork competency area from Recruit Survey 1 (blue) and the FTO survey (red) clustered across delivery method

Figure 5-18  Mean ratings for preparation in the teamwork competency area from Recruit Survey 1 (blue) and the FTO survey (red) clustered across training method

Teamwork competency area

                               Ability –           Ability –    Preparedness –      Preparedness –
                               Recruit Survey 1    FTO survey   Recruit Survey 1    FTO survey
Lecture-based     Mean              3.63              3.38           3.00               3.15
                  N                   35                13             35                 13
                  Std. Deviation    .731              .650          1.029               .376
Competency-based  Mean              3.12              3.00           3.16               2.67
                  N                   49                 9             49                  9
                  Std. Deviation    .696             1.118           .657              1.000
Total             Mean              3.33              3.23           3.10               2.95
                  N                   84                22             84                 22
                  Std. Deviation    .750              .869           .830               .722

Table 5-42  Mean ratings for ability and preparation in the teamwork competency from Recruit Survey 1 and FTO responses clustered across training delivery methods

The Mann-Whitney U Test was used to test the null hypothesis of no difference in distribution of responses across the class delivery models:
1. No difference in distribution of responses across lecture-based and competency-based delivery models for ability in the teamwork competency area for ratings from Recruit Survey 1
2. No difference in distribution of responses across lecture-based and competency-based delivery models for ability in the teamwork competency area for ratings from the FTO survey
3. No difference in distribution of responses across lecture-based and competency-based delivery models for preparation in the teamwork competency area for ratings from Recruit Survey 1
4.
No difference in distribution of responses across lecture-based and competency-based delivery models for preparation in the teamwork competency area for ratings from the FTO survey

Table 5-43 shows the results of this analysis, indicating that, for the two FTO questions and for the recruit preparedness question, the null hypothesis is retained and there are no statistically significant differences in the ratings across delivery methods.  The null hypothesis is rejected, however, for the recruit question about ability, indicating that there is a statistically significant difference between how the recruits in the lecture-based delivery model and those in the competency-based delivery model rated themselves for their ability in the teamwork competency area.  The means of the class data indicate that the competency-based recruits rated themselves lower than those from the lecture-based program.  Table 5-44 shows the results of a cross-tabulation report on the recruit ability question to examine the breakdown of responses.
Decision

1. The distribution of responses from recruits for the Teamwork ability question is the same across lecture-based and competency-based delivery models.
   Independent-samples Mann-Whitney U Test; Sig. 0.000; Decision: Reject the null hypothesis.
2. The distribution of responses from FTOs for the Teamwork ability question is the same across lecture-based and competency-based delivery models.
   Independent-samples Mann-Whitney U Test; Sig. 0.556¹; Decision: Retain the null hypothesis.
3. The distribution of responses from recruits for the Teamwork preparedness question is the same across lecture-based and competency-based delivery models.
   Independent-samples Mann-Whitney U Test; Sig. 0.818; Decision: Retain the null hypothesis.
4. The distribution of responses from FTOs for the Teamwork preparedness question is the same across lecture-based and competency-based delivery models.
   Independent-samples Mann-Whitney U Test; Sig. 0.357¹; Decision: Retain the null hypothesis.

Asymptotic significances are displayed.  The significance level is 0.05.
¹ Exact significance is displayed for this test.

Table 5-43  Mann-Whitney U Test of ability and preparedness for teamwork competency from Recruit Survey 1 and FTO survey, grouped across training type

Cross-tabulation Report

Teamwork ability – recruit             Lecture-based   Competency-based   Total
has knowledge                                1                 2             3
act under full supervision                   1                 3             4
act under moderate supervision               9                31            40
act independently                           23                13            36
act as a supervisor or instructor            1                 0             1
Total                                       35                49            84

Table 5-44  Cross-tabulation report from Recruit Survey 1 for ability in the teamwork competency area

The cross-tabulation report for ability shows that the large majority of the recruits in the competency-based delivery model scored themselves as able to “act under moderate supervision” (n=31); the next most frequent category was “act independently” (n=13), followed by “act under full supervision” (n=3) and “has knowledge” (n=2).
The most frequent scoring in the lecture-based delivery model was “act independently” (n=23), followed by “act under moderate supervision” (n=9), with one recruit each scoring themselves as able to “act as a supervisor or instructor”, “act under full supervision”, and “has knowledge”.  The distribution of scoring of the recruits from the lecture-based model was again weighted towards the more independent end of the ability scale and included one recruit who indicated they were able to act as a supervisor or instructor in this competency area.  The null hypothesis was therefore rejected in the ability category of the teamwork competency because the recruits in the competency-based delivery model more frequently scored their ability lower than did the recruits in the lecture-based delivery model.

5.2.3.10 Written Skills
Figure 5-19 and Figure 5-20 show the mean rankings for ability and preparation for the written skills competency from Recruit Survey 1 and from the FTO survey respectively.  Table 5-45 shows the values represented in each of these figures, including the sample size, mean value, and standard deviation.  For the question about ability in the written skills competency, recruits in the competency-based delivery model ranked themselves lower than recruits in the lecture-based delivery model, with means of 2.73 and 3.03 respectively.  For the question about how well Block I prepared them for Block II with respect to the written skills competency, the recruits in the competency-based model again ranked their preparedness lower (2.61) than those in the lecture-based model did (2.94).  The FTOs ranked recruits in the lecture-based model higher than those in the competency-based model in both ability and preparedness, with means in ability of 3.00 for lecture-based and 2.56 for competency-based, and means in preparedness of 2.85 for lecture-based and 2.33 for competency-based.
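The cross-tabulation reports above amount to counting responses within each (rating level, delivery model) cell.  As a minimal sketch of that counting, using hypothetical responses rather than the study data:

```python
from collections import Counter

# Hypothetical (rating, delivery model) response pairs (not the study data)
responses = [
    ("act independently", "lecture-based"),
    ("act under moderate supervision", "competency-based"),
    ("act under moderate supervision", "competency-based"),
    ("act independently", "lecture-based"),
    ("act under full supervision", "competency-based"),
]

counts = Counter(responses)  # Counter returns 0 for empty cells
levels = ["has knowledge", "act under full supervision",
          "act under moderate supervision", "act independently"]

for level in levels:
    lec = counts[(level, "lecture-based")]
    comp = counts[(level, "competency-based")]
    print(f"{level}: lecture={lec}, competency={comp}, total={lec + comp}")
```

Each printed row corresponds to one row of a cross-tabulation report such as Table 5-41 or Table 5-44.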
Figure 5-19  Mean ratings for ability in the written skills competency area from Recruit Survey 1 (blue) and the FTO survey (red) clustered across delivery method

Figure 5-20  Mean ratings for preparation in the written skills competency area from Recruit Survey 1 (blue) and the FTO survey (red) clustered across training method

Written Skills competency area

                               Ability –           Ability –    Preparedness –      Preparedness –
                               Recruit Survey 1    FTO survey   Recruit Survey 1    FTO survey
Lecture-based     Mean              3.03              3.00           2.94               2.85
                  N                   35                13             35                 13
                  Std. Deviation    .747              .816           .416               .801
Competency-based  Mean              2.73              2.56           2.61               2.33
                  N                   49                 9             49                  9
                  Std. Deviation    .670             1.236           .571              1.118
Total             Mean              2.86              2.82           2.75               2.64
                  N                   84                22             84                 22
                  Std. Deviation    .714             1.006           .535               .953

Table 5-45  Mean ratings for ability and preparation in the written skills competency from Recruit Survey 1 and FTO responses clustered across training delivery methods

The Mann-Whitney U Test was used to test the null hypothesis of no difference in distribution of responses across the class delivery models:
1. No difference in distribution of responses across lecture-based and competency-based delivery models for ability in the written skills competency area for ratings from Recruit Survey 1
2. No difference in distribution of responses across lecture-based and competency-based delivery models for ability in the written skills competency area for ratings from the FTO survey
3. No difference in distribution of responses across lecture-based and competency-based delivery models for preparation in the written skills competency area for ratings from Recruit Survey 1
4.
No difference in distribution of responses across lecture-based and competency-based delivery models for preparation in the written skills competency area for ratings from the FTO survey

Table 5-46 shows the results of this analysis, indicating that, for the two FTO questions and for the recruit ability question, the null hypothesis is retained and there are no statistically significant differences in the ratings across delivery methods.  The null hypothesis is rejected, however, for the recruit question about preparedness, indicating that there is a statistically significant difference between how the recruits in the lecture-based delivery model and those in the competency-based delivery model rated their preparedness in the written skills competency area.  The means of the class data indicate that the competency-based recruits rated themselves lower than those from the lecture-based program.  Table 5-47 shows the results of a cross-tabulation report on the recruit preparedness question to examine the breakdown of responses.
Decision

1. The distribution of responses from recruits for the Written Skills ability question is the same across lecture-based and competency-based delivery models.
   Independent-samples Mann-Whitney U Test; Sig. 0.060; Decision: Retain the null hypothesis.
2. The distribution of responses from FTOs for the Written Skills ability question is the same across lecture-based and competency-based delivery models.
   Independent-samples Mann-Whitney U Test; Sig. 0.431¹; Decision: Retain the null hypothesis.
3. The distribution of responses from recruits for the Written Skills preparedness question is the same across lecture-based and competency-based delivery models.
   Independent-samples Mann-Whitney U Test; Sig. 0.005; Decision: Reject the null hypothesis.
4. The distribution of responses from FTOs for the Written Skills preparedness question is the same across lecture-based and competency-based delivery models.
   Independent-samples Mann-Whitney U Test; Sig. 0.357¹; Decision: Retain the null hypothesis.

Asymptotic significances are displayed.  The significance level is 0.05.
¹ Exact significance is displayed for this test.

Table 5-46  Mann-Whitney U Test of ability and preparedness for written skills competency from Recruit Survey 1 and FTO survey, grouped across training type

Cross-tabulation Report

Written Skills preparedness – recruit   Lecture-based   Competency-based   Total
extremely poorly prepared                     0                 1             1
poorly prepared                               4                18            22
well prepared                                29                29            58
extremely well prepared                       2                 1             3
Total                                        35                49            84

Table 5-47  Cross-tabulation report from Recruit Survey 1 for preparedness in the written skills competency area

The cross-tabulation report for preparedness shows that the large majority of the recruits in both delivery models rated themselves as “well prepared” in the written skills competency (n=29 in both classes).  Recruits in the competency-based delivery model also scored themselves as “poorly prepared” (n=18), with one recruit each scoring themselves as “extremely well prepared” and “extremely poorly prepared”.
In the lecture-based delivery model, in addition to the “well prepared” rankings, recruits also scored themselves as “poorly prepared” (n=4) and “extremely well prepared” (n=2).  No recruits in the lecture-based model ranked themselves as “extremely poorly prepared”.  The null hypothesis was therefore rejected in the preparedness category of the written skills competency because the recruits in the competency-based delivery model more frequently scored their preparedness lower than did the recruits in the lecture-based delivery model.

5.2.4 Analysis of Recruit Responses Compared with FTO Responses
The analysis of the recruit responses across the competency areas indicated a general trend in which the recruits in the lecture-based training rated their ability higher than the recruits in the competency-based training.  To explore this observation, recruit responses to Recruit Survey 1 were compared with FTO responses for the global ratings and for each of the competency areas to determine whether there were any significant differences between recruit and FTO ratings.  This analysis was completed separately for the lecture-based and competency-based delivery models.

5.2.4.1 Recruit and FTO Responses – Lecture-based delivery model
Table 5-48 shows the results of the Mann-Whitney U Test used to determine whether any significant difference was observed between recruit and FTO rankings on the ability and preparedness questions, both globally and for each of the competencies.  In each case, the null hypothesis tested was that the distribution was the same between recruit and FTO responses for that particular question.  In all cases except ability in the risk management competency and ability in the stress tolerance competency, the null hypothesis was retained.  In these two competencies, a significant difference was observed between how the recruits rated themselves and how their FTOs rated them.
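The Mann-Whitney U statistic underlying these comparisons is computed from the average ranks of the two pooled samples.  As a minimal pure-Python sketch, using hypothetical ratings rather than the study data (the dissertation's significance values came from statistical software, which additionally applies a normal approximation or exact tables to obtain p-values):

```python
def average_ranks(values):
    """Assign 1-based average ranks to a combined sample, handling ties."""
    sorted_pairs = sorted(enumerate(values), key=lambda p: p[1])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(sorted_pairs):
        j = i
        # Extend j over the run of tied values starting at i
        while j + 1 < len(sorted_pairs) and sorted_pairs[j + 1][1] == sorted_pairs[i][1]:
            j += 1
        avg = (i + j) / 2 + 1  # average of the tied positions, 1-based
        for k in range(i, j + 1):
            ranks[sorted_pairs[k][0]] = avg
        i = j + 1
    return ranks

def mann_whitney_u(group1, group2):
    """U statistic for two independent ordinal samples (smaller of U1, U2)."""
    combined = list(group1) + list(group2)
    ranks = average_ranks(combined)
    r1 = sum(ranks[:len(group1)])                     # rank sum of group 1
    u1 = r1 - len(group1) * (len(group1) + 1) / 2
    u2 = len(group1) * len(group2) - u1
    return min(u1, u2)

# Hypothetical 4-point ability ratings (1 = has knowledge ... 4 = act independently)
lecture = [4, 4, 3, 4, 3, 2]
competency = [3, 2, 3, 2, 3, 1]
print(mann_whitney_u(lecture, competency))  # → 7.0
```

A small U relative to the product of the sample sizes indicates that one group's ratings tend to rank below the other's, which is the pattern reported for the rejected null hypotheses in this section.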
Table 5-49 shows a cross-tabulation analysis for these two areas to determine the source of the difference.  In both of these competency areas, large numbers of recruits rated their ability as able to “act independently” (n=13, or 37%, for risk management and n=19, or 54%, for stress tolerance), compared to the FTO ratings, where one FTO (8%) rated their recruit as able to “act independently” in risk management and three FTOs (25%) rated their recruits as able to “act independently” in stress tolerance.  In the lecture-based delivery model, it appears that recruits were over-estimating their ability in the risk management and stress tolerance competencies when compared to their FTO ratings.

For each row below, the null hypothesis is that, for the lecture-based delivery model, the distribution of responses for the given question is the same across recruit and FTO responses (independent-samples Mann-Whitney U Test).

Question                                    Sig.     Decision
Global ability                              0.656    Retain the null hypothesis
Global preparedness                         0.861¹   Retain the null hypothesis
Adaptability ability                        0.132    Retain the null hypothesis
Adaptability preparedness                   0.285    Retain the null hypothesis
Ethical Accountability ability              0.952    Retain the null hypothesis
Ethical Accountability preparedness         0.267    Retain the null hypothesis
Interactive Communication ability           0.614    Retain the null hypothesis
Interactive Communication preparedness      0.904    Retain the null hypothesis
Organizational Awareness ability            0.426    Retain the null hypothesis
Organizational Awareness preparedness       0.143    Retain the null hypothesis
Problem Solving ability                     0.179    Retain the null hypothesis
Problem Solving preparedness                0.422    Retain the null hypothesis
Risk Management ability                     0.038    Reject the null hypothesis
Risk Management preparedness                0.058    Retain the null hypothesis
Stress Tolerance ability                    0.049    Reject the null hypothesis
Stress Tolerance preparedness               0.355    Retain the null hypothesis
Teamwork ability                            0.245    Retain the null hypothesis
Teamwork preparedness                       0.903    Retain the null hypothesis
Written Skills ability                      0.968    Retain the null hypothesis
Written Skills preparedness                 0.816    Retain the null hypothesis

Asymptotic
significances are displayed.  The significance level is 0.05.
¹ Exact significance is displayed for this test.

Table 5-48  Mann-Whitney U Test results for the lecture-based delivery model for ability and preparedness overall and for each of the competencies, grouped across recruit/FTO responses

Cross-tabulation Analysis

Risk Management – ability          Recruit   FTO   Total
has knowledge                         1       3      4
act under full supervision            5       1      6
act under moderate supervision       16       7     23
act independently                    13       1     14
Total                                35      12     47

Stress Tolerance – ability         Recruit   FTO   Total
has knowledge                         1       2      3
act under full supervision            2       1      3
act under moderate supervision       13       6     19
act independently                    19       3     22
Total                                35      12     47

Table 5-49  Cross-tabulation analysis of recruit ability in the risk management (top) and stress tolerance (bottom) competency areas, grouped by recruit/FTO responses

5.2.4.2 Recruit and FTO Responses – Competency-based delivery model
Table 5-50 shows the results of the Mann-Whitney U Test used to determine whether any significant difference was observed between recruit and FTO rankings on the ability and preparedness questions, both globally and for each of the competencies.  In each case, the null hypothesis tested was that the distribution was the same between recruit and FTO responses for that particular question.  In all cases except preparedness in the adaptability and interactive communication competencies, the null hypothesis was retained.  In these two competencies, a significant difference was observed between how prepared the recruits rated themselves and how prepared the FTOs rated them.  Table 5-51 shows a cross-tabulation analysis for these two areas to determine the source of the difference.  In each of the categories, one of the FTOs indicated that the Block I training was N/A for preparedness in these two competency areas.
In the adaptability competency area, one recruit indicated that they were “extremely well prepared”, and three recruits indicated the same for the interactive communication competency area.  The majority of both recruits and FTOs indicated that the recruits were “well prepared” in each of these two competency areas.  The small sample size for the FTOs, combined with the decision of one FTO to indicate N/A, appears to be the source of the significant differences seen between recruit and FTO responses.

For each row below, the null hypothesis is that, for the competency-based delivery model, the distribution of responses for the given question is the same across recruit and FTO responses (independent-samples Mann-Whitney U Test).

Question                                    Sig.     Decision
Global ability                              0.837    Retain the null hypothesis
Global preparedness                         0.102    Retain the null hypothesis
Adaptability ability                        0.806    Retain the null hypothesis
Adaptability preparedness                   0.030    Reject the null hypothesis
Ethical Accountability ability              0.845    Retain the null hypothesis
Ethical Accountability preparedness         0.722    Retain the null hypothesis
Interactive Communication ability           0.226    Retain the null hypothesis
Interactive Communication preparedness      0.049    Reject the null hypothesis
Organizational Awareness ability            0.408    Retain the null hypothesis
Organizational Awareness preparedness       0.578    Retain the null hypothesis
Problem Solving ability                     0.684    Retain the null hypothesis
Problem Solving preparedness                0.389    Retain the null hypothesis
Risk Management ability                     0.131    Retain the null hypothesis
Risk Management preparedness                0.137    Retain the null hypothesis
Stress Tolerance ability                    0.598    Retain the null hypothesis
Stress Tolerance preparedness               0.144    Retain the null hypothesis
Teamwork ability                            0.971    Retain the null hypothesis
Teamwork preparedness                       0.072    Retain the null hypothesis
Written Skills ability                      0.602    Retain the null hypothesis
Written Skills preparedness                 0.519    Retain the null hypothesis

Asymptotic significances are displayed.
The significance level is 0.05.

Table 5-50  Mann-Whitney U Test results for the competency-based delivery model for ability and preparedness overall and for each of the competencies, grouped across recruit/FTO responses

Cross-tabulation Report

Adaptability – preparedness                Recruit   FTO   Total
N/A                                           0       1      1
poorly prepared                               4       2      6
well prepared                                44       6     50
extremely well prepared                       1       0      1
Total                                        49       9     58

Interactive Communication – preparedness   Recruit   FTO   Total
N/A                                           0       1      1
poorly prepared                               5       2      7
well prepared                                41       6     47
extremely well prepared                       3       0      3
Total                                        49       9     58

Table 5-51  Cross-tabulation analysis of recruit preparedness in the adaptability (top) and interactive communication (bottom) competency areas, grouped by recruit/FTO responses

5.2.5 Analysis of Assessor Responses
The assessors for exams in the competency-based delivery model were also asked about their impressions of the ability and preparedness of Block I recruits.  Although this group did not work with the lecture-based group in the same capacity, they did have exposure to previous incoming recruits in their roles as assessors for the Assessment Centre.  They would be familiar with the level of incoming recruits prior to training in the lecture-based delivery model.  Because of the high stakes of the Assessment Centre, this group of current and retired police officers is v