THE PREDICTIVE VALIDITY OF AN OPERATIONAL ASSESSMENT CENTRE

by

CHERYL ANNE GALE
B.A., The University of British Columbia, 1978

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE (BUSINESS ADMINISTRATION) in THE FACULTY OF GRADUATE STUDIES (Department of Commerce and Business Administration)

We accept this thesis as conforming to the required standard.

THE UNIVERSITY OF BRITISH COLUMBIA
April 1983
© Cheryl Anne Gale, 1983

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Commerce and Business Administration
The University of British Columbia
1956 Main Mall
Vancouver, Canada V6T 1Y3

Date: April 27, 1983

ABSTRACT

In recent years, assessment centres have become an increasingly popular approach to selecting and promoting police managers, yet few empirical studies have been conducted on the validity of these assessment centres. The intent of this investigation was to conduct one of the first criterion-related validity studies of an assessment centre in law enforcement.

Findings indicate little predictive validity of the overall assessment centre rating (OAR) against either the performance appraisal measure (r = .0784, n.s.) or the potentiality measure (r = -.0083, n.s.). A comparison of pre- vs. post-assessment centre recruits shows that recruits hired after implementation of the centre are rated higher in potentiality and perform better at the academy than those hired prior to the centre; there is no difference in performance appraisal measures. The study also revealed, through multiple regression, that the assessment centre dimensions of stress tolerance, interpersonal sensitivity, interpersonal tolerance, integrity, practical intelligence, problem confrontation, initiative, personal impact, and fact finding were the major determinants of overall assessment ratings. Factor analysis suggests that the assessment centre ratings actually reflect only three underlying factors: overall activity and general effectiveness, interpersonal effectiveness, and probity.

Results are presented with the appropriate reservations regarding methodological weaknesses. The study should not be seen as a case study, but rather as an additional source of assessment centre information of particular interest to law enforcement agencies, and also relevant to assessment centres in general.
TABLE OF CONTENTS

Abstract
Table of Contents
List of Tables
List of Figures
Introduction
    Definition of Assessment Centre
    History of Assessment Centres
    Assessment Centre Standards
    Assessment Centre Reliability
    Assessment Centre Validity
    Assessment Centres and the Police
Method
    Introduction
    Subjects
    Predictors
    Criteria
    Data Analysis
Results
    OAR vs. the Criteria
    Experimental vs. Control Group
    Assessment Centre Dimensions
Discussion
    Predictive Validity of the OAR
    Pre vs. Post Assessment Police Candidates
    OAR and Assessment Centre Dimensions
    Factor Analysis of Assessment Centre Ratings
Conclusion
Bibliography
Appendices
    Appendix 1 - Description of the Assessment Centre Operation
    Appendix 2 - Definition of the Assessment Centre Dimensions
    Appendix 3 - Description of Police Academy Courses
    Appendix 4 - Progress Report Rating Form
    Appendix 5 - Revised Progress Report Rating Form

LIST OF TABLES

Table I - Revised Assessment Rating Scale
Table II - Ranges, Means and Standard Deviations for All Research Variables
Table III - Correlation Matrix of Academy Grades and Overall Assessment Ratings
Table IV - Multiple Regression Function for Predicting Overall Assessment Rating with Academy Grades
Table V - Correlation Matrix for the Assessment Centre Dimensions and the Overall Assessment Rating
Table VI - Correlation Matrix for the Assessment Centre Dimensions, Performance Appraisal Measure and Potentiality Measure
Table VII - Multiple Regression Function for Predicting Performance Appraisal Measure
Table VIII - Multiple Regression Function for Predicting Potentiality Measure
Table IX - Multiple Regression Function for Predicting Overall Assessment Rating
Table X - Rank Ordering of Assessment Centre Dimensions as Determined by Multiple Regression and Job Analysis
Table XI - Rotated Factor Solution of Assessment Centre Ratings

LIST OF FIGURES

Figure 1 - Breakdown of Assessment Centre Sample into Departments
Figure 2 - Mean Performance Appraisal Measure as a Function of Overall Assessment Rating
Figure 3 - Proportion of Above Average Candidates at Each Level of Overall Assessment Rating
Figure 4 - Percentage of Candidates at Each Level of Overall Assessment Rating

INTRODUCTION

Organizations have long been concerned with identifying and selecting employee talent. Historically, efforts to make selection decisions demonstrate management's willingness to try almost anything: endurance tests, contours of the skull, ink blots, and even physical combat (Wilson and Tatge, 1973). Today, an even wider assortment of data collection techniques and selection instruments is available (e.g., ability tests, biographical information blanks, personality tests, interviews). While each technique has demonstrated varying degrees of predictive success in particular situations, criticism has been building against the use of these "traditional" selection instruments. For instance, they tend to be narrow banded (Slevin, 1972) in that they capture only one aspect of a candidate's makeup.
They have come under heavy legal fire in the United States because of the lack of sound validity studies and the suspicion that such tests may be unfair to women and/or minorities. Emphasis has more recently shifted to the development and use of situational tests or work samples because of their face and content validity, their flexibility, and their demonstrated predictive validity in some organizational settings.

In the wake of such criticism has emerged a selection technique known as the assessment centre. The assessment centre technique has become extraordinarily popular among professionals and practitioners involved in the selection of personnel. The popularity of the technique is manifest in the large numbers of papers published on its use, the formation of associations devoted to the research and application of assessment centres (e.g., Development Dimensions, Assessment Designs, Inc.), and the unusual support afforded assessment centres by the U.S. federal courts (Klimoski & Strickland, 1977). The assessment centre approach is flexible enough to be adapted to a variety of organizational needs and, while it is not without certain limitations, consultants and academics can both be pleased to have isolated a selection strategy which has theoretical underpinnings and also appears to work (MacKinnon, 1975).

Definition of Assessment Centre

An assessment centre is not a physical setting; rather, it is a dynamic multiphasic process in which information on an individual is brought together from a variety of measurement techniques. The process involves the evaluation of a number of individuals (assessees) by a team of assessors through assessee participation in a standardized program of exercises. The exercises are selected and designed to evoke specific, observable behaviours in a context that simulates the task demands of the job for which the assessee is being considered. The dimensions that the exercises are built around are predetermined qualities or traits considered to be necessary for success in the job. They are typically based on a job analysis and are agreed upon by management before their use in the centre. The exercises serve as the stimuli for the behaviour to be observed.

The assessors are trained evaluators unbiased by previous associations with the assessees, who are familiar with the skills to be measured, the personal characteristics desired, and the environment in which the job is to be performed. Their training focuses on essentially two things:

1. thorough understanding of the exercises used and the types of behaviour they evoke, and
2. making specific, as opposed to global, observations and recordings of behaviour.

Each assessor is responsible for conducting the assessment exercise as well as observing and formally recording participant performance during the exercise. The usual ratio of assessors to assessees is 1 assessor to 2 or 3 assessees. This low ratio allows for focused observation of the participants and multiple evaluation by the staff.
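To make the structure of the process concrete, the following minimal Python sketch shows the kind of data an assessment centre generates: each assessor's dimension ratings, tied to a specific exercise, pooled across observers. All names and numbers are hypothetical, and the simple averaging shown stands in for what is, as described below, actually a discussion-and-consensus process.

    # Schematic sketch (not any centre's actual procedure) of pooling
    # dimension ratings recorded by multiple assessors across exercises.
    # All names and numbers are hypothetical.
    from collections import defaultdict
    from statistics import mean

    # (assessee, assessor, exercise) -> {dimension: rating}
    observations = {
        ("candidate_1", "assessor_A", "in_basket"): {"initiative": 4, "integrity": 3},
        ("candidate_1", "assessor_B", "group_discussion"): {"initiative": 3, "integrity": 4},
    }

    def pooled_dimension_ratings(observations, assessee):
        """Average each dimension across assessors and exercises."""
        pooled = defaultdict(list)
        for (who, _assessor, _exercise), ratings in observations.items():
            if who == assessee:
                for dimension, rating in ratings.items():
                    pooled[dimension].append(rating)
        return {dim: mean(vals) for dim, vals in pooled.items()}

    print(pooled_dimension_ratings(observations, "candidate_1"))
    # {'initiative': 3.5, 'integrity': 3.5}

The point of the structure is that every rating remains tied to a specific assessor and exercise, which is what allows the later integration discussion to compare observations rather than anonymous scores.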
The assessees meet away from the job on "equal" ground, where technical knowledge of a job or the personal biases of supervisors cannot give one candidate an unfair advantage over another. Through the exercises, assessees are given an equal opportunity to display their talents and capabilities. Assessees are seen in at least one exercise by each assessor.

Once all the exercises have been completed by all assessees, the assessors meet to combine exercise information. Discussion proceeds on each assessee individually. Each assessor reports to the other members of the evaluation team the behaviours observed in his/her exercise(s). All members of the team judge the effectiveness of the behaviours noted and prepare an assessment centre summary report for each assessee. These reports are not usually of a hire/don't-hire nature; rather, they detail the strengths and weaknesses of participants on the assessment centre dimensions, with examples of specific behaviours.

Unlike traditional approaches to selection, the assessment centre provides a broad band approach to the evaluation of candidate potential. The situational or simulation exercises found in assessment centres allow for the measurement of more complex or dynamic behaviour rather than isolated traits or aptitudes (Howard, 1974). In fact, traditional selection methods are overshadowed by the assessment centre technique on several counts:

- Assessment centres, more so than other approaches to job selection, place great emphasis on behaviourally based evaluations rather than inferring likely behaviour from correlates such as test scores or biographical information.
- The assessment centre looks primarily at what the assessee can do in a new position, not what he is doing or has done in his present one; it gathers performance information relevant to the new job rather than attempting to predict performance from present or past job performance.
- Samples, not signs, of future performance are utilized.
- Exercises are standardized so that assessors evaluate candidates under relatively constant conditions.
- Assessors usually do not know the candidates personally, removing any personal bias that might exist.
- Assessors are shielded from any interruptions that might occur and can therefore pay full attention to the behaviour of the candidates.
- Assessors have also been trained to observe and evaluate the kinds of behaviour the exercises have been designed to evoke.

The assessment centre's use of multiple observers, multiple sources of information, and specifically defined objective dimensions of performance helps to ensure the objectivity of the process (Byham, 1979; Byham & Wettengel, 1974; Jaffee & Frank, 1978; Jaffee, Frank & Rollins, 1976; Moses, 1977).

Perhaps the single most important feature of the assessment centre concept is the variety of structures and/or instruments that can be employed. Assessment centres differ greatly in length, cost, content, staffing, and administration depending on the objectives of the centre, the dimensions to be assessed, and the employee population (Alexander, 1979; Bender, 1973; Blumenfeld, 1971; Byham, 1970, 1971; Byham & Thornton, 1970; Little, 1974; MacKinnon, 1975; Pomerleau, 1973).
A few of the more common factors on which assessment centres have been found to differ include:

- number of tests and procedures used: reported as ranging from 4 to 40 (Bender, 1973);
- length of the centre: one day to one week;
- number of dimensions used: 10 to 52;
- administrative arrangements: in-house staff or consultants;
- training of assessors: minimal vs. extensive;
- number of assessors;
- number of assessees: typically from 6 to 12 (Blumenfeld, 1971);
- ratio of assessors to assessees;
- selection of candidates: self-nomination or nomination by supervisor; and
- purpose of the centre: selection, promotion, or development.

As concluded in Bender's 1973 examination of operating assessment centre characteristics, no two centres are exactly alike. There is no right or wrong way to structure a centre; the specific application must be tailored to meet specific company needs and operating requirements.

History of Assessment Centres

Assessment centres, as we know them today, have generally been acknowledged to be a direct outgrowth of testing done in the selection of officers for the German military command during the 1930s (Brown, 1978; Kraut, 1976; Millard & Pinsky, 1980). During World War II, this work was in turn borrowed by the British War Office to aid in the selection of officers, as well as by the U.S. Office of Strategic Services, which used centres to select intelligence agents. The work of the Office of Strategic Services, under the general direction of Dr. Henry Murray, produced the first widely used assessment centre approach (Moses, 1977). Over 5,000 recruits were assessed and, while the actual techniques and exercises obviously varied from those used today, the process and method remain essentially the same.

The assessment centre concept was virtually abandoned after the war until the mid-1950s, when the technique was revived and introduced to industry as a result of the pioneering work done at the American Telephone and Telegraph Company (AT&T). AT&T first applied the assessment centre method in its now famous Management Progress Study. This study, an ambitious longitudinal research project, attempted to gain insight into the management development process and to identify the variables related to management success. The subjects of the study, newly hired college graduates, participated in a 3 1/2 day special research assessment centre. The centre consisted of a variety of techniques including leaderless group exercises, business games, an in-basket, in-depth interviews, and a number of psychological and personality measures. Each participant was rated on 25 dimensions, and an overall judgement was made of the likelihood that each would reach middle management within the next ten years. Assessment data were not made available to either the participant or his organization. Extensive follow-up data were collected, and the participants still with the company eight years after the initial assessment were reassessed. The results of the study are presented in Formative Years in Business, a book by Bray, Campbell and Grant (1974).
At this point it suffices to say that the Management Progress Study assessment centre did much to establish the validity of the assessment process. AT&T, only two years after the study had begun, developed its first operational assessment program for line use. Gradually, programs were developed for higher level management assessment as well as for the early identification of potential in very recent employees within the Bell System. The publicity attending this pioneer work at AT&T spurred a number of other large companies in the U.S. to try assessment centres in the following decade. Further development and use of assessment centres has been carried out by such corporate giants as Standard Oil of Ohio, IBM, General Electric, J.C. Penney, and Sears. In Canada, early assessment centre programs were developed at Imperial Oil, Northern Electric, Ontario Hydro, and one or two departments of the federal government. The growth of assessment centres in Canada has paralleled that of the U.S., and it would seem that the penetration of the method in Canadian industry approximates its penetration in the U.S. (Byham, 1977).

In the 1970s, application of the assessment centre approach accelerated rapidly. Increased adoption of the method occurred as public sector employers began to adapt the private sector experience to their own selection and promotional needs (Ross, 1979). The American Management Association introduced a multi-media assessment centre for use by its member organizations, and in the three years subsequent to its publication (1970-73) over 150 organizations made use of it. While estimates of the growth of assessment centre technology vary (over 500: Cohen, 1978; over 1,000: Byham, 1977, Finkle, 1976, Skoff, 1975; over 2,000: Business Week, 1979, Millard & Pinsky, 1980, Parker, 1980, Yager, 1976, Zemke, 1980; over 4,000: Ross, 1979), there is little disagreement that its widespread adoption has been little short of phenomenal.

Assessment Centre Standards

Concern over the rapid growth in the use of the assessment centre method brought about the creation of a task force charged with developing a set of standards or guidelines for users of the assessment centre method. The task force report, "Standards and Ethical Considerations for Assessment Centre Operations", was based on the observations and experience of a representative group of professionals drawn from many of the largest users of the method. The report was endorsed at the Third International Congress on the Assessment Centre Method meeting in May of 1975.

The report set forth seven minimum requirements that must be met before a selection procedure can be considered an assessment centre:

1. Multiple assessment techniques must be used, with at least one technique a simulation.
2. Multiple assessors must be used, with thorough training required prior to participating in a centre.
3. Outcome judgements (e.g., select/don't select) must be based on pooling information across assessors and techniques.
4. Overall behaviour evaluations must be made at a time separate from the observation of behaviour during the exercises.
5. Pre-tested, job related simulations are to be used.
6. The dimensions evaluated are to be determined by an analysis of relevant job behaviours.
7. Techniques should be designed to provide information regarding the previously determined dimensions.

Because the report committee feared careless use of the term "assessment centre", the report also identified the kinds of activities that do not constitute an assessment centre:

1. panel or sequential interviews as the sole technique;
2. reliance on one technique as the sole basis for evaluation;
3. use of only a test battery of a number of pencil and paper measures;
4. single-assessor assessment;
5. use of several simulations with more than one assessor where there is no pooling of data; and
6. a physical location labelled as an "assessment centre" which does not conform to the requirements noted above (Third International Congress on the Assessment Centre Method, 1975).

Adherence to the Task Force Standards is becoming more critical as assessment centres are tested in the courts. One of the first legal challenges to the use of the assessment centre method as a selection/promotion device occurred in the City of Omaha, where police officers challenged the reliability of the method (Berry v. City of Omaha, D.C. Douglas County, Nebraska, Nov. 17, 1975). The Court placed great reliance on the Standards, which the judge ruled the centre had met or surpassed. The issue was decided in the City's favour primarily because of its adherence to the Standards.

The Standards are, of course, a minimum prerequisite for the operation of an assessment centre. Aside from defining what can and cannot be considered an assessment centre, they also address such issues as rights of the participant, assessor training, and validation. Adherence to the Standards, however, does not preclude the necessity to demonstrate the reliability and validity of any assessment centre.

Assessment Centre Reliability

A unique feature of the assessment centre procedure is the use of multiple assessors whose judgements regarding the observed performance of candidates are pooled. This, of course, raises the question of inter-rater reliability in the assessment process. Although few studies report the inter-rater reliability of assessment centre judgements, the research directed at this question is rather conclusive in showing that the assessment centre process is not limited by low reliabilities (Bray & Grant, 1966; Dicken & Black, 1965; Greenwood & McNamara, 1967; Schmitt, 1977; Thomson, 1969).

Dicken and Black (1965), in an inter-rater reliability study focusing on assessment dimensions, found high reliabilities (inter-rater agreement) in the assessment ratings of two different samples.
In a study which examined i n t e r - r a t e r r e l i a b i l i t y i n three d i f f e r e n t s i t u a t i o n a l exercises, Greenwood and McNamara (1967) found mean assessor r e l i a b i l i t i e s f o r effectiveness ratings of .66, .70 and .74 and mean r e l i a b i l i t i e s f o r p a r t i c i p a n t rankings of .64, .71 and .75. This compares favourably to assessor r e l i a b i l i t i e s reported by Bray and Grant (1966). Using two s i t u a t i o n a l exercises, assessor r e l i a b i l i t i e s i n o v e r a l l ratings of .60 and .75 were reported as were r e l i a b i l i t i e s i n rankings of .69 and .75. A few studies have been reported i n the l i t e r a t u r e dealng with the background of assessors serving on assessment centre s t a f f s . The main question of these studies has been whether professional psychologists would provide a higher degree of i n t e r - r a t e r r e l i a b i l i t y as assessors than trained l i n e managers. The evidence" suggests that they do not. Thomson (1969) found no s i g n i f i c a n t differences between ratings made by psychologists and by managers with regard to mean, standard deviations, and r e l i a b i l i t i e s of dimensions (MacKinnon, 1975). Greenwood and McNamara (1967) s i m i l a r l y found no d i f f e r e n c e i n r e l i a b i l i t i e s when comparing psychologists and managers as assessors i n r a t i n g performance across s i t u a t i o n a l t e s t s . It must be remembered, however, that the managers i n these studies were trained i n observation techniques. Instructions to both types of assessors were e x p l i c i t as to the s p e c i f i c type of behaviour to be evaluated, examples of the kinds of behaviour that the s i t u a t i o n a l exercises could be expected to e l i c i t were provided and standardized r a t i n g forms were provided f o r use. Not s u r p r i s i n g l y , r e l i a b i l i t i e s have been shown to markedly increase as a r e s u l t of assessor t r a i n i n g . Richards and Jaffee (1972) report that mean i n t e r - r a t e r r e l i a b i l i t i e s increased from .46 on a human r e l a t i o n s s k i l l dimension and .58 on an administrative s k i l l dimension f o r untrained assessors to .78 and .90 r e s p e c t i v e l y f o r the trained assessor. Likewise, Thomson (1970) i l l u s t r a t e s that managers asked to rate subordinates using assessment centre forms f a l l short of the q u a l i t y of ratings obtained from a trained assessment s t a f f . He found that assessment centre raters were able to discriminate among the various dimensions despite the r e l a t i v e l y high l e v e l of t r a i t i n t e r c o r r e l a t i o n - t h i s was not true of supervisors' r a t i n g s . Differences were postulated to be a t t r i b u t a b l e to the pre-assessment centre t r a i n i n g on the meaning and use of scales which created a common frame of reference and standardized conditions for r a t i n g . Schmitt (1977) also demonstrates that i n t e r - r a t e r r e l i a b i l i t i e s increase a f t e r assessor discussion. Inter-rater r e l i a b i l i t i e s on 17 dimensions were reported as ranging from .46 to .88 (r=.68) before discussion and from .77 to .97 (r=.89) a f t e r discussion. 
The high reliabilities reported in assessment centre research result from the extensive training provided to the assessment staffs in evaluating assessee performance, and from the standardization incorporated into the assessment operation. Reliability, however, cannot be assumed; it must be demonstrated. Examination of reliability within an assessment program can ensure that it operates at peak efficiency (Hinrichs and Haanpera, 1976). Assessment centre reliability is also a necessary precursor of assessment centre validity.

Assessment Centre Validity

Any assessment centre designed to reveal and measure employee effectiveness, both real and potential, might well be expected to demonstrate its own effectiveness. Yet, in the process of rapid assessment centre growth, an alarming number of organizations have sloughed over what should be one of the major concerns associated with any selection approach: the demonstration of validity (Hinrichs, 1978). More often than not, organizations have "borrowed" assessment centre validity information generated elsewhere. And, while the volume of research on assessment centre validity has been encouragingly positive, most of it has addressed itself to four programs: AT&T, Standard Oil of Ohio, IBM, and Sears.

The most extensive validation of an assessment program in industry was undertaken by Bray and Grant (1966) as part of AT&T's Management Progress Study. That study and a later one (Bray & Campbell, 1968), also conducted at AT&T, are the only two "pure" validity studies. The criterion variables of salary and advancement were uncontaminated because assessment centre results were not made known to either the candidates or the organization. Criterion contamination occurs when assessment centre results are made known: invariably, successful candidates at the assessment centre become well known as successful candidates within their own organizations. As a result, perceptions and/or ratings of their performance may well be favourable because the assessment centre previously "anointed" them as good and capable performers.

The Bray and Grant (1966) study is what must be regarded as "The Study" in assessment centre validity (Howard, 1974). It continues to serve as the premier long-term validity study most frequently relied upon by organizations planning to undertake this method. The study involved two samples of men (one of college men and one of non-college men entering careers in management) for whom it was predicted whether or not they would make middle management within ten years of the time of assessment. These predictions were checked 5 to 7 years later against the level of management actually achieved. As Dunnette (1971) summarized, the predictive validities of the assessment staff's global predictions were moderately high: for the college men, 82% of those who made middle management were correctly identified; for the non-college men, 75% of those who made middle management were correctly identified. In contrast, 94% of the men (both college and non-college) who did not advance beyond the first level of management were correctly identified.
The combined results of the two samples showed that, at the time of the study, 45% of those who were predicted to make middle management had, in fact, done so, whereas only 7% of those predicted not to make management had achieved that level. The point biserial correlation between assessment centre predictions and level achieved in management was .44 for the college men and .71 for the non-college men.

The other pure validity study (Bray & Campbell, 1968) compared assessment staff judgements of acceptability as salesmen (more than acceptable, acceptable, less than acceptable, unacceptable) with job performance six months later as evaluated by a special observation team. The correlation between assessment centre ratings and later sales performance was reported as .51.

Validation studies such as the above two conducted at AT&T are rare. The press for application and operational use of the assessment centre technique and the time and expense of pure research prevent ideal studies. Difficulties can include:

- the lack of a comparison group that can be legitimately compared with those who have done well at the assessment centre, since poorer performing assessees are not usually hired or promoted;
- the difficulty of obtaining reliable, relevant criterion measures; and
- the contamination of criteria as assessment centre results are fed back into the line organization.

The following is a review of some of the "less than pure" assessment centre validity studies (i.e., all are subject to the various biases and statistical restrictions in range that are typical of practical operational programs). The study involving the largest number of subjects (over 5,000) was conducted by Moses (1972) at the Bell System Personnel Assessment Program. Results showed a highly significant relationship between the assessment centre rating (one of four categories ranging from "more than acceptable" to "not acceptable") and progress in management. Moses reported that individuals assessed as "more than acceptable" were twice as likely to be promoted two or more times as individuals assessed as "acceptable", and almost ten times as likely to be promoted beyond an entry level assignment as those rated "not acceptable". The correlation between overall assessment rating and progress was .44 (p < .001). AT&T has reported additional studies which further substantiate the success of the assessment technique in the Bell System (Bray & Campbell, 1968; Bray, Campbell & Grant, 1974; Campbell & Bray, 1967; Grant & Bray, 1969; Grant, Katkovsky & Bray, 1967; Huck & Bray, 1976; Jaffee, Bender & Calvert, 1970; Moses & Boehm, 1975).
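For readers unfamiliar with the statistic, the following sketch computes a point biserial correlation like the one reported by Bray and Grant: an ordinary Pearson correlation between a dichotomous variable (predicted to reach middle management or not) and a continuous one (management level attained). The data below are invented for illustration.

    # Minimal point biserial correlation sketch; toy data, not Bray and
    # Grant's.  r_pb = ((M1 - M0) / s) * sqrt(p * q), with s the
    # population SD of the continuous variable.
    from statistics import mean, pstdev

    def point_biserial(dichotomy, levels):
        group1 = [y for d, y in zip(dichotomy, levels) if d == 1]
        group0 = [y for d, y in zip(dichotomy, levels) if d == 0]
        p = len(group1) / len(levels)
        q = 1 - p
        return (mean(group1) - mean(group0)) / pstdev(levels) * (p * q) ** 0.5

    predicted = [1, 1, 1, 0, 0, 0, 1, 0]   # staff prediction (1 = will advance)
    level     = [3, 2, 3, 1, 1, 2, 2, 1]   # management level attained
    print(round(point_biserial(predicted, level), 2))   # 0.8 for this toy data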
Companies outside the Bell System have found similar evidence for the validity of operational assessment centres. At IBM, a number of studies of the assessment programs have shown positive relationships between centre findings and various criteria of job success (Dodd, 1970; Hinrichs, 1969; Kraut, 1972). Kraut and Scott (1972), who reviewed the career progress of over 1,000 non-management candidates at an IBM assessment centre, found that the program was useful in making discriminations of management potential which were later confirmed by two major organizational criteria: second level promotions and demotions from first level management. Upon reaching first line management, those men rated higher at the assessment centre were more likely to move further up the hierarchy, while those rated poorly were more likely to fail as managers at the first line level. In an earlier IBM study, Wollowick and McNamara (1969) found a correlation of .37 (p < .01) between the global assessment rating and a criterion of increase in managerial responsibility three years after assessment. This was in spite of a highly restricted range due to intense pre-selection of assessment centre candidates.

The findings of both the pure and the operational validity research studies have been, for the most part, impressive, positive and consistent (Bray & Grant, 1966; Bray, Campbell & Grant, 1974; Byham, 1970; Cohen, Moses & Byham, 1974; Dunnette, 1971; Hinrichs, 1978; Howard, 1974; Huck, 1973; Jaffee, Bender & Calvert, 1970; Kraut & Scott, 1972; Moses, 1973; Parker, 1980; Thomson, 1970; Wollowick & McNamara, 1969; Worbois, 1975). In a recent review article, Cohen, Moses and Byham (1974) focused on the predictive accuracy of the overall assessment centre rating. In a summary of 22 validity studies they reported a median criterion-related correlation of .37. In industrial applications, the median correlation with job performance (ratings of performance on the job for which the candidate was assessed) was reported at .33; with job potential (ratings of the likelihood of future progress), at .63; and with job progress (promotions, demotions, increases in salary, etc.), at .40.

While assessment centre studies and reviews conclude that the centres can be remarkably effective in identifying employee potential, performance and progress, the question of precisely how effective remains. The problem of criterion contamination has generally led researchers to conclude that reported validities have been spuriously high. There are, however, also reasons to think that in some instances they may have been spuriously low, due to invalidity of the criteria or restriction of range.

Huck (1973), in an investigation of the contribution of the assessment centre approach as compared to more traditional methods, combined a series of independent studies conducted over a period of years on different assessment programs. The data allowed him to estimate the probability of selecting an "above average" performer given various methods of selection. Huck reported probabilities of
Huck reported p r o b a b i l i t i e s of 18 15 percent when choosing an i n d i v i d u a l at random, 35 percent when management nominates an i n d i v i d u a l f o r a supervisory p o s i t i o n based on whatever factors are a v a i l a b l e other than assessment centre r e s u l t s , and 76 percent when management recommends an i n d i v i d u a l and that i n d i v i d u a l i s also rated "acceptable" i n the assessment centre. He concludes that by u t i l i z i n g r e s u l t s of the assessmnt process the chance rate i s s u b s t a n t i a l l y increased and the p r o b a b i l i t y of s e l e c t i n g a "winer" more than doubles. Byham i n a 1971 review a r t i c l e concludes that the accumulation of research findings from a v a r i e t y of types of centres lends consider-able c r e d i b i l i t y to the o v e r a l l v a l i d i t y of the technique. In a survey of assessment centres he reports uncovering 22 studies i n a l l that showed assessment centres more e f f e c t i v e than other approaches and only one that showed i t exactly as e f f e c t i v e as some other approaches. None showed i t les s e f f e c t i v e . E ffectiveness, as defined i n t h i s a r t i c l e , i s represented by the c o r r e l a t i o n s between assessment centre p r e d i c t i o n and achievement c r i t e r i a such as advancement, salary grade, and performance r a t i n g s . There are only two reported studies reaching a negative judgement on the assessment centre method. One study at the C a t e r p i l l a r Tractor Co. (reported by Cohen et. a l , 1974), compared the job performance of 37 men appointed as f i r s t l i n e shop supervisors following assessment with that of 27 men whose appointment was based on the t r a d i t i o n a l methods of s e l e c t i o n . It was found that supervisors selected by the t r a d i t i o n a l methods were rated higher i n job performance than those picked by the assessment centre. L i t t l e j u s t i f i c a t i o n could be found for abandoning the t r a d i t i o n a l s e l e c t i o n methods for the assessment 19 centre process. It has been suggested, however, that the poor showing made by the assessment centre candidates may have resulted from the f a i l u r e of the assessment centre to evaluate t e c h n i c a l s k i l l s - a c r i t i c a l component of successful supervisory performance. The second study (Hinrichs, 1969) seems to ind i c a t e that a thorough study of p a r t i c i p a n t s ' personnel f i l e s and an interview would provide information comparable to the r e s u l t s of an assessment centre. Hinrichs showed that ratings of managerial p o t e n t i a l based on information already a v a i l a b l e i n the personnel records of 47 IBM managers correlated .46 with the ratings of managerial p o t e n t i a l received i n t h e i r assessment program. However, the intent of a s e l e c t i o n assessment program i s p r e d i c t i v e rather than concurrent i n design and as Hinrichs maintains; " i f the focus i s on the early i d e n t i f i c a t i o n of p o t e n t i a l where l i t t l e job h i s t o r y has accrued, then the assessment process i s probably a very e f f e c t i v e means of synthesizing a rather close approximation of the type of p o t e n t i a l p r e d i c t i o n which could eventually evolve through on-the-job performance". With the exceptions of the C a t e r p i l l a r Tractor Co. 
and the Hinrichs study, the performance of men selected or promoted after having been assessed has been shown to be more successful than that of men selected or promoted without assessment. In an early study conducted at Michigan Bell in 1962 (as reported in MacKinnon, 1975), the first forty men assessed and promoted were compared with the last forty men promoted before the assessment centre program began. The findings showed that approximately two-thirds of the assessed group were rated "better than satisfactory" in job performance, as compared to only one-third of the group not assessed. In addition, whereas 67 percent of the assessed group were rated as having the abilities required for the next level of management, only 35 percent of the non-assessed group demonstrated this potential. These findings have been supported by Campbell and Bray (1967) who, conducting a similar type of study, reported that while only slightly more of the assessment group were judged higher on job performance, considerably more of the assessment group (approximately twice as many) were rated as having higher future potential. A study by Jaffee, Bender and Calvert (1970) also supports these findings.

Given all the uncertainties in the research on the validity of assessment centres, many researchers and practitioners still think it prudent to conclude that the best of them are remarkably effective selection devices. However, the assessment centre is a much more comprehensive approach to the collection and analysis of selection data than traditional measures, and it is consequently more costly. Accordingly, assessment centres must be shown to be demonstrably more useful than the traditional, less expensive methods of selection if they are to be fully justified. In determining the utility of an assessment centre, one must address the question of whether the expected increase in costs is justified by the expected increase in validity when choosing to employ the assessment centre rather than more traditional selection methods. This topic was the subject of an article that estimated the utility of the assessment centre process as a selection device in relation to other selection tools (Cascio and Silbey, 1979). In this article, the authors argue that utility depends not only on validity but also on the selection ratio (the proportion of applicants selected), the standard deviation of criterion scores (which indicates both the magnitude and the practical significance of individual differences in payoff), and the cost of the selection procedure. They offer a mathematical model that can be used in estimating the incremental gain or loss of utilizing the assessment centre process.
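The following is a minimal sketch of a utility computation in the Brogden-Cronbach-Gleser tradition on which models such as Cascio and Silbey's build. Every figure below is invented for illustration; this is not a reproduction of their model.

    # Sketch of an incremental utility estimate: is a more valid but
    # costlier procedure worth it?  All numbers are hypothetical.
    from statistics import NormalDist

    def incremental_utility(n_selected, validity_gain, sd_criterion_dollars,
                            selection_ratio, extra_cost_per_applicant):
        """Expected dollar gain of a more valid (but costlier) procedure."""
        # mean standard score of those selected under top-down selection:
        # ordinate of the normal curve at the cutoff, divided by the ratio
        cutoff = NormalDist().inv_cdf(1 - selection_ratio)
        z_selected = NormalDist().pdf(cutoff) / selection_ratio
        n_applicants = n_selected / selection_ratio
        gain = n_selected * validity_gain * sd_criterion_dollars * z_selected
        return gain - n_applicants * extra_cost_per_applicant

    # e.g. 20 hires, validity .37 vs .17, SD of criterion payoff $10,000,
    # 1-in-5 selection, centre costs $400 more per applicant
    print(round(incremental_utility(20, 0.37 - 0.17, 10_000, 0.20, 400)))
    # ~ 16,000 with these invented numbers

The sign of the result captures the trade-off described above: the costlier procedure pays off only when the validity gain, scaled by the dollar spread of criterion performance and the calibre of those selected, outweighs the added cost.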
A somewhat less sophisticated treatment of assessment centre utility is offered by Cohen (1980a), who investigated the return on investment (ROI) of utilizing the assessment process. The ROI strategy employed involves estimating all assessment centre costs and comparing them with the calculated amount it would cost the organization for just one person to fail in the target position. A survey conducted by the Journal of Assessment Centre Technology under Cohen in 1979 found that, while assessment centres were costly to set up and maintain, they pay for themselves in estimated savings by more than four times their cost. This ROI strategy does not even consider the potential for improved production with the selection of one person who succeeds, nor does it provide comparative information about the effectiveness of assessment centre technology vis-a-vis other selection methods.

Both of the above strategies could prove invaluable as aids in deciding the pragmatic utility of an assessment centre program. Of even more importance, however, is the emphasis by both strategies that both validity and the cost/benefit ratio should be evaluated in estimating the utility of the selection procedure. It cannot be forgotten that, while a centre may not demonstrate statistical or practical significance, it is possible for it to show a return on investment and/or an incremental gain in utility over another selection method.

Compared with the alternatives for selection, assessment centres look promising. Nevertheless, the research on them, though positive, is sparse, comes from too few sources, covers too many variations in components, lacks replication, and is plagued by methodological problems (Howard, 1974; Ungerson, 1974). Besides, the historical record of the validity of this process, no matter how definitive the research, cannot be taken as a guarantee that any given assessment centre program will or will not be valid in a given setting. Also, it is important to remember that the assessment centre program is not intended as the final judgemental factor in selection. The process simply makes one more piece of information available that can be fitted into an employee's record. Utilizing assessment centre results as the sole basis for personnel decisions is asking a technology to do more than it has been designed to do and is inviting invalidity into a process that deserves much more judicious treatment (Byham, 1980; Cohen, 1980). Assessment centre programs have been shown, with remarkable success, to work in and of themselves, but it must be made clear that they serve their greatest end as the foundation of a selection system, not as THE selection system.

Assessment Centres and the Police

As in any other organization, two major issues facing police organizations today are selection and promotion procedures. In the century and a half since the development of modern police departments, selection/promotion criteria and procedures have changed very little. The seventies, however, brought a movement towards police professionalism, and modern personnel management concepts swept into law enforcement agencies. This movement was spurred on by increasing legal pressures and humanistic philosophies.
Consequently, a number of police departments across the United States and Canada turned to assessment centres as a selection device for overcoming many of the pitfalls created by relying solely on the traditional methods of testing and an oral interview for hiring and promoting police officers (Bozza & Umshied, 1979; Brown, 1978; Buracker, 1980; Francis, 1975; Kent, Wall & Bailey, 1974; McGhee & Deen, 1979; Ross, 1980; Sanchez, 1981). Traditional testing procedures had fallen in the wake of validity requirements, and court cases across the United States made it obvious to police administrators and personnel managers that they must make serious efforts to ensure as "job related" a selection process as possible (i.e., a process that taps readily observable, job relevant, and quantifiable behaviours). After studying the methods available, many police departments decided that assessment centres offer a high quality alternative to traditional procedures. This decision was perhaps engendered by such favourable U.S. decisions on assessment centres as Berry v. City of Omaha and Richmond Police Officers v. Richmond (Kennedy, 1982).

The increasing acceptance of the assessment centre method in police departments is visible in the amount of literature on the use of assessment centres in the selection and promotion of police officers in departments in both the United States and Canada. Bozza and Umshied (1979), Brown (1978), Eisenberg (1980), and Sanchez (1981) offer descriptions of the assessment centre process in general and discuss its relevance to policing. Buracker (1980), Driggs and Whisenand (1976), Francis (1975), Kent, Wall and Bailey (1974), McGhee and Deen (1979), McGinnis and Carpenter (1980), Quigley (1976), Ross (1980), Turner (1978), and Van Kirk (1975) offer descriptions of various assessment centre programs in operation at police departments throughout Canada and the United States. The majority of studies are descriptive in nature and offer how-to advice for interested police departments. Little empirical evidence has been collected on the validity of assessment centres in law enforcement.

Two studies which have attempted to address the issue of assessment centre validity in a police setting are those conducted by McGinnis and Carpenter (1980) and Ross (1980). McGinnis and Carpenter, in an analysis of assessment centre results in an R.C.M.P. pilot study, found little agreement (regardless of the dimension assessed) between ratings of performance made by a candidate's immediate supervisor and ratings of performance made by the assessors in the assessment centre. The authors concluded that any true relationship which might exist was probably obscured by inflated, indiscriminating ratings by supervisors. Ross (1980), in a predictive validity study of a police assessment centre in southern California, found a correlation of .47 (p < .05) between job performance ratings and overall assessment centre ratings.
She concluded that the assessment centre under investigation was capable of differentiating between effective and ineffective performance, as measured by a follow-up evaluation of performance two years later, even though the centre did not meet all of the guidelines set forth by the Task Force on Assessment Center Standards (e.g., a job analysis was not conducted and assessors were not trained).

As mentioned, the investigation of assessment centre validity in law enforcement has received little attention and typically only passing mention in the literature. Yet validity research is critical to the successful future of assessment centres in law enforcement. Despite the popular common label, there is no one professionally endorsed assessment centre entity with proven predictive validity. Police departments across North America employ a variety of assessment centre programs, each of which must be evaluated in the context of its own objectives and circumstances. Predictive validity research on assessment centres must be pursued on a continuous basis in connection with all assessment programs. Such study will provide the necessary verification that the techniques employed in a particular program do, in fact, function as intended, and will add to the relatively limited validity information presently available on police assessment centres. The credibility of the assessment centre technique can only be established with the accumulation of research findings from a variety of centres.

The intent of this study was to conduct a criterion-related validity study of an assessment centre in law enforcement. The assessment centre, which began operation in 1978, was increasingly being questioned as to its credibility and effectiveness, particularly as the novelty of the program wore off. The objective was to determine the predictive validity of this particularly extensive and somewhat popular assessment centre used in the selection of applicants into the police force.

METHOD

Introduction

The present study grew out of a Police Academy desire to evaluate its assessment centre process. The centre had been the object of study only once before, when an attempt was made to determine the reliability of the assessment process. While the prime purpose of this study was to assess the validity of the assessment centre method, an attempt was made to investigate all aspects of the centre's administration. This was deemed to be important in explaining, as well as in offering suggestions for the improvement of, the obtained validity coefficients. As part of this investigation, the author underwent the four day assessor training course offered to prospective assessment centre assessors. Approximately one month later, the author also participated as an observer in the day-long recruit assessment centre and the subsequent day of assessee discussion by the assessors.
Because the assessment centre is an ongoing operational entity, it was necessary to conduct the study within a number of organizational constraints. Probably the greatest of these constraints was the necessity to use available criterion measures of police recruit effectiveness. The introduction of any new criterion would have been met with extreme resistance, as the department was already in the process of introducing its own new performance appraisal form. Even if the introduction of a new measure had been possible, the time constraints imposed on the study would have precluded it (a condition of access to the data was a nine month limit on the researcher-organization association).

The predictor and criterion data employed in the study were, therefore, retrieved from a number of archival sources. Assessment centre files of assessed candidates, while complete with regard to assessment ratings, unfortunately contained no indication as to whether or not candidates had been accepted by the police and, if so, by which department. Obtaining this information involved cross-matching candidate names with the departmental personnel directories. Academy grade data for the located candidates could then be obtained from academy records stored at the Police Academy. Obtaining performance data, on the other hand, involved approaching the department for the necessary permission to view personnel records. A list of those candidates whose records were required was submitted to the department, and data retrieval was done through an intermediary.

Subjects

The sample was drawn from police candidates who had attended assessment centres conducted by the Police Academy at the British Columbia Justice Institute (see Appendix 1 for a description of the assessment centre operation). In the period from March 5, 1978 to December 12, 1981, the assessment centre of the Police Constable Selection Program processed 466 recruit candidates, approximately 37 percent of whom were later hired.

Differing selection policies and performance measures necessitated the separate examination of recruits by department. A breakdown of the assessment centre sample into departments (Figure 1) reveals only one department, Vancouver, for which a sufficiently large number of recruits were processed and later hired (n=118).

[Figure 1: Breakdown of Assessment Centre Sample into Departments. The original bar chart, showing for each department the number of candidates indicating it as a recipient for ratings and the number accepted by it, could not be reproduced here.]

Complete criterion data (Police Academy grades, Performance Appraisal measure, and Rated Potential measure) were available for 96 of the recruits. These candidates comprised the experimental group. A random sample of recruits hired just previous to the implementation of the assessment centre (n=56) was selected as the control group. The assessment centre, unfortunately, keeps no systematic record of biographical data on the candidates.
The samples' candidates can only be described as having met the minimum requirements to join the force. Aside from a number of physical requirements, each candidate must be a Canadian citizen, fall in the age range of 18 to 35, and have at least a grade 12 (or equivalent) education.

Predictors

Data from the assessment centre were available on sixteen variables. These included assessor ratings on each of the fifteen job capability dimensions as determined in Turner and Higgins' Police Constable Job Analysis Report: Initial Selection Assessment and an overall assessment rating (for definitions of the assessment centre dimensions see Appendix 2). The overall assessment rating (OAR) is arrived at by consensus of the assessors who, individually, have formed a judgement of the candidate's overall rating based on a clinical analysis of the fifteen dimension ratings. Since the original ratings on the sixteen variables used by the assessors were of a scale not appropriate for statistical analysis, the rating scale was revised as shown in Table 1.

TABLE 1
REVISED ASSESSMENT RATING SCALE

ORIGINAL RATING SCALE   DESCRIPTOR                      REVISED RATING SCALE
5                       Excellent                       11
4+                      A great deal of ability shown   10
4                       Well above average               9
4-                      Above average                    8
3+                      Slightly above average           7
3                       Average                          6
3-                      Slightly below average           5
2+                      Below average                    4
2                       Well below average               3
2-                      Very little ability              2
1                       Poor                             1
0                       No opportunity to observe        0

In those cases where assessors could not agree on a rating and therefore assigned a "split rating" to a candidate, the two ratings were averaged rather than lose the information. Averaging in this instance is quite common and acceptable in the assessment centre literature.

Criteria

Three different measures of police recruit effectiveness were employed in the study: reported academy grades, a derived performance appraisal measure, and rated potential as a police constable.

Academy Grades: Grades as reported by academy instructors at the end of Block III of the recruit training were employed. At this time, recruits have undergone 14 weeks of classroom instruction (Block I), 8 weeks of departmental assignments under the supervision of a Field Training Officer (Block II), and a further 10 weeks of classroom instruction (Block III). In order to provide a common scale of measurement for the different recruit classes, instructor grades of "A", "Outstanding" or "86 to 100%" were coded as 3; grades of "B", "Very Good" or "73 to 85%" as 2; and grades of "C", "Satisfactory" or "60 to 72%" as 1. Lower grades (ie. "D", "Fail", or "less than 60%"), although available to the instructors, did not appear in the sample. Eight academy grades were reported for each candidate: Investigation and Patrol; Physical Education; Drill, Dress and Deportment; Emergency Care; Firearms Training; Traffic Studies; Applied Social Science; and Legal Studies. The contents of the academy courses are outlined in Appendix 3.
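Since the recoding in Table 1 and the averaging of split ratings recur in all later analyses, a minimal sketch of the conversion may be useful. It is written in Python and assumes original ratings are stored as strings, with a split rating written with a slash; the storage format and every name here are illustrative assumptions, not the Academy's actual records.

    # Table 1 recoding of original assessor ratings to the revised
    # 0-11 scale; the slash notation for split ratings is an assumed
    # representation, not the Academy's own.
    REVISED_SCALE = {
        "5": 11, "4+": 10, "4": 9, "4-": 8,
        "3+": 7, "3": 6, "3-": 5, "2+": 4,
        "2": 3, "2-": 2, "1": 1, "0": 0,
    }

    def recode(rating: str) -> float:
        # A split rating such as "3/3+" (assessors could not agree)
        # is averaged rather than discarded, as described above.
        if "/" in rating:
            a, b = rating.split("/")
            return (REVISED_SCALE[a] + REVISED_SCALE[b]) / 2.0
        return float(REVISED_SCALE[rating])

    print(recode("4-"))    # 8.0
    print(recode("3/3+"))  # 6.5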
Performance Appraisal Measure: Recruit performance appraisal was measured by the Progress Report Rating Form (see Appendix 4) already in use at the Vancouver Police Department. While the drawbacks and limitations of the form have been acknowledged in the Department's recent introduction of a new and "improved" form, the Progress Report Rating Form provided an available measure of recruit performance for all subjects at the same period in their police career - Probationary to Third Class. At the time of appraisal, all recruits had been with the department approximately one year (departmental policy requires appraisal after one year of service).

As the Progress Report Rating Form is little more than a descriptive checklist, it was necessary to recombine and scale the items in order to arrive at a performance appraisal measure. Within each category of performance (ie. discipline, loyalty, etc.) response alternatives were first analysed as to their ability to differentiate among candidates. Response alternatives checked for fewer than 10 percent or greater than 90 percent of the candidates were discarded due to their lack of discriminating power. Inspection revealed that many of the remaining response alternatives within each category were mutually exclusive and reflected varying levels of intensity of the same attribute (ie. very loyal, fair loyalty, loyal). Ratings were assigned to these response alternatives as a reflection of the inadvertent scaling that raters seemed to be employing. In those cases where response alternatives were non-mutually exclusive and required only an acknowledgement of the presence or absence of an attribute, rather than an indication of degree of presence, unit weights were assigned. Appendix 5 shows the modifications made on the Progress Report Rating Form and how the items were scored. A candidate's total score across all categories of performance constitutes his Performance Appraisal measure. Scores on this measure ranged from 16 to 44 with a maximum possible score of 47. The measure's internal consistency as estimated by Coefficient Alpha (Cronbach, 1967) was determined to be .7867.

Rated Potential Measure: The final category of the Progress Report Rating Form, potentiality, was employed as the third performance criterion. It was not combined into the Performance Appraisal measure because, unlike the other categories, it addresses the candidate's future performance. The category was analysed similarly to those employed in the Performance Appraisal measure. Only two response alternatives of the potential item were used by raters. The two were mutually exclusive, with one indicating a greater presence of potential than the other - weights were assigned to reflect this. The resultant Rated Potential measure was a two-point scale: 1 if rated potential as a constable was average and 2 if rated potential was above average. Approximately 80 percent of the sample was rated above average and 20 percent was rated average.
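For reference, the internal consistency estimate quoted above for the Performance Appraisal measure is Cronbach's coefficient alpha. For k items with item variances \sigma_i^2 and total-score variance \sigma_T^2,

    \alpha = \frac{k}{k-1} \left( 1 - \frac{\sum_{i=1}^{k} \sigma_i^2}{\sigma_T^2} \right).

Under the usual assumptions, the obtained value of .7867 indicates that roughly three-quarters of the variance in candidates' total scores is shared across the retained checklist items rather than being item-specific.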
Data Analysis

Two main types of analyses were conducted for this study. The first type of analysis carried out was to determine the predictive ability of the assessment centre with respect to the three job performance measures. For the two criteria, performance appraisal and potentiality, this involved computation of the correlation coefficient between each of them and the OAR. These two criterion measures were also, separately, regressed on the fifteen dimensions underlying the OAR. A stepwise procedure was used to determine which, if any, of the dimensions were meaningfully related to the performance appraisal measure and the measure of potentiality as a constable. For the criterion academy grades, correlation coefficients were computed and the OAR was regressed on the eight grades and the multiple correlation coefficient determined. Stepwise analyses were conducted to determine any meaningful relationships between the OAR and any of the academy grades.

The second type of analysis involved comparison of the control (hired prior to the assessment centre) and experimental (hired after assessment) groups. Differences between the two groups were computed on the criterion performance appraisal using t as the test of significance, on the criterion potentiality using χ² as the test of significance, and on the criteria academy grades using Hotelling's T² as the test of significance.

As an adjunct to this study of predictive validity two further studies were undertaken: 1. the OAR was regressed on the fifteen assessment dimensions using a stepwise procedure to determine the contributions of the dimensions to a candidate's OAR, and 2. the correlation matrix of all sixteen assessment centre ratings was factor analysed in order to determine any broad, underlying dimensions of assessment performance. For these analyses data were employed from all Vancouver candidates (hired or not) who were processed through the assessment centre during the period from March 5, 1978 to April 6, 1981 (n=233).

RESULTS

Table 2 presents the means, standard deviations, and ranges of the research variables for the total sample (where applicable) and for both the experimental and control groups.

OAR vs. the Criteria: Figure 2 shows the mean performance appraisal measure obtained at each level of the OAR scale represented in the sample. The correlation between the derived performance appraisal measure and the OAR is low, as would be expected from inspection of Figure 2 (r=.0784, Not Statistically Significant - N.S.). Figure 3 shows the proportion at each level of the utilized OAR scale scoring 2 (above average) as opposed to 1 (average) on the potentiality scale. While there would appear to be somewhat of a weak trend represented here (ie. the proportion increases as OAR increases), the point biserial correlation between OAR and rated potential is minimal (r=-.0083, N.S.). It must be remembered, however, that the point biserial correlation is not independent of the proportions of the sample which fall into the two categories of average and above average.
In a sample, like this one, where approximately 80 percent of the candidates are rated above average and 20 percent are rated average, the maximum attainable point biserial correlation is far less than one. Employing the formula provided by McNemar (1962) reveals that the maximum attainable point biserial correlation in this case would be .6998.

It is interesting to note that the derived performance appraisal measure and potentiality measure are significantly correlated (r=.3291, p<.01). This suggests that the two criteria are measuring something similar.

TABLE II
RANGES, MEANS, AND STANDARD DEVIATIONS FOR ALL RESEARCH VARIABLES

                               TOTAL SAMPLE (n=233)     EXPERIMENTAL (n=96)      CONTROL (n=56)
VARIABLE                       RANGE   MEAN   S.D.      RANGE   MEAN   S.D.      RANGE   MEAN   S.D.
Practical Intelligence         4-10    6.597  1.120     5-10    6.922  1.040
Integrity                      1-10    6.361  1.252     4-9     6.646   .973
Problem Confrontation          3-9     6.373  1.134     4-9     6.547   .949
Stress Tolerance               2-9     6.103  1.214     4-9     6.349  1.112
Ability to Learn               3-9     6.253  1.169     4-9     6.443  1.046
Initiative                     2-10    6.562  1.253     3-10    6.849  1.179
Decisiveness                   2-11    6.470  1.191     4-11    6.635  1.097
Flexibility                    4-9     6.466   .943     5-9     6.557   .784
Fact Finding                   2-9     6.114  1.256     4-9     6.234  1.304
Oral Communication             2-10    6.479   .106     4-10    6.615  1.099
Interpersonal Tolerance        4-8     6.316   .970     5-8     6.552   .769
Interpersonal Sensitivity      1-9     6.502  1.185     4-9     6.776   .981
Written Communication          3-9     6.073   .951     3-9     6.125   .965
Adherence to Authority         3-9     6.586   .954     5-9     6.646   .821
Personal Impact                2-10    6.479  1.347     4-10    6.844  1.182
OAR                            2-9     6.275  1.568     4-9     6.823  1.152
Investigation and Patrol       1-3     2.527   .577     1-3     2.663   .538     1-3     2.283   .568
Physical Training              1-3     2.293   .654     1-3     2.384   .607     1-3     2.132   .708
Drill, Dress, and Deportment   1-3     1.973   .716     1-3     1.931   .759     1-3     2.120   .526
Emergency Care                 1-3     1.878   .582     1-3     2.011   .536     1-3     1.642   .591
Firearms                       1-3     2.259   .759     1-3     2.147   .771     1-3     2.462   .699
Traffic                        1-3     2.527   .527     2-3     2.590   .495     1-3     2.415   .570
Social Sciences                1-3     2.257   .573     1-3     2.358   .544     1-3     2.076   .583
Legal Studies                  1-3     2.318   .670     1-3     2.316   .673     1-3     2.321   .673
Performance Appraisal          16-44  30.151  5.513     16-43  30.573  5.101     16-44  29.429  6.137
Potential                      1-2     1.368   .484     1-2     1.802   .401     1-2     1.464   .503

Note: Total Sample refers to all Vancouver candidates, hired or not, processed through the assessment centre during the period of March 1978 to April 1981. Control-sample entries are blank for the assessment centre variables, which exist only for assessed candidates.

FIGURE 2
MEAN PERFORMANCE APPRAISAL MEASURE AS A FUNCTION OF OVERALL ASSESSMENT RATING
[Plot of mean performance appraisal measure against OAR levels 4 through 9; n at each level: 4, 9, 19, 35, 26, 3.]

FIGURE 3
PROPORTION OF ABOVE AVERAGE CANDIDATES AT EACH LEVEL OF OVERALL ASSESSMENT RATING
[Plot of the proportion rated above average (vertical axis .25 to .75) against OAR levels 4 through 9; n at each level: 4, 9, 19, 35, 26, 3.]

TABLE III
CORRELATION MATRIX OF ACADEMY GRADES AND OVERALL ASSESSMENT RATING
(n = 96; r at .05 = .2006, r at .01 = .2617)

                                  1.      2.      3.      4.      5.      6.      7.      8.      9.
1. OAR                         1.0000
2. Investigation & Patrol      -.0636  1.0000
3. Physical Training           -.0836   .1726  1.0000
4. Drill, Dress & Deportment   -.0776   .0169  -.0399  1.0000
5. Emergency Care              -.0655   .1969   .0861   .0038  1.0000
6. Firearms Training           -.1250   .0440   .1386  -.0973   .0734  1.0000
7. Traffic Studies              .1676   .3143   .0693   .1396   .2172   .1882  1.0000
8. Social Sciences              .3054  -.0199  -.0657  -.0564   .1329  -.0256   .1960  1.0000
9. Legal Studies               -.1043   .2676   .1290  -.0194  -.0291  -.0291   .1700   .0367  1.0000
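As a worked check on the ceiling cited above (a sketch using the standard ordinate form of the result; McNemar's own notation may differ): for a dichotomy with proportions p and q, the maximum point biserial correlation attainable against a normally distributed variable is

    r_{pb}^{\max} = \frac{y}{\sqrt{pq}},

where y is the ordinate of the standard normal density at the point cutting off the proportion p. With p = .80 and q = .20, the cut falls at z = 0.8416, where y = 0.2800, so the ceiling is 0.2800 / \sqrt{(.80)(.20)} = 0.2800 / 0.4 \approx .700, in agreement with the reported .6998.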
However, while they may both capture some common aspect of police success or effectiveness, the correlation is not so high as to suggest that they are measuring the same thing.

Table III presents the correlations of the OAR with each of the eight academy grades. As can be seen from the correlations, a candidate's OAR is most highly related to his grade in Social Science (r=.3054, p<.01). The correlations of all other academy grades with OAR are negligible. This is confirmed in a stepwise multiple regression analysis. None of the other seven academy grades were able to statistically improve on the predictive ability of the Social Sciences grade. While all academy grades can account for approximately 20.3 percent of the variance in OAR, about 9.3 percent of the variance can be accounted for by Social Science grades (see Table IV) alone.

TABLE IV
MULTIPLE REGRESSION FUNCTION FOR PREDICTING OAR WITH ACADEMY GRADES

Predicted OAR = 5.299 + .646 Social Sciences grade
Multiple R = .3054    R-Squared = .0933

Experimental vs. Control Group: As reported in Table II, the mean performance appraisal measure for the experimental group was 30.573 and the mean for the control group was 29.429. Examination of performance appraisal measures for both the experimental and control groups, however, revealed no significant mean difference between them (t=1.24, p=.218).

Ratings of potentiality, on the other hand, were found to be statistically different for the experimental and control groups (χ² = 18.475, p<.001). Experimental group candidates were consistently rated higher on potential than were the control group candidates. The mean potentiality ratings for the two groups were 1.802 for the experimental group and 1.464 for the control group (see Table II).

In comparing the mean academy grades between the experimental and control groups, simple inspection of Table II shows that the experimental group seems to be outperforming the control group on five of the eight academy grades (Investigation and Patrol, Physical Training, Emergency Care, Traffic Studies, and Social Sciences). Statistical analysis of performance at the academy shows that there is indeed a statistically significant difference between the two groups (T² = 42.158, F(8,101) = 6.0948, p<.001). Further analysis reveals that the difference is due primarily to the academy course Investigation and Patrol. Experimental group candidates have, on the average, been scoring significantly higher on this course than have control group candidates.

Assessment Centre Dimensions: Table V presents the correlation matrix for the fifteen assessment centre dimensions and the OAR. All dimensions correlate significantly with the OAR, ranging from a correlation of .2149 (Flexibility) to .7163 (Stress Tolerance) with a median correlation of .6191. Table VI presents the correlation matrix for the fifteen assessment centre dimensions, the potentiality measure, and the performance appraisal measure.
Correlations of assessment centre dimensions with the performance appraisal measure range from .0037 (Adherence to Authority) to -.2003 (Decisiveness) with a median correlation of .0447. None of the correlations are statistically significant. Correlations of assessment centre dimensions with the measure of potentiality are likewise all not statistically significant. The correlations range from .0004 (Practical Intelligence) to .1358 (Oral Communication) with a median correlation of .0452.

TABLE V
CORRELATION MATRIX FOR THE ASSESSMENT CENTRE DIMENSIONS AND THE OVERALL ASSESSMENT RATING
(n = 233; r at .05 = .1286, r at .01 = .1684)

                               1.     2.     3.     4.     5.     6.     7.     8.     9.     10.    11.    12.    13.    14.    15.    16.
 1. Practical Intelligence   1.0000
 2. Integrity                 .3178 1.0000
 3. Problem Confrontation     .5898  .2705 1.0000
 4. Stress Tolerance          .5481  .3823  .6495 1.0000
 5. Ability to Learn          .7161  .2583  .4907  .4553 1.0000
 6. Initiative                .6160  .2961  .6620  .5793  .5423 1.0000
 7. Decisiveness              .5473  .2197  .5697  .5437  .5177  .5945 1.0000
 8. Flexibility               .1399  .1182  .1460  .0972  .2035  .0903  .0269 1.0000
 9. Fact Finding              .6101  .1985  .5235  .4189  .5878  .5218  .4940  .1352 1.0000
10. Oral Communication        .5009  .2717  .6067  .6868  .4825  .6073  .5498  .1450  .4414 1.0000
11. Interpersonal Tolerance   .3420  .1699  .3314  .4007  .2867  .4015  .2536  .3632  .2960  .3570 1.0000
12. Interpersonal Sensitivity .4220  .1636  .3714  .3534  .3294  .4230  .3100  .3164  .2713  .3831  .6157 1.0000
13. Written Communication     .1856  .1806  .2044  .2249  .1830  .1608  .0799  .1615  .0742  .2166  .1409  .1797 1.0000
14. Adherence to Authority    .2644  .4971  .2171  .2360  .2026  .2676  .1871  .2512  .1455  .2171  .3119  .2609  .2187 1.0000
15. Personal Impact           .5535  .2947  .6139  .7051  .4606  .6520  .5489  .2158  .4332  .7345  .4549  .4807  .2991  .3007 1.0000
16. OAR                       .6818  .4719  .6812  .7163  .5709  .6932  .5328  .2149  .5431  .6358  .6247  .6191  .2640  .3572  .7079 1.0000

TABLE VI
CORRELATION MATRIX FOR THE ASSESSMENT CENTRE DIMENSIONS, PERFORMANCE APPRAISAL MEASURE, AND POTENTIALITY MEASURE
(n = 96; r at .05 = .2006, r at .01 = .2617. The five bracketed entries in the lower right could not be assigned with certainty from the damaged original layout; the values shown are the most plausible reading.)

                               1.     2.     3.     4.     5.     6.     7.     8.     9.     10.    11.      12.      13.     14.    15.     16.    17.
 1. Practical Intelligence   1.0000
 2. Integrity                 .2637 1.0000
 3. Problem Confrontation     .4893  .2120 1.0000
 4. Stress Tolerance          .4926  .4022  .5602 1.0000
 5. Ability to Learn          .6493  .3108  .4057  .3408 1.0000
 6. Initiative                .5914  .2602  .5850  .5222  .5499 1.0000
 7. Decisiveness              .5242  .3315  .5578  .5714  .5621  .5472 1.0000
 8. Flexibility               .3186  .1579  .3889  .2031  .3569  .3680  .2510 1.0000
 9. Fact Finding              .5729  .1781  .5249  .3603  .5870  .5369  .4801  .2467 1.0000
10. Oral Communication        .4156  .2943  .5424  .6751  .4522  .5923  .5635  .3190  .3943 1.0000
11. Interpersonal Tolerance   .2782  .0671  .3246  .4059  .2490  .3540  .2535  .3744  .2789  .3601 1.0000
12. Interpersonal Sensitivity .4523  .2137  .4382  .3087  .2720  .4209  .3489  .3076  .2596  .3487  .5002   1.0000
13. Written Communication     .1252  .1709  .1832  .2335  .0437  .0723  .0336  .0739  .0727  .0955  .0266   -.0368   1.0000
14. Adherence to Authority    .2572  .3818  .1297  .1484  .3134  .2325  .2235  .3508  .1670  .1272  .1379    .2273    .0964  1.0000
15. Personal Impact           .4570  .2168  .5933  .6665  .4270  .6515  .6014  .3505  .4647  .7394  .4664    .4507    .1742   .2354 1.0000
16. Potential                 .0004  .0343 -.0723  .0503  .0103  .0252  .0017 -.1143  .0293  .1358 -.0174   -.1006    .1464 [-.1194] [.0452] 1.0000
17. Performance Appraisal     .1018 -.0393  .0499 -.0096 -.0320  .0128 -.2003 -.0096  .0508  .1375  .1600  [.1500]  [-.0874]  .0037 [.0447]  .3291 1.0000

Stepwise multiple regression analyses were carried out in order to determine which, if any, of the fifteen assessment centre dimensions would be predictive of the two criteria: the performance appraisal measure and the potentiality measure. Any fears of multicollinearity among the assessment centre dimensions were dispelled by the lack of high correlations - correlations between assessment centre dimensions ranged from .0269 to .7345 with an average correlation of .3646 for the Table V sample, and from .0266 to .7394 with an average correlation of .3503 for the Table VI sample.

The best predictors of a candidate's performance appraisal measure were found to be his ratings on the dimensions of Decisiveness, Oral Communication, and Practical Intelligence (see Table VII). As can be seen from the regression equation, the performance appraisal measure is positively related to ratings on Oral Communication and Practical Intelligence and negatively related to ratings on Decisiveness. While the inclusion of low decisiveness in the predictive function may seem, at first, surprising, it should probably be interpreted as "not high" (as opposed to low) decisiveness. High decisiveness connotes a rigid candidate, one who might make snap judgements with little information and be unable or unwilling to confirm or modify this judgement. Unless the candidate is always right, this tendency would hinder his performance as a constable. While the regression function of three dimensions can only account for approximately 17 percent of the variation in performance appraisal measures, it represents the best predictive combination of all fifteen dimensions. Employing all fifteen dimensions only increases the percentage of variation accounted for to 24.79 percent (Multiple R = .4979).
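The stepwise procedure referred to throughout is, in essence, forward selection on R². A minimal sketch follows (Python with numpy only; the fixed R² entry gain and all names are illustrative - the original analyses would presumably have used a statistics package with an F-to-enter criterion):

    import numpy as np

    def r_squared(X, y):
        # R-squared of an ordinary least squares fit of y on X,
        # with an intercept column prepended.
        A = np.column_stack([np.ones(len(y)), X])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

    def forward_stepwise(X, y, names, min_gain=0.01):
        # Greedy forward selection: at each step add the dimension that
        # most improves R-squared; stop when no candidate adds min_gain.
        chosen, best = [], 0.0
        while len(chosen) < X.shape[1]:
            gains = {j: r_squared(X[:, chosen + [j]], y)
                     for j in range(X.shape[1]) if j not in chosen}
            j = max(gains, key=gains.get)
            if gains[j] - best < min_gain:
                break
            chosen.append(j)
            best = gains[j]
        return [names[k] for k in chosen], best

Run against the 96 hired candidates' fifteen dimension ratings and the performance appraisal measure, a procedure of this kind would be expected to retain the three dimensions of Table VII.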
TABLE VII
MULTIPLE REGRESSION FUNCTION FOR PREDICTING PERFORMANCE APPRAISAL MEASURE

Predicted Performance Appraisal Measure = 28.306 - 2.350 Decisiveness Rating + 1.513 Oral Communication Rating + 1.134 Practical Intelligence Rating
Multiple R = .4118    R-Squared = .1696

The stepwise multiple regression of the assessment centre dimensions on the potentiality measure revealed that no combination of the fifteen assessment dimensions could significantly improve on the prediction of the potentiality measure by the two dimensions Oral Communication and Problem Confrontation. While these two dimensions only account for 4.86 percent of the variance in the potentiality measure (see Table VIII), it is interesting to note that all fifteen dimensions can only account for 11.5 percent of the variance (Multiple R = .3392).

TABLE VIII
MULTIPLE REGRESSION FUNCTION FOR PREDICTING POTENTIALITY MEASURE

Predicted Potentiality Measure = 1.7758 + .090 Oral Communication Rating - .087 Problem Confrontation Rating
Multiple R = .2204    R-Squared = .0486

Finally, as outlined under Data Analysis, the OAR itself was regressed on the fifteen assessment centre dimensions. The resultant regression equation is shown in Table IX. An examination of the regression equation shows that the OAR can be modelled quite accurately using a subset of the original dimensions. The regression equation explains 81.35 percent of the variation in the OAR with only nine of the original fifteen dimensions (Stress Tolerance, Interpersonal Tolerance, Interpersonal Sensitivity, Integrity, Practical Intelligence, Problem Confrontation, Initiative, Personal Impact, and Fact Finding).

TABLE IX
MULTIPLE REGRESSION FUNCTION FOR PREDICTING OVERALL ASSESSMENT RATING (OAR)

Predicted OAR = -4.692 + .253 Stress Tolerance Rating + .256 Interpersonal Sensitivity Rating + .306 Interpersonal Tolerance Rating + .205 Integrity Rating + .198 Practical Intelligence Rating + .184 Problem Confrontation Rating + .131 Initiative Rating + .095 Personal Impact Rating + .092 Fact Finding Rating
Multiple R = .9020    R-Squared = .8135

Examination of the above results provides a clue as to a possible reason for the low correlations between the OAR and the performance appraisal measure and between the OAR and the potentiality measure. The dimensions most predictive of performance (Decisiveness, Oral Communication, Practical Intelligence) and most predictive of potential (Oral Communication, Problem Confrontation) are not the same dimensions most predictive of the OAR. The three measures (performance appraisal, potential, and OAR) appear to be capturing different job capability dimensions. Also, assessors appear to be weighting job capability dimensions in determining a candidate's OAR in a manner quite different from that prescribed in Turner and Higgins' (1978) job analysis report. In illustration of this, the fifteen dimensions were regressed against the OAR and their Beta weights determined. These weights are metric-free and can be used in determining which dimensions are weighted more heavily than others.
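The "metric-free" weights referred to here are standardized regression coefficients: if b_j is the raw coefficient on dimension j, s_{x_j} its standard deviation, and s_y the standard deviation of the OAR, then

    \beta_j = b_j \, \frac{s_{x_j}}{s_y},

so each weight is expressed in standard deviation units and can be compared across dimensions measured on different scales. As a rough check using the Table II total-sample figures, the raw coefficient of .253 on Stress Tolerance in Table IX corresponds to \beta \approx .253 \times 1.214 / 1.568 = .196, close to the .1863 reported in Table X; the two differ because Table IX retains only nine dimensions while the Beta weights come from the full fifteen-dimension regression.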
Table X presents the dimensions in order of importance in determining OAR (as determined by multiple regression Beta weights) as opposed to the order of importance of the dimensions to job success (as determined by job analysis). As can be seen from Table X, the overlap is minimal - the Spearman rank correlation coefficient between the two rank orderings is only .3571. The biggest discrepancy lies with the three variables interpersonal tolerance, interpersonal sensitivity, and personal impact. While the job analysis indicates that these three variables are of relatively little importance to job success (ranked 11th, 12th, and 15th respectively), the regression analysis reveals that assessors are assigning them much greater importance in determining a candidate's OAR (ranked 1st, 2nd, and 8th respectively).

A further analysis of the assessment centre dimensions was carried out in an attempt to explicate any broad, underlying dimensions of assessment centre performance. The correlation matrix of all sixteen assessment centre ratings was factor analysed (principal component analysis), followed by a varimax rotation of factors. The number of factors was determined by mechanical application of the Kaiser-Guttman rule (ie. retain those factors with eigenvalues greater than or equal to one) as well as by a subjective analysis of interpretability. The chosen factorial solution (Table XI) yielded three clearly distinguishable factors accounting for 62.52 percent of the total rating variance. Factor I appears to be an overall activity and general effectiveness factor. It is defined by the dimensions of initiative, problem confrontation, decisiveness, practical intelligence, oral communication, stress tolerance, fact finding, personal impact, and ability to learn. The second factor might be labelled interpersonal effectiveness/competence and includes the interpersonal tolerance, interpersonal sensitivity, and flexibility dimensions. The third factor appears to be one of probity. It includes the dimensions of integrity and adherence to authority with a somewhat smaller loading on the written communication dimension.
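The extraction just described - principal components of the correlation matrix, Kaiser-Guttman retention, then varimax rotation - can be sketched as follows. This is a generic reconstruction, not the software actually used; the rotation routine is the standard SVD-based varimax algorithm.

    import numpy as np

    def varimax(loadings, max_iter=100, tol=1e-6):
        # Orthogonal varimax rotation of a loading matrix
        # (the standard SVD-based iteration).
        p, k = loadings.shape
        R = np.eye(k)
        var = 0.0
        for _ in range(max_iter):
            L = loadings @ R
            u, s, vt = np.linalg.svd(
                loadings.T @ (L**3 - L @ np.diag((L**2).sum(axis=0)) / p))
            R = u @ vt
            if s.sum() < var * (1 + tol):
                break
            var = s.sum()
        return loadings @ R

    def pca_varimax(corr):
        # Principal components of a correlation matrix, retaining
        # components with eigenvalues >= 1 (Kaiser-Guttman rule),
        # followed by varimax rotation of the retained loadings.
        eigvals, eigvecs = np.linalg.eigh(corr)
        order = np.argsort(eigvals)[::-1]
        eigvals, eigvecs = eigvals[order], eigvecs[:, order]
        keep = eigvals >= 1.0
        loadings = eigvecs[:, keep] * np.sqrt(eigvals[keep])
        return varimax(loadings), eigvals[keep]

Applied to the 16 x 16 correlation matrix of ratings, three components with eigenvalues above one (7.28, 1.58, and 1.14; see Table XI below) would be retained.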
TABLE X
RANK ORDERING OF ASSESSMENT CENTRE DIMENSIONS AS DETERMINED BY MULTIPLE REGRESSION AND JOB ANALYSES

DIMENSION                    BETA WEIGHT   REGRESSION RANK   JOB ANALYSIS RANK
Interpersonal Tolerance         .2021             1                 11
Interpersonal Sensitivity       .1990             2                 12
Stress Tolerance                .1863             3                  4
Integrity                       .1653             4                  2
Problem Confrontation           .1391             5                  3
Practical Intelligence          .1217             6                  1
Initiative                      .1002             7                  6
Personal Impact                 .0799             8                 15
Fact Finding                    .0726             9                  9
Ability to Learn                .0466            10                  5
Flexibility                     .0463            11                  8
Decisiveness                    .0357            12                  7
Written Communication           .0319            13                 13
Oral Communication              .0167            14                 10
Adherence to Authority          .0094            15                 14

r_s = .3571

TABLE XI
ROTATED FACTOR SOLUTION OF ASSESSMENT CENTRE RATINGS

VARIABLES                     FACTOR I   FACTOR II   FACTOR III
Initiative                      .7926       .1583       .1665
Problem Confrontation           .7835       .1263       .1615
Decisiveness                    .7796      -.0118       .0570
Practical Intelligence          .7777       .1623       .1523
Oral Communication              .7454       .1551       .1992
Stress Tolerance                .7410       .1066       .3115
OAR                             .7389       .3987       .3398
Fact Finding                    .7209       .1173       .0407
Personal Impact                 .7186       .2912       .2754
Ability to Learn                .7162       .1527       .0798
Interpersonal Tolerance         .3033       .7509       .1926
Interpersonal Sensitivity       .3671       .7406       .0757
Flexibility                    -.0310       .7293       .1375
Integrity                       .2279      -.0123       .8016
Adherence to Authority          .0919       .2190       .7705
Written Communication           .1065       .1376       .4893

Eigenvalues                    7.2830      1.5843      1.1352
% Variance Accounted For      45.5188      9.9018      7.0952
Cumulative Variance           45.5188     55.4206     62.5158

DISCUSSION

The results presented in the preceding section shed some light on a number of assessment centre concerns. They will be discussed here, in a logical order, with emphasis on their implications and limitations. The discussion will end with an overall conclusion as to the implications of this study with respect to the Police Constable Selection Program and to the assessment centre literature as a whole.

Predictive Validity of the OAR

Examination of the results leads one to conclude that the assessment centre has little predictive validity for the derived performance appraisal measure or the potentiality measure (r = .0784 and -.0083 respectively). The results are certainly far lower than the median criterion-related correlation of .37 reported by Cohen, Moses and Byham (1974) in a review article on the predictive accuracy of the overall assessment rating across 22 studies. However, it is questionable whether such insignificant coefficients would ever be reported in the literature. McGinnis and Carpenter (1980) do report that they could find little agreement between ratings of performance evaluation made by a candidate's supervisor and ratings in the assessment centre by the assessors, but it must be remembered that they were conducting a pilot study and had a sample of only twelve candidates.

Of all the academy grades, the only correlation of any significance is that of the OAR with Social Sciences. Given the emphasis in this course on interpersonal skills, this relatively high correlation is not surprising. The multiple regression and factor analyses (to be discussed later) reveal that a major component of a candidate's OAR is his interpersonal skills.
Therefore, someone with high interpersonal skills is likely to score higher on the OAR as well as be graded higher in the Social Sciences academy course. Both measures (OAR and Social Sciences) are tapping a candidate's ability to relate to and deal with other people.

With respect to the interpretation of validity coefficients, there are a number of methodological concerns that must be addressed. Validity coefficients are affected to some degree by four important factors: (i) criterion contamination, (ii) restriction of range, (iii) predictor reliability, and (iv) criteria reliability and validity. All four factors deserve mention in regard to this study.

(i) The possibility of criterion contamination is a major problem in any predictive study. If present, criterion contamination can cause validity coefficients to be inflated. While there is no way to directly determine the effect, if any, of criterion contamination in the present study, it is likely that the effect is negligible. Assessment and criteria ratings were made approximately one year apart and by different officers. Assessment data was utilized only by Personnel at the time of selection. Criteria ratings were made by field supervisors and academy instructors who, while probably able to access assessment data, would have little reason to do so. In all probability, criteria ratings were made in ignorance of a candidate's actual performance at the assessment centre.

(ii) Restriction of range occurs when the group under study is select: ie. the entire possible range of performance is not represented. This restriction of range phenomenon attenuates the validity coefficient. This study must acknowledge the effects of restricted range for two reasons. One reason is the lack of sample
Either way, when a l l candidates are rated favourably there i s l i t t l e or no d i s c r i m i n a t i o n possible and s i g n i f i c a n t p r e d i c t i v e v a l i d i t i e s are u n l i k e l y to be found, ( i i i ) For any predictor to be v a l i d i t must f i r s t be r e l i a b l e . Unfortunately, there has been no determination of the r e l i a b i l i t y of t h i s assessment centre process. The report which has been interpreted as a r e l i a b i l i t y study (Chamberlain, 1980) i s , i n f a c t , an examination of the importance of assessment centre dimensions f o r the o v e r a l l assessment r a t i n g . The author i n c o r r e c t l y concludes that the high c o r r e l a t i o n of a candidate's mean score across a l l dimensions with his OAR (r=.94) and the f a c t that the c o r r e l a t i o n s between each dimension and the OAR are s i m i l a r i n two d i f f e r e n t samples establishes FIGURE 4 PERCENTAGE OF CANDIDATES AT EACH LEVEL OF OVERALL ASSESSMENT RAGING 35h 30r-25h 20h 15r-lOh OAR (n) 4 (4) 5 (9) 6 (19) 7 (35) 8 (26) 9 (3) 57 the r e l i a b i l i t y of the dimensions. The two most common methods of assessing assessment centre r e l i a b i l i t y are i n t e r - r a t e r r e l i a b i l i t y and t e s t - r e t e s t r e l i a b i l i t y . I n ter-rater r e l i a b i l i t y measures how well the assessors agre with each other i n terms of independent judgements they make concerning a candidate's performance. This can be done by comparing assessor ratings on the OAR, or on each of the assessment dimensions or on s p e c i f i c exercises. Test-retest r e l i a b i l i t y measures whether or not the assessment centre produces s i m i l a r r e s u l t s over a period of time when the same group of candidates i s assessed by two d i f f e r e n t groups of assessors. A r e l i a b l e assessment centre w i l l ensure that a candidate's evaluation i n the centre does not depend on which centre he attends. There has, however, been no determination of e i t h e r type of r e l i a b i l i t y of the assessment centre process, and the data required for such study has not been kept i n any systematic manner. An important side issue with regard to assessment centre r e l i a -b i l i t y i s that of standardizing the centre's administration so that each candidate receives r e l a t i v e l y the same treatment. The need for standarization i n an assessment centre i s c r u c i a l because of the many sources of bias that may intrude on the process. Without such controls the assessment centre program i s l i k e l y to be contaminated, producing u n r e l i a b l e and i n v a l i d r e s u l t s . Two observed examples of p o t e n t i a l sources of u n r e l i a b i l i t y i n the assessment centre under study follow: - d i f f e r e n t exercise i n s t r u c t i o n s : an assessor i n the group discussion was observed recommending to h i s group of candidates that someone be assigned to keep time - i n another group of candidates, assessors were watching for someone to spontaneously e x h i b i t t h i s 58 behaviour as evidence of a number of dimensions. - assessors r o l e playing i n d i f f e e n t manners: assessors were observed playing the same roles d i f f e r e n t l y i n terms of both temperament and ease of information dissemination. These two examples i f l e f t unstandardized across assessment centres w i l l lead to questionable r e l i a b i l i t y of r e s u l t s . 
Standardization i s necessary, though not s u f f i c i e n t , for r e l i a b i l i t y . Attention to the d e t a i l s of standardization should f a c i l i t a t e the improvement of r e l i a b i l i t y and v a l i d i t y of an assessment centre (Cohen, 1978 a,b). ( i v ) Also of importance i s the f a c t that the r e l i a b i l i t i e s of the c r i t e r i a measures are unknown; as are t h e i r v a l i d i t i e s as i n d i c a t o r s of constable performance. Unreliable and i n v a l i d c r i t e r i a reduce the v a l i d i t y c o e f f i c i e n t s and make i n t e r p r e t a t i o n of those c o e f f i c i e n t s d i f f i c u l t . The r e s u l t s of t h i s research are only as meaningful as the discriminatory powers and relevance of the c r i t e r i o n measures. Although the assessment centre OAR has f a i l e d to s i g n i f i c a n t l y predict any of the performance c r i t e r i a (performance appraisal measure, p o t e n t i a l i t y measure, and academy grades), i t i s hard to say whether t h i s i s the f a u l t of the assessment centre or of the c r i t e r i a used. However, given the nature of the performance appraisal form used as c r i t e r i a , i t would have been s u r p r i s i n g had the performance ratings correlated with anything. The r e l i a b i l i t y and v a l i d i t y of the form i s questionable and i t s general inadequacy as an a p p r a i s a l form has been acknowledged by the department with the introduction of a new form. There i s very l i t t l e overlap between the old form and the new one. i t would be i n t e r e s t i n g to see how w e l l the assessment centre OAR predicts performance as measured by the new appraisal form. 59 If the new form i s of greater r e l i a b i l i t y and/or v a l i d i t y than the old one, one would expect a larger v a l i d i t y c o e f f i c i e n t f o r the assessment centre than obtained here. Because of the above methodological considerations, i t i s d i f f i c u l t to determine the accuracy of the obtained v a l i d i t y c o e f f i -c i e n t s . Assuming n e g l i g i b l e c r i t e r i o n contamination e f f e c t s , i t i s probable that the obtained v a l i d i t y c o e f f i c i e n t s have been attenuated by the e f f e c t s of r e s t r i c t e d range. Since v a l i d i t y c o e f f i c i e n t s are reduced by a lack of r e l i a b i l i t y i n the predictor and/or c r i t e r i a , examination of r e l i a b i l i t i e s might also help explain the low predictor - c r i t e r i a c o r r e l a t i o n s obtained i n t h i s study. It i s quite probable that more care i n the i d e n t i f i c a t i o n and c o l l e c t i o n of appropriate c r i t e r i a would also undoubtedly increase the obtained v a l i d i t y c o e f f i c i e n t s . Examining the assessment centre l i t e r a t u r e , i t appears c l e a r that the more accurate predictions were obtained where the performance to be predicted was c l e a r l y defined, the assessment r e s u l t s did not r e s t r i c t the range of subsequent performance, and the c r i t e r i a measures employed were not l i m i t e d by low r e l i a b i l i t y and questionable v a l i d i t y . Pre vs. Post Assessment P o l i c e Candidates The comparison of the experimental and control groups was an attempt to compare the effectiveness of the assessment centre method with the t r a d i t i o n a l methods of s e l e c t i o n used by the department without access to the assessment centre. 
Results appear to show that since the assessment centre has been employed, p o l i c e candidates are more highly rated i n terms of t h e i r p o t e n t i a l as a constable than were r e c r u i t s hired just previous to the assessment centre. There i s , 60 however, no diffe r e n c e between the two groups with respect to the performance appraisal measure, i t has been suggested by assessment centre personnel that experimental group candidates are, i n f a c t , outperforming the co n t r o l group candidates. The normative nature of the performance appraisal measure, however, i s obscuring any mean di f f e r e n c e s . This i s not operating on the p o t e n t i a l i t y measure and i t , therefore, could be a more accurate r e f l e c t i o n of i n d i v i d u a l differences i n performance. The above findings are s i m i l a r to those reported by Campbell and Bray (1967) who found that considerably more of the assessed group were rated as having higher future p o t e n t i a l than the non-assessed group and yet were rated only s l i g h t l y higher i n performance. Campbell and Bray submit that the diffe r e n c e i n the r e s u l t s f o r performance and p o t e n t i a l suggest that the s k i l l s measured at the assessment centre are more important to higher job l e v e l s than that l e v e l i n which the candidate's performance had been measured. Results of t h i s comparative analysis also reveal a s i g n i f i c a n t d i f f e r e n c e between the two groups i n terms of performance at the academy. Recruits hired subsequent to the assessment centre are outperforming r e c r u i t s hired p r i o r to the assessment centre. This d i f f e r e n c e i s most pronounced i n the Investigation and P a t r o l course. The r e s u l t s of the experimental vs. co n t r o l group analysis appear to demonstrate the greater effectiveness of the assessment centre as compared with the department's more t r a d i t i o n a l methods of s e l e c t i o n and suggests that the centre has some v a l i d i t y , i t i s possible that because of the assessment centre the department i s getting "better" performers i n terms of i n t e l l e c t u a l a b i l i t y . While t h i s may not show up i n current performance appr a i s a l s , supervisors could be picking up on t h i s i n t h e i r ratings of p o t e n t i a l i t y and i t i s manifested i n higher grades at the academy. This i n t e r p r e t a t i o n , however, must be approached cautiously. The r e s u l t s could be due to h i s t o r i c a l e f f e c t s (eg. a s o c i o l o g i c a l trend towards more educated, "better" candidates applying to the f o r c e ) , inadequate group matching (eg. the c o n t r o l group chosen was not representative of those r e c r u i t s hired p r i o r to the centre but, rather, represented the poorer performers) and/or the halo e f f e c t s of assessment at the centre (eg. supervisors since the inception of the centre have been expecting the "best" candidates and have rated them accordingly). Of the three, the p o s s i b i l i t y of halo e f f e c t s due to assessment at the centre seems most l i k e l y . There i s l i t t l e reason to believe that the groups have been inad-equately matched and while there i s a trend towards more educated r e c r u i t s , i t has been a gradual trend extending over a period much longer than that covered i n t h i s study. 
OAR and Assessment Centre Dimensions

The regression of the OAR on the assessment centre dimensions indicated that the assessors relied mostly on evidence of stress tolerance, interpersonal skill (interpersonal sensitivity and interpersonal tolerance), problem solving ability (practical intelligence, problem confrontation, and fact finding) and impact (initiative, integrity, and personal impact) to make their overall assessment rating. Assessors, not surprisingly, do not use all available information in reaching the overall rating - a finding confirmed in studies by Mitchell (1975), Ross (1980), Sackett and Hakel (1979) and Schmitt (1977). This is particularly significant in light of the fact that assessor training emphasizes the importance of all dimensions.

Given the results of the regression analysis it is not surprising that the OAR did not correlate with either the performance appraisal measure or the potentiality measure of this study. The performance appraisal measure (as revealed in Table VII) appears to be a function of oral communication, practical intelligence, and decisiveness. The potentiality measure is best predicted by the dimensions of oral communication and problem confrontation. The three measures (OAR, performance appraisal and potential) are all capturing different assessment centre dimensions.

The reliance on interpersonal skills in determining a candidate's OAR is quite prevalent in the literature (Hinrichs, 1969; Ross, 1980; Sackett and Hakel, 1979; Schmitt, 1977; Wilson and Tatge, 1973). Other dimensions reported to significantly influence the OAR are organizing and planning, decision making, and leadership. Most of this research, however, has come from managerial assessment centres in which there is little dimensional overlap with the centre of this study. The only other police study (Ross, 1980) reported that, besides interpersonal skills, the ability to effectively communicate orally and in writing determines a candidate's OAR. The police officers involved, however, were of a rank higher than police constable.

While it is not possible to claim an isomorphism between the regression equation developed in this study and the actual decision processes used by the assessors, the regression equation represents a model that closely resembles the assessors' results. Only 19 percent of the variance in OAR cannot be accounted for with the variables of the regression equation.

Assuming for the moment that regression analysis can model the weighting process that assessors use in deriving a candidate's OAR, it becomes interesting to compare this model with the rated importance of the assessment centre dimensions for job performance as determined in the job analysis study (Turner and Higgins, 1978). If the comparison is favourable (ie.
assessors are weighting dimensions i n determining OAR s i m i l a r l y to t h e i r e m p i r i c a l l y determined weighting with respect to job success) one should expect higher p r e d i c t i v e v a l i d i t i e s than obtained i n t h i s study and doubt would be cast on the adequacy of the performance c r i t e r i a employed. If the comparison was not favourable, there could be some doubt as to whether the assessment centre OAR, as modelled, by the regression equation, i s a su i t a b l e predictor f o r job performance. Results indicated that the comparison was less than favourable. The rankings of assessment centre dimensions i n order of importance i n determining OAR (as determined by multiple regression analysis) correlated only .357 with the rankings of dimensions i n order of importance to job success (as determined by the job analysis r e p o r t ) . Assessors are r e l y i n g more heavily on interpersonal s k i l l s and personal impact i n determining a candidate's OAR than the job analysis report contends t h e i r importance to job success to be. This w i l l tend to weaken any c o r r e l a t i o n between a candidate's OAR and a measure of hi s job performance (no matter how r e l i a b l e and v a l i d the job performance measure i s ) . Factor Analysis of Assessment Centre Ratings With so many ratings produced i n the assessment centre (OAR plus 15 demensions), the p r o b a b i l i t y of redundancy i s quite high. Factor 64 analysis reveals any broad underlying f a c t o r s . The r e s u l t s of the factor analysis i n t h i s study suggests that the 16 assessment centre ratings a c t u a l l y r e f l e c t only three underlying factors (see the rotated f a c t o r matrix i n Table XI). The factors were defined as an o v e r a l l a c t i v i t y and general effectiveness f a c t o r , an interpersonal effectiveness f a c t o r , and a probity f a c t o r . As done here, a number of other factor a n a l y t i c studies have been conducted i n e f f o r t s to explicate the basic elements of performance (Bray, Campbell and Grant, 1974; Bray and Grant, 1966; Hinrichs, 1969; Huck and Bray, 1976; Sackett and Hakel, 1979; Schmitt, 1977). While each analysis y i e l d s some factors dependent on the p a r t i c u l a r dimensions rated i n the assessment centre program under study, there are also a number of recurring factors throughout the studies. These factors include o v e r a l l a c t i v i t y and general e f f e c t -iveness, organizing and planning/administering, interpersonal competence, cognitive competence/intelligence, work o r i e n t a t i o n / d r i v e , and personal c o n t r o l . Thus the two f a c t o r s , o v e r a l l a c t i v i t y and general ef f e c t i v e n e s s , and interpersonal e f f e c t i v e n e s s , as found i n t h i s study are quite common to other assessment programs. The t h i r d f a c t o r , probity, however, r e f l e c t s the unique nature of the Police Constable Selection Program i n i t s need to determine a candidate's honesty and uprightness. This Is not something that dimensions i n managerial assessment centres are designed to tap. Conclusion This study represented one of the few e f f o r t s to conduct a c r i t e r i o n - r e l a t e d v a l i d i t y study of law enforcement assessment 65 centres. Results indicated that p r e d i c t i v e v a l i d i t i e s obtained were extremely low. 
However, problems with r e s t r i c t e d range, questionable assessment centre r e l i a b i l i t y , poor assessment centre standardization, and c r i t e r i a of unknown r e l i a b i l i t y and v a l i d i t y were acknowledged. Weak evidence was presented for concluding that candidates processed through the centre are "better" r e c r u i t s than those hired previous to the centre - at l e a s t with respect to academy grades and rated p o t e n t i a l i t y . Again, possible problems with the research design were reported. The assessors' determination of a candidate's OAR was modelled with nine of the f i f t e e n assessment centre dimensions. The model was compared to the importance of job c a p a b i l i t y dimensions as reported i n the job analysis study conducted p r i o r to the establishment of the centre. Results indicated that assessors were weighting interpersonal s k i l l s and personal impact heavily i n determining a candidate's OAR. Factor analysis of the assessment centre ratings revealed three broad underlying f a c t o r s : o v e r a l l a c t i v i t y and general e f f e c t i v e n e s s , interpersonal e f f e c t i v e n e s s , and probity. While the o v e r a l l a c t i v i t y and general effectiveness and interpersonal effectiveness factors can be found i n other assessment centres, the probity factor i s seen as a unique dimension of the P o l i c e Constable Selection Program. The f a i l u r e of t h i s study to demonstrate p r e d i c t i v e v a l i d i t y f o r the assessments centre's OAR should not mean foreclosure of the centre. Rather, i t should provide the impetus for further examination of the assessment centre procedure. While the procedure i s suspect, so are the c r i t e r i a measures employed. If the dimensions being assessed i n the assessment centre can be found to have reasonable r e l i a b i l i t i e s , a t t e n t i o n should be turned to obtaining a r e l i a b l e and 66 v a l i d measure of performance as a constable on which to, once again, v a l i d a t e the OAR. This may involve merely waiting for s u f f i c i e n t numbers of candidates to have been rated on the new performance appraisal measure introduced at the department or may involve the construction and introduction of an even newer performance measure. If the assessment centre i s to be validated against performance as a constable, a comparable amount of time and energy to that expended on the establishment of the centre must be devoted to the e s t a b l i s h -ment of performance c r i t e r i a . A centre of high face v a l i d i t y and presumably content v a l i d i t y (due to the extensiveness of the job analysis) must s t i l l show p r e d i c t i v e v a l i d i t y i n order to ensure that i t i s meeting i t s objective - that of i d e n t i f y i n g the "better" r e c r u i t applicant. As a f i n a l note, the question of the g e n e r a l i z a b i l i t y of r e s u l t s must be addressed. It would not be appropriate to view the r e s u l t s of the study as applicable to a l l s e l e c t i o n assessment centres. Nor, would i t be apropriate to view t h i s study as a case study providing u s e f u l information only to the parent organization. The study should be treated as one a d d i t i o n a l source of information to which the assessment centre user/researcher can r e f e r i n making choices about the use of the assessment centre method. 
The assessment centre under study here represented a departure from the prevalent managerial type of assessment centre, and yet it was still possible to find result similarities. It is only with the accumulation of research findings from a variety of centres that the nature of the assessment centre technique, in law enforcement and in general, can be established.

BIBLIOGRAPHY

Alexander, L.D. "An Exploratory Study of the Utilization of Assessment Center Results." Academy of Management Journal 22 (March 1979): 152-157.

Bender, J.M. "What is Typical of Assessment Centers?" Personnel 50 (July/August 1973): 50-57.

Blumenfeld, W.S. "Early Identification of Managerial Potential by Means of Assessment Centers." Atlanta Economic Review 21 (December 1971): 35-38.

Bozza, C.M., and Umshied, S.L. "The Assessment Process: Test of the Future." Police Chief 47 (December 1979): 46-51, 85.

Bray, D.W., and Campbell, R.J. "Selection of Salesmen by Means of an Assessment Centre." Journal of Applied Psychology 52 (February 1968): 36-41.

Bray, D.W.; Campbell, R.J.; and Grant, D.L. Formative Years in Business: A long-term A.T.&T. study of managerial lives. New York: John Wiley & Sons, 1974.

Bray, D.W., and Grant, D.L. "The Assessment Center in the Measurement of Potential for Business Management." Psychological Monographs 80, Whole Number 625 (1966).

Brown, G.E. "What You Always Wanted to Know About Assessment Centers But Were Afraid to Ask." Police Chief (June 1978): 60-67.

Buracker, C.D. "The Assessment Center: Is it the Answer?" F.B.I. Law Enforcement Bulletin (February 1980): 12-16.

Byham, W.C. "Assessment Centers for Spotting Future Managers." Harvard Business Review 48 (July/August 1970): 150-160.

________. "The Assessment Center as an Aid in Management Development." Training and Development Journal 25 (December 1971): 10-22.

________. "Application of the Assessment Centre Method." in Applying the Assessment Center Method, eds. J.L. Moses and W.C. Byham. Toronto: Pergamon Press, Inc., 1977.

________. "Help Managers Find the Best Candidate for the Job with Assessment Center Techniques." Training 16 (November 1979): 64-73.

________. "Starting an Assessment Center the Correct Way." Personnel Administrator 25 (February 1980): 27-32.

Byham, W.C., and Thornton, G.C. "Assessment Centers: A New Aid in Management Selection." Studies in Personnel Psychology 2 (1970): 21-35.

Byham, W.C., and Wettengel, C. "Assessment Centers for Supervisors and Managers." Public Personnel Management 3 (May 1974): 352-364.

Campbell, R.J., and Bray, D.W. "Assessment Centers: An Aid in Management Selection." Personnel Administration 30 (March/April 1967): 6-13.

Cascio, W.F., and Silbey, V. "Utility of the Assessment Center as a Selection Device." Journal of Applied Psychology 64 (April 1979): 107-118.

Chamberlain, L. "An Initial Study of the Police Constable Selection Program." British Columbia Police Academy, unpublished manuscript, 1980.

Cohen, B.M.; Moses, J.L.; and Byham, W.C. The Validity of Assessment Centers: A literature review, Monograph II. Pittsburgh: Development Dimensions Press, 1974.

Cohen, S.L. "Standardization of Assessment Centre Technology: Some Critical Concerns." Journal of Assessment Center Technology 1 (1978a): 1-10.

________. "How Well Standardized is Your Organization's Assessment Center?" Personnel Administrator 23 (December 1978): 41-51.
"The Bottom Line on Assessment Center Technology." Personnel Administrator 25 (February 1980): 50-56. ._ " V a l i d i t y and Assessment Center Technology: One and the Same?" Human Resource Management 19 (Winter 1980): 2-11. Crnbach, L.J. " C o e f f i c i e n t Alpha and the Internal Structure of Tests." i n P r i n c i p l e s of Educational and Psychological Measure- ment: A book of selected readings, ed.s W.A. Mehrens and R.L. Ebel. Chicago: Rand McNally, 1967: 132-167. Dicken, C.F., and Black, J.D. "Predictive V a l i d i t y of Psychometric Evaluations of Supervisors." Journal of Applied Psychology 49 (1965): 34-37. Dodd, W.E. " W i l l Management Assessment Centers Ensure Se l e c t i o n of the Same Old Types?" Proceedings, 78th Annual Convention,  American Psychological Association (1970): 569-570. Driggs, D., and Whisenand, P.M. "Assessment Centers: S i t u a t i o n a l Evaluation." C a l i f o r n i a Law Enforcement 10 ( A p r i l 1976): 131-136. 69 Dunnette, M.D. "The Assessment of Managerial Talent." i n Advances  i n Psychological Assessment, ed. P. McReynolds. Palo A l t o : Science and Behavior Books, 1971. Eisenberg, T. "An Examination of Assessment Center Results and Peer Ratings." P o l i c e Chief (January 1980): 46-47. Fi n k l e , R. "Management Assessment Centers." i n Handbook of I n d u s t r i a l  and Organizational Psychology, ed. M. Dunnette. Chicago: Rand McNally, 1976. Francis, L.T. "The Assessment Center i n Po l i c e Selection." P o l i c e  Journal (May 1975): 4-14. Grant, D.L., and Bray, D.W. "Contributions of the Interview to Assessment of Management P o t e n t i a l . " Journal of Applied  Psychology 53 (1969): 24-35. Grant, D.L.; Katkovsky, W.; and Bray, D.W. "Contributions of Pro j e c t i v e Techniques to Assessment of Management P o t e n t i a l . " Journal of Applied Psychology 51 (1967): 226-232. Greenwood, J.M., and McNamara, W.J. "Interrater R e l i a b i l i t y i n S i t u a t i o n a l Tests." Journal of Applied Psychology 31 ( A p r i l 1967): 101-106. Hinrichs, J.R. "Comparison of "Real L i f e " Assessments of Managerial P o t e n t i a l with S i t u a t i o n a l Exercises, Paper and Pen c i l A b i l i t y Tests, and Personality Inventories." Journal of Applied  Psychology 53 (October 1969): 425-432. • "An Eight Year Follow-up of a Management Assessment Center." Journal of Applied Psychology 63 (October 1978): 596-601. Hinrichs, J.R., and Haanpera, S. " R e l i a b i l i t y of Measurement i n S i t u a t i o n a l Exercises - An Assessment of the Assessment Center Method." Personnel Psychology 29 (Spring 1976): 31-40. Howard, A. "Assessment of Assessment Centers." Adademy of Management  Journal 17 (March 1974): 132-134. "How to Spot the Hotshots." Business Week 2606 (October 8, 1979): 62, 67-68. Huck, J.R. "Assessment Centers: A Review of the External and Internal V a l i d i t i e s . " Personnel Psychology 26 (Summer 1973): 191-212. Huck, J.R., and Bray, D.W. "Management Assessment Center Evaluations and Subsequent Job Performance of White and Black Females." Personnel Psychology 29 (Spring 1976): 13-30. Jaffee, C.L.; Bender, J . ; and Calvert, 0. "The Assessment Center Technique: A V a l i d a t i o n Study." Management of Personnel  Quarterly 9 ( F a l l 1970): 9-14. 70 Jaffee, C.L.; Frank, F.D.; and R o l l i n s , J.B. "Assessment Centres -The New Method for Selecting Managers." Human Resource  Management 15 (Summer 1976): 5-11. Jaf f e e , C.L., and Frank, F.D. 
"Assessment Centres: Premises, P r a c t i c a l i t i e s and Projections for the Future." Management  International Review 18 (March 1978): 45-53. Kennedy, P.E. "The Assessment Center: The Best Promotional Selection Device?" Firehous (March 1982): 14-16. Kent, D.A.; Wall, C.R.; and Bailey, R.L. "Assessment Centers: A New Approach to Po l i c e Personnel Decisions." P o l i c e Chief (June 1974): 72-77. Klimoski, R.J., and St r i c k l a n d , W.J. "Assessment Centers - V a l i d or Merely Prescient?" Personnel Psychology 30 (Autumn 1977): 353-361. Kraut, A.I. "A Hard Look at Management Assessment Centers and Their Future." Personnel Journal 51 (May 1972): 317-326. ._ "New Fron t i e r s f o r Assessment Centers." Personnel 53 (June 1976): 30-38. Kraut, A.I., and Scott, G.J. " V a l i d i t y of an Operational Management Assessment Program." Journal of Applied Psychology 56 ( A p r i l 1972): 124-129. L i t t l e , A. "Assessment Center or Development Center?" Personnel Management 6 (March 1974): 28-31. MacKinnon, D.W. "An Overview of Assessment Centers." Center for Creative Leadership, Technical Report Number 1, Grensboro, N.C. May 1975. McGhee, A.L., and Deen, M.E. " U t i l i z i n g the Assessment Center to Select P o l i c e O f f i c e r s f o r Ocala, F l o r i d a Department." Po l i c e  Chief 46 (August 1979): 69-74. McGinnis, J.H., and Carpenter, G.J. "The Canadian Police College P i l o t Municipal Force Assessment Centre." Canadian P o l i c e  College Journal 4 (1980): 1-31. McNemar, Q. Psychological S t a t i s t i c s New York: John Wiley & Sons, Inc. 1962: p. 192. M i l l a r d , C.W., and Pinsky, S. "Assessing the Assessment Center." Personnel Administrator 25 (May 1980): 85-88. M i t c h e l l , J.0. "Assessment Center V a l i d i t y : A Longitudinal Study." Journal of Applied Psychology 60 (October 1975): 573-579. Moses, J.L. "Assessment Center Performance and Management Progress. Studies i n Personnel Psychology 4 (1972): 7-12. 71 "The Development of an Assessment Centre for the Early i d e n t i f i c a t i o n of Supervisory P o t e n t i a l . " Personnel Psychology 26 (Winter 1973): 569-581. Moses, J.L., and Boehm, V.R. "Relationship of Assessment Center Performance to Management Progress of Women." Journal of  Applied Psychology 60 (August 1975): 527-529. Moses, J.L., and Byham, W.C, ed.s, Applying the Assessment Center  Method. Toronto, Onterio: Pergamon Press Inc., 1977. Parker, T.C. "Assessment Centers: A S t a t i s t i c a l Study." Personnel  Administrator 25 (February 1980): 65-67. Pomerleau, R. " I d e n t i f i c a t i o n Systems. The Key to E f f e c t i v e Manpower Planning." Personnel Journal 52 (June 1973): 434-441. Quigley, R.C "Management Aptitude Program. The F.B.I. Assessment Center." F.B.I. Law Enforcement B u l l e t i n (June 1976): 1-9. Ross, J.D. "A Current Review of Public Sector Assessment Centers: Cause for Concern." Public Personnel Management 8 (January/ February 1979): 41-46. "Determination of the Pred i c t i v e V a l i d i t y of the Assessment Center Approach to Selecting Police Managers." Journal of  Criminal J u s t i c e 8 (1980): 89-96. Sackett, P.R., and Hakel, M.D. "Temporal S t a b i l i t y and Individual Differences i n Using Assessment Information to Form Ov e r a l l Ratings." Organizational Behaviour and Human Performance 23 (February 1979): 120-137. Sanchez, W. "The Assessment Center Process." P o l i c e Chief 48 (February 1981): 49-50. Schmitt, N. 
"Interrater Agreement i n Dimensionality and Combination of Assessment Center Judgements." Journal of Applied Psychology 62 ( A p r i l 17=977): 171-176. Skoff, E.J. "Assessing Managerial P o t e n t i a l . " Datamation 21 (August 1975): 37-40. Sl e v i n , D.P. "The Assessment Center: Breakthrough i n Management Appraisal and Development." Personnel Journal 57 ( A p r i l 1972): 255-261. Third International Congress on the Assessment Center Method: Standards and E t h i c a l Considerations f or Assessment Center  Operations. Quebec, Canada, May 1975. Thomson, H.A. "Comparison of Predictor and C r i t e r i o n Judgements of Managerial Performance Using the M u l t i - t r a i t Multi-method Approach." Journal of Applied Psychology 54 (December 1970): 496-502. 72 Turner, T.S. "The Assessment Center Program: A New Way of Matching the Right Person to the Right Job." B.C. P o l i c e Journal (Autumn 1978): 12-16. Turner, T.S., and Higgins, K. " I n i t i a l S election Assessment Centre Program: Police Constable Job Analysis Report." B r i t i s h Columbia Police Commission, unpublished manuscript. Ungerson, B. "Assessment Centers; A Review of Research Findings." Personnel Review 3 (Summer 1974): 4-13. Van Kirk, M. "Selection of Sergeants." F.B.I. Law Enforcement  B u l l e t i n (March 1975): 12-15. Wilson, J.E., and Tatge, W.A. "Assessment Centers - Futher Assessment Needed?" Personnel Journal 52 (March 1973): 172-179. Wollowick, H.B., and McNamara, W.J. "Relationship of the Components of an Assessment Center to Management Success." Journal of  Applied Psychology 53 (October 1969): 348-352. Worbois, G.M. "Va l i d a t i o n of E x t e r n a l l y Developed Assessment Procedures for i d e n t i f i c a t i o n of Supervisory P o t e n t i a l . " Personnel Psychology 28 (Spring 1975): 77-91. Yager, E. "Assessment Centers - The Latest Fad." Training and  Development Journal 30 (January 1976): 41-44. Zemke, R. "Using Assessment Centers to Measure Management P o t e n t i a l . " Training 17 (March 1980): 23-26, 30-34. 73 APPENDIX 1 DESCRIPTION OF THE ASSESSMENT CENTRE OPERATION The Assessment centre system was introduced to the Police Academy i n 1978. While the centre currently operates r e g u l a r l y at the Recruit, Junior and Senior management l e v e l s , t h i s study focuses only on the I n i t i a l S election Assessment Centre Program. The purpose of th i s program i s to provide information to the p a r t i c i p a t i n g p o l i c e departments about the a b i l i t i e s , s k i l l s and p o t e n t i a l of i n d i v i d u a l s being considered as p o l i c e constables. In the three years following i t s inception approximately 466 candidates from ten p o l i c e departments have attended. The candidates f o r the centre are pre-screened at the depart-mental l e v e l and the de c i s i o n as to which i n d i v i d u a l s p a r t i c i p a t e i n the assessment centre program i s made by the i n d i v i d u a l p o l i c e departments. Approximately one out of 35 applicants go on to the assessment centre. If s u f f i c i e n t candidates are a v a i l a b l e , the department may request that an assessment centre be held or the candidates may be included i n t o already scheduled centres with candidates from other departments. Assessors are incumbent p o l i c e personnel holding a p o s i t i o n generally two l e v e l s higher than the r e c r u i t s . 
The participating departments are obliged to identify potential assessors that meet this basic criterion if they are having candidates assessed. Assessors receive four days of training in order to prepare them for participation in a centre. Their participation, however, depends on their availability for the centres scheduled.

The centre is administered by two high level officers of the police community who serve at the centre for a period of three years. They are responsible for the day to day operation of the centres, the scheduling and co-ordination of assessors and candidates, assessor discussions and any feedback interviews. The high organizational ranking of the administrators, an acknowledgment of several years of administrative and police experience, is believed to give greater credence to the assessment centre operation.

The recruit level assessment centres are held regularly. There are usually twelve assessees, six assessors and, at least, one administrator at each centre. The assessment centre lasts one day for the candidates and three days for the assessors. The typical assessment centre experience would be as follows:

DAY 1

Prior to arriving at the centre, candidates complete a biographical data sheet and the Davis Reading Test. Once at the centre, candidates are introduced to the administrators and assessors and given a timetable of exercises and room numbers to follow. One candidate's timetable, with appended exercise descriptions, could take the following format:

TIME           EXERCISE

8:30 - 9:30    Group Discussion - a leaderless group discussion involving half of the participants in which a number of crime related concerns are to be ranked as to their importance/relevance to a police department; one assessor for two candidates.

9:45 - 10:30   Interview - an in-depth personal interview with an assessor which examines past educational and work experience as well as career expectations, work standards and motivation.

10:45 - 11:30  Interpersonal Behaviour - a written exercise in which the candidate outlines his/her action in response to a number of hypothetical interpersonal situations; assessed by one of the administrators.

11:30 - 12:30  Lunch

12:30 - 13:30  Observation - an exercise in which the candidate is given a short period of time to observe a number of objects supposedly found by police in a suspect's hotel room. After the period of observation, the candidate is required to complete an exercise which demands recall of the items as well as the formulation of hypotheses regarding the person who must have occupied the room; assessed by one of the administrators.

13:30 - 14:00  Break

14:00 - 14:30  Fact Finding - in this exercise two assessors are present; one role-plays a citizen who has called the police to complain about an incident. The candidate is given a period of time in which to question the complainant, after which he must decide the facts of the case and formulate an action plan. The candidate's decision as to a course of action must then be defended under questioning by the assessors.
14:45 - 15:15  Oral Communication - an exercise which requires the candidate to assume the role of a constable who must respond to three separate radio calls (as simulated on a tape). Each call requires the candidate to interact with a member of the public (role-played by the assessor).

16:00 - 16:30  Debriefing of candidates by the administrators.

DAY 2

The assessors work at home, going over the notes they made during the assessment exercises, and prepare exercise reports on the candidates observed. Each candidate is rated on each of the applicable dimensions for the exercises in which he was observed. These ratings must be supported with observed behavioural data.

DAY 3

All of the assessors and administrators meet to discuss the candidates. Typically two assessor discussion groups are held simultaneously, each chaired by one of the two administrators. Considering all the observations of behaviour during the centre, assessors agree on an evaluation of the candidate's strengths and weaknesses relative to each of the dimensions assessed. After this profile has been developed, the assessors individually, and then as a group, evaluate the overall potential of the candidate. A written summary report of the centre results for each candidate is completed by the assessment centre administrator.

The assessment centre report contains the assessors' rating of the candidate, along with substantiating examples of actual assessment centre exercise behaviour, for each of the fifteen assessment centre dimensions. These dimensions were identified and defined in Turner and Higgins' "Police Constable Job Analysis Report: Initial Selection Assessment". They include: Practical Intelligence, Integrity, Problem Confrontation, Stress Tolerance, Ability to Learn, Initiative, Decisiveness, Flexibility, Fact Finding/Observational Skills, Oral Communication, Interpersonal Tolerance, Interpersonal Sensitivity, Written Communication, Adherence to Authority, and Personal Impact.

The report also contains the assessors' overall rating of the candidate as to his potential as a police constable. Two copies of the report are made. One copy is retained by the assessment centre administration for feedback and research purposes, while the other copy is remitted to the police department which sent the candidate to the centre. It is the responsibility of the department to transmit the results to the candidate. The decision whether or not to hire a candidate rests entirely with the department, as does the weighting placed on the assessment centre report in making that decision.

APPENDIX 2
DEFINITION OF THE ASSESSMENT CENTRE DIMENSIONS

1. Practical Intelligence
Ability to quickly analyse the key elements of a situation or problem, to identify and evaluate possible courses of action and to reach logical conclusions; judgement or common sense.

2. Integrity
Ability to demonstrate adherence to the values of honesty and trustworthiness. A capacity to resist temptations of an unethical or illegal nature.

3. Problem Confrontation
Ability to assert oneself and deal with a potentially unpleasant or dangerous situation.
4. Stress Tolerance
Ability to maintain composure and performance while under stress.

5. Ability to Learn
Ability to assimilate and apply new information.

6. Initiative
Ability to actively influence events rather than passively accepting them; self-starting. Originates actions rather than just responding to events.

7. Decisiveness
Readiness to make decisions, to render judgements, to take action or commit oneself.

8. Flexibility
Ability to modify behavioural style, to adjust to changing social values and to adapt to changing work responsibilities and methods.

9. Fact Finding/Observational Skills
Ability to identify, gather and recall relevant facts and details about an incident, situation or problem.

10. Oral Communication Skills
Ability to express and listen to ideas, feelings, questions and facts in both individual and group situations.

11. Interpersonal Tolerance
Ability to maintain composure and performance while interacting with individuals of different backgrounds, personalities, attitudes, opinions and values.

12. Interpersonal Sensitivity
Ability to react sensitively, to be empathic, compassionate and sincere and to communicate tactfully.

13. Written Communication Skills
Ability to express ideas, feelings and facts in writing in good grammatical form.

14. Adherence to Authority
Willingness to comply with legal departmental regulations, policies and orders.

15. Personal Impact
Ability to project a good first impression, to command attention and respect, to show an air of confidence and to achieve personal recognition. The factors contributing to impact are appearance, grooming, demeanor and speech.

APPENDIX 3
DESCRIPTION OF POLICE ACADEMY COURSES

LEGAL STUDIES
A basic introduction to constitutional structure and lawmaking; the Criminal Code of Canada, its interpretation, application and procedures; obtaining, preserving and presenting of evidence; and the laws of evidence and court procedures.

TRAFFIC STUDIES
Provincial and Federal traffic laws and their enforcement; accident investigation; safe operation of emergency vehicles under all circumstances, including defensive driving practices.

APPLIED SOCIAL SCIENCE
The program is designed to develop an understanding of society; sociological and psychological causations of criminal behaviour; interpersonal communication; defusing crisis situations; and identifying and coping with stress.

INVESTIGATION AND PATROL
In this course, skills and procedures are taught to enable a police constable to do his work in the area of investigation or prevention of crime, and the preservation of crime scenes.

FIREARMS TRAINING
Provides instruction in skills required to handle firearms safely. Emphasizes both mental and manual expertise that enables the constable to complete the course of action chosen with maximum safety to innocent parties.

PHYSICAL EDUCATION
Development of physical conditioning required for rescue and defense purposes.
The program is also designed to motivate the constable to maintain a high degree of physical fitness.

EMERGENCY CARE
First aid skills are learned and practiced in simulated situations stressing quick and effective action.

DRILL, DRESS AND DEPORTMENT
Not a formal academy course - rather a rating of personal appearance and conduct.

- as reported in an informational brochure on the Police Academy distributed by the Justice Institute of British Columbia

APPENDIX 4
PROGRESS REPORT RATING FORM

INSTRUCTIONS FOR RATERS
Next to each classification, you will find several sub-classifications. After considering all of them carefully, place a check (✓) in the bracket in front of the item that most closely describes the member being rated. It may be that several of the items describe the member; if so, place a checkmark (✓) for each of them. If you are not sure of a category, pass it by - DO NOT GUESS.

TYPE OF ASSESSMENT
( ) Probationary Progress  ( ) Prob to 3rd Class  ( ) 3rd Class to 2nd Class  ( ) 2nd Class to 1st Class  ( ) Other

NAME ________  RANK ________  NO. ________  AGE ________  MARITAL STATUS ________
DIVISION ________  SQUAD ________  ASSIGNMENT ________  EXPERIENCE ________
Recruit Class # ________  Rated by ________
Increment ________  Promotion Date ________  Rank ________  Date ________, 19__

DISCIPLINE (Attach a detailed report of all disciplinary charges)
( ) Accepts well  ( ) Resents  ( ) Lacks understanding  ( ) Indifferent  ( ) Requires little supervision  ( ) Requires close supervision  ( ) Is negligent  ( ) Other
REMARKS:

LOYALTY TO FORCE
( ) Very loyal  ( ) Loyal  ( ) Fair loyalty  ( ) Poor loyalty  ( ) Not loyal  ( ) Indifferent  ( ) Improving  ( ) Other
REMARKS:

APPEARANCE
( ) Very good all round  ( ) Good posture - fair grooming  ( ) Good posture - poor grooming  ( ) Good grooming - fair posture  ( ) Good grooming - poor posture  ( ) Fair posture & grooming  ( ) Poor posture & grooming  ( ) Sloppy  ( ) Alert - lively  ( ) Slow - methodical  ( ) Overweight  ( ) Other
REMARKS:

ABILITY TO LEARN
( ) Quick to grasp new knowledge  ( ) Slow but thorough  ( ) Some difficulty with new subjects  ( ) Slow & haphazard  ( ) Poor organization  ( ) Retains practical ideas  ( ) Performing to best of ability  ( ) Not performing to best of ability  ( ) Good retention  ( ) Fair retention  ( ) Poor retention  ( ) Other
REMARKS:

RECEIVING INSTRUCTIONS
( ) Very good  ( ) Good to fair  ( ) Fair  ( ) Fair to poor  ( ) Poor  ( ) Quick to grasp  ( ) Unable to grasp  ( ) Lack of application  ( ) Shows improvement  ( ) No improvement  ( ) Misinterprets  ( ) Other
REMARKS:

INITIATIVE
( ) Good  ( ) Good to fair  ( ) Fair  ( ) Fair to poor  ( ) Poor  ( ) Eager & co-operative  ( ) Does no more than necessary  ( ) Little or none shown  ( ) Other
REMARKS:

INVESTIGATION AND CASE PREPARATION
( ) Very good  ( ) Good  ( ) Fair  ( ) Fair to poor  ( ) Poor  ( ) Uses common sense  ( ) Lacks common sense  ( ) Shows improvement  ( ) Slow improvement  ( ) No improvement  ( ) Unsure of self  ( ) Other
REMARKS:

ACCEPTANCE OF RESPONSIBILITY
( ) Good  ( ) Good to fair  ( ) Fair  ( ) Fair to poor  ( ) Poor  ( ) Accepts eagerly  ( ) Requires little direction  ( ) Requires constant direction  ( ) Takes too seriously  ( ) Not serious enough  ( ) Appears immature  ( ) Attempts to transfer  ( ) Other
REMARKS:

PUBLIC & PERSONNEL RELATIONS
( ) Gets on well with co-workers and supervisors  ( ) Good public relations  ( ) Popular with supervisors but not with co-workers  ( ) Sincere & well adjusted  ( ) Is a leader  ( ) Commands respect  ( ) Follows good example/s  ( ) Follows poor example/s  ( ) Somewhat unstable  ( ) Unpopular  ( ) Appears a loner  ( ) Anti-social (chip on shoulder)  ( ) Immature - clownish  ( ) Egotistical  ( ) Other
REMARKS:

STABILITY
( ) Mature & well adjusted  ( ) Reasonably mature & well adjusted  ( ) Immature for age  ( ) Immature  ( ) Immature, will probably improve  ( ) Lacks aggression  ( ) Overly aggressive  ( ) Loses control  ( ) Easily embarrassed  ( ) Appears nervous  ( ) Self conscious  ( ) Other
REMARKS:

WRITTEN COMMUNICATION
( ) Very good  ( ) Good  ( ) Good to fair  ( ) Fair  ( ) Fair to poor  ( ) Poor  ( ) Poor writing  ( ) Poor composition  ( ) Poor spelling  ( ) Too brief  ( ) Too wordy  ( ) Lacks organization  ( ) Unable to convey thoughts in writing  ( ) Other
REMARKS:

GIVING EVIDENCE
( ) Very good  ( ) Good  ( ) Good to fair  ( ) Fair  ( ) Fair to poor  ( ) Poor  ( ) Likely to improve  ( ) Unlikely to improve  ( ) Other
REMARKS:

POTENTIALITY
( ) Is likely to develop into an above-average constable
( ) Is likely to develop into an average constable
( ) Is likely to develop into a below-average constable
( ) Is unlikely to make a good police constable
REMARKS:

FINAL APPRAISAL IF PROBATIONARY PROGRESS REPORT
On the basis of the above rating, I recommend that this man be
( ) Promoted  ( ) Probationary period be extended  ( ) Other
REMARKS:

Signature of Member Assessed ________  Supervisor i/c ________
( ) Approved  ( ) Suggest reappraisal
DATE ________  Inspector i/c ________
REVIEWED BY: Superintendent ________  Chief Constable ________
Personnel Records: ________

APPENDIX 5
REVISED PROGRESS REPORT RATING FORM

CATEGORY AND SCORED ITEMS (score in parentheses where printed)

DISCIPLINE: Requires little supervision
LOYALTY TO FORCE: Very loyal (3); Fair loyalty (2); Loyal (1)
APPEARANCE: Very good all round (4); Good posture - good grooming (3); Good posture - fair grooming (2); Good grooming - fair posture (1); Fair posture & grooming (1); Alert - lively
ABILITY TO LEARN: Quick to grasp (3); Performing to best of ability (2); Good retention (1); Retains practical ideas (1)
RECEIVING INSTRUCTIONS: Very good; Good to fair; Quick to grasp
INITIATIVE: Good; Good to fair; Fair; Eager & co-operative
INVESTIGATION & CASE PREPARATION: Very good (4); Good (3); Good to fair (2); Fair (1); Uses common sense (1)
ACCEPTANCE OF RESPONSIBILITY: Good (3); Good to fair (2); Fair (1); Accepts eagerly (1); Requires little direction (1)
PUBLIC & PERSONNEL RELATIONS: Follows good examples; Good public relations; Sincere & well adjusted
STABILITY: Mature & well adjusted (3); Reasonably mature & well adjusted (2); Immature, will probably improve (1)
WRITTEN COMMUNICATION: Very good (5); Good (4); Good to fair (3); Fair (2); Fair to poor (1)
ORAL COMMUNICATION: Very good (5); Good (4); Good to fair (3); Fair (2); Fair to poor (1); Thinks well on feet (1)

MAXIMUM SCORE POSSIBLE: 47
COEFFICIENT ALPHA = .7867
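The coefficient alpha quoted above follows directly from Cronbach's (1967) formula: alpha = k/(k-1) x (1 - sum of the item variances / variance of the total score), where k is the number of scored categories. A minimal sketch of the computation, on invented category scores rather than the department's records, might look as follows:

    # Sketch: coefficient alpha for the revised progress report form.
    # Rows are hypothetical rated constables; columns are the twelve
    # scored categories of Appendix 5. All data are invented.
    import numpy as np

    rng = np.random.default_rng(1)
    scores = rng.integers(0, 6, size=(100, 12)).astype(float)

    def cronbach_alpha(items: np.ndarray) -> float:
        """alpha = k/(k-1) * (1 - sum of item variances / total variance)."""
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)      # variance of each category
        total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed score
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    print(f"alpha = {cronbach_alpha(scores):.4f}")  # the thesis reports .7867

On the uncorrelated random scores generated here, alpha will be near zero; the reported value of .7867 reflects the internal consistency of the actual rating data.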
