UBC Theses and Dissertations

A study of the predictive validity of the Kenya Certificate of Primary Education examination: application of hierarchical linear models. Othuon, Lucas A. 1993.

A STUDY OF THE PREDICTIVE VALIDITY OF THE KENYA CERTIFICATE OF PRIMARY EDUCATION EXAMINATION: APPLICATION OF HIERARCHICAL LINEAR MODELS

By

LUCAS A. OTHUON
B.Ed., University of Nairobi, 1980

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF ARTS in THE FACULTY OF GRADUATE STUDIES, DEPARTMENT OF EDUCATIONAL PSYCHOLOGY AND SPECIAL EDUCATION

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
July, 1993
© Lucas A. Othuon, 1993

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Educational Psychology and Special Education
The University of British Columbia
Vancouver, Canada

ABSTRACT

Public examinations have been used in Kenya for decades as selection instruments for further education and training. The Kenya Certificate of Primary Education examination (KCPE) is the first of such selection examinations. A basic assumption is that those who pass the examination and are selected to join secondary school have a good chance of succeeding in secondary school. However, evidence that may verify such an assumption, that is, a study of the predictive validity of KCPE, has received little attention. The purpose of this study was to determine the extent to which KCPE predicts success in secondary school. Success in secondary school was measured by the level of examinee achievement in the Kenya Certificate of Secondary Education examination (KCSE).

Stratified random sampling was used to select 26 secondary schools within a single district in Kenya.
The 1991 KCSE data for 781 examinees in the sample were used in the analysis. The KCSE records for examinees in the sample were matched with corresponding 1987 KCPE records. The nature of the relationship between KCPE and KCSE was determined by use of Hierarchical Linear Models (HLM). The influence of selected moderator variables on the relationship between KCPE and KCSE was investigated as well. These variables were age, gender, repetition of Standard 8 (i.e., writing KCPE more than once), and school size.

A moderate linear relationship between KCPE and KCSE was found. The predictive validity did not significantly vary from one school to another. Of the three pupil-level moderator variables used in this study, only age showed a significant influence on the KCPE-KCSE predictive relationship. A moderate linear relationship, parallel regression slopes, and the extent to which the selected moderator variables influenced the KCPE-KCSE relationship indicate that KCPE is a moderately valid predictor of success in secondary school.

TABLE OF CONTENTS

ABSTRACT
TABLE OF CONTENTS
LIST OF TABLES
LIST OF FIGURES
LIST OF APPENDICES
ACKNOWLEDGEMENTS

CHAPTER
1. INTRODUCTION
   Education in Kenya
   Examinations in Kenya
   Purpose of the Study
   Significance of the Study
2. REVIEW OF LITERATURE
   Factors that may affect Predictive Relationships
      Characteristics of Examinations
      Characteristics of Examinees
         Age
         Gender
         Repetition
      Characteristics of Schools
         School Size
   Methodological Limitations in Previous Research
   Statistical Analyses in Predictive Validity Studies
      Correlation Analysis
      Ordinary Least Squares Regression Analysis
      Hierarchical Linear Modelling
      Methodological Framework of HLM
   Summary
3. METHODOLOGY
   The Criterion Variable
   The Predictor Variable
   Moderator Variables
   Population
   Sample
   Data Collection
   Data Analysis
      Inter-Correlation Matrix
      Distributions
      Intra-School Correlation
      Model Fitting
         Variance Components Model
         Random Coefficients Model
         Moderator Models
         Omnibus Model
         School Size Model
   Summary
4. RESULTS
   Distribution of the Sample by Schools and Gender
   Range of Scores
   Means and Standard Deviations of Scores
   Reliability Estimates
   Inter-Correlation Matrix of Variables
   Scatterplots
   Normality of Distributions
   School Means and Standard Deviations
   Predictive Models
      Model 1: The VC Model
      Model 2: The RC Model
      Model 3: The MOD Model
         Age as a Moderator
         Gender as a Moderator
         Repetition as a Moderator
      Model 4: The OB Model
      Model 5: The SS Model
   KCPE Mathematics as a Predictor of KCSE Mathematics
      Variance Components Model
      RC Model
      Gender as a Moderator
      Age as a Moderator
      Repetition as a Moderator
   KCPE Kiswahili as a Predictor of KCSE Kiswahili
      The VC Model
      The RC Model for Kiswahili
      Gender as a Moderator in the Kiswahili Relationship
      Age as a Moderator
      Repetition as a Moderator
   OB Model for Composite Exam, Mathematics and Kiswahili
   Summary of Results
5. DISCUSSION
   Validity of KCPE
   Influence of Age
   Influence of Gender
   Influence of Repetition
   Influence of Secondary School Size
   Implications for Educational Policy
   Weaknesses and Strengths of the Study
   Future Research

REFERENCES
APPENDICES
   A: Mathematics Grade Statistics by Gender
   B: Scatterplots of KCSE Grades against KCPE Scores
   C: KCSE and KCPE Stem and Leaf, Box, and Normal Plots

LIST OF TABLES

Table 1: Models and their Uses
Table 2: Sample Distribution by Schools & Gender
Table 3: Minimum, Maximum, and Range of Scores
Table 4: Means and Standard Deviations of Scores
Table 5: Inter-Correlation Matrix of Student-Level Variables
Table 6: School Means and Standard Deviations for the Sample
Table 7: Variance Estimates in the VC Model
Table 8: Coefficient Estimates in the RC Model
Table 9: Variance/Covariance Estimates in the RC Model
Table 10: Coefficient Estimates in the MOD Model with X2 = AGE
Table 11: Coefficient Estimates in the MOD Model with X2 = GENDER
Table 12: Coefficient Estimates in the MOD Model with X2 = REP
Table 13: Coefficient Estimates in the OB Model with Raw Scores
Table 14: Coefficient Estimates in the OB Model using Standardized Scores
Table 15: Variance/Covariance Estimates in the SS Model
Table 16: Coefficient Estimates in the SS Model
Table 17: Variance Estimates in the VC Model for KCSE Mathematics
Table 18: Parameter Estimates in the RC Model
Table 19: Variance/Covariance Estimates
Table 20: Variance Estimates in the VC Model
Table 21: Coefficient Estimates for Kiswahili
Table 22: Variance Estimates in the RC Model for Kiswahili
Table 23: Coefficient Estimates in the OB Model

LIST OF FIGURES

Fig. 1: Scatterplot of School Means
Fig. 2: Plot of Slopes against Intercepts
Fig. 3: Predicted KCSE Grades vs KCPE Scores for 26 Schools

LIST OF APPENDICES

A: Mathematics Grade Statistics by Gender
B: Scatterplots of KCSE Grades against KCPE Scores
C: KCSE and KCPE Stem and Leaf, Box, and Normal Plots

ACKNOWLEDGEMENTS

It is not possible to express my sincere gratitude to all those who offered important advice, critique and encouragement in the development of this thesis. However, I wish to specifically thank my supervisor, Dr. Nand Kishor, and the members of my thesis committee, Dr. R. Conry and Dr. W. McKee, for being available and ever willing to offer guidance whenever it was needed. Dr. D. Allison's role as an external reader of the manuscript was highly appreciated.

I must not forget to thank the Canadian International Development Agency and the Kenya Government for their kindness in ensuring that my stay in Canada was rewarding and full of happiness. It was such a wonderful experience! Last but not least, I cannot forget to thank my wife, Clarice, and my children, Vera, Harry, and Rowney, for their understanding and perseverance during my two years' stay away from them.

Chapter 1
INTRODUCTION

Public examinations have been used in Kenya for decades as selection instruments for further education and training. The Kenya Certificate of Primary Education examination (KCPE) is the first of such examinations. One of the aims of having an examination at primary school level was to provide evidence of a well rounded education. It was also expected that secondary schools would continue to give such training (Kenya Education Commission Report, 1964).

Eisemon and Schwille (1992) stated that the object of the primary schools' national examination was to reduce the number of students eligible for secondary schooling. They further suggested that one of the factors affecting student performance is the characteristics of the critical examinations used to select students for further education. For example, a good primary school examination should bear evidence of having a significant positive linear relationship with a secondary school examination in the same system. If mass failure gets reported at secondary school level, as is often the case in Kenya (see The Weekly Review, April 16, 1993, p. 10), then there is need to find out the nature of the relationship between the examination that was used for selecting candidates into secondary school and student achievement in secondary school. Such predictive validity studies of performance on KCPE have received little attention (Kellaghan & Greaney, 1988). This concern is crucial if proper selection decisions are to be made based on performance in KCPE.

This study is an attempt to examine the predictive validity of KCPE. The study also attempts to show how the predictive relationship is influenced by selected pupil-level and school-level variables.
Because the predictor and criterion variables are examinations set and scored in Kenya, it is important to give some background information on education in Kenya.

Education in Kenya

On attainment of independence in 1963, the system of education in Kenya was predominantly examination oriented. Pupils who sat for the examination during their final year in primary school were branded as failures if their performance did not meet the required standards. In 1975, out of 220,000 primary school leavers, about 32% of the group were offered secondary school or vocational training places (Report of The National Committee on Educational Objectives and Policies, 1976). Of the 341,000 primary school leavers who wrote KCPE in 1987, 51% were successful in obtaining places in secondary school (Kellaghan & Greaney, 1992). These figures illustrate how competitive it was for students to join secondary school in Kenya.

Over the years, there has been concern about the impact of external examinations on curriculum implementation. The former 7-6-3 education system, which stands for seven years of primary education, six years of secondary education, and three years of university education, was much criticized. Parents and the public in general expressed their dislike for the examination at the end of primary school. They saw it as the traumatic event which determined the fate of the majority of children who did not manage to get places in secondary schools (Kenya Education Commission Report, 1964). The education system was further criticized as not being practical-oriented and as offering a narrow-based curriculum. Its graduates were thought to be ill-equipped to fit into the changing job market, and hence the rise in unemployment. It was also claimed that the system did not cater for the development of the most appropriate attitudes that children needed, particularly at primary school level. In 1985, a major change of curriculum was effected in Kenya.
The 7-6-3 system of education was replaced with the 8-4-4 system of education, with a view to alleviating the problems that beleaguered the old system. The 8-4-4 stands for eight years of primary education, four years of secondary education, and four years of university education. It was apparent, however, that even with the 8-4-4 system, basic issues and concerns related to curriculum implementation and evaluation still persisted. For example, whereas the new system called for much more than the old one ever had in terms of physical resources, notably laboratories and workshops, not many schools could be said to have been prepared in this area. The responsibility for equipping them was transferred to parents under the new cost-sharing system. In addition, few of the teachers at that time were trained to handle the new 8-4-4 curriculum. It was small wonder, therefore, that critics of the system claimed that pupils were being used as guinea pigs to test the system, and their worst fears appeared to be confirmed when mass failure in the 1989 Kenya Certificate of Secondary Education examination was reported.

One area where the negative aspects of the 8-4-4 system have come out glaringly is at secondary school level. It became apparent that, rather than equipping its graduates with the necessary skills to enable them to survive in the competitive job market upon graduating from school, the system left them worse off. Employers were not ready to give them jobs, preferring instead the graduates of the old system. Something seemed to be inherently wrong with the new system, calling for a closer look into the system with a view to offering guidelines that may lead to its restructuring.

Examinations in Kenya

The Kenya National Examinations Council Act (Cap 225A, Laws of Kenya), enacted in 1980, made provision for the establishment, constitution, control and administration of the Kenya National Examinations Council (KNEC).
Amongst the important examinations set and scored by KNEC are the Kenya Certificate of Primary Education examination (KCPE) and the Kenya Certificate of Secondary Education examination (KCSE). KCPE is an external examination taken by pupils at the end of their eighth year in primary schools in Kenya. A certificate is issued to each candidate irrespective of level of achievement. The certificate is meant to certify that the bearer has undergone eight years of primary education and has attained a level of achievement as shown by the grades contained therein. The KCPE examination under the current 8-4-4 system was written for the first time in 1985. In 1987, six papers were written in the examination and each was scored out of 100 points. The six papers were English (ENG1), Kiswahili (KIS1), mathematics (MAT1), science and agriculture (SC&A), geography, history, civics and religious education (GHCR), and art & craft, home science and music (ACHM). All items in the examination were of multiple choice type, except for English and Kiswahili, each of which had a short composition section. The examination was computer scored except for the composition sections of the English and Kiswahili papers.

Other than being used for examinee certification, KCPE is also used for selection of candidates into secondary schools. In 1989, out of over 600,000 KCPE candidates, only 166,748 pupils were selected to join secondary schools the following year (Statistical Abstract, Kenya, 1990). The selected group formed a proportion of about 30% of the registered candidates who wrote the examination in 1989.

The use of external examinations as a selection tool in Kenya has quite often received criticism. The Kenya Education Commission Report (1964), in a section dealing with the primary school examination that was offered at that time, recommended that:

The issue of KPE [Kenya Primary Education examination] certificates to successful pupils should be replaced by the issue of school leaving certificates to all pupils.
Cramming for KPE should be discouraged. Alternative selection procedures should be the subject of research.

Members of the public also expressed their concern about the possibility of a correlation between achievement in primary school and age of the examinees. They thought that this relationship could account for the extensive rates of repeating before children were put forward to write the primary school examination (Report of The National Committee on Educational Objectives and Policies, 1976). However, these feelings have rarely been supported with concrete research evidence and should therefore be seen as purely impressionistic.

The Kenya Certificate of Secondary Education examination (KCSE) is also an external examination, written by pupils at the end of their fourth year in secondary school. Those who wrote the KCPE examination in 1987 and joined secondary school wrote KCSE in 1991. In the 1991 KCSE examination, each candidate registered for a minimum of ten subjects. Three of the compulsory subjects in the examination were English (ENG2), Mathematics (MAT2) and Kiswahili (KIS2). The 1991 KCSE examination was scored by subject specialists who had received training as examiners by the Kenya National Examinations Council. Other than being used for selection into tertiary institutions, KCSE grades are also used by some employers to recruit workers. It is therefore worth stressing that, to date, examinations remain the basic procedure for selection of students for further education and training in Kenya.

The fact that an examination system can be used for a variety of purposes should not be taken to imply that such a system can readily serve all purposes equally well. For example, an examination system that efficiently selects the pupils most likely to benefit from further education might contribute to the identification of a technical elite, and over time might even have important effects on the economic performance of a nation (Heyneman, 1987).
However, such a system could have serious and damaging effects on the educational experiences of many students if it ignores the fact that for many students, learning has to have utility beyond that of qualifying individuals for the next level of education (Kellaghan & Greaney, 1992). Thus, a procedure that most efficiently selects students may be inadequate for certification purposes. Similarly, a procedure that is adequate for certification is unlikely to be the most appropriate one for monitoring the quality of performance of a school or of the educational system in general. This calls for the evaluation of the validity of examinations at each stage within the educational system. However, only ad hoc evidence has so far been provided on the predictive validity of the KCPE examination since its inception in 1985.

Purpose of the Study

The policy of selecting students for secondary school education is based on the assumption that those who pass the selection examination have a good chance of succeeding in secondary school. In other words, in selecting pupils for secondary school admission, future performance is assumed to be predictable from composite KCPE scores. One problem that requires analysis is whether the KCPE examination, the predictor variable that was used for selection, has a significant relationship with success in secondary school. Thus, this is a study of the validity of the KCPE examination. The term validity, when applied to a test, refers to the precision with which the test measures some particular mental ability. Three main types of validation studies exist: criterion-related validity, construct validity, and content validity. Criterion-related validity may be either concurrent or predictive, depending on whether the scores predict a criterion at the time a test is administered or at some point in the future. Since KCPE scores were used to predict success in secondary school at a future date, this validation study is a predictive one. Success in secondary school was measured by KCSE examination data.
The study also examined the extent to which the relationship between KCPE scores and success in secondary school is moderated by selected pupil-level and school-level factors. The major research question is whether KCPE is a valid predictor of KCSE. This question was addressed by the following sub-questions:

1. What is the correlation between KCPE and KCSE?
2. How much do secondary schools in a single district in Kenya vary in their mean KCSE achievement levels?
3. What is the nature of the pupil-level relationship between KCPE and KCSE?
4. What is the impact of selected pupil-level and school-level variables on the relationship between KCPE and KCSE?

Significance of the Study

As in other countries, education continues to be a dominant sector in Kenya's economy. According to the Development Plan (1984-88), education accounted for about 7.2% of Kenya's GDP in 1981. By 1983, it was already receiving 30% of the national recurrent budget. There is, therefore, a need for identifying the variables that govern the quality of our educational output, as well as the interrelationships among them. It is only through such means that we can propose important policy decisions regarding our system of education. This would ensure that meagre resources are directed where they are most needed and where the benefits would be optimal, thus increasing efficiency.

The number of studies that focus on the predictive validity of KCPE has been low. It is therefore hoped that this study will offer teachers, educational researchers and policy makers some guidance on the predictive validity of KCPE. The study is also an addition to similar research done elsewhere in the world using hierarchical linear models (see, for example, Willms (1985) and Willms and Jacobsen (1990)).
Hierarchical linear models capture hierarchical data structures in a manner that was not possible in most of the previous research.

Chapter 2
REVIEW OF LITERATURE

This chapter discusses some important factors that may affect the predictive validity of an examination in relationship to a criterion. The chapter highlights a number of important statistical approaches to prediction studies. A conceptual framework of multilevel modelling is also presented.

Factors that may affect Predictive Relationships

Several factors may influence the predictor-criterion relationship. The factors include psychometric characteristics of examinations, characteristics of examinees, and characteristics of schools.

Characteristics of Examinations

Questions of the adequacy of an examination as a measure of the characteristic it is interpreted to assess are answerable on scientific grounds by appraising psychometric evidence (Messick, 1980). Since all psychological measurements are subject to error, it is rare to set a perfectly reliable examination. Reliability in this context is concerned with the extent to which these errors are manifested. An examination is said to be reliable if the results of individuals could be replicated upon writing the same examination again under similar conditions. In an attempt to determine the degree of relationship between a predictor and criterion, it is important that errors of measurement be minimized (Crocker & Algina, 1986). This implies that the predictor and criterion should be reasonably reliable as well as suitable.

Characteristics of Examinees

Examinee characteristics may influence the predictor-criterion relationship in a predictive validity study. The selected examinee characteristics are age, gender, and repetition.

Age

The relationship between age and achievement is made more and more complex by the fact that grade retention, maturation, learning, and nature of instruction are confounding variables in the relationship.
In a study involving some twelve countries, each with a distinctive educational system, there was support for the idea that older children generally performed better academically than younger children (see Husen, 1967). Choppin (1969) found that the effect of age on achievement, after controlling for social class, differed across countries. Walsh (1988) also found that children who were youngest in their class showed the highest chances of failure. However, Smith and Shepard (1987) noted that it was not age alone but a combination of young age and low ability that had an effect on performance.

Harnisch and Archer (1986) gathered data from Japan, India, and Illinois (U.S.) on high school students who had completed the High School Mathematics Test. Results not only showed differences in achievement across the three countries, but also differences among countries in the relative influence exerted by age on students' achievement in mathematics. For example, there appeared to be a positive linear trend in the Japanese sample, with scores increasing as students' age increased. However, in India the younger students had higher scores than their older counterparts. The correlations between mathematics achievement and age in Japan and Illinois were .32 and -.08, respectively. It seems that the relationship between age and achievement may vary from one setting to another.

Gender

There is a substantial body of evidence to suggest that from the beginning of secondary schooling, males frequently outperform females in mathematics (Fennema & Leder, 1990). Fox and Cohn (1980) gave school variables as one of the reasons. These school variables include organization procedures as well as behavior, expectations, and beliefs. Another possible explanation is that teachers often interact differently with their male and female students, with males attracting more and qualitatively different interactions (Brophy, 1985). Peer group influence is yet another important dimension in explaining differential achievement by gender.
It acts as an important reference for childhood and adolescent socialization, and further perpetuates sex role differentiation through leisure activities, friendship patterns, subject preferences, and career intentions.

To date, there is no conclusive evidence as to whether gender has an effect on achievement. Several studies reveal an inconsistency of findings, with males performing better in some studies and females in others. Stroud and Lindquist (1942) gave an exhaustive review of the literature on experiments about gender differences and achievement in elementary and secondary schools. They reported that, judging from past experiments, correlations between IQ and achievement data were higher for girls than for boys. However, this superiority cannot be generalized, for it is more pronounced only in certain subjects. They also reported that earlier research showed no significant differences in reading, whereas there existed a significant difference in language and literature in favour of girls. On the other hand, boys were deemed to excel over girls in mathematics, history, and science.

Aiken (1971) and Werdelin (1961) found that, on the average, girls tended to score higher than boys on tests of verbal fluency, arithmetic fundamentals, and rote memory. They also found that boys were superior in spatial ability, arithmetic reasoning and problem solving. However, these sex differences were found to be less pronounced in the early grades. Dwyer (1973) stated that it has been a common research finding that girls are generally better readers than boys.

Simon and Danna (1990) conducted a study that evaluated the accuracy of Law School Admission Test scores in predicting student law school performance. Male and female scores, and White, Black, and Hispanic scores, were compared. Data were drawn from the 1987 and 1988 graduating classes of five geographically diverse law schools.
No significant differences between gender groups were found.

Over the years, girls' achievement in mathematics and science in Kenyan secondary schools has been lower than that of boys (Ndunda, 1990). This outcome may be attributed to the girls' experiences in mathematics and science, and to the socio-cultural forces that interact to influence their conception of mathematics and science. Eshiwani (1985) noted that whereas female enrollment patterns improved significantly during the 1975-1984 decade, there were regional disparities and significant differences in attrition rates and the number of girls attending school. When confronted with constraints of limited opportunities or resources for primary schooling, parents have generally favoured the education of male children. Eshiwani (1985) found that the only subjects in which secondary school girls in Kenya performed better than boys were English and Christian Religious Education.

Repetition

Repetition or grade retention is the practice of requiring a student who has been in a given grade level for a full year to remain at that level for a subsequent year. Two main reasons behind retention are to remedy inadequate academic progress and to aid in the development of students who are considered to be emotionally immature. In 1986, 13% of the total primary school enrollment in Kenya were repeaters (Kellaghan & Greaney, 1992). One consequence of the difficulty and selectivity of the KCPE examination is high repetition rates for Standard 8 in order to rewrite the KCPE examination (Eisemon & Schwille, 1991). These rates cannot be estimated with precision since ministry regulations forbid repeating. Most of the research on retention effects has been inadequate because the researchers defined the treatments poorly, used small sample sizes, and did not consider the hierarchical nature of most organizational structures. Jackson (1975) identified three such designs commonly used in the study of the effects of grade retention on achievement.
The first type of design, used, for example, in the studies of Coeffield and Bloomers (1956) and Scott and Ames (1969), involved comparing the condition of retained students after promotion with their condition before promotion. Such a design is biased toward indicating that pupils gain from grade retention because of the lack of control for possible improvement resulting from causes other than the retention experience itself.

The second type of design, used by researchers like Chansky (1964), Briggs (1966), and Reinherz and Griffin (1971), employed an analysis in which students retained under normal school policies were compared with students promoted under normal policies. Such a comparison is biased toward indicating that grade promotion has more benefits than grade retention, because it compares retained students who are having difficulties with promoted students who usually are not having as severe difficulties. Although matching retained and promoted students on indicators of classroom achievement like age, IQ, or SES is likely to improve such designs, results from such studies could still be flawed because the criteria for promoting students do vary. Additionally, student performance also varies among schools, thereby making any inferences drawn from such studies spurious. Thus, even when retained and promoted pupils have been matched on age, IQ, and SES, there is inadequate assurance that the pupils were initially similar with respect to the actual conditions which precede grade retention.

The third type of design, used by Cook (1941), for example, compared pupils with difficulties who had experimentally been assigned to promotion or grade retention. This type of design is superior and more reliable than the first two designs. However, such experimentally controlled designs are too few in number to allow for broad generalizations about the effects of grade retention on students' academic achievement.
It appears that a design that takes care of the hierarchical nature of student performance would be most suitable in this case.

Holmes and Matthews (1984) did a meta-analysis on methods used for examining the effects of nonpromotion on elementary and junior high school pupils. They defined effect size as the difference between the mean of the retained group and the mean of the promoted group, divided by the standard deviation of the promoted group. This procedure results in a measure of the difference between the two groups expressed in quantitative units. From their study, they found that the effect of nonpromotion on pupils' academic achievement was measured in 31 out of 44 studies. Altogether, 367 effect sizes were calculated. When the mean of the effect sizes was calculated, a value of -.44 was obtained, indicating that the promoted group on the average had achieved .44 standard deviation units higher than the retained group, t(366) = 12.57.

Niklason (1987) measured retention effects for specific groups of children under four classification variables: group (retained vs. promoted), district, ability, and grade level. Retention was not found to benefit the children academically or in personal or social adjustment. There was further evidence that the arithmetic scores declined the following year for the younger retained children, but not for the younger promoted children.

Lenarduzzi and McLaughlin (1990) used a quasi-experimental design to examine the effects of nonpromotion on academic achievement and scholastic effort in junior high school. Results indicated that the students who were retained improved significantly with respect to academic achievement and scholastic effort when compared to those who were promoted. However, when they later did a follow-up study on the same students, results showed no significant differences between the two groups in academic achievement.

Characteristics of Schools

School characteristics may affect predictive relationships.
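Before turning to school characteristics, the effect-size metric used by Holmes and Matthews (1984) above (retained-group mean minus promoted-group mean, divided by the promoted group's standard deviation) can be made concrete with a short sketch. The score values below are invented purely for illustration and are not drawn from any study reviewed here.

```python
import statistics

def retention_effect_size(retained, promoted):
    """Holmes and Matthews (1984) effect size: the difference between the
    retained-group mean and the promoted-group mean, divided by the
    standard deviation of the promoted group."""
    diff = statistics.mean(retained) - statistics.mean(promoted)
    return diff / statistics.stdev(promoted)  # sample SD of promoted group

# Hypothetical achievement scores for two small groups (illustrative only).
retained = [58, 60, 62, 59, 61]
promoted = [68, 70, 72, 69, 71]

d = retention_effect_size(retained, promoted)
# A negative value means the promoted group scored higher, which is the
# direction of the mean effect size of -.44 reported in the meta-analysis.
print(round(d, 2))
```

A negative result here, as in the meta-analysis, indicates the promoted group outperforming the retained group in units of the promoted group's standard deviation.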
Such characteristics are, in general, called school climate (see, for example, Tagiuri, 1968; Anderson, 1982; Bryk & Driscoll, 1988). Tagiuri (1968) stated that school climate is a summary concept dealing with the total environmental quality within a school system. These qualities may be geographical, organizational, or functional. Specific examples include the quality of teachers, type of supervision, location of the school, and curriculum organization. Anderson (1982) found that each school has a unique climate, and that climate affects many student outcomes. Anderson (1982) also found that understanding the influence of climate will improve the understanding and prediction of student behavior. However, climates in different schools are often elusive, and difficult to describe and measure. It is for this reason that school size was selected as a characteristic of the school that may affect predictive relationships, with the belief that school size is a potential climate mediator.

School Size

A number of studies have reported on the effect of school size on achievement. Some of these reports seem to be conflicting. Kimble (1976) administered the Stanford Achievement Test to 2,186 high school students in an attempt to find out whether or not school size was important in determining student achievement. The sample included 1,311 sophomores and 875 seniors. Results showed that, at the senior level, no significant differences existed in the mean test scores based on school size. However, sophomores of the larger schools scored better than did those from smaller schools. The New York State Department of Education (1976) did a study on how school size affects academic outcomes and found that the larger the school size, the lower the academic outcomes. Rutter, Maughan, Mortimore, Ouston and Smith (1979) found that school size had no effect on educational outcomes.
The findings of the New York State Department of Education (1976) were similar to the findings of a study by Coladarci (1983), who carried out a meta-analysis on the effects of institution size on pupil progress: smaller schools showed a definite superiority to larger schools. Coladarci (1983) further found that much of the research on the relationship between school size and achievement had evidentiary and inferential errors, intellectual puritanism, and rational extravagance. Wyatt and Gay (1984), in a review of research and case studies concerning how the size of an institution affects academic achievement, showed that size should not be seen as an independent variable having any direct impact on achievement. This suggests that size of institution should be used as a moderator variable rather than a predictor in studies involving predictive relationships because of its low correlation with the criterion.

Methodological Limitations in Previous Research

The foregoing review reveals that a number of results in studies on factors that affect achievement do not concur. The inconclusive findings about the impact of age, gender, repetition, and school size on achievement suggest that more studies are required in this direction. Additionally, most of the previous studies made use of single-level linear models. Such studies neglected the hierarchical nature of organizational structures. In an education system, for example, students are nested within schools, and schools within districts. Using a linear model that does not take care of this type of hierarchical structure often results in aggregation bias and misestimated precision. A more appropriate model that resolves the problems inherent to single-level analyses is therefore necessary if sensible inferences are to be made.

Statistical Analyses in Predictive Validity Studies

Several statistical approaches to prediction research exist, and the type chosen for use will quite often depend on the design of the study.
This does not necessarily imply that only one method should be used in a single study. In quite a number of cases, more than one method is used in analyzing data. The following are some of the important statistical approaches to prediction studies.

Correlation Analysis

Correlation analysis involves determination of the strength and direction of the relationship between variables. For situations involving more than two variables, partial correlation, multiple correlation, canonical analysis, factor analysis, or discriminant analysis may be used. Many researchers report correlation coefficients not as their fundamental analytical approach, but as a component of their analysis (e.g., Jacobsen, 1990). The presence of a correlation between two variables does not necessarily mean there exists a causal link between them. However, correlation between variables can be useful in identifying causal relationships when coupled with other methodological approaches (Glass & Hopkins, 1984).

Ordinary Least Squares Regression Analysis

Ordinary least squares regression is a statistical technique that allows one to attribute the amount of change in one variable to the amount of change in other variables. One purpose of a linear regression equation is to make predictions on a new sample of observations from the findings on a previous sample of observations. For example, given individual scores on an independent variable X, it is possible to predict scores on a dependent variable Y by use of regression analysis. The predicted scores quite often differ from the actual scores due to error of estimate.

Multiple regression is one of the most widely used statistical techniques in educational research. Borg and Gall (1983) attribute this to its versatility in yielding information about relationships between several variables. It has, therefore, been used extensively in situations that require prediction.
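Such a prediction equation can be sketched in a few lines. This is a minimal illustration with hypothetical scores, not data from this study:

```python
# Simple least-squares regression: fit Y = a + b*X on a calibration sample,
# then predict Y for new X values. The scores below are invented.

def fit_ols(xs, ys):
    """Return (intercept, slope) of the least-squares line through (xs, ys)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sxy / sxx          # slope
    a = my - b * mx        # intercept
    return a, b

def predict(a, b, x):
    """Predicted criterion score for a new predictor score x."""
    return a + b * x

# Hypothetical calibration sample: predictor scores and criterion scores.
a, b = fit_ols([1, 2, 3, 4], [3, 5, 7, 9])
```

The difference between `predict(a, b, x)` and the score actually observed for a new examinee is the error of estimate referred to above.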
For example, universities usually admit and reject candidates mainly on the basis of predictions about their probable future performance made from achievement tests in high school. Similarly, insurance agencies rely heavily on actuarial studies to predict future events that may affect their operations. This in turn helps them to adjust policy premiums accordingly so that their business remains self-sustaining.

Hierarchical Linear Modelling

Raudenbush and Bryk (1986) suggested that a common weakness of the ordinary least squares regression technique is that all explanatory variables are considered to be at the same level, yet most organizational systems do not operate as single-level structures. They asserted that in the past, most researchers used single-level regression models only because of a lack of viable alternatives. However, these alternatives are now available.

In ordinary least squares regression analysis of data sets from groups, aggregation bias is a common problem. Aggregation bias can be described as the difference between a slope obtained in a regression of means on means and the slope obtained in an individual-level analysis. Raudenbush (1988) defined aggregation bias as a situation in which a variable takes on different meanings and has different effects at different levels of aggregation.
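A small numerical illustration of this phenomenon, using invented data for three hypothetical schools, shows how the slope changes with the level of aggregation: here the within-school slope is +1 in every school, yet both the pooled individual-level slope and the means-on-means slope are negative.

```python
# Aggregation bias demo with invented data: within each "school" the x-y
# slope is +1, but schools with higher x-means happen to have lower y-means,
# so the pooled and the means-on-means regressions are both negative.

def slope(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

schools = {
    "A": ([1, 2, 3], [11, 12, 13]),   # within-school slope +1
    "B": ([4, 5, 6], [9, 10, 11]),    # within-school slope +1
    "C": ([7, 8, 9], [7, 8, 9]),      # within-school slope +1
}

# Pooled individual-level regression over all pupils.
all_x = [x for xs, _ in schools.values() for x in xs]
all_y = [y for _, ys in schools.values() for y in ys]
pooled = slope(all_x, all_y)      # -0.5

# Regression of school means on school means.
mean_x = [sum(xs) / len(xs) for xs, _ in schools.values()]
mean_y = [sum(ys) / len(ys) for _, ys in schools.values()]
between = slope(mean_x, mean_y)   # -2/3, different again
```

The same variable thus carries three different slopes (+1, -0.5, -2/3) depending on the level of aggregation, which is exactly the situation Raudenbush (1988) describes.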
Longford (1989) suggested that the use of hierarchical linear models may help reduce aggregation bias because such models take care of all sources of variation simultaneously.

Goldstein (1986) proposed multilevel linear modelling, also known as Hierarchical Linear Modelling (HLM), with estimation by iterative generalized least squares, and discussed its application to nested and longitudinal data structures. The advantage of this approach is that, unlike the single-level ordinary least squares regression technique, it applies generally to a variety of mixed models with two or more levels of aggregation, with the result of reduced aggregation bias.

Raudenbush (1988) noted that multilevel models have parameters at a lower level of aggregation (microparameters) that are presumed to vary as a function of the parameters at the next higher level (macroparameters). As an example, a two-level HLM may have pupils at the micro-level (e.g., examination performance data) nested within the school at the macro-level. This gives a linear model with pupil-level regressors and with a pupil-level dependent variable. Each school has its own model. The macro-model relates the parameters of the micro-models, which are the regression coefficients and the error variances, to macro-level regressors. Thus, the advantages of HLM over single-level linear modelling arise from the former's more realistic portrayal of the effects of grouping. HLM incorporates the fact that individuals within groups share common features; they are not the completely independent entities assumed in ordinary least squares regression analysis.

The iterative generalized least squares approach used in HLM is an efficient method of fitting multilevel models. It uses all the information available in the data by differentially weighting each school's contribution. Schools whose coefficients would be poorly estimated if a series of ordinary least squares regressions were conducted benefit from the other schools' data.
This occurs, for example, when some schools have far fewer students than others or when there is little inter-pupil variation. Additionally, in HLM covariances among coefficients are exploited to optimize precision of the estimates of the individual schools' slopes and intercepts (Rasbash, Prosser, & Goldstein, 1989).

With the foregoing considerations, HLM seems to offer a superior alternative method of data analysis with the type of data used in this study. The following section discusses the methodological framework of HLM as used in this study.

Methodological Framework of HLM

Previous studies on achievement relationships gave little consideration to the fact that most organizations operate as hierarchical structures. For example, in an attempt to determine how achievement relates to age and school size using multiple linear regression analysis, one ought to consider that age is a pupil-level variable and school size is a school-level variable. The use of a model that does not utilize the distinction between multilevel units leads to smaller estimates of standard errors of regression coefficients (Rasbash, Prosser, & Goldstein, 1989). Inferences made from such an analysis may be spurious.

Hierarchical linear modelling (HLM) utilizes the multilevel units in hierarchical data structures. Estimates of standard errors of regression coefficients using HLM are larger than the corresponding values from an ordinary least squares regression analysis, because the intraclass correlation among the measurements is taken into account. As an improvement over most of the previous research, the following hierarchical linear model was used in this study.

If data are collected in J schools, each of which contains nj students (j = 1, ..., J), and the interest is in determining the relationship between an examinee's KCPE examination scores (predictor) and KCSE examination grades (criterion), then for school j, a possible linear relationship between KCSE and KCPE is

Yij = b0j X0 + b1j (X1ij - X̄1..) + eij    (1)

at the pupil level. This is an ordinary least squares regression equation where Yij represents the KCSE grade for individual i in school j, b0j is a within-school intercept, X0 (= 1) is a constant term, b1j is the average change in KCSE grade for each unit change in KCPE score (i.e., the slope in the KCSE-KCPE relationship), X1ij is the KCPE score for individual i in school j, X̄1.. is the grand mean for KCPE, and eij is a random residual variable, assumed to have an expectation of zero. This model permits each school to have its own slope and intercept (Rasbash, Prosser, & Goldstein, 1989).

Generally, b0j and b1j may vary across schools. Therefore, they are treated as random variables at level 2. If Z represents the size of the secondary school, for example, then its impact may be analyzed by use of the following between-school model:

b0j = c00 + c01 Zj + u0j    (2)

b1j = c10 + c11 Zj + u1j    (3)

In equation (2), the within-school intercept is given as a function of school size. Generally, c01 is the average effect of school size on mean KCSE performance for candidates having average KCPE scores. In other words, it represents the benefit of secondary school size on KCSE mean achievement levels. Similarly, c11 in equation (3) represents the average increment to the slope b1j explained by differences due to school size. The random variables u0j and u1j represent the effects on the b's not explained by school size and are assumed to have a joint distribution with mean zero and a covariance matrix at level 2. It should be noted that equations (2) and (3) need not necessarily be modelled as functions of Z (see Rasbash et al., 1989).

Equations (1), (2), and (3) can be represented by the following general matrix expressions for a model where the random term is associated with the intercept:

Yj = Xj bj + ej    (4)

for the within-unit model for the jth level 2 unit.
Yj is the response vector of values for group j, Xj is a matrix of group members' values on a set of explanatory variables (including X0), bj is a vector of the coefficients for the group, and ej (= [e1j, ..., enj]') is a vector of level 1 random terms. The between-unit model for the coefficients can be written as

bj = Zj r + uj    (5)

where Zj is a between-unit matrix, r is a vector of the fixed coefficients, and uj is a vector of random terms. All the matrices given have conformable dimensions. Combining (4) and (5) gives

Yj = Xj Zj r + (Xj uj + ej)    (6)

where Xj Zj r is called the fixed part and (Xj uj + ej) the random part.

Summary

This chapter has described some pupil-level and school-level factors that may affect predictive relationships. Such factors include psychometric characteristics of examinations, characteristics of examinees, and characteristics of schools.

One such psychometric characteristic is reliability of both the predictor and the criterion. Reliability is a measure of how consistent the scores of individuals are over repeated administrations of the same examination or its parallel form under the same conditions. An examination is said to be valid if it measures what it purports to measure.

Characteristics of examinees may also affect the predictor-criterion relationship. The selected characteristics are age, gender, and repetition.

School climate may also affect achievement relationships. However, climates in different schools are often elusive and difficult to describe and measure. For this reason, school size was selected as a school-level variable that may affect achievement relationships.
This was on the basis that school size is a potential climate mediator.

The chapter has also provided an account of some important approaches to prediction studies, some of which are correlation analysis, regression analysis, factor analysis, canonical analysis, and discriminant analysis. Most of the previous research was limited to the extent that it did not consider the hierarchical nature of organizational structures. HLM was selected as a more appropriate choice for data analysis because it accounts for the hierarchical nature of organizational structures. A description of the methodological framework of a simple two-level hierarchical linear model was also provided. Chapter 3 describes the methodology employed in this study.

Chapter 3
METHODOLOGY

This chapter discusses the criterion, predictor, and moderator variables used in this study. The population is defined, and methods of obtaining a sample of schools considered representative of the population are also discussed. This is followed by a description of the procedure employed in data collection. Finally, data analytic techniques used in the study are described.

The Criterion Variable

The 1991 KCSE grades were used as the criterion variable. KCSE examination grades were in the form of letter grades ranging from the lowest grade, E, to the highest grade, A. For purposes of statistical analysis, the letter grades were converted to an equivalent twelve-point scale with 1 corresponding to grade E, the lowest grade, and 12 corresponding to grade A, the highest grade. This scale gives E = 1, D- = 2, D = 3, D+ = 4, C- = 5, C = 6, C+ = 7, B- = 8, B = 9, B+ = 10, A- = 11, and A = 12. Grades on three secondary school subjects, English (ENG2), Kiswahili (KIS2), and mathematics (MAT2), were recorded for each examinee in the sample.
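The letter-to-points conversion described above can be expressed as a simple lookup table. This is only a sketch; the thesis analyses were carried out in ML2/NANOSTAT, not Python:

```python
# Twelve-point conversion of KCSE letter grades, as defined in the text:
# E (lowest) maps to 1 and A (highest) maps to 12.
GRADE_POINTS = {
    "E": 1, "D-": 2, "D": 3, "D+": 4,
    "C-": 5, "C": 6, "C+": 7,
    "B-": 8, "B": 9, "B+": 10,
    "A-": 11, "A": 12,
}

def to_points(letter_grade):
    """Numeric equivalent of a KCSE letter grade on the twelve-point scale."""
    return GRADE_POINTS[letter_grade]
```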
Similarly, the overall KCSE grade for each examinee was also recorded.

The Predictor Variable

1987 KCPE scores were used as the predictor simply because those who wrote that examination were the same candidates who wrote the KCSE examination (the criterion) in 1991. The Kenya National Examinations Council scored each of the six KCPE examination subjects as a percentage. These subjects were English (ENG1), Kiswahili (KIS1), mathematics (MAT1), science and agriculture (SC&A), geography, history, civics and religious education (GHCR), and art and craft, home science and music (ACHM). The six subjects offered in the 1987 KCPE examination therefore formed a composite test with a possible maximum of 600 points. Selection to secondary school was done on the basis of an examinee's KCPE composite score, calculated as the sum of the six subject scores. Selection was done on merit, and the cut-off point depended on the number of available Form 1 places.

Moderator Variables

According to Cohen and Cohen (1983), a moderator variable refers to an independent variable that potentially enters into interaction with a predictor variable, while having a negligible correlation with the criterion itself. Baron and Kenny (1986) defined a moderator variable as a qualitative or quantitative variable that affects the direction or strength of the relation between an independent and a dependent variable. It is a third variable that affects the zero-order correlation between two other variables. The roles of the pupil-level moderator variables, namely age, gender, and repetition, in the relationship between the KCSE and KCPE examinations were examined in this study. REPETITION refers to an examinee having sat for the KCPE examination more than once.
AGE refers to how old an examinee was in 1987 when he/she took the KCPE examination. GENDER refers to whether an examinee is male or female. GENDER was dummy coded 0 for males and 1 for females.

Population

The population in this study comprised secondary schools that presented candidates for the 1991 KCSE examination in South Nyanza district within the Republic of Kenya. The district was chosen on the basis of convenience, for it was the most accessible one to the researcher at the time of the study. Of the 45 districts in the country, the one that had the highest mean score in the 1987 KCPE had 347.15 points and the one with the lowest mean score in the same examination had 236.34 points (KCPE Newsletter, 1988). The national mean score in the 1987 KCPE examination, computed from the 45 district means, was 294 points with a standard deviation of 25. South Nyanza district, from which the sample in this study was drawn, had a mean score of 273.48 points in the 1987 KCPE examination.

Whereas records kept at the District Education Office, South Nyanza district, showed that the district had 82 secondary schools that presented candidates for the 1991 KCSE examination, 1991 KCSE examination data were available for only 51 secondary schools. The distribution of the population of 51 secondary schools by gender was 20 boys' schools, 12 girls' schools, and 19 mixed schools.

Sample

Stratified random sampling was used to select about 50% of the available 1991 KCSE data. First, three lists were prepared: one for boys' schools, one for girls' schools, and one for mixed schools. One school was selected at random from the first two schools on each list. Thereafter, every second school was picked from each list, giving a sample of 26 schools. The sample consisted of 10 boys' schools, 6 girls' schools, and 10 mixed schools with a total of 781 cases. This sample was considered to be representative of secondary schools in South Nyanza district.
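The within-stratum selection rule described above (a random start among the first two schools on a list, then every second school) can be sketched as follows; the school lists here are stand-ins, not the actual district lists:

```python
import random

def systematic_half(schools, rng):
    """Pick one of the first two schools at random, then every second school."""
    start = rng.randrange(2)
    return schools[start::2]

rng = random.Random()

# Hypothetical stratum lists, matching the population counts in the text.
boys = [f"boys_{i}" for i in range(1, 21)]     # 20 boys' schools
girls = [f"girls_{i}" for i in range(1, 13)]   # 12 girls' schools
mixed = [f"mixed_{i}" for i in range(1, 20)]   # 19 mixed schools

sample = (systematic_half(boys, rng)
          + systematic_half(girls, rng)
          + systematic_half(mixed, rng))
# Yields 10 boys', 6 girls', and 9 or 10 mixed schools, depending on the start.
```

With these list lengths the rule returns roughly half of each stratum, which is consistent with the 26-school sample reported above.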
Results of this study are therefore generalizable to the population of 1991 KCSE examinees in South Nyanza district.

Data Collection

A year prior to the presentation of candidates for the KCSE examination, each secondary school in Kenya, through its respective district education office, submits student returns to the Kenya National Examinations Council. The returns give details of examinee background information. This information includes the codes that each of the candidates used in the KCPE examination, KCPE examination scores, the codes to be used by each of the examinees in the forthcoming KCSE examination, sex, and year of birth. Those examinees who wrote KCPE more than once had unique KCPE codes that made their identification possible.

For each secondary school, the Kenya National Examinations Council compiled and forwarded copies of 1991 KCSE examination data to the District Education Offices for filing and distribution. Each school's computer-printed KCSE examination data contained information on examinees' KCSE subject codes and grades, sex, and examinees' composite KCSE grades.

Two people, whose duty was to maintain examination records in the district education office, were involved in the data collection exercise. After photocopying the district's 1991 KCSE examination data, they were briefed by the researcher on how to retrieve and record the required information. They then embarked on the exercise under the guidance and supervision of the researcher, who cross-checked 10% of their work. First, an examinee's full name on the 1991 KCSE examination data list was read aloud. This name was then located on the KCSE returns list that was prepared by schools one year earlier. Using the name on the returns list, the examinee's code in the 1987 KCPE examination was read and recorded next to his/her name on the 1991 KCSE examination data list. This KCPE code was used to locate the examinees in the 1987 KCPE examination data list.
Once located in the KCPE data list, an examinee's KCPE examination data were copied next to his/her name in the KCSE examination data list. This sequence was repeated until all the examinees' KCSE and KCPE data were matched.

Data Analysis

In this study, data analysis was done by use of the ML2 computer software. This program, unlike similar ones on the market, has extensive data manipulation facilities and provides high-resolution graphics. The program is embedded in another software package called NANOSTAT, which allows data preparation and manipulation before, during, and after modelling. Because of its integration with NANOSTAT, ML2 is one of the most flexible of the packages on the market today (Arnold, 1992). However, the different packages for multilevel analysis produce similar results (Kreft, De Leew, & Kim, 1990).

The following data analytic techniques and models were used:

a) Inter-Correlation Matrix

Inter-correlations between variables were calculated to help in the investigation of the degree of relationship between the pupil-level variables being studied.

b) Distributions

One of the basic requirements in regression analysis is that variables be normally distributed. A serious violation of normality may lead to spurious results. Normal plots of variables and stem-and-leaf plots of mean scores were used to check the normality assumption. Skewness and kurtosis were also calculated to find out the nature of the tails and 'peakedness' of the distributions, respectively.

c) Intra-School Correlation

In order to justify the use of multilevel analysis, the intra-school correlation over the sample of schools was computed from the formula

ρ = σ0² / (σ0² + σe²)    (7)

where σ0² is the variance of the schools' intercepts after the predictor variable has been fitted, and σe² the within-school residual variation. There would be no need to proceed with multilevel analysis if the measurements within a school were no more alike than measurements from different schools (Goldstein, 1987, p. 13).
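Equation (7) is straightforward to compute once the two variance components are estimated; the values below are illustrative, not the thesis estimates:

```python
def intra_school_correlation(var_between, var_within):
    """Equation (7): the share of total variance lying between schools."""
    return var_between / (var_between + var_within)

# Illustrative variance estimates (invented for this sketch):
rho = intra_school_correlation(2.0, 6.0)   # 0.25: 25% of variance is between schools
```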
The magnitude of this correlation would indicate whether a portion of the variation in achievement is explained by the differences between schools.

d) Model Fitting

Five models were fitted to help in drawing validity inferences from the data. Table 1 shows the models and their uses.

Table 1
Models and their Uses

MODEL                          USE
1. Variance Components (VC)    To partition the variance in KCSE grades into within- and between-school components in order to determine the extent to which mean KCSE achievement levels differ across schools.
2. Random Coefficients (RC)    To determine the relationship between KCPE and KCSE.
3. Moderator (MOD)             To determine the impact of age, gender, and repetition on the KCPE-KCSE relationship.
4. Omnibus (OB)                To determine the order of importance of the moderator variables in their impact on the KCPE-KCSE relationship.
5. School Size (SS)            To account for the variation in intercepts and slopes across schools.

The following is a detailed description of how the models were fitted.

Variance Components Model

The variance components model was used to determine whether the mean KCSE achievement levels differed significantly across schools. This is a model with no independent variables at either the pupil or the school level. It was used to partition the variance in the criterion variable (KCSE) into within- and between-school components. The model is:

Yij = b0j X0 + eij    (8)

and

b0j = c00 + u0j    (9)

In this model, the intercept variable X0 has the value of 1 for every examinee. Each school has its own mean level of achievement, b0j, and these school means vary about the overall mean c00.

Random Coefficients Model

The random coefficients model has two parts, a within-school part and a between-school part. The within-school part relates an examinee's KCSE grade, Yij, to his/her KCPE deviation score from the sample grand mean. The within-school model (i.e., pupil-level) is

Yij = b0j + b1j (X1ij - X̄1..) + eij    (10)

where Yij is the 1991 KCSE grade for student i in school j, b0j is the mean KCSE grade for school j, b1j is the KCSE-KCPE achievement relationship (i.e., slope) in school j, X1ij is the 1987 composite KCPE score of student i in school j, X̄1.. is the KCPE grand mean, and eij is the error of estimate for student i in school j. Deviating KCPE scores from the sample grand mean leaves each of the schools' slopes in the KCSE-KCPE relationship invariant. However, the transformation changes each of the schools' intercepts in the KCSE-KCPE relationship. The between-school part (i.e., school-level) is

b0j = c00 + u0j    (11)

and

b1j = c10 + u1j    (12)

where c00 is the grand mean for KCSE scores across all schools, c10 is the mean slope for the KCSE-KCPE relationship pooled within all schools, u0j is the residual of school j on the KCSE mean achievement, and u1j is the residual of school j on the KCSE-KCPE slope. The parameters of fundamental concern here are:

i) the grand mean for KCSE scores across all schools (c00);
ii) the variance of the schools' intercepts (σ0²);
iii) the variance of the schools' slopes (σ1²);
iv) the average KCSE performance boost contributed by the KCPE scores, i.e., the mean slope for the KCSE-KCPE relationship pooled across all schools (c10).

In setting up this model, the predictor variable was added to the fixed part. The estimated mean within-school regression equation was used to test the hypothesis that c10 = 0 (i.e., that the mean KCSE-KCPE slope is zero). Considering that under the null hypothesis c10/[s.e.(c10)] has a t distribution (Bryk and Raudenbush, 1992), it was possible to determine whether the coefficient of KCPE in the KCSE-KCPE linear relationship could have occurred by chance alone.

Moderator Models

Selected pupil-level variables were added to the random coefficients regression model as explanatory variables, one at a time, to find out the impact of each one of them on the relationship between KCSE and KCPE.
The moderator variables were GENDER, AGE, and REPETITION.

Omnibus Model

An omnibus model involving all the selected pupil-level variables was fitted, first with raw scores of the explanatory variables, then with standardized scores for the same explanatory variables, giving a prediction equation with standardized weights. The prediction equation with standardized weights was used to determine the order of importance of the explanatory variables in their impact on the KCSE-KCPE relationship.

School Size Model

In an attempt to account for the variation in intercepts and slopes across schools, school size was used as an explanatory variable together with KCPE. The magnitudes of the estimates of parameter variance were compared with those in the random coefficients regression model to determine whether school size partly accounted for the variation in schools' intercepts and slopes.

Summary

Chapter 3 specified the variables used in this study. The criterion was the 1991 KCSE examination. The predictor was the 1987 KCPE examination. Moderator variables that may have an impact on the relationship between the KCSE and KCPE examinations were also discussed. These were age, gender, and repetition. School size was discussed as a variable that may help account for the differences in schools' intercepts and/or slopes across schools.

The population and sample used in the study were outlined. The population consisted of secondary schools that presented candidates in the 1991 KCSE examination in a single district in Kenya. Stratified random sampling was used to select 26 secondary schools whose examination data were used in the analysis. 1991 KCSE data were collected from computer printouts kept at the district education office. The data were matched with corresponding 1987 KCPE examination data to allow for analysis. The data analytic techniques employed in the study were outlined as well.
The techniques included checking the assumptions and fitting the variance components model, the random coefficients regression model, the moderator model, the omnibus model, and the school size model.

The next chapter presents the results obtained from the analysis of the data.

Chapter 4
RESULTS

The results given in this chapter were from an analysis of 1991 KCSE and 1987 KCPE examination data from a single district using the ML2 computer software. The distributions, correlations between variables, scatterplots, normality plots, and variability of scores are provided. The variance components model and the random coefficients regression model were used to examine the KCSE-KCPE relationship, thus providing the prediction equations relating the KCPE and KCSE examinations. These models were further extended to include other pupil-level and school-level variables in an attempt to find out the extent to which these variables affect the KCSE-KCPE relationship.

Distribution of the Sample by Schools and Gender

Table 2 shows the distribution of the sample by schools and gender. The sample contained ten boys' schools, six girls' schools, and ten mixed schools. However, some of the mixed schools presented candidates of the same gender. There were 190 girls and 591 boys in the sample, giving a total of 781 cases, with no missing data. Girls formed 24% of the total sample. The proportion of girls enrolled in secondary schools in Kenya in 1990 was about 40%, which means girls were underrepresented in the sample.
Out of the 26 schools in the sample, the smallest school formed 0.4% of the total sample and the largest school formed 9.0% of the total sample.

Table 2
Sample Distribution by Schools and Gender

SCHOOL ID   TYPE    BOYS   GIRLS   TOTAL   % BY TYPE
1           Boys      70       0      70
11          Boys      46       0      46
12          Boys      57       0      57
19          Boys      41       0      41
2           Boys      70       0      70      60.2
26          Boys      25       0      25
3           Boys      42       0      42
4           Boys      49       0      49
6           Boys      55       0      55
9           Boys      15       0      15
13          Girls      0      30      30
16          Girls      0      16      16
23          Girls      0      21      21      20.9
25          Girls      0      34      34
5           Girls      0      38      38
8           Girls      0      24      24
10          Mixed      9       6      15
14          Mixed     16       2      18
15          Mixed     14       1      15
17          Mixed      7       2       9
18          Mixed     14       5      19      18.9
20          Mixed      3       0       3
21          Mixed     20       3      23
22          Mixed      9       0       9
24          Mixed     24       6      30
7           Mixed      5       2       7
TOTAL               591     190     781     100.0

Range of Scores

Table 3 shows the minimum, maximum, and range of scores for all the examinees in the sample in three KCSE subjects, overall KCSE scores, six KCPE subjects, and KCPE composite scores.

Table 3
Minimum, Maximum and Range of Scores

EXAM               MIN.   MAX.   RANGE
KCSE*
 English              1      9       8
 Kiswahili            1     12      11
 Mathematics          1     12      11
 KCSE TOTAL TEST      2      9       7
KCPE
 English             20     95      75
 Kiswahili            1     82      81
 Mathematics         18     97      79
 SC&A                 1     88      87
 GHCR                 1     94      93
 ACHM                 1     91      90
 KCPE TOTAL TEST    172    533     361

* KCSE scores are based on letter grades.
SC&A = Science and Agriculture; GHCR = Geography, History, Civics, and Religious Education; ACHM = Primary Art and Craft, Home Science, and Music.

The results in Table 3 indicate that there were no obvious errors in scoring or recording of data because the distributions are within the expected ranges.

Means and Standard Deviations of Scores

Table 4 gives means and standard deviations of KCPE examination scores and three KCSE subject grades. The KCPE subject with the highest mean score was mathematics (MAT1), followed by English (ENG1) and then Kiswahili (KIS1). However, this order was reversed in KCSE, with the highest mean score in Kiswahili (KIS2), followed by English (ENG2) and then mathematics (MAT2).
The overall average marks at the national level in the 1987 KCPE mathematics, English, Kiswahili, and science and agriculture papers were 41%, 43%, 54% and 58% respectively (1988 KCPE Newsletter, KNEC).

Amongst the three KCPE subjects, mathematics had the highest variability, followed by Kiswahili and then English. This order was maintained in the three KCSE subjects, with mathematics having the highest variability, followed by Kiswahili and then English.

Table 4
Means and Standard Deviations of Scores

             PREDICTOR (KCPE)     CRITERION (KCSE)
             Mean      (SD)       Mean*    (SD)
English      59.7%     (12.1)     4.23     (1.69)
Kiswahili    45.8%     (15.2)     4.36     (2.26)
Mathematics  61.8%     (15.4)     3.59     (2.68)
SC&A         59.4%     (12.2)
GHCR         60.8%     (12.6)
ACHM         60.2%     (13.4)
TOTAL EXAM   58.0%     (57.9)

* KCSE means are based on letter grades.
SC&A = Science and Agriculture;
GHCR = Geography, History, Civics, and Religious Education;
ACHM = Art and Craft, Home Science, and Music.

Reliability Estimates

The reliabilities of the predictor and criterion were not available from the Kenya National Examinations Council. However, an alternative was to estimate the reliabilities by use of the Kuder-Richardson (KR21) formula, which is

KR21 = [k/(k - 1)][1 - µ(k - µ)/(kσ²)]    (13)

upon making certain assumptions. In the KR21 formula, k is the number of items on the test, µ is the mean total score, and σ² is the total score variance. Only the 1987 KCPE mathematics examination was used in the estimation of the reliability of the composite KCPE examination. The KCPE mathematics examination had 50 multiple-choice items, assumed to be of equal difficulty. The sample grand mean for KCPE mathematics was 61.8%. If each item was worth a point, the grand mean of 61.8% would be equivalent to 30.9 points (i.e., half of 61.8), with an adjusted total test score variance of 59.3 (i.e., one-quarter of 15.4²).
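Formula (13) is simple to evaluate directly. A sketch in Python (illustrative only; the figures are those given above: k = 50 items, adjusted mean 30.9, adjusted variance 59.3):

```python
def kr21(k, mean, variance):
    """Kuder-Richardson formula 21: [k/(k-1)] * [1 - mean*(k-mean)/(k*variance)]."""
    return (k / (k - 1)) * (1 - mean * (k - mean) / (k * variance))

# 1987 KCPE mathematics: 50 items, adjusted mean 30.9, adjusted variance 59.3
r = kr21(50, 30.9, 59.3)
print(round(r, 2))  # 0.82
```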
With these crude estimates, the reliability of the 1987 KCPE mathematics examination using KR-21 was found to be 0.82. This suggests that the entire 1987 KCPE examination was likely to have a reliability greater than 0.82.

The mean and variance of the 1991 KCSE mathematics examination were 4.36 and 7.17 respectively (N = 781). Considering that the KCSE mathematics examination had two papers of 24 items each, and assuming that the items were of equal difficulty, KR-21 gave a reliability estimate of 0.46. The reliability of the entire 1991 KCSE examination would therefore be higher than 0.46 because the total test was longer than the mathematics sub-test, and the longer the test, the higher the reliability. This reliability estimate is large enough to allow for the use of the 1991 KCSE examination data as a criterion variable.

Crocker and Algina (1986) suggested that the correlation between a predictor and a criterion is, in most cases, less than or equal to the square root of the product of the reliability estimates of the predictor and criterion. Using .82 and .46 as the reliabilities of the predictor and criterion respectively, an upper bound estimate of the correlation between the 1987 KCPE examination and the 1991 KCSE examination is 0.61. In other words, the correlation between the KCPE and KCSE examinations is not likely to exceed 0.61. This suggests that if a linear relationship exists between the KCPE and KCSE examinations, then KCPE scores may account for not more than approximately 37.2% of the variability in KCSE grades.

Inter-Correlation Matrix of Variables

Table 5 shows the inter-correlations between pupil-level variables. The correlation between KCPE composite scores and KCSE composite grades was 0.56. All correlations involving KCSE subjects and KCPE subjects were positive, with a range of 0.62 and a median of 0.45.
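The Crocker and Algina bound can be verified numerically. A sketch (illustrative, not part of the original ML2 analysis) using the two reliability estimates obtained above:

```python
import math

def max_validity(rel_predictor, rel_criterion):
    """Upper bound on a predictor-criterion correlation
    (Crocker & Algina, 1986): r_xy <= sqrt(r_xx * r_yy)."""
    return math.sqrt(rel_predictor * rel_criterion)

r_max = max_validity(0.82, 0.46)
print(round(r_max, 2))       # 0.61
print(round(r_max ** 2, 3))  # ~0.377, the ceiling on variance explained
```

Squaring the rounded bound of 0.61 gives the approximately 37.2% ceiling quoted in the text.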
These correlations are large enough to allow for meaningful regression analysis.

Table 5
Inter-Correlation Matrix of Student-Level Variables

(The matrix entries are not legible in this copy; diagonal entries are 1.00. The variables are ENG2, KIS2, MAT2, KCSE, ENG1, KIS1, MAT1, SC&A, GHCR, ACHM, KCPE, GNDR, AGE, and REPT.)

ENG2 = KCSE English;
KIS2 = KCSE Kiswahili;
MAT2 = KCSE Mathematics;
ENG1 = KCPE English;
KIS1 = KCPE Kiswahili;
MAT1 = KCPE Mathematics;
SC&A = KCPE Science and Agriculture;
GHCR = KCPE Geography, History, Civics, and Religious Education;
ACHM = KCPE Art and Craft, Home Science, and Music;
GNDR = Gender (coded 1 for girls & 0 for boys);
REPT = Repetition (coded 1 for repeaters & 0 for non-repeaters).

Scatterplots

Appendix B contains scatterplots of KCSE grades against KCPE scores. The scatterplots showed a linear relationship between KCSE and KCPE scores, thus justifying the use of linear regression analysis.

Normality of Distributions

One of the basic assumptions in regression analysis is that the dependent variable be normally distributed. The stem and leaf plot of school KCSE means, as well as the normal and box plots of the dependent variable, reveal the degree of violation of the normality assumption. Stem and leaf plots, and normal and box plots, for the 1991 KCSE mean grades and 1987 KCPE mean composite scores for the 26 sample schools are in Appendix C.

The stem and leaf plots, together with the normal and box plots, indicate that the distribution of sample school means for the KCPE examination was near normal. Fisher's measures of skewness and kurtosis for KCPE composite scores were -.43 and -.18 respectively, implying that the scores had a tolerable negative skewness and a tolerably heavier tail than the normal distribution.

KCSE English, Kiswahili and mathematics showed a tolerable positive skewness. The median grade for composite KCSE grades was in the centre of the box. The tails in the box plot were almost of equal length. The normal plots showed a linear trend.
These suggest that the distribution of KCSE composite grades was near normal, with a tolerable positive skewness of 0.41 and a Fisher's measure of kurtosis of 0.05.

School Means and Standard Deviations

Table 6 shows the KCSE and KCPE means and standard deviations for each of the 26 secondary schools in the sample. A scatterplot of the school means given in Table 6 is shown in Figure 1. No influential data points (outliers) were detected.

Table 6
School Means and Standard Deviations

SCHOOL            1     2     3     4     5     6     7     8     9    10    11    12    13
KCSE Mean Grade  5.16  5.10  5.38  5.53  4.58  4.42  4.57  3.83  4.40  3.60  6.52  5.23  3.73
KCSE S.D.        1.4   1.2   1.1   1.6   1.0   1.2   1.2   0.9   2.0   1.0   1.3   1.0   1.0
KCPE Mean Score  387   359   361   402   375   347   369   310   310   300   429   359   297
KCPE S.D.        42    43    29    38    32    42    53    41    62    36    33    28    43

SCHOOL           14    15    16    17    18    19    20    21    22    23    24    25    26
KCSE Mean Grade  4.28  3.27  3.56  4.22  3.79  4.90  2.00  3.17  4.22  3.76  4.53  4.65  4.88
KCSE S.D.        1.1   1.1   0.8   0.6   1.0   1.3   2.0   1.0   1.7   1.5   1.1   1.4   0.8
KCPE Mean Score  316   291   284   281   255   362   269   311   324   268   337   337   323
KCPE S.D.        56    61    34    51    39    28    18    50    63    53    44    26    45

Fig. 1: Scatterplot of School Means

It is evident from the scatterplot in Figure 1 that the school means for KCSE and KCPE have a strong positive linear correlation. This suggests that, on the average, secondary schools with low KCPE achievement levels tended to have low KCSE achievement levels, and those with high KCPE achievement levels tended to have high KCSE achievement levels.

Whereas there were schools like A (Kendu Muslim) and E (Kanga) whose performances were as expected (i.e., school A had a low KCPE mean (269 points) and a low KCSE mean (2.00 points), and school E had a high KCPE mean (429.39 points) and a high KCSE mean (6.52 points)), some schools like B (Kegonga), C (Arambe) and D (St. Lucy's, Raruowa) had low mean intake scores but ended up recording reasonably high mean KCSE scores as compared with other schools with low intake scores.

Predictive Models

In an attempt to address the research questions, five predictive models were fitted: the Variance Components Model (VC Model), the Random Coefficients Model (RC Model), the Moderator Model (MOD Model), the Omnibus Model (OB Model) for pupil-level variables, and the School Size Model (SS Model). These models mainly differ in the number and level of explanatory variables that are considered in a sequential order. The iterative generalized least squares (IGLS) convergence criterion was used in each of the models. Convergence was achieved in fewer than ten iterations in all cases.

Model 1: The VC Model

The VC Model was used to determine whether KCSE achievement levels varied significantly across schools. The model is

Yij = b0jX0 + eij    (14)

and

b0j = c00 + u0j    (15)

In this model, the intercept variable X0 has the value of 1 for every examinee. Each school has its own mean level of achievement, b0j, and these school means vary about the overall mean c00. The school-level residuals, u0j, are the deviations of each school's mean from the district average. Yij is the response variable of KCSE scores for pupil i in school j. Table 7 shows the results of the VC Model.

Table 7
Variance Estimates in the VC Model

PARAMETER      ESTIMATE   (S.E.)         t      p
SCHOOL LEVEL
  INTERCEPT    0.626      (0.195)        3.2    .002 *
PUPIL LEVEL
  INTERCEPT    1.535      (0.790E-01)#   19.4   .000 *

# 0.790E-01 = 0.0790
* Significant at .01 level

The between-school intercept variance is more than three times its standard error, suggesting that the average level of achievement in KCSE differed significantly across schools (t = 3.2, p < .01). It is also clear from Table 7 that most of the variation in KCSE grades was between pupils.
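The split of variance in Table 7 can be made explicit by computing the share that lies between schools. A sketch (illustrative Python, not the original ML2 output) using the two variance estimates above:

```python
def between_school_share(var_school, var_pupil):
    """Proportion of total variance at the school level
    (the intra-school correlation)."""
    return var_school / (var_school + var_pupil)

# Variance estimates from the VC Model (Table 7)
rho = between_school_share(0.626, 1.535)
print(round(rho, 2))  # 0.29: about 29% of KCSE variance lies between schools
```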
The maximum likelihood point estimate for the grand-mean KCSE achievement was 4.42 with a standard error of .17, indicating a 95% confidence interval of

4.42 ± 1.96(.17) = (4.09, 4.75).

To determine if the use of multilevel regression analysis was in order, the intra-school correlation over the sample of schools was computed from the formula

ρ = σ²0/(σ²0 + σ²e)    (16)

where σ²0 is the variance of the schools' intercepts, and σ²e is the within-school residual variation. The intra-school correlation, whose value is ρ = 0.29, represents the proportion of variance in KCSE between secondary schools. This indicates that about 29% of the variance in KCSE is between schools. This is a justification for the use of multilevel analysis (Goldstein, 1987, p. 13).

Model 2: The RC Model

The RC Model serves two purposes. First, it provides an estimate of the mean equation for the regression of KCSE scores on KCPE scores pooled within schools. Second, it tests the null hypothesis that the KCPE-KCSE relationship (i.e., the KCPE-KCSE slope) does not vary across schools.

According to Raudenbush and Bryk (1986), the RC Model for determining the required KCPE-KCSE relationship may be written as

Yij = b0j + b1j(X1ij - X1..) + eij    (17)

at pupil level and

b0j = c00 + u0j    (18)
b1j = c10 + u1j    (19)

at school level. Equation (17) is a within-school model that relates students' KCSE achievement to their respective KCPE scores when the KCPE scores are deviated from the grand mean. At pupil level, Yij is the response variable, the KCSE achievement for pupil i in school j; X1ij is the predictor variable, i.e., the KCPE achievement for pupil i in school j; X1.. is the KCPE grand mean; b0j is the KCSE mean achievement for school j; b1j is the KCPE-KCSE achievement relationship (i.e., KCPE-KCSE slope) for school j; and eij is a residual for pupil i in school j.
At school level (Equations (18) and (19)), c00 is the grand mean for KCSE achievement across schools, c10 is the mean slope for the KCSE-KCPE relationship pooled across schools, u0j is the effect of school j on the mean KCSE achievement level, and u1j is the effect of school j on the KCSE-KCPE slope. It is assumed that u0j and u1j are multivariate normally distributed, both with expected values of 0 (Bryk and Raudenbush, 1992).

The model was set by having the constant vector of 1's and the deviation scores of KCPE as explanatory variables. In this regression model, both intercepts and slopes are allowed to vary across schools. Table 8 shows the coefficient estimates in the RC Model.

Table 8
Coefficient Estimates in the RC Model

PARAMETER   ESTIMATE    (S.E.)        t      p
INTERCEPT   4.647       (0.884E-01)   52.6   .000 *
SLOPE       0.121E-01   (0.104E-02)   11.6   .000 *

* Significant at .01 level

Using the parameter estimates in Table 8, an average within-school linear regression equation relating KCPE and KCSE scores is

KCSEij = 4.647 + 0.0121(KCPEij - KCPE..)    (20)

In Equation (20), a change of 1 unit in the predictor variable (KCPE) is associated with a change of 0.0121 units in the criterion variable (KCSE), and the higher the KCPE examination score, the higher the KCSE examination score. In other words, a change of 83 points in KCPE (based on a 600-point interval scale) is associated with a change of 1 grade point in KCSE (based on a 12-point interval scale). From Table 8, the ratio of the observed intercept to its standard error gives a significant t = 52.6, p < .01. The ratio of the observed slope to its standard error also gives a significant t = 11.6, p < .01.
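Equation (20) amounts to a one-line predictor. A sketch (illustrative Python; the thesis computations were done in ML2) using the Table 8 estimates and the sample KCPE grand mean of 347.9 points:

```python
def predict_kcse(kcpe_score, grand_mean=347.9, intercept=4.647, slope=0.0121):
    """Average within-school prediction of a KCSE grade point (Equation (20))."""
    return intercept + slope * (kcpe_score - grand_mean)

print(round(predict_kcse(350), 1))  # 4.7
print(round(predict_kcse(280), 1))  # 3.8
```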
These results suggest that the observed intercept and slope in the KCPE-KCSE relationship could hardly have occurred by chance alone. Therefore, on the average, there is a significant positive linear relationship between KCPE and KCSE within schools. The relationship is substantial, equivalent to a correlation of approximately 0.56.

Equation (20) can be used to predict KCSE grades from given KCPE scores. For example, an examinee who scored 350 points in KCPE was likely to score

4.65 + 0.012(350 - 347.9) = 4.7 points, or C-,

in KCSE, if all other factors were held constant. However, a student with a KCPE score of 280 points was likely to score

4.65 + 0.012(280 - 347.9) = 3.8 points, or D+,

in KCSE, if all other factors were held constant.

Table 9 shows the between-school and between-pupil parameter variances in the KCPE-KCSE relationship. The estimated parameter variance for b0j (intercepts) was used to test whether the observed differences among schools in mean achievement levels could have occurred by chance alone.

Table 9
Variance/Covariance Estimates in the RC Model

PARAMETER          ESTIMATE    (S.E.)        t      p
SCHOOL LEVEL
  INTERCEPT        0.883       (0.917)       0.96   .173
  INTERCEPT/SLOPE  -.208E-02   (0.257E-02)   -.81   .213
  SLOPE            0.576E-05   (0.735E-05)   0.78   .221
PUPIL LEVEL
  INTERCEPT        1.329       (0.691E-01)   19.2   .000 *

* Significant at .01 level

The ratio of the between-school intercept variance to its standard error gives t = 0.96. This is non-significant at the .01 level, suggesting that the between-school variation in intercepts (KCSE mean achievements) could have occurred just by chance. The ratio of the between-school slope variance to its standard error gives t = 0.78, which is non-significant at the .01 level, implying that the relationship between KCPE and KCSE within schools (i.e., the slope) does not vary significantly across the population of schools. Thus, the regression lines for the 26 schools may, for all practical purposes, be considered parallel.
This suggests that the strength of the relationship between KCPE and KCSE is similar across schools. Any observed between-school differences in mean KCSE achievement levels after considering the schools' mean KCPE scores could be due to sampling error or other random factors. Similarly, any observed between-school differences in slope could be due to sampling error or other random factors. Based on the negative value of the intercept/slope covariance in Table 9, it can be inferred that the average slope in the KCPE-KCSE relationship decreases with increasing KCSE mean achievement levels.

The fact that the variation across schools in mean KCSE achievement levels is significant, other things being equal, and that the variation becomes non-significant when mean KCPE scores are considered, has a far-reaching impact on policy decisions that relate to the ranking of secondary schools based entirely on KCSE mean achievement levels. This issue is elaborated in Chapter 5.

Any linear relationship between two variables in a two-dimensional space is completely determined if both the intercept and slope in the relationship are known. Figure 2 is a plot of slopes against intercepts for the 26 secondary schools in the sample. Since the abscissa represents intercepts and the ordinate represents slopes, it is possible to write linear regression equations for each of the 26 secondary schools.

Fig. 2: Plot of Slopes against Intercepts

Individual school coordinates were identified from Figure 2 by setting the cursor on the required point, upon which the coordinates were displayed on the computer screen along with the serial number of the point. Using these coordinates, 26 regression lines representing the KCPE-KCSE relationship were drawn, one for each school. The regression lines are given in Figure 3.

Fig. 3: Predicted KCSE Grades vs KCPE Scores

Quadrant A in Figure 3 represents examinees with KCPE scores above the KCPE grand mean who succeeded in high school. Quadrant B represents examinees with KCPE scores below the KCPE grand mean who succeeded in high school. Quadrant C represents examinees with KCPE scores below the KCPE grand mean who did not succeed in high school. Quadrant D represents examinees with KCPE scores above the KCPE grand mean who did not succeed in high school.

It can be deduced from Quadrant C that those examinees who had KCPE scores below the KCPE grand mean were unlikely to succeed in high school. However, a number of examinees who had KCPE scores above the KCPE grand mean did not succeed in high school, as evidenced in Quadrant D.

Model 3: The MOD Model

Having concluded from the VC Model that KCSE mean achievement levels differed significantly across schools, and having concluded further from the RC Model that these differences became non-significant if mean achievement levels in KCPE for secondary schools were considered, an attempt was made to determine the separate roles of three pupil-level variables as moderators in the KCPE-KCSE relationship. The three pupil-level variables are the age of examinees at the time they took their last KCPE examination, the sex of examinees, and whether an examinee wrote KCPE once or more than once. The MOD Model may be written as

Yij = b0j + b1j(X1ij - X1..) + b2jX2ij + eij    (21)

at Level 1 and

b0j = c00 + u0j    (22)
b1j = c10 + u1j    (23)
b2j = c20 + u2j    (24)

at Level 2. At Level 1, Yij represents KCSE grades, X1ij represents KCPE scores, X1.. represents the schools' KCPE grand mean, X2ij represents the pupil-level variable which acts as a moderator in the KCPE-KCSE relationship (i.e., age, gender, or repetition), b0j is the mean KCSE achievement for school j, b1j is the KCPE-KCSE achievement relationship in school j, and b2j represents the mean coefficient of the moderator variable for school j.
At Level 2, c00 represents the overall mean KCSE achievement for all schools, u0j is the residual of school j on the mean KCSE achievement, u1j is the residual of school j on the mean slope in the KCPE-KCSE relationship, c10 is the mean slope for the KCPE-KCSE relationship pooled for all schools, and c20 represents the mean coefficient of the moderator variable pooled across all schools.

Age as a Moderator

The MOD Model was set by adding AGE as an explanatory variable in the within-school KCPE-KCSE relationship. Therefore, there were three explanatory variables in the model: the constant vector of 1's, KCPE scores, and age. The response variable was KCSE grades. Table 10 shows the parameter estimates for the MOD Model with X2 = AGE.

Table 10
Coefficient Estimates in the MOD Model with X2 = AGE

PARAMETER   ESTIMATE    (S.E.)        t      p
INTERCEPT   7.110       (0.428)       16.6   .000 *
KCPE        0.109E-01   (0.112E-02)   9.73   .000 *
AGE         -.163       (0.274E-01)   -6.0   .000 *

* Significant at .01 level

Using Table 10, the equation that represents the within-school KCSE-KCPE relationship when age is taken into account is:

KCSEij = 7.11 + 0.011(KCPEij - KCPE..) - 0.163(AGE)ij    (25)

Under this model, the ratio of the coefficient of AGE to its standard error gives a significant t = -6.0, p < .01. The coefficient of KCPE under this model also gives a significant t = 9.73, p < .01, implying that the coefficient of KCPE could not have occurred by chance alone. Consideration of age has, however, lowered the KCSE-KCPE slope from 0.012 to 0.011, a drop of approximately 8.3%. In other words, consideration of AGE as a moderator variable has changed the KCPE-KCSE relationship. As an illustration, consider two pupils aged 15 and 18 respectively who had the same KCPE score, and who were attending the same secondary school. Equation (25) tells us that, on the average, the pupil aged 15 was likely to score 0.5 points more in KCSE than the one aged 18.
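The two-pupil comparison follows directly from Equation (25): with the same school and the same KCPE score, only the AGE term differs, so the gap is 0.163 times the age difference. A sketch (illustrative Python; coefficients from Table 10):

```python
def kcse_age_model(kcpe_dev, age, intercept=7.11, slope_kcpe=0.011, coef_age=-0.163):
    """Equation (25): KCSE grade from a mean-deviated KCPE score and age."""
    return intercept + slope_kcpe * kcpe_dev + coef_age * age

# Same school, same KCPE score (deviation 0), ages 15 and 18
gap = kcse_age_model(0, 15) - kcse_age_model(0, 18)
print(round(gap, 2))  # 0.49, i.e. about 0.5 grade points in favour of the younger pupil
```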
It is clear from the equation that, in general, older pupils with the same KCPE grades as their younger counterparts attending the same secondary school tended to perform more poorly than their younger counterparts in the KCSE examination.

Gender as a Moderator

In an attempt to establish how gender moderates the KCPE-KCSE relationship, the MOD Model was fitted with X2 = GENDER. GENDER was dummy coded 0 for boys and 1 for girls. Table 11 shows the parameter estimates for this model at convergence using the IGLS approach.

Table 11
Coefficient Estimates in the MOD Model with X2 = GENDER

PARAMETER   ESTIMATE    (S.E.)        t      p
INTERCEPT   4.712       (0.938E-01)   50.2   .000 *
KCPE        0.120E-01   (0.106E-02)   11.3   .000 *
GENDER      -.245       (0.168)       -1.5   .078

* Significant at .01 level

Using Table 11, an equation that represents the within-school KCPE-KCSE relationship when the sex of examinees is considered is:

KCSEij = 4.71 + 0.012(KCPEij - KCPE..) - 0.245(GENDER)ij    (26)

In this model, the coefficient of KCPE has not changed much from what it was under the RC Model. The ratio of the coefficient of GENDER to its standard error gives t = -1.46, which is non-significant at the .01 level. It can therefore be inferred from Equation (26) that, comparing the KCSE mean achievement levels of boys and girls having the same KCPE grades who attended secondary schools having the same KCPE mean achievement levels, boys tended to outperform girls in KCSE by approximately 0.25 points, although this difference could have occurred by chance alone.

Repetition as a Moderator

In an attempt to determine how repetition moderates the KCPE-KCSE relationship, the pupil-level variable REPETITION was substituted for X2 in the MOD Model. This gives an equation in which the response variable is a linear function of KCPE and REP. The variable REP was coded 0 for non-repeaters and 1 for repeaters.
Table 12 shows the fixed parameter estimates for this model.

Table 12
Coefficient Estimates in the MOD Model with X2 = REP

PARAMETER    ESTIMATE    (S.E.)        t      p
INTERCEPT    4.682       (0.920E-01)   50.9   .000 *
KCPE         0.121E-01   (0.107E-02)   11.3   .000 *
REPETITION   -.243       (0.117)       -2.1   .025

* Significant at .01 level

Using Table 12, an equation that represents the linear relationship involving KCSE as a criterion variable and KCPE as a predictor, after taking into account the repetition of examinees, is:

KCSEij = 4.68 + 0.012(KCPEij - KCPE..) - 0.243(REP)ij    (27)

Equation (27) shows that the inclusion of REP, i.e., REPETITION, as a moderator variable hardly changed the KCPE-KCSE relationship. The coefficient of REP gives t = -2.1, which is non-significant at the .01 level. Holding other factors constant, repeaters of Class 8 were associated with lower KCSE mean achievement levels than non-repeaters, although this difference could have occurred by chance alone.

Model 4: The OB Model

Under the OB Model, all the selected pupil-level variables were fitted simultaneously. Two OB Models were fitted for the explanatory variables KCPE, AGE, GENDER, and REPETITION. The first employed raw scores of the explanatory variables. This model may be written as:

Yij = b0j + b1jX1ij + b2jX2ij + b3jX3ij + b4jX4ij + eij    (28)

at pupil level, with

bpj = cp0 + upj,  p = 0,...,4,  j = 1,...,26    (29)

at school level. In Equation (28), Yij is the KCSE grade for pupil i in school j; b1,...,b4 are the coefficients of X1,...,X4 respectively; X1,...,X4 are the four pupil-level variables KCPE, GENDER, AGE, and REPETITION, respectively; and eij is the error term. b0j is a constant representing the intercept. At school level (Equation (29)), each of the b-coefficients has a grand mean of cp0 and a residual upj. Table 13 shows the results of the OB Model for raw scores.

Table 13
Coefficient Estimates in the OB Model with Raw Scores

PARAMETER   ESTIMATE    (S.E.)        t      p
INTERCEPT   3.53        (0.650)       5.43   .000 *
KCPE        0.107E-01   (0.114E-02)   9.39   .000 *
AGE         -.164       (0.284E-01)   -5.8   .000 *
GENDER      -.389       (0.175)       -2.2   .018
REP         -.862E-01   (0.118)       -.73   .236

* Significant at .01 level

Using the results in Table 13, a prediction equation for KCSE grades from raw scores of the explanatory variables is:

KCSEij = 3.53 + 0.011(KCPEij) - 0.164(AGEij) - 0.389(GENDERij) - 0.086(REPij)    (30)

The second OB Model was fitted in order to determine the order of importance of the selected pupil-level explanatory variables in predicting KCSE grades. In this case, the variables KCSE, KCPE, GENDER, AGE, and REPETITION were standardized before running the model. The OB Model of standardized regression coefficients is given by:

KCSEij* = B0j + B1j(KCPE)ij + B2j(GENDER)ij + B3j(AGE)ij + B4j(REPETITION)ij + eij    (31)

at pupil level, with

Bpj = cp0 + upj,  p = 0,...,4,  j = 1,...,26    (32)

at school level. In Equation (31), KCSEij* represents the standardized KCSE grades; B1j,...,B4j represent the standardized coefficients of KCPE, GENDER, AGE, and REPETITION, respectively; and eij is the error term. At school level (Equation (32)), each of the B-coefficients has a grand mean cp0 and a residual upj. Table 14 shows the results of this analysis.

Table 14
Coefficient Estimates in the OB Model using Standardized Scores

PARAMETER    ESTIMATE   (S.E.)    t       p
INTERCEPT    -.043      (0.069)   -0.62   .270
KCPE         0.427      (0.046)   9.28    .000 *
GENDER       -.114      (0.051)   -2.24   .017
AGE          -.181      (0.031)   -5.84   .000 *
REPETITION   -.021      (0.029)   -0.72   .239

Note: The coefficient estimates are standardized.
* Significant at .01 level

Using the results in Table 14, a prediction equation for standardized KCSE grades using standardized scores of the predictor and three pupil-level explanatory variables is:

Yij* = -.043 + 0.427(KCPEij*) - 0.114(GENDERij*) - 0.181(AGEij*) - 0.021(REPETITIONij*)    (33)

where * signifies a standardized variable.
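Because the coefficients in Equation (33) are standardized, the relative importance of the moderators can be read off as the magnitudes of their weights. A sketch (illustrative Python; coefficients from Table 14):

```python
# Standardized coefficients of the three moderator variables (Table 14)
weights = {"AGE": -0.181, "GENDER": -0.114, "REPETITION": -0.021}

# Order of importance = magnitude of the standardized weight
ranked = sorted(weights, key=lambda v: abs(weights[v]), reverse=True)
print(ranked)  # ['AGE', 'GENDER', 'REPETITION']
```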
In Equation (33), the pattern of standardized weights indicates that of the three moderator variables considered in the KCPE-KCSE relationship, AGE had the greatest impact, followed by GENDER and then REPETITION.

Model 5: The SS Model

An attempt was made to account for the observed variation in mean achievement levels (intercepts) and slopes across schools. School size, a school-level variable, was used in this attempt. Small schools were coded 1, medium schools were coded 2, and large schools were coded 3. In order to set up the model, a cross-product of KCPE scores and secondary school size was created. This product was named KCPE*SIZE. Table 15 shows the variance/covariance estimates for the SS Model.

Table 15
Variance/Covariance Estimates in the SS Model

PARAMETER          ESTIMATE    (S.E.)        t
SCHOOL LEVEL
  INTERCEPT        0.699       (0.855)       0.82
  INTERCEPT/SLOPE  -.167E-02   (0.243E-02)   -.69
  SLOPE            0.484E-05   (0.702E-05)   0.69
PUPIL LEVEL
  INTERCEPT        1.33                      19.3    .000 *

* Significant at .01 level

In Table 15, the parameter variance of the schools' intercepts dropped from 0.883 under the RC Model (see Table 9) to 0.699 in the SS Model, a drop of about 21%. The slope variance also dropped, from 0.576E-05 under the RC Model to 0.484E-05 in the SS Model, a drop of 16%. These reductions suggest that variability in schools' KCSE achievement means for students with low/average KCPE scores is partly accounted for by secondary school size.

Table 16 shows the coefficient estimates in the SS Model.

Table 16
Coefficient Estimates in the SS Model

PARAMETER   ESTIMATE    (S.E.)        t       p
INTERCEPT   4.372       (0.238)       18.4    .000 *
KCPE        0.143E-01   (0.262E-02)   5.46    .000 *
SIZE        0.664       (0.453)       1.47    .077
KCPE*SIZE   -.143E-02   (0.130E-02)   -1.10   .141

* Significant at .01 level

The coefficient of SIZE in Table 16 suggests that an increase in secondary school size by a single stream is associated with an increase in KCSE mean achievement levels of approximately 0.66 points.
However, based on the negative coefficient of KCPE*SIZE, an increase in secondary school size by a single stream is associated with a drop of approximately 0.001 units in the KCSE-KCPE slope, implying a drop in the benefit from KCPE.

KCPE MATHEMATICS AS A PREDICTOR OF KCSE MATHEMATICS

In this section, two models were fitted to help determine the extent to which achievement in the KCPE mathematics examination predicts achievement in the KCSE mathematics examination. The two models are the variance components model and the random coefficients regression model. The role of three pupil-level variables as moderators in the KCPE-KCSE relationship in mathematics was also analyzed.

Variance Components Model

A VC Model was fitted for KCSE Mathematics (MAT2) as the response variable. This model helps in finding out whether achievement levels in KCSE mathematics vary across secondary schools. The model is given by

MAT2ij = b0j + eij    (34)
b0j = c00 + u0j    (35)

Under this model, each school has its own mean level of KCSE mathematics achievement, b0j, and these school means vary about the overall mean in mathematics achievement, c00. u0j is the residual at school level. Table 17 shows the parameter variances in this model.

Using Table 17, the ratio of the between-school intercept variance to its standard error gives t = 3.22, suggesting that mean achievement levels in KCSE mathematics differed significantly from school to school. The maximum likelihood point estimate for the grand-mean KCSE mathematics achievement was 3.03 with a standard error of .03, indicating a 95% confidence interval of

3.03 ± 1.96(.03) = (2.97, 3.09).

Table 17
Variance Estimates in the VC Model for KCSE Mathematics

PARAMETER      ESTIMATE   (S.E.)    t      p
SCHOOL LEVEL
  INTERCEPT    2.104      (0.653)   3.22   .002 *
PUPIL LEVEL
  INTERCEPT    4.888      (0.252)   19.4   .000 *

* Significant at .01 level

The RC Model

A random coefficients regression model was fitted for mathematics scores. This model is given by:

MAT2ij = b0j + b1j(MAT1ij - MAT1..) + eij    (36)

at pupil level and

b0j = c00 + u0j    (37)
b1j = c10 + u1j    (38)

at school level.

Table 18 shows the parameter estimates in the RC Model for achievement in mathematics.

Table 18
Parameter Estimates in the RC Model

PARAMETER   ESTIMATE   (S.E.)       t      p
INTERCEPT   3.200      (.203)       15.8   .000 *
MAT1        .720E-01   (.828E-02)   8.70   .000 *

* Significant at .01 level

Based on the parameter estimates in Table 18, the relationship between KCPE mathematics and KCSE mathematics is:

MAT2ij = 3.20 + 0.07(MAT1ij - MAT1..)    (39)

The ratio of the slope to its standard error is t = 8.7, suggesting that the slope is significantly different from zero at the .01 level. It may be concluded that KCPE Mathematics is a moderate predictor of KCSE Mathematics.

The parameter variances in the RC Model are given in Table 19. The between-school intercept variance in KCSE mathematics after considering the effect of KCPE mathematics was 0.584 (S.E. = 0.914), giving a t-value of 0.64, which is non-significant at the .01 level. This implies that whereas schools differed significantly in KCSE mean mathematics grades, these differences ceased to be significant when the schools' KCPE mean mathematics scores were considered, and any observed differences could have been due to chance.

Table 19
Variance/Covariance Estimates

PARAMETER          ESTIMATE    (S.E.)        t       p
SCHOOL LEVEL
  INTERCEPT        0.584       (0.914)       0.64    .264
  INTERCEPT/SLOPE  -.241E-01   (0.202E-01)   -1.19   .123
  SLOPE            0.837E-03   (0.467E-03)   1.79    .043
PUPIL LEVEL
  INTERCEPT        3.925       (.2041)       19.2    .000 *

* Significant at .01 level

Gender as a Moderator

The equation relating KCSE mathematics scores and KCPE mathematics scores when GENDER is considered is:

MAT2ij = 3.33 + 0.07(MAT1ij - MAT1..) - 0.35(GENDERij)    (40)

GENDER was coded 0 for boys and 1 for girls.
Equation (40) therefore suggests that for pupils with the same KCPE mathematics scores and attending schools with the same KCPE mean mathematics score, boys tended to outperform girls in the KCSE mathematics examination by approximately 0.35 points.

Age as a Moderator

The contribution of the age of an examinee at the time of taking KCPE to the KCSE-KCPE mathematics relationship can be deduced from the ML2 regression equation:

MAT2ij = 6.10 + 0.07(MAT1ij - MAT1..) - 0.19(AGE)ij    (41)

Equation (41) suggests that for pupils with the same KCPE score attending secondary schools with the same KCPE mean mathematics score, younger pupils tended to outperform their older counterparts in KCSE mathematics.

Repetition as a Moderator

Equation (42) is a regression equation involving KCSE mathematics scores, KCPE mathematics scores, and whether a candidate sat for KCPE once or more than once.

MAT2ij = 3.34 + 0.07(MAT1ij - MAT1..) - 0.55(REP)ij    (42)

Considering that REPETITION was dummy coded 1 for repeaters and 0 for non-repeaters, it is clear from Equation (42) that for constant KCPE mathematics scores, KCSE mathematics scores were higher for non-repeaters than for repeaters by approximately 0.55 of a point.

KCPE KISWAHILI AS A PREDICTOR OF KCSE KISWAHILI

The two basic models used in the prediction of KCSE mathematics scores from KCPE mathematics scores were used in the prediction of KCSE Kiswahili scores from KCPE Kiswahili scores.

The VC Model

A variance components model was fitted for secondary school Kiswahili (KIS2). This model is given by:

KIS2ij = b0j + eij    (43)

and

b0j = c00 + u0j    (44)

where KIS2ij represents KCSE Kiswahili grades, b0j is the KCSE Kiswahili mean grade for school j, c00 is the overall mean for KCSE Kiswahili, eij is the residual at pupil level, and u0j is the residual at school level. This model helps in finding out whether achievement levels in KCSE Kiswahili varied significantly across schools.
Table 20 shows the results for the variance components model for KCSE Kiswahili.

Table 20
Variance Estimates in the VC Model

PARAMETER     ESTIMATE   (S.E.)    t       p
SCHOOL LEVEL
  INTERCEPT   1.281      (0.408)   3.14    .002 *
PUPIL LEVEL
  INTERCEPT   3.856      (0.198)   19.47   .000 *
* Significant at .01 level

The between-school parameter variance of 1.281 (S.E. = 0.408) gives t = 3.14, p < .01. This indicates that there was significant variation in KCSE Kiswahili achievement levels across schools. The maximum likelihood point estimate for the grand-mean KCSE Kiswahili achievement was 4.22 points with a standard error of .24, indicating a 95% confidence interval of 4.22 ± 1.96(.24) = (3.75, 4.69).

The RC Model for Kiswahili

A random coefficients regression model was fitted for KCSE Kiswahili. This model is given by:

KIS2ij = b0j + b1j(KIS1ij - KIS1..) + eij                (45)

at pupil level and

b0j = c00 + u0j                                          (46)
b1j = c10 + u1j                                          (47)

at school level. Table 21 shows the fixed coefficients for KCSE Kiswahili as a function of KCPE Kiswahili.

Table 21
Coefficient Estimates for Kiswahili

PARAMETER   ESTIMATE    (S.E.)       t       p
INTERCEPT   4.28        (0.159)      26.92   .000 *
SLOPE       0.715E-01   (0.749E-02)  9.55    .000 *
* Significant at .01 level

The coefficient estimates under this model enable us to determine a linear relationship between KCPE Kiswahili (KIS1) and KCSE Kiswahili (KIS2). The equation is:

KIS2ij = 4.28 + 0.07(KIS1ij - KIS1..)                    (48)

The ratio of the slope to its standard error in the KCPE-KCSE Kiswahili relationship gives a significant t = 9.55, p < .01. KCPE Kiswahili is therefore a moderate predictor of KCSE Kiswahili.

Table 22 shows variance/covariance estimates for the RC Model for Kiswahili. The ratio of the intercept variance to its standard error gives a non-significant t = 1.45 at the .01 level, suggesting that the variation of intercepts in the KCPE-KCSE Kiswahili relationship could have occurred by chance.

Table 22
Variance Estimates in the RC Model for Kiswahili

PARAMETER          ESTIMATE    (S.E.)        t      p
SCHOOL LEVEL
  INTERCEPT        0.773       (0.532)       1.45   .080
  INTERCEPT/SLOPE  -.215E-01   (0.136E-01)   -1.58  .063
  SLOPE            0.803E-03   (0.386E-03)   2.08   .024
PUPIL LEVEL
  INTERCEPT        .905        (0.152)       19.11  .000 *
* Significant at .01 level

Similarly, the ratio of the slope variance to its standard error gives a non-significant t = 2.08 at the .01 level.

Gender as a Moderator in the Kiswahili Relationship

Parameter estimates of coefficients were used in the determination of the effect of gender on the KCSE-KCPE relationship in Kiswahili. Equation (49) shows this relationship:

KIS2ij = 4.15 + 0.07(KIS1ij - KIS1..) + 0.46(GENDERij)   (49)

With GENDER coded 0 for boys and 1 for girls, it can be deduced from Equation (49) that for examinees with the same KCPE grades in Kiswahili, girls tended to outperform boys in KCSE Kiswahili by approximately 0.46 points.

Age as a Moderator

The impact of AGE on the KCSE-KCPE Kiswahili relationship can be observed from the computed regression equation given by:

KIS2ij = 7.29 + 0.07(KIS1ij - KIS1..) - 0.20(AGE)ij      (50)

This equation suggests that for examinees with the same KCPE Kiswahili score attending secondary schools having the same KCPE Kiswahili mean grade, younger examinees tended to perform much better than older examinees in KCSE Kiswahili.

Repetition as a Moderator

Parameter estimates for fixed coefficients were used to write Equation (51). This equation represents the relationship between KCSE Kiswahili (KIS2) and KCPE Kiswahili (KIS1) after considering whether an examinee sat for KCPE once or more than once before joining secondary school:

KIS2ij = 4.30 + 0.07(KIS1ij - KIS1..)
- 0.12(REPij)                                            (51)

REPETITION was coded 0 for non-repeaters and 1 for repeaters, implying that for pupils with the same KCPE Kiswahili score attending secondary schools with the same KCPE mean grade in Kiswahili, non-repeaters outperformed repeaters in the KCSE Kiswahili examination.

OB Model for Composite Exam, Mathematics and Kiswahili

Table 23 shows the coefficient estimates in the OB Model for composite scores, mathematics scores, and Kiswahili scores. The predictive validity of the composite examination was moderate. Similarly, the predictive validities of the mathematics and Kiswahili sub-tests were also moderate.

Table 23
Coefficient Estimates in the OB Model

            KCPE   AGE    GENDER  REP    VALIDITY
COMPOSITE   .011   -.164  -.389   -.086  MODERATE
MATHS       .026   -.190  -.706   -.072  MODERATE
KISWAHILI   .071   -.200  0.365   -.116  MODERATE

Summary of Results

The data used in this study were for 781 examinees in 26 secondary schools drawn from one district in Kenya whose 1987 KCPE results were most representative of the province in which the district is located. Scatterplots showed that the relationship between achievement in KCPE and achievement in KCSE could be approximated by a linear function. Histograms and normal plots for the predictor and criterion variables suggested that the distributions of scores were appropriate for regression analysis. The inter-correlations between variables were moderate to large. A plot of KCSE school means against KCPE school means revealed no influential data points. Thus, the data were such that multiple regression analysis could be meaningfully used to study the prediction relationships.

Several models were fitted to examine the variation of achievement levels across schools as well as the relationship between the KCPE and KCSE examinations. The models revealed that secondary schools differed significantly in KCSE achievement levels, but this difference became statistically non-significant when the schools' mean achievement levels in KCPE were considered. The KCSE-KCPE slope varied from school to school, although this variation remained non-significant. This implies that schools' regression lines for predicting KCSE from KCPE are, for all practical purposes, parallel.

The role of pupil-level and school-level variables in the KCSE-KCPE relationship was also examined. From the results, younger examinees had higher achievement levels than their older counterparts, boys outperformed girls, and repetition had a negative effect on the predictive relationship of KCPE and KCSE. Whereas the variations across schools in mean achievement and in regression slopes were non-significant, secondary school size helped in explaining the observed differences across schools. It was found that the bigger the school, the higher the achievement mean and the lower the slope in the KCSE-KCPE relationship.

In mathematics, boys had a higher achievement level than girls, younger examinees outperformed their older counterparts, and repeaters performed below non-repeaters. In Kiswahili, girls had a higher achievement level than boys, younger examinees outperformed their older counterparts, and repeaters of Class 8 performed below non-repeaters. Composite KCPE was found to be a valid predictor of composite KCSE. However, the validity was only moderate, with a correlation of .56 between the two composite examinations. Similarly, KCPE mathematics and Kiswahili were found to be valid predictors of KCSE mathematics and Kiswahili, respectively. The next chapter discusses the principal findings of the study.

Chapter 5
DISCUSSION

This final chapter presents the important findings that address the research questions of this study. The implications of these findings in the context of educational policy that may be useful to decision makers are discussed. The findings are considered important in prediction research because the study involved the use of hierarchical linear modelling, a linear regression approach that takes account of differences between units at one level (pupils in this case) and differences between units at another level (schools in this case).

Validity of KCPE

It was found that KCPE scores had a moderate positive linear relationship with KCSE grades, with a correlation of 0.56 between them. The proportion of variance in KCSE grades explained by KCPE scores was, therefore, 31%. In other words, 31% of the variance in pupils' KCSE grades may be attributed to differences among the pupils' KCPE scores.

The variance components model revealed a significant variation in KCSE grades across schools. However, when the schools' mean achievement levels in KCPE were considered, the variation across schools in KCSE means became non-significant. The slope in the KCSE-KCPE relationship of 0.012 was found to be significant. A change of 83 points in KCPE (measured on a 600-point equal-interval scale) was found to be associated with a change of 1 grade in KCSE (measured on a 12-point equal-interval scale). Secondary schools with low KCSE mean grades were generally associated with low KCPE mean scores; those with high KCSE mean grades were generally associated with high KCPE mean scores.

The between-school variation in the KCSE-KCPE relationship (slope) was non-significant, suggesting that, other than differences due to chance, the KCPE-KCSE relationship (slope) was more or less the same for the population of schools from which the sample was drawn.
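Two of the figures above follow from simple arithmetic on the reported estimates; a quick check (all values taken from the text):

```python
# Proportion of KCSE variance explained by KCPE: square the validity coefficient
r = 0.56
print(round(r ** 2 * 100))   # -> 31 (percent of variance explained)

# With a raw slope of 0.012 KCSE grades per KCPE point, the KCPE change
# associated with one full KCSE grade is the reciprocal of the slope:
slope = 0.012
print(round(1 / slope))      # -> 83 (KCPE points per KCSE grade)
```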
A statistically significant KCSE-KCPE slope, a non-significant variation in the KCSE-KCPE relationship (slope) across schools, and a moderate positive linear correlation between KCPE and KCSE are indicators that KCPE is a valid predictor of KCSE, although the validity is only moderate.

Whereas there were no significant differences in KCSE-KCPE slopes across secondary schools, schools with low KCSE means tended to have higher slopes than schools with high KCSE means. The finding that schools with higher KCSE means benefitted less from KCPE than schools with lower KCSE means could be attributed to the regression effect. Unless r = 1.0 or -1.0, all predictions of a criterion variable from a predictor variable involve a regression toward the mean. In other words, pupils in secondary schools whose KCPE means are lower tend to have more room for improvement than their counterparts in schools with higher KCPE means. Another possible explanation is that pupils in schools with lower KCPE means are usually aware that they are academically weak. For this reason, they tend to work much harder in order to succeed.

Reliability affects validity coefficients. Since the predictor and criterion were not perfectly reliable, the moderate validity found in the KCPE-KCSE relationship is a lower bound.

Influence of Age

The results showed that the influence of the age at which KCPE was taken on the KCPE-KCSE relationship was significant. As a moderator variable, it was associated with a drop in the KCSE-KCPE slope from 0.012 to 0.011, a drop of 8.3%. This means that the use of age as a moderator variable tends to weaken the relationship between KCPE and KCSE, thereby making prediction poorer. Results further showed that younger pupils generally had higher achievement levels than older pupils.

Influence of Gender

Gender had a non-significant influence on the KCPE-KCSE relationship (slope). However, boys generally showed higher mean achievement levels in KCSE than girls, with boys scoring an average of 0.245 points above girls. Boys also outperformed girls in KCSE mathematics, scoring an average of 0.35 points above girls. However, girls turned out to be better in KCSE Kiswahili than boys, scoring an average of 0.46 points above boys. The finding that boys excelled over girls in KCSE mathematics is similar to that of Fennema and Leder (1990), who found that from the beginning of secondary schooling, boys frequently outperformed girls in mathematics.

Influence of Repetition

Repetition of Class 8 had a non-significant influence on the KCPE-KCSE relationship (slope). However, non-repeaters had higher mean achievement levels (intercepts) than repeaters. Non-repeaters scored an average of 0.24 points above repeaters in KCSE. This trend was maintained in the KCPE-KCSE mathematics and KCPE-KCSE Kiswahili relationships. Non-repeaters of Class 8 scored an average of 0.55 points above repeaters in KCSE mathematics and 0.11 points above repeaters in KCSE Kiswahili. The result that repetition does not help in improving achievement levels is similar to the findings of Niklason (1987).

Influence of Secondary School Size

The variability in schools' KCSE achievement means for average KCPE students was found to be partly accounted for by secondary school size. Similarly, the variability in schools' slopes was found to be accounted for in part by school size. Larger schools had higher mean achievement levels in KCSE (intercepts) than smaller schools, although the difference was not significant. This could be attributed to the fact that larger schools tend to have more established physical and human resources than are normally found in smaller schools. The larger, established schools receive more financial aid from the government in the form of grants than the smaller, less established ones. Additionally, such "maintained" schools select some of the best candidates, leaving only mediocre candidates for the "non-maintained" schools. "Maintained" schools are schools that receive financial aid from the central government in the form of grants. This further adds to the observed differences in mean achievement levels between the larger and smaller schools. Therefore, such secondary schools are not comparable to other schools because selection is already biased.

Another possible explanation for the differences in mean achievement levels between larger and smaller schools is that larger schools are normally popular choices for pupils who expect to score higher in KCPE. These pupils usually have the notion, which turns out to be true, that larger schools are more established and that this may increase their chances of succeeding in secondary school. Most of the larger schools therefore tend to select pupils with higher KCPE scores than smaller schools do. The finding that larger secondary schools tend to have higher mean achievement levels than smaller schools is similar to the finding of Kimble (1976).

In summary, the predictive validity of KCPE was found to be moderate, with a correlation of 0.56 between KCPE and KCSE.

Implications for Educational Policy

This study has revealed that KCPE is a valid predictor of KCSE, although the validity is only moderate. This is a desirable outcome because secondary education is supposed to cater not only to the interests of those who may want to go on to further education, but also to the majority of candidates who opt to join the job market after completing secondary school. For this reason, the use of KCPE as a selection instrument for secondary school admission may be continued. However, it is worth noting that of the three pupil-level variables considered as possible moderators in the KCPE-KCSE relationship, only age had a significant influence on the relationship. It may therefore be necessary to ensure that pupils who are at a particular grade level do not, on average, differ significantly in chronological age.
This may improve the predictive validity of KCPE.

KCPE mathematics and KCPE Kiswahili were also found to have moderate predictive validity with respect to KCSE mathematics and KCSE Kiswahili, respectively.

The fact that secondary schools showed significant variations in KCSE means, and that these variations became non-significant when the schools' KCPE means were considered, has important policy implications for the manner in which secondary schools are normally ranked based only on their KCSE means. Such ranking may be spurious, and it is recommended that it be discontinued or used with caution. Rather, each secondary school's KCPE mean should be considered as a baseline before ranking of secondary schools is done. For example, in the event that two secondary schools have the same KCSE mean but different KCPE means, the one with the higher KCPE mean is likely to be of lower rank than the one with the lower KCPE mean. In such a situation, the school with the lower KCPE mean may be deemed to have attained a higher achievement gain than the one with the higher KCPE mean.

The fact that boys generally showed higher achievement levels than girls could be an indicator that the curriculum or the examinations are biased in favour of boys. This in effect may jeopardize government efforts in the struggle towards providing equal educational opportunities to both males and females. Alternatively, it may be that education for females is simply lagging behind that for males, particularly in science education. There seems to be a belief, very strongly embedded in the system, that mathematics and science are not suitable for girls and are not reconcilable with the demands of family life (Ndunda, 1990). Such negative attitudes towards girls' ability by teachers and society in general may hinder girls' progress in school.

Repetition of Grade 8 seemed not to improve achievement in secondary school. This could be explained by the fact that those who repeated Grade 8 were, in most cases, academically weak. Repetition of Grade 8 may therefore have helped such students gain entry into secondary school, but with the consequence that the same students, on average, may end up failing in secondary school. These outcomes suggest that the policy prohibiting repetition at the primary school level should be further enforced in order to make good use of scarce resources because, as found in the present study, repetition of Class 8 hardly improved mean achievement levels in secondary school.

Based on the outcome that larger schools tended to outperform smaller schools, it is recommended that rather than building more small, ill-equipped schools, priority attention be given to expanding the existing facilities of smaller schools, with a view to increasing their respective enrollments to a capacity of three streams. This is because, in this study, schools with three streams outperformed those with either one or two streams. Such a move would be economically prudent for Kenya.

Weaknesses and Strengths of the Study

Researchers in the social sciences are rarely able to obtain information on all the factors that may influence a predictor-criterion relationship. Using predictors as explanatory variables may not account for all the variance in KCSE. Whenever nonexperimental designs are used and the predictor variables fail to account for all of the variance in the criterion, inferences about the effects of the predictors on that criterion may be biased. The present study is no exception.

A number of psychometric characteristics could have affected the results of this study. Of major concern are the effects of restriction of range and criterion contamination. The effect of restriction of range arises from the fact that those examinees who failed the predictor were not part of the sample in the study. This is a problem that is difficult to eliminate in a predictive validity study of this nature. Restriction of range tends to lower the correlation coefficient between the predictor and the criterion variables. Criterion contamination was seemingly inevitable because most teachers knew how their students had performed in KCPE. This might have influenced their teaching.

Another important limitation of the study, which turns out to be a limitation of most predictive studies, is the possibility of decreasing predictive validity across time (Hulin, Henry & Noon, 1990). This is so because student abilities are not fixed, but dynamic. If predictive validities vary randomly across time, and in excess of random fluctuations, then there may be no linear factor involved in the variability.

Socioeconomic background is a useful variable in most educational studies of this nature. However, this study could not incorporate students' socioeconomic background due to a lack of appropriate records. Other weaknesses of the study include the inability to fit a prediction model for English due to failure of convergence, the inability to obtain other school-level and pupil-level variables that might have had an influence on the relationship, and the inability to calculate the actual reliability of the predictor and criterion.

The major strength of the present study is the use of HLM, which takes the hierarchical nature of educational data into account.

Future Research

Future research on the predictive validity of KCPE should be a large-scale national study making use of three-level hierarchical linear modelling. The levels to be considered may be pupil level, school level, and district level. Such a study may provide comprehensive and valuable information about the predictive validity of KCPE as well as the relative achievement levels between pupils, schools, and districts.
Such a study should incorporate more school-level variables.

In conclusion, the findings of this study showed that KCPE is a valid predictor of KCSE, although the validity is only moderate. Whenever examinations are used as selection instruments, their predictive validity should be studied empirically from time to time in order to make possible improvements where necessary.

REFERENCES

Aiken, L. R. (1971). Intellective variables and mathematics achievement: Directions for research. Journal of School Psychology, 9, 201-212.
Anderson, C. S. (1982). The search for school climate: A review of the research. Review of Educational Research, 52(3), 368-420.
Arnold, C. L. (1992). Methods, plainly speaking: An introduction to hierarchical linear models. Measurement and Evaluation in Counseling and Development, 25, 58-85.
Baron, R. M. and Kenny, D. A. (1986). The moderator-mediator distinction in social psychological research: Conceptual, strategic, and statistical considerations. Journal of Personality and Social Psychology, 51(6), 1173-1182.
Borg, W. R. and Gall, M. D. (1983). Educational research: An introduction. Longman.
Briggs, L. D. (1968). The impact of failure on elementary school pupils (Doctoral dissertation, North Texas State University, 1966). Dissertation Abstracts, 27, 2719-A. (University Microfilms No. 67-2571)
Brophy, J. (1985). Interactions of male and female students with male and female teachers. In Gender influences in classroom interaction (pp. 115-142). N.Y.: Academic Press.
Bryk, A. S. & Driscoll, M. E. (1988). The school as community: Theoretical foundations, contextual influences, and consequences for students and teachers. Madison, WI: National Center on Effective Secondary Schools.
Bryk, A. S. and Raudenbush, S. W. (1992). Hierarchical linear models: Applications and data analysis methods. Sage Publications.
Chansky, N. M. (1964). Progresses of promoted and repeating grade 1 failures. Journal of Experimental Education, 32, 225-237.
Choppin, B. H. (1969). The relationship between achievement and age. Educational Research, 12(1), 22-29.
Coeffield, W. H., & Bloomers, P. (1956). Effects of non-promotion on educational achievement in the elementary school. Journal of Educational Psychology, 47, 235-250.
Cohen, J. & Cohen, P. (1983). Applied multiple regression analysis for behavioral sciences (2nd ed.). Lawrence Erlbaum Associates.
Coladarci, A. (1983). Paradise regained: An apodictic analysis of the relationship between school size and public achievement. Research in Rural Education, 2(2), 79-82.
Cook, W. W. (1940). Some effects of the practice of non-promotion of pupils of low achievement. Washington, DC: American Educational Research Association.
Corbett, H. D. & Wilson, B. (1989). Raising the stakes in statewide mandatory minimum competency testing. Philadelphia, PA: Research for Better Schools Inc. (ERIC Document Reproduction Service No. ED 338641)
Dwyer, C. A. (1973). Sex differences in reading: An evaluation and a critique of current theories. Review of Educational Research, 43, 455-467.
Ebel, R. L. (1982). The practical validation of tests of ability. Paper presented at the NAEP Conference on Large-Scale Assessment, Boulder, Colorado: Educational Research Association.
Eisemon, T. O. & Schwille, J. (1991). Primary schooling in Burundi and Kenya: Preparation for secondary education or self-employment? The Elementary School Journal, 92(1), 23-39.
Eshiwani, G. S. (1985). The education of women in Kenya, 1975-1984. Kenyatta University College, Nairobi.
Fennema, E. and Leder, G. C. (1990). Mathematics and gender. Teachers College Press.
Ferguson, G. A. (1981). Statistical analysis in psychology and education. McGraw-Hill.
Fox, L. H. and Cohn, S. J. (1980). Sex differences in the development of precocious mathematical talent. In Women and the mathematical mystique (pp. 94-111). Baltimore: Johns Hopkins University Press.
Goldstein, H. (1987). Multilevel models in educational and social research. Oxford University Press.
Government Printer, Nairobi (1976). Report of the National Committee on Educational Objectives and Policies.
Government Printer, Nairobi (1989). Development Plan (1984-88).
Government Printer, Nairobi (1990). Statistical Abstract.
Haislett, J. and Hafer, A. (1990). Predicting success in engineering students during the freshman year. Career Development Quarterly, 39(1), 86-95.
Harnisch, D. & Archer, J. (1986). Mathematics productivity and educational influences for secondary students in Japan, India and the United States. Paper presented at the annual meeting of the American Educational Research Association (70th, San Francisco, CA, 1986).
Hedges, L. V. (1982). Fitting continuous models to effect size data. Journal of Educational Statistics, 7, 245-270.
Heyneman, S. P. (1987). Uses of examinations in developing countries: Selection, research, and educational sector management. International Journal of Educational Development, 7, 251-263.
Holmes, C. T. and Matthews, K. M. (1984). The effects of nonpromotion on elementary and junior high pupils: A meta-analysis. Review of Educational Research, 54(2), 225-236.
Hulin, C. L., Henry, R. A. & Noon, S. L. (1990). Adding a dimension: Time as a factor in the generalizability of predictive relationships. Psychological Bulletin, 107(3), 328-340.
Husen, T. (1967). International study of achievement in mathematics. New York: Wiley.
Jackson, G. B. (1975). The research evidence on the effects of grade retention. Review of Educational Research, 45(4), 613-635.
Jacobsen, S. S. (1990). Identifying children at risk: The predictive validity of kindergarten screening measures. Ph.D. thesis, University of British Columbia.
Kenya Education Commission Report. (1964). Government Printer, Nairobi.
Kenya National Examinations Council. (1988). K.C.P.E. Newsletter.
Kellaghan, T. & Greaney, V. (1992). Using examinations to improve education: A study in fourteen African countries. World Bank Technical Paper Number 165, Africa Technical Department Series.
Kimble, J. W. (1976). Basic quality of secondary education in rural Montana. Bulletin 685. (ERIC Document Reproduction Service No. ED 212418)
Kreft, I. G. G., De Leeuw, J., & Kim, K. (1990). Comparing four different statistical packages for hierarchical linear regression: GENMOD, HLM, ML2, and VARCL (CSE Technical Report 311, 70-71). Los Angeles, CA: UCLA Center for Research on Evaluation, Standards, and Student Testing.
Lenarduzzi, G. P. & McLaughlin, T. F. (1990). The effects of nonpromotion in junior high school on academic achievement and scholastic effort. Unpublished manuscript.
Longford, N. T. (1989). Fisher scoring algorithm. In R. D. Bock (Ed.), Multilevel analysis of educational data (pp. 297-310). San Diego, CA: Academic Press.
McAfee, J. K. (1981). Toward a theory of promotion: Does retaining students really work? Paper presented at the meeting of the American Educational Research Association, Los Angeles, CA.
McMillan, J. H. & Schumacher, S. (1984). Research in education: A conceptual introduction. Little, Brown and Company (Canada) Limited.
Messick, S. (1980). Test validity and the ethics of assessment. American Psychologist, 35(11), 1012-1027.
Ndunda, M. M. K. (1990). Because I am a woman: Young women's resistance to science careers in Kenya. M.Ed. thesis, Queen's University, Canada.
New York State Department, Bureau of School Programs Evaluation (1976). Which school factors relate to learning? Summary of findings of three sets of studies. Albany, N.Y.
Niklason, L. B. (1987). Do certain groups of children profit from a grade retention? Psychology in the Schools, 24, 339-345.
Paterson, L. & Goldstein, H. (1991, October). New statistical methods for analysis of social structures: An introduction to multilevel models. Multilevel Modelling Newsletter, p. 6.
Rasbash, J., Prosser, B. & Goldstein, H. (1989). ML2 software for two-level analysis: Users' guide. Institute of Education, University of London.
Raudenbush, S. W. (1988). Educational applications of hierarchical linear models: A review. Journal of Educational Statistics, 13, 85-116.
Raudenbush, S. W. & Bryk, A. S. (1986). A hierarchical model for studying school effects. Sociology of Education, 59, 1-17.
Reinherz, H. & Griffin, C. L. (1971). The treadmill of failure. Quincy, Mass.
Rutter, M., Maughan, B., Mortimore, P., Ouston, J., & Smith, A. (1979). Fifteen thousand hours. London, England: Open Books.
Simon, R. J. & Danna, M. J. E. (1990). Gender, race, and the predictive validity of the LSAT. Journal of Legal Education, 40(4), 525-529.
Smith, M. L. & Shepard, L. A. (1987). Effects of kindergarten retention at the end of first grade. Psychology in the Schools, 24, 346-357.
Stanley, J. C. & Hopkins, K. D. (1972). Educational and psychological measurement and evaluation. Englewood Cliffs, N.J.: Prentice-Hall.
Stroud, J. B. & Lindquist, E. F. (1942). Sex differences in achievement in the elementary and secondary schools. Journal of Educational Psychology, 33, 657-667.
Tagiuri, R. (1968). The concept of organizational climate. In R. Tagiuri & G. H. Litwin (Eds.), Organizational climate: Exploration of a concept. Boston: Harvard University, Division of Research, Graduate School of Business Administration.
System dogged by controversy. (1993, April 16). The Weekly Review, p. 10.
Walsh, D. J. (1988). The two-year route to first grade: The use of testing to validate decisions based on class and age. Paper presented at the annual meeting of the American Educational Research Association, New Orleans.
Werdelin, I. (1961). The geometrical ability and the space factor in boys and girls. Lund, Sweden.
Willms, J. D. (1985). Catholic-school effects on academic achievement: New evidence from the High School and Beyond follow-up study. Sociology of Education, 58, 98-114.
Wyatt, J. F. & Gay, J. D. (1984).
The educational effects of different sizes and types of academic organization. Oxford Review of Education, 10(2), 211-223.
Population Studies No. 106 (1989). World Population Prospects, 1988. Department of International Economic and Social Affairs, United Nations, New York.
Yates, B. J. (1991). A comparison of effectiveness ratings of selected principals and NASSP assessment center ratings. Paper presented at the annual meeting of the American Educational Research Association.

APPENDICES

APPENDIX A
Mathematics Grade Statistics by Gender

[Table of KCPE mathematics grade counts and percentages (grades A through E) for boys, girls, and all 130,044 candidates; the table itself is not legibly recoverable from the source scan.]

APPENDIX B
Scatterplots of KCSE Grades Against KCPE Scores

Figure B-2: KCSE Kiswahili against KCPE Kiswahili
Figure B-4: Composite KCSE against Composite KCPE

APPENDIX C
KCSE and KCPE Stem and Leaf, Box, and Normal Plots

Figure C-1: Stem and Leaf Plot for KCSE School Means

Leaf unit is 0.100
Lower limit
2.000   1 : 0
2.500   0 :
3.000   2 : 12
3.500   6 : 567778
4.000   5 : 22244
4.500   6 : 555689
5.000   4 : 1123
5.500   1 : 5
6.000   0 :
6.500   1 : 5

Figure C-2: Stem and Leaf Plot for KCPE School Means

Leaf unit is 10.0
Lower limit
240.0   1 : 5
260.0   2 : 66
280.0   4 : 8899
300.0   5 : 00111
320.0   4 : 2233
340.0   3 : 455
360.0   4 : 6667
380.0   1 : 8
400.0   1 : 0
420.0   1 : 2

Figure C-3: Box Plots for KCSE and KCPE
Figure C-4: Normal Plot for KCSE English
Figure C-5: Normal Plot for KCSE Kiswahili
Figure C-6: Normal Plot for KCSE Mathematics
Figure C-7: Normal Plot for KCPE English
Figure C-8: Normal Plot for KCPE Kiswahili
Figure C-9: Normal Plot for KCPE Mathematics
Figure C-10: Normal Plot for KCPE Science and Agriculture
Figure C-11: Normal Plot for KCPE GHCR
Figure C-12: Normal Plot for KCPE ACHM
Figure C-13: Normal Plot for KCPE Composite Score
Figure C-14: Normal Plot for KCSE Composite Score

