Creator: Lacroix, Serge
Department: Educational and Counselling Psychology, and Special Education (ECPS), Department of (Education, Faculty of)
Degree: Doctor of Philosophy - PhD
Degree grantor: University of British Columbia
Campus: UBCV
Issued: 2008 (deposited 2008-10-10)
Data provider: DSpace
Record: https://circle.library.ubc.ca/rest/handle/2429/2575?expand=metadata
Extent: 1107994 bytes
Format: application/pdf

THE BILINGUAL ASSESSMENT OF COGNITIVE ABILITIES IN FRENCH AND ENGLISH

by

SERGE LACROIX

B.A., Laval University, 1985
M.Ps., Laval University, 1992

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in THE FACULTY OF GRADUATE STUDIES (School Psychology)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

October 2008

© Serge Lacroix, 2008

Abstract

In this study the role that language plays in the expression of intelligence, bilingualism, and the process of assessing selected cognitive abilities was explored. The primary purpose of the study was to determine if individuals who are allowed to move from one language to another when they provide responses to test items produce results that are different than those obtained by bilingual examinees assessed in one language only. The results indicate that the Experimental Group obtained significantly higher results than the Control Group on all the tests and subtests used. The Experimental Group code-switched more frequently and the examiners only code-switched with that group. The frequency of the code-switching behaviours explains, in great part, all the differences noted in the results as very few other sources of differences were identified, even when groups were compared on sex, first language and relative proficiency in French and in English.

Table of Contents

Abstract
Table of Contents
List of Tables
Acknowledgement
Dedication
Chapter I Introduction
    Definition of Key Terms
    Purpose of the Study
    Research Questions
    Significance of the Study
Chapter II Review of the Literature
    Theory of Intelligence and the Measure of Cognitive Abilities
        Definitions and theories of intelligence
        The Cattell-Horn-Carroll three-stratum theory
    Intelligence Testing and Non-Discriminatory Testing Procedures
    The Linguistic Demand Aspect in Tests
    Best Practices in Non-Discriminatory Assessments of Bilinguals
    Bilingualism and Code-Switching
    Bilingualism: Definitions and Types
    Bilingualism and Intelligence: a Brief Historical Perspective
    The Linguistic Structure, the Motivation Behind and the Costs of Code-Switching, Code-Mixing and Language Borrowing
        The linguistic structure of code-switching
        The motivation to code-switch
        The cost of code-switching
    Cognitive Abilities and Bilingualism in the Assessment Context
    The Role Language Plays in the Evaluation of Cognitive Abilities
    The Need for a New Testing Procedure
    Target Population
    Summary
Chapter III Method
    Participants
    Pilot Study
    Instruments
    Procedures
        Code-switching procedure
        Scoring
        Test administration
        Examiner's training
        Data analysis
Chapter IV Results
    Research Question One
    Research Question Two
    Research Question Three
Chapter V Discussion
    Research Question One
    Research Question Two
    Research Question Three
    Key Findings
    Limitations of the Study
    Strengths of the Study
    Contributions to the Field of School Psychology
    Implications or Directions for Future Research
    Summary
References
Appendix A. Recruitment form
Appendix B. Consent form
Appendix C. Assent form
Appendix D. Language Background Questionnaire
Appendix E. Training manual for the examiners
Appendix F. Differences between the WISC-IV (Wechsler, 2003) and WISC-IVcdn-fr (Wechsler, 2004) items content and placement
Appendix G. Test and scoring adapted test record
Appendix H. Certificate of Approval, Behavioural Research Ethics Board

List of Tables

Table 3.1 First Languages of Study Participants and Their Parents
Table 4.1 Tests and subtest means and standard deviations for the Control and Experimental groups with the mean differences between the scores
Table 4.2 Group differences between the Control and Experimental Groups on WISC-IV subtests and WJ III COG tests
Table 4.3 Group by sex differences between the Control and Experimental Groups on WISC-IV subtests and WJ III COG tests
Table 4.4 Differences on WISC-IV subtests and WJ III COG tests, based on ÉVIP results
Table 4.5 Differences on WISC-IV subtests and WJ III COG tests, based on PPVT-III results
Table 4.6 Participants' test and subtest results comparisons, based on first language
Table 4.7 Mean Number of Code-Switches by the Control and Experimental Groups
Table 4.8 Differences in the frequency of code-switching by participants in the two groups
Table 4.9 Degree of exposure to French, English and combinations of two languages
Acknowledgement

The author wishes to express his gratitude to the numerous individuals and groups that have been involved in this dissertation process. Although, in the end, only one name is shown on the title page, the work that led to this conclusion would not have been possible without the assistance of the people named here and of the many others who cannot all be named here.

The author first wishes to thank all the children who generously offered their participation and thus paved the road to such meaningful results. Their enthusiasm for the project and their willingness to contribute were remarkable. The author likewise wishes to thank these children's parents, their teachers, and all the colleagues at the Conseil scolaire francophone who supported him throughout this work. Dr Nicolas Ardanaz and Dr Jean Watters, each of whom served in turn as Directeur général of the Conseil scolaire francophone, provided constant and much appreciated support.

The author wishes to thank Drs Laurie Ford, Kadriye Ercikan and Kenneth Reeder, members of his dissertation committee. Through discussion with them and with their feedback, this project has reached its final stage. Laurie (Dr Ford) must be commended for her patience through this project and others. A special thank you goes to Dr Shelley Hymel, who has provided support and encouragement. The wisdom she has shown in “agreeing to disagree” with me has taught me a few lessons about academic life.

On a more personal note, the author wishes to thank his friends Professor Jean-Bernard Pocreau and Madame Claire Pâquet, who helped him become a psychologist, and his friend Dr Patrick Carney, who helped him become a school psychologist. Their support and example are not forgotten. Their friendship endures. Dr Douglas Cohen, for his support and supervision, belongs in these same categories.
On an even more personal note, I want to thank my wife, Jacinthe Gauthier, who, despite my incomprehension and my clumsiness, expresses her love and has persisted in believing that this project made sense. Against all odds, she has remained present and so allowed me to move forward, for without her love I am but sounding brass. I am deeply grateful to her. I also want to thank the children, Pierre and Gabrielle, who took part in so many stages of the project and offered their support, each in their own way, while contributing several of their own code-switches to my understanding of the phenomenon.

Dedication

I dedicate this work which, however important it may appear, pales beside what these two people, the lights of my life, represent. I offer it to my daughter, Camille Lacroix, who taught me unconditional love by coming into the world. She is with me everywhere and forever. I offer it also to Jacinthe Gauthier, my alter ego, my double, who each day shows me what love is. The gift of her presence gives meaning to my life. As I conclude this journey, my affectionate thoughts go to my sister, Karoline Lacroix, whom life took away far too soon…

Chapter One
Introduction

Assessments of cognitive abilities are conducted on a regular basis in school systems throughout North America (Ortiz, 2002), where the population is less homogeneous than it was 20 to 30 years ago (Lopez, 1997). In many North American school districts, the number of bilingual students has surpassed the 50% mark (Schecter & Cummins, 2003), and the proportion of the population coming from a linguistically diverse background is increasing (Marian, Blumenfeld & Kaushanskaya, 2007). Ortiz and Dynda (2005) note the importance of addressing diversity in psychological practice given the “rapid expansion and the dramatic increase in ethnic composition of the population of the United States” (p. 545).
With many bilingual students now being tested in schools, there is a crucial need to acknowledge linguistic diversity, both for the bilingual students being tested and for the school psychologists asked to conduct psychological and educational assessments. In order to address this issue, there is a need to examine how assessment tools and assessment practices can better respond to these new demographics by more accurately reflecting the linguistic and cultural diversity among students. This is, in great part, why an increasing number of tests are now available in languages other than English. For example, evaluation tools such as the Wechsler Intelligence Scale for Children (WISC-R; Wechsler, 1974) or the Woodcock-Johnson Psychoeducational Battery (WJPEB; Woodcock & Johnson, 1977) were originally developed only in English and were the tests most commonly used by psychologists in Canada and the United States (US) (Woodcock, 1997). These measures of cognitive abilities have gradually been translated and adapted for several languages around the world (Ortiz, 2002). For example, the Woodcock-Johnson Psychoeducational Battery-Revised (WJ-R; Woodcock & Johnson, 1989) was adapted for the Spanish language and developed as the Batería Woodcock-Muñoz: Pruebas de Habilidad Cognitiva-Revisada (Woodcock & Muñoz-Sandoval, 1996). The WISC-R (Wechsler, 1974) was translated into Chinese and was known as the Hong Kong Wechsler Intelligence Scale for Children (HK WISC; Psychological Corporation, 1981). More recently, the Wechsler Intelligence Scale for Children, Fourth Edition (WISC-IV; Wechsler, 2003) was translated into French as L'Échelle d'intelligence de Wechsler pour enfants, Quatrième édition (WISC-IVcdn-fr; Wechsler, 2005). Although increasing numbers of intelligence tests continue to be translated into and adapted for languages other than English, whether these tests are appropriate for use with bilingual children has yet to be established.
Even if these instruments and procedures were specifically developed to test bilingual individuals, none seem to meet the criteria of a bilingual test (Ortiz, 2002). As Flanagan, McGrew and Ortiz (2000) have stated, “Few things create as much confusion or cause as much consternation for applied psychologists as the attempt to validly measure the cognitive abilities of individuals from diverse cultural and linguistic backgrounds” (p. 289). Exploring the validity of a testing procedure used to evaluate the cognitive abilities of bilingual children could help alleviate some of the difficulties associated with the testing of a growing multilingual population. Developing tests that acknowledge the linguistic particularities of a growing part of the population would address concerns with construct validity because, at this point in time, there are still some ambiguities as to the construct being measured when a bilingual individual is assessed with the same instrument and in the same fashion as a monolingual individual.

Psychologists have not yet been able to test bilingual individuals bilingually with standardised tests (Ortiz, 2002). One of the potential obstacles in testing bilingual individuals bilingually on standardised tests could be that bilingual individuals do, at times, move from one language to the other when they respond to test items. Given this behaviour, it is possible that, to obtain an accurate measure of the cognitive abilities of these bilingual individuals, one would need to allow for the use of two languages throughout the testing process. Martinovic-Zic (1998) described how the use of a language and the movement between two languages, that is, code-switching, are guided by several underlying motivations, including social identity, the need to adhere to a group, the cost associated with the social exchange, and the perceived social separation between the interlocutors as a result of the power one person can have over the other.
“A bilingual chooses his/her first language (L1) or a second language (L2), or mixes the two, based on the (conscious or unconscious) assessment of the relationship with the interlocutor within a context” (Martinovic-Zic, 1998, p. 2). In the context of an assessment, where the examiner asks students to do their best (Wechsler, 2003a), and following Martinovic-Zic's (1998) rationale, one would need to determine whether, to do their best, bilingual students would need to use only one language, to respect the examiner's language, or whether they would need to move between their two languages in order to obtain the highest number of correct responses. Myers-Scotton (2000) states that code-switching plays many roles, one of which is social. Following her rationale, one might infer that students will want to preserve their relationship with the examiner by doing their best and by using their second language if needed. Conversely, if students are not aware that they can use two languages to respond, they may choose to use only the test language in order to respect both the language of the examiner and the language dictated by the instrument. Measures of cognitive abilities are created to point out individual differences and usually have a primary goal of comparing an individual's performance with the performance of a group sharing similar attributes (Reschly & Grimes, 2002). Therefore, not allowing code-switching might prevent the children from providing an accurate portrait of their abilities, their differences or particularities, and by extension impact the examiner's analysis of test results. To be considered a bilingual assessment, an assessment has to be conducted in a bilingual fashion (Ortiz, 2002). Given that being bilingual is in itself sometimes an individual difference and sometimes a shared trait, it becomes important to acknowledge it through the course of an assessment.
Taking Ortiz's (2002) claim further, it is possible that this would imply that, to create a truly bilingual assessment situation, the assessment process should allow code-switching on the part of both the examinee and the examiner, so that bilingualism could be freely expressed. Use of a second language has been recognized to some extent through the development of a number of widely used measures of cognitive abilities in various languages, such as the Woodcock-Johnson III Tests of Cognitive Abilities and the Échelle d'intelligence de Wechsler pour enfants, Quatrième édition (WISC-IVcdn-fr). In these tests, responses are accepted in a language other than English, the language in which the test was originally designed, thereby allowing for movement between languages (see Wechsler, 2005, p. 12). Test developers, by accepting responses in another language, are in essence accepting a form of code-switching. However, there are no indications at this point that test developers would go beyond accepting responses in another language, because they have not provided any clear theoretical indications as to the reasons why such a change has been made. In the meantime, more research is needed to assess the impact that accepting a broader range of responses would have on overall test results. Ortiz (2002) supports conducting more research on whether or not accepting a broader range of responses is valid as empirical data are gathered to support bilingual testing.

For the present study, a review was conducted of the literature on cognitive abilities, bilingualism, and fair assessment procedures for bilinguals in order to address the issue of bilingual assessment of cognitive abilities. Arising from these topics, further issues were raised, such as current practices in bilingual testing and testing children with limited English proficiency, since the majority of tests used are in English.
The literature review provides support for the need for bilingual testing procedures that allow for movement from one language to another in the overall context of assessment.

Definition of Key Terms

In order to provide the reader with a clearer understanding of the terminology used throughout this study, a number of terms are defined. The key terms used in the study include “cognitive ability”, “first language” and “second language”, “bilingualism”, “code-switching”, “matrix language”, and “embedded language”.

Cognitive ability

Cognitive ability is a common term that has an array of definitions and is often used interchangeably with the term intelligence. The various definitions encountered in the literature have been associated with theories, tests, and historical changes. Most developers of intelligence tests have provided a definition of intelligence or intelligent behaviours, or have described components of intelligence in what could be deemed theories of multiple intelligences. For the purpose of this study, cognitive ability refers to the set of abilities that, when combined and represented in a test score, encompass what is generally understood as intelligence and often referred to as “g”, for general intelligence. These abilities include, but are not limited to, “abilities in the domains of language, reasoning, memory and learning, visual perception, auditory reception, cognitive speed, knowledge and achievement” (Carroll, 2005, p. 73). Although the general concept of cognitive abilities is referred to throughout this study, only a specific selection of cognitive abilities linked to language abilities was explored.

First language

First language, also referred to as L1, native language or mother tongue, is the original linguistic code that is first experienced by an individual, as well as being the mother tongue used by most members of a community (Hamers & Blanc, 2000).
In this study, the participants' first language is the one identified by their parents in the Language Background Questionnaire.

Second language

Second language, often referred to as L2 or foreign language, is the second language learned after the native one (Hamers & Blanc, 2000). It may also be the non-native language in a community where a different mother tongue is spoken (Hamers & Blanc, 2000).

Bilingualism

Mackey (2002) reports that less than 60 years ago, the definition of bilingualism implied an “equal mastery, choice and use of two languages” (p. 329). Bilingualism is now studied in several sub-areas of linguistics, such as sociolinguistics and ethnolinguistics (Mackey, 2002), and implies the presence of two languages. The term bilingualism encompasses topics such as proficiency, competence, receptive and expressive language, as well as the four areas of language skills: listening comprehension, speaking, reading and writing (Hamers & Blanc, 2000). Some authors, such as McNamara (cited in Hamers & Blanc, 2000), accept minimal competence in these four areas as bilingualism. Others use a more stringent definition that requires a native-like mastery of two languages. In the context of this study, bilingualism refers to an acceptable level of proficiency, in French and in English, in at least one of the four areas of language: speaking, listening comprehension, reading, and writing. A level of language competency expected and shared by one's group of reference (peers, in this study) would be considered an acceptable level of bilingual competency.
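The working definition above can be restated operationally: a participant counts as bilingual for the study if, in each of French and English, at least one of the four skill areas reaches peer-typical competence. The following is a minimal sketch of that predicate in Python, where the numeric proficiency scale, the threshold, and the field names are illustrative assumptions, not the study's actual criteria or instruments:

```python
# Sketch of the study's working definition of bilingualism: acceptable
# proficiency in at least one of the four skill areas, in both French
# and English. The numeric scale and threshold are assumptions made
# for illustration only.
SKILL_AREAS = ("speaking", "listening comprehension", "reading", "writing")

def meets_working_definition(proficiency, threshold=1):
    """proficiency maps language -> {skill area: level}; `threshold`
    stands in for the peer-typical level of competence."""
    return all(
        any(proficiency.get(lang, {}).get(area, 0) >= threshold
            for area in SKILL_AREAS)
        for lang in ("French", "English")
    )

child = {"French": {"speaking": 2, "reading": 0},
         "English": {"listening comprehension": 1}}
print(meets_working_definition(child))  # True: speaking (French), listening (English)
```

The `all`/`any` nesting mirrors the definition directly: competence in both languages, but in at least one area per language.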
Code-switching

Code-switching (CS) is the process observed when individuals move from one language to the other, either by including a word or words from another language in the course of a conversation, or by expressing themselves in one language in full sentences and then periodically switching to the other language in order to express themselves with more ease. Myers-Scotton (1993a) describes code-switching as a choice made by bilinguals producing utterances that are intersentential, when CS occurs between sentences, or intrasentential, when CS occurs within the same sentence. For the purposes of this study, any movement from one language to the other that included elements of a response, as detailed in the test manuals, was considered a code-switch. A code-switch did not in itself equal a point and was not scored as increasing the overall score on a test or subtest. Code-switches were tallied as language behaviours. The frequency of this language behaviour was tallied separately from the effect it had on test scores, implying that, at times, participants code-switched but did not necessarily increase their scores by using their second language.

Purpose of the Study

The primary purpose of the present study was to compare test scores obtained on measures of cognitive abilities in order to assess the impact of code-switching on these scores. A secondary purpose of the study was to look at code-switching patterns in order to assess the frequency of the language behaviour and the need to refer to a second language when responding to test items.

Research Questions

Taking into account language use in bilinguals, as well as the role played by language in the expression of cognitive abilities, and recognizing that current tests and testing procedures are insufficient or inadequate, three research questions guided the study.
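The operational definition of a code-switch given above separates two bookkeeping concerns: a switch is tallied as a language behaviour regardless of whether the switched response earns credit. A minimal Python sketch of that separation follows; the record structure, method names, and switch labels are hypothetical, not the study's actual scoring forms:

```python
from dataclasses import dataclass, field

@dataclass
class SubtestRecord:
    """One participant's tally for one subtest (illustrative structure)."""
    raw_score: int = 0
    code_switches: list = field(default_factory=list)

    def tally(self, switch_type, earned_point):
        # Every code-switch is recorded as a language behaviour,
        # whether or not the switched response earned credit; the
        # point total is tracked separately, as in the study's design.
        self.code_switches.append((switch_type, earned_point))
        if earned_point:
            self.raw_score += 1

rec = SubtestRecord()
rec.tally("intersentential", earned_point=True)   # switch between sentences, correct response
rec.tally("intrasentential", earned_point=False)  # switch within a sentence, no point earned
print(len(rec.code_switches), rec.raw_score)      # prints: 2 1
```

Keeping the switch log and the score in separate fields is what later allows code-switching frequency to be analysed independently of its effect on results.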
Research question one

Are there significant differences between results on tests and subtests measuring selected cognitive abilities obtained by participants in an Experimental Group, who were provided with a code-switching procedure in which they and the examiner could code-switch, and participants in a Control Group, who were tested following the standard procedure in French, as described in the test manuals? Generally, “the hypothesis asserts that the treatment will produce an effect” (Keppel, 1982, p. 25). In this study, the treatment was the code-switching procedure: participants in the Experimental Group were allowed to code-switch during their assessment. It was hypothesised that participants in the Experimental Group would obtain higher results than participants in the Control Group, who were not exposed in any way to the code-switching procedure. Given that this was the only difference between the two groups, any difference between scores should be attributable to code-switching.

Research question two

Are there significant differences between the code-switching frequency of participants in the Experimental Group and the code-switching frequency of participants in the Control Group, taking into consideration that the examiners only code-switched with children in the Experimental Group? It was hypothesised that participants in the Experimental Group would code-switch at a greater frequency, given that they received the code-switching procedure.

Research question three

Is there a relationship between the frequency of code-switching behaviours and the degree of exposure to a second language? It was hypothesised that the higher the degree of exposure to a second language, the higher the need to code-switch, and the greater the frequency of code-switching.

Significance of the Study

Testing bilingual children using a procedure that allows for code-switching should inform research in the assessment of selected cognitive abilities and bilingualism.
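For illustration only, the hypotheses in the research questions above reduce to a two-group mean comparison (research questions one and two) and a correlation (research question three). The sketch below applies standard formulas (Welch's t statistic and Pearson's r) to made-up numbers; none of the values are the study's data or results:

```python
from statistics import mean, stdev

def welch_t(a, b):
    """Welch's t statistic for two independent samples
    (the kind of comparison research questions one and two call for)."""
    var_a, var_b = stdev(a) ** 2 / len(a), stdev(b) ** 2 / len(b)
    return (mean(a) - mean(b)) / (var_a + var_b) ** 0.5

def pearson_r(x, y):
    """Pearson correlation (research question three: degree of
    second-language exposure vs. code-switching frequency)."""
    mx, my = mean(x), mean(y)
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = (sum((xi - mx) ** 2 for xi in x)
           * sum((yi - my) ** 2 for yi in y)) ** 0.5
    return num / den

# Made-up scaled scores and counts, for illustration only
experimental = [11, 12, 10, 13, 12]
control = [9, 10, 8, 10, 9]
print(round(welch_t(experimental, control), 2))   # 3.79

exposure = [1, 2, 3, 4, 5]   # degree of exposure to the second language
switches = [0, 1, 1, 3, 4]   # code-switch counts per participant
print(round(pearson_r(exposure, switches), 2))    # 0.96
```

In practice such a t statistic would be referred to its degrees of freedom to obtain a p value; the sketch stops at the statistic itself.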
Contributions are also made to the field of assessment by measuring the impact that using two languages has on test results in the measurement of cognitive abilities. Doing so could make it necessary to consider more closely the impact of using a second language in the assessment of cognitive abilities. In addition, the study raises the question of whether it is appropriate to allow for code-switching in the assessment situation. There is a fundamental need to explore the impact of code-switching on test results, to ascertain how frequently it is used to respond to test items, and to expand on the procedure by allowing the use of two languages throughout the testing process. Because the research in the present study allows for code-switching, it introduces a new method of assessing individuals (Ortiz & Ochoa, 2004), and adds to the available, yet limited, literature on the subject (Ortiz, 2002). Allowing code-switching by the examiner and the examinee during the cognitive assessment may provide an opportunity for true bilingual assessment and pave the way for research on the differences that may exist in the assessment of monolingual as opposed to multilingual individuals.

Chapter Two
Review of the Literature

In this study, the foundation of conceptual and theoretical frameworks focuses on perspectives of intelligence, bilingualism, and assessment procedures. These three elements intersect in the course of an assessment of the cognitive abilities of individuals who are bilingual. The underlying theory of intelligence that sustains this study was the Cattell-Horn-Carroll (CHC) Three-Stratum Theory of cognitive abilities, which represents intelligence as the amalgamation of broad cognitive abilities connected to a general factor, each of which contributes a certain aspect to the overall concept (McGrew, 2005a).
Emphasis was put on the linguistic elements of the theory, such as the role played by language in the expression and measurement of important broad factors, including Fluid Intelligence and Crystallised Intelligence, and the existence of narrow factors such as Foreign Language Proficiency and Foreign Language Aptitude. Language, and more specifically bilingualism as it is observed through the course of assessments of cognitive abilities, were addressed by focusing on code-switching behaviours, a very well-documented characteristic of the manner in which bilingual students express themselves (Grosjean, 1982; Hamers & Blanc, 2000; Myers-Scotton, 2000).

Theory of Intelligence and the Measure of Cognitive Abilities

Definitions and theories of intelligence

The concept of intelligence, as it is understood today, is still very difficult to define. Sternberg and his associates (Sternberg, Grigorenko, & Kidd, 2005) allege that we still do not know what intelligence is, citing the attempts that have been made to define the concept during important symposia in 1921 and 1986. They (Sternberg et al., 2005) noted the presence of “some common threads, such as the importance of adaptation to the environment and of the ability to learn, but these constructs themselves are not well specified” (p. 46), while conceding that there was still a certain level of ignorance on the topic. Conversely, Snyderman and Rothman (1987), having surveyed more than 1000 experts on intelligence testing, found that fifty-three percent agreed that there was a consensus on the definition of intelligence. Snyderman and Rothman (1987) also found strong agreement as to the important elements of intelligence. Modern historians of psychology point to Wechsler's concept of intelligence, which dates back to the late 1930s, defined as the “capacity of the individual to act purposefully, to think rationally, and to deal effectively with his or her environment” (Wechsler, 1991, p. 1).
The Wechsler definition, like others from around that time, implied that intelligence was comprised of a variety of abilities that were different in quality (Zhu & Weiss, 2005). This approach, dating back to the early 20th century with Spearman (1904) and his hypothesised presence of a general factor of intelligence (g), developed into theories that considered two or more factors in defining intelligence, such as Thurstone’s model and the Planning, Attention, Successive, Simultaneous (PASS) theory. Carroll’s (1993) model, which evolved from the original work of Horn and Cattell, is a recent example of the refinement now observed in currently-used tests of intelligence (McGrew & Flanagan, 1998). The information processing theories, inspired by computer-like models, surfaced in the latter half of the last century. These theories focus on an individual’s ability to process information, solve problems, and tackle daily tasks. Proponents of these theories favour an approach through which individuals are measured on timed tasks and by which performance on such tasks is equated to what they consider to be intelligence (McGrew & Flanagan, 1998). From this perspective, Naglieri (1997) defines intelligence as consisting of attentional, informational, and planning processes. These processes, among other things, “provide focused cognitive activity […]; the control of attention and self-regulation to achieve desired goals” (Naglieri, 1997, p. 249). Others, who view intelligence from a dynamic perspective, value it for the role it plays in learning. Feuerstein, Rand and Hoffman (1979) are proponents of cognitive modifiability theories and bring a perspective that relies on human capacities. They used a different approach to testing, and defined intelligence as “the capacity of an individual to use previously acquired experiences to adjust to new situations” (Feuerstein et al., 1979, p. 76).
Feuerstein, Feuerstein and Gross (1997) described how their theory calls for a new approach to intelligence. According to this new perspective, intelligence is not a reified, static construct but a dynamic one, designated as “learning potential”, dependent upon the modifiability of the construct and the possibility for the individual’s potential to change under various conditions (Cummins, 1984; Feuerstein et al., 1997). Most theories are associated with a specific measure of intelligence. This association can be misleading because some recent theories, such as Gardner’s Theory of Multiple Intelligences (Gardner, 1983), have gained a great deal of attention without necessarily being linked to a test. In that case, Gardner uses a questionnaire listing specific characteristics. Gardner (1983), in an effort to measure intelligence differently, provides the scientific community with a model that moves away from the traditional question “How smart are you?” to the question “How are you smart?” (Chen & Gardner, 1997). In this model, intelligence is not so much measured as it is described, thus moving from a quantitative approach to a more qualitative one. Although not all of the theories have Gardner’s power of attraction and popular appeal, many, if not most, contemporary theories of intelligence are multidimensional and could be presented as theories of “multiple intelligences” (Harrison, Flanagan & Genshaft, 1997). This is true of various theories, including the Neuropsychological Model developed by Luria (1973), the Triarchic Theory of Intelligence put forth by Sternberg (1986), and Carroll’s (1993) Three-Stratum Theory of Cognitive Abilities (Harrison et al., 1997). What is most relevant to the present study is that most of the theories developed in the last century (since the development of the first tests measuring mental abilities) have had a verbal component as a key aspect of their model.
As an example, the Army Alpha test was administered to those with English as their first language, while those of foreign birth or with limited English skills were given the Beta Test (Wasserman & Tulsky, 2005). This was an early acknowledgement of language differences in testing. Binet, Thurstone, Cattell and Horn (Wasserman & Tulsky, 2005), and Vernon (1956) are examples of theorists who have provided models of intelligence that, despite significant differences in their delivery, are all multidimensional, and have either a language component or verbal dimensions (Flanagan, Genshaft & Harrison, 1997). The significant presence of a language element in all these theories and models of intelligence is central to this study, as it indicates how using verbal skills to respond to questions or items in order to measure intelligence and study its function or functions can impact test results.

The Cattell-Horn-Carroll three-stratum theory

There is a generous body of literature on intelligence, and this study focuses on the part of that literature that pertains to intelligence as it is measured by contemporary instruments. The Cattell-Horn-Carroll Three-Stratum Theory (CHC Theory) served as the theoretical framework for cognitive abilities and is associated with what is known as the fourth wave in intelligence tests (Kamphaus, Winsor, Rowe, & Kim, 2005). This particular theory is behind the development of the Woodcock-Johnson Third Edition Tests of Cognitive Abilities (Woodcock, McGrew, & Mather, 2001) and other tests, such as the Stanford-Binet Intelligence Scale, Fifth Edition (Roid, 2003). Carroll (1993) reviewed over 50 years of research and more than 460 original datasets, using factor analysis methods to extract the factors that would become the broad and narrow abilities used in his final model. These factors include the various parts of the general intelligence factor (g) and form the global concept of intelligence.
Expanding on the Cattell-Horn Theory, which is also known as the Gf-Gc Theory, Carroll (1993) developed a multifactorial and multidimensional model of cognitive abilities by identifying 8 broad abilities or factors (Stratum II) and 70 narrow abilities (Stratum I), all linked to a general intelligence or g factor (Stratum III). The abilities are placed in the model in relation to their contribution to the general g factor, which means that the narrow abilities linked to Fluid Intelligence (Gf) and Crystallised Intelligence (Gc) contribute more to overall intelligence than the other narrow abilities that are linked to less “important” broad abilities. These narrow abilities, many of which have not been fully researched or even defined, are tentatively linked with elements of theories on bilingualism and the role of language in intelligence. Fluid Intelligence (Gf) and Crystallised Intelligence (Gc) are considered to be the main contributors to general intelligence (McGrew, 1997, 2005b). Therefore, the narrow abilities attached to Fluid Intelligence and Crystallised Intelligence are more closely linked to general intelligence, which means that the contribution of Gf and Gc, and of the narrow abilities that are attached to them, is more significant to g than that of other abilities such as Speed of Processing (Gs) or Visual Processing (Gv) (Carroll, 2005). The narrow abilities of interest to this study are associated with the aforementioned broad abilities. Of these narrow abilities, some, like Foreign Language Aptitude and Foreign Language Proficiency, are directly relevant to this study, whereas others, like Language Development and Lexical Knowledge, have important ties to it. The Foreign Language Aptitude narrow ability is defined as the “rate and ease of learning a new language” (McGrew, 2005a).
The Language Development (LD) and Foreign Language Proficiency (FLP) narrow abilities are defined as the “general development or understanding and application of words, sentences, and paragraphs (not requiring reading) in spoken native language skills or in a foreign language, in the case of FLP, to express or communicate a thought or feeling” (McGrew, 2005a). The LD narrow ability is one of the most frequently cited abilities tested by intelligence tests (see McGrew & Flanagan, 1998). Foreign Language Proficiency directly touches on the ability that is solicited from a second language user during the course of an assessment, as it refers to the language development of the second language.

Intelligence Testing and Non-Discriminatory Testing Procedures

The history of testing stretches as far back as 4000 years ago, when the Chinese introduced a “standardised civil service testing program” (Thorndike, 1997, p. 3). Since then, a variety of batteries have been developed, many of them between 1925 and 1975 (Thorndike, 1997), which follow a similar model inspired primarily by the work of David Wechsler. Many of the first tests that were developed followed psychometric theories of intelligence, mainly because these theories produced instruments that were less time consuming and more practical. In the last hundred years, the measurement of intelligence has become a challenge shared by psychologists, psychometricians and researchers in several areas. Since Binet’s work in the early 1900s, through which he hoped to address the issue of placement of children within the school system and to understand why children were failing in school (Ittenbach, Esters, & Wainer, 1997), many theories have been developed to understand the concepts associated with human learning. This endeavour is, however, much older than Binet’s 1905 test, which is often referred to as the first test of intelligence.
“Unfortunately, the manner in which language and culture affect test performance, let alone interpretation of test results, remains very poorly understood” (Flanagan & Ortiz, 2001, p. 213). Given that the main reasons for conducting this study were linked to the testing or measuring of cognitive abilities, this section focuses on the tools and procedures currently in place for assessing children who are bilingual. As mentioned earlier, the number and proportion of individuals who are bilingual continue to increase in the North American population. With this change, more students who do not have English as a first or dominant language will be tested and, therefore, more students will be put into situations where they need to use their other language. This fact is also increasingly recognised by test developers, who have gone from translating and adapting their original English tests into other languages to adding the possibility of responding in another language to tests that are in English (see Woodcock et al., 2001). Figueroa (1990b), in a critique of intelligence tests, concluded that tests of intelligence are merely measures of linguistic skills, given the level of bias he identified. Figueroa (1990b) pointed to the bias of intelligence tests that have been published solely in English and that did not consider cultural factors such as language, stating that “With children who are clearly not proficient in English, there are no research data on bias in intelligence tests” (Figueroa, 1990a, p. 685), which implies the presence of bias in the form of a language bias. This should not be confused with the often-cited cultural bias, something that has been debated since Binet and Simon’s debut and has been shown not to be empirically defensible (Larivée & Gagné, 2007).
If test developers and examiners want to avoid the bias phenomenon, it becomes increasingly important to allow for the use of all of the examinees’ language skills by permitting them to tap languages other than that of the test or of the test item. The issue of fairness is significant because bias is among the most often-cited criticisms against intelligence tests (Tzuriel, 2001; Saenz & Huer, 2003). More than 30 years ago, a court case (Diana v. State Board of Education, 1970) served as a main influence on assessment practices for children not having English as a first or dominant language. Since then, laws such as Public Law 94-142 and the No Child Left Behind Act that followed have been passed in the United States, and a significant number of recommendations have been made as to the procedures that should be used when testing children who are bilingual. Tests of intelligence or cognitive ability are constructed in ways that presume that a given level of language proficiency is present in the average individual that is sufficient to comprehend the instructions, formulate and verbalize responses, or otherwise use language ability in completing the expected task. (Flanagan & Ortiz, 2001, p. 222). The tests also assume a shared level of acculturation (Flanagan & Ortiz, 2001). In the context of the United States, this means that the bilingual student’s results are compared with the “mainstream North American culture that implies English language proficiency and values reflecting predominantly Anglo or Western-European views” (Ortiz & Flanagan, 2002, p. 341). Regardless of the children’s level of acculturation, it is relatively simple to determine which language or languages to use when they have had limited or minimal exposure to English.
Interpreting the data obtained from the assessment is more problematic, as “examiners working with bilingual children who have had more exposure to English will find themselves with very vague guidelines as to how to use language proficiency data” (Lopez, 1997, p. 507). The dilemma of test language remains because this level of adequate proficiency is not always present in children who need testing, or because the language proficiency data is unclear to the examiner. It also remains because testing these children in their first/dominant language that is not English is not always possible, nor is it always adequate.

The Linguistic Demand Aspect in Tests

New tests of intelligence have been developed with new theories to support them. The element of linguistic demands, sometimes referred to as language loading, is an area in which an increasing number of researchers are making their mark. This has come about as a result of questions of bias in tests and in response to demands for more appropriate assessment procedures for bilinguals and for children who have a limited mastery of English, sometimes called Limited English Proficient or English Language Learners. The linguistic demand “is the amount of linguistic skills required by the various tests of intelligence” (Flanagan & Ortiz, 2001, p. 245). When dealing with bilinguals, or in cases where the student tested is not a native English speaker, the examiner should always discuss the level of linguistic demand of the tests or subtests in the interpretation of results, as it helps to put the results in perspective. It may not be fair to compare these bilingual or non-native-English-speaking students with those who served to establish the norms; the validity of their results may be questionable (Ortiz & Dynda, 2005). Linguistic demands should not be confused with bias. Simply because a test or a subtest has linguistic demands, it is not automatically biased.
For example, the interpretation of the test results of a child producing a low score on verbal subtests may differ based on his or her language background. If a low score is produced by a native English speaker, the child may have a language-based disability. If the low score is that of a bilingual speaker, or that of a child with limited English proficiency, the child may have a language difficulty, but the low score might also be a reflection of an invalid test. In this case, the test may be considered invalid in the sense that it may have been measuring the wrong skill; it may have been measuring English proficiency as opposed to verbal comprehension skills (Ortiz & Dynda, 2005). As noted in the previous example, “Bias in testing, with respect to language differences, pertains to validity” (Ortiz & Dynda, 2005, p. 550). As is often the case, the test does not in fact measure what it sets out to measure. Valdés and Figueroa (1994) believe that intelligence tests are merely tests of language abilities, given that the level of language bias is very high. It has been suggested that tests should be renormed in order to address the reliability and validity issues (Saenz & Huer, 2003). Saenz and Huer (2003) mentioned that the current norms may not be appropriate for students who are not monolingual English speakers. Although this has been done in the Santa Ana Unified School District in Southern California with the Expressive One-Word Picture Vocabulary Test-Revised, where it was appropriate to do so given the homogeneity of its population, a large-scale renorming of widely used intelligence tests is unlikely to occur. There are numerous disadvantages associated with the process, namely that the norms can only be applied to a population that shares the same linguistic and cultural background, which assumes a homogeneity of bilingualism that we know does not exist.
To avoid this linguistic bias, and because traditional tests have a “dependency on language skills in test content and instructions [that] makes them less appropriate for the assessment of cognitive abilities of ethnic minorities […]” (Tellegen & Laros, 1993, p. 147), examiners often defer to nonverbal tests. Nonverbal measures are frequently recommended and are part of the instrument set traditionally used for the assessment of students with limited English proficiency (Ortiz & Dynda, 2005). Just because the oral language requirements are decreased, it does not mean that all of the potential biases associated with language, and especially acculturation, are eliminated (Flanagan & Ortiz, 2001). Moreover, many of these tests have limitations such as outdated norms, inadequate psychometric properties, and a narrow range of abilities that they can measure (Lopez, 1997). Lastly, “nonverbal tests measure a narrow range of cognitive abilities and results from such tests may not generalize to other domains such as verbal ability” (Bainter & Tollefson, 2003, p. 600). Lopez (1997) makes suggestions beyond the use of nonverbal measures, recommending, along with many other alternatives, that the examiner “start with the most proficient language and switch to the second language when items are failed” (p. 508). This recommendation is close to what can be observed with the BVAT (Munoz-Sandoval, Cummins, Alvarado, & Ruef, 1998), where individuals are tested in their dominant language first and then in their second language. The recommendation made by Lopez has the merit of proposing a testing procedure that is bilingually dynamic, as it allows for a certain flow between languages. However, it would fail the validity test, as testing children in two languages would break with the standardisation procedures for all intelligence tests.
This often leaves the examiner with one solution: to base his or her conclusions about the assessment process on subjective measures, interviews, questionnaires, and observations. Although these are components of a good assessment, they leave the examiners with no solid objective data to support their other conclusions. Intelligence tests are among the instruments most widely used by psychologists, and the results these tests provide become an important part of placement decisions in the education system and of the diagnostic process in clinical settings, as well as the basis for many administrative decisions in a variety of contexts, such as career placement and the screening procedure for entry into education programs (Tellegen & Laros, 1993). Others (Ascher, 1990; McCloskey & Athanasiou, 2000; Tzuriel, 2001) suggest using dynamic assessment procedures. The flexibility of this method is appealing, as it offers advantages that are not associated with standardised tests (Saenz & Huer, 2003). While it provides information on the student’s learning skills as observed through the assessment and as measured through a test-retest procedure, these procedures do not provide the standard scores that are often necessary in many settings to diagnose or determine eligibility for special services. The time that is needed to conduct a dynamic assessment is also a factor that plays against the method (Saenz & Huer, 2003), as such assessments can take three or more times longer to administer than standardised tests. Because of a lack of training or limited exposure to the knowledge needed to deal with linguistically diverse populations, practitioners have often been forced to use tests and procedures that may not be appropriate (Flanagan & Ortiz, 2001). As the North American population changes and diversifies, the need to have assessment practices that reflect the observed levels of diversity increases.
Best Practices in Non-Discriminatory Assessments of Bilinguals

Keeping in mind the many risks of bias, and the fact that bias is unavoidable (Ortiz, 2002), the most appropriate non-discriminatory assessment procedure should aim to minimize bias without failing to address the language component, which poses a challenge but which also accounts for a significant part of our understanding of intelligence. In order to avoid discrimination, and as a way of acknowledging the linguistic particularities of the individual being tested, “Communications are held in the client’s dominant spoken language or alternative communication system. All student information is interpreted in the context of the student’s socio-cultural background and the setting in which he/she is functioning” (Thomas & Grimes, 1995, p. 1166). To do so implies that the examiner is fluent in the client’s language, has a sufficient knowledge of the client’s cultural background, and has the evaluation instruments to proceed with such an assessment. Ortiz (2002) proposes an approach that stems from a hypothesis testing model. The entire approach is comprised of ten steps that include the assessment and evaluation of language proficiency and allude to traditional testing (Ortiz, 2002). Standardised tests are generally included in every assessment of a child’s broad intellectual and school functioning. They have to be carefully chosen by considering the adequacy of the norms, the appropriateness of the instrument, and the language and cultural demands of the tests (Flanagan, McGrew, & Ortiz, 2000; Flanagan & Ortiz, 2001; Ortiz, 2002). The use of tests that are psychometrically sound in English is considered preferable to using poorly developed tests in the individual’s native language (Ortiz, 2002).
Abedi (2006) and Figueroa and Newsome (2006) have made the point that individuals who do not have English as a first language should still be part of the norming process in order to respect the standards put forth by the major professional organisations in the area. Conclusions drawn from these test results will have long-term consequences for the examinees and an impact on the diagnosis, placement, and support of students within the environment where testing will have occurred. Ortiz (2002) mentions that standardised and norm-referenced tests can be administered in a bilingual fashion, and adds that the data collected would then be qualitative and not quantitative; that is, the scores could not be interpreted quantitatively, thus significantly diminishing their usefulness. The use of these scores is limited because, by testing children bilingually, there is a break in standardisation. The scores obtained cannot be compared with the norms, given that the populations that served to develop these norms were not tested bilingually. Language of assessment is a difficult question to address, as proficiency can be adequate, yet not optimal, in English. Language proficiency may be relatively equal in the two languages, or higher in a second language for which no test or norms may yet exist. Such questions are directly at stake when bilingual students are tested, as their bilingualism can be recognised, at least informally, but is not necessarily accounted for in the testing procedure. Ortiz and Ochoa (2005), while recommending bilingual testing as part of their Multidimensional Assessment Model for Bilingual Individuals (MAMBI), comment that “At present there are no actual ‘bilingual’ tests and the task of standardizing bilingual interactions, which involves spontaneous code-switching as desired or needed, appears exceedingly difficult if not impossible to accomplish” (p. 241).
For their part, Bainter and Tollefson (2003) surveyed members of the National Association of School Psychologists (NASP) on what constitutes acceptable practices in the assessment of language minority students. Findings show that “Overall, respondents indicated that the use of a bilingual school psychologist to administer tests in both English and the child’s native language is the most acceptable assessment practice with language minority students” (Bainter & Tollefson, 2003, p. 601). Other responses obtained from the NASP members concerned the use of English-only tests when the student indicates a preference for or dominance in that language, and the use of nonverbal tests. The authors mention that “74% reported that it is never or rarely acceptable to administer a test in English when [a] student is dominant in another language” (Bainter & Tollefson, 2003, p. 602). It would seem irrational for examiners to choose to defer to an English-only test when they are aware of the language background of a bilingual student. It would seem more reasonable for them to request a procedure, if not a test, that is also bilingual in nature.

Bilingualism and Code-Switching

Bilingual individuals, apart from sharing characteristics that revolve around the use of two languages, also share language behaviours such as code-mixing, language borrowing, translation, and code-switching. Code-switching is a commonly observed behaviour of bilinguals (Hughes, Shaunessy, Brice, Ratliff, & McHatton, 2006). The following section defines and addresses these characteristics and behaviours.

Bilingualism: Definitions and Types

The study of bilingualism could encompass issues of language acquisition, language development, and the role of culture within the broad understanding of language. In the context of this study, the focus was put on the testing of individuals for whom bilingualism was a characteristic impacting test results.
Therefore, the issues mentioned in the introduction to this section were not directly addressed. To illustrate the complexity of bilingualism, consider the notion of first language. Dabène (1994) speaks of three types of first languages: the mother tongue, the native language, and the language in which one has the highest proficiency. Dabène uses the term “antériorité d’appropriation” (anteriority of acquisition), meaning the language that was learned first, to describe one’s mother tongue or first language learned. This concept is also associated with her definition of “native language” (Dabène, 1994, p. 11). A distinction is made between first language and native language, however, because an individual may be more proficient in a language other than his or her mother tongue, as in the case of immigrants who move to a country where their language of origin is not widely spoken. Bilingual individuals are not the product of two monolinguals put together (Cummins, 1984; Bialystok, 1991; Flanagan, McGrew, & Ortiz, 2000). Bilingualism is a broad term that encompasses numerous levels of language mastery and goes beyond the sole notion of proficiency. It has traditionally been defined as the ability to speak two languages fluently, with a bilingual person being someone in possession of two languages (Wei, 2000). More recent definitions have added other features, and it is now thought that bilingualism should be defined as part of a multidimensional continuum that includes proficiency, as well as linguistic structures, culture, notions of competency, and issues that surround language use, such as accent and other non-linguistic dimensions (Hamers & Blanc, 2000).
Given the complexity of the many dimensions that have to be considered in an operational definition of bilingualism, including concepts such as first language, dominant language, and mother tongue, some authors, such as Baker (2001) and Baker and Prys-Jones (1998), include a glossary, or even a dedicated chapter, in their texts in an effort to educate the reader on the key concepts in the domain of bilingualism. Cummins’s (1984) model of language proficiency can be considered an example of the complexity surrounding the issue of bilingualism and a link between the issue of bilingualism and the assessment of cognitive abilities. Cummins (1984) developed a dual-iceberg representation of bilingual proficiency, whereby L1 and L2 are the two elements of an iceberg that share a common proficiency as part of their common base. Language proficiency itself is separated into two distinct levels: the basic interpersonal communicative skills (BICS) and the cognitive/academic language proficiency (CALP) (Cummins, 1984). BICS is the more superficial language proficiency level that can be developed by a second-language learner in about one to two years (Cummins, 1984). It is the level of language use that is observed in everyday conversation or on the playground at school. CALP is a higher level or, in the case of the “iceberg model”, a deeper level of proficiency. Studies cited by Cummins (1984) show that “immigrants require, on average, 5 to 7 years to approach grade norms in L2 academic skills” (p. 149), due to the fact that CALP includes the ability to make reference to the semantic and functional meaning of language and to proceed to more complex cognitive tasks such as analysis and synthesis (Cummins, 1984). Aukerman (2007) disagrees with Cummins’s view on language development, arguing that language should never be decontextualised, as Cummins suggests it is when he describes CALP.
Aukerman (2007), while recognising the benefits of Cummins’s theory, provides examples of how language should be put in context, especially for English language learners who may not have language proficiency in either of their two languages. When language is set in a learning environment, a student should not be confronted with a situation where it is believed that he or she lacks CALP (Aukerman, 2007). Aukerman’s (2007) position, although questioning Cummins’s perspective on the contextualised aspect of language learning, seems to neglect that Cummins (2000) was referring to cognitively demanding language, as observed in CALP, as the language that is mostly learned in a decontextualised setting. As an example, children developing their vocabulary of abstract concepts (e.g., words associated with thought processes or feelings) might not always be given a context for each word, as the meaning of the word may vary significantly from one context to another, yet they still learn these cognitively demanding words. The contrast between the two levels is important to point out, especially in the context of intelligence testing, when the higher levels are solicited by test items that put the examinees in situations where they have to make use of their verbal comprehension. Such items are found in all the tests and subtests used in this study. For example, participants have to provide descriptive responses that draw on their knowledge of abstract concepts, their social judgment, and even their ability to make complex analogies. As part of the process of comparing bilinguals, Cummins (2000) revisited two hypotheses first developed in the 1980s as to how the interactions between languages have an impact on language as well as on cognitive development: the interdependence hypothesis and the threshold hypothesis.
The interdependence hypothesis of bilingualism suggests that it is the combination of the abilities in two languages that accounts for higher results on tests of various language abilities, such as a test of reading skills (Cummins, 2000). Cummins (2000) acknowledges the impact of bilingualism on test performance and the correlation between language skills that are developed bilingually, as opposed to being developed in a unilingual fashion. Bournot-Trites and Reeder (2001), testing mathematics skills, found support for the interdependence hypothesis. Their study showed that bilingual students who were taught mathematics in French (their second language) obtained higher results on mathematics tests administered in English than their Anglophone counterparts. Cummins (2000) also revisited the threshold hypothesis of bilingualism, which states that bilingual instruction, under certain conditions, provides direct benefits to linguistic and cognitive growth. These benefits are observed with children who have attained the highest level of bilingualism (Cummins, 2000). On the other hand, students with less well-developed academic competence in the two languages (i.e., those at either of the two lowest levels) are limited in their ability to reap the same cognitive and learning benefits (Baral, 1988). This hypothesis implies that the positive difference in test results would only apply to bilinguals who have already attained a certain threshold level of language skill. MacSwan (2000b) presented a clear argument for why Cummins’s deficit model should be rejected on both empirical and theoretical grounds. He rejects the threshold hypothesis on the basis that it was developed with a population that differs from that of the United States, and on the premise that Cummins considers language literacy as being part of language proficiency.
It should be noted that these authors (Baral, 1988; MacSwan, 2000b), including Cummins (2000), have recognised the important role played by socioeconomic status (SES) in language development, to the point where these hypotheses need to be further explored before they can actually serve as the explanation for one's linguistic functioning. Baral (1988) observed that, despite the fact that Cummins's argument for a child's success in literacy was based on language, he [Cummins] included low SES as part of his explanations. For the purpose of this study, it is worthwhile to consider these two hypotheses while keeping a critical view of them. It is understandable that the proponents of the threshold hypothesis would maintain that allowing code-switching during testing would only benefit examinees who are already at a certain level of language proficiency in both languages, thereby not allowing examiners to determine if the impact of code-switching becomes more significant the further one is above the so-called threshold. The proponents of the interdependence hypothesis would assert that all children will benefit from this treatment because all bilingual individuals rely on their language skills in both languages. With his dual-iceberg model, which emphasizes the interdependence hypothesis, Cummins (1984) presents a theory of language proficiency that was used in the development of the Bilingual Verbal Ability Test (BVAT) (Munoz-Sandoval et al., 1998). His model illustrates how the interdependence of two languages implies that "experience with either language can promote development of the proficiency underlying both languages" (Cummins, 1984, p. 143). This interdependence hypothesis may also go further and indicate the need to use skills from both languages to express one's cognitive proficiency entirely. Models such as the one put forth by Cummins (1984) show that proficiency is one aspect of the combined use of languages.
Although there are numerous definitions of "bilingualism" that refer to language proficiency, there are also definitions that focus on other aspects, such as the time at which the second language was learned, or the relationship between the languages being used. Moving toward a model that goes beyond the notion of proficiency allows the reader to expand the number of possible definitions of "bilingualism" while including other, less prominent aspects of language development, such as those mentioned above. There is an array of definitions and descriptions of bilingualism and of bilingual individuals. MacSwan (2000a), for instance, describes the compound bilingual as someone whose two languages are learned at the same time, often in the same context. This term is similar to the simultaneous bilingual, whose two languages are acquired in infancy. There is also the additive bilingual, whose two languages combine in a complementary and enriching fashion, and the dominant bilingual, a person with greater proficiency in one of his or her languages who uses the more proficient language significantly more than the other. Finally, there is the balanced bilingual, who is often perceived as the traditional bearer of the status, and whose mastery of the two languages is roughly equivalent. This term is synonymous with symmetrical bilingual (Wei, 2000). Given this broad definition of bilingualism, which can include terms like receptive bilingual (those who understand a language but do not speak it), it is believed that bilinguals, as opposed to monolinguals, are found in greater numbers worldwide (Dewaele, Housen, & Wei, 2003). The list of countries with bilinguals is not as long as the list of different languages spoken in the world (Dewaele et al., 2003), and examples of bilingualism flourish as bilingualism gains official status in an increasing number of regions around the world.
This movement, however, does come with its controversies. Baetens (2003) lists the numerous fears associated with bilingualism. These fears were, and still are, present at many levels, ranging from the parental to the educational and socio-political. Some individuals still believe that there is a factor of inconvenience associated with bilingualism (Grosjean, 1982) and that bilingualism has negative effects on the educational growth of bilingual children (Cummins, 1984). This last aspect is significant to this study because most assessments are conducted within the school system, where educational values are actualised.

Bilingualism and Intelligence: A Brief Historical Perspective

Until the 1960s, it was believed that bilingualism had a detrimental effect on intellectual growth, and it was perceived as a subtractive factor in language development (Hughes et al., 2006), as it was thought that the development of a second language would negatively impact the development of the first. Bilinguality was viewed as a contributor to inferior intelligence (Hamers & Blanc, 2000). Low scores obtained by non-English speakers were interpreted as evidence of a learning disability and led, in some situations, to the diagnosis of intellectual disability. Many authors (Gonzalez, 1995; Baker, 2001) credit the Peal and Lambert (1962) study with changing this view because it confirmed that bilinguals score higher than monolinguals on measures of specific verbal and non-verbal cognitive abilities as well as on general intellectual functioning. The Peal and Lambert (1962) study consisted of comparing French-English bilingual students and monolingual students on intelligence tests. Controlling for sex, age, SES and language proficiency, they found that bilingual students scored significantly higher on most measures of verbal and nonverbal tests (Lee, 1996).
Studies dating from before the Peal and Lambert (1962) study stated that "bilingual children had lower results on intelligence tests and were socially misfit" (Hamers & Blanc, 1983, p. 90). More recent studies have contradicted this belief and now speak of "verbal flexibility", having dropped 1920s terms such as "linguistic handicap" (Hamers & Blanc, 2000, p. 86) in reference to the cognitive abilities of bilingual children. However, despite the vast support for the Peal and Lambert study, some of their results were challenged, in part on the basis of their sample (McNamara, 1966), while continuing to be perceived as positively impacting the field of bilingualism and intelligence (Lee, 1996). The pendulum swung from the "disadvantage hypothesis" of the 1920s to the "advantage hypothesis" of the late 1930s (Gonzalez, 1995). The "advantage hypothesis" remained greatly challenged until the Peal and Lambert (1962) study. The seminal work by Peal and Lambert (1962) is widely cited as a turning point in how bilingual individuals came to be described as having a cognitive advantage over unilingual individuals. In the decade that followed, the "advantage hypothesis" gained more ground as test results revealed consistently higher scores for bilingual populations (Gonzalez, 1995), moving from the impression that bilingualism is subtractive to the view that it is additive, as it is now perceived as an advantage (Hughes et al., 2006). The concern then, which is still a matter for debate in recent literature (see Valdes & Figueroa, 1994), centres around how the testing instruments and procedures impact our understanding of the intelligence of bilinguals.

The Linguistic Structure, the Motivation Behind and the Costs of Code-Switching, Code-Mixing and Language Borrowing

With reference to code-switching, the terms matrix language and embedded language, which are part of Myers-Scotton's (1993a; 1993b; 2003) model, must be understood.
The matrix language, part of the Matrix Language Frame (MLF), "is the main language in code-switching utterances" (Myers-Scotton, 1993a, p. 3). As an example, in the sentence "Oh look, near the fence, it's a tournesol", where "tournesol" is the French word for "sunflower", English is the matrix language and French is the embedded language. The embedded language is a language that is part of the code-switching utterance, but that has a less significant role. As observed in the previous example, only one French word was used in the utterance and there are no further references to the language, making French an "embedded language" in this specific example. In everyday situations, when a bilingual individual is involved in a conversation, a number of language decisions are made in order to adapt to the linguistic demands of the context (Wei, 2000). Grosjean (1982) designed a model to show the decision process behind the choice of which language to use, based on the language of the interlocutor (Wei, 2000). In a conversation with a monolingual speaker, the bilingual individual will use only one language (L1 or L2 accordingly); when dealing with a bilingual speaker, he/she will choose between L1 with or without code-switching and L2 with or without code-switching (Grosjean, 1982; Wei, 2000). Such decisions imply that there are language skills needed to manipulate two grammars, and possibly a third grammar that is created as both languages overlap. Placed on a continuum, the bilingual individual has to go from using one language and suppressing the other to using both languages in a dynamic way: code-switching without ever breaking the pace of the conversation (Toribio, 2004). Depending on the nature of the situation, bilinguals may choose a frequency of code-switching that varies from none to very frequent in number and type (Francis, 2003).
To provide examples of this movement between languages, and of the requirements associated with it, Grosjean (2000) cites studies on the use of higher order constraints in code-switching as well as on the phonetics of code-switching. A rich body of research exists on the syntactic, semantic and pragmatic aspects of code-switching, much of it by authors cited in this study (e.g., Myers-Scotton, 1993a; Poplack, 1980; 2000). As an example of the phonetics of code-switching, Grosjean (2000) describes how bilingual speakers perceive speech differently than monolingual ones, partially because there is a temporary dominance of the basic elements (e.g., phonemes) of the main language as code-switching is occurring. Apart from these adaptations to language demands, and beyond the many definitions and associated concepts related to them, there are bilingual-specific behaviours such as linguistic mixing, language borrowing, translation, and code-switching, all of which are part of the way language is expressed (Hamers & Blanc, 2000). Code-switching is sometimes referred to as code-mixing, but should not be confused with language mixing. Language mixing refers to the confusion of two linguistic codes, and occurs more often in young children, where the mixings are generally lexical (Hamers & Blanc, 2000). An example is the inclusion of a Spanish word in an English request made by a three-year-old: "I want to comer a cone". Code-mixing and language mixing are often used as interchangeable terms that are themselves confused in the literature. Language borrowing is a more conscious act whereby a speaker chooses to include in a conversation a word from another language that is usually known to his interlocutor in that other language. The sentence "This is what I would call his zeitgeist" is an example of language borrowing. Translation would simply be the repetition of a sentence made in the other language: "Il sera de retour demain.
He'll be back tomorrow." "Code-switching takes place quickly and fluently" (Grosjean, 1982, p. 328), which means that bilinguals switch from one language to the other without breaking the flow of the conversation, while respecting language rules in order to maintain meaning throughout the utterance. For Poplack (2000), code-switching is "the alternation of two languages within a single discourse, sentence or constituent" (p. 224). Intrasentential code-switching occurs within a sentence whereas intersentential code-switching occurs between sentences. Code-switching is sometimes also defined within a specific frame of reference. For example, Myers-Scotton (2003) defines code-switching in the context of the Matrix Language Frame (MLF; Myers-Scotton, 1993a): "Classic code-switching is switching between two (or more) participating languages/varieties when speakers have strong enough proficiency in one of the languages to make it the sole source of morphosyntactic frame that structures the unit of analysis" (Myers-Scotton, 2005, p. 189). Martinovic-Zic (1998) believes that "Bilingual code-switching or mixing of two languages is a significant illustration of one's belonging or not (closeness or distance) from his/her group along the ethnolinguistic dimension" (p. 7). The "belonging" aspect of code-switching is important in that it relates to social identity (Myers-Scotton, 2000), a reason that is often described as a catalyst or trigger for code-switching. As observed, code-switching can take various forms. It should not, however, be confused with language borrowing; that is, the action of borrowing specific terms, or groups of words, from another language. In certain cases, borrowed words become part of the accepted language. For example, the use of words such as "scenario" or "Gesundheit" shows how the English language has integrated borrowed terms, in these cases from Italian and German respectively.
In a conversation, when an individual integrates words from another language, it should not automatically be interpreted as code-switching. In the course of this study, code-switching behaviours were explored more directly because they are known to serve a variety of functions. These functions can be social or psychological, with the latter being classified as either communicative or cognitive (Hamers & Blanc, 2000). As defined above, code-switching is the alternation between two languages. Code-switching can be used as a tool for social negotiation and serve to emphasize one's identity or social rank (Myers-Scotton, 2000). It follows certain patterns (Wei et al., 2000), changing as the speaker needs to adapt his language use to a specific situation or to elements of the conversation (Auer, 2000). These aspects come into play in the course of an evaluation of intelligence. For instance, code-switching can follow different patterns such as those observed during the evaluation process, when an individual needs to make use of his most fluent language, which is not the language of the test, in order to provide an appropriate response to a verbally presented test item. Hamers and Blanc (2000) point out that "Language does not and cannot exist outside the functions it serves" (p. 8). The issue of what function language serves is fundamental in the context of a testing session, during which language is used to express intelligence, in the sense that providing a correct response to a test item is expressing intelligence. Furthermore, because language is the chosen form of expression, the need to access all of its related components becomes even more important. Taking into consideration that "Code-switching is a verbal skill requiring a large degree of linguistic competence in more than one language, rather than a defect arising from insufficient knowledge of one or the other" (Poplack, 2000, p.
255), one can understand how it becomes important to make proper use of this ability in a more global evaluation of intelligence in which the majority of subtests are mediated verbally. Various authors (MacSwan, 1999; Poplack, 1980) have proposed models guiding what should or should not be acceptable in code-switching. These rules comprise a variety of constraints that dictate both the type and appropriateness of code-switching. For example, Halmari (1997) writes about the "government constraint", referring to an independent grammar or government for code-switching that would be similar to that of monolingual speech. Like those of many of his predecessors, however, Halmari's (1997) constraints have been challenged, in part because they imply a universal respect for grammar. On the other hand, Chomsky (1995) developed the Minimalist Program, whereby the parameters are restricted to the "lexicon", a somewhat less restrictive set of constraints. Code-switching, within this program, would simply be the use of a combination of two lexicons, and thus not as constrained as Halmari's model.

The linguistic structure of code-switching

Intrasentential code-switching, code-switching within a sentence, occurs between different sorts of languages that are not all grammatically comparable. For example, English can be considered a subject-verb-object (SVO) language, where words are generally placed in that order in a sentence. When code-switching occurs, it can be between two languages that are both of the SVO type, but it can also be observed between an SVO and an SOV language such as Korean. Code-switching can also occur within a word, where the word has its stem in one language and its ending in another. MacSwan (2000a) gives the following example: "Juan esta eat-iendo" – "Juan is eating" (p. 46) to show how a subject can start a sentence in Spanish, include a stem in English ("eat"), and complete the word and sentence in Spanish.
In this example, the original morphology of the sentence and the altered word are intact; that is to say, the "eat" included in the word eat-iendo follows the acceptable form of the word that would have been "comiendo" in Spanish. Switches do not, however, always follow rules so neatly. Code-switching has been studied from various perspectives, as much to explain the behaviour as to normalise it. The example in the previous paragraph is a mere illustration of a behaviour that takes many more forms and appears to be governed either by numerous rules or by very few, depending on how one approaches it. Considering the structures and rules of language organisation in code-switching, some authors (Chomsky, 1995; Halmari, 1997; MacSwan, 2000a; Poplack, 2000) have observed how it occurs in many different combinations of languages, attempting to circumscribe the general rules behind this language behaviour. Code-switching is accepted as a phenomenon that "occurs with high frequency whenever two or more speakers who are bilingual in the same languages communicate with one another" (Hamers & Blanc, 2000, p. 258). A variety of code-switching models have emerged as a result of the numerous hypotheses put forth proposing that lexical, syntactical or grammatical elements play the role of bridges between languages. MacSwan (2000a), introducing Chomsky's (1995) minimalist approach, contrasts it with many other approaches that have listed different constraints on the acceptability of certain utterances. Numerous models have been proposed (Poplack, 1980; Joshi, 1985; Myers-Scotton, 1993a; Chomsky, 1995), all with limitations, but contributing toward the construction of a somewhat simpler model (MacSwan, 2000a). One model that is often mentioned, and that is relevant to this study, is the Myers-Scotton (1993a) model. The author argues that "code-switching provides evidence regarding both the flexibility and inflexibility of language" (Myers-Scotton, 2003, p.
189). Flexibility and inflexibility are noteworthy in that they may be two of the most salient language characteristics at play during an intelligence test. During such testing, a student has to provide responses to questions that are presented in one particular language, and the answers may be provided in two languages or in a language different from that of the test. This is referenced in works on lexical and language inhibition (Costa & Santesteban, 2004). Grosjean (1982) noted that language dominance may vary according to context and circumstance, giving the example of a student who studied in a language other than his native one and had established dominance in that second language only when he had to discuss his area of study. With flexibility and inflexibility at stake in code-switching, the movement between language dominances may be an illustration of both in the context of an assessment of intelligence, where language and code-switching are constantly solicited. Examining Myers-Scotton's (1993a) model for code-switching, Grosjean (2000) distinguishes between matrix and embedded languages. Myers-Scotton's (1993a) model, the Matrix Language Frame (MLF), implies that there is a hierarchy between languages. The matrix language plays a dominant role and "its grammar sets the morphosyntactic frame for two of the three types of constituent contained in sentences showing intrasentential code-switching […]" (Myers-Scotton, 1993a, p. 6). In the MLF, the bilingual individual would generally use his/her dominant language as the matrix language and organise his/her conversation based on that language, while including elements from his/her second language, that is, the embedded language (Myers-Scotton, 1993a).
However, MacSwan (2000a), arguing that such a frame was not necessary, challenged this model, which implies the presence of an actual grammatical language frame, and affirmed that those who share Myers-Scotton's perspective "carry a particular burden of proof: they must show that the grammatical facts in code-switching cannot be explained unless the notion of a language frame is a principle of grammar and not a code-switching-specific constraint" (p. 42).

The motivation to code-switch

The motivation behind the need to code-switch is another aspect of code-switching that has been explored by various authors (Grosjean, 1982; Myers-Scotton, 1993b; Martinovic-Zic, 1998). As mentioned above, bilingual-specific behaviours such as linguistic mixing, code-switching and translation (Hamers & Blanc, 2000) are part of the way language is expressed by bilingual individuals in everyday conversations as well as in the context of an evaluation of cognitive abilities. Code-switching can serve a variety of functions, such as being a tool for social negotiations (Myers-Scotton, 2000), and can follow certain patterns (Wei et al., 2000). As an example, in a specific situation such as the administration of a test, a speaker will choose to refer to a second language either to convey a certain message, "I know the answer but I cannot express it in English", or to expand on an already complex utterance while respecting certain rules, such as the need to translate key words in a sentence. Beyond the need to communicate, code-switching serves a number of functions that vary from one situation to another, such as using the language of the interlocutor as opposed to the language of the majority, and from one language to the other, switching from L1 to L2.
Martinovic-Zic (1998) describes how a bilingual may choose to move from his first to his second language, or even to mix the two languages, based either on his understanding of his relationship with his interlocutor or on the social context. As one way to affirm his social identity, a bilingual individual may choose to speak the language of the majority, his second language, in order to gain an "in-group" status (Martinovic-Zic, 1998). Mackey (2000), in a study aiming at describing bilingualism, gives an example in which an individual would alternate from French to English, with French being used for less than 5% of the entire text, while the interlocutor, who switched less often but for longer periods, might use French for up to 50% of the conversation. The reason for these switches is not incompetence, but rather an adaptation to the interlocutor and, probably, to the topic of discussion. Mackey (2000) identifies a long list of factors that could impact, explain or trigger code-switching, a list that includes political, cultural, historical and religious factors as well as aspects such as age, sex, intelligence, memory and language attitude.

The cost of code-switching

Although voluntary, alternating between two languages has a switching cost (Costa & Santesteban, 2004). Von Studnitz and Green (2002) studied the costs associated with code-switching, finding that "bilinguals were slower to make a lexical decision about a word on a trial where there is a switch of language compared to a trial where there is not" (p. 241). They found that, although there might not be any cost to code-switching in everyday circumstances, "if language is a potent cue to language-specific recognition procedures, then costs may exist when such expectations are not met" (p. 249). Costa and Santesteban (2004) found that the costs are asymmetrical; that is, they vary depending on whether the subject has to move from L1 to L2, as opposed to L2 to L1.
They conducted experiments with proficient bilinguals, who were asked to perform picture-naming tasks within a switching-pattern procedure in which the background colour of the picture determined the language in which the response was expected, and found that it was more time consuming for the individual being tested to switch from L2 to L1 than the reverse. One explanation for these findings is that, when presented with a picture, subjects respond internally in both languages and have to inhibit the response in L1 in order to provide a response in L2. Such inhibition is more difficult in the dominant language (Costa & Santesteban, 2004). Picture recognition tasks, such as those used in the previous example, are common in most intelligence tests where a reliance on language skills is important, even in the so-called nonverbal tests or subtests. They could hypothetically be encountered when bilinguals are asked to name pictures, such as in the Picture Vocabulary test of the WJ III COG. Responses may not always be available in the language that is not the dominant language. For this reason, a child may have difficulty dealing with the need to inhibit his dominant language, which is the language in which the response first appears, and will not necessarily be able to provide the expected response in his L2, which is the language of the test. Given these associated costs, one needs to be cautious in interpreting results where code-switching may have taken place, particularly in testing situations that are under time limitations. Although these types of test questions are not frequently found on tests or subtests measuring or involving verbal skills, they are occasionally used. The Picture Completion subtest of the WISC-IV would be an example. This might also hold true in the case of nonverbal tests with verbal directions, in which the examinee might have to translate the directions in order to fully understand the task at hand.
Cognitive Abilities and Bilingualism in the Assessment Context

The role language plays in the evaluation of cognitive abilities

Bilingualism, in the context of an evaluation of cognitive abilities, means that students who have two languages need to refer to both languages to provide responses, which allows them to optimise their results and give the most accurate possible portrait of their intellectual functioning. This raises the question of whether one can assume that a construct remains equivalent or comparable if a question or test item is translated. When considering the intelligence tests that have been translated and adapted in recent years, such as the WISC-IV (2003) and the WJ III COG (2001), it can be observed that the items are, for the most part, simply translated, although they do not necessarily appear in the same position in the item sequence of a test or subtest. This difference in position is related to the item difficulty level and does not reflect a change in the construct being measured. An examinee who translates or code-switches on a specific item in a subtest is not being tested on a different construct, but is rather using a certain skill to address the question at hand. When a student, for example, on a Vocabulary subtest, is asked "What is a dog?" and he/she chooses to respond by providing a French definition of a dog, he/she is still being tested on his/her long term memory, verbal comprehension, range of knowledge, and ability to provide a verbal response to an orally presented item. It can, therefore, be asserted that when examinees code-switch or translate for their own benefit, they are still being tested on the same construct as unilingual examinees who are questioned and respond in one language only (see Munoz-Sandoval et al., 1998, on construct validity for details).
To summarize, code-switching was once believed to be a sign of language incompetence and is sometimes confused with code-mixing, observed in young children (Hamers & Blanc, 2000). It has been found that these code-switches are not due to a lack of skill but are rather an adaptation to the interlocutor and, probably, to the topic of discussion (Mackey, 2000). Moreover, until the 1960s, it was believed that bilingualism had a detrimental effect on intellectual growth. Bilingualism was viewed as a cause of inferior intelligence (Hamers & Blanc, 2000), and low scores obtained by non-English speakers were interpreted as evidence of a learning disability and led, in some situations, to a diagnosis of intellectual disability. However, since the Peal and Lambert (1962) study, many experiments have confirmed that bilinguals score higher than monolinguals on measures of specific verbal and non-verbal cognitive abilities, as well as on general intellectual functioning. The "bilingual cognitive advantage" has been observed on measures of verbal originality and verbal divergence, on Piagetian concept-formation tasks and non-verbal perceptual tasks (Ben Zeev, 1977), on measures of visual-spatial tasks (Hakuta & Diaz, 1985), and in following complex instructions (Cummins, 1984). Ben Zeev (1977) worked with a group of bilingual Hebrew-English children on tasks where their ability to identify language features was tested. These bilingual children outperformed their monolingual counterparts. Such significant changes in the results obtained by bilinguals, and in the perception we have of their linguistic functioning, are further arguments in favour of additional research on the impact of bilingualism on intelligence. Allowing examinees to use all their languages is intuitively appealing because so many of the tests and subtests that constitute measures of cognitive abilities are largely verbally mediated.
As Chomsky (1972), a multilingual theorist by any definition, puts it: "One would expect that human language should directly reflect the characteristics of human intellectual capacities" (p. ix).

The Need for a New Testing Procedure

In this study, the role of language in the expression of cognitive abilities, bilingualism, and the assessment process for cognitive abilities, which includes the tests used and the test users, were examined. Given the increasing number of bilingual students in the school population, the need to assess their cognitive abilities in a fair and appropriate manner, and the fact that the current practice in the assessment of bilinguals is greatly challenged (Flanagan & Ortiz, 2001; Ortiz & Dynda, 2005), this study became an opportunity to describe how the current testing procedures for bilinguals are still inadequate (Figueroa, 1990b; Flanagan et al., 2000; Ortiz, 2002) and greatly need improvement. The need to adapt present practices is, in part, linked to the fact that the "assessment of bilingual students' verbal academic or cognitive abilities in English alone will underestimate their [second language learners' – bilinguals'] academic potential to a very significant extent for at least 5 years after they start learning the language" (Munoz-Sandoval et al., 1998, p. 9). Language and verbal abilities have been part of most theories of cognitive abilities and, by extension, a key component in developing measures of cognitive abilities. The impact of language and culture on test performance, as well as on the interpretation of this performance, remains unclear (Flanagan & Ortiz, 2001). Language is part of the development as well as the expression of cognitive abilities. Carroll (1987, 1993) tried to assess the contribution that language skills make to intelligence and, more specifically, the role that knowledge of a second language plays.
He pondered Oller’s (1983, 1991) question of whether language proficiency is to be interpreted and organized as a single, unitary ability or as a series of multiple, “divisible” competencies, and concluded that it was the latter. Andreou and Karapetsas (2004) asserted the opposite when they wrote that “It is by now a well-established finding that native and foreign language abilities are merely manifestations of a unitary linguistic capacity and that verbal ability underlying proficiency in one language, can be generalised to another language” (p. 357). This issue of proficiency is central to the debate, as it generally becomes the reason behind a decision either to test, or to accept test results from, a bilingual individual who was tested in English, a second language in many cases, or to make the decision to test that individual in his/her other language. When confronted with the question of language proficiency, practitioners have often been forced to use tests and procedures that may not be appropriate, either due to a lack of training or limited exposure to the knowledge needed to deal with linguistically diverse populations (Flanagan & Ortiz, 2001). As Lopez (1997) put it: “Currently the issue of how to use language proficiency data to determine what language(s) to use during cognitive assessment sessions warrants further empirical study” (p. 507). Bilinguals are as affected by the question of proficiency as Limited English Proficient individuals. In both cases, finding the appropriate assessment instrument may present a challenge because bilinguals may not be completely proficient in either of their languages. Code-switching is one specific characteristic of the language use of bilinguals that illustrates this proficiency issue. Code-switching is sometimes referred to as code-mixing (Myers-Scotton, 1993b).
Because it is so universally observed amongst bilingual individuals combining all sorts of languages, code-switching has been studied from numerous perspectives, such as social psychology, anthropological linguistics, and sociolinguistics, and with a wide variety of languages in many different contexts (Myers-Scotton, 1993b). Two points of view are particularly relevant to this study: the linguistic structure behind code-switching (Poplack, 1980; Myers-Scotton, 1993a; Chomsky, 1995; Poplack, 2000; MacSwann, 2000; Wei, 2005) and the motivation for a bilingual to code-switch (Myers-Scotton, 1993b; Martinovic-Zic, 1998; Myers-Scotton, 2000; Wei, 2005). Issues such as grammar, lexicon, and constraints are addressed by the study of the linguistic structure behind code-switching. Issues of social identity, power relationships, and the costs of switching from one language to another are more associated with the motivational aspect of code-switching. Both of these issues are at play in a testing situation in which a student has to determine which language he/she will use to answer a question that is presented orally and demands an oral response. Furthermore, although it is known from anecdotal evidence and some empirical data that the mixing of languages tends to decrease with age, the changes and the patterns that are associated with mixing vary from one study to another (Genessee, 2001). Despite this age-related decrease, and the fact that code-switching is perceived as a sign of linguistic incompetence when young children are learning two languages, it is seen as a sign of proficiency in bilingual adults (Genessee, 2001). Such considerations are important to note because bilingual students, like others, are tested throughout their school career and may have to refer to their two languages during the course of a test. Bilingualism, and the specific linguistic behaviour that is code-switching, are also gaining in importance.
Schecter and Cummins (2003) report that almost half of the populations of many urban school districts in North America come from language backgrounds other than English. Citing past census data, many authors before them (Lopez, 1997; Flanagan et al., 2000; Ortiz, 2002) have alluded to the increase in the number of students coming from non-English households and to the diversity of community and school compositions. Although they may attend English school, these students may come from non-English backgrounds and may have limited proficiency in English, or may be bilinguals who have a language other than English as their dominant language. In the interest of test fairness, and in order to obtain the most valid assessments of intelligence, these characteristics need to be considered when choosing a test or an assessment procedure. The need to recognise and assess such specific populations as bilinguals is not new. The National Association of School Psychologists (NASP; 1992) proposed a series of recommendations insisting on the need for non-biased assessment. For example, Standard 3.5.3 (NASP, 1992) states the following: “Assessment procedures are chosen to maximize the student’s opportunities to be successful in the general culture, while respecting the student’s ethnic background” (Thomas & Grimes, 1995, p. 1166). The fairness issue has also been acknowledged in the recommendations of the Standards for Educational and Psychological Testing (American Educational Research Association [AERA], American Psychological Association [APA], & National Council on Measurement in Education [NCME], 1999): that there be a preference for bilingual examiners for bilingual students and that students be tested in the language in which they have the highest level of proficiency. Other issues of fairness have to do with the test norms and the test users’ training.
The Standards for Educational and Psychological Testing (AERA/APA/NCME, 1999) go further with Standards 7.1 to 7.12, which address issues of fairness in testing and test use. Bilingual students are a specific subgroup of the general population, and Standard 7.1 states (AERA/APA/NCME, 1999): When credible research reports that test scores differ in meaning across examinee subgroups for the type of test in question, then to the extent feasible, the same forms of validity evidence collected for the examinee population as a whole should also be collected for each relevant subgroup. (p. 80) Taking these facts into consideration accentuates the need to expand current research on bilingual testing. Although this study does not directly focus on the issue of the training given to test administrators, this issue is also of concern. Ochoa, Rivera, and Ford (1997) found that “83% of school psychologists […] conducting bilingual assessments described their training in this area as less than adequate” (p. 341), which makes the results obtained and the conclusions drawn by these psychologists, at best, questionable. Despite these recommendations and the current situation in schools across North America, McGrew and Flanagan (1998) remind us of the absence of tests with norms for bilingual and multilingual children, because multilingualism is not one of the variables controlled for in the development of norming samples. The testing of cognitive abilities is generally done in English, which is the language of the most widely used tests of intelligence (e.g., the Wechsler Intelligence Scale for Children Fourth Edition, the Woodcock-Johnson Third Edition Tests of Cognitive Abilities, and the Stanford-Binet Fifth Edition).
Moreover, given the increasing number of students who come from non-English speaking backgrounds, and because the evaluation of cognitive abilities occurs on a regular basis in most schools across the continent, there is an increased need to match the instruments and procedures to these linguistic realities. Although the aforementioned tests have been or are being developed in other language versions, none of them have norms or procedures that are adequate for the verbal evaluation of bilingual children. In order to evaluate fairly and accurately the cognitive abilities of bilingual children, a testing procedure needs to offer the possibility of using all of the examinee’s available languages. This is consistent with Ortiz’s (2002) comment that the “aggregation of an individual’s language abilities into a bilingual composite after being measured separately is unlikely to be the most accurate operationalisation of what bilingual ability actually is, yet it does manage to surpass previous methods in this respect” (p. 1327). Ortiz (2002) speaks here of the underlying construct validity issue that is raised when bilingual individuals are tested with instruments that do not consider the way in which they express themselves. Testing an individual monolingually with two comparable instruments may measure the same construct in each language, but the question remains whether that construct is the same for a bilingual individual as for a monolingual one. The Bilingual Verbal Ability Test (BVAT: Munoz-Sandoval, et al., 1998) is currently the only bilingual instrument available to test some of the skills generally associated with intelligence in bilingual students (Ortiz, 2002). Although it allows for responses in more than one language, it does so in only one language at a time (Ortiz, 2002).
The BVAT (Munoz-Sandoval et al., 1998) does not allow for constant code-switching and therefore falls short of being a procedure that fully recognises code-switching, an important linguistic behaviour characterising the language of bilinguals. Munoz-Sandoval and her colleagues (1998) themselves identified the need for a bilingual assessment procedure. By adding the code-switching component, a newly developed testing method should offer a procedure that is more representative of a naturally occurring linguistic behaviour. Myers-Scotton (1993a) described how bilingual speakers generally use a main language, the matrix language, and include elements from their other language, the embedded language. Although the switch may be intrasentential (within a sentence) or intersentential (between sentences), it remains present and is not just a matter of language borrowing. This behaviour has to be accepted in testing in order to reflect adequately the tested individual’s language abilities. When one considers that, in the vast majority of cases, the intelligence of bilinguals is assessed in English by a monolingual Anglophone examiner conducting a test of cognitive abilities, the need for bilingual assessment of these students, who represent a significant part of the population, becomes evident (Cummins, 1984; Ortiz, 2002). Figueroa (1990b) challenges the appropriateness of the practice of testing a student in English and then testing them with a translated version of the same test. Figueroa describes this kind of testing as a monolingual assessment in two languages rather than a proper bilingual assessment. The nuance is obvious to anyone who has heard bilingual children, and often adults as well, speak. Both languages are used simultaneously through a code-switching process that follows a variety of rules (Grosjean, 1982; Clyne, 2000; Poplack, 2000).
Code-switching is used by bilinguals in everyday conversations, and it is also a concrete reality in intelligence testing situations. Because bilingual individuals are typically tested in only one language, interpreting and reflecting upon the test results these individuals currently obtain on measures of cognitive abilities is a puzzling exercise. The interpretation of the assessment leaves room for questions, in great part because the testing procedures are challenged both by test users and by test developers themselves, who are aware that their tests do not necessarily account for all language abilities. Ascher (1990) insisted that the psychological assessment of bilingual children should be conducted in both languages, in great part because these children rely on both languages depending on the demands coming from the different settings (e.g., home, school) and contexts in which they are placed. A testing procedure such as the one proposed in this study would make it possible to address some of the many issues that remain when bilingual individuals are evaluated for placement or diagnosis. In this study, using selected subtests of the WISC-IV and selected tests of the WJ III COG (Woodcock, et al., 2001), the appropriateness of allowing for the possibility of spontaneous code-switching without penalising the examinee was examined. The examiners used both English and French to administer the tests while also accepting English and French responses to the test items. In summary, numerous gaps in the area of bilingual testing and in the testing of bilinguals are observed because we lack the necessary evaluation instruments, the norms for the existing instruments, the procedures to use these instruments appropriately, and the training to follow through with the evaluations. Current testing practices have yet to integrate the bilingualism with which a growing number of students live, both at home and at school.
These bilingual students express themselves in their two languages, either separately or through code-switching. Ortiz (2002), and Lopez (1997) before him, recommended that test administration be adapted to the point where alternate responses would be considered acceptable and where items could be presented bilingually. By letting the student code-switch throughout the assessment, and by measuring the impact of code-switching on test results, one more step was taken toward the recognition that there is a need to assess students in a bilingual fashion, and not simply to hypothesize that they can be tested in one language or the other based solely on language dominance. Given all these premises, a new procedure is needed that accounts for linguistic demands, or language loading, without being biased, and that allows for and validates the use of the bilingual individual’s two languages. This new testing procedure must still account for language proficiency, as it is important to determine language proficiency in the process of assessing children who have a known limited mastery of English or who are bilingual (Lopez, 1997). Language proficiency data are needed to determine what language or languages should be used in evaluating cognitive abilities. “Once the language proficiency data are gathered, evaluators are faced with the challenge of determining which language(s) to use during the cognitive assessment process” (Lopez, 1997, p. 507). Target Population Studies (Wei, 2000; Dewaele, Housen, & Wei, 2003) show that, worldwide, the multilingual population has surpassed the monolingual one. Given that between 5,000 and 6,800-plus languages or dialects are reported for fewer than 200 countries, it would be surprising if there were fewer multilingual speakers than monolingual speakers.
The same trend towards multilingualism has been observed in North America, where demographic changes have been noted and projections have been made regarding a rapid growth in the number of bilingual North Americans. For example, based on the 2006 Canadian census data (Statistics Canada, 2006), 41% of residents in the greater Vancouver area were allophones, compared to 38% in 2001. Statistics Canada (2006) reported that 44% of the Toronto population were allophones. Of the allophones in Canada, 68% reported that they speak some English or some French at home (Statistics Canada, 2008), illustrating the number of bilinguals in that country. Various authors (Bainter & Tollefson, 2003; Lopez, 1997; Ortiz, 1997; Ortiz & Flanagan, 2002) have quantified these changes in order to help put into perspective the need to adjust our assessment tools to this coming reality. Given that one of the most significant coming changes is that a majority of the population will use a language other than English on a regular basis, the language of testing might need to be revised to recognise the growing bilingual population. In addition, as bilingual instruments are developed, efforts will need to be made to take into consideration the portability of various aspects of bilingual testing and assessment procedures. The fact that language history plays a greater role than language exposure in the learning of a second language (Marian, Blumenfeld, & Kaushanskaya, 2007) is an example of a component that cannot be overlooked when bilingualism is studied in the context of the assessment of cognitive abilities. Summary Intelligence, bilingualism, and assessment procedures intersect in the course of an assessment of the cognitive abilities of bilingual individuals.
Although numerous definitions and theories of intelligence exist, the Cattell-Horn-Carroll (CHC) Three-Stratum Theory of cognitive abilities was chosen as the foundation for the understanding of intelligence, given its identification of various factors with linguistic components. Furthermore, the CHC theory is the one on which the Woodcock-Johnson Tests of Cognitive Abilities (Woodcock, et al., 2001) are based. This battery of tests measures a wide variety of abilities, many of which rely heavily on the language skills that are fundamental in the assessment of bilingual individuals. The assessment of these bilingual individuals remains a challenge for psychologists, and the issue of bias is still raised, although it has been thoroughly addressed. There are different types of bilingualism, and not all are easily considered through testing. These differences are associated with the age at which a second language is learned, the proficiency in that second language, and the developmental stage of acquisition of the second language (e.g., see Cummins, 1984, 2000; BICS and CALP). Historically considered as having a detrimental impact on intelligence, bilingualism is now perceived as an advantage. However, the manner in which its impact is measured and the role it plays when cognitive abilities are measured are still unclear. One behaviour noticed when bilingual individuals are tested is code-switching, whereby examinees move from one language to the other to provide a response to a verbal test item. This behaviour, observed almost universally in bilingual speakers, has an impact on test results that had yet to be measured. These individuals are generally assessed using nonverbal measures or procedures that have yet to be validated. Although accepted in the field, these procedures remain greatly challenged.
The population of bilingual speakers is increasing worldwide, and the need to acknowledge this fact in the assessment of cognitive abilities is becoming increasingly pressing. Chapter Three Method When bilingual students are put in a test situation in which they have to respond orally, they often make use of both their languages, or go back and forth between these languages, to provide a more complete or accurate response. In this study, the impact of using a second language to respond to verbal items presented on tests of cognitive abilities was evaluated. Given the important role that code-switching plays in the language of bilingual individuals, and in order to evaluate the impact code-switching has on scores obtained on standardised measures of cognitive abilities when administered to bilingual children, the following research questions guided this study: Research Question One Are there significant differences between results on tests and subtests measuring selected cognitive abilities obtained by participants in an Experimental Group, who were provided with a code-switching procedure in which they and the examiner could code-switch, and participants in a Control Group, who were tested following the standard procedure in French, as described in the test manuals? Hypothesis one It was hypothesized that bilingual children in the Experimental Group, who were presented with the option of using code-switching, would obtain higher scores than the average results produced by participants in the Control Group, who were not encouraged to code-switch or presented with the option of doing so. Research Question Two Are there significant differences in the code-switching frequency of participants in the Experimental Group and the code-switching frequency of participants in the Control Group, taking into consideration that the examiners only code-switched with children in the Experimental Group?
Hypothesis two It was hypothesised that students in the Experimental Group would code-switch at a greater frequency because they were presented with the code-switching procedure. Code-switching was not presented as an option to the Control Group. As was observed during the pilot for the present study, and as reported in the literature (see Martinovic-Zic, 1998), bilingual speakers choose the languages they need based on their relationship with the interlocutor or, in this case, the examiner. This study was framed with the presupposition that participants in both groups would code-switch, thereby affecting their test results; in order to perform well on the tests, the participants would want to use both languages to respond fully to test items. However, because participants in the Experimental Group were openly offered the opportunity to code-switch, it was hypothesized that code-switching would occur at a higher frequency in that group. Research Question Three Is there a relationship between the frequency of code-switching behaviours and the degree of exposure to a second language? Hypothesis three The goal here was to examine how having more exposure to one language would affect a participant’s code-switching. As a student’s main sources of language are parents and grandparents, it is a reasonable assumption that the student would speak the language of their parents and grandparents. It was hypothesised that the more a person is exposed to a second language, the higher that person’s need to code-switch between languages and the greater the frequency of code-switching will be. The reason is that the increased fluency and proficiency stemming from increased use would in turn be a reason for the participant to rely on that other language, as is the case in everyday conversation.
Participants The participants were 105 students registered in the Conseil scolaire francophone de la Colombie-Britannique (CSF), which governs all the public French schools in more than 55 communities throughout the province of British Columbia. The participants were randomly assigned to a group as their signed consent forms (see Appendices A and B) were returned to the investigator, who alternated assignments between the two groups. The Control Group was composed of 25 girls and 25 boys in grades 4 to 10, ranging in age from 9 years 4 months to 13 years 10 months (M = 11 years 2 months). The participants in the Experimental Group were 31 girls and 24 boys in grades 4 to 10, ranging in age from 9 years 5 months to 13 years 3 months at the time of testing (M = 11 years 4 months). More consent forms were returned for girls, who were assigned to the Experimental Group, and the drop-out rate among boys in that group was greater, resulting in a higher number of girls in this group. All students had been taught exclusively in French from kindergarten to grade three and started receiving some English language instruction in grade four, when a class of English Language Arts was added to their curriculum. They thus received formal instruction in both languages used in the testing, French and English. English Language Arts, introduced in grade four, was their only course in English until grade eight, after which some students took other courses, such as sciences or other electives, in English. In addition, participants came from various language backgrounds and, although they were in a francophone school system, not all had French as a first language (as defined in Chapter One). Table 3.1 shows the first language of the study participants and their parents as reported by the parents on the Language Background Questionnaire (see Appendix D) administered as a part of the present study.
Parents who responded to the questionnaire had the choice of naming French, English, or any other language as a first language. Many parents selected more than one language as a first language, either for themselves or for their children. By choosing more than one language as a first language, the parents created the need to add another category to the analysis (the French/English first-language category seen in Table 3.1), whereby two languages had to be considered a first language, thus supporting the notion that the participants in the present study were bilingual. The largest proportion of parents in the Control Group (43% of mothers and 41% of fathers) shared French as a first language but indicated that their children mostly had English as a first language. Similar results were observed for the parents of students in the Experimental Group (47% of mothers and 40% of fathers), but these parents indicated that their child also had French as a first language in 42% of the cases. As was noted earlier, while most children shared their parents’ first language, this cannot be generalised to all children. As illustrated in Table 3.1, differences are noted between the parents’ first language and the participants’ first language. Because the return rate for the Language Background Questionnaire was lower than the actual number of participants, only 37 responses were available from the parents of the participants in the Control Group and 43 responses from the parents of the participants in the Experimental Group.
Table 3.1
First Languages of Study Participants and Their Parents

Control Group (n = 37*)
  First language    Child       Mother      Father
  French            30% (11)    43% (16)    41% (15)
  English           43% (16)    30% (11)    32% (12)
  French/English    14% (5)      8% (3)      3% (1)
  Other             14% (5)     19% (7)     24% (9)

Experimental Group (n = 43*)
  First language    Child       Mother      Father
  French            42% (18)    47% (20)    40% (17)
  English           26% (11)    23% (10)    33% (14)
  French/English    19% (8)      2% (1)      0% (0)
  Other             14% (6)     28% (12)    28% (12)

*The n is lower than the number of participants as it reflects the return rate of the questionnaires.

Recruitment occurred in francophone schools located throughout the province of British Columbia. The Superintendent granted permission to conduct the study in the district, and permission was then sought from five school principals. Given that many of the district’s schools were not close to any densely populated area and had populations of fewer than 100 students, the participants were first selected from schools where the number of students and the ease of access for testing were optimal. Once permission to conduct research in their schools was obtained, the individual principals were contacted for permission to approach teachers and students in their buildings. With the approval of the principals, a presentation of the goals and process of the study was conducted in each classroom. Recruitment and consent letters seeking their child’s participation were sent to the parents. The parents returned the consent forms to the school, where they were collected in a return box designated for this project. With the parents’ consent, students were then approached for participation in the study and their assent was obtained. There were 363 letters sent to parents of students in the five schools selected from among those that had expressed interest in participating in the study.
Of those, 145 letters were returned, with 123 parents giving consent for their child to participate, resulting in a return rate of 39.9% and a participation rate of 85.4%. There were 19 parents who indicated that they did not want their child to participate in the study. Three returned consent forms were left blank and, as a result, these children were not included in the study. The parents of 52 boys and 71 girls returned signed consent forms, and 105 students were randomly selected for participation in the study. The students with signed consent forms were randomly assigned to either the Control Group or the Experimental Group as their forms arrived. Efforts were made to have equal sex representation in both groups. There were more girls in the Experimental Group (31 girls and 24 boys) than in the Control Group (25 girls and 25 boys) as a result of: 1) school scheduling changes that prevented the examiner from testing on certain occasions; and 2) some participants leaving the school prior to testing, or being away from school for a long period of time, and thus no longer being available to participate in the testing. Given that more girls brought back their consent forms, more girls were assigned to a group, and the drop-out rate was higher among boys in the Experimental Group. By the time 50 participants were left in the Experimental Group, only 19 were boys. Another recruitment drive was necessary to get five more boys into the Experimental Group, bringing the final number of participants to 105.
Students registered in Francisation (French as a Second Language) classes or in English as a Second Language classes, and students with learning disabilities or any other impairment, such as an intellectual disability or a cognitive processing deficit, that could directly affect their results, were not included in the sample, as these would have added a variable that could not have been accounted for in the interpretation of results. Although students with previously identified difficulties were not included in the study, there appeared to be a wide range of student abilities represented. For example, the scores on the Échelle de Vocabulaire en Images Peabody (ÉVIP: Dunn, Thériault-Whalen, & Dunn, 1993) in the Control Group ranged from 71 to 132, and the scores on the Peabody Picture Vocabulary Test-Third Edition (PPVT-III: Dunn & Dunn, 1997) ranged from 66 to 131. The ranges of scores on these same tests in the Experimental Group were 78 to 127 on the ÉVIP and 58 to 142 on the PPVT-III. Comparable ranges were found on the other tests and subtests used in this study. Of the 18 students with parental consent who were not tested, 6 were not tested because they were designated in a special education category, 4 because they were either away on the day testing occurred at their school or had left the school, and the other 8 because the targeted number of participants had already been reached. Pilot Study An adapted test record was developed specifically for this study. The adapted test record consisted of items selected from the original test record as well as a section to record code-switching occurrences and patterns. Two bilingual high-school students were the pilot participants, chosen because they were balanced bilinguals and could provide important insight as to what needed to be done in preparation for the study. Both were presented with the code-switching procedure, which is presented later in this chapter.
The participants were questioned on how they chose to respond and on when and why they chose to code-switch. Although no formal data were derived from their results, as this was not the goal of the pilot study, the information they provided helped structure the adapted test record, prepared the examiners for what to expect, and provided insight into the timing of, and reasons for, code-switching. The information obtained from the pilot study was used to revise the study procedures and training manual (see Appendix E). Instruments In order to measure the impact of code-switching on the test results obtained by participants on tests of cognitive abilities, it was necessary to select tests that measured cognitive abilities. It was also important to select tests that provided data on language proficiency, in order to rule out a simple explanation for code-switching, such as the students having a much greater facility in one language. The following instruments were selected because of the high level of linguistic demand they place on the examinee, or the need to use specific language skills to respond to a test item (Flanagan, Ortiz, & Alfonso, 2007): the Échelle de Vocabulaire en Images Peabody (Dunn, Thériault-Whalen, & Dunn, 1993) and the Peabody Picture Vocabulary Test-Third Edition (Dunn & Dunn, 1997). These tests were used as comparative measures of language abilities because they are the French and English equivalents of one another. They served as language proficiency indices. Although they were not used to screen participants on language proficiency, they were useful in providing data on how the participants performed on receptive language in both French and English. These two tests were the only tests administered in one language, which means that code-switching could not have had an impact on their results.
Identifying significant discrepancies between the results on these two tests helped to explain other differences noted on the other tests administered. The tests are presented here in the order in which they were administered to participants. The technical data, based on the norming groups used with each test, are taken from the respective test manuals unless otherwise mentioned. Such correlation coefficients are not fixed and may vary considerably with group composition; that was the case in this study, where the participants differed from the general population on important characteristics such as language and cultural background.

The Échelle de Vocabulaire en Images Peabody

The Échelle de Vocabulaire en Images Peabody (Dunn et al., 1993) is a French adaptation of the PPVT-R, the predecessor of the PPVT-III. It was developed to test verbal competencies in children 2 to 18 years of age by showing the participant four pictures and saying one word; the participant then had to point to or show the picture that matched the spoken word. The ÉVIP manual, citing numerous studies on the external validity of the Peabody Picture Vocabulary Test - Revised, reports an average correlation of .71 with other measures of receptive language. In comparison, the PPVT-R has a median correlation of .72 with the Vocabulary subtest of the Stanford-Binet and .69 with the Vocabulary subtest of the WISC (edition not specified), as reported in the ÉVIP manual (Dunn et al., 1993). The ÉVIP, like the PPVT-III, was used because it provides a measure of receptive language ability that correlates well with general intelligence. On the ÉVIP the examinee is asked to point to a response; he or she does not have to verbalise a response, which limits the opportunity for code-switching.
Using the data available from this study, the correlation between the scores on this test and the WISC-IV Verbal Comprehension Index was .60 for the Control Group and .69 for the Experimental Group. These coefficients are slightly lower than those documented in the test manual, a difference that may be explained by the limited sample size.

The Échelle d’Intelligence de Wechsler pour Enfants - Quatrième Édition, Version pour francophones du Canada

The Échelle d’Intelligence de Wechsler pour Enfants - Quatrième Édition, Version pour francophones du Canada (WISC-IVCDN-F; Wechsler, 2005) was developed to measure intelligence in children 6 years 0 months to 16 years 11 months. The Similitudes (Similarities), Vocabulaire (Vocabulary) and Compréhension (Comprehension) subtests were used in the present study because of their high linguistic demand and therefore high potential for code-switching, the behaviour of focus in the present study. The Similarities subtest measured word knowledge, language development, fund of knowledge, learning ability, long-term memory, and verbal concept formation in a task where the individual had to respond orally to a question asking how two words are alike. The Vocabulary subtest measured verbal reasoning and comprehension, and also involved some auditory comprehension, memory, and verbal expression; the participant had to orally define words that were presented orally. On the Comprehension subtest, a measure of verbal reasoning, conceptualization, the ability to evaluate and utilize past experience, verbal comprehension, and expression, the participant had to respond orally to questions about everyday life presented orally by the examiner (Wechsler, 2003a).
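Coefficients such as the .60 and .69 reported above are ordinary Pearson correlations between two columns of standard scores. A minimal sketch of the computation, using illustrative scores only (not data from the study):

```python
from statistics import mean, stdev

def pearson_r(xs, ys):
    """Pearson correlation between two paired lists of scores."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

# Illustrative standard scores only; these are not the study's data.
evip_scores = [95, 102, 88, 110, 99, 120, 85]
wisc_vci    = [98, 105, 90, 112, 95, 118, 92]
r = pearson_r(evip_scores, wisc_vci)
```

With a sample this small the coefficient is unstable, which is consistent with the remark above that the study's correlations may differ from the manual's because of the limited sample size.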
The WISC-IVCDN-F subtests were selected not only because they represent the only widely published measure of intelligence available in both English (WISC-IV) and French, but also because of the documented reliability and validity of the subtests administered. The reliability coefficients across all ages for the WISC-IV subtests used are as follows: Similarities .85; Vocabulary .88; and Comprehension .79 (Zhu & Weiss, 2005). Although the WISC-IV is widely used, its appropriateness for testing bilingual individuals is not documented, and no studies with French bilingual populations were available at the time of the present study. The correlations between the scores on the three subtests and the WISC-IV Verbal Comprehension Index in this study ranged between .84 and .90 for the Control Group and between .75 and .84 for the Experimental Group, corresponding to what is documented in the test manuals. Items on the WISC-IVCDN-F (Wechsler, 2005) were not just translations but adaptations; some questions varied from one language to the other, either in their content or in their order of presentation. For instance, one question appeared as Item 5 on the WISC-IVCDN-F and as Item 9 on the WISC-IV. In other instances, a question had entirely different versions in the two languages. The examiners were aware of these item differences and asked the question that was at the appropriate level in the second language. The details of these differences are provided in Appendix F.

The Woodcock-Johnson Third Edition Tests of Cognitive Abilities

The Verbal Comprehension and Retrieval Fluency tests of the Woodcock-Johnson Third Edition Tests of Cognitive Abilities (WJ III COG; Woodcock et al., 2001) were used. The WJ III COG was developed for a population aged 2 years 6 months to 89 years and over. The Verbal Comprehension test consisted of four subtests.
The Picture Vocabulary subtest measured aspects of lexical knowledge in a task where the participant identified pictures of familiar and unfamiliar objects by trying to name them orally. The Synonyms and Antonyms subtest measured aspects of vocabulary knowledge in a task where the participant, hearing a word, provided a synonym on the first subtest and an antonym on the second. The Verbal Analogies subtest measured the ability to reason using lexical knowledge, with the participant listening to three words of an analogy and completing it with an appropriate fourth word; for example: Plane is to fly as bike is to... (Response: ride). The reliability of the Verbal Comprehension test was provided by age range, with a median coefficient of .92 (McGrew & Woodcock, 2001). The correlation between the scores on this test and the WISC-IV Verbal Comprehension Index in this study was .65 for the Control Group and .60 for the Experimental Group. These correlations are somewhat lower than what is generally documented, which could be due to the limited sample used in this study. The Retrieval Fluency test measured fluency of retrieval from stored knowledge, with the participant naming examples of three given categories, each within a 1-minute time period (Mather & Woodcock, 2001); responses in a language other than English are allowed. The reliability of the Retrieval Fluency test was also provided by age range, with a median coefficient of .85 (McGrew & Woodcock, 2001). Although no official French translation of the WJ III COG was available at the time this study was conducted, the subtests that are part of the Verbal Comprehension test had been translated into French as part of the Bilingual Verbal Ability Tests (BVAT; Munoz-Sandoval et al., 1998). The directions for the Retrieval Fluency test required a French translation.
The translation of these directions was done by the researcher, who followed accepted translation methods, including forward and backward translation as well as revision by bilingual individuals familiar with the field of testing. The directions were simple and only asked the examinee to name different things.

The Peabody Picture Vocabulary Test - Third Edition

The Peabody Picture Vocabulary Test - Third Edition (PPVT-III; Dunn & Dunn, 1997) was selected as a measure of English receptive language given its strong psychometric properties and widespread use in the research literature. The test was developed for individuals 2 years 6 months to 90 years and over. As with the ÉVIP, the participant was presented with four pictures and a word, and then had to point to or show the picture that matched the spoken word. The PPVT-III technical manual cites criterion validity correlations with measures of verbal ability between .82 and .92 (WISC-III VIQ) and between .76 and .91 (Kaufman Adolescent and Adult Intelligence Test: KAIT Crystallized IQ) (Dunn & Dunn, 1997), indicating that the PPVT-III and these other tests measure closely related skills. The correlation between the scores on this test and the WISC-IV Verbal Comprehension Index in this study was .65 for the Control Group and .48 for the Experimental Group. These correlations, while low, were comparable to those obtained on the other tests and subtests administered in this study.

Language background questionnaire

A language background questionnaire was developed to gather information regarding the participants' level of exposure to the languages used in the tests. As noted by Marian and colleagues (2007), variables related to a child's language history tend to be better predictors of his or her later performance in a second language.
The emphasis was therefore placed on language history rather than on language dominance, which has not been found to be the primary predictor in second language acquisition (Flege, Frieda, Walley, & Randazza, 1998); the questionnaire also served as a supplemental source of information on the participants' bilingualism. As noted in Table 3.1, children, for the most part, shared their parents' language history. That encompassed both the choice of a first language and the choice of a language of integration. For example, the language of integration could be English, as it is the language chosen by a majority of immigrants arriving in British Columbia. This questionnaire, presented in Appendix D, was inspired by and developed from a variety of language background questionnaires available in the literature. The level of exposure included elements such as the child's first language, the parents' and grandparents' languages, the languages spoken at home, the number of years spent in a specific school program, and factors such as the student's most commonly used language. The questionnaire was used to determine the situations and sources of exposure to various languages. Scoring was based on exposure to French, English, or other languages named by the participants. One point was given for each response where a first language had to be identified: every time a language was identified as a first language, a point was added for that language. For example, if a parent responded that their child and both parents spoke French, three points were allotted to French; if the four grandparents all spoke English, four points were allotted to English, and so on. The scores were intended as indicators of the frequency of exposure to a particular language, with each person contributing the equivalent of one point. The questionnaire was thus used to allocate a score indicating the number of sources of language use and to gather data on the various sources of exposure to French and English.
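The point-per-person tally described above amounts to counting, for each language, how many people in the child's family were reported as speaking it. A minimal sketch, with a hypothetical family (the person labels are illustrative, not the questionnaire's actual items):

```python
from collections import Counter

def exposure_scores(responses):
    """Tally one exposure point per person reported as speaking a language.

    `responses` maps each person (child, parent, grandparent...) to the
    first language named for that person; the score for a language is
    simply the number of people reported as speaking it.
    """
    return Counter(responses.values())

# Hypothetical family matching the worked example above: the child and
# both parents speak French, and all four grandparents speak English.
family = {
    "child": "French",
    "mother": "French",
    "father": "French",
    "maternal_grandmother": "English",
    "maternal_grandfather": "English",
    "paternal_grandmother": "English",
    "paternal_grandfather": "English",
}
scores = exposure_scores(family)
# French receives 3 points and English 4 points, as in the example above.
```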
For instance, a parent who indicated that their child, one parent, and two grandparents spoke English obtained a score of 4 for exposure to English.

Procedures

When they are placed in a testing situation where their verbal abilities are measured, children will often choose to switch languages, either to present their thinking in a better light or to show the other person that they do not understand what is presented to them (Hughes et al., 2006). In response to this language behaviour, a procedure was developed to recognise and document code-switching. In addition, the examiner was allowed to code-switch in order to respect the actual language movements initiated by the examinees and to acknowledge formally the need to use two languages in a testing situation where oral responses are sought. The procedure consisted of presenting the test items in French while allowing participants to respond in either French or English as they saw fit, based on their need to use either language to provide a satisfying response. The use of a second language to respond to a test item is already sanctioned on the WJ III (Woodcock et al., 2001), where responses on the English version of the test are accepted in a second language. Responses in a second language were accepted here because they are part of that test's standard procedure; they were, however, tallied separately in order to identify the code-switches made by all participants. In this study, the code-switching procedure also included code-switching on the examiner's part, an element not found in commonly used tests. A two-way scoring system, with and without code-switching, was developed and is described in the scoring section of this chapter. This system allowed score differences to be acknowledged by documenting the results with and without code-switching. The scoring was a part of the procedure.
The participants in the Control Group were administered all the tests in French, with the exception of the PPVT-III, which was administered in English. Some participants in the Control Group provided responses in English although the tests and subtests were administered in French and the examiner did not code-switch. These responses were accepted and scored as responses with code-switching; no points were allotted for them in the scoring without code-switching. All the participants in the Experimental Group had the code-switching procedure explained to them by the examiner (see Appendix G - Guide d'évaluation). The following example of a code-switch from the question to the response was presented to the participants: Question: "What is a dog?"; Response: "C'est an animal." Other examples were provided to participants during the description of the procedure for "within-response" code-switches. Participants were also told that the examiner might code-switch by asking questions in English. Participants were then tested in French with the possibility of code-switching, as described below. Both the participants and the examiner in the Experimental Group were permitted to code-switch; in the case of the examiner, code-switching was governed by the two conditions presented in the following section.

Code-switching procedure

The instructions for the code-switching procedure were presented to the participants in the Experimental Group to describe how they could move from French to English during testing. Participants were also informed that the examiner would switch between the two languages when needed. Only participants in the Experimental Group were allowed to code-switch at any point during the testing; the examiner did not code-switch with the participants in the Control Group.
While no instruction on code-switching was given to participants in the Control Group, they were permitted to provide responses in English. The examiners in the Experimental Group were allowed to code-switch under two conditions. In the first condition, the error condition, the examiner switched from French to English when the participant did not provide a correct response: the examiner asked the equivalent question in English, the response was scored, and testing resumed in French with the next item unless the second condition was met. In the second condition, the usage condition, the examiner switched from French to English if the participant provided responses in English on two items in a row; if the participant responded in French to two questions in a row, the examiner asked the next question in French.

Scoring

In order to score the items and document the code-switching behaviours, test records were developed from the original test records and adapted to include a column to tally the number of code-switches made by both the participants and the examiner. The scoring itself consisted of scoring items with and without code-switching. For the scoring without code-switching, only correct responses provided in French were given points. For the scoring with code-switching, full correct responses that included correct responses in French and an appropriate use of English (including responses provided only in English) were given points. Because the purpose of the study was to measure the impact of code-switching, whether initiated by the examinee or the examiner, no distinction was made as to the origin of the code-switching: code-switching points were allotted regardless of its source. This allowed for comparisons between the scores obtained with and without code-switching. The mean number of code-switches by the examiner is reported in Table 4.3. The adapted test records are presented in Appendix G.
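The two-way tally described above can be sketched as follows. This is a simplified illustration under an assumed response representation (a pair of flags per item), not the adapted test record itself:

```python
def score_two_ways(responses):
    """Score a protocol with and without code-switching.

    Each response is a pair (correct, used_english): `correct` marks a
    response judged correct, and `used_english` marks any use of English,
    including responses given entirely in English. Without code-switching,
    only correct all-French responses earn a point; with code-switching,
    any correct response earns a point regardless of language.
    """
    without_cs = sum(1 for correct, used_english in responses
                     if correct and not used_english)
    with_cs = sum(1 for correct, _used_english in responses if correct)
    return without_cs, with_cs

# Hypothetical protocol: three correct French responses, two correct
# responses involving English, and one incorrect response.
items = [(True, False), (True, False), (True, False),
         (True, True), (True, True), (False, False)]
raw_without_cs, raw_with_cs = score_two_ways(items)
```

Note that, as in the study, no distinction is made here between examinee-initiated and examiner-initiated code-switching: any correct use of English counts toward the with-code-switching score.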
The ÉVIP, PPVT-III, and WISC-IVCDN-F were hand-scored using the procedures outlined in the administration and scoring manuals. The final test scores for the WJ III COG were computed by the researcher using the WJ III Compuscore and Profiles Program, Version 2.0 (Schrank & Woodcock, 2003). Two raters scored all tests and subtests in order to maximise the reliability of results: one individual did the initial scoring and a second examiner then rescored all tests. In cases of disagreement, a consensus was sought and reached, using the information available in the test manuals as the basis for discussion.

Test administration

The tests were administered in the following order to all students: ÉVIP; WISC-IVCDN-F Similarities, Vocabulary and Comprehension subtests; WJ III COG Verbal Comprehension cluster and Retrieval Fluency tests; and PPVT-III. The choice of tests and the order of administration were established to maximise the opportunities for code-switching and to avoid examinee fatigue. The PPVT-III was administered last as it was the only test administered in English only. More specifically, the ÉVIP and PPVT-III presented situations where the participants chose responses among visual choices, whereas the WISC-IV subtests had participants respond orally to orally presented questions, with the words also provided visually. The WJ III COG tests offered a combination of subtests, some with items presented orally and others with items presented both visually and orally. Participants in the Experimental Group received the description of the code-switching procedure, were trained in it, and were provided with examples of the various possibilities to code-switch, either within their response to one item or by moving from one language to the other from response to response.
Code-switching was presented as one possible way to respond to a test item but was not described as the optimal way, nor was it reinforced when it occurred. Participants in the Experimental Group were told that they could respond in three different ways: in French only, in English only, or in a combination of the two languages. The term code-switching was not used in the description of possible responses provided to the examinees.

Examiners' training

Three examiners were trained and conducted the testing for the study, with the author testing 100 of the 105 participants and the two other examiners testing the remaining five. All examiners were bilingual (with French as their first language), had prior training in standardized test administration, and were trained by the author in the study procedures. Because of constraints outside the author's control, the two examiners other than the author were able to test only five examinees before leaving the study; one of them contributed to the scoring of the tests. Two further attempts were made to recruit new examiners. These attempts were not successful, as potential examiners needed to be fluent in both French and English and familiar with standardized test administration. As a result, the author and co-investigator tested the majority (n = 100) of participants. The training was done in three stages. First, direct teaching and practice testing on the instruments used in the study were conducted with the investigator. In order to ensure the quality of the administration, the co-investigator observed and commented on practice sessions amongst examiners and responded to questions as they tested their first participants. Second, the examiners were trained in how to use the adapted test record, score the tests, and tally the number of code-switches. To ensure the examiners' familiarity with the scoring procedures, they were taught and trained using the same method as for the tests.
A wide array of situations was presented to the examiners to expose them to the diversity of scoring situations they would encounter throughout the testing. The examiners were then supervised doing practice administrations. One examiner submitted a video showing her conducting testing; the video was reviewed by the study author and co-investigator. The other examiner went through a step-by-step analysis of the administration in order to ensure that the rules of administration were respected. The final stage of training consisted of the investigator reviewing test protocols and discussing issues and difficulties, if any were encountered. There were regular contacts between the investigator and the examiners between testing sessions. It was made clear to the examiners that their role was one of data collection and that the results would only be used as part of the study. The examiners only conducted administrations and calculated the raw scores; these were then submitted to the author, who completed the scoring process and converted the raw scores to standard scores.

Data analysis

The primary purpose of the present study was to compare test scores obtained on measures of cognitive abilities in order to measure the impact of code-switching on these scores, and to examine code-switching patterns in order to assess the frequency of the language behaviour and the need to refer to a second language when responding to test items. The data analysis involved comparisons of the test results obtained by the Control and Experimental Groups as well as comparisons of the frequency of use of code-switching.

Research question one

The quantitative dependent variables were the Similarities, Vocabulary and Comprehension subtest scores for the WISC-IVCDN-F and the test scores for the WJ III COG Verbal Comprehension and Retrieval Fluency tests. A multivariate analysis of covariance (MANCOVA) was used to make the comparisons, with group and sex as the independent variables.
In order to capture the group differences on all the WISC-IV subtests and WJ III COG tests that served as dependent variables, a MANCOVA was conducted using sex as a fixed factor and scores on the measures of receptive language proficiency (ÉVIP and PPVT-III) as covariates. Like the multivariate analysis of variance (MANOVA), the MANCOVA examines the main effects and interaction effects of categorical variables on multiple dependent interval variables (here, test scores), with the covariates serving as control variables for the independent factors and reducing the error term in the model (Tabachnick & Fidell, 2001). Proceeding this way avoids the repetition of multiple comparisons. The groups were compared on these measures to determine the impact of code-switching.

Research question two

The data analysis for the second research question consisted of tallying the frequency of code-switching behaviours, establishing means and frequency patterns by subtest, and comparing the two groups using a chi-square test. The variables were the numbers of code-switching behaviours in each group. Comparisons were made between two sets of frequency scores: those from the combination of all tests and subtests except Retrieval Fluency, and those from Retrieval Fluency. The data from the WJ III COG Retrieval Fluency test were tallied separately for two reasons: on this test, participants could choose to respond in either language "at once," having 1 minute to provide a list of words, and there was a very high degree of movement between French and English responses. A separate count for this test prevented a distorted picture of the comparisons that would result from this one test weighing heavier in the frequency count than all the others combined.
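A group comparison of code-switch frequencies of this kind can be sketched with a chi-square test of independence; the counts below are purely illustrative, not the study's data, and Retrieval Fluency (RF) is kept in its own column as described above:

```python
from scipy.stats import chi2_contingency

# Hypothetical code-switch counts per group:
# column 0 = all tests and subtests except Retrieval Fluency, combined;
# column 1 = Retrieval Fluency, tallied separately.
observed = [
    [12, 40],   # Control Group
    [85, 110],  # Experimental Group
]
chi2, p, dof, expected = chi2_contingency(observed)
```

Keeping Retrieval Fluency in a separate column prevents its large counts from dominating the comparison, which is the distortion the separate tally was designed to avoid.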
More specifically, some participants responded only in French, others only in English, and others used both languages. Some participants responded only in English on the Retrieval Fluency test, thus giving the impression that they code-switched throughout their response to this test. As a result, they obtained a score of 0 when the test was scored for French responses, that is, prior to code-switching. This is discussed further in Chapter Four, as it had an impact on the mean scores for that test in both groups.

Research question three

The results from the analysis of frequency of use of code-switching were compared with each other to investigate the source and potential frequency of exposure to French, English or any other language. The scores used to compare the frequency of exposure were the first language used by the participant, by his or her parents, and by his or her grandparents.

Chapter Four

Results

This chapter summarizes the results of the study as they correspond to each of the research questions.

Research Question One

Are there significant differences between results on tests and subtests measuring selected cognitive abilities obtained by participants in an Experimental Group, in which both they and the examiner were allowed to code-switch, and participants in a Control Group, who were tested in French following the standardized procedures described in the test manuals? Comparisons were made between results on the Similarities, Vocabulary and Comprehension subtests of the WISC-IVCDN-F, the WISC-IVCDN-F Verbal Comprehension Index, and the WJ III COG Verbal Comprehension and Retrieval Fluency tests. Comparisons were made between the results obtained by the Control Group and those produced by the Experimental Group, where points were allotted when the examinee code-switched.
Table 4.1 shows the means and standard deviations for the Control and Experimental Groups and comparisons of test scores between the two groups. There were statistically significant differences favouring the Experimental Group on the three WISC-IV subtest scores, the WISC-IV Verbal Comprehension Index score, and the WJ III COG Verbal Comprehension and Retrieval Fluency test scores. Taking into consideration that effect sizes are small at d = .2, medium at d = .5, and large at d = .8 (Cohen, 1988), the effect sizes presented in Table 4.1, ranging from .46 to .80, are medium to large.

Table 4.1
Test and subtest means and standard deviations for the Control and Experimental Groups, with mean differences between the scores

                                     Control (n = 50)    Experimental (n = 55)
Tests/Subtests                       Mean      SD        Mean      SD        Mean diff.  Sig. (2-tailed)  Effect size
ÉVIP                                 99.32     15.78     105.84    11.98     -6.52       .180
PPVT-III                             99.58     18.02     106.47    18.17     -6.89       .056
WISC-IV Similarities                 10.50     2.82      10.47     3.41      .03         .970
WISC-IV Similarities CS              10.60     2.87      12.82     2.92      -2.22       .001*            .76
WISC-IV Vocabulary                   11.38     3.21      10.78     3.67      .60         .383
WISC-IV Vocabulary CS                11.54     3.18      13.47     3.33      -1.93       .003*            .59
WISC-IV Comprehension                11.58     2.95      11.03     3.04      .54         .360
WISC-IV Comprehension CS             11.88     3.02      14.11     3.35      -2.22       .001*            .70
WISC-IV Verbal Comprehension         107.26    15.01     105.05    15.35     2.02        .504
WISC-IV Verbal Comprehension CS      108.34    15.23     121.27    17.27     -12.93      .001*            .80
WJ III COG Verbal Comprehension      85.92     12.01     82.18     12.16     3.74        .120
WJ III COG Verbal Comprehension CS   91.06     10.77     97.36     23.00     -6.30       .083
WJ III COG Retrieval Fluency         90.18     14.45     80.11     29.32     10.07       .032*            .46
WJ III COG Retrieval Fluency CS      97.18     10.60     100.71    13.55     -3.53       .147

Note. CS = code-switching. *p < .05.

Due to the manner in which the Retrieval Fluency test was scored, in order to account for the frequency of code-switching, there were greater differences between these scores than would normally be observed in situations where
code-switching is not a factor. This particular way of scoring was needed in order to capture the variety of ways in which the examinee could respond and, principally, to distinguish respondents who used only one language from those who used two. Participants responded in three ways on this test. First, some participants responded only in French, that is, with no code-switching. Second, some responded in both French and English, thereby receiving code-switching points for the responses provided in English. Third, some responded only in English, with no points allocated for French responses; because no French responses were given, all the points were allotted as code-switching points. This created an unusual situation: on many occasions, participants, especially in the Experimental Group, responded only in English, thus obtaining scores based solely on code-switching. This scoring method was necessary to allow the clear identification of code-switching behaviour on a test that consisted of three questions, each requiring a list of words as a response. Giving zero points for French responses on this particular test appeared to have an impact on the overall mean of the scores on this test when only French scores were considered. No statistically significant differences were observed between the two groups on any of the test and subtest scores without code-switching, both groups functioning at the same level on the WISC-IV and WJ III COG when scores were tallied following the standard procedure. At the same time, the Experimental Group produced higher scores on six of the seven comparisons when code-switching was permitted, again supporting the conclusion that code-switching had a greater impact when it was allowed and presented to examinees as an option.
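The effect sizes in Table 4.1 follow Cohen's (1988) conventions. Assuming the common pooled-standard-deviation form of d (the exact formula is not stated in the text), the Similarities-with-code-switching row (means 10.60 vs. 12.82, SDs 2.87 and 2.92, n = 50 and 55) can be checked as follows:

```python
from math import sqrt

def cohens_d(m1, s1, n1, m2, s2, n2):
    """Cohen's d using the pooled standard deviation of two groups.

    This is the standard pooled-SD form; the study's exact formula is
    an assumption here.
    """
    pooled_sd = sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return abs(m1 - m2) / pooled_sd

# WISC-IV Similarities with code-switching, values taken from Table 4.1.
# Yields approximately .77; Table 4.1 reports .76, so the study's
# rounding or exact variant may differ slightly.
d = cohens_d(10.60, 2.87, 50, 12.82, 2.92, 55)
```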
A multivariate analysis of covariance was conducted to compare the groups on all tests and subtests, using sex as a fixed factor and scores on the ÉVIP and PPVT-III as covariates, to determine whether there were differences between boys and girls, the Experimental Group having more girls than boys. The ÉVIP and PPVT-III were used as covariates because they provided a measure of receptive vocabulary relevant to the measurement of bilingualism. The results, presented in Table 4.2, revealed significant group differences between the Control and Experimental Groups on Similarities, Similarities with CS, Vocabulary, Comprehension, Comprehension with CS, WISC-IV Verbal Comprehension, WISC-IV Verbal Comprehension with CS, WJ III COG Verbal Comprehension and WJ III COG Retrieval Fluency, indicating that the Experimental Group obtained significantly higher mean scores on these tests and subtests. The significant F results ranged from F(1, 1) = 3.85 on the WISC-IV Similarities without code-switching to F(1, 1) = 15.90 on the WJ III COG Verbal Comprehension without code-switching, with Pillai's Trace = .516, Wilks' Lambda = .484, F(13, 87) = 7.137, p = .001. Group by sex comparisons were made to compare the Experimental and Control Groups, which did not have the same number of participants. No significant differences were found, as can be observed in Table 4.3: Pillai's Trace = .172, Wilks' Lambda = .828, F(13, 87) = 1.388, p = .181. The results comparing participants on all the tests and subtests under the various conditions are presented in Tables 4.4 and 4.5. They show that results between participants varied significantly when the comparisons considered their performance on the measures of receptive language; again, participants in the Experimental Group obtained higher mean scores. The following series of comparisons targeted first language and degree of exposure to either English or French.
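As a small consistency check, the multivariate F values reported above can be recovered from Wilks' Lambda: for a between-subjects effect with a single hypothesis degree of freedom, the exact transform is F = ((1 - Lambda) / Lambda) * (df_error / df_hypothesis). A sketch using the reported values:

```python
def f_from_wilks(wilks_lambda, df_hyp, df_err):
    """Exact F transform of Wilks' Lambda for a one-df effect."""
    return ((1 - wilks_lambda) / wilks_lambda) * (df_err / df_hyp)

# Group effect: Lambda = .484 with df (13, 87), reported F = 7.137
f_group = f_from_wilks(0.484, 13, 87)
# Group-by-sex effect: Lambda = .828 with df (13, 87), reported F = 1.388
f_sex = f_from_wilks(0.828, 13, 87)
```

Both computed values agree with the reported F statistics to within rounding, which also explains why the reported Pillai's Trace and Wilks' Lambda sum to 1 in each case (a property of one-df effects).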
Participants who had English as a first language were compared with participants who had French as a first language.

Table 4.2
Group differences between the Control and Experimental Groups on WISC-IV subtests and WJ III COG tests

Source  Dependent Variable                   Type III Sum of Squares  df  Mean Square  F      Significance
group   WISC-IV Similarities                 26.70                    1   26.70        3.85   .050*
group   WISC-IV Similarities CS              30.92                    1   30.92        6.80   .011*
group   WISC-IV Vocabulary                   79.21                    1   79.21        12.87  .001*
group   WISC-IV Vocabulary CS                10.94                    1   10.94        2.65   .107
group   WISC-IV Comprehension                45.02                    1   45.02        6.77   .011*
group   WISC-IV Comprehension CS             41.56                    1   41.56        6.76   .011*
group   WISC-IV Verbal Comprehension         1563.32                  1   1563.32      13.84  .001*
group   WISC-IV Verbal Comprehension CS      1009.96                  1   1009.96      10.18  .002*
group   WJ III COG Verbal Comprehension      1338.77                  1   1338.77      15.90  .001*
group   WJ III COG Verbal Comprehension CS   31.67                    1   31.67        .12    .729
group   WJ III COG Retrieval Fluency         3125.46                  1   3125.46      6.49   .012*
group   WJ III COG Retrieval Fluency CS      42.30                    1   42.30        .32    .574

Note. CS = with code-switching; df = degrees of freedom; * p < .05

Table 4.3
Group by sex differences between the Control and Experimental Groups on WISC-IV subtests and WJ III COG tests

Source       Dependent Variable                   Type III Sum of Squares  df  Mean Square  F     Significance
group * SEX  WISC-IV Similarities                 .68                      1   .68          .098  .76
group * SEX  WISC-IV Similarities CS              .98                      1   .98          .216  .64
group * SEX  WISC-IV Vocabulary                   14.09                    1   14.09        2.29  .13
group * SEX  WISC-IV Vocabulary CS                14.93                    1   14.93        3.62  .06
group * SEX  WISC-IV Comprehension                6.96                     1   6.96         1.05  .31
group * SEX  WISC-IV Comprehension CS             .610                     1   .610         .10   .75
group * SEX  WISC-IV Verbal Comprehension         .64                      1   .64          .01   .94
group * SEX  WISC-IV Verbal Comprehension CS      169.92                   1   169.92       1.71  .19
group * SEX  WJ III COG Verbal Comprehension      17.61                    1   17.61        .21   .65
group * SEX  WJ III COG Verbal Comprehension CS   484.39                   1   484.39       1.85  .18
group * SEX  WJ III COG Retrieval Fluency         1684.23                  1   1684.23      3.50  .06
group * SEX  WJ III COG Retrieval Fluency CS      54.85                    1   54.85        .41   .52

Note. CS = with code-switching; df = degrees of freedom; * p < .05

Table 4.4
Differences on WISC-IV subtests and WJ III COG tests, based on ÉVIP results

Source  Dependent Variable                   Type III Sum of Squares  df  Mean Square  F     Significance
ÉVIP    WISC-IV Similarities                 96.78                    1   96.78        .098  .001*
ÉVIP    WISC-IV Similarities CS              96.67                    1   96.67        .216  .001*
ÉVIP    WISC-IV Vocabulary                   317.39                   1   317.39       2.29  .001*
ÉVIP    WISC-IV Vocabulary CS                270.67                   1   270.67       3.62  .001*
ÉVIP    WISC-IV Comprehension                55.65                    1   55.65        1.05  .005*
ÉVIP    WISC-IV Comprehension CS             34.56                    1   34.56        .10   .020*
ÉVIP    WISC-IV Verbal Comprehension         4622.59                  1   4622.59      .01   .001*
ÉVIP    WISC-IV Verbal Comprehension CS      3967.23                  1   3967.23      1.71  .001*
ÉVIP    WJ III COG Verbal Comprehension      4854.18                  1   4854.18      .21   .001*
ÉVIP    WJ III COG Verbal Comprehension CS   2295.65                  1   2295.65      1.85  .004*
ÉVIP    WJ III COG Retrieval Fluency         8605.87                  1   8605.87      3.50  .001*
ÉVIP    WJ III COG Retrieval Fluency CS      1786.67                  1   1786.67      .41   .001*

Note. CS = with code-switching; df = degrees of freedom; * p < .05

Table 4.5
Differences on WISC-IV subtests and WJ III COG tests, based on PPVT-III results

Source    Dependent Variable                   Type III Sum of Squares  df  Mean Square  F      Significance
PPVT-III  WISC-IV Similarities                 104.33                   1   104.33       15.06  .001*
PPVT-III  WISC-IV Similarities CS              157.95                   1   157.95       34.73  .001*
PPVT-III  WISC-IV Vocabulary                   72.91                    1   72.91        11.85  .001*
PPVT-III  WISC-IV Vocabulary CS                129.65                   1   129.65       31.39  .001*
PPVT-III  WISC-IV Comprehension                97.96                    1   97.96        14.73  .005*
PPVT-III  WISC-IV Comprehension CS             236.73                   1   236.73       38.50  .020*
PPVT-III  WISC-IV Verbal Comprehension         3238.98                  1   3238.98      28.68  .001*
PPVT-III  WISC-IV Verbal Comprehension CS      6362.37                  1   6362.37      64.12  .001*
PPVT-III  WJ III COG Verbal Comprehension      41.87                    1   41.87        .50    .001*
PPVT-III  WJ III COG Verbal Comprehension CS   2664.21                  1   2664.21      10.17  .482
PPVT-III  WJ III COG Retrieval Fluency         4459.35                  1   4459.35      9.26   .002*
PPVT-III  WJ III COG Retrieval Fluency CS      104.33                   1   104.33       .14    .003*

Note. CS = with code-switching; df = degrees of freedom; * p < .05

As shown in Table 4.6, none of the differences was statistically significant, showing that having French or English as a first language could not explain the overall differences noted between participants.
No significant differences based on first language were noted on any of the tests and subtests, indicating that participants who had English as a first language functioned at the same level as participants who had French as a first language. Overall, the Experimental Group scored significantly higher on all tests and subtests when they code-switched. The results can be considered both statistically significant and meaningful because they were consistent and always in one direction. Participants in the Experimental Group obtained significantly higher results than participants in the Control Group when they code-switched.

Research Question Two

The second research question was then addressed: Are there significant differences in the code-switching frequency of participants in the Experimental Group and the code-switching frequency of participants in the Control Group, taking into consideration that the examiners will only code-switch with participants in the Experimental Group? The frequency (mean number of times) with which both groups code-switched is presented in Table 4.7. It was hypothesized that participants in the Experimental Group would code-switch at a higher rate. As discussed in Chapter Three, due to the nature of the response process for the WJ III COG Retrieval Fluency test, code-switches were tallied separately for this test because participants sometimes chose to respond only in English.
Table 4.6
Participants' test and subtest results comparisons, based on first language (English n = 27, French n = 30)

Measure                              Mean Difference  Standard Deviation  Standard Error Mean  t      Degrees of freedom  Significance (2-tailed)
ÉVIP                                 -1.74            14.08               2.71                 -.64   26                  .53
PPVT-III                             2.48             22.21               4.27                 -1.11  26                  .57
WISC-IV Similarities                 .93              5.14                .99                  .94    26                  .36
WISC-IV Similarities CS              .59              4.32                .83                  .58    26                  .48
WISC-IV Vocabulary                   -.48             4.26                .75                  .59    26                  .56
WISC-IV Vocabulary CS                -.96             4.53                1.02                 -.71   26                  .28
WISC-IV Comprehension                .04              3.91                .75                  -.05   26                  .96
WISC-IV Comprehension CS             -1.30            5.28                1.02                 -1.28  26                  .21
WISC-IV Verbal Comprehension         2.59             21.76               4.19                 .62    26                  .54
WJ III COG Verbal Comprehension      2.11             14.61               6.62                 .63    26                  .46
WJ III COG Verbal Comprehension CS   4.15             34.38               9.16                 .14    26                  .54
WJ III COG Retrieval Fluency         1.26             47.62               3.59                 -.16   26                  .89
WJ III COG Retrieval Fluency CS      -.56             18.65               2.71                 -.64   26                  .88

Note. t = t-test; CS = code-switching; * p < .05

Due to its frequency, code-switching was tallied as part of the scoring procedure and all of the responses in English on this test were counted as one code-switch. This dramatically increased the number of code-switches and influenced the overall code-switching tally. To avoid what the researcher perceived as the artificial increase created by the code-switching on this particular test, it was decided that the code-switching that occurred on the WJ III COG Retrieval Fluency test would be tallied separately. As can be observed from the data presented in Table 4.7, the code-switching rate for the WJ III COG Retrieval Fluency test is, on its own, slightly higher than the code-switching noted on all the other subtests and tests combined, supporting the need to use the separate tally. The rate of code-switching for the Experimental Group, who received instruction on the code-switching procedure, and the Control Group, who used English without receiving any instruction on code-switching, is presented in Table 4.7.
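The t values in Table 4.6 follow directly from each mean difference and its standard error (t = mean difference / standard error). A minimal stdlib check on one row, the ÉVIP comparison, reproduces the reported value:

```python
# Sanity check of Table 4.6: a t statistic is the mean difference divided
# by the standard error of that difference. Values below are the ÉVIP row
# (mean difference = -1.74, standard error of the mean = 2.71).

mean_diff = -1.74
std_error = 2.71

t = mean_diff / std_error
print(round(t, 2))  # matches the reported t = -.64
```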
Table 4.7
Mean Number of Code-Switches by the Control and Experimental Groups

Group                  Number of code-switches             Mean   Standard Deviation
Control (n = 50)       Total excluding Retrieval Fluency   4.58   4.92
                       On Retrieval Fluency                6.62   8.04
Experimental (n = 55)  Total excluding Retrieval Fluency   17.18  9.24
                       On Retrieval Fluency                18.36  19.08
                       Examiner's code-switching           20.76  14.12

As was hypothesized, participants in the Experimental Group code-switched at a higher frequency, both on the Retrieval Fluency test and on the combination of the other tests and subtests. The differences between the frequencies of code-switching behaviours observed in the two groups are presented in Table 4.8. The first result includes all the code-switching behaviours except the ones on the Retrieval Fluency test; the frequency of code-switches on the Retrieval Fluency test was tallied separately. Participants in the Experimental Group code-switched at a significantly higher rate than participants in the Control Group, both on the Retrieval Fluency test, where frequencies were tallied separately, and on all the other tests and subtests, where code-switches were tallied together. The difference in frequency is statistically significant, with χ²(29, N = 105) = 57.2, p = .001 on the combination of test and subtest scores and χ²(34, N = 105) = 91.1, p = .001 on the Retrieval Fluency test. Eleven participants in the Control Group code-switched only once or not at all, compared to only two in the Experimental Group. On the other hand, the highest number of code-switches by a participant in the Control Group was 17. By comparison, the highest frequency in the Experimental Group was 40 code-switches, with 24 participants code-switching 17 or more times. The mean number of code-switches for the examiners with the Experimental Group was 21, with frequencies ranging from 5 to 55. The examiners did not code-switch with the Control Group.
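The chi-square comparison used above can be sketched with a stdlib-only computation. The 2×2 counts below are hypothetical, loosely inspired by the frequencies reported in the text (11 Control versus 2 Experimental participants code-switched at most once); they are not the study's actual contingency table:

```python
# A stdlib-only sketch of a chi-square test of independence on
# code-switching frequency. Counts are hypothetical illustrations only.

observed = [
    [11, 39],  # Control: at most one code-switch / two or more
    [2, 53],   # Experimental: at most one code-switch / two or more
]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand_total = sum(row_totals)

# Chi-square statistic: sum of (observed - expected)^2 / expected.
chi_sq = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        expected = row_totals[i] * col_totals[j] / grand_total
        chi_sq += (obs - expected) ** 2 / expected

print(round(chi_sq, 2))
# With df = 1, any value above the .05 critical value of 3.841 is significant.
print(chi_sq > 3.841)
```

The study's actual analysis used many more frequency categories (df = 29 and df = 34), but the computation of the statistic is the same.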
Table 4.8
Differences in the frequency of code-switching by participants in the two groups

Code-switching measure      Chi-Square  df  Asymptotic Significance
Participant code-switching  57.200¹     29  .001*
Retrieval Fluency           91.100²     34  .001*

1. 30 cells (100.0%) have expected frequencies less than 5. The minimum expected cell frequency is 3.3.
2. 35 cells (100.0%) have expected frequencies less than 5. The minimum expected cell frequency is 2.9.
* p < .01

Research Question Three

Is there a relationship between the frequency of code-switching behaviours and the degree of exposure to a second language? The factors associated with exposure to a second language, as measured by the Language Background Questionnaire, and their impact on the frequency of code-switching were examined in Research Question Three. Table 4.6, showing the comparisons between the test and subtest results obtained by participants who were said to have either English or French as a first language, is helpful in interpreting the results for this question. As was described, there were no significant differences between the results of these participants based on first language. The degrees of exposure to French, English, other languages, and combinations of two languages are presented in Table 4.9. In both groups, participants were exposed to French in 40 percent of the cases. They were exposed to a language other than French in 60 percent of the cases, showing that the Control and Experimental Groups have comparable language backgrounds and share a language history that includes exposure to French as well as exposure to other languages, keeping in mind that a substantial number of participants had neither French nor English as a first language.
Table 4.9
Degree of exposure to French, English and combinations of two languages

Group                              French     English   Other     2 Languages
Control (n = 37; 252 points)       40% (102)  31% (77)  25% (62)  4% (11)
Experimental (n = 42; 294 points)  40% (117)  28% (83)  30% (85)  3% (9)

Although the majority of students did not have French as a first language (40%), it was the language most often cited as a first language. Further, the majority of participants did not have English as a first language; however, English was the language they used when they code-switched. Overall, the majority of participants, more than 70%, lived in multilingual home environments where they were exposed to two or more languages daily. Participants in both groups appear to share a comparable level of exposure to French and English, with both groups having parents and grandparents who had French as a first language more often than English. Although their overall language backgrounds are comparable (see Table 4.9), forty-two percent of students in the Experimental Group (see Table 3.1) were said to have French as a first language, compared with thirty percent of participants in the Control Group. As was seen, it is not uncommon for individuals to code-switch in an effort to identify with the "in-group," and self-identity plays an important social function. This could, in part, explain why participants in the Experimental Group code-switched at a much higher rate. In an effort to identify the quality and type of bilingualism, the data collected through the research questions allowed for extrapolation as to the status of bilingualism, as attributed by parents and as determined by results on the tests and subtests used. The results from those tests provided information on the degree of skill in the various language areas they measured, and the information provided in the Language Background Questionnaire supplemented that analysis.
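The percentages in Table 4.9 are simple proportions of each group's exposure points. A minimal stdlib check, using the Control Group's point counts from the table:

```python
# Recomputing the Control Group's exposure percentages in Table 4.9 from
# the raw point counts: each percentage is the category's points divided
# by the group's total points (252 for the Control Group).

control = {"French": 102, "English": 77, "Other": 62, "Two languages": 11}
total = sum(control.values())  # 252 points

percentages = {lang: round(100 * pts / total) for lang, pts in control.items()}
print(total)
print(percentages)  # French comes out at about 40%, as reported
```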
Participants in both groups, taken collectively, demonstrated at least average abilities on the measures of receptive language in French and English (see ÉVIP and PPVT-III scores), and they also obtained mean scores at least within the average range on the subtests measuring Verbal Comprehension (see WISC-IV Verbal Comprehension and WJ III COG Verbal Comprehension scores). Furthermore, participants in both groups obtained higher scores when they used code-switching. Overall, the participants obtained results, and shared a language history, that would allow for a designation of symmetrical bilingualism.

Chapter Five
Discussion

In this section, the results of the study are discussed. Each research question is addressed individually, followed by a general discussion of the implications of the findings. The limitations and strengths of this study, as well as its contribution to the field of school psychology, are also presented.

Research Question One

In Research Question One, differences in test scores obtained by the Experimental and Control Groups were explored. Accounting for sex and scores obtained on measures of receptive language in both French and English, significant differences favouring the Experimental Group were noted on the majority of tests and subtests used and scored under both conditions (with and without code-switching). The mean scores of participants in the Experimental Group were higher. The direct impact of the code-switching procedure on test results was noticeable on the majority of tests and subtests of the WISC-IV and WJ III COG, supporting the hypothesis that by using their second language, participants in the Experimental Group obtained higher mean scores on measures of cognitive abilities.
Such an impact on test scores when code-switching is allowed points to the advantage afforded to bilingual individuals: they were offered a wider range of possible responses to test items, a range they could access precisely because they have two language repertoires. Larivée and Gagné (2007) point to the development of the Binet-Simon scales in the early 1900s and to the ethnic differences noted on test scores, which resulted in concerns regarding test bias, implying a negative bias. When bilingual persons in the present study were tested bilingually and allowed to use both their languages, they obtained higher scores than participants who were bilingual but not permitted to use both languages. Because the primary difference between the two groups was the fact that testing was conducted bilingually, it can be concluded that performance on measures of cognitive abilities was affected by the use of skills associated with language development and foreign language proficiency. Differences were noted when the participants were allowed to code-switch, and results showed that the groups did not differ significantly when results were compared without code-switching. The only significant difference (p > .01) between boys and girls in both groups was obtained on the WISC-IV Similarities with code-switching score. Boys and girls in both groups functioned at the same level on all the other instruments used in this study. There were no other significant differences based on sex between mean scores on any of the remaining tests and subtests within these groups, indicating that boys and girls functioned at comparable levels on the majority of tests and subtests used in this study. Further, when compared on their first language, no significant differences were found between any of the participants' test and subtest results.
This indicates that first language was not a determining factor in the differences observed between the participants. Bilingualism and code-switching have been perceived in the past as having a negative effect on language development (Hughes et al., 2006). With so many significant differences noted in the scores obtained with code-switching, the potential positive impact of bilingualism on test results is also important to consider. The results of this study were more in agreement with the proponents of the additive component of bilingualism and, as Hughes and her colleagues (2006) insist, "code-switching […] reflects an intellectual advantage to many students" (p. 18). This advantage should be reflected in test scores.

Research Question Two

The second research question was used to examine code-switching frequency. As hypothesized, significant differences in code-switching frequency between the participants in the Experimental Group and those in the Control Group were found. Participants in the Experimental Group code-switched at a significantly higher frequency on all tests and subtests used in this study. Such differences in the frequency of code-switching behaviours help explain how participants in the Experimental Group benefited more from code-switching than participants in the Control Group. That is not to say that the more the students code-switched, the greater the score difference; such a claim would imply that a code-switch automatically amounts to an appropriate response on an item. A more specific interpretation of these findings is not clear at the present time. One hypothesis as to why participants having French as a first language code-switched more frequently could be that participants in the study were students registered in a Francophone school system but living in a very predominantly Anglophone environment.
The pressure to switch to English, as anecdotally observed in the school context, could partially explain why these Francophone students more readily chose to use English to respond to test items presented in French. This need for self-identity and group affiliation has been identified previously (see Martinovic-Zic, 1998, and Myers-Scotton, 2000). Beyond code-switching being presented as an option, it is difficult to determine the reasons behind the code-switching frequency observed with the Experimental Group, in part because code-switching behaviours varied in both frequency and quality. Francis (2003), in a study of bilingual children and adults, found that code-switching rates varied significantly, ranging from non-existent in adults to surpassing in frequency that of the youngest children in his study.

Research Question Three

In Research Question Three, the relationship between the frequency of code-switching behaviours and the degree of exposure to, or experience with, a second language was explored. The results indicate that both groups demonstrated a comparable level of experience in French and similar levels of experience in languages other than French. This means that although significant differences were noted in the frequency of code-switching, it was not possible to isolate exposure to other languages as a determining factor of code-switching in the present study, although having French as a first language seems to have been a determining factor for participants in the Experimental Group. Considering that language history has a greater impact on the ease with which a second language is learned than actual exposure to the language does (Marian et al., 2007), it is understandable that, across groups, participants obtained similar results on the measures of receptive language, the ÉVIP and PPVT-III.
Participants in the Control and Experimental Groups obtained comparable results across the two tests used as a measure of their receptive language, implying that the students in the study presented as balanced, or symmetrical, bilinguals. Therefore, because the participants generally functioned at the same language level, it was not possible to isolate a difference in language proficiency that would have allowed an understanding of the relationship between language proficiency and frequency of code-switching.

Key Findings

Bilingual children often choose to use a second language in the testing situation when verbal skills are measured by tests and subtests that have a high language load. The findings of the present study suggest that when students were given the opportunity to code-switch, they did so. The choice or need to code-switch was not necessarily associated with the degree of exposure to the second language, nor was it a reflection of proficiency in the first language itself. Allowing the use of a second language in the testing situation for persons who are bilingual had a meaningful impact on the results they obtained on the various tests and subtests measuring their cognitive abilities in the present study. Observing such significant differences is important, as it provides an example of the role bilingual testing could play in the assessment of bilingual individuals. The difference noted when participants were allowed to use their second language to respond to test items brings back the issue of test fairness. Larivée and Gagné (2007) found that although numerous studies report no cultural bias in widely used tests of cognitive abilities, the belief in such bias remains a commonly observed misconception. The bias might not lie with the tests in general or with the items more specifically.
The bias might instead lie with a testing procedure that does not allow bilingual individuals to use all of their verbal abilities, which are expressed in more than one language, to respond to a test item that is not itself biased. The bias might also lie in the test administration and in how individuals react differently to the testing situation based on their linguistic and cultural background. The fairness issue can therefore be raised in this context.

Limitations of the Study

The validity of test results is affected when those results are influenced by factors other than the actual cognitive functioning of the examinee (Bainter & Tollefson, 2003). In the present study, the standardised procedure was changed, which may have affected the results. Adding the code-switching procedure, and therefore moving away from the standardised procedure, means that the validity of the results was affected, which constitutes a limitation of the study. Another aspect of validity that may have been affected is construct validity. For example, while it might be easy to agree that aspects of long-term memory and verbal comprehension are measured by the Vocabulary subtest of the WISC-IV, whether in French or English, it is not so clear, especially with the more complex items, whether elements of language development are not taking over from long-term memory when the last few items are presented to an average child who might not have had the opportunity to store complex words in long-term memory in both languages. Allowing the use of the second language may mean that the item or subtest is no longer measuring the same construct. Although responses are accepted in another language on the WJ III COG, no data are reported in the technical manual regarding the impact of this practice, and therefore no answers are provided as to its impact on construct validity. Translating tests is a common solution to the problem of responses provided in a language other than English.
However, this practice raises concerns about construct validity, as there are questions as to the portability of concepts from one language to the other (Hughes et al., 2006). Because items on most tests are simply translated, it is reasonable to believe that certain items do not measure the same construct in a second language. Also, because the code-switching procedure was used only once and with a limited sample, issues of concurrent validity are present. Questions remain as to the comparability of the results that would be obtained if the BVAT (Munoz-Sandoval et al., 1998) were used. Until the code-switching procedure can be used with other tests, its concurrent validity remains to be fully established. Other limitations of the study are the number of participants and a possible examiner effect. With only 105 participants, the power of the statistical analyses, and of the conclusions drawn from them, is limited. In addition, because the study utilized a sample of convenience, a bias may be present. Participants were not matched with census data, and although the limited demographic information available on participants indicated that, as a group, they appeared representative of the school district's population, some questions remain as to how representative they were of the general population of French bilingual students. Because the majority of participants were tested by the author and co-investigator in this study, it is possible that an examiner effect may have taken place. While efforts were made to counter an examiner effect by recruiting and training more examiners, after two recruitment periods no new examiners could be recruited. The manner in which the scoring of the Retrieval Fluency test was conducted using the code-switching procedure is also a limitation.
It added confusion when comparisons were made, because some examinees artificially obtained the lowest available score when they responded only in English, as they were given no points for these responses without code-switching, thereby distorting some of the comparisons.

Strengths of the Study

This study is a first step in the process of both gathering data from examinees who were presented with an original bilingual procedure and actually testing them using a method that recognised a mode of language expression that includes code-switching. This was done by testing children in a dynamically bilingual fashion, that is, in a way that provided an opportunity to switch between languages as needed. Bainter and Tollefson (2003) have identified concerns with the testing of cognitive abilities in bilingual individuals, some of which are associated with test translation (AERA et al., 1999), psychologists' training (Ochoa et al., 1997) and the use of English-language instruments. In the current study, testing allowed for responses in two languages and was conducted by a bilingual examiner who used adapted versions of widely accepted measures. Doing so was a positive response to what Bainter and Tollefson (2003) found to be the most acceptable practice for the testing of language minority students (i.e., those who do not have English as a first and dominant language).

Contributions to the Field of School Psychology

Code-switching is common among persons who are bilingual (Hughes et al., 2006). By systematically acknowledging code-switching in the assessment process, and by recognising that it is not a reflection of a lack of vocabulary or of diminished language skills but could instead be a behaviour associated with higher ability (Hughes et al., 2006), examiners in this study had the opportunity to experience the significant impact it had on examinees' performance.
The use of a procedure that recognised an almost universal bilingual behaviour, and the measurement of the impact it had on test results, is a significant contribution to both practitioners and researchers in school psychology. By using a bilingual test administration, this study sends an important message to the field of test development and applied testing. Code-switching is a common and recurrent behaviour amongst bilinguals (Hughes et al., 2006), and it should be acknowledged throughout the test development, test administration, test scoring and test interpretation process. Bainter and Tollefson (2003) also found that the majority of the psychologists they surveyed indicated that testing in English when the participant's dominant language was not English was not an acceptable practice. It is an important contribution to school psychology, as well as a strength of this study, to have used tests and subtests in two languages that were strong, if not dominant (see ÉVIP and PPVT-III scores), for the participants in both groups. Through this study, researchers went beyond the notion of language dominance to encompass language skills as an indivisible aggregate of cognitive abilities. Narrow abilities identified by Carroll (1993), such as Language Development, Foreign Language Proficiency and Foreign Language Aptitude, were all considered. As the various WISC-IV and WJ III COG tests and subtests used are known to measure more than language proficiency, the opportunity to document significantly higher results when code-switching was allowed also became an opportunity to document how a specific language ability (bilingualism) can affect how bilingual individuals' overall cognitive profiles are perceived or measured.
Implications and Directions for Future Research

Systematically prompting students to code-switch, by introducing them to the code-switching procedure and by having the examiner code-switch under specific circumstances, has significant implications for test developers. By adding this element to the standardisation process when bilingual individuals are tested, another level of screening and analysis is included. Test developers would have to account for bilingualism as a screening factor when they establish norms, the same way sex, ethnic background and parental education are now considered. Furthermore, allowing code-switching could have a significant impact on test results, to the point where norms would change as averages varied. Abedi (2006) reminds us that the inclusion of English language learners in norm development is now a policy mandated by Congress in the United States for "reliable, valid and fair assessments for all" (p. 2282). Moreover, Figueroa and Newsome (2006) recommend that, for non-discriminatory assessment to take place, guidelines issued by professional organisations should at a minimum be respected, which would include important consideration of the fact that bilinguals are a special group within the general population. Further research on the appropriateness of such an inclusion in the standardization process, as well as on the minimum respect of rules that have been in place for more than 25 years (Figueroa et al., 2006), is necessary to determine whether the impact of code-switching on test results, and the conclusions derived from them, mandates such a radical move towards bilingual testing.
Further investigation is needed to examine how providing a bilingual testing procedure that moves away from what is currently available with the BVAT (see Munoz-Sandoval et al., 1998) really affects the validity of the results, especially given that this procedure would mean that examinees and examiners could move from one language to the other at any point, as opposed to testing in one language and then testing in the other. Although the difference in procedure appears subtle, the implications are important and should be studied. Other related areas of research should also be explored in the future. One area is the frequency rate of code-switching. Given that some students code-switched ineffectively and with no benefit, future research could look at the frequency rate and the rate of appropriate and inappropriate responses when code-switching is used. A second area is related to validity. The number of participants limited the extent of the analysis. By increasing the number of participants in both groups, analyses such as the consideration of differential item functioning could be conducted. Finally, given the issues of second language acquisition and its link to cognitive abilities, researchers in the fields of linguistics and second language acquisition should consider these results and explore the possible impact the learning of a second language has on intelligence, and vice versa. By allowing individuals to use their two languages on tests measuring cognitive abilities closely linked to language skills, this study raised an important factor of test validity. There is a need to consider bilingualism as a factor in the normalisation process of tests of cognitive abilities. The question of the actual impact on test scores of the breach of standardised procedure, versus the use of code-switching, should be asked.
Although it might not be possible to separate the two, recognition of code-switching as a behaviour that increases test scores is a meaningful finding. Furthermore, because the use of a second language is accepted on tests such as the WJ III COG and WISC-IV, consideration should be given to presenting this option to examinees, even if examiners do not use the other language. This in itself would become part of the new standardised procedure and would allow the validity issue to be addressed in future versions of the tests. Extending the reflection to the policy level for school jurisdictions, these results show that the code-switching procedure could become a mandatory element of every testing procedure that involves bilingual students. This would be one way to acknowledge the cultural and linguistic particularities that these students express in the manner in which they respond to test items.

Summary

In this study conducted with bilingual students, results show that the possibility of using both languages to respond to test and subtest items on measures of cognitive abilities had a significant impact on those results. Mean results with code-switching were higher, and participants in the Experimental Group, who benefited from the code-switching procedure, obtained higher mean results than participants in the Control Group. Introducing such a novel procedure allowed for the recognition of a bilingual language behaviour in the testing context and reinforced the notion that bilingual individuals as a group tend to obtain higher results on measures of cognitive abilities, what is sometimes referred to as the bilingual advantage.
Despite some limitations of the study, which include a limited sample and challenges to the validity of the instruments used, contributions were made to the field of school psychology: the need to consider bilingualism as a factor in norms development, the need to consider code-switching as an acceptable behaviour in the testing process, and the impact of using a second language to tackle test items were all identified. From this study, a variety of future research directions should be considered; offering more code-switching conditions and further exploring the impact of code-switching are only two of them. Bilingual individuals placed in an environment where their two languages can be used may choose to do so or not, based on their conscious or unconscious interpretation of the situation, that is, a testing situation where examiners have demands, ask a variety of increasingly difficult questions, and work on establishing rapport on the basis of their need to have an accurate portrait of a child's cognitive functioning. This would be especially appropriate in a testing situation where examinees' language skills are directly called upon and measured, and where they can fully express the range of their cognitive abilities, which are in great part assessed through a linguistic channel.

References

Abedi, J. (2006). Psychometric issues in the ELL assessment and special education eligibility. Teachers College Record, 108, 2282-2303.
American Psychological Association (APA), American Educational Research Association (AERA), & National Council on Measurement in Education (NCME). (1999). Standards for educational and psychological testing. Washington, DC: American Educational Research Association.
Andreou, G., & Karapetsas, A. (2004). Verbal abilities in low and high proficient bilinguals. Journal of Psycholinguistic Research, 33, 357-364.
Ascher, C. (1990). Assessing bilingual students for placement and instruction. ERIC/CUE Digest No. 65.
Auer, J.C.P. (2000).
A conversation analytic approach to code-switching and transfer. In L. Wei (Ed.), The bilingualism reader (pp. 166-187). London and New York: Routledge.
Aukerman, M. (2007). A culpable CALP: Rethinking the conversational/academic language proficiency distinction in early literacy instruction. The Reading Teacher, 60, 626-635.
Baetens, B.H. (2003). Who's afraid of bilingualism. In J.M. Dewaele, A. Housen, & L. Wei (Eds.), Bilingualism: Beyond basic principles (pp. 10-27). Clevedon, UK: Multilingual Matters.
Bainter, T.R., & Tollefson, N. (2003). Intellectual assessment of language minority students: What do school psychologists believe are acceptable practices? Psychology in the Schools, 40, 599-603.
Baker, C. (2001). Foundations of bilingual education and bilingualism (2nd ed.). Clevedon, UK: Multilingual Matters.
Baker, C., & Prys Jones, S. (1998). Encyclopedia of bilingualism and bilingual education. Clevedon, UK: Multilingual Matters.
Baral, D. (1988). The theoretical framework of Jim Cummins: A review and critique. In L.M. Malave (Ed.), Theory, research and applications: Selected papers from the annual meeting of the National Association of Bilingual Education (pp. 1-20). Fall River, MA: National Dissemination Center.
Ben-Zeev, S. (1977). The influence of bilingualism on cognitive strategy and cognitive development. Child Development, 48, 1009-1018.
Bialystok, E. (1991). Language processing in bilingual children. New York: Cambridge University Press.
Bournot-Trites, M., & Reeder, K. (2001). Interdependence revisited: Mathematics achievement in an intensified French immersion program. The Canadian Modern Language Review/La Revue canadienne des langues vivantes, 58, 27-43.
Carroll, J.B. (1993). Human cognitive abilities: A survey of factor-analytic studies. New York: Cambridge University Press.
Carroll, J.B. (2005). The three-stratum theory of cognitive abilities. In D.P. Flanagan, & P.L. Harrison (Eds.), Contemporary intellectual assessment (2nd ed.)
(pp. 69-76). New York: The Guilford Press.
Chen, J.Q., & Gardner, H. (1997). Alternative assessment from a multiple intelligences theoretical perspective. In D.P. Flanagan, J.L. Genshaft, & P.L. Harrison (Eds.), Contemporary intellectual assessment (pp. 105-121). New York, NY: The Guilford Press.
Chomsky, N. (1972). Language and mind. New York, NY: Harcourt Brace Jovanovich.
Chomsky, N. (1995). The minimalist program. Cambridge, MA: Massachusetts Institute of Technology Press.
Clyne, M. (2000). Constraints on code-switching: How universal are they? In L. Wei (Ed.), The bilingualism reader (pp. 257-280). London and New York: Routledge.
Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates.
Costa, A., & Santesteban, M. (2004). Lexical access in bilingual speech production: Evidence from language switching in highly proficient bilinguals and L2 learners. Journal of Memory and Language, 50, 491-511.
Cummins, J. (1984). Bilingualism and special education: Issues in assessment and pedagogy. Clevedon, UK: Multilingual Matters.
Cummins, J. (2000). Language, power and pedagogy. Clevedon, UK: Multilingual Matters.
Dabène, L. (1994). Repères sociolinguistiques pour l'enseignement des langues. Paris, FR: Hachette.
Dewaele, J.M., Housen, A., & Wei, L. (2003). Bilingualism: Beyond basic principles. Clevedon, UK: Multilingual Matters.
Dunn, L.M., & Dunn, L.M. (1997). The Peabody Picture Vocabulary Test (3rd ed.). Circle Pines, MN: American Guidance Services.
Dunn, L.M., Thériault-Whalen, C.M., & Dunn, L.M. (1993). Échelle de vocabulaire en images Peabody. Toronto, ON: Psycan.
Feuerstein, R., Rand, Y., & Hoffman, M. (1979). The dynamic assessment of retarded performers: Learning Potential Assessment Device, theory, instruments, and techniques. Baltimore: University Park Press.
Feuerstein, R., Feuerstein, R., & Gross, S. (1997). The Learning Potential Assessment Device. In D.P. Flanagan, J.L.
Genshaft, & P.L. Harrison (Eds.), Contemporary intellectual assessment (pp. 297-311). New York, NY: The Guilford Press.
Figueroa, R.A. (1990a). Assessment of linguistic minority group children. In C.R. Reynolds & R.W. Kamphaus (Eds.), Handbook of psychological and educational assessment of children: Intelligence and achievement (pp. 671-696). New York, NY: Guilford Press.
Figueroa, R.A. (1990b). Best practices in the assessment of bilingual children. In A. Thomas, & J. Grimes (Eds.), Best practices in school psychology II (pp. 93-106). Washington, DC: National Association of School Psychologists.
Figueroa, R.A., & Newsome, P. (2006). The diagnosis of LD in English learners: Is it discriminatory? Journal of Learning Disabilities, 39, 206-214.
Flanagan, D.P., McGrew, K.S., & Ortiz, S.O. (2000). The Wechsler Intelligence Scales and Gf-Gc theory: A contemporary approach to interpretation. Needham Heights, MA: Allyn & Bacon.
Flanagan, D.P., & Ortiz, S.O. (2001). Essentials of cross-battery assessment. New York, NY: John Wiley & Sons.
Flanagan, D.P., Ortiz, S.O., & Alfonso, V.C. (2007). Essentials of cross-battery assessment (2nd ed.). Hoboken, NJ: John Wiley & Sons.
Flege, J.E., Frieda, A., Walley, A.C., & Randazza, L.A. (1998). Lexical factors and segmental accuracy in second language speech production. Studies in Second Language, 20, 155-187.
Francis, N. (2003, April-May). Cross-linguistic influence, transfer and other kinds of language interaction: Evidence for modularity from the study of bilingualism. Paper presented at the Annual International Symposium on Bilingualism, Tempe, AZ.
Gardner, H. (1983). Frames of mind: The theory of multiple intelligences. New York, NY: Basic Books.
Genesee, F. (2001). Bilingual first language acquisition: Exploring the limits of the language faculty. Annual Review of Applied Linguistics, 21, 153-168.
Gonzalez, V. (1995). Cognition, culture and language in bilingual children: Conceptual and semantic development.
Bethesda, MD: Austin & Winfield.
Grosjean, F. (1982). Life with two languages. Cambridge, MA: Harvard University Press.
Grosjean, F. (2000). Processing mixed language: Issues, findings and models. In L. Wei (Ed.), The bilingualism reader (pp. 443-469). London and New York: Routledge.
Hakuta, K., & Diaz, R.M. (1985). The relationship between degree of bilingualism and cognitive ability: A critical discussion and some new longitudinal data. In K.D. Nelson (Ed.), Children's language (Vol. 5, pp. 319-344). Hillsdale, NJ: Lawrence Erlbaum.
Halmari, H. (1997). Government and code-switching: Explaining American Finnish. New York, NY: Academic Press.
Hamers, J.F., & Blanc, M. (1983). Bilingualité et bilinguisme. Bruxelles: Pierre Mardaga.
Hamers, J.F., & Blanc, M.H.A. (2000). Bilinguality and bilingualism (2nd ed.). Cambridge, UK: Cambridge University Press.
Harrison, P.L., Flanagan, D.P., & Genshaft, J.L. (1997). An integration and synthesis of contemporary theories, tests and issues in the field of intellectual assessment. In D.P. Flanagan, J.L. Genshaft, & P.L. Harrison (Eds.), Contemporary intellectual assessment (pp. 533-561). New York, NY: The Guilford Press.
Hughes, C.E., Shaunessy, E.S., Brice, A.R., Ratliff, M.A., & McHatton, P.A. (2006). Code switching among bilingual and limited English proficient students: Possible indicators of giftedness. Journal for the Education of the Gifted, 30, 7-28.
Ittenbach, R.F., Esters, I.G., & Wainer, H. (1997). The history of test development. In D.P. Flanagan, J.L. Genshaft, & P.L. Harrison (Eds.), Contemporary intellectual assessment (pp. 17-31). New York, NY: The Guilford Press.
Joshi, A. (1985). Processing of sentences with intrasentential code switching. In D.R. Dowty, L. Karttunen, & A.M. Zwicky (Eds.), Natural language parsing: Psychological, computational and theoretical perspectives (pp. 190-205). Cambridge: Cambridge University Press.
Kamphaus, R.W., Winsor, A.P., Rowe, E.W., & Kim, S. (2005).
A history of intelligence test interpretation. In D.P. Flanagan, & P.L. Harrison (Eds.), Contemporary intellectual assessment (2nd ed.) (pp. 23-38). New York, NY: The Guilford Press.
Keppel, G. (1982). Design and analysis: A researcher's handbook (2nd ed.). Englewood Cliffs, NJ: Prentice-Hall.
Larivée, S., & Gagné, F. (2007). Les biais culturels des tests de QI: La nature du problème. Canadian Psychology, 48, 221-239.
Lee, P. (1996). Cognitive development in bilingual children: A case for bilingual instruction in early childhood education. Bilingual Research Journal, Summer, 1-14.
Lopez, E. (1997). The cognitive assessment of limited English proficient and bilingual children. In D.P. Flanagan, J.L. Genshaft, & P.L. Harrison (Eds.), Contemporary intellectual assessment (pp. 503-516). New York, NY: The Guilford Press.
Luria, A.R. (1973). The working brain: An introduction to neuropsychology. New York: Basic Books.
Mackey, W.F. (2000). The description of bilingualism. In L. Wei (Ed.), The bilingualism reader (pp. 26-56). London and New York: Routledge.
Mackey, W.F. (2002). Changing paradigms in the study of bilingualism. In L. Wei, M. Dewaele, & A. Housen (Eds.), Opportunities and challenges of bilingualism (pp. 329-344). Berlin: Mouton de Gruyter.
Macnamara, J. (1966). Bilingualism and primary education. Edinburgh: Edinburgh University Press.
MacSwan, J. (1999). A minimalist approach to intrasentential code-switching. New York, NY: Garland.
MacSwan, J. (2000a). The architecture of the bilingual language faculty: Evidence from intrasentential code switching. Bilingualism: Language and Cognition, 3, 37-54.
MacSwan, J. (2000b). The threshold hypothesis, semilingualism, and other contributions to a deficit view of linguistic minorities. Hispanic Journal of Behavioral Sciences, 22, 3-45.
Marian, V., Blumenfeld, H.K., & Kaushanskaya, M. (2007). The Language Experience and Proficiency Questionnaire (LEAP-Q): Assessing language profiles in bilinguals and multilinguals.
Journal of Speech, Language, and Hearing Research, 50, 940-967.
Martinovic-Zic, A. (1998, March). You are what you speak: Language choice in bilinguals as a strategy in power relations. Paper presented at the Annual Meeting of the American Association for Applied Linguistics, Seattle, WA.
Mather, N., & Woodcock, R.W. (2001). Examiner's manual: Woodcock-Johnson III Tests of Cognitive Abilities. Itasca, IL: Riverside Publishing.
McCloskey, D., & Athanasiou, M.S. (2000). Assessment and intervention practices with second-language learners among school psychologists. Psychology in the Schools, 37, 209-225.
McGrew, K.S. (1997). Analysis of the major intelligence batteries according to a proposed comprehensive Gf-Gc framework. In D.P. Flanagan, J.L. Genshaft, & P.L. Harrison (Eds.), Contemporary intellectual assessment (pp. 151-179). New York, NY: The Guilford Press.
McGrew, K.S. (2005a). Cattell-Horn-Carroll (CHC) definition project. Retrieved July 25, 2005, from http://www.iapsych.com/chcdef.htm
McGrew, K.S. (2005b). The Cattell-Horn-Carroll theory of cognitive abilities: Past, present and future. In D.P. Flanagan, & P.L. Harrison (Eds.), Contemporary intellectual assessment (2nd ed.) (pp. 136-181). New York: The Guilford Press.
McGrew, K.S., & Flanagan, D.P. (1998). The intelligence test desk reference (ITDR): Gf-Gc cross-battery assessment. Boston: Allyn & Bacon.
McGrew, K.S., & Woodcock, R.W. (2001). Technical manual: Woodcock-Johnson III Tests of Cognitive Abilities. Itasca, IL: Riverside Publishing.
Munoz-Sandoval, A., Cummins, J., Alvarado, C.G., & Ruef, M.L. (1998). Bilingual Verbal Ability Test. Itasca, IL: Riverside Publishing.
Myers-Scotton, C. (1993a). Duelling languages: Grammatical structure in codeswitching. Oxford, UK: Clarendon Press.
Myers-Scotton, C. (1993b). Social motivations for code-switching: Evidence from Africa. Oxford, UK: Clarendon Press.
Myers-Scotton, C. (2000). Code-switching as indexical of social negotiations.
In L. Wei (Ed.), The bilingualism reader (pp. 137-165). London and New York: Routledge.
Myers-Scotton, C. (2003). Code-switching: Evidence of both flexibility and rigidity of language. In J. Dewaele, A. Housen, & L. Wei (Eds.), Bilingualism: Beyond basic principles (pp. 189-203). Clevedon, UK: Multilingual Matters.
Naglieri, J.A. (1997). Planning, attention, simultaneous and successive theory and the Cognitive Assessment System: A new theory-based measure of intelligence. In D.P. Flanagan, J.L. Genshaft, & P.L. Harrison (Eds.), Contemporary intellectual assessment (pp. 247-267). New York, NY: The Guilford Press.
National Association of School Psychologists. (1992). Standards for the provision of school psychological services. Silver Spring, MD: Author.
Ochoa, S.H., Rivera, B., & Ford, L. (1997). An investigation of school psychology training pertaining to bilingual psychoeducational assessment of primarily Hispanic students: Twenty-five years after Diana v. California. Journal of School Psychology, 35, 329-349.
Oller, J.W. Jr. (1983). Issues in language testing research. Rowley, MA: Newbury House Publishers.
Oller, J.W. Jr. (1991). Language and bilingualism: More tests of tests. London and Toronto, ON: Associated University Presses.
Ortiz, A.A. (1997). Learning disabilities occurring concomitantly with linguistic differences. Journal of Learning Disabilities, 30, 321-332.
Ortiz, S.O. (2002). Best practices in non-discriminatory assessment. In A. Thomas, & J. Grimes (Eds.), Best practices in school psychology IV (pp. 1321-1336). Washington, DC: National Association of School Psychologists.
Ortiz, S.O., & Dynda, A.M. (2005). Use of intelligence tests with culturally and linguistically diverse populations. In D.P. Flanagan, & P.L. Harrison (Eds.), Contemporary intellectual assessment (2nd ed.) (pp. 545-556). New York: The Guilford Press.
Ortiz, S.O., & Flanagan, D.P. (2002). Best practices in working with culturally diverse children and families.
In A. Thomas, & J. Grimes (Eds.), Best practices in school psychology IV (pp. 337-351). Washington, DC: National Association of School Psychologists.
Ortiz, S.O., & Ochoa, S.H. (2004, April-May). Psychoeducational assessment of children from culturally and linguistically diverse backgrounds. Paper presented at the annual conference of the National Association of School Psychologists, Dallas, TX.
Ortiz, S.O., & Ochoa, S.H. (2005). Advances in cognitive assessment of culturally and linguistically diverse individuals. In D.P. Flanagan, & P.L. Harrison (Eds.), Contemporary intellectual assessment (2nd ed.) (pp. 234-250). New York: The Guilford Press.
Peal, E., & Lambert, W.E. (1962). The relation of bilingualism to intelligence. Psychological Monographs, 76, 1-23.
Poplack, S. (1980). Sometimes I'll start a sentence in Spanish y termino en español: Toward a typology of code-switching. Linguistics, 18, 581-618.
Poplack, S. (2000). Sometimes I'll start a sentence in Spanish y termino en español: Towards a typology of code-switching. In L. Wei (Ed.), The bilingualism reader (pp. 221-256). London and New York: Routledge.
Psychological Corporation. (1981). The Hong Kong Wechsler Intelligence Scale for Children. New York, NY: Psychological Corporation.
Reschly, D.J., & Grimes, J. (1995). Best practices in intellectual assessment. In A. Thomas, & J. Grimes (Eds.), Best practices in school psychology III (pp. 763-773). Bethesda, MD: National Association of School Psychologists.
Reschly, D.J., & Grimes, J. (2002). Best practices in intellectual assessment. In A. Thomas, & J. Grimes (Eds.), Best practices in school psychology IV (pp. 1337-1350). Bethesda, MD: National Association of School Psychologists.
Roid, G.H. (2003). The Stanford-Binet Intelligence Scales, Fifth Edition. Itasca, IL: Riverside Publishing.
Saenz, T., & Huer, M. (2003). Testing strategies involving least biased language assessment of bilingual children. Communication Disorders Quarterly, 24, 184-193.
Schecter, S.R., & Cummins, J. (2003). Multilingual education in practice. Portsmouth, NH: Heinemann.
Schrank, F.A., & Woodcock, R.W. (2003). WJ III Compuscore and Profiles Program (Version 2.0). Itasca, IL: Riverside Publishing.
Snyderman, M., & Rothman, S. (1987). Survey of expert opinions on intelligence and aptitude testing. American Psychologist, 42, 137-144.
Spearman, C.E. (1904). "General intelligence," objectively determined and measured. American Journal of Psychology, 15, 201-292.
Statistics Canada. (2006). The evolving linguistic portrait, 2006 census. Ottawa, ON: Statistics Canada.
Sternberg, R.J. (1986). Intelligence applied: Understanding and increasing your intelligence skills. San Diego, CA: Harcourt Brace Jovanovich.
Sternberg, R.J. (1997). The concept of intelligence and its role in lifelong learning and success. American Psychologist, 52, 1030-1037.
Sternberg, R.J., Grigorenko, E.L., & Kidd, K.K. (2005). Intelligence, race, and genetics. American Psychologist, 60, 46-59.
Tabachnick, B.G., & Fidell, L.S. (2001). Using multivariate statistics (4th ed.). Needham Heights, MA: Allyn & Bacon.
Tellegen, P., & Laros, J. (1993). The construction and validation of a nonverbal test of intelligence: The revision of the Snijders-Oomen tests. European Journal of Psychological Assessment, 9, 147-157.
Thomas, A., & Grimes, J. (1995). Best practices in school psychology III. Bethesda, MD: National Association of School Psychologists.
Thorndike, R.M. (1997). The early history of intelligence testing. In D.P. Flanagan, J.L. Genshaft, & P.L. Harrison (Eds.), Contemporary intellectual assessment (pp. 3-16). New York, NY: The Guilford Press.
Toribio, A.J. (2004). Convergence as an optimization strategy in bilingual speech: Evidence from code-switching. Bilingualism: Language and Cognition, 7, 165-173.
Tzuriel, D. (2001). Dynamic assessment of young children. New York, NY: Kluwer Academic/Plenum.
Valdés, G., & Figueroa, R.A. (1994).
Bilingualism and testing: A special case of bias. Norwood, NJ: Ablex.
Vernon, P.E. (1956). The measurement of abilities. London, UK: University of London Press.
Von Studnitz, R.E., & Green, D.W. (2002). The cost of switching language in a semantic categorization task. Bilingualism: Language and Cognition, 5, 241-251.
Wasserman, J.D., & Tulsky, D.S. (2005). A history of intelligence assessment. In D.P. Flanagan, & P.L. Harrison (Eds.), Contemporary intellectual assessment (2nd ed.) (pp. 3-22). New York: The Guilford Press.
Wechsler, D. (1974). The Wechsler Intelligence Scale for Children-Revised. San Antonio, TX: The Psychological Corporation.
Wechsler, D. (1991). The Wechsler Intelligence Scale for Children-Third Edition. San Antonio, TX: The Psychological Corporation.
Wechsler, D. (2003a). The Wechsler Intelligence Scale for Children-Fourth Edition. San Antonio, TX: The Psychological Corporation.
Wechsler, D. (2003b). The Wechsler Intelligence Scale for Children-Fourth Edition: Technical and interpretive manual. San Antonio, TX: The Psychological Corporation.
Wechsler, D. (2005). L'Échelle d'intelligence Wechsler pour enfants, Quatrième édition. San Antonio, TX: Harcourt Assessment.
Wei, L. (2000). The bilingualism reader. London and New York: Routledge.
Wei, L. (2005). "How can you tell?": Towards a common sense explanation of conversational code-switching. Journal of Pragmatics, 37, 375-389.
Woodcock, R.W. (1997). The Woodcock-Johnson Tests of Cognitive Ability-Revised. In D.P. Flanagan, J.L. Genshaft, & P.L. Harrison (Eds.), Contemporary intellectual assessment (pp. 230-246). New York, NY: The Guilford Press.
Woodcock, R.W., & Muñoz-Sandoval, A.F. (1996). Batería Woodcock-Muñoz: Pruebas de habilidad cognitiva-Revisada. Chicago, IL: Riverside Publishing.
Woodcock, R.W., & Johnson, M.B. (1977). Woodcock-Johnson Psychoeducational Battery. Itasca, IL: Riverside Publishing.
Woodcock, R.W., & Johnson, M.B. (1989).
Woodcock-Johnson Psychoeducational Battery-Revised. Itasca, IL: Riverside Publishing.
Woodcock, R.W., McGrew, K.S., & Mather, N. (2001). Woodcock-Johnson III: Tests of Cognitive Abilities and Tests of Achievement. Itasca, IL: Riverside Publishing.
Zhu, J., & Weiss, L. (2005). The Wechsler scales. In D.P. Flanagan, & P.L. Harrison (Eds.), Contemporary intellectual assessment (2nd ed.) (pp. 297-324). New York: The Guilford Press.

Appendix A Recruitment Form

THE UNIVERSITY OF BRITISH COLUMBIA
Department of Educational & Counselling Psychology, & Special Education
2125 Main Mall, Vancouver, B.C., Canada V6T 2B5
Tel: (604) 822-0091 Fax: (604) 822-3302

The Bilingual Assessment of Cognitive Abilities in French and English
Recruitment Form

January 15, 2006

Dear Parent/Guardian,

I am writing to invite you and your child to take part in a research study that we are conducting as part of the doctoral degree requirements for Mr. Lacroix, the co-investigator. Your name was selected because you are the parent of a student in one of the schools within the Conseil scolaire francophone de la Colombie-Britannique (CSF). The purpose of this study is to learn more about the assessment of bilingual children. Your willingness to work with me is very important. It will provide your child with the opportunity to participate in a study that will allow him/her to learn about the language they use in the context of an assessment. Taking part is voluntary and will not affect any services you or your child may receive from the school district. While your name was obtained from the CSF, information obtained from this study will remain confidential and will not be shared, as students will not be identified by name on any documents. You or your child will have the right to withdraw from the study at any time, without any consequences.
If you think you might be interested and would like your child to participate in this study, we ask you to sign the consent form below and complete the questionnaire that will later be sent home. Your child will also be asked to give his/her assent to participate in the study. The questionnaire is intended to provide us with information on your family's language history and language use. The questionnaire should not take more than 15 minutes of your time.

Your child's involvement in the in-person session(s) will consist of a one-on-one assessment of your child's verbal abilities. The person assessing your child is trained in giving these tests to children and will not give them unless your child is comfortable. The assessment will take about 90 minutes of your child's time and will take place over one session. In some rare instances an additional session may be needed, but this is unlikely. If you/your child agree to take part in the in-person assessment, it will take place at your child's school. As mentioned earlier, no results will be shared, as the study deals with a novel procedure that still needs to be validated. There are no risks associated with your child's participation. Participants in the study will be offered an educational gift worth approximately $5.

It is very important to me that your family's right to privacy is respected. Therefore, all information collected as part of this research study will be kept confidential. No individual information will be reported and no parent or child will be identified by name in any reports about the completed study. If you are interested in taking part or would like to learn more about what the study involves, you may contact Serge Lacroix by telephone at (604) 868-7428 or by e-mail at slacroix@csf.bc.ca.
If you do decide to take part in this study and at any time have any concerns about your treatment or your rights as a research participant, you may contact the Research Subject Information Line in the UBC Office of Research Services at the University of British Columbia at (604) 822-8598.

Sincerely,

Laurie Ford, PhD
Principal Investigator
Department of Educational & Counselling Psychology and Special Education
604-822-0091
laurie.ford@ubc.ca

Serge Lacroix, M.Ps.
Co-Investigator, Doctoral student
Department of Educational & Counselling Psychology and Special Education
604-868-7428
slacroix@csf.bc.ca

Appendix B Consent Forms

THE UNIVERSITY OF BRITISH COLUMBIA
Department of Educational & Counselling Psychology, & Special Education
2125 Main Mall, Vancouver, B.C., Canada V6T 2B5
Tel: (604) 822-0091 Fax: (604) 822-3302

The Bilingual Assessment of Cognitive Abilities in French and English
Consent Form for Child Assessment

Principal Investigator: Laurie Ford, Ph.D., 604-822-0091
Co-Investigator: Serge Lacroix, M.Ps., 604-868-7428
E-mail address: slacroix@csf.bc.ca

Dear Parent/Guardian,

Please read the following form carefully. Sign one copy and return it; keep the other for your records. This is a request for you and your child to take part in the study that we are doing. This project is part of the dissertation research for Mr. Lacroix.

Purpose: The purpose of this study is to learn more about assessment tools that are used in testing bilingual children's verbal abilities. We want to determine how helpful it is for bilingual children to use both their languages in the context of an assessment.

Research Study Participation:
1. Taking part in the study means that you agree to let your child participate in a one-on-one testing session that will last up to 90 minutes. The person assessing your child is trained to give these tests to children. The assessment will take place in your child's school.
We will work with your child's teacher so that they do not miss out on important classroom activities or activities that they like to do.
2. The assessment will be done in one visit. In rare cases, a second visit may be required.
3. Your child will be videotaped during the assessment. The reason we need to videotape is to help us check our own work. If you or your child do not want to be videotaped, we can audiotape the session instead.
4. Allowing your child to take part is voluntary and will not affect any services that your child receives or may need in the future in school. You (and your child) have the right to withdraw from the study at any time.
5. When we are finished with the study, you will receive general information about the study, but no individual results will be shared.
6. The information you give us is confidential. No individual information will be reported and no parent or child will be identified by name in any reports about the study. The only people who will have access to the information you give us are the researchers working on this project.
7. By taking part in this project, you may help to improve services for bilingual children being tested in the school system.
8. Your child will receive a $5 educational gift for participating in this study.
9. If at any time you have any concerns about your treatment or rights as a research participant, you may contact the Research Subject Information Line in the UBC Office of Research Services at the University of British Columbia at (604) 822-8598. If you have any questions or concerns regarding the project, you may contact Serge Lacroix by leaving a message at 604-868-7428.

_______________________________ ____________________________
Laurie Ford, Ph.D.
Principal Investigator

Serge Lacroix, M.Ps., Co-Investigator

The Bilingual Assessment of Cognitive Abilities in French and English
Consent Form for Child Assessment

Please check one of the following:
____ Yes, I agree to take part in this part of the project and I agree that my child may take part in this project.
____ No, I do not wish to take part in this part of the project and I do not wish my child to take part.

Parent's/Guardian's signature (please sign):
Parent's/Guardian's name (please print your name):
Date:
Child's Name:
Child's Birth Date:

Your signature indicates that you have received a copy of this consent form (Pages 1-3) for your own records.

Appendix C Assent Forms

THE UNIVERSITY OF BRITISH COLUMBIA
Department of Educational & Counselling Psychology, & Special Education
2125 Main Mall, Vancouver, B.C., Canada V6T 2B5
Tel: (604) 822-0091 Fax: (604) 822-3302

The Bilingual Assessment of Cognitive Abilities in French and English
Assent Form for Student Assessment

Principal Investigator: Laurie A. Ford, Ph.D., 604-822-0091
Co-Investigator: Serge Lacroix, M.Ps., 604-868-7428
E-mail address: slacroix@csf.bc.ca

Dear student,

Please read this form carefully. We will help by reading it with you. When you are finished with this form, please sign one copy and return it. We are asking you to take part in the study that we are doing as part of a university project so that Mr. Lacroix will receive his degree.

Purpose: The purpose of this study is to learn more about assessment tools that are helpful in testing bilingual children's verbal abilities. We want to know how helpful it is to use two languages when students like you are tested.

Research Study Participation:
1. Taking part in this part of the study means that you agree to participate in a one-on-one testing session that will last up to 90 minutes. The person assessing you is trained to give these tests. The assessment will take place in your school, over one visit.
In rare cases, it may require a second visit. We will pick a time so you do not miss any of your favourite school activities.
2. Taking part is voluntary and will not affect any services that you receive or may need in the future in school. If you do not want to take part you do not have to. You can stop at any time.
3. We would like to videotape you during the assessment. The reason we need to videotape is to help us check our own work. If you do not want to be videotaped, we can do an audiotape of the session instead.
4. The information you tell us will not be shared with anyone other than the people working on the project. No individual information will be reported. We will not use your real name in any reports about the study.
5. By taking part in this project, you may help to improve services for bilingual children being tested in the school system.
6. You will receive a $5 educational gift if you help us with this study.
If you have any questions or concerns regarding the project, you may contact Serge Lacroix or leave a message at 604-868-7428.
________________________________ ______________________________
Laurie Ford, Ph.D., Principal Investigator
Serge Lacroix, M.Ps., Co-investigator

The Bilingual Assessment of Cognitive Abilities in French and English
Assent Form for Child Assessment
Please check one of the following:
____ Yes, I agree to take part in this part of the project.
____ No, I do not wish to take part in this part of the project.
Student's signature (please sign):
Student's name (please print your name):
Date:
Your signature indicates that you have received a copy of this assent form for your own records.

Appendix D
Language Background Questionnaire

THE UNIVERSITY OF BRITISH COLUMBIA
Department of Educational & Counselling Psychology, & Special Education
2125 Main Mall, Vancouver, B.C.
Canada V6T 2B5
Tel: (604) 822-0091 Fax: (604) 822-3302

The Bilingual Assessment of Cognitive Abilities in French and English
February, 2006
Dear Parents,
This questionnaire is used as part of the information-gathering process in the study exploring how bilingual children use both their languages to respond to test items. In order to better understand the connection between language use and how children respond, we have developed this questionnaire on language history. The questionnaire looks at language history and language use. Please fill in all the items to the best of your knowledge and add any comments that could help the researchers understand your child's functioning.
As no participant is identified by name, you will note that there is a code assigned to your questionnaire. It corresponds to your child's code. This system is put in place in order to respect privacy and confidentiality. Because these questionnaires are completed anonymously, no information is gathered that could serve to identify you or any other participants. No individual results will come out of these questionnaires. Thank you for your participation.
If you have any questions or concerns regarding the project, you may contact Serge Lacroix or leave a message at 604-868-7428.
_______________________________ _____________________________
Laurie Ford, Ph.D., Principal Investigator
Serge Lacroix, M.Ps., Co-investigator

The Bilingual Assessment of Cognitive Abilities in French and English
Language Background Questionnaire
Child's code: ________ Date: ______________
This questionnaire is comprised of questions aimed at identifying your language background. Please respond to the best of your knowledge. If a question does not apply to your situation, check "N/A" (not applicable). When the word child is used, we refer to the one participating in the study.
A. Language History
Please check the appropriate response.
1. What is your child's first language? a.
French___ b. English___ c. other(s)____ Name the languages__________
2. What is his/her mother's first language? a. French___ b. English___ c. other(s)____ Name the languages__________
3. What is his/her father's first language? a. French___ b. English___ c. other(s)____ Name the languages__________
4. What is your child's paternal grandmother's first language? a. French___ b. English___ c. other(s)____ Name the languages__________
5. What is your child's paternal grandfather's first language? a. French___ b. English___ c. other(s)____ Name the languages__________
6. What is your child's maternal grandmother's first language? a. French___ b. English___ c. other(s)____ Name the languages__________
7. What is your child's maternal grandfather's first language? a. French___ b. English___ c. other(s)____ Name the languages__________
8. In what language does the father speak to his child? a. French___ b. English___ c. other(s)____ Name the languages__________
9. In what language does the mother speak to her child? a. French___ b. English___ c. other(s)____ Name the languages__________
10. What language does your child usually use at play? a. French___ b. English___ c. other(s)____ Name the languages__________

B. Language Use
Please check the appropriate response.
1. How often do you speak French at home? ☐ 0 to 10% of the time ☐ 10 to 25% ☐ 25 to 50% ☐ More than 50% ☐ N/A
2. How often do you speak English at home? ☐ 0 to 10% of the time ☐ 10 to 25% ☐ 25 to 50% ☐ More than 50% ☐ N/A
3. How often do you speak a language other than French or English at home? ☐ 0 to 10% of the time ☐ 10 to 25% ☐ 25 to 50% ☐ More than 50% ☐ N/A
4. How often do you speak French outside your home? ☐ 0 to 10% of the time ☐ 10 to 25% ☐ 25 to 50% ☐ More than 50% ☐ N/A
5. How often do you speak English outside your home? ☐ 0 to 10% of the time ☐ 10 to 25% ☐ 25 to 50% ☐ More than 50% ☐ N/A
6. How often do you speak a language other than French or English outside your home?
☐ 0 to 10% of the time ☐ 10 to 25% ☐ 25 to 50% ☐ More than 50% ☐ N/A
7. With whom does your child speak your home language?
1._____________________ When? Daily_______ Monthly____ Only during holidays_____
2._____________________ When? Daily_______ Monthly____ Only during holidays_____
3._____________________ When? Daily_______ Monthly____ Only during holidays_____

C. School History
Please check the appropriate response.
1. How many years has your child studied in: a. French____ b. English____ c. other(s)____ Name the languages________________
2. Has your child ever received services in: a. Francisation____ b. English as a second language____
3. In what language did the child's father study? a. French___ b. English___ c. other(s)____ Name the languages__________
4. In what language did the child's mother study? a. French___ b. English___ c. other(s)____ Name the languages__________
Comments: Please add here any relevant information regarding your child and your language history.
______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________

Appendix E
Training manual for the examiners

As examiners, you will receive direct training on the tests used in the study. You will be trained in the standardised procedures of the four tests used (WISC-IV, WJ III COG, ÉVIP, PPVT-III), and your training in those aspects will not differ from the training received by other examiners learning about these tests. The difference will be in the use of the protocol for the scoring and the identification of the code-switching behaviours. Before being trained on test administration, you will receive general information on the study, find out how the recruitment process will occur and learn how to use the integrated test protocol.
You will then be trained in the administration of the WISC-IV Verbal Comprehension subtests (Similarities, Vocabulary and Comprehension), including the use of both the French and the original English manual. Because the protocol used will have both the French and English equivalent items on it, you will only need one protocol for the entire study. After being trained on the WISC-IV, you will receive training on the use of the two WJ III COG tests (Verbal Comprehension, Retrieval Fluency), which will also have items in the two languages, as well as sample correct and incorrect responses on the protocol for the French Verbal Comprehension subtests (the English samples are in the test manual). You will be made familiar with the ÉVIP and PPVT-III and will only need a refresher to confirm that you are up to standard on the administration of these tests.
Although the tests are used in their standard form, the responses and scores are not registered in the usual fashion. The protocol in use for the study will differ, and the training will start with a presentation of the protocol, its use and its particularities.

Step 1 Research Presentation
The training will start with a brief description of the research project, with an introduction of concepts such as code-switching and bilingual testing. This presentation will be the opportunity to start familiarising the examiners with issues around the testing of bilinguals as well as code-switching behaviours noted through testing.

Step 2 Participants
The participants to be tested (students in Grades 5 to 8) come from 5 to 10 schools where the data collection will take place. You will also be informed of who will not be included in the process and for what reasons. A total of 100 participants is sought, 50 in each group.
Step 3 School and student presentation
As examiners, you will receive copies of the various forms (parent recruitment form, consent form, assent form) and the Language Background Questionnaire that you will share with the students and parents. These forms will be completed by the parents, except for the assent form, which will be completed by the participant. At this stage, the trainer will describe the random assignment procedure to the examiners and tell them about the Experimental and Control Groups. Participants will be assigned a code that will provide information such as the name of the school, the examiner, the participant's sex and whether they are in the Experimental or Control Group.
The study should be introduced to the students through a visit to all Grade 5 to 8 classrooms. You will introduce the research as a doctoral project from a UBC student employed by the school district. The project is put in place to help learn about how bilingual students respond to various tests. You will tell the students that we need their parents' consent, that the testing will take about 90 minutes of their time and that they will receive a small gift for participating. Examiners, in their presentation, will be clear and enthusiastic and will answer any questions from the students.

Step 4 Testing session
Once students are recruited and assigned to a group, examiners will meet with them for testing. You will start the session with a reminder of the study goals, establish rapport with the student and have the student sign the assent form. The study will be introduced in 5 steps:
A) Description of the study, its goals and the reason we seek their participation. You will say: "We are doing a study on bilingual students. By participating you will help us understand how bilingual children like you respond on tests. Thank you for your participation."
B) Signature of the assent form, with the implications and the right to stop at any point without consequences.
You will say: "We will read this form together. If you have any questions, just ask me, and then sign to show that you accept to participate."
C) Description of what will be done, with a brief introduction of the tests, the duration and the break. You will say: "We will work for about 90 minutes where I will ask you some questions. We can take a break in the middle."
D) Beginning of the testing, with a description of the code-switching procedure for participants in Group 1, and with the standardised introduction to the tests for participants in Group 2.
You will then introduce the actual testing. The participants in the Control Group will receive the standard introduction to the test, in French, with no mention of code-switching or of the possibility to use English during the test. That introduction will be: "Today, I am going to ask you some questions as part of our study. I want you to do your best. You will see that there will be easy questions and more difficult questions. Just do your best."
The participants in the Experimental Group will receive the same general directions, presented both in French and English, and you will give them the following instructions explaining the code-switching rule: "This test is not a language test. We can do it in French, in English or in both French and English. I will first ask you questions in French and you can respond in French, in English or using both languages in the sentence. As an example, if I ask you 'What is a dog?' you can respond 'It's an animal', 'C'est un animal', 'C'est an animal' or 'It's un animal'. All these responses are acceptable." You will also add that s/he may code-switch under certain conditions: "I will also sometimes ask you questions in English and you can respond the same way, that is either only in French, only in English or using both French and English. It does not matter in which language you respond."
You will then explain that, because the participants are allowed to answer in either language, you will now give these instructions in English: "Because you are allowed to also answer in English, here are the instructions I have just given in French" (the same directions are then provided in English).

Step 5 The test protocol
Because the protocol used for the study is an integrated one, amalgamating the French and English versions of the tests and subtests used, you will receive direct training on its use. The protocol will first be introduced in conjunction with the test manuals. The protocol's format will be described as well as its scoring features (a copy of that protocol will be provided as an addendum to the proposal). Colours will be used to highlight specific elements of the protocol: red will be used to show the directions of a subtest, blue will identify the English items, and dark red will indicate the starting points and subtest names. Table 1 shows a part of the protocol. The protocol is presented in its entirety to the examiners.

TABLE 1. Part of the testing protocol, WISC-IV items

PCS  ECS  Items   Directions: In what way are ---- and ---- alike?   Score   Score w/CS
          Ex.     Rouge-Bleu --- Red-Blue                            0 1 2   0 1 2
          1.      Lait - Eau --- Pen - Pencil                        0 1 2   0 1 2
          2.      Stylo - Crayon à mine --- Milk - Water             0 1 2   0 1 2

The first consideration will be the scoring, as the protocol will have a regular scoring section and one with code-switching, keeping in mind that these scores are tallied separately and do not add up. On the protocol, you will use various codes to detail your actions. As examples, you will write "Q" whenever you ask the participant a question on an item, "R" whenever you need to repeat a question, "NSP" or "DK" whenever the participant does not know the answer, and "//" to separate two sets of responses to an individual item.
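The dual score columns and annotation codes described above can be sketched as a small record per item. This is an illustrative aide only; the class and field names are hypothetical, not part of the study protocol:

```python
# Minimal sketch of one row of the integrated protocol (hypothetical names):
# each item has a regular score column and a separate "Score w/CS" column,
# and the two columns are tallied separately -- they never add up together.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class ProtocolItem:
    number: int
    french_item: str
    english_item: str
    score: int | None = None       # standard 0/1/2 score
    score_w_cs: int | None = None  # 0/1/2 score when a code-switch occurred
    annotations: list = field(default_factory=list)  # e.g. "Q", "R", "NSP"/"DK", "//"

def subtest_totals(items):
    """Tally the two score columns separately, as the protocol requires."""
    total = sum(i.score for i in items if i.score is not None)
    total_w_cs = sum(i.score_w_cs for i in items if i.score_w_cs is not None)
    return total, total_w_cs

items = [
    ProtocolItem(1, "Lait - Eau", "Pen - Pencil", score=2),
    ProtocolItem(2, "Stylo - Crayon à mine", "Milk - Water",
                 score_w_cs=1, annotations=["Q"]),
]
print(subtest_totals(items))  # -> (2, 1)
```

Keeping the two columns as separate fields mirrors the paper protocol's rule that the regular totals and the code-switching totals are never combined.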
The scoring itself is straightforward, as it respects the standardised procedures of the test when responses are provided in the test language or when only the participant code-switches.
a) You will score the items according to the test manual when children respond in either French or English. When they code-switch (e.g., respond in English to a French question), you also score the section identified "Score w/CS", which is the section of scores with code-switching. The scoring differs when the examiner code-switches with Group 1, as points are given when the items are presented in the other language by the examiner. This would not be the case in the standardised administration, as will be observed with Group 2, where you will never code-switch. The general principle for scoring an item with code-switching is that "a code-switch is a code-switch", meaning that an item will be scored with a code-switch regardless of who code-switched (participant, examiner or both).
b) When testing participants in Group 1, you score the items with and without the code-switching. However, you identify the instances of code-switching by both the examiner and the participant. It is possible that the examiner will switch to English at one point in a subtest and remain in English until the end of the subtest. Such a switch counts for only one code-switch. You always start a subtest in French. The point allowances for all of the items follow the recommended procedure from the test manuals and remain as is. The difference lies in the allowance for examiner code-switching with Group 1 and the fact that we tally the instances of code-switching as well as the scores with code-switching.
Once you are familiar with the scoring, the code-switching coding section of the protocol will be introduced. In this section, you need to check whether the examinee or you code-switched.
c) The code-switching is identified as PCS for "Participant code-switching" and ECS for "Examiner code-switching".
You need to check who code-switched, being aware that it could be the participant, the examiner or both. These will later be tallied to determine the frequency of code-switching on the part of the examinee and the examiner. It is extremely important to remember that the code-switching follows two rules: 1) the examiner will code-switch whenever the participant has code-switched twice in a row; 2) the examiner will ask the question in the other language when the participant obtains a score of 0 on an item, always reverting back to the original language for the following item.

Step 6 Scoring
Once you are familiar with the administration and the individual scoring of items, we examine the tally of results and the scores at the subtest level. The items and subtests are scored by adding the number of points and filling in the "Total score" box at the end of each subtest on the protocol. The subtest totals will be entered by the co-investigator in the appropriate computer program that interprets the data; the co-investigator will also use the tables to obtain overall results for the ÉVIP and PPVT-III. Because scoring is a critical feature of the test administration, this aspect of the procedure should be done by someone already familiar with the computer programmes used.
Although you will score the tests you administer, you will also "blindly" score subtests administered by other examiners in order to ensure appropriateness of scoring and quality control. To blindly score means you will score the items using only the responses written in the protocols, unaware of who did the testing. This dual rating method of scoring will itself be counter-examined by the co-investigator, who will seek an inter-rater reliability of .90 (meaning that scoring should be the same 9 times out of 10). This will be measured by the level of congruency between scores.
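The .90 criterion amounts to a simple percent-agreement computation; a minimal sketch (the helper name is ours, not from the study materials):

```python
# Percent agreement between the examiner's item scores and the blind
# rater's item scores: the proportion of items scored identically.
def percent_agreement(examiner_scores, rater_scores):
    if len(examiner_scores) != len(rater_scores):
        raise ValueError("both raters must score the same set of items")
    matches = sum(a == b for a, b in zip(examiner_scores, rater_scores))
    return matches / len(examiner_scores)

examiner = [2, 1, 0, 2, 2, 1, 0, 0, 1, 2]   # examiner's item scores (illustrative)
blind    = [2, 1, 0, 2, 1, 1, 0, 0, 1, 2]   # blind rater's scores (one disagreement)
rate = percent_agreement(examiner, blind)
print(rate)          # -> 0.9 (the same 9 times out of 10)
print(rate >= 0.90)  # -> True: meets the .90 reliability criterion
```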
The co-investigator will compare the scores given by the examiner with the scores given by the external rater. The total scores will be compared and there should be 90% agreement between raters, meaning that these scores should be the same 9 times out of 10. The counter-examination will also be done blindly. The tallying of the code-switching will be a simple addition of all the instances of participant and examiner code-switching. The totals will be added up per subtest and for all tests and subtests combined, hence providing two totals of code-switching behaviours: one for the participant and one for the examiner.

Step 7 Training on test administration
Once you are familiar with the protocol, we move to the administration of the tests and subtests. Administration will follow the standard procedure of the tests and subtests with the Control Group, and you will be required to achieve a predetermined level of mastery before you start testing participants. The co-investigator will do a presentation of all the tests and subtests, the way they are used, as well as the need to be aware of the code-switching feature of the administration for both groups. You will read the test manual describing what needs to be done for each test and subtest and will practice among examiners, then with volunteers. Once you have received the information on the tests, including some reading on the administration procedure, you will practice the tests on volunteers (not study participants) and will be videotaped. These videotaped sessions will be scored by the co-investigator based on specific aspects of the training. Following a first videotaped session that will be individually scored and analysed, the co-investigator will determine if another videotaped session is needed, based on the level of mastery observed. You will be able to test participants once you achieve the aforementioned mastery level in test administration.
Apart from the standard testing procedure as presented in the various test manuals, you will be instructed on how to carry out the bilingual test procedure. The only difference is in the code-switching, which will follow the two aforementioned rules: you code-switch whenever the participant code-switches twice in a row. For the purposes of this research, a code-switch occurs when the participant uses the other language (depending on the one originally used) to express a significant utterance. A significant utterance is one that provides elements that are significant in the scoring sense of the term, as observed in the test manuals. As an example, if the participant only says "C'est un---" for "It's a---", it is not considered a code-switch. However, when a participant uses the word mammal to describe a cow in the cow item of the Vocabulary subtest, it is considered a code-switch.

Step 8 Concluding the session
The testing session will last about 90 minutes, with a break, if needed, after the ÉVIP and three WISC-IV subtests have been administered. The session will be concluded by thanking the participant for their contribution and offering the honorarium (i.e., a $5 educational gift). The student will then be sent back to his/her classroom. You will then stop the audiotape and score the tests and subtests.
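The two code-switching rules can be restated as a short decision sketch; the function names and language codes are illustrative, not part of the manual:

```python
# Sketch of the two examiner code-switching rules for Group 1.
OTHER = {"fr": "en", "en": "fr"}

def language_after_item(current_lang, consecutive_participant_cs):
    """Rule 1: the examiner code-switches once the participant has
    code-switched twice in a row."""
    return OTHER[current_lang] if consecutive_participant_cs >= 2 else current_lang

def reask_language(current_lang, score):
    """Rule 2: a score of 0 triggers one re-ask of the item in the other
    language; the examiner then reverts to the original language for the
    following item."""
    return OTHER[current_lang] if score == 0 else current_lang

print(language_after_item("fr", 2))  # -> en (Rule 1 applies)
print(reask_language("fr", 0))       # -> en (re-ask in English)
print(reask_language("fr", 1))       # -> fr (no re-ask needed)
```

Note that a subtest always starts in French, so `current_lang` begins as `"fr"` for every subtest.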
Appendix F
Differences between the WISC-IV (Wechsler, 2003) and WISC-IV cdn-fr (Wechsler, 2004) item content and placement

Listed below are the differences between test items on the Similarities, Vocabulary and Comprehension subtests of the WISC-IV:

English Subtests | French Subtests

Similarities | Similitudes
Item 1: Pen / pencil | Lait / eau (milk / water)
Item 2: Milk / water | Stylo / crayon à mine (pen / pencil)
Item 4: Shirt / shoe | Papillon / abeille (butterfly / bee)
Item 5: Butterfly / bee | Chat / souris (cat / mouse)
Item 6: Cat / mouse | Chemise / soulier (shirt / shoe)
Item 8: Anger / joy | Coude / genou (elbow / knee)
Item 9: Elbow / knee | Planche de bois / briques (lumber / bricks)
Item 10: Frown / smile | Colère / joie (anger / joy)
Item 12: Lumber / bricks | Glace / vapeur (ice / steam)
Item 14: Mountain / lake | Caoutchouc / papier (rubber / paper)
Item 15: Ice / steam | Froncement de sourcils / sourire (frown / smile)
Item 16: First / last | Sel / eau (salt / water)
Item 17: Flood / drought | Premier / dernier (first / last)
Item 18: Rubber / paper | Inondation / sécheresse (flood / drought)
Item 19: Salt / water | Se venger / pardonner (revenge / forgiveness)
Item 20: Revenge / forgiveness | Montagne / lac (mountain / lake)

Vocabulary | Vocabulaire
Item 5: Clock | Chapeau (hat)
Item 6: Hat | Parapluie (umbrella)
Item 7: Umbrella | Vache (cow)
Item 8: Cow | Bicyclette (bicycle)
Item 9: Bicycle | Voleur (thief)
Item 10: Alphabet | Horloge (clock)
Item 12: Brave | Alphabet
Item 13: Thief | Brave
Item 15: Island | Ancien (ancient)
Item 16: Pest | Baleine (whale)
Item 17: Nonsense | Mimer (mimic)
Item 18: Ancient | Île (island)
Item 19: Mimic | Transparent
Item 21: Fable | Peste (pest)
Item 22: Migrate | Précis (precise)
Item 23: Precise | Migration (migrate)
Item 24: Transparent | Tolérer (tolerate)
Item 25: Seldom | Prévoyance (foresight)
Item 26: Rivalry | Fable
Item 27: Strenuous | Obscur (obscure)
Item 28: Foresight | Se vanter (boast)
Item 29: Unanimous | Absurdité (nonsense)
Item 30: Amendment | Rivalité (rivalry)
Item 31: Compel | Ardu (strenuous)
Item 32: Affliction | Contraindre (compel)
Item 33: Imminent | Unanime (unanimous)
Item 34: Aberration | Insinuer (insinuate)
Item 35: Garrulous | Séquestrer (sequestrate)
Item 36: Dilatory | Volubile (garrulous)

Comprehension | Compréhension
Item 4: Smoke | Portefeuille-Sacoche (wallet)
Item 5: Wallet | Fumée (smoke)
Item 6: Police | Se battre (fight)
Item 7: Fight | Policiers (police)
Item 11: Libraries | Inspection de la viande (inspect)
Item 12: Inspect | Bibliothèques publiques (libraries)
Item 13: Newspaper | Promesse (promise)
Item 15: Copyrights | Timbres (stamps)
Item 16: Promise | Journaux (newspaper)
Item 17: Stamps | Droits d'auteur (copyrights)
Item 18: Owning | Démocratie (democracy)
Item 19: Democracy | Science et technologie (technology)
Item 20: Technology | Posséder (owning)

Appendix G
Test and scoring adapted test records

THE UNIVERSITY OF BRITISH COLUMBIA
Department of Educational & Counselling Psychology, & Special Education
2125 Main Mall, Vancouver, B.C. Canada V6T 2B5
Tel: (604) 822-0091 Fax: (604) 822-3302

The Bilingual Assessment of Cognitive Abilities in French and English

GUIDE D'ÉVALUATION
1- Commencer la session en faisant signer le formulaire d'assentiment.
2- La page couverture bleue est pour le groupe expérimental-bilingue.
3- Donner la consigne suivante au groupe expérimental, en français et en anglais : « Aujourd'hui, je vais te poser des questions qui font partie de notre recherche. Je veux que tu fasses de ton mieux. Tu vas voir, il y a des questions faciles et des questions plus difficiles. Fais de ton mieux. Ce test n'est pas un test de langue. On peut le faire soit en français soit en anglais ou utiliser les deux langues dans une même phrase. Je vais d'abord te poser les questions en français et tu peux répondre soit en français, soit en anglais ou utiliser les deux langues dans une même phrase. Par exemple, si je te demande « Qu'est-ce qu'un chien?
» tu peux répondre « It's an animal », « C'est un animal », « C'est an animal » ou « It's un animal ». Toutes ces réponses sont acceptables. » Je vais aussi te poser des questions en anglais et tu peux aussi répondre soit en français, soit en anglais ou utiliser les deux langues dans une même phrase.
4- Ensuite vous donnez les consignes en anglais : "Today, I am going to ask you some questions as part of our study. I want you to do your best. You will see that there will be easy questions and more difficult questions. Just do your best. This test is not a language test. We can do it in French, in English or in both French and English. I will first ask you questions in French and you can respond in French, in English or using both languages in the sentence. As an example, if I ask you 'What is a dog?' you can respond 'It's an animal', 'C'est un animal', 'C'est an animal' or 'It's un animal'. All these responses are acceptable." You will also add that s/he may code-switch under certain conditions: "I will also sometimes ask you questions in English and you can respond the same way, that is either only in French, only in English or using both French and English. It does not matter in which language you respond."
5- Au groupe contrôle, dites : « Aujourd'hui, je vais te poser des questions qui font partie de notre recherche. Je veux que tu fasses de ton mieux. Tu vas voir, il y a des questions faciles et des questions plus difficiles. Fais de ton mieux. » Ensuite, procéder au test en français, sans jamais passer à l'anglais.
6- Pour le groupe expérimental : n'oubliez pas de rappeler la consigne que les élèves peuvent répondre de façon bilingue aux sous-tests Similitudes, Vocabulaire et Compréhension.
7- Inscrivez le code de l'élève sur le protocole.
Les codes ont 6 chiffres :
- Le premier chiffre correspond au groupe (Groupe 1 : Contrôle ; Groupe 2 : Expérimental)
- Le deuxième chiffre correspond au numéro de l'examinateur
- Le troisième chiffre correspond au numéro de l'école (voir au bas)
- Le quatrième chiffre correspond au sexe du participant (0 = masculin, 1 = féminin)
- Les deux derniers chiffres correspondent au numéro de séquence
1- Groupe : 1 (Contrôle) ou 2 (Expérimental)
2- Codes des examinateurs/trices : 1- Mary-Lou McCarthy 2- Manon Landry 3- Jacinthe Gauthier 4- Yves Gagnon 5- Jean-Claude Bazinet 6- Serge Lacroix
3- Codes des écoles : 1- Anne-Hébert 2- André-Piolat 3- Les Pionniers 4- Rose-des-Vents 5- Gabrielle-Roy 6- Victor-Brodeur 7- Au Cœur de l'Île
4- Sexe : 0 : garçon 1 : fille
5-6- Séquence : entre 01 et 50 par groupe
Exemple : 111023 correspond à un garçon de Anne-Hébert testé par M.L. McCarthy et il est le 23e testé dans le groupe contrôle. 274126 correspond à une fille de Rose-des-Vents, testée par J.C. Bazinet et elle est la 26e testée dans le groupe expérimental.
8- Rappelez à l'élève qu'il/elle ne doit pas parler de la recherche aux autres car tous ne font pas la même chose.
N.B. Si le QUESTIONNAIRE SUR L'HISTOIRE FAMILIALE n'a pas été remis, vous pouvez le remettre à l'enfant afin qu'il le fasse compléter par ses parents et nous le ramène.

The Bilingual Assessment of Cognitive Abilities in French and English (BACAFE)
Protocole de recherche
Code de l'enfant / Nom de l'examinateur / Date de l'évaluation / Date de naissance / Âge (année, mois, jour)

ÉVIP
Plancher : Les 8 plus bas réussis. Plafond : 6 échecs parmi 8 items.
Consigne : MONTRE-MOI --- OU OÙ SE TROUVE ---
Items Départ 9 ans (Réponse / Score) -- Items Départ 13 ans (Réponse / Score)
70. saluer 3 / 105. marais 1
71. fleuve 2 / 106. nuque 2
72. uniforme 4 / 107. tropical 2
73. édifice 4 / 108. parallèle 4
74. descendant 1 / 109. évaluer 3
75. demeure 1 / Items Départ 14 ans 76.
artiste 3 110. panache 4 77. portatif 2 111. mendiant 3 78. grogner 1 112. fragment 3 79. temps 3 113. judiciaire 2 80. cultivateur 4 114. entonnoir 3 81. pièce 1 Items Départ 15 ans 82. agriculture 4 115. bordereau 3 83. composer 4 116. prétentieux 4 84. rive 2 117. chevalet 3 Items Départ 10 ans 118. moissonner 1 85. solaire 2 119. canin 3 86. savant 4 Items Départ 16 ans + 87. plâtrer 3 120. précipitation 2 88. angle 2 121. marécage 3 89. cubique 4 122. encombré 3 90. taquin 1 123. arctique 2 91. survoler 3 124. accablée 3 92. alpiniste 1 125. escorter 4 93. nutritif 3 126. doléances 4 94. oratoire 1 127. ébénisterie 2 Items Départ 11 ans 128. incisive 1 95. furieux 1 129. volaille 3 96. falaise 3 130. maçon 4 97. porcelaine 2 131. prodige 1 98. boussole 2 132. portail 1 99. phare 4 133. scruter 2 Items Départ 12 ans 100. étonné 3 134. chômer 1 101. morse 2 135. assister 1 102. triplés 4 136. archéologue 4 144 103. espiègle 4 137. épuisement 4 104. échangeur 3 138. compas 3 Items Réponse Score Items Réponse Score 139. pédagogue 1 156. imbiber 4 140. lubrifié 1 157. empaler 1 141. amphibie 4 158. radier 3 142. équestre 2 159. balustrade 1 143. bovin 2 160. clairon 2 144. brasier 3 161. encastrement 4 145. étamine 3 162. aïeule 3 146. concave 4 163. réceptacle 1 147. garçon 3 164. passementerie 1 148. coin 3 165. ébahissement 3 149. cosse 4 166. ellipse 4 150. copieux 2 167. ingénieux 2 151. submerger 4 168. enticher 3 152. assortir 1 169. arable 3 153. convergence 2 170. décidu 4 154. apparition 2 Score Total 155. dôme 3 WISC SIMILITUDES- RAPPEL: PROCÉDURE BILINGUE AU GROUPE EXPÉRIMENTAL Marche arrière : Si l’enfant n’obtient pas un score parfait à l’un ou l’autre des deux premiers items. Arrêt : Mettre fin au sous-test après 5 cotes consécutives de 0 P C S E C S Items DE QUELLE FAÇON --- ET --- SONT-ILS PAREILS? Ex. Rouge-Bleu – Red-Blue Score Score w /CS 1. Lait- Eau --- Pen- Pencil 0 1 2 0 1 2 2. Stylo- Crayon à mine --- Milk- Water 0 1 2 0 1 2 3. 
Départ 9-11 ans Pomme- Banane --- Apple-Banana 0 1 2 0 1 2 4. Papillon- Abeille --- Shirt-Shoe 0 1 2 0 1 2 5. Départ 12-16 ans Chat- Souris --- Butterfly-Bee 0 1 2 0 1 2 6. Chemise- Soulier --- Cat- Mouse 0 1 2 0 1 2 7. Hiver- Été --- Winter- Summer 0 1 2 0 1 2 8. Coude – Genou --- Anger-Joy 0 1 2 0 1 2 9. Planche de bois- Briques --- Elbow-Knee 0 1 2 0 1 2 Arrêt : Mettre fin au sous-test après 5 cotes consécutives de 0 145 10. Colère- Joie --- Frown-Smile 0 1 2 0 1 2 11. Peinture – Statue --- Painting-Statue 0 1 2 0 1 2 12. Glace- Vapeur --- Lumber-Bricks 0 1 2 0 1 2 13. Poète- Peintre --- Poet-Painter 0 1 2 0 1 2 14. Caoutchouc – Papier --- Mountain-Lake 0 1 2 0 1 2 15. Froncement de sourcils- Sourire --- Ice-Steam 0 1 2 0 1 2 16. Montagne – Lac --- First-Last 0 1 2 0 1 2 17. Sel- Eau --- Flood-Drought 0 1 2 0 1 2 18. Premier- Dernier --- Rubber-Paper 0 1 2 0 1 2 19. Inondation – Sècheresse - Salt - Water 0 1 2 0 1 2 20. Se venger – Pardonner - Revenge – Forgiveness 0 1 2 0 1 2 21. Permission – Restriction - Permission- Limitation 0 1 2 0 1 2 22. Réalité – Rêve - Reality- Dream 0 1 2 0 1 2 23. Espace – Temps - Space- Time 0 1 2 0 1 2 Score total WISC Vocabulaire Marche arrière : Si l’enfant n’obtient pas un score parfait à l’un ou l’autre des deux premiers items. Arrêt : Mettre fin au sous-test après 5 cotes consécutives de 0 P C S E C S Items QU’EST-CE QU’UN/E ---? ou QUE VEUT DIRE--- ? Score Score w/CS 1. Auto - Car 0 1 2 0 1 2 2. Fleur - Flower 0 1 2 0 1 2 3. Train - Train 0 1 2 0 1 2 4. Seau - Bucket 0 1 2 0 1 2 5. Chapeau Clock 0 1 2 0 1 2 6. Parapluie Hat 0 1 2 0 1 2 146 7. Départ 9-11 ans Vache Umbrella 0 1 2 0 1 2 8. Bicyclette Co 0 1 2 0 1 2 9. Départ 12-16 ans Voleur Bicycle 0 1 2 0 1 2 10. Horloge Alphabet 0 1 2 0 1 2 11. Quitter Leave 0 1 2 0 1 2 12. Alphabet Brave 0 1 2 0 1 2 Arrêt : Mettre fin au sous-test après 5 cotes consécutives de 0 13. Brave Thief 0 1 2 0 1 2 14. Obéir Obey 0 1 2 0 1 2 15. Ancien Island 0 1 2 0 1 2 16. 
Baleine Pest 0 1 2 0 1 2 17. Mimer Nonsense 0 1 2 0 1 2 18. Île Ancient 0 1 2 0 1 2 19. Transparent Mimic 0 1 2 0 1 2 20. Absorber Absorb 0 1 2 0 1 2 21. Peste Fable 0 1 2 0 1 2 22. Précis Migrate 0 1 2 0 1 2 23. Migration Precise 0 1 2 0 1 2 24. Tolérer Transparent 0 1 2 0 1 2 25. Prévoyance Seldom 0 1 2 0 1 2 26. Fable Rivalry 0 1 2 0 1 2 27. Obscur Strenuous 0 1 2 0 1 2 28. Se vanter Foresight 0 1 2 0 1 2 29. Absurdité Unanimous 0 1 2 0 1 2 30. Rivalité Amendment 0 1 2 0 1 2 31. Ardu Compel 0 1 2 0 1 2 147 32. Contraindre Affliction 0 1 2 0 1 2 33. Unanime Imminent 0 1 2 0 1 2 34. Insinuer Aberration 0 1 2 0 1 2 35. Séquestrer Garrulous 0 1 2 0 1 2 36. Volubile Dilatory 0 1 2 0 1 2 Score total WISC Compréhension Marche arrière : Si l’enfant n’obtient pas un score parfait à l’un ou l’autre des deux premiers items. Arrêt : Mettre fin au sous-test après 4 cotes consécutives de 0 PCS ECS Items Score Score w/CS 1. Dents Teeth 0 1 2 0 1 2 2. Légumes Vegetables 0 1 2 0 1 2 3. Départ 9-11 ans Ceintures de sécurité Seatbelts 0 1 2 0 1 2 4. Portefeuille- Sacoche Smoke 0 1 2 0 1 2 5. Départ 12-16 ans Fumée Wallet 0 1 2 0 1 2 6. Se battre Police 0 1 2 0 1 2 7. Policiers Fight 0 1 2 0 1 2 8. S’excuser Apologize 0 1 2 0 1 2 9. Lumières Lights 0 1 2 0 1 2 10. Exercice physique Exercise 0 1 2 0 1 2 11. Inspection de la viande Libraries 0 1 2 0 1 2 Arrêt : Mettre fin au sous-test après 4 cotes consécutives de 0 12. Bibliothèques publiques Inspect 0 1 2 0 1 2 13. Promesse Newspaper 0 1 2 0 1 2 14. Médecins Doctors 0 1 2 0 1 2 148 15. Timbres Copyrights 0 1 2 0 1 2 16. Journaux Promise 0 1 2 0 1 2 17. Droits d’auteur Stamps 0 1 2 0 1 2 18. Démocratie Owning 0 1 2 0 1 2 19. Science et technologie Democracy 0 1 2 0 1 2 20. Posséder Technology 0 1 2 0 1 2 21. 
Communication Communication 0 1 2 0 1 2 Score total WJ III COG Vocabulaire en images Plancher : 3 réussites consécutives Plafond : 3 échecs consécutifs PCS ECS Items Lorsqu’il y a plus d’une image : MONTRE --- Lorsqu’il n’y a qu’une image : QU’EST-CE QUE C’EST ? Score Score w/CS Ex. A____ _____ball B ____ _____ cat 0 1 0 1 1. ___ bébé baby - 0 1 0 1 2. ___ cheval horse - 0 1 0 1 3. ___ QU’EST-CE QUE C’EST ? chiot puppy - 0 1 0 1 4. ___ soulier shoe 0 1 0 1 5. QU’EST-CE QUE C’EST ? banane banana 0 1 0 1 6. Départ 5e QU’EST-CE QUE C’EST ? clé key 0 1 0 1 7. DIS-MOI CE QUE C’EST ? ciseaux scissors 0 1 0 1 8. QU’EST-CE QUE C’EST ? carotte carrot 0 1 0 1 9. QU’EST-CE QUE C’EST ? fourchette fork 0 1 0 1 10. QU’EST-CE QUE C’EST ? hélicoptère helicopter 0 1 0 1 11. QU’EST-CE QUE C’EST ? cadenas padlock 0 1 0 1 12. Départ 6e -8e QU’EST-CE QUE C’EST ? vaisseau/bateau/navire ship 0 1 0 1 13. QU’EST-CE QUE C’EST ? globe globe 0 1 0 1 149 14. QU’EST-CE QUE C’EST ? robinet faucet 0 1 0 1 15. COMMENT CELA S’APPELLE-T-IL? pyramide pyramid 0 1 0 1 16. QU’EST-CE QUE C’EST ? carrosse stagecoach 0 1 0 1 17. COMMENT CELA S’APPELLE-T-IL? stéthoscope stethoscope 0 1 0 1 18. QU’EST-CE QU’IL Y A AUTOUR DU BRAS DE CET HOMME ? Garrot tourniquet 0 1 0 1 19. QU’EST-CE QUE C’EST ? étau vise 0 1 0 1 20. COMMENT S’APPELLE CE VÊTEMENT? toge toga 0 1 0 1 21. COMMENT CELA S’APPELLE-T-IL? Joug/Attelage yoke 0 1 0 1 22. COMMENT S’APPELLE CE STYLE DE BÂTIMENT? pagode pagoda 0 1 0 1 23. COMMENT S’APPELLE CETTE PARTIE DE L’ÉDIFICE ? Flèche/aiguille spire 0 1 0 1 Score total WJ III COG Synonymes TOUJOURS COMMENCER PAR LES ITEMS DE PRATIQUE Plancher : 3 réussites consécutives Plafond : 3 échecs consécutifs P C S E C S Items Consignes : DONNE MOI UN AUTRE MOT POUR --- Ex. A ___ ____ gros-large big-large B ___ ____ coucher-dormir/sommeiller nap-sleep Score Score w /CS 1. Départ 5e EN COLERE Angry- Furieux, exaspéré, irrité // mad 0 1 0 1 2. PETIT small Minuscule, menu // little 0 1 0 1 3. 
COMMENCER begin Débuter // start 0 1 0 1 4. Départ 6e-8e PELOUSE lawn Herbe, gazon // grass 0 1 0 1 5. ------S/O 0 1 0 1 6. VOITURE Car automobile, auto, véhicule //automobile 0 1 0 1 7. AIDER assist Secourir, soutenir, épauler //help 0 1 0 1 8. SAUVAGE untamed fauve-indompté -wild 0 1 0 1 9. DÉVORER devour manger, avaler, engloutir // eat 0 1 0 1 10. CACHER conceal dissimuler-Camoufler/masquer // hide 0 1 0 1 11. LUMINEUX luminous Brillant, éclatant //bright 0 1 0 1 12. LUNAIRE Lunar 0 1 0 1 150 sélénite //moon 13. ÉVIDENT obvious Apparent/flagrant/Manifeste // evident 0 1 0 1 14. AMBIGU ambiguous Incertain/vague/équivoque // indefinite 0 1 0 1 15. Gronder chide réprimander/attraper/disputer // scold 0 1 0 1 Score total WJ III COG Antonymes TOUJOURS COMMENCER PAR LES ITEMS DE PRATIQUE Plancher : 3 réussites consécutives Plafond : 3 échecs consécutifs PCS ECS Items Consigne: DIS-MOI LE CONTRAIRE DE --- Ex. A ___ ___ oui-non yes-no B ___ ___ mal-bien wrong-right Score Score w/CS 1. NON no oui // yes 0 1 0 1 2. EN-BAS down en-haut // up 0 1 0 1 3. DEHORS out dedans/dans in 0 1 0 1 4. GARÇON boy fille // girl 0 1 0 1 5. GRAND large petit // little 0 1 0 1 6. DOUX soft Dur // hard 0 1 0 1 7. Départ 5e- 8e FORT strong faible // weak 0 1 0 1 8. VRAI true Faux // false 0 1 0 1 9. PLANCHER floor plafond // ceiling 0 1 0 1 10. VIE life mort/décès // death 0 1 0 1 11. ANCIEN ancient Moderne // modern 0 1 0 1 12. GÉNÉREUX generous égoïste/avare/pingre // selfish 0 1 0 1 13. PRÉVENANT considerate- égoïste, mal élevé, désagréable discourteous 0 1 0 1 14. AUTHENTIQUE authentic faux/simulé/falsifié // bogus 0 1 0 1 15. ATTIRER attract repousser, éloigner, rejeter // repel 0 1 0 1 16. ABSURDE absurd logique, raisonné, sensé // sensible 0 1 0 1 17. RÉSERVÉ demure effronté, expansif, exubérant // brazen 0 1 0 1 18. 
SYNTHESE synthesis dissolution, dispersion, développement // analysis 0 1 0 1 Score total

WJ III COG Analogies verbales TOUJOURS COMMENCER PAR LES ITEMS DE PRATIQUE Plancher : 3 réussites consécutives Plafond : 3 échecs consécutifs P C S E C S Items Consignes : COMPLÈTE CE QUE JE DIS--- Ex. A Un oiseau vole; un poisson (nage) swims B Une mère est à un père comme une sœur est à un (frère) brother C En haut est à en bas comme dedans est à (dehors) out Score Score w/CS 1. L'œil est pour voir comme l'oreille est pour (entendre) hear 0 1 0 1 2. Rouge est pour arrêter comme vert est pour (aller) go 0 1 0 1 3. Un manteau est pour porter comme une pomme est pour (manger) eat 0 1 0 1 4. Départ 5e-8e Courir est à vite comme marcher est à (lentement) (ou courir vite/marcher lentement) slow 0 1 0 1 5. Une cannette est à métal comme une bouteille est au (plastique) ou à (la vitre) glass 0 1 0 1 6. Un collet est au cou comme une montre est au (poignet) wrist 0 1 0 1 7. Démarrer est à arrêter, ce que marche est à (arrêt) stop 0 1 0 1 8. L'eau est à la pipe comme l'électricité est au (fil/câble/corde) wire(s) 0 1 0 1 9. Réfrigérateur est à zoo comme la nourriture est à (l'animal) animal(s) 0 1 0 1 10. L'eau est à l'air comme le bateau est à (l'avion) plane 0 1 0 1 11. Les pinces sont aux ciseaux comme tenir est à (couper) cut 0 1 0 1 12. Le poignet est à l'épaule comme la cheville est à (la hanche) hip 0 1 0 1 13. Malice est à crime comme coquin est à (criminel) criminal 0 1 0 1 14. Les dames sont aux dés comme cylindre est au (cube/bloc) cube 0 1 0 1 15. Le vin est à la cuve comme l'eau est au (baril) tank 0 1 0 1 Score total

WJ III COG FLUIDITÉ DE RAPPEL Items Réponses Scores Scores W/CS Français Anglais 1. JE VEUX QUE TU ME NOMMES DES CHOSES QUI SE MANGENT OU SE BOIVENT. TU AURAS 1 MINUTE POUR EN NOMMER AUTANT QUE TU PEUX. QUAND JE TE DIRAI « VAS-Y », DONNE AUTANT DE MOTS QUE TU PEUX (PAUSE). VAS-Y! 2.
MAINTENANT JE VEUX VOIR COMBIEN DE PRENOMS DE PERSONNES DIFFERENTS TU PEUX NOMMER. TU AURAS 1 MINUTE. QUAND JE TE DIRAI « VAS-Y », DONNE AUTANT DE PRENOMS QUE TU PEUX (PAUSE). VAS-Y! 3. MAINTENANT JE VEUX VOIR COMBIEN D’ANIMAUX DIFFERENTS TU PEUX NOMMER. TU AURAS 1 MINUTE. QUAND JE TE DIRAI « VAS-Y », DONNE AUTANT DE NOMS D’ANIMAUX QUE TU PEUX (PAUSE). VAS-Y! Score total PPVT-III ItemsSet 7 Départ 8-9 Réponse Score Réponse Score 73. Gigantic 2 88. surprised 4 74. nostril 4 89. canoe 3 75. vase 3 90. interviewing 1 76. knight 1 91. clarinet 4 77. towing 1 92. exhausted 2 78. horrified 3 93. pitcher 3 79. trunk 2 94. reptile 2 80. selecting 1 95. polluting 3 81. island 2 96. vine 1 82. camcorder 4 97. pedal 2 83. heart 3 98. dissecting 2 84. wrench 4 99. bouquet 4 Items Départ 10-11 100. rodent 3 85. flamingo 2 101 inhaling 4 86. tambourine 4 102. valley 1 87. palm 1 103. tubular 3 153 104. demolishing 4 152. lever 1 105. tusk 1 153. detonation 2 106. adjustable 2 154. pillar 2 107. fern 1 155. cultivating 1 108. hurdling 3 156. aquatic 4 Items Départ 12-16 157. indigent 2 109. solo 4 158. oasis 1 110. citrus 2 159. disappointed 4 111. inflated 3 160. perpendicular 3 112. lecturing 3 161. poultry 4 113. timer 1 162. confiding 1 114. injecting 1 163. periodical 2 115. links 4 164. filtration 1 116. cooperating 2 165. primate 4 117. microscope 1 166. spherical 2 118. archery 2 167. talon 3 119. garment 4 168. octagon 3 120. fragile 3 Items Set 14 Items Set 11 169.incandescent 4 121. carpenter 2 170. pilfering 2 122. dilapidated 4 171. trajectory 1 123. hazardous 3 172. mercantile 3 124. adapter 2 173. derrick 4 125. valve 3 174. ascending 2 126. isolation 1 175. monetary 4 127. feline 2 176. entomologist 2 128. wailing 1 177. gaff 1 129. coast 4 178. quintet 3 130. appliance 1 179. nautical 4 131. foundation 4 180. incarcerating 4 132. hatchet 3 181. coniferous 4 133. blazing 3 182. wildebeest 1 134. mammal 2 183. caster 3 135. reprimanding 1 184. reposing 4 136. 
upholstery 4 185. convex 1 137. hoisting 1 186. gourmand 3 138. exterior 1 187. dromedary 2 139. consuming 4 188. diverging 4 140. pastry 4 189. incertitude 2 141. cornea 2 190. quiescent 3 142. constrained 3 191. honing 1 143. pedestrian 2 192. cupola 2 144. colt 3 193. embossed 4 Items Set 13 194. perambulating 2 145. syringe 4 195. arable 3 146. transparent 3 196. importunity 1 147. ladle 2 197. cenotaph 1 148. replenishing 3 198. tonsorial 4 149. abrasive 1 199. nidificating 3 150. parallelogram 3 200. terpsichorean 1 151. cascade 4 201. cairn 4 202. osculating 2 203. vitreous 3 204. lugubrious 2 SCORE TOTAL

Score summary table (Tableau des scores). Columns: Raw score; Standard score; Raw score w/CS; Standard score w/CS. Rows (tests or subtests): ÉVIP; PPVT; Similitudes; Vocabulaire; Compréhension; Compréhension Verbale; WJ Picture Vocabulary; Synonyms; Antonyms; Analogies; WJ Verbal Comprehension; Fluidité de rappel.

Comments, observations:
Student’s comments
Questions or particularities

The University of British Columbia Office of Research Services Behavioural Research Ethics Board Suite 102, 6190 Agronomy Road, Vancouver, B.C.
V6T 1Z3

CERTIFICATE OF APPROVAL - FULL BOARD

PRINCIPAL INVESTIGATOR: Laurie Ford
INSTITUTION / DEPARTMENT: UBC/Education/Educational & Counselling Psychology, and Special Education
UBC BREB NUMBER: H06-80148
INSTITUTION(S) WHERE RESEARCH WILL BE CARRIED OUT: UBC, Point Grey Site
Other locations where the research will be conducted: N/A
CO-INVESTIGATOR(S): Serge Lacroix
SPONSORING AGENCIES: N/A
PROJECT TITLE: The Bilingual Assessment of Cognitive Abilities in French and in English
REB MEETING DATE: February 23, 2006
CERTIFICATE EXPIRY DATE: July 24, 2008
DATE APPROVED: July 24, 2007
DOCUMENTS INCLUDED IN THIS APPROVAL: N/A

The application for ethical review and the document(s) listed above have been reviewed and the procedures were found to be acceptable on ethical grounds for research involving human subjects. Approval is issued on behalf of the Behavioural Research Ethics Board and signed electronically by one of the following: Dr. Peter Suedfeld, Chair; Dr. Jim Rupert, Associate Chair; Dr. Arminee Kazanjian, Associate Chair; Dr. M. Judith Lynam, Associate Chair"@en ; edm:hasType "Thesis/Dissertation"@en ; vivo:dateIssued "2008-11"@en ; edm:isShownAt "10.14288/1.0054493"@en ; dcterms:language "eng"@en ; ns0:degreeDiscipline "School Psychology"@en ; edm:provider "Vancouver : University of British Columbia Library"@en ; dcterms:publisher "University of British Columbia"@en ; dcterms:rights "Attribution-NonCommercial-NoDerivatives 4.0 International"@en ; ns0:rightsURI "http://creativecommons.org/licenses/by-nc-nd/4.0/"@en ; ns0:scholarLevel "Graduate"@en ; dcterms:title "The bilingual assessment of cognitive abilities in French and English"@en ; dcterms:type "Text"@en ; ns0:identifierURI "http://hdl.handle.net/2429/2575"@en .