ABRACADABRA: THE EFFECTIVENESS OF A COMPUTER-BASED READING INTERVENTION PROGRAM FOR STUDENTS AT-RISK FOR READING FAILURE, AND STUDENTS LEARNING ENGLISH AS A SECOND LANGUAGE

by

Ashley Flis
M.A., The University of Toronto, 2006

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF
DOCTOR OF PHILOSOPHY
in
The Faculty of Graduate and Postdoctoral Studies
(Special Education)

THE UNIVERSITY OF BRITISH COLUMBIA
(Vancouver)

February, 2018

© Ashley Flis, 2018

Abstract

This study examined the effectiveness of ABRACADABRA (ABRA), which stands for A Balanced Reading Approach for CAnadians Designed to Achieve Best Results for All. This web-based literacy tool was used to target the literacy skills of two groups: grade 1 students who are at risk for reading failure (at-risk group), and students who speak English as a second language (ESL group). One hundred four students from ten grade 1 classrooms in four elementary schools within a school district in Ontario were selected to participate in the study: 82 students participated in the at-risk group and 22 students participated in the ESL group. Upon meeting eligibility through a screening process using the Dynamic Indicators of Basic Early Literacy Skills (DIBELS) measure, participants were matched on their total DIBELS scores and assigned to one of two conditions: treatment (students received ABRA and regular literacy instruction) or control (students received regular literacy instruction). Literacy skills were assessed before and after the intervention program using six standardized assessments.
Measures included the Word Attack and Letter-Word Identification subtests from the Woodcock-Johnson III Tests of Achievement (Woodcock, McGrew, & Mather, 2001); the Blending and Elision subtests from the Comprehensive Test of Phonological Processing (CTOPP; Wagner, Torgesen, & Rashotte, 1999); and the Sight Word Efficiency and Phonetic Decoding Efficiency subtests from the Test of Word Reading Efficiency (TOWRE; Torgesen, Wagner, & Rashotte, 1999). These assessed phonics, phonological awareness, and word reading skills. The intervention was conducted over a ten-week period, four days per week for twenty minutes per day. For the at-risk group, the results from a 2x2 repeated measures analysis of variance (ANOVA) indicated that there were no significant interactions between time and condition on any of the measures. For the ESL group, there was a statistically significant interaction between time and condition for the Word Attack, Blending, and Letter-Word Identification subtests, indicating that the ABRA intervention was a direct contributor to the improvement of reading skills among ESL students. The discussion focuses on how the findings are consistent with and divergent from previous research, and offers implications for future research and practice.

Lay Summary

Children who experience difficulty learning to read in grade 1 often struggle with reading their whole lives. If left untreated, many undesirable outcomes can follow. Teachers must intervene early in order to potentially reverse these outcomes. Students who speak a language other than English also require additional reading instruction, such as strengthening of their phonological awareness skills. We live in an age of technology, and students respond well to using technology in the classroom.
In this study, a group of grade 1 students who are at risk for reading failure and a group of students who speak a language other than English used a reading program on the computer for 10 weeks. My main question was whether a computer-based literacy program strengthened the reading skills of the group of grade 1 students at risk for reading failure, as well as of the students who speak a language other than English.

Preface

This dissertation consists of original research conceived by the graduate student, A. Flis, with advisement from her research supervisor, Dr. L. Siegel. The graduate student was the lead researcher and author. All data collection, recruitment, analysis, and writing was conducted by the graduate student with assistance from the adviser. Ethics approval to conduct this research was obtained from the UBC Behavioural Research Ethics Board (BREB). The UBC BREB certificate number is H11-01037.

Table of Contents

Abstract
Lay Summary
Preface
Table of Contents
List of Tables
List of Figures
List of Abbreviations
Acknowledgements
Chapter 1: Introduction
    1.1 Rationale
        1.1.1 Contributions to Research
        1.1.2 Student Success
        1.1.3 Research Perspectives
            1.1.3.1 Reading
        1.1.4 Response to Intervention
    1.2 English as a Second Language
    1.3 ABRACADABRA Reading Intervention Program
        1.3.1 Intervention Studies Involving ABRACADABRA
        1.3.2 ABRA Studies Involving Students Who Speak ESL
        1.3.3 Activities
        1.3.4 ABRA Dosage
    1.4 Definition of Learning Disabilities
        1.4.1 Identification Process of Learning Disabilities
    1.5 Multimedia and Literacy
    1.6 Current Study
Chapter 2: Method
    2.1 Setting
    2.2 Participants
        2.2.1 At Risk
        2.2.2 ESL
    2.3 Measures
        2.3.1 Measures Used to Screen for Eligibility to Participate in the Study for the At-Risk Group
        2.3.2 Measures Used to Assign Participants to Experimental and Control for Both At-Risk and ESL Groups
        2.3.3 Pre- and Post-Standardized Assessments
        2.3.4 Teacher, Parent, Student Survey and Language Questionnaire
    2.4 Procedure
        2.4.1 Consent and Assent
        2.4.2 Facilitation and Training
        2.4.3 Screening, Pre- and Posttest
        2.4.4 Screening Test
        2.4.5 Pre- and Posttest
    2.5 Intervention
    2.6 Fidelity of Implementation
        2.6.1 ESL
Chapter 3: Results
    3.1 Analyses
        3.1.1 Significance and Effect Size
    3.2 At Risk Group
        3.2.1 Word Attack
        3.2.2 Blending
        3.2.3 Elision
        3.2.4 Letter-Word Identification
        3.2.5 Sight Word Efficiency
        3.2.6 Phonetic Decoding Efficiency
    3.3 ESL Group
        3.3.1 Word Attack
        3.3.2 Blending
        3.3.3 Elision
        3.3.4 Letter-Word Identification
        3.3.5 Sight Word Efficiency
        3.3.6 Phonetic Decoding Efficiency
    3.4 Social Validity
        3.4.1 Teachers' Perceptions of ABRA's Effectiveness for At Risk Students
        3.4.2 Parent Perceptions of ABRA's Effectiveness for At Risk Students
        3.4.3 At Risk Student Perceptions of ABRA's Effectiveness
        3.4.4 Teachers' Perceptions of ABRA's Effectiveness for ESL Students
        3.4.5 Parent Perceptions of ABRA's Effectiveness for ESL Students
        3.4.6 Language Questionnaire
        3.4.7 ESL Students' Perceptions of the Effectiveness of ABRA
Chapter 4: Discussion
    4.1 Overview of Chapter 4
    4.2 Findings Consistent with Previous Research
    4.3 Findings Divergent from Previous Research
        4.3.1 Findings Involving Comparisons Across Groups
    4.4 Limitations
    4.5 Implications
        4.5.1 Implications for Research
        4.5.2 Implications for Practice
        4.5.3 Conclusion
References
Appendices
    Appendix A Screen Measures
        A.1 Phoneme Segmentation Fluency Measure
        A.2 Nonsense Word Fluency Measure
    Appendix B Pre- and Posttest Measures
        B.1 Elision Measure
        B.2 Blending Measure
        B.3 Letter-Word Identification Measure
        B.4 Word Attack Measure
        B.5 Sight Word Efficiency Measure
        B.6 Phonetic Decoding Efficiency Measure
    Appendix C Social Validity
        C.1 Teacher Survey
        C.2 Parent Survey
        C.3 Student Survey
    Appendix D ESL Questionnaire
        D.1 ESL Parent Language Questionnaire

List of Tables

Table 2.1    Distribution of Participants by Condition for At-Risk Group
Table 2.2    Distribution of Participants by Condition for ESL Group
Table 2.3    Reading Readiness Screening Tool Subtests
Table 2.4    Ratings from the ABRACADABRA Fidelity Checklist
Table 2.5    Adapted ABRACADABRA Program
Table 2.6    Study Design
Table 3.1    ANOVA Comparison of Pre- and Posttest Mean Scores for At-Risk Group
Table 3.2    ANOVA Comparison of Pre- and Posttest Mean Scores for ESL Group
Table 3.3    Results of Teacher Survey for At-Risk Group
Table 3.4    Results of Parent Survey for At-Risk Group
Table 3.5    Results of Student Survey for At-Risk Group
Table 3.6    Results of Student Survey for ESL Group

List of Figures

Figure 3.1    Word Attack Mean Scores at Pre- and Post-Intervention by Condition for At-Risk Group
Figure 3.2    Blending Mean Scores at Pre- and Post-Intervention by Condition for At-Risk Group
Figure 3.3    Elision Mean Scores at Pre- and Post-Intervention by Condition for At-Risk Group
Figure 3.4    Letter-Word Identification Mean Scores at Pre- and Post-Intervention by Condition for At-Risk Group
Figure 3.5    Sight Word Efficiency Mean Scores at Pre- and Post-Intervention by Condition for At-Risk Group
Figure 3.6    Phonetic Decoding Efficiency Mean Scores at Pre- and Post-Intervention by Condition for At-Risk Group
Figure 3.7    Word Attack Mean Scores at Pre- and Post-Intervention by Condition for ESL Group
Figure 3.8    Blending Mean Scores at Pre- and Post-Intervention by Condition for ESL Group
Figure 3.9    Elision Mean Scores at Pre- and Post-Intervention by Condition for ESL Group
Figure 3.10   Letter-Word Identification Mean Scores at Pre- and Post-Intervention by Condition for ESL Group
Figure 3.11   Sight Word Efficiency Mean Scores at Pre- and Post-Intervention by Condition for ESL Group
Figure 3.12   Phonetic Decoding Efficiency Mean Scores at Pre- and Post-Intervention by Condition for ESL Group

List of Abbreviations

ABRA: ABRACADABRA
CAI: Computer-Assisted Instruction
CSLP: Centre for the Study of Learning and Performance
CTOPP: Comprehensive Test of Phonological Processing
DIBELS: Dynamic Indicators of Basic Early Literacy Skills
DSM-5: Diagnostic and Statistical Manual of Mental Disorders - 5
ELL: English Language Learners
ESL: English as a Second Language
ES: Effect Size
IEP: Individual Education Plan
IOA: Inter-Observer Agreement
LD: Learning Disability
LDAA: Learning Disabilities Association of Alberta
LDAC: Learning Disabilities Association of Canada
LDAO: Learning Disabilities Association of Ontario
LWI: Letter-Word Identification
NICHD: National Institute of Child Health and Human Development
NRP: National Reading Panel
NWF: Nonsense Word Fluency
PDD/NOS: Pervasive Developmental Disorder / Not Otherwise Specified
PDE: Phonetic Decoding Efficiency
PSF: Phoneme Segmentation Fluency
Psych-Ed: Psychoeducational Assessment
RRST: Reading Readiness Screening Tool
RTI: Response to Intervention
SERT: Special Education Resource Teacher
SLD: Specific Learning Disability
SWE: Sight Word Efficiency
WRM: Woodcock Reading Mastery

Acknowledgements

I offer my gratitude to the faculty, staff, and fellow students at UBC, my co-workers in the school system, and, most importantly, my students, who have inspired me daily to continue my work in this field. I owe particular thanks to Dr. L. Siegel, whose expertise in the field inspired me to inquire on many levels, as well as to Dr. N. Perry and Dr. S. Mercer for their expertise. Special thanks are owed to my family, who have supported me through this process.

Chapter 1: Introduction

1.1 Rationale

Researchers agree that successful acquisition of early literacy skills is essential for future learning (Cunningham & Stanovich, 1998). Society demands effective literacy skills for success in the workplace.
Those students who are at risk for reading failure in elementary school, and who do not receive effective early reading intervention, will most likely be at a disadvantage later in adolescence and possibly adulthood (Torgesen, 2000). Academic difficulty (McLeskey & Grizzle, 1992), including grade retention and school dropout, is among the consequences of reading failure and learning/reading disability (Lichtenstein & Zantol-Wiener, 1988; National Center for Education Statistics, 2015). In addition, students with learning disabilities are at increased risk of developing social issues, negative peer relations (Sabornie, 1994; Wiener & Schneider, 2002), and negative self-concept (Boetsch, Green, & Pennington, 1996; Chapman, 1988). Studies have also shown that youth who have poor reading skills or identified reading disabilities, and who have not received intervention, are more likely to experience suicidal ideation or attempts, and more likely to drop out of school, than youth with typical reading ability (Barwick & Siegel, 1996; Daniel et al., 2006; McBride & Siegel, 1997).

Researchers have examined reassessments of the National Reading Panel (NRP) report (National Institute of Child Health and Human Development [NICHD], 2000), which noted five essential elements of reading that must be present in reading instruction or reading programs in order for those programs to be successful. These components are phonics, phonological awareness, reading fluency, vocabulary, and reading comprehension. It is essential to note that both the original NRP report (2000) and the reassessment (Camilli et al., 2003) suggest that phonics instruction is an integral component of a balanced early literacy program.

Students who are at risk for reading failure must be remediated with an immediate, effective reading program that targets their specific needs. If a student does not respond to the intervention, that student may then be considered for formal identification.
This process is described in more detail in section 1.1.4. I have included an at-risk group within this study to determine whether the selected reading program is a significant contributor to the advancement of students' reading and related skills.

At the time this study was conducted, the term English as a Second Language (ESL) was used to describe students who speak a language other than English as a first language. Since the study was conducted, the Ministry of Education in Ontario has replaced ESL with English Language Learner (ELL). Because most schools within the study district continue to use the term ESL when referring to teachers and programs, this term will be used to refer to English language learners in the remainder of this thesis. The study will be referred to as "this study."

A team of researchers investigated the development of literacy skills in language-minority children (August & Shanahan, 2008). They found that effective instruction involving key components of literacy, such as phonemic awareness, decoding, oral reading fluency, reading comprehension, vocabulary, and writing, has a positive influence on literacy development among language-minority students. The population of students who speak a language other than English (ESL) in the district participating in this study is increasing. Students who speak a language other than English will be referred to as ESL students for the remainder of this thesis. Referrals for special education services are also increasing for this group because of the large number of ESL students scoring below benchmark on reading assessments. When a referral for special education services is discussed, the ESL student is often denied a psychoeducational assessment (psych-ed) on the reasoning that the results would most likely produce insufficient data for identification due to ESL concerns (discussed in section 1.4).
A recommendation is made to continue participating in the school's ESL program (discussed in section 1.2) for an extra year. At this point it is crucial for the ESL teacher to implement a literacy intervention program that specifically targets the full extent of these students' literacy learning needs, in addition to regular classroom literacy instruction. The need for services within the at-risk and ESL populations created a context for me to examine the efficacy of ABRACADABRA (ABRA, A Balanced Reading Approach for CAnadians Designed to Achieve Best Results for All) for supporting literacy skills in these populations.

1.1.1 Contributions to Research

This study makes several contributions to research regarding the literacy development of ESL and at-risk populations. Reading difficulty exists within the ESL population and warrants specific intervention evaluated against an ESL control group. At the time data were collected for this study, there was limited research examining effective reading programs that included an ESL control population (treatment and control conditions) for comparison purposes. One study examined the effectiveness of a reading intervention program, PHAST (now called EMPOWER), for 166 struggling readers (Lovett et al., 2008). The study involved two separate groups: students who spoke a language other than English (ELL) and students who spoke English as their first language (EFL). Each group was measured against a control group. The treatment groups participated in PHAST for a total of 105 hours; special education teachers delivered the program for one hour a day. It is important to note that students who spoke English as a second language and who had been in Canada for fewer than two years were not included in the sample. Most of the ELL students in the study were born in Canada. The intervention group size was not reported in the study.
The results indicated that ELL and EFL students demonstrated equivalent outcomes in both the intervention and control conditions. This is an important finding, as it suggests that ESL students can acquire and strengthen their reading skills similarly to EFL students. It is worth noting, however, that this program is not technology based, and it is costly. ABRA, by contrast, is free of charge: an evidence-based, phonologically based literacy software program that provides an engaging, interactive learning environment for elementary-aged students. In 2009, the authors of ABRA conducted studies with at-risk populations that included ESL students (Savage et al., 2009). ABRA was selected as the literacy intervention tool for this study because of its previous use with ESL students. However, this study separates ESL students from other at-risk students, creating a separate ESL group with its own control condition to determine ABRA's effectiveness within that specific population. In 2014, after the current study, the ABRA authors also conducted studies with ESL populations in their home countries and regions (Hong Kong, Australia, and Africa).

A second contribution to research involves the potential of technology-based literacy tools such as ABRA. At the time of this study, there was minimal research focused on technology-based reading intervention programs that support the development of ESL students' literacy skills (Burstein, Sabatini, Shore, Moulder, & Lentini, 2013). Results from a few large-scale, randomized control trials (RCTs) of technology-based reading intervention programs in the U.S. (Dynarski et al., 2007) found that effect sizes for these interventions were not significantly different from zero. Considering the value technology can bring to the improvement of students' reading skills (as noted by Canadian researchers in section 1.5), technology should not be abandoned.
However, this finding suggests that effective, technology-based reading intervention programs are needed. The school district in this study had not implemented technology-based reading intervention programs. ABRA therefore presented an opportunity to include technology as a support for literacy learning in this district while simultaneously ensuring, through the research project, that ABRA was effective for diverse student needs.

There is also a need for an objective assessment of the effectiveness of ABRA. All previous studies involving ABRA were either authored or co-authored by its creators. This study provides an independent assessment of ABRA.

Finally, this study used ABRA as part of an RTI model. Specifically, a screening process was implemented to ensure that only students who were experiencing genuine difficulties with reading participated in the at-risk group. There are few (if any) research studies that analyze the effectiveness of ABRA with such a population; studies of ABRA's effectiveness often use whole-class samples that were not screened.

1.1.2 Student Success

The school district in which this study was conducted, which I will refer to as the Study district from now on, currently implements a Response to Intervention (RTI) model for literacy that aims to provide early intervention for struggling learners. RTI is a multi-tiered model that moves students through increasingly intense intervention instruction within small groups; it was created as an alternative to using the IQ-achievement discrepancy to identify students with learning disabilities. In the remainder of this document, learning disabilities will be referred to as LD. The Study district does rely on the IQ-achievement discrepancy for identification of LD, and thus adopts what I will refer to as a modified version of the RTI model as a possible preventive approach before identification.
A full description of the pre-RTI model, as well as the modified RTI model as it relates to the Study district, follows. In the recent past, the pre-RTI model did not include early intervention programs and thus wasted valuable instructional time. Students who demonstrated difficulty with reading were presented at an in-school team meeting. This meeting was generally held within the first two months of school and consisted of the school principal, the classroom teacher, and the special education resource teacher (SERT). Instructional strategies were suggested to the classroom teacher to implement within the classroom, with the hope that the student would respond positively. If the student did not respond positively to the differentiated instruction, the concerns would be addressed at a second team meeting, ideally held within the same school year.

In addition to these in-school team meetings, the SERT would attend monthly special education board team meetings. The principal and a special education consultant from the Study district would also be in attendance. The purpose of these monthly meetings was to present specific students' needs and to create an action plan. The discussion could lead to consent for the administration of standardized literacy assessments. After the results were presented at the following monthly board team meeting, consent could be given to place the student's name on the wait-list for a psychoeducational assessment. A psychoeducational assessment will henceforth be referred to as a psych-ed. Consent depended on the amount of evidence of past implementation of differentiated instruction. At the time of this study, the student would have waited approximately two years to receive the assessment. The pre-RTI model thus involved no specific intervention programs with effective instruction, no pre- and posttest scores, and a wait of approximately two years to determine a possible identification. This model therefore failed to use valuable time for early intervention.
Currently, most schools in the Study district adhere to the modified RTI model as it relates to effective literacy instruction. Tier 1 assessment and instruction involves all students in grade one classrooms. Students who are demonstrating difficulty with reading are presented with differentiated instruction strategies within the classroom. If these students do not respond positively to these strategies, they are brought to the first in-school team meeting. This usually occurs within the first two months of school. At this first team meeting, teachers from the entire primary division, the principal, and the SERT are required to assess the students' needs and produce an action plan. This action plan refers to Tier 2 of the modified RTI model, and will determine which literacy intervention program is best suited, which teacher will implement the program, and the duration of the program. Pre- and posttest scores from chosen assessments, such as the Developmental Reading Assessment (Beaver & Celebration Press, 2002), are taken for every student who participates in the Tier 2 intervention. A timeframe of approximately ten weeks is provided to deliver the intervention program. The second in-school team meeting occurs at the end of the Tier 2 intervention period. The purpose of this meeting is to discuss the students' responsiveness to the intervention. If no (or minimal) response was detected, then further action may be required. The team determines appropriate Tier 3 intensive intervention programs that target the students' needs.

The third and final in-school team meeting typically marks the end of the Tier 3 intervention period; however, it often takes place within the Tier 3 timeframe. The same team discusses the students' responsiveness (to date) to the Tier 3 intervention. If the team decides that the students' response to the intervention program was minimal or insufficient, then the principal and SERT present each student to the board team meeting.
At this point, if the response to intervention has been minimal, the special education consultant will likely recommend a psych-ed assessment. Traditionally, a tiered system did not exist: students were presented at monthly board team meetings and added to long wait-lists to receive psych-ed assessments. A tiered system allows students to receive immediate intervention. Students with minimal or insufficient response to interventions are recommended for psych-ed assessments, with shorter wait times. If a psych-ed is approved, school psychometricians administer a battery of standardized assessments to the student in an individual setting. The IQ-achievement discrepancy is currently used to determine a diagnosis of an LD. "The IQ-achievement discrepancy is a calculation of the difference between a student's actual achievement and his or her measured potential as determined by an IQ test" (Siegel, 2003). Further details on this model are discussed in section 1.4. An Individual Education Plan (IEP) is developed for students who receive the LD classification, indicating appropriate accommodations and/or modifications that must be implemented within their regular classroom in order for them to achieve success. Not all school districts in Canada have adopted the RTI model, and as stated above, a student may wait up to two years to receive a psych-ed. Researchers have stated that the LD population has increased by about 150%, to the point where it represents over 50% of the special education population and over 5% of all students in school (Kavale et al., 2005). Although these statistics have remained constant over the years, they continue to place high demands on school districts to devise cost-effective strategies regarding resources and associated funding.
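To make the IQ-achievement discrepancy calculation described above concrete, a minimal sketch follows. The use of standard scores (mean 100, SD 15) and the 1.5-standard-deviation cutoff are illustrative assumptions only, drawn from common psychometric conventions; they are not the Study district's actual criterion.

```python
# Illustrative sketch of the IQ-achievement discrepancy. Both scores are
# assumed to be standard scores (mean 100, SD 15); the 1.5-SD cutoff is
# one common convention, not district policy.

def discrepancy(iq_standard_score: float, achievement_standard_score: float) -> float:
    """Difference between measured potential (IQ) and actual achievement."""
    return iq_standard_score - achievement_standard_score

def meets_discrepancy_criterion(iq: float, achievement: float,
                                cutoff_sd: float = 1.5, sd: float = 15.0) -> bool:
    """True when achievement falls at least cutoff_sd standard deviations below IQ."""
    return discrepancy(iq, achievement) >= cutoff_sd * sd

# Example: IQ 110 with reading achievement 85 yields a 25-point discrepancy,
# which exceeds the 22.5-point (1.5 SD) cutoff.
print(meets_discrepancy_criterion(110, 85))  # True
```

The sketch shows why the discrepancy approach is sensitive to the arbitrary cutoff chosen: a student one point above the threshold is classified differently from one a point below it.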
Implementing immediate intervention upon an initial observation of reading difficulty can effectively strengthen literacy skills among grade one students and may potentially decrease the rate of LD diagnoses (Pikulski, 1994).

1.1.3 Research Perspectives

In response to the growing number of LD identifications and the need for effective literacy intervention, researchers indicate that only within the past 20-25 years has reliable evidence been reported on remediating reading disabilities (Lovett, Barron, & Frijters, 2013). To understand how to effectively remediate students with reading difficulties, it is essential to understand two concepts: the difficulties that limit the successful development of word-reading processes, both for students who speak English as a first language and for ESL students; and the RTI model, which will be discussed briefly in this section.

1.1.3.1 Reading

The development of reading and verbal language through early childhood is essential to a child's academic performance. Students with reading difficulties demonstrate a range of deficits related to explicit awareness of, and the ability to manipulate, the sound structure of spoken words (Brady, 1997). Specifically, researchers note that these deficits are found within the word identification and phonological processing domains. To demonstrate a deficit in phonological awareness means that "significant difficulties are detected with segmenting and differentiating individual speech sounds in spoken words, blending individual speech sounds to form a spoken word, and using phonological codes effectively to aid working memory performance" (Liberman & Mattingly, 1985; Mann, 1986; Stanovich, 1991, 1994; Wagner & Torgesen, 1987). Not all reading disabilities are due to deficits in phonological awareness. It is noted that the core processing impairments of many disabled readers extend beyond phonological awareness and may include language impairments.
Language impairments and reading disabilities are two common language-based learning disabilities, with prevalence estimates of 5-8% and 5-17% respectively (Pennington & Bishop, 2009; Peterson & Pennington, 2012). Children with language impairments and reading disabilities have similar deficits, including deficits in phonological processing, comprehension, fluency, and phonological short-term memory (Catts et al., 2005; Gathercole & Baddeley, 1990; Pennington, 2006; Pennington & Bishop, 2009; Wise et al., 2007). In addition, researchers note that about 40% to 75% of preschoolers with early language disorders have been found to develop reading problems later on (Pennington, 2006; Scarborough & Fowler, 1993). The Diagnostic and Statistical Manual of Mental Disorders (5th ed.; DSM-5; American Psychiatric Association, 2013) describes that difficulties may arise in the form, content, or function of language. If so, one may be identified as having one of three types of Language Disorder: Receptive (deficits in comprehension), Expressive (deficits in verbal and written expression), or Mixed Receptive-Expressive Language Disorder (deficits in both receptive and expressive communication).

Understanding the causes of reading disabilities leads to the investigation of appropriate and effective classroom-wide interventions. Explicit classroom instruction in letter-sound correspondence was reported to prevent reading underachievement in students at risk for reading failure who had poor phonological awareness skills (Juel & Minden-Cupp, 2000). Since there may be multiple causes for a specific reading disability, researchers have found that implementing a reading intervention program that focuses solely on phonological awareness is not enough to produce statistically significant results.
In fact, it is noted that in order to successfully remediate students with reading disabilities, a multidimensional approach (e.g., direct and dialogue-based instruction, explicitly teaching students different levels of sub-syllabic segmentation, effective decoding strategies) to core reading-related deficits should be used (Lovett, Barron, & Frijters, 2013). The results of this research indicate that faster learning and superior outcomes are attained when a broad-based intervention is implemented.

ESL students are faced with the challenge of learning academic content while developing English language proficiency. These students must learn English efficiently in order to catch up with their monolingual English classmates. Considering the challenges faced by ESL students, reading difficulties may be present and may persist past expected benchmark levels (otherwise known as Stages; these are described in section 2.2.2). It is difficult to diagnose a specific reading disability when it is complicated by factors such as ESL status. Geva and Siegel (2000) note that there are five basic cognitive processes involved in reading: phonological processing, syntactic awareness, working memory, semantic processing, and orthographic processing. Measures that are used to screen students at risk for reading failure involve the assessment of these processes (Majsterek & Ellenwood, 1990; Scanlon & Vellutino, 1997; Share et al., 1984). A study conducted in North Vancouver (Chiappe et al., 2002) examined whether measures used to identify children at risk for reading failure are appropriate for children from different language backgrounds. The study examined 659 kindergarten students who were categorized into three separate groups: native English speakers, bilingual (BL), and ESL. Bilingual children spoke English and another language at home. ESL children spoke a language other than English as their primary language at home.
Results from this study indicate that the BL and ESL children performed lower on most of the assessments. Although the measures of phonological awareness, syntactic awareness, and verbal working memory were more difficult for both BL and ESL children, their acquisition of basic literacy skills (word recognition and spelling) was not inhibited. The acquisition of basic literacy skills among the BL and ESL children developed in a similar manner to that of native English-speaking children. It was concluded that the same instructional methods supported the development of literacy among the ESL and BL children. The measures used to identify children at risk for reading failure were therefore appropriate for children from different language backgrounds.

Researchers note that phonological awareness is a skill that predicts reading development not only for native English speakers but for ESL and BL children as well (Ben-Dror, Bentin, & Frost, 1995; So & Siegel, 1997; Sprenger-Charolles, 1991). It is a skill that appears to transfer from a first language to the second language (Chiappe & Siegel, 1999; Cisero & Royer, 1995; Durgunoglu, Nagy, & Hancin-Bhatt, 1993; Verhoeven, 1994). Although this important skill transfers to the second language successfully, researchers note that other cognitive skills essential for reading may not. Syntactic awareness is the ability to understand and focus on the basic grammatical structure of a sentence rather than its meaning. Mastery of this skill is critical for fluent and efficient readers. Deficits in this skill were reported for non-native English-speaking children (Bentin, Deutsch, & Liberman, 1990; Da Fontoura & Siegel, 1995; So & Siegel, 1997).
Verbal working memory is a cognitive process that is an essential part of reading. This process involves information being temporarily stored while other information is retrieved from long-term memory (Baddeley, 1983). While students are learning to read, they must decode, which requires working memory skills. Difficulties in working memory were reported for disabled readers in Chinese (So & Siegel, 1997), Hebrew (Geva & Siegel, 2000), and Portuguese (Da Fontoura & Siegel, 1995). The North Vancouver study (Chiappe et al., 2002) concluded that BL and ESL students, despite their limited exposure to English, acquired basic literacy skills similarly to native English-speaking students. Therefore, when ESL students are not reading at an age-appropriate level, the cause may be a lack of appropriate instruction or a lack of sufficient exposure to English. It is therefore important to adopt and implement an effective literacy tool early, and to expose ESL students to English (oral and reading) often, prior to assessing for potential learning disabilities.

Lesaux and Harris (2013) note that RTI holds promise for effectively delivering literacy instruction early. It is also noted that typical reading instruction for ESL students focuses on two essential goals: building code-based skills and meaning-related skills. The current study focuses on the development of code-based skills, but acknowledges the importance of developing the skills required for extracting knowledge and making meaning from text. Code-based skills allow students to read words with accuracy (Catts & Weismer, 2006). These skills include phonological processing, letter knowledge, decoding, and fluency. As noted before, phonological processing skills support the development of fluent word reading. Fluent word reading draws on knowledge of letter-sound relationships, sight words, and decoding skills (August & Shanahan, 2008).
Ultimately, these skills support reading comprehension.

1.1.4 Response to Intervention

The second research perspective involved in this study is the RTI model and how it relates to the improvement of literacy skills for students at risk for reading failure and for ESL students. Students move through this three-tiered model of increasingly intense instruction and supports, beginning at Tier 1 in the regular classroom. Students who do not respond effectively to regular classroom instruction will require additional support (i.e., Tiers 2 and, possibly, 3) that involves research-based intervention programs (Vaughn, Wanzek, Woodruff, & Linan-Thompson, 2007).

Under the Constitution Act (1982), individuals who speak French or English are guaranteed the right to have their children receive an education in their own language. The rights of individuals with disabilities are outlined in sections 15(1) and 15(2) of the Charter of Rights and Freedoms, referred to as the equality provisions of the Charter. Through these sections of the Charter, Canadians with disabilities have the right to equality and the right to receive an education (Sussel, 1995).

The Education Act is the main statute that governs public education in Ontario. This legislation provides authority for the creation of all of the main features of the education system. The Ministry of Education in each province also creates education policy. In Ontario, the Ministry of Education issues Policy/Program Memorandums (PPM) that address many issues, such as the criteria required to identify a student with a specific exceptionality, and the identification process for students with special needs. In 2010, the Ministry of Education released Growing Success, an assessment, evaluation, and reporting guide. It is noted that according to Policy/Program Memorandum No.
11, Early Identification of Children's Learning Needs (1982), every school board in Ontario must have procedures in place to identify the level of development, learning abilities, and needs of every child who is enrolled in the school, and to ensure that educational programs are designed to accommodate those needs. Assessment and evaluation for students with special needs are key components of programming. The Education Act notes that the definition of a special education program includes a "program that is based on and modified by the results of continuous assessment and evaluation and that includes a plan containing specific objectives and an outline of educational services that meet the needs of the exceptional student" (Education Act, S.1 (1)). In 2013, a panel of experts within the Ministry of Education in Ontario created a document titled "Learning for All" (Ontario Ministry of Education, 2013). In it, a tiered approach to learning is emphasized as an extremely effective approach to assessment and intervention (Vaughn & Fuchs, 2003). The school district that participated in the current study (among other districts) refers to this Ministry document as a resource when instructing students with special needs. Given the well-known long-term effects of failing to read, the Education Act advised school districts to incorporate research-validated reading programs that encompass direct and explicit instruction of the core reading skills essential for literacy development. These skills include phonemic awareness, phonics, fluency, vocabulary, and comprehension (National Institute of Child Health and Human Development [NICHD], 2000). One way to effectively combat over-identification, two-year wait-lists, and possible misdiagnoses of LD is to implement prevention strategies. An example of a prevention strategy is the implementation of reading intervention programs at the onset of reading difficulties.
Pikulski (1994) notes that providing effective instruction to children who struggle with reading is essential during the early years. Providing early identification and intervention is of paramount importance for a number of reasons. Children who fall behind in reading in kindergarten and Grade 1 continue to experience difficulties with reading over time (Lyon et al., 1995; Stanovich, 1986). It is also known that when an intervention program is implemented too late (grade 3 or later), the number of children whose reading difficulties are corrected decreases dramatically (Lyon, 1995). Therefore, there is a window of opportunity to implement effective intervention programs. Intervention programs must also be implemented on a regular basis. Reading success is greater when intervention is provided on a regular basis and with a high level of commitment (Florida Center for Reading Research [FCRR], 2007).

Lipka and Siegel (2010) examined the development of literacy skills among children longitudinally, from kindergarten until grade 7, within an entire school district in North Vancouver. Children whose first language was English, and children who spoke a language other than English, participated in all three tiers of the RTI model. In this study, the RTI model focused on early identification and the delivery of a preventive literacy program to the entire population. As such, many of the students received successful literacy instruction at the Tier 1 level within the classroom. Specific phonological awareness programs, such as Firm Foundations, were used in kindergarten classes. In grades 2 to 7, the district used the Reading 44 program. The intervention took place three to four times per week for 20-25 minutes each session. Those students who needed further, more intensive support were withdrawn from the class to work with a remedial teacher when necessary.
Students who required Tier 2 and Tier 3 intensive support received it only as long as it was needed. The results of Lipka and Siegel's (2010) longitudinal study indicated that the number of children with reading difficulties decreased significantly as a result of participating in the RTI model. Students who were at risk for reading failure in kindergarten (both ESL students and those who spoke English as their first language) were recorded as reading at average to above-average levels. It was noted that the RTI model, if implemented at the early stages of literacy instruction, is an alternative to expensive assessments and the 'wait to fail' approach.

It is noteworthy that RTI is an integral part of a much larger, over-arching model called the Multi-Tiered System of Supports (MTSS; National Center on Response to Intervention [NCRI], 2010). It is defined as a whole-school, data-driven, prevention-based framework for improving learning outcomes for every student through a layered continuum of evidence-based practices and systems. The terms MTSS and RTI are often used interchangeably. For the purposes of this study, since RTI is the model that is currently being implemented within the school district, I will continue to use the term RTI for the remainder of the thesis. The Center on Response to Intervention refers to RTI as the practice of providing effective instruction and intervention across three tiers to all students (NCRI, 2010). Assessment, progress monitoring, and data-driven decision-making are all components of successful RTI implementation. Originally, the RTI model was used as a procedure to intervene and evaluate eligibility for special education services (Individuals with Disabilities Education Improvement Act [IDEA], 2004). For the purposes of this study, RTI is a model that is implemented as a system of instructional support for those who are demonstrating difficulty in reading.
The RTI model that is implemented within the Study district consists of three tiers (or levels of intensity) of intervention specific to a subject such as reading. The primary prevention level (otherwise known as Tier 1) includes high-quality core instruction early in the school year, with the classroom teacher delivering differentiated instruction to all students. Teachers administer district-approved assessments or screens at this time. Students can be identified as at risk for reading failure if they score below average, or below benchmark, on these measures. The students are monitored over a certain length of time (e.g., eight weeks) while specific teaching strategies are used in the classroom. At the end of this block of time, the teacher administers a district-approved assessment to assess the student's reading level. A small group of non-readers begins to emerge at this time.

Selecting students who are at risk for reading failure is a data-driven process, and teachers can choose several strategies to accomplish it. Researchers have noted that a score below the 25th percentile on a norm-referenced measure can be used to designate risk (Fuchs & Fuchs, 2006). As there is no standard definition of at-risk students, this rule is arbitrary. This can have both positive and negative effects on practice and research. Teachers do not have to abide by specific assessment results to create intervention groups (they have leeway in how they define at-risk groups). Alternatively, the term at-risk can be viewed as stigmatizing certain groups of students, possibly negatively impacting their social/emotional state. Without a clear definition of students at risk, researchers and authors of intervention programs are able to extend their programs to students with differing exceptionalities without depending on the outcomes of screening tools.
Conversely, it is still important to have a standard reference point for what it means to be at risk when communicating among providers, policy makers, and others. Teachers can also use a mark that is below benchmark on a criterion-referenced measure to determine that a student is at risk. Students who are unresponsive, or who score below average, are referred to Tier 2 through conversations held at the first team meeting (as noted above).

The secondary level (or Tier 2) includes evidence-based interventions of moderate intensity. At this tier, the teacher may implement a particular reading intervention program in a small group within, or withdrawn from, the classroom, depending on the availability of appropriately trained staff. This instruction should be explicit and intensive. At the end of the program (and thus of the second tier), the students may be assessed for responsiveness, in a similar manner to Tier 1. The needs of students who remain unresponsive at the end of the second tier are discussed at a second team meeting, and these students are referred to the third tier.

The tertiary prevention level (or Tier 3) includes individualized interventions of increased intensity for students who show minimal response to Tier 2. The student receives this individualized, intensive intervention from the SERT (most likely in a one-to-one or one-to-two setting) in a quiet location. Pre- and posttest data are recorded to document possible gains. Fuchs and Fuchs (2006) describe the nature of the intervention within the RTI model as being individualized to the student, becoming more intensive as the student moves across the tiers. The student's progress is discussed at the third and final team meeting of the school year.
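The two screening decision rules mentioned earlier in this section (a score below the 25th percentile on a norm-referenced measure, or a mark below benchmark on a criterion-referenced measure) can be sketched as follows. The function names and example values are illustrative assumptions; districts set their own thresholds.

```python
# Illustrative sketch of two at-risk screening rules. The 25th-percentile
# cutoff follows Fuchs and Fuchs (2006); the benchmark values are
# hypothetical examples, not district policy.

def at_risk_norm_referenced(percentile_rank: float, cutoff: float = 25.0) -> bool:
    """Designate risk when a norm-referenced score falls below the 25th percentile."""
    return percentile_rank < cutoff

def at_risk_criterion_referenced(score: float, benchmark: float) -> bool:
    """Designate risk when a criterion-referenced score falls below benchmark."""
    return score < benchmark

# A student at the 18th percentile, or scoring 42 against a benchmark of 50,
# would be flagged under the respective rule.
print(at_risk_norm_referenced(18))           # True
print(at_risk_criterion_referenced(42, 50))  # True
```

The sketch makes the arbitrariness noted above visible: the same student can be flagged by one rule and not the other, which is why a shared reference point matters when communicating among providers and policy makers.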
In the Study district, a student who is unresponsive at the end of Tier 3 meets eligibility for a referral to receive services from the Special Education department (e.g., a psych-ed and/or additional services). Either a non-identified IEP is created, or a second round of a different intensive Tier 3 intervention program is implemented.

The current research study used the premise that in order to strengthen the development of literacy skills, early intervention needs to be explicit, intensive, and systematic in nature. It focused on a web-based, free-access reading intervention program, ABRA, which was implemented at Tier 2 within the RTI model in 10 Grade 1 classrooms. Students in the early months of their grade 1 school year who exhibit difficulties with one or all of these literacy skills require immediate intervention. RTI uses screening measures to identify those students who require intervention. Through progress monitoring, students move through the progressive tiers, receiving increasingly intensive, individualized instruction until they potentially meet eligibility requirements for special education services.

1.2 English as a Second Language

In Canada, there is great linguistic diversity among ESL students. Dramatic increases in the number of ESL students in North American schools have been well documented, with even greater growth projected in the coming decades. Language-minority students have been identified as the fastest growing segment of the school population (Wagner, Francis, & Morris, 2005). In Canada in 2011, 20.6% of the general population was born outside the country (Statistics Canada, 2011). Researchers have indicated that the growth in immigration from non-English-speaking countries has posed special challenges to publicly funded school systems across North America. Issues related to education are complex, considering that effective literacy instruction is delivered in the English language (Lovett et al., 2008).
In Ontario schools, over 25% of students are identified as English language learners, a percentage that, according to Statistics Canada, will continue to increase in the years to come (Ontario Ministry of Education [MOE], 2007). Many factors play an important role when considering diversity: from country of origin, to home language, to community involvement in education. Researchers also note two surprising facts: 1) most ESL students in Ontario classrooms are Canadian-born; and 2) Canadian-born ESL students are underperforming academically not only in comparison with their English-speaking peers, but also in comparison with more recently arrived immigrant students (Coelho, 2007; Jang et al., in press; MOE, 2008). These facts warrant the research described in this thesis, which evaluates the efficacy of a technology-based reading program with students whose first language is other than English. In an effort to respond to diversity in the classroom, ABRA contains a 'multi-cultural' component in the story and skills sections of the program. Stories and activities contain images, myths, and meaningful text related to specific cultures.

Studies involving ABRA have taken place in countries outside of Canada. For example, studies in Hong Kong (Cheung, Mak, Abrami, Wade, & Lysenko, 2015) and in Kenya (Abrami, Wade, Lysenko, Marsh, & Gioko, 2014) examined the efficacy of ABRA on primary school children's English reading skills after 14- and 13-week intervention periods, respectively. The results from both countries indicate that students in the treatment condition achieved significantly higher gains in literacy skills (in English) from pre- to posttest. Specifically, Hong Kong students in the treatment condition scored higher on phonics measures than their peers in the control group, and Kenyan students in the treatment condition scored higher on reading comprehension measures than their peers who did not receive the intervention.
In addition, students in the treatment condition in both studies showed a higher level of improvement than the control condition on core subject exams at the end of their school year, including English, Mathematics, Science, and Social Studies.

August and Shanahan (2006), in their review for the National Literacy Panel on Language-Minority Children and Youth, note that language-minority students' literacy development has some parallels with the findings of research on monolingual literacy development. Specifically, the word-level skills of language-minority students (e.g., decoding, spelling) are much more likely to be at levels equal to those of monolingual English speakers. However, this is not the case for text-level skills (e.g., reading comprehension and writing). It is well researched that the first language can and should play an important role in the development of literacy in a second language. Therefore, an explanation for the varied responses of different groups of ESL students toward literacy intervention programs could be students' lack of mastery of specific literacy skills (e.g., decoding and comprehension) in their first language. Effective literacy instruction that includes teaching all necessary components, such as phonemic awareness, decoding, reading fluency, comprehension, and vocabulary, is required for the successful development of literacy skills among language-minority students.

Another finding suggests that literacy instruction with a focus on vocabulary development and oral language usually benefits second-language learners more than first-language learners (August & Shanahan, 2008). Teaching print literacy skills to ESL students is not enough. Students must also have a proficient understanding of oral English to be successful. Vocabulary development through guided, instructional conversations provides the necessary support for oral language development.
This language development contributes to comprehension, and together with effective literacy skill development, reading comprehension is improved. In the Hong Kong study, the researchers note that although there are studies that have investigated the efficacy of ABRA on reading skills for ESL students within their home country, it is worth replicating studies that investigate the efficacy of ABRA on reading skills for ESL students in a non-native English-speaking context. Although this study examines the effectiveness of ABRA on phonological awareness and reading skills for ESL students within an English-speaking environment, the ESL participants immigrated to Canada within the last one to two years and are new to the English language, as was the case for the students in the Cheung et al. (2015) and Abrami et al. (2014) studies. The definition of ESL, now known as ELL, is taken from the English Language Learners Policies and Procedures Guide. It states: English language learners are students in provincially funded English language schools whose first language is a language other than English, or is a variety of English that is significantly different from the variety used for instruction in Ontario's schools, and who may require focused, educational supports to assist them in attaining proficiency in English. These students may be Canadian born or recently arrived from other countries. (MOE, 2008, p. 8) When ESL learners currently enter school in Ontario for the first time, they are expected to learn English at the same time as they are learning the curriculum. Steps to English Proficiency (STEP, 2012) is the current framework for assessing and monitoring the language acquisition and literacy development of English language learners across the Ontario curriculum.
Students must be administered an initial assessment that determines their profile of language proficiency (oral, reading, and writing) and literacy development; this assessment establishes a starting point for instruction and supports programming. The student is then placed on the continua within each of the three strands (oral, reading, and writing). There are six steps within each strand that range from a beginner level (Step 1) towards proficiency (Step 6). Steps 5 and 6 are considered grade appropriate. When ESL learners entered school in Ontario at the time of the study, they were assessed by an ESL teacher and identified with one of four stages, which are defined in Supporting English Language Learners: A practical guide for Ontario educators (MOE, 2008). Students are thought to move through the four stages predictably, progressing in English proficiency at each stage. The stages are designed to be a continuum of language acquisition, and are divided into two levels titled early and later (MOE, 2008, p. 52). Stages are not associated with a particular grade. Each stage contains five components of language acquisition: Writing, Reading, Speaking, Listening, and Orientation. These stages are further explained in section 2.2.2. Carroll (1963) proposed an effective model of academic learning (for both first- and second-language learners) that encompasses three factors related to the time it takes to achieve a learning objective: (a) characteristics inherent to the learner; (b) the time allocated for learning; and (c) the quality of instruction. It is important to establish a baseline, or understand the foundation of skills, for students prior to implementing a literacy program. ESL students have diverse learning and curricular needs and, by virtue of their instructional, experiential, cultural, socioeconomic, and linguistic backgrounds, bring different and oftentimes additional requirements to instruction and curriculum.
Researchers have emphasized that within most native North American homes, early literacy skills (e.g., oral language, familiarity with print, understanding text structures, and the acquisition of knowledge) typically start developing long before students enter school (Lesaux et al., 2008). It is important to note that different cultural and socioeconomic communities may approach language and literacy development differently. The ESL students in this study come from a variety of different cultural and linguistic backgrounds, and any early literacy skills taught prior to their entry to their current school are unknown. Teachers cannot presume what experiences children have had with reading, or with the fundamentals of language and literacy, before they arrive at school. Therefore, reading instruction, as stated previously in this chapter, must ensure that students are taught the necessary literacy skills (e.g., phonological awareness, letter knowledge, phoneme-grapheme relationships) and link these skills in order to read for meaning (e.g., vocabulary skills, world knowledge, comprehension strategies). Acknowledging that ESL students enter a school with unknown prior instruction supports the fact that these two aspects of literacy development are essential for ESL students. ABRA also suggests a developmental progression for learning, similar to that of the Ontario Ministry of Education. The ABRA authors note that good literacy programs must be built on a solid phonological foundation (Learning Tool Kit, 2010). Phonological grounding is a good predictor of future reading success; therefore, these building blocks are the backbone of the ABRA software. Parallel to this software, the Ontario Ministry of Education suggests that the progression of ESL students requires learning basic vocabulary, becoming familiar with the conventions of English print, and beginning to use comprehension strategies to locate main ideas and some details (MOE, 2008, p. 52).
These levels are supported throughout ABRA by accessing the different activity modules (e.g., comprehension). ABRA is also an evidence-based program, and thus satisfies Carroll's third element for effective learning (quality of instruction). ABRA (discussed in the next section) is an approach to learning that encompasses these three factors (characteristics inherent to the learner, time allocated for learning, and quality of instruction) while addressing the components of literacy within the stages of English language acquisition.

1.3 ABRACADABRA Reading Intervention Program

ABRACADABRA is web-based, highly interactive, free-access, evidence-based literacy software that is designed as a literacy instruction tool (Savage & Abrami, 2007). A team of researchers, including Robert Savage, Literacy Theme Leader, and Philip Abrami, Concordia University Research Chair and Director of the Centre for the Study of Learning and Performance, created the program. ABRA is designed to be a literacy tool that can support a classroom's literacy instruction. Teachers have the flexibility to use ABRA as a whole-class instructional literacy tool, for small-group work, for individual remedial work, or for enrichment activities. Teachers may also focus on contextualizing activities by emphasizing the ABRA stories first, since there is no prescribed order for engaging in activities. ABRA consists of 32 instructional activities and 21 stories that in combination create hundreds of challenging tasks for students. As noted in the previous section, studies have been conducted that indicate significant results for students at risk for reading failure who received ABRA. This finding led me to choose this program for the current study. Separate headings below describe the important characteristics of ABRA as noted by researchers.
Further explanations are provided in the appropriate sections to clarify how the current study uses ABRA to contribute to the literature in light of these extensive studies.

1.3.1 Intervention Studies Involving ABRACADABRA

As noted above, ABRA is a web-based reading program that can be utilized as an effective literacy instruction tool for students in the early and primary years. It has been embedded within the practice of several school districts across Canada. Several studies that examine the effectiveness of ABRA in these districts are described below. A recent study (Piquette, Savage, & Abrami, 2014) reported a cluster randomized control trial evaluation of the ABRA literacy program with 107 kindergarten and 96 grade 1 students. There were 12 intervention classes and 12 control classes. The students in the treatment condition received ABRA for 10-12 hours within whole-class instruction. Significant gains with medium effect sizes were reported for the treatment condition in letter-sound knowledge, phonological blending, and word reading. A Pan-Canadian randomized control trial (RCT) study (Savage et al., 2013) involving 1,067 students in grades 1 and 2 across Alberta, Ontario, and Quebec examined the effects of literacy development using ABRA. Each student in the treatment group received 20 hours of ABRA instruction over one full semester of school. The results indicated that the treatment group showed significant improvement in phonological blending, letter-sound knowledge, and phoneme segmentation fluency. A recent meta-analysis examined nine randomized control trials that analyzed the efficacy of ABRA on literacy skills (Abrami, Borokhovski, & Lysenko, 2015). Students from diverse populations were included in the study and were in kindergarten, grade 1, and grade 2. Across the studies, students received between 10 and 32 hours of ABRA within a timeframe of 8 to 16 weeks.
The results of the meta-analysis indicated that students in the treatment conditions had statistically significant results in phonemic awareness, phonics, vocabulary, and listening comprehension compared to students in the control conditions, with small effect sizes. A study conducted in 2006 examined the effects of ABRA on 145 students in kindergarten (Centre for the Study of Learning and Performance [CSLP], 2006). The results indicate that despite a relatively brief intervention dosage of 13 hours per child in total, significant results in the areas of letter-sound knowledge and blending demonstrate that ABRA can give students the necessary building blocks to become good readers. The evidence showed that significant results in students' listening comprehension were also obtained. These patterns of findings suggest ABRA could make a major contribution to children's development of reading skills. Reading development is represented in different models. For example, researchers have determined that exposing students to words that rhyme may assist with reading acquisition (Zorzi, Houghton, & Butterworth, 1998). This model refers to analytic phonics. In contrast, synthetic phonics is a model that refers to the blending of smaller grapheme-to-phoneme units (Ehri, 2005). A study that utilized the benefits of ABRA as an intervention involved 144 students in grade 1 (Savage, Abrami, Hipps, & Deault, 2009). Students were randomly assigned to either an ABRA treatment or a control condition. The ABRA group was further divided into two subgroups: those that received analytic phonics instruction, and those that received synthetic phonics instruction. There were more statistically significant results for the synthetic phonics group (phoneme-based) than the analytic phonics group (rhyme-based).
Improvements in phonological awareness, listening comprehension, and reading comprehension at immediate posttest, and in phonological awareness and reading fluency at delayed posttest (seven months after the intervention), were noted. This finding indicates that an intervention program that focuses first on explicitly teaching students grapheme-to-phoneme relationships, and then on how to blend phonemes and their associated graphemes when pronouncing words, is essential when developing literacy skills. These results are consistent with trends in systematic reviews (National Reading Panel [NRP], 2000; Torgeson et al., 2006) and suggest that children will readily apply their phonological awareness skills in early reading acquisition. The ABRA reading program therefore contributes to improving a range of reading and related skills such as comprehension, reading fluency, and writing. A follow-up study examined the students' literacy scores one year later (Di Stasio, Savage, & Abrami, 2012). Results showed a significant effect for the analytic group on passage comprehension. This indicates that literacy programs that focus on phonics and phoneme awareness may provide advantages in literacy skills for kindergarten students. A recent study explored the effects of ABRA on the reading accuracy and comprehension skills of children diagnosed with Autism Spectrum Disorder (Bailey, Arciuli, & Stancliffe, 2017). Twenty students in grade 1 to grade 6 were matched into treatment and control conditions. The ABRA sessions were administered individually over a 13-week period. Significant gains were made in reading accuracy and comprehension for the treatment condition, with large effect sizes. As noted by the NRP (2001), in order for students to improve their reading, a literacy program must include reading fluency and reading comprehension components.
Given the results of the studies noted above regarding reading fluency and comprehension, and the fact that there continues to be minimal literature focusing on reading fluency and comprehension within technological reading programs, ABRA is a logical choice of reading intervention program for this study.

1.3.2 ABRA Studies Involving Students Who Speak ESL

Researchers have implemented ABRA across numerous school districts in Canada, supporting the literacy development of students in the early and primary years. In more than a dozen studies, ABRA has shown positive results in supporting students with letter-sound knowledge, phonological blending, listening comprehension, and reading comprehension (Savage, Abrami, Hipps, & Deault, 2009). It has also shown positive results with different populations, for example, children with poor attention (Deault et al., 2009) and children in low socioeconomic groups (Comaskey et al., 2009). In addition to these studies, researchers have now recognized the need to support literacy development within the ESL population. One study examined the impact of ABRA on literacy skills in many parts of the world, including Northern Australia (Wolgemuth et al., 2013). This study examined 308 students (ages four to eight) across six schools. Schools with more than 30% Indigenous student enrollment were solicited. Indigenous students received less exposure to ABRA than non-Indigenous students. Despite the lower exposure, Indigenous (and non-Indigenous) students had similar gains (in comparison to the control condition) for most outcomes, and significantly greater gains in phonological awareness skills. Wolgemuth, Abrami, Helmer, Savage, Harper, and Lea (2014) conducted a further study examining the fidelity with which ABRA was implemented and the impact of implementation fidelity measures on student outcomes.
Results of the study report that implementation fidelity measures accounted for between 1.8% and 15% of the variance in intervention students' scores. This indicates that quality of instruction and lesson delivery are of primary concern when implementing a literacy tool such as ABRA. The Kenyan study (Abrami, Wade, Lysenko, Marsh, & Gioko, 2014) examines the effects of ABRA on the literacy development of students in a language-minority environment. It revealed that the 180 students in the treatment group significantly improved their comprehension skills (specifically passage and listening comprehension) and performed significantly better than their peers in control classes on English, Math, Science, and Social Studies end-of-year exams (Abrami, Wade, Lysenko, Marsh, & Gioko, 2014). Further research explored the effects of ABRA on non-native English-speaking students' literacy development in Hong Kong (Cheung et al., 2015). A total of 125 primary students participated in either the treatment or control condition. Each participant in the treatment group received ABRA for a total of 21 hours over the course of 14 weeks. The treatment group scored significantly higher than the control group on phonics measures. Both groups improved similarly on other literacy measures (e.g., Letter-Word Identification, Listening Comprehension, and Passage Comprehension). This research notes the importance of including non-native English-speaking populations. At the time of the study, however, there were no studies that used an ESL group with an ESL control condition. Numerous studies have included classrooms that, by nature of their diverse communities, not only address the needs of language-majority students but also include ESL students in their samples. The study described in this thesis contains an ESL treatment and control condition.
ESL students were automatically eligible to participate in the study if they were enrolled in either Stage 1 or Stage 2 within their school setting. Students at these stages (described in section 2.2.2) have just immigrated to Canada, predominantly speak another language in their home, and have just been introduced to the English language (listening, speaking, and written language). They are experiencing difficulty with reading English, as they are learning phonological awareness and phonics skills. This study therefore builds upon the research studies available at the time by providing an ESL group with a control condition that would determine if ABRA was a significant contributor to the development of reading and related skills within that population.

1.3.3 Activities

ABRA consists of 32 leveled, game-like interactive activities within four segments that increase in difficulty and complexity. The segments are: Sounds, Letters and Words (phonics and phonological awareness activities); Reading (fluency activities); Understanding the Story (comprehension activities); and Writing. Each activity will prompt the student to choose a level of difficulty. Students are said to master a particular activity when the rate of correct response is 90%-100% for three consecutive entries (Savage & Abrami, 2007). When students achieve this, they go on to the next level. There are 21 interactive stories that can be linked to a chosen activity. There are different genres of stories to choose from that support the classroom teacher's focus of instruction (e.g., Folk and Fairy Tales, Fiction, Non-Fiction, Poetry, and a Multicultural section). These choices afford great flexibility to teachers and students and appeal to different levels of interest.
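The mastery rule described above (a correct-response rate of 90%-100% on three consecutive entries) can be sketched as a simple check. The function name, parameters, and score representation below are illustrative assumptions for exposition, not ABRA's actual implementation:

```python
def has_mastered(recent_scores, window=3, threshold=0.90):
    """Return True if the last `window` entries all meet the threshold.

    Hypothetical sketch of the mastery rule (90%-100% correct for
    three consecutive entries); scores are fractions in [0, 1].
    """
    if len(recent_scores) < window:
        return False  # not enough entries yet to judge mastery
    return all(score >= threshold for score in recent_scores[-window:])

# A student whose last three entries were 92%, 95%, and 100% would be
# considered to have mastered the activity and move to the next level.
print(has_mastered([0.70, 0.92, 0.95, 1.00]))  # True
print(has_mastered([0.92, 0.80, 1.00]))        # False
```

Under this reading of the rule, a single below-threshold entry resets the run of consecutive successes, so the student keeps practicing at the same level until three strong entries occur in a row.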
The Sounds, Letters and Words segment offers the following activities: Matching Sounds, Alphabet Song, Same Phoneme, Syllable Counting, Rhyme Matching, Letter ID Bingo, Auditory Segmenting, Word Changing, Auditory Blending, Word Matching, Letter Sound Search, Same Word, Animated Alphabet, Word Counting, Word Families, Blending Train, and Basic Decoding. These activities are designed to strengthen students' phonological awareness skills, and are appropriate for those students who are at the beginning stages of alphabetic skill development. These activities focus on listening skills, auditory discrimination, and letter naming (Learning Tool Kit, 2011). They are associated with stories embedded in the software in order to assist with vocabulary development and to build context. The Reading segment consists of the following activities: High Frequency Words, Expression, Speed, Tracking, and Accuracy. These activities help to support the student's reading fluency by strengthening their ability to decode with little or no effort. They help students concentrate on the content of the story rather than the individual letters and sounds in words. Focusing on the pace of reading helps the student attend to the meaning of a text. It is important to note that although High Frequency Words and Speed are the only activities that provide practice in reading with a concentration on automaticity, they are not timed. When the student chooses the High Frequency Words activity, the student is prompted to choose a story to go with it (in order for specific words from the story to appear in the activity). The activity has two levels of difficulty. The first level contains words that have three letters, and the second level contains words that have four letters. As the word appears on the screen, the student must say it aloud before the word disappears. There is no actual recording of time.
The activity objective for Speed is to have the student read a given text at an appropriate rate. By clicking start and finish keys, the student's reading rate is calculated and categorized by an algorithm as either Too Slow, Just Right, or Too Fast. The time it takes to read the words or sentences is not shared within the activity. Given that reading fluency is an essential component of basic literacy skills and students require practice in order to master this skill, activities such as the ones above could be embedded in other parts of the ABRA program. There are multiple opportunities to promote fluency within the program, considering the number of stories and activities available. The Understanding the Story segment offers the following activities that help build specific skills that contribute to comprehension, or reading for meaning: Prediction, Sequencing, Vocabulary, Story Response, Comprehension Monitoring, Summarizing, Vocabulary (ESL), and Story Elements. Reading for meaning is the goal of learning how to read and is a complex process that depends on the development of these related skills. Specific stories within the software program are associated with these activities and allow the student to practice reading for meaning. Vocabulary (ESL) is an activity that allows the ESL student to match given words with their corresponding pictures and then use them in given sentences. The Writing segment contains two activities, Spelling Words and Spelling Sentences. In these activities, the student applies the phonetic principles learned from the other segments to write words and sentences through game-like activities. The eight different levels of difficulty within the Spelling Words activity challenge the student to write two- to five-phoneme words. As each student was assigned a username and password for the current study, the program automatically recorded data and stored the information within the Assessment module.
With the use of this data, classroom teachers (with their own secure usernames and passwords) and parents were able to log in and stay informed about which activities the students were using and how they were progressing in each of the activities. Specifically, teachers and parents could see a student report, a date selector, activity-specific statistics (e.g., words read correctly on the first try, words read correctly with help, words read incorrectly), activity-specific error reports, and activity insights (this field provides details of skills within an activity and gives suggestions for reinforcing subordinate skills). The Assessment module also contains a professional development section that offers resources and suggestions to teachers for teaching Language Arts in their classrooms. Videos are provided with examples of how to use ABRA in different settings within the classroom. A Communication section is also provided for teachers to communicate with one another regarding their experience with ABRA.

1.3.4 ABRA Dosage

There is a suggested ABRA dosage written in the Teacher Manual Learning Toolkit by the Centre for the Study of Learning and Performance (CSLP). The authors indicate that the dosage is flexible and adaptable to the specific needs of the students in a classroom (CSLP, 2011). It is suggested that the program could be used in place of a one-hour literacy session. A suggested schedule from the authors is as follows:
1. Word-level work (10 minutes), where phonological awareness activities are completed.
2. Text-level work (15 minutes), for students to strengthen their fluency and comprehension skills.
3. Collaborative work (20 minutes), where students explore fluency, comprehension, phonological awareness, and writing activities.
4. Extension work (15 minutes), for students to read favourite stories and respond to comprehension questions provided by the printable resources.
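The suggested session can be totalled as a quick arithmetic check: the four components fill exactly one hour, and two such sessions per week over a 13-week intervention yield roughly 26 hours per student. The dictionary labels below are my own shorthand for the schedule items, not terms from the Toolkit:

```python
# Minutes per component of the suggested one-hour ABRA session (CSLP, 2011).
session_minutes = {
    "word-level work": 10,
    "text-level work": 15,
    "collaborative work": 20,
    "extension work": 15,
}

total = sum(session_minutes.values())
print(total)  # 60 -> the components fill one full literacy session

# Two sessions per week matches a two-hours-per-week pace; over a
# 13-week intervention this comes to 26 hours per student.
weekly_minutes = 2 * total
print(weekly_minutes / 60 * 13)  # 26.0
```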
The recommendation from researchers who conducted studies (Abrami et al., 2014; Cheung et al., 2015; Wolgemuth et al., 2013) undertaken across Canada, Australia, Kenya, and Hong Kong is that ABRA be used for about two hours per week per student for no less than 13 weeks. The meta-analysis (Abrami, Borokhovski, & Lysenko, 2015) reported that, post-intervention, studies following the recommended dosage demonstrated statistically significant improvement for students who received ABRA in four out of six categories: phonics, phonemic awareness, listening comprehension, and vocabulary knowledge. It is noteworthy that two of the above studies used a smaller dosage of ABRA. One of the Canadian studies used ABRA less than the recommended dosage, at three times per week for 13 weeks for a total of 10 hours per child (Comaskey & Savage, 2009). Results of this study demonstrated significant results in blending and pseudo-word reading skills among kindergarten students who received ABRA. In addition, the Kenyan study used ABRA for 1 hour and 30 minutes per week for 13 weeks, for a total of 19 hours and 30 minutes per child (Abrami, Wade, Lysenko, Marsh, & Gioko, 2014). Significant gains in reading comprehension were reported for the ABRA group.

1.4 Definition of Learning Disabilities

Learning disabilities have been difficult to define, and researchers note that controversy has always surrounded the parameters in terms of how the construct is defined and operationalized (Fletcher, Stuebing, Morris, & Lyon, 2013). In Ontario, the Learning Disability Association of Ontario (LDAO) states that learning disabilities refer to a variety of disorders that affect the acquisition, retention, understanding, organization, or use of verbal and/or non-verbal information (LDAO, 2001). Learning disabilities are specific, not global, impairments. Therefore, they are distinct from intellectual disabilities.
This is an important point to consider, as a teacher can therefore distinguish between a student who is having difficulty with reading and a student with severe developmental issues. The teacher will recommend that the student with a reading difficulty participate in an appropriate intervention program, while the student with severe issues can be brought to a team where appropriate learning plans can be discussed. The Ontario Ministry of Education (2014) has recently replaced Policy/Program Memorandum No. 8, titled "Learning Disabilities" (1982), with an updated definition of a LD. It states that "a learning disability is one of a number of neurodevelopmental disorders that persistently and significantly has an impact on the ability to learn" (MOE, 2014). It also explains that the LD has an impact on the following six academic and other skills. First, the LD affects the ability to perceive or process verbal or non-verbal information in an effective manner (for students who have intellectual abilities in at least the average range). Second, the LD results in underachievement that is inconsistent with intellectual abilities. Third, the LD results in difficulties in the development and use of mathematical, writing, or reading skills, and learning skills. Fourth, the LD may be associated with difficulties with one or more cognitive processes, such as phonological processing; memory and attention; processing speed; perceptual-motor processing; visual-spatial processing; and executive functions. Fifth, the LD may be associated with social problems. Sixth, the LD is not the result of a lack of acuity in hearing or vision, intellectual disabilities, socio-economic factors, cultural differences, lack of language proficiency, lack of effort, or gaps in school attendance. Students who are at risk for reading failure in grade 1, along with ESL students in Stage 1 or 2, experience ongoing issues with reading.
An effective reading intervention program is essential in order to provide an opportunity for both of these populations to strengthen and demonstrate their reading skills. If students in the at-risk group are unsuccessful at Tier 2, then there are possibly learning challenges that can be identified in a psych-ed assessment. This assessment can take place within two years of entering Tier 1 of the RTI model. For the ESL group, psychometricians in this school district prefer to provide the students with five years (Stages 1-4 plus an additional year) of literacy growth prior to administering a psych-ed assessment. This provides ESL students with the opportunity to strengthen and practice their literacy skills with ESL teachers.

1.4.1 Identification Process of Learning Disabilities

It is typically the case that students will begin to read in grade 1. They are introduced to a routine of learning numbers, letters of the alphabet, days of the week, and the names of colours and shapes. At this time, difficulties may emerge. Observations of a child experiencing difficulty may include delayed speech, pronunciation problems, poor concentration, or simply difficulty following directions. If difficulties persist, these students often are categorized in Tier 1 as being at risk for reading failure. Students will go through the Response to Intervention model, receiving targeted instruction at every tier. The aim of this study is to examine the effectiveness of a Tier 2 intervention program, ABRA, on students at risk for reading failure. This is a crucial time for students to demonstrate sufficient growth in their reading skills, as an unsuccessful outcome may result in eligibility for special education services. Reading is still a challenge for many children (Allington & Walmsley, 1995), and as noted earlier, there are few studies that examine reading programs for ESL students.
This study recognizes the need to address an ESL population with an ESL control group in order to determine growth in the students' reading and related skills. As noted in section 1.2, students are eligible for this study if they are in Stage 1 or Stage 2. After Stages 3 and 4, it is typically encouraged that ESL students continue to receive one more year of literacy support from an ESL teacher prior to a recommendation for a psych-ed assessment. The reason for this postponement is to ensure that any results are not a consequence of ESL-related factors. According to Faggella-Luby and Deshler (2008), more than 40% of children do not read at grade level. This is the first indication that a learning disability (LD) may be present. Researchers note that the LD construct is difficult to define (Fletcher et al., 2013). As well, the identification process for LDs varies depending on what type of model is being used (e.g., RTI, or the IQ-achievement discrepancy), which often contributes to definitional confusion. Since this study examines a Tier 2 intervention program with two separate groups (at-risk and ESL) within an RTI model that has the potential to lead to a diagnosis of a LD, a definition and an explanation of the identification process are noted below. In this study, students in both groups (at-risk and ESL) participated in a Tier 2 intervention program within their schools. If students move through the RTI model and are eligible for special education services, they are recommended to receive a psych-ed assessment. The psychometricians in the study district use the IQ-achievement discrepancy model in order to diagnose LDs. It was noted by Bateman (1965) that in order for a student to be recommended for an identification of a LD, "an educationally significant discrepancy must exist between estimated intellectual potential and actual level of performance related to basic disorders in the learning processes" (p. 220).
For decades the IQ-achievement discrepancy model has served as the fundamental conceptualization of LD, although researchers have provided evidence that it is flawed. This traditional way of identifying a student with an LD is inadequate for the following reasons. First, researchers have noted that the concept of intelligence (IQ) is not necessary to define a learning disability (Siegel, 1989). Stanovich (1988) notes that using the IQ-achievement discrepancy can actually cause students with reading disabilities to go unidentified. This finding has led many schools to adopt the RTI model, as it is possible to determine whether a student is experiencing difficulty with reading by listening to them read, or by using a screening tool. The school district in this study, although it still relies on the IQ-achievement discrepancy in order to diagnose an LD, also implements the RTI model, as it acknowledges the importance of providing immediate, effective, targeted intervention to students at risk for reading failure. This study also focuses on a separate ESL group that equally requires an essential reading program at the early stages of reading development. Currently, school systems may have a wait-list of up to two years for a psych-ed assessment that determines the IQ-achievement discrepancy. The demand for such an assessment leaves no other option for the student but to receive regular classroom instruction until an identification is potentially made. This 'wait-to-fail' approach takes up valuable time for essential intervention. Students within the at-risk group demonstrated below-benchmark ability in reading, and therefore were in need of early intervention. The ESL students within Stage 1 or 2 also demonstrated below-benchmark reading levels and required immediate, targeted phonological awareness instruction.
As noted above, if it is logical to disregard the IQ-achievement discrepancy, then a psych-ed assessment is not required to identify a reading disability, thus saving the student valuable time for intervention. Research indicates that the field is moving from conceptualizing LD as a neurological disorder, or a cognitive discrepancy, to using an instructional model as outlined in RTI (Solis et al., 2014) to address particular learning needs in this population. The study district continues to implement a modified RTI model (one that includes the use of an IQ-achievement model) addressing the needs of at-risk and ESL learners.

1.5 Multimedia and Literacy

It has been documented by many researchers that computer-based technologies can provide essential tools for learning (Bereiter, 2003; Dede, 1996; Mayer, 2003), yet teachers are frequently unaware of which computer-based literacy instruction programs, or computer-assisted instruction (CAI), can support early literacy attainment with a high degree of effectiveness (Johnston, McDonnell, & Hawken, 2008). Young people are growing up "digitally" with enhanced access to knowledge and ideas. Teachers therefore take it to be their responsibility to give all students access to technology in a safe, effective manner (Scardamalia & Bereiter, 2005). However, they require professional development in the area of evidence-based technological literacy programs. Abrami and colleagues (2008) introduced a software package designed to facilitate learning within the classroom. They describe the technological intervention software, ABRA, which allows learners and educators to record student progress while interacting with the program. Progress monitoring allows teachers to evaluate academic performance (Abrami, Savage, Wade, Hipps, & Lopez, 2008).
Mayer (2003) has noted other advantages of using multimedia for learning, including increased acquisition, enhanced processing, and improved long-term retention and recall, attesting to the potential of technology-based literacy programs. With an increase in user-friendly software, accessibility, and free-access web-based literacy programs, using technology in the classroom still warrants caution and provokes questions, such as: "Do students acquire the necessary literacy skills as effectively from a computer as they would if the intervention were teacher-administered?" Meta-analyses that have reviewed the effectiveness of technology and literacy programs suggest that the research base is mixed in terms of findings (Ehri et al., 2001; Torgesen & Zhu, 2003). A recent study evaluated the effects of ABRA as a Tier 2 literacy instruction tool delivered to students in small groups, and compared the results to struggling readers who received ABRA on a one-to-one basis in 33 high-poverty schools (Chambers, Slavin, Madden, & Abrami, 2011). Results showed that students in the small groups of six outperformed the students who received ABRA instruction on a one-to-one basis on all reading measures, indicating that a computer-assisted, small-group tutoring program may be an effective way to serve more struggling readers. A recent review (Tamim, Bernard, Borokhovski, Abrami, & Schmid, 2011) notes that technology used in the classroom can result in positive learning outcomes if it is well designed and implemented (Schmid et al., 2014). Further implications of these reviews argue strongly that there is an urgent need for basic evidence from well-designed, randomized-control studies of the use of technology to facilitate learning to read before technology is used in the classroom (Savage et al., 2008). In one study, Abrami and others (2008) implemented ABRA, a web-based, free-access, balanced literacy program, in 13 grade 1 classrooms in Canadian schools.
The results indicate that ABRA produced significant effect sizes for change in literacy skills among grade 1 students (Abrami et al., 2008; Hipps et al., 2005). The need for further evidence of significant literacy gains among ESL students through the use of cost-effective technology with an evidence-based, web-based intervention program (ABRA) motivated the current study. In addition to researchers noting advantages of using multimedia for learning in the classroom, Savage et al. (2010) explored how teacher variations in the use of ABRA, a web-based tool, affected literacy growth among grade 1 students. Sixty students received ABRA for 16 hours in total. The three classrooms delivered ABRA in three different ways: Entry, Adoption, and Adaptation, in alignment with the technology integration model (1997). Although significant gains were reported for all measures among students in the treatment condition, the largest difference was reported for the students in the Adaptation classroom. This indicates that teachers who link the technology content (literacy skills) to wider learning themes in the classroom may produce larger gains in literacy skills. The research on technology and students with LD has addressed a range of questions, yet few studies focus on specific literacy skills (e.g., phonological awareness). One study examined the effectiveness of CAI on students with initially low phonological awareness skills. The results indicated that these students did not make much progress after using CAI (Olson & Wise, 1992). The hypothesis was that students with more severe reading disabilities would require additional targeted instruction in phonological awareness and decoding in order to benefit from CAI. Olson and Wise then conducted a second study with two groups of primary-aged students.
The first group (PA: Phonological Analysis) received seven to nine hours of phonological awareness and phonics instruction, and 20 hours of computer time divided between CAI on word skills and supported reading of texts. The second group (CS: Comprehension Strategies) received equal amounts of teacher instruction and computer time, but without phonological training. Students in the phonological awareness group made greater progress than those in the CS group on all phonological measures. There were no differences on comprehension measures (Wise et al., 2000). This study suggests that the difference in phonological awareness outcomes is due to the extra teacher-directed instruction. CAI in this study was used as a supplementary tool to support the learning of literacy skills. Wise suggested that combining specific computer software with increased targeted teaching instruction will yield better results than less instruction. Bus et al. (2006) noted that no research on the effectiveness of CAI with struggling readers was found. The literature review on ABRA in section 1.3.1 notes that it can be used as a stand-alone literacy tool (unsupported by teacher-led instruction); however, many studies on ABRA describe its use as an intervention tool implemented in support of regular classroom literacy instruction. The current study used ABRA, based on its significant research findings, as an intervention tool for students at risk for reading failure and for ESL students. As many teachers and parents know, there is a growing deployment of information and communication technologies (ICT) in schools (Chambers et al., 2008; Cuban, 2001). The study district encourages schools to fundraise for technology in order to support the growing demand for technology in classrooms. The concern is two-fold. First, can technology support literacy growth among students at risk for reading failure?
Second, professional development for teachers regarding different types of literacy programs and their implementation is essential. The research of Wise and Olson (2000) found that: a) the use of computers to support reading of connected text appears to have some effect on the development of reading skills; b) combining CAI with teacher instruction is more effective than CAI alone; and c) students with low phonological awareness receiving CAI may require additional targeted instruction in those skills. To address the concern of whether technology can aid in literacy instruction, many researchers have examined and reviewed studies that have resulted in generally small positive effect sizes for the use of ICT on literacy (Blok et al., 2002; Cheung & Slavin, 2012; Ehri et al., 2001; MacArthur et al., 2001). Sandvik and Van Daal (2013) reviewed research in this area and concluded that interventions using ICT for literacy yielded medium positive effect sizes. Teachers' preparation to teach with technology is also at issue. To address the concern of professional development among teachers, Archer et al. (2014) conducted a meta-analysis of technology and found that when teachers are trained effectively (e.g., initial training, delayed re-training, and classroom support), effect sizes were as high as 0.60. When teachers received only one day of professional development, or none at all, the effect sizes were close to zero. This suggests that professional development is essential for the successful implementation of technology to support students' literacy development. Professional development was an ongoing aspect of the current intervention study. Guided training on ABRA was provided to all facilitators at the beginning of the study. Throughout the intervention, frequent visits were made to the schools, and communication was sustained in different ways (e.g., email, phone, or visits).
Since ABRA has reported numerous positive effect sizes that indicate effective literacy instruction, I chose to further examine its effectiveness with two separate groups of students: those at risk for reading failure, and ESL students.

1.6 Current Study

ABRACADABRA was used as the intervention program in the current study. In three of the four participating schools, it was administered in addition to the regular 40-minute literacy block, to further support the needs of students at risk for reading failure and of ESL students separately. In the fourth school, ABRA was administered to participants for 20 minutes within their regular 40-minute literacy block. The purpose of the current study was to assess the effects of an interactive, web-based, free-access reading intervention program, ABRA, on grade 1 students who are at risk for reading failure and on ESL students. Specifically, the study examined how the program affected the phonological awareness skills of at-risk grade 1 students and ESL students. The study evaluated the following research questions:

1) Does the ABRA reading program improve the phonics skills, phonological awareness skills, and word reading skills of students at risk for reading failure in the treatment condition relative to students in a control condition exposed to regular literacy instruction only? Specific hypothesis: grade 1 students at risk for reading failure participating in the ABRA reading intervention program will show significantly improved phonics, phonological awareness, and word reading skills when compared to students in the control condition.

2) What is the relative efficacy of the ABRA reading program for ESL students' phonics, phonological awareness, and word reading skills, compared to ESL students receiving regular literacy instruction only?
Specific hypothesis: ESL students participating in the ABRA reading intervention program will show significantly improved phonics, phonological awareness, and word reading skills when compared to ESL students in the control condition, who receive regular literacy instruction only.

3) To what extent do grade 1 and ESL teachers, parents, and students perceive the ABRA reading program to be an appropriate technological reading program?

Chapter 2: Method

2.1 Setting

This study took place in a city within a Catholic school district in Southwestern Ontario in 2010/11. At the time of the study there were 42 elementary and eight secondary schools in the district, enrolling over 30,000 students. Demographic information for the city was taken from the 2011 Ontario Census. It revealed that the city's total population was 182,520. Of that, 13% of households had a single parent, and 5.7% of families lived with relatives. There were 7,300 people who immigrated to this city between 2006 and 2011. In 2011, the top three languages spoken in the home were English (83%), multiple languages not including English (4%), and English with a non-official language (4%; Statistics Canada, 2011). Although the district is culturally and linguistically diverse, some areas are more diverse than others. At the time of the study the growth of diverse cultures was evident within this school district. I therefore found it essential to include a sample of ESL students in the study.

2.2 Participants

Data for this study were collected from a total of 104 students over a 16-week period. These students were enrolled in 10 grade 1 classrooms across four elementary schools in the school district described above. There were two groups involved in the study: students at risk for reading failure, and ESL students. Within each group there was a treatment and a control condition.
Each classroom included students from both conditions. Students were characterized as being at risk for reading failure in grade 1 when they performed below benchmark on the Dynamic Indicators of Basic Early Literacy Skills (DIBELS; Good & Kaminski, 1996). Benchmark qualifications are explained below in section 2.3. ESL students were eligible for participation if they were enrolled in Stage 1 or 2 (see section 1.2) of the school's ESL program. They were not administered the DIBELS screening tool. See Table 2.1.

2.2.1 At Risk

Parents of one hundred twenty-four students provided consent for their children to participate in the study (88%). Forty-one students were excluded from the study after the DIBELS screening process, as they performed at or above the DIBELS benchmark reading level. One student dropped out of the study within the first month. The at-risk group had a total sample size of 82 participants (41 in the treatment condition, and 41 in the control condition). The students completed a 14-subtest screening tool called the Reading Readiness Screening Tool (RRST) from the Learning Disability Association of Alberta (LDAA). Each subtest had a different maximum score. Students were placed in either the treatment or control condition based on similar individual scores. For example, if student 1 and student 2 scored within two points of each other on the Phoneme Deletion subtest, one was assigned to the treatment condition and the other to the control condition. The purpose of this method was to create groups that were as equally matched as possible. The mean age of the students in the at-risk group at pretest was 76.51 months (with a standard deviation of 3.88 months and a range of 66.90 to 87.70 months). There were 46 boys and 36 girls. The at-risk group included students who were receiving special education services.
One student was diagnosed with Communication: Autism Spectrum Disorder (PDD-NOS), and one student had a diagnosis of Behaviour/Severe Mental Health Disorder.

Table 2.1
Distribution of Participants by Condition for the At-Risk Group

School   Classroom   Control   Treatment
1        1           3         3
1        2           3         5
2        3           4         2
2        4           5         4
2        5           5         4
2        6           1         2
3        7           8         9
4        8           3         3
4        9           5         5
4        10          4         4
Total                41        41

N = 82. Cell values are the numbers of students in each classroom and condition.

2.2.2 ESL

As noted in section 1.2, the Steps to English Proficiency (STEP) model is currently being used with ESL students across Ontario. At the time of the study, schools across Ontario were administering an ESL program that used Stages as part of its framework. ESL students in their school's Stage 1 or 2 ESL program were automatically eligible to participate in the study. Twenty-two ESL students were invited to participate, and parents of all 22 students gave consent for their children to participate. The students were compared and placed in one of two conditions (treatment or control) using the RRST, similar to the at-risk group. Eleven students were selected for the treatment condition and eleven for the control condition. According to the Ontario Student Record, these students entered Stage 1 or 2 of the ESL program within their school. Their first languages include Spanish, Italian, Arabic, Polish, Urdu, Romanian, and Russian (see Table 2.2).
Table 2.2
Distribution of Participants by Condition for the ESL Group

School   Control   Treatment
1        4         4
2        3         5
3        0         0
4        4         2
Total    11        11

N = 22

At Stage 1 (also called the early beginner and later beginner level), students are becoming familiar with the sounds, rhythms, and patterns of English. In reading specifically, the student will begin to recognize the alphabet in print, know the direction of English print, and begin to use phonetic clues, context clues, and sight recognition to understand simple texts (e.g., pattern books, chart stories, songs, chants, and rhymes). Students at this stage require intense phonological awareness instruction. Students will recognize familiar words and repeated phrases in poems and stories, and they will participate in shared reading activities. They will show some limited comprehension of language and require visual aids for understanding. At this stage they will respond non-verbally or with single words or short phrases (MOE, 2008, p. 52). Upon the improvement of these skills, students will move to Stage 2 (also called the early intermediate and later intermediate level), where they listen with greater understanding and use everyday expressions independently. The student's confidence is growing, while they use relevant language appropriately. At this stage in reading, students continue to develop their phonological awareness skills and use strategies to assist in deriving meaning from text (e.g., predicting, phonics, and word families). They understand familiar vocabulary in stories, poems, and computer text. The students also select main ideas in short, familiar passages from a variety of genres of stories and poems. The development of phonics and phonological awareness skills continues to be the main concern at this stage.
Students who have significant gaps in their learning because of limited prior schooling will require different kinds of support than other students (MOE, 2008, p. 52). When ESL students acquire the skills necessary to move to Stage 3 (also called the later intermediate, early advanced level), they are using English independently in most contexts. Students are speaking with less hesitation and demonstrate increasing understanding. When speaking, they produce longer phrases and sentences. Students at this stage are expected to participate more fully in academic activities. They are encouraged to use newly acquired vocabulary when retelling, describing, or explaining a story or concept taught. Students are using a variety of reading strategies to comprehend grade-level texts, with consistent support and scaffolding. Students are expanding their skills and confidence in writing, using an expanded vocabulary for a variety of purposes (MOE, 2008, p. 53). Students in Stage 4 (also called the later advanced level) are using English with a proficiency that is approaching that of their English-speaking peers for most social and academic purposes. They are developing a level of reading comprehension approaching that of most English-speaking peers with grade-appropriate texts. Students are developing a level of competence in English writing using a range of grade-appropriate stylistic elements and organizational formats. They are also independently using a full range of age-appropriate grammatical structures to express their ideas with few errors (MOE, 2008, p. 53). The mean age of the students in the ESL group at pretest was 97.69 months (with a standard deviation of 16.42 months and a range of 71.28 to 126 months). There were 10 boys and 12 girls. Students in the ESL group spoke the following languages outside of school: Spanish (Mexico and Colombia); Italian; Arabic (Egypt and Iraq); Polish; Hindi (India); Urdu (Pakistan); Romanian; and Russian.
Information was recorded from a Parent Language Questionnaire (see Appendix D).

2.3 Measures

Students from each grade 1 classroom were eligible to participate in the screening process used to identify students for the at-risk group. Two subtests from the DIBELS measure, Nonsense Word Fluency (NWF) and Phoneme Segmentation Fluency (PSF), were used as the screen. Results from the NWF and PSF subtests determined a student's eligibility to participate in the study within the at-risk group. For the remainder of the thesis, I will refer to NWF and PSF when referring to the screening assessment for students characterized as at risk in their reading development. Students in the at-risk group who met eligibility were administered 14 subtests of the RRST from the LDAA, in order to be placed into either the treatment or control condition (the matching process is discussed later in this chapter).

2.3.1 Measures Used to Screen for Eligibility to Participate in the Study for the At-Risk Group

The NWF and PSF measures of the DIBELS, 6th edition, were used to assess students' literacy skills. Nonsense Word Fluency assesses the student's understanding of the alphabetic principle (Good & Kaminski, 1996). It is a standardized, individually administered test that assesses students' ability to blend letters into words in which letters represent their most common sounds. It is administered to students from mid- to end-of-kindergarten through the beginning of second grade. To administer NWF, the examiner presents a piece of paper with randomly ordered non-real words. The student is then asked to sound out each word, or say the word as a whole. For example, if the written word is "vaj," the student could say /v/ /a/ /j/ or say the whole word /vaj/. The student scores three correct letter sounds with either type of response; if they also read the whole word, they receive one additional point. The test is timed.
The student has one minute to read as many pseudowords as possible (see Appendix A.2). The benchmark goal is 27 correct letter sounds, including one whole word read, per minute by early first grade. If a student scored below benchmark on the NWF, they were considered at risk and therefore eligible for the study. Phoneme Segmentation Fluency, a second standardized, individually administered subtest, was administered to assess the student's phonological awareness skills. Specifically, it assesses the student's ability to segment three- and four-phoneme words into their individual phonemes (e.g., the examiner says "hall," and the student says "/h/ /a/ /l/"). The student segments as many words as he or she can within one minute. The number of correct sound segments produced is the final score (see Appendix A.1). This test is administered to students from mid-kindergarten through to the end of grade 1. The benchmark goal is 45 correct phonemes per minute at the beginning of grade 1. Participants who scored below benchmark on this test (45 correct phonemes per minute) or on the NWF test were eligible for the study.

2.3.2 Measures Used to Assign Participants to Experimental and Control Conditions for both the At-Risk and ESL Groups

After each student in the at-risk category was screened using the NWF and PSF measures and determined eligible for participation in the study, they were administered the RRST. The RRST was also administered to the ESL group. This tool, from the LDAA, provides early learning and primary division teachers with an easily accessible, affordable measure to screen students for reading readiness skills. This screening tool consists of 14 subtests that assess phonological awareness skills. As seen in Table 2.3, each subtest has a maximum score, for a total score of 123 points for every student.
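In practical terms, the eligibility rule described in section 2.3.1 reduces to a single below-benchmark check on either subtest. The sketch below is a hypothetical illustration of that logic only; the function and variable names are mine, not part of DIBELS, and the benchmark values are the early grade 1 goals cited above.

```python
# Hypothetical sketch of the at-risk eligibility screen (section 2.3.1).
# Benchmarks follow the DIBELS, 6th edition goals cited in the text.
NWF_BENCHMARK = 27  # correct letter sounds per minute, early grade 1
PSF_BENCHMARK = 45  # correct phonemes per minute, early grade 1

def is_at_risk(nwf_score: int, psf_score: int) -> bool:
    """Scoring below benchmark on either subtest makes a student eligible."""
    return nwf_score < NWF_BENCHMARK or psf_score < PSF_BENCHMARK
```

For example, a student who meets the NWF benchmark (say, 30 correct letter sounds) but scores 40 on PSF would still be eligible, since eligibility requires meeting benchmark on both subtests.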
Administering the RRST to students in the same classroom who were eligible for the study allowed the researcher to distribute them across the treatment and control conditions according to the similarity of their reading scores. The RRST was administered individually to students in a quiet setting, one classroom at a time. If there were numerous students with similar total scores (within one or two points), the students were matched according to the 14 subtests. The intention was to divide students with similar literacy profiles between the treatment and control conditions, enabling as fair a comparison of outcomes as possible. If equal scores were not possible, then a pair of students who scored within two points of each other on the total score was split between the treatment and control conditions. See Table 2.3. The Letter Identification subtest determines whether a student recognizes all of the letters of the alphabet in print. The facilitator shows the student all of the letters of the alphabet in print (not in order) and says, "I am going to point to some letters. Tell me the name of each letter." The subtest total score is out of 24. The Sound Identification subtest requires the student to give the sounds for all of the letters of the alphabet. The facilitator shows the student all of the letters of the alphabet and says, "I am going to point to some letters. Tell me the sound each letter makes." The subtest total score is out of 24. For the Word Recognition subtest, the student reads words from left to right across a page. The facilitator says, "I am going to show you some words. Read as many of the words as you can" (e.g., cat, dog, red, book). The subtest total score is out of 15. On the Word Detection subtest, the facilitator says to the student, "I'm going to say a sentence. Clap one time for each word I say, like this." For example, the facilitator says, "the – ball – is – red," which should result in four claps.
The total score is out of five. The Syllable Detection subtest requires the student to tap on the table for each word part the facilitator says. For example, the word 'December' produces three taps (De – cem – ber). The subtest total score is out of five. For the Rhyme Detection subtest, the facilitator points to a series of four pictures (e.g., boat, foot, bike, and coat) and says, "Here is a picture of a boat. Which word rhymes with boat? Foot, bike, or coat?" There are ten scenarios, for a subtest total score of 10. The Rhyme Generation subtest requires the student to say a word that rhymes with a given word. For example, the facilitator says, "I'm going to say a word. I want you to say a word that rhymes with it. You can make up a word if you want to. Tell me a word that rhymes with cat." The subtest total score is out of five. The Syllable Blending subtest requires the student to say the entire word when the facilitator says a word broken into separate syllables. For example, the facilitator will say, "I will say the parts of a word. You guess what the word is. What word is this? Milk – shake." The subtest total score is out of five. The Phoneme Blending subtest requires the student to say the whole word when the facilitator says the individual sounds of a word. For example, the facilitator will say, "/t – o – p/. What word is this?" The subtest total score is out of five. The Syllable Deletion subtest requires the student to say a word without one of its parts. For example, the facilitator says, "Say baseball. Now say baseball again, but don't say ball." The subtest total score is out of five. The Phoneme Deletion subtest prompts the student to say a word (for example, 'mat'), and then asks the student to say it again without a specific sound (/m/); the answer is 'at'. The subtest total score is out of five.
The Initial Sound Isolation subtest measures the student's ability to identify a specific sound in a word and say it aloud. For example, the facilitator says, "Tell me the beginning sound in the word bear" (/b/). The subtest total score is out of five. The following two subtests, Final Sound Isolation and Medial Sound Isolation, measure the student's ability to identify and say the final and middle sounds of a word. Each is out of five. Table 2.3 below lists the subtests and their maximum scores, for a total score of 123.

Table 2.3
Reading Readiness Screening Tool Subtests

Name of Subtest           Total Score
Letter Identification         24
Sound Identification          24
Word Recognition              15
Word Detection                 5
Syllable Detection             5
Rhyme Detection               10
Rhyme Generation               5
Syllable Blending              5
Phoneme Blending               5
Syllable Deletion              5
Phoneme Deletion               5
Initial Sound Isolation        5
Final Sound Isolation          5
Medial Sound Isolation         5

2.3.3 Pre- and Post- Standardized Assessments

The Woodcock-Johnson III Tests of Achievement (WJ III; Woodcock, McGrew, & Mather, 2001) consists of 12 subtests and 10 extended subtests. Two subtests, Word Attack and Letter-Word Identification, were used as pre- and post-test measures. Word Attack: The Word Attack subtest is an individually administered assessment that measures grapheme-to-phoneme translation of pseudowords. It measures the student's ability to recognize printed forms and to apply knowledge of the correspondence between graphemes and sounds. The student earned one point per word pronounced correctly. This task had Cronbach's alphas of .86 and .90 for the at-risk and ESL groups, respectively. Letter-Word Identification: The Letter-Word Identification subtest measures the student's ability to identify letters of the alphabet and sight word identification skills. One point was earned per word pronounced correctly. This task had Cronbach's alphas of .90 and .94 for the at-risk and ESL groups, respectively.
The Comprehensive Test of Phonological Processing (CTOPP; Wagner, Torgesen, & Rashotte, 1999) is a standardized assessment that measures phonological abilities.

Elision: The Elision subtest is an individually administered 20-item test that measures the ability to say what is left of a word after removing designated sounds (e.g., "Say popcorn. Now say popcorn without saying corn" = pop). The student then omitted sounds from words to reveal new words (e.g., "Say tan. Now say tan without saying /t/" = an). For every correct response, a score of one point was recorded. This task had a Cronbach's alpha of .80 and .85 for the at-risk and ESL groups, respectively.

Blending: The Blending subtest is an individually administered 20-item test that measures the ability to combine sounds to form words. The student earned one point if, after listening to the parts of a word, he or she blended the parts together to make a whole word (e.g., "num-ber" = number). This task had a Cronbach's alpha of .80 and .84 for the at-risk and ESL groups, respectively.

The Test of Word Reading Efficiency (TOWRE; Torgesen, Wagner, & Rashotte, 1999) is a standardized assessment that measures a student's ability to pronounce printed words and pseudowords accurately and fluently.

Sight Word Efficiency: The Sight Word Efficiency subtest assesses the number of real printed words that can be accurately identified within 45 seconds. There were 104 listed words. This task had a Cronbach's alpha of .94 and .96 for the at-risk and ESL groups, respectively.

Phonetic Decoding Efficiency: The Phonetic Decoding Efficiency subtest measures the number of pronounceable printed non-words that can be accurately decoded within 45 seconds. This task had 63 listed non-words and a Cronbach's alpha of .89 and .92 for the at-risk and ESL groups, respectively.
2.3.4 Teacher, Parent, Student Survey and Language Questionnaire

In order to address the study's third research question (To what extent do grade 1 and ESL teachers, parents, and students perceive the ABRA reading program to be an appropriate technological reading program?), a questionnaire was provided to all parents, teachers, and students after the 10-week intervention period. It was used to determine their perceptions of the social validity of ABRA. The researcher created the questionnaires (see Appendix C).

A seven-item questionnaire assessed classroom teachers' perceptions of the effectiveness of ABRA for both the at-risk and ESL groups. Each item was rated on a five-point Likert-type scale (e.g., 1 = Not at all; 5 = Very). To gauge social validity, teachers rated the importance of the phonological awareness and word reading skills learned by the students, the importance of learning to use a computer, the effectiveness of a reading intervention program delivered by computer, whether they noticed a difference in the students' reading skills, whether they noticed an improvement in the students' interest in reading, and, in their personal opinion, the overall effectiveness of ABRA.

A six-item questionnaire assessed parents' perceptions of the efficacy of ABRA within the at-risk group. Parents of participants within the ESL group were given a questionnaire with two open-ended questions, post-intervention. Each item of the at-risk group's questionnaire was rated on a five-point Likert-type scale. To gauge social validity, parents rated the importance of phonics when learning to read, the importance of learning to use a computer, whether they noticed a difference in their child's reading skills after the child used a computer-based reading program, whether they thought ABRA was user-friendly, and, overall, whether ABRA was an effective computer-based reading program.
A student questionnaire was administered to the participants in the treatment condition and assessed the students' perceptions of the efficacy of ABRA. On a five-point Likert-type scale (e.g., 1 = Not At All; 5 = A Whole Lot), students rated their perceptions on five items. The first item asked the student to think about sounds and words, and whether they helped the student become a better reader. The second asked the student to think about computers, and whether they helped the student become a better reader. The third asked whether ABRA helped the student become a better reader. The fourth asked whether, overall, the student enjoyed participating in ABRA, and the final item asked whether the student would like to continue using the ABRA program in the future.

Parents of the ESL students in both treatment and control conditions completed a language questionnaire at the beginning of the 10-week intervention period that included 13 items within three categories: siblings, language, and birthplace.

Questions within the first category (Siblings) included:
Question 1) What is the name and age of the student's first sibling?
Question 2) What is the name and age of the student's second sibling?

Questions within the second category (Language) included:
Question 3) What are the languages spoken in the home?
Question 4) What language is spoken predominantly to your child/children in the home?
Question 5) What other language(s) does the father know? Is it / are they spoken in the home?
Question 6) What other language(s) does the mother know? Is it / are they spoken in the home?
Question 7) What other language(s) does sibling 1, 2, or 3 know? Is it / are they spoken in the home?
Question 8) Is your child learning a new language at home? What is it?
Question 9) Do your child's grandparents live in the home?
Question 10) If your child's grandparents live in the home, do they speak another language in the home?
Questions within the third category (Birthplace) included:
Question 11) Was your child born outside of Canada?
Question 12) In what country was your child born?
Question 13) Does your child currently receive ESL instruction from an ESL teacher in the school?

2.4 Procedure

2.4.1 Consent and Assent

First, approval to conduct the current study was granted by the University of British Columbia's Behavioural Research Ethics Board. Second, approval was granted by the ethics committee within the selected school district. The Superintendent of Special Education Services identified seven schools as likely candidates for participation and provided authorization to invite them. The researcher invited all seven schools to participate; four accepted and three declined. Approval was also provided by the principals of the four participating schools. None of the schools had previously participated in the ABRA program. All participating principals, facilitators, teachers, and parents provided written consent for participation.

2.4.2 Facilitation and Training

I, along with one other facilitator, attended a training session on The Learning Toolkit (LTK) in Quebec, hosted by one of ABRA's program team members, prior to implementing the ABRA program in this study. The LTK is a suite of software tools developed for educators by the Centre for the Study of Learning and Performance (CSLP). ABRA is one of the three free-access software programs offered and was the focus of one day of the two-day training session; the researcher and facilitator attended that day of the seminar. A Learning Toolkit Teacher Manual binder was provided to all attendees and was referred to throughout the seminar. The binder consisted of specific information on all three software programs, including ABRA.
Contents within the ABRA section listed details of the structure of the program, the components of all activities, the story genres and themes, the four segments, the professional development module, printable resources, and user requirements. The training reviewed literature that supports the need for an evidence-based literacy program, significant results in literacy scores from current studies that used ABRA as a reading program, and details on each segment. The training session provided time for inquiry and offered attendees the opportunity to interact with each activity for a hands-on learning experience. The seminar also reviewed the process of uploading the ABRA program to the district's server by the district's Information Technology department, and offered contact information in order to ensure efficient delivery.

After attending the training session, I created a facilitator procedural binder (provided to every facilitator) that included the following: a month-at-a-glance calendar of the customized intervention schedule (see Table 2.5); a 40-page daily lesson plan that contained a lesson per day with a step-by-step procedure on how to access the daily activity and story; printed worksheets for students; copies of consent and assent forms; lists of students in the treatment and control conditions (and their usernames and passwords); and a copy of the LTK activity summary booklet (for reference). A list of tasks for the facilitator was also included.

This study had eight facilitators in total. Seven facilitators were members of the Ontario College of Teachers; one was a pre-service teacher. Four facilitators, including myself, administered ABRA within the at-risk group, and four ESL teachers acted as facilitators for the ESL group. For the at-risk group, I facilitated the ABRA program in School 1. The Vice Principal of the second school facilitated ABRA in School 2.
A pre-service teacher facilitated ABRA in School 3 as a volunteer, and a classroom teacher facilitated ABRA at School 4. For the ESL group, each school's ESL teacher facilitated ABRA for their ESL students. I visited each school and met with the school's facilitator and principal in order to review 1) the facilitator's tasks, and 2) the customized daily lesson plans for the 40 days. I provided them with the facilitator procedural binder that included this information.

Each facilitator was delegated several tasks to ensure appropriate implementation of the program. First, they distributed and collected parent consent and student assent forms. These forms were created, photocopied, and stored in the binder. Signed forms were then stored in a locked office. Requests from parents to discuss the program were forwarded to the researcher and addressed in a timely manner. Second, in collaboration with the schools' principals, I created a schedule to ensure time for the intervention sessions. Each facilitator collected the students from their classrooms and brought them to an alternative site in order to receive ABRA on the computer (e.g., computer labs, resource rooms). I provided the schools with any headphones they required. Third, the facilitators reviewed the daily expectations with every student in the treatment condition prior to signing on. I provided a written script that outlined instructions for all 40 lessons. Fourth, ongoing supervision was expected to ensure that every student was accessing the correct activity. I outlined all necessary information on every lesson plan stored in the facilitator procedural binder. Fifth, all facilitators provided instructions to the students if the daily lesson included the completion of worksheets. Resources from ABRA were printed, and photocopies were kept in the binder. If the daily lesson plan included a story to be read aloud, the facilitator had a hard copy of the story in the binder.
The 40 lesson plans were created and scripted for every day of the intervention. The lessons were composed of four parts. Part 1 provided specific detail as to the literacy component taught that day (e.g., reading fluency), the planned skill activity (e.g., expression), the week number (e.g., two), the ABRA session number (e.g., seven), the associated story for all students to read (e.g., The Little Red Hen), and the goal for that day (e.g., read The Little Red Hen with correct expression: differentiating intonation, maintaining a moderate pace, observing final punctuation, and reading accordingly).

Part 2 of the lesson plan indicated a step-by-step procedure on how to prepare the individual instruction for that day. For example, Step 1 asks the facilitator to provide examples to the students of how effective, expression-filled sentences sound when reading a story. The facilitator talks about how a story should be read differently if a character is happy or sad. Step 2 provides a step-by-step process of how the students use their personal usernames and passwords in order to log on to ABRA. Part 3 of the lesson plan provides step-by-step instructions on how the students choose the planned activity of the day (e.g., select student activities and stories; choose reading; choose expression with The Little Red Hen). Part 4 of the lesson plan focuses on the story of the day. After successful completion of the activity, the students are prompted to read the story accurately and with correct intonation. They are then directed to log off. The researcher met with each facilitator at least once per week to discuss related issues or concerns with the program, and was available at any time by other means of communication (almost always electronic mail).

2.4.3 Screening, Pre- and Posttest

Screening and pre-test data were collected in October. Two retired teachers administered the screens.
I and one of the retired teachers administered all pre- and posttests. Post-test data were collected in February. As indicated above, the RRST was administered to assess the phonological awareness and fluency skills of participants and to match participants into either the treatment or control condition. A set of pre-test measures was administered to determine the current reading level of each student, and the same measures were administered at posttest to determine whether any changes occurred in students' phonological awareness and fluency skills during the intervention.

2.4.4 Screening Test

As noted in section 2.3, students were eligible to participate in the study if they achieved a score below benchmark on the NWF and PSF subtests. They were then placed into either the treatment or control condition based on their scores on the 14 RRST subtests. The placement was completed on a case-by-case basis. The students' individual scores on the 14 subtests were entered into an Excel spreadsheet, and a total score was calculated for each student. Students with similar total scores (within two points of each other) were paired, with one member of each pair placed in the treatment condition and the other in the control condition. If the total scores differed by more than two points, a comparison of subtest scores was completed. There were 12 pairs of students whose total scores differed by more than 2 points; the differences for these pairs ranged from 10 to 14 points. A comparison of subtest scores showed that the largest gaps were on the Rhyme Detection subtest (scores ranging from 5 to 10), the Word Recognition subtest (scores ranging from 6 to 15), and the Medial Sound Isolation subtest (scores ranging from 2 to 5). The students were then assigned to either the treatment or control condition. Implications are discussed in section 4.3.1. The researcher and two volunteer retired teachers conducted the screens.
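The pairing logic described in section 2.4.4 can be sketched in code. This is an illustrative reconstruction, not the actual spreadsheet procedure used in the study, and the student identifiers and RRST totals below are hypothetical:

```python
def match_pairs(students, threshold=2):
    """Rank students by RRST total score, pair adjacent students,
    and flag pairs whose totals differ by more than the threshold."""
    # students: list of (student_id, rrst_total)
    ranked = sorted(students, key=lambda s: s[1], reverse=True)
    pairs, flagged = [], []
    for i in range(0, len(ranked) - 1, 2):
        pair = (ranked[i], ranked[i + 1])
        pairs.append(pair)
        if abs(pair[0][1] - pair[1][1]) > threshold:
            # Totals differ by more than 2 points: compare subtest
            # profiles before assigning to treatment/control.
            flagged.append(pair)
    return pairs, flagged

# Hypothetical example: four students and their RRST totals.
pairs, flagged = match_pairs([("S1", 80), ("S2", 79), ("S3", 60), ("S4", 48)])
# S1 pairs with S2 (difference 1); S3 pairs with S4 (difference 12, flagged)
```

Within each pair, one student would then be assigned to the treatment condition and the other to the control condition.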
The first volunteer teacher worked for the district for a total of 37 years as a kindergarten teacher. The second volunteer teacher worked for the district for a total of 35 years as a grade 2 and ESL teacher. It took approximately ten minutes to administer the NWF and PSF subtests, and fifteen minutes to administer the RRST, per student. Both screens were administered in one sitting. Students were asked if they would like a break between subtests and screens.

To appropriately prepare for the administration of the screens, the volunteers were asked to participate in an inter-observer agreement (IOA) check with the researcher. IOA is the most common method of assessing the reliability and validity of observational data (Foster et al., 1988). The IOA was calculated as follows: a) if the correct letter sounds per minute (for the NWF subtest) or correct phonemes per minute (for the PSF subtest) were exactly the same, there was perfect agreement (100%) between the two testers; b) if the scores were different, the smaller number was divided by the larger number and multiplied by 100 to determine a percentage agreement; c) if the percentage was 95% or greater, it was accepted. If the percentage was 94% or less, the testers conducted another test until 95% was obtained.

For the PSF measure, the researcher participated in an IOA check with the first volunteer teacher. A volunteer student (who was not part of the study) served as the subject to whom each of the subtests was administered. Acceptable inter-observer agreement (95%) was obtained on the second attempt. The researcher also participated in an IOA check with the second volunteer teacher; acceptable inter-observer agreement (95%) was obtained on the third attempt. For the NWF measure, the researcher and the first and second volunteers achieved an IOA of 100% on the second attempt.
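The agreement calculation in steps a) through c) amounts to a short function. The observer counts below are hypothetical and are used only to illustrate the arithmetic:

```python
def percent_agreement(score_a, score_b, criterion=95.0):
    """Percentage agreement between two observers' counts, per the IOA
    procedure: the smaller count divided by the larger, times 100."""
    if score_a == score_b:
        pct = 100.0  # identical counts: perfect agreement
    else:
        pct = min(score_a, score_b) * 100 / max(score_a, score_b)
    return pct, pct >= criterion  # accepted only at 95% or above

# Hypothetical counts of correct phonemes per minute from two observers:
pct, accepted = percent_agreement(38, 40)
# pct = 95.0, accepted = True; a pair such as (37, 40) yields 92.5 and
# would require the testers to conduct another check
```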
2.4.5 Pre- and Posttest

After the participants were assigned to either the treatment or control condition, the two CTOPP subtests, the two WJ III subtests, and the two TOWRE subtests were administered to all students. It took two weeks to administer the 104 pre-tests, prior to the start of the ABRA program. The researcher administered the pre-tests to all participants in School 1. The second volunteer retired teacher (who administered the NWF, PSF, and RRST screens) administered all pre-tests for Schools 2, 3, and 4. The researcher provided the volunteer teacher with administration instructions and scoring guidelines for each standardized measure. Practice sessions were held between the researcher and the volunteer teacher following each standardized measure's administration and scoring guidelines. All students were administered the pre-tests individually, in a quiet room within their school during school hours. Testing took approximately 25 minutes per student. Breaks between subtests were offered to all students. Testing was intended to be completed in one sitting; however, absenteeism due to illness led some students' testing to be spread over two to three sessions.

2.5 Intervention

Prior to the start of ABRA, the researcher visited each of the other three schools and met with the principal. A discussion determined who was best suited for the facilitator role at the school. The researcher, a vice principal, a pre-service teacher, and a classroom teacher administered ABRA to 41 at-risk students within the four schools. Four ESL teachers facilitated the ABRA reading intervention program for 11 ESL students. All students received ABRA in groups, four days per week for 20 minutes per session, for 10 weeks. Each facilitator was prepared in the same way. A separate meeting was held with each facilitator to discuss the overview of the study and the process of implementation (including the 40 lesson plans), and a practice session allowed for exploration of ABRA.
Questions were welcomed and answered at any point during the study. The facilitators of the at-risk groups in Schools 1, 2, and 3 traveled to the classrooms to pick up the students and brought them to the intervention site: the at-risk groups in Schools 2 and 3 received ABRA in the school's computer lab, while the at-risk group in School 1 received ABRA in the resource room. The facilitators then guided the students to log in and activate the specific activity. If a lesson required an extension activity in the form of a printable worksheet after the computer activity was completed, the student completed it while seated beside the computer. The facilitators then guided the logoff process and walked the students back to their classrooms.

The facilitators of the ESL group were the ESL teachers. In combination with their previously scheduled withdrawal time for ESL support, they added time periods in order to fulfill the 80 minutes per week of ABRA intervention. The sessions were conducted in their ESL rooms, as these were outfitted with the proper technology. The ESL teachers guided the students through every stage of the daily lesson plans. Culminating tasks (e.g., printable resources/worksheets) that were required at the end of a lesson were completed one-on-one with the ESL teacher.

2.6 Fidelity of Implementation

Facilitators of the ABRA reading intervention program were provided with a procedural binder that contained: (a) a detailed monthly, weekly, and daily schedule indicating the focus, the daily lesson number, and references to daily worksheets and stories; (b) scripts for all forty lessons and the respective lesson plans; (c) all printed resources for stories and worksheets; and (d) coded login identification for every student in the treatment condition. I met with each facilitator and principal to review the binder's contents and facilitator responsibilities prior to the start of the intervention. There were eight ABRA program facilitators.
The researcher was scheduled for one hour once per week for consultative purposes during the intervention period. A few times the researcher was requested to travel to the schools for consultative assistance to ensure ABRA was delivered as intended. For the first two months of the study, three of the seven facilitators (excluding myself) met with the researcher. After the first two months, these teachers preferred communication via electronic mail. The remaining four teachers preferred communication through electronic mail during the entire length of the study.

Facilitation fidelity was measured through a six-item fidelity checklist created by the researcher. Facilitators were asked to answer each question with a yes or no response. All facilitators answered yes to all questions. See Table 2.4 for results.

Table 2.4

Results from the ABRACADABRA Fidelity Checklist

Facilitator Questions                                                  Number*
1. Did you employ the instructional strategies step-by-step            8
   to locate and administer daily lessons?
2. Did you restate any misunderstood instructions?                     8
3. Did you follow the number, length, and frequency of lessons         8
   (40 lessons, 20 minutes per lesson)?
4. Did you implement ABRA 4 days per week over a 10-week period?       8
5. Did you use all printable worksheets during ABRA?                   8
6. Did you use all printable stories?                                  8

*Number = the total number of respondents

Implementation fidelity is the degree to which programs are implemented as intended by the program developers (Dusenbury, Brannigan, Falco, & Hansen, 2003). Since the authors of ABRA allow for flexibility in terms of dosage, the fidelity of implementation for this study must be measured against ABRA as it was customized for this study (20 minutes per session, four days per week, for 10 weeks).
Although a fidelity checklist was used to measure the fidelity of implementation of ABRA, one factor limits the extent to which fidelity could be verified. Because my role as a facilitator in one school limited my interaction with the other schools, I was not physically present during all intervention lessons and therefore cannot say with certainty that all lessons were delivered with fidelity. To ensure the program was delivered as intended in my absence, I relied on the facilitator procedural binder, which contained scripted lesson plans, detailed instructions on how to access the activities, and all necessary printed resources for the facilitators.

Email communication between me and the facilitators was frequent. This method allowed me to check on the status of the intervention within each school and whether facilitators were following the procedures outlined in the facilitator procedural binder (e.g., what lesson number they were on, what obstacles they were facing with technology, any issues with scheduling the use of the computer lab, any issues with students, and whether the time spent walking to the computer lab was cutting into the time allotted for the intervention lesson), and to address any concerns or answer any ongoing questions facilitators had.

Researchers have noted that one way to increase the time or intensity of a reading intervention program that will most likely yield significant results is to increase the number of sessions or hours of instruction per day over the same number of days it takes to complete the intervention program (Wanzek & Vaughn, 2008). Torgesen et al. (2001) examined the effects of an intense reading intervention delivered in two 50-minute sessions per day over eight to nine weeks. The results yielded substantial improvements in word reading and comprehension and suggest that more instruction in a short period of time may benefit students with severe reading disabilities.
Therefore, best efforts were made by the researcher to have the ABRA intervention sessions (20 minutes in length) run in addition to regular classroom literacy instruction (30-40 minutes in length). Nevertheless, the principals and classroom teachers made the decision regarding when (at what time of day) ABRA was implemented. Despite best efforts to administer ABRA in addition to regularly scheduled literacy instruction, the School 1 and School 3 treatment groups received the program during regular classroom literacy instruction, while the School 2 and School 4 treatment groups received the program in addition to their regular literacy instruction.

For the purposes of this study, an alternate dosage and schedule (differing from the authors' recommendations) was used. Instead of working on each segment every day, students worked on one specific segment on the same day every week. This revised dosage and schedule allowed for uninterrupted sessions targeted at one specific component of reading per day, thus increasing the time and concentration devoted to learning essential early literacy skills. See Table 2.5 below for an overview of the adapted ABRA program dosage and schedule. (Note that the Sounds, Letters and Words segment has been separated into phonics and phonological awareness activities. Also, the Reading and Understanding the Story segments have been separated into fluency and comprehension activities in order to show specific instruction.)
Table 2.5

Adapted ABRACADABRA program

Week  Segment                                 Activity (20 minutes each)
1     Monday: Phonics                         Alphabet Song
      Tuesday: Phonological Awareness         Same Word
      Wednesday: Fluency & Comprehension      Tracking
      Thursday: Writing & Spelling            Spelling Words; Worksheet: LSM 1

2     Monday: Phonics                         Animated Alphabet
      Tuesday: Phonological Awareness         Word Matching
      Wednesday: Fluency & Comprehension      Expression
      Thursday: Writing & Spelling            Writing: The Little Red Hen

3     Monday: Phonics                         Letter-Sound Search
      Tuesday: Phonological Awareness         Rhyme Matching
      Wednesday: Fluency & Comprehension      Prediction, Sequencing
      Thursday: Writing & Spelling            Spelling Sentences

4     Monday: Phonics                         Same Phoneme
      Tuesday: Phonological Awareness         Word Changing
      Wednesday: Comprehension                Summarizing
      Thursday: Writing & Spelling            Sentence Writing

5     Monday: Phonological Awareness          Syllable Counting
      Tuesday: Fluency                        High Frequency Words
      Wednesday: Comprehension                Vocabulary ESL
      Thursday: Writing & Spelling            Spelling Words

6     Monday: Phonological Awareness          Word Changing
      Tuesday: Fluency                        Accuracy
      Wednesday: Comprehension                Comprehension Monitor
      Thursday: Writing & Spelling            Spelling Sentences

7     Monday: Phonological Awareness          Auditory Segmenting
      Tuesday: Fluency                        Tracking
      Wednesday: Comprehension                Vocabulary
      Thursday: Writing & Spelling            Spelling Words

8     Monday: Phonological Awareness          Auditory Blending
      Tuesday: Fluency                        Speed
      Wednesday: Comprehension                Story Elements
      Thursday: Writing & Spelling            Sentence Writing

9     Monday: Phonological Awareness          Blending Train
      Tuesday: Fluency                        Accuracy
      Wednesday: Comprehension                Story Response
      Thursday: Writing & Spelling            Sentence Writing

10    Monday: Phonological Awareness          Basic Decoding
      Tuesday: Fluency                        Speed
      Wednesday: Comprehension                Sequencing Worksheet
      Thursday: Writing & Spelling            Sentence Writing

School 1

The researcher delivered the program during regular classroom literacy instruction. The researcher holds pre-service teacher certification with a specialization in special education and, at the time of the study, had four years of experience as a primary-junior teacher and as a Special Education Resource Teacher. The two treatment groups were withdrawn from two classrooms at different times to a resource room with laptops. The first group contained three students; the second group contained five students.

School 2

The Vice Principal delivered the program in addition to regular classroom literacy instruction. The two treatment groups were withdrawn from four classrooms at different times to the computer lab. Each group contained six students.

School 3

The pre-service teacher volunteer delivered the program during regular classroom literacy instruction. The treatment group of nine students was withdrawn from one classroom to the computer lab.

School 4

The classroom teacher delivered the program in addition to regular classroom literacy instruction. The instruction occurred before and after school, with verbal consent from the parents to drop off and pick up their child accordingly. This treatment group consisted of 12 students and conducted its lessons in the computer lab.

2.6.1 ESL

Four ESL teachers within the four schools consented to administer the ABRA reading intervention program to the 11 ESL students during their daily forty-minute scheduled withdrawal instruction time.
Table 2.6

Study Design

                        Pretest       Literacy      Posttest
                        Measures      Instruction   Measures
At-Risk group (N = 82)
  Treatment             O             A             O
  Control               O             *             O
ESL group (N = 22)
  Treatment             O             A             O
  Control               O             *             O

Note: O = six standardized assessments; A = ABRA program; * = regular classroom instruction.

Chapter 3: Results

3.1 Analyses

To address the first two questions in the research study (Does the ABRA reading program improve the phonics skills, phonological awareness skills, and word reading skills of students at risk for reading failure in the treatment condition relative to students in a control condition exposed to regular literacy instruction only? What is the relative efficacy of the ABRA reading program for ESL students' phonics, phonological awareness, and word reading skills, compared to ESL students receiving regular literacy instruction only?), pre- and post-intervention data were analyzed using the Statistical Package for the Social Sciences, version 23 (SPSS), to determine any changes in students' phonics, phonological, and word reading skills for both the at-risk and ESL groups. The data were analyzed using a 2x2 repeated measures analysis of variance (ANOVA). The design had two factors, each with two levels: condition (treatment/control) and time (pre/posttest). Condition served as the between-groups factor. The condition-by-time interaction effect was the test used to determine whether there was a change in reading skills between the two conditions related to the intervention.
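For a 2x2 design with one between-groups factor (condition) and one repeated factor (time), the condition-by-time interaction F is mathematically equivalent to an independent-groups test on pre-to-post gain scores. The sketch below illustrates that equivalence; it is not the study's analysis (which was conducted in SPSS), and the gain scores shown are hypothetical:

```python
import math

def interaction_F(gains_treatment, gains_control):
    """Condition-by-time interaction in a 2x2 mixed ANOVA, computed as
    the squared independent-samples t statistic on gain scores:
    F(1, n1 + n2 - 2) = t^2."""
    n1, n2 = len(gains_treatment), len(gains_control)
    m1 = sum(gains_treatment) / n1
    m2 = sum(gains_control) / n2
    v1 = sum((g - m1) ** 2 for g in gains_treatment) / (n1 - 1)
    v2 = sum((g - m2) ** 2 for g in gains_control) / (n2 - 1)
    pooled_var = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
    t = (m1 - m2) / math.sqrt(pooled_var * (1 / n1 + 1 / n2))
    return t * t

# Hypothetical gain scores (posttest minus pretest) for two small groups.
F = interaction_F([2, 4, 3, 5], [1, 3, 2, 2])
# F is approximately 3.86 for these hypothetical gains
```

A larger interaction F indicates that one condition gained more from pretest to posttest than the other, which is exactly the effect the analysis above is designed to detect.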
To address the first question of this research study ("Does the ABRA reading program improve the phonics, phonological awareness, and word reading skills of students at risk for reading failure in the treatment condition relative to those in a control condition exposed to regular literacy instruction only?"), the interaction effect between condition and time was analyzed to determine whether there was a change in phonics, phonological awareness, and word reading skills as measured by the Word Attack, Letter-Word Identification, Blending, Elision, Sight Word Efficiency, and Phonetic Decoding Efficiency subtests. To address the second research question ("Does the ABRA reading program improve the phonics, phonological awareness, and word reading skills of ESL students?"), the same interaction effect between condition and time was analyzed for the ESL group across the same six subtests. The third research question is addressed in section 3.3, Social Validity.

3.1.1 Significance and Effect Size

For all subtests for both the at-risk and ESL groups, the statistical significance (established at p < .05) and the magnitude of the effectiveness of the intervention were calculated. The magnitude is reported as the effect size. The effect size (ES), Hedges' g, was calculated to quantify the magnitude of the difference in change between the conditions (ABRA and control). It is defined as the difference between the mean outcome for the ABRA intervention condition and the mean outcome for the control condition, divided by the pooled within-group standard deviation of the outcome measure (What Works Clearinghouse, 2008). Since the two conditions had different pretest means and standard deviations, it was essential to correct for these values using Klauer's (2001) method of computing Hedges' g.
Klauer proposes computing g for the two conditions at pretest and again at posttest; subtracting the two g values yields an effect size that automatically corrects for differences at pretest. Because g is a standardized mean difference, it can take negative values (here, when the control condition gained more than the treatment condition). The following guideline (Hedges, 1981) is used to interpret the strength of the effect sizes: small (0.2), moderate (0.5), and large (0.8).

3.2 At-Risk Group

An analysis of variance was used to compare the results from the six standardized measures used to assess students at risk for reading failure at pre- and posttest. The p value reports the significance of the interaction between condition (ABRA and control) and time. Hedges' g reports the effect size, or magnitude, for each condition-by-time interaction.

Table 3.1
ANOVA Comparison of Pre- and Posttest Mean Scores for At-Risk Group

Subtest        Condition   Pretest M (SD)   Posttest M (SD)   p     g
Word Attack    ABRA        6.56 (3.67)      9.02 (3.27)       0.39  -0.21
               Control     8.63 (6.96)      11.88 (6.07)
Blending       ABRA        12.80 (3.53)     13.88 (2.49)      0.92  -0.13
               Control     13.61 (3.10)     14.76 (2.18)
Elision        ABRA        7.32 (3.72)      9.34 (3.45)       0.22  0.27
               Control     9.32 (4.10)      10.27 (4.30)
Letter-Word    ABRA        22.46 (6.63)     28.93 (5.51)      0.59  -0.02
               Control     27.44 (7.91)     33.29 (6.80)
SWE            ABRA        15.90 (10.25)    24.85 (11.99)     0.12  -0.14
               Control     25.59 (14.83)    36.41 (13.56)
PDE            ABRA        5.63 (3.93)      10.07 (3.82)      0.49  -0.04
               Control     9.98 (8.01)      15.05 (8.80)

3.2.1 Word Attack

For the Word Attack subtest, there was a statistically significant effect for condition (treatment and control), F(1, 80) = 5.38, p = 0.02, indicating a difference between the conditions overall. The control condition performed higher than the treatment condition at both pre- and posttest. There was an overall improvement at posttest for both conditions. There was a statistically significant effect for time, F(1, 80) = 39.58, p = 0.01.
Both the treatment and control conditions' phonics skills increased and changed positively over time (see Figure 3.1). There was no statistically significant interaction effect between condition and time, F(1, 80) = 0.74, p = 0.39. See Table 3.1.

Figure 3.1. Word Attack Mean Scores at Pre- and Post-Intervention by Condition for At-Risk Group

3.2.2 Blending

For the Blending subtest, there was no statistically significant effect for condition (treatment and control), F(1, 80) = 2.61, p = 0.11. The control condition performed higher than the treatment condition at pre- and posttest. There was an overall improvement for both conditions. There was a statistically significant effect for time, F(1, 80) = 9.48, p = 0.01. Figure 3.2 indicates the conditions' performance on the Blending subtest increased and changed positively over time. There was no statistically significant interaction effect between time and condition, F(1, 80) = 0.01, p = 0.92. See Table 3.1.

Figure 3.2. Blending Mean Scores at Pre- and Post-Intervention by Condition for At-Risk Group

3.2.3 Elision

For the Elision subtest, there was a statistically significant effect for condition (treatment and control), F(1, 80) = 3.86, p = 0.05, indicating an overall difference between the conditions. The control condition performed higher than the treatment condition at both pre- and posttest, as indicated in Figure 3.3. There was a statistically significant effect for time, F(1, 80) = 11.74, p = 0.01. Figure 3.3 indicates the conditions' performance on the Elision subtest increased and changed positively over time. There was no statistically significant interaction effect between time and condition, F(1, 80) = 1.53, p = 0.22. See Table 3.1.

Figure 3.3.
Elision Mean Scores at Pre- and Post-Intervention by Condition for At-Risk Group

3.2.4 Letter-Word Identification

For the Letter-Word Identification subtest, there was a statistically significant effect for condition (treatment and control), F(1, 80) = 11.40, p = 0.01, indicating an overall difference between the conditions. The control condition performed higher than the treatment condition at both pre- and posttest, as indicated in Figure 3.4. There was a statistically significant effect for time, F(1, 80) = 118.86, p = 0.01. Figure 3.4 indicates the conditions' performance on the Letter-Word Identification subtest increased and changed positively over time. There was no statistically significant interaction effect between time and condition, F(1, 80) = 0.29, p = 0.59. See Table 3.1.

Figure 3.4. Letter-Word Identification Mean Scores at Pre- and Post-Intervention by Condition for At-Risk Group

3.2.5 Sight Word Efficiency

For the Sight Word Efficiency subtest, there was a statistically significant effect for condition (treatment and control), F(1, 80) = 14.88, p = 0.01, indicating an overall difference between the conditions. The control condition performed higher than the treatment condition at both pre- and posttest, as indicated in Figure 3.5. There was a statistically significant effect for time, F(1, 80) = 261.73, p = 0.01. Figure 3.5 indicates the conditions' performance on the Sight Word Efficiency subtest increased and changed positively over time. There was no statistically significant interaction effect between time and condition, F(1, 80) = 2.36, p = 0.13. See Table 3.1.

Figure 3.5. Sight Word Efficiency Mean Scores at Pre- and Post-Intervention by Condition for At-Risk Group

3.2.6 Phonetic Decoding Efficiency

For the Phonetic Decoding Efficiency subtest, there was a statistically significant effect for condition (treatment and control), F(1, 80) = 11.56, p = 0.01.
This indicates an overall difference between the conditions. The control condition performed higher than the treatment condition at both pre- and posttest, as indicated in Figure 3.6. There was a statistically significant effect for time, F(1, 80) = 104.88, p = 0.01. Figure 3.6 indicates the conditions' performance on the Phonetic Decoding Efficiency subtest increased and changed positively over time. There was no statistically significant interaction effect between time and condition, F(1, 80) = 0.47, p = 0.49. See Table 3.1.

Figure 3.6. Phonetic Decoding Efficiency Mean Scores at Pre- and Post-Intervention by Condition for At-Risk Group

3.3 ESL Group

An analysis of variance was used to compare the results from the standardized measures used to assess students in the ESL group at pre- and posttest. The p value reports the significance of the interaction between condition (ABRA and control) and time. Klauer's (2001) method of calculating Hedges' g reports the effect size, or magnitude, for each condition-by-time interaction. See Table 3.2.

Table 3.2
ANOVA Comparison of Pre- and Posttest Mean Scores for ESL Group

Subtest        Condition   Pretest M (SD)   Posttest M (SD)   p     g
Word Attack    ABRA        8.73 (6.15)      12.91 (6.53)      0.01  0.46
               Control     11.00 (5.25)     12.55 (4.68)
Blending       ABRA        9.82 (4.33)      14.55 (2.77)      0.01  0.70
               Control     11.82 (2.67)     14.18 (2.52)
Elision        ABRA        5.82 (4.10)      9.91 (4.44)       0.29  0.80
               Control     8.55 (2.78)      10.00 (4.90)
Letter-Word    ABRA        22.73 (10.16)    29.91 (11.25)     0.02  0.37
               Control     25.09 (5.77)     29.18 (5.95)
SWE            ABRA        27.27 (21.49)    31.18 (17.61)     0.46  -0.22
               Control     32.82 (21.77)    40.00 (19.54)
PDE            ABRA        11.36 (9.23)     18.00 (14.55)     0.38  0.29
               Control     17.91 (12.70)    22.55 (15.33)

3.3.1 Word Attack

There was a statistically significant interaction effect between condition and time, F(1, 20) = 8.65, p = 0.01.
This indicates that the two conditions not only improved their skills overall but changed at different rates. There was no statistically significant effect for condition (treatment and control), F(1, 20) = 0.16, p = 0.70. There was a statistically significant effect for time, F(1, 20) = 40.83, p = 0.01, indicating that both conditions' phonics skills, as measured by the Word Attack subtest, changed positively over time. As seen in Figure 3.7, the treatment condition performed lower than the control condition at pretest and outperformed the control condition at posttest.

Figure 3.7. Word Attack Mean Scores at Pre- and Post-Intervention by Condition for ESL Group

3.3.2 Blending

For the Blending subtest, there was a statistically significant interaction effect between condition and time, F(1, 20) = 7.81, p = 0.01. As seen in Figure 3.8, the two conditions not only improved their skills overall but changed at different rates. There was no statistically significant effect for condition (treatment and control), F(1, 20) = 0.41, p = 0.53. There was a statistically significant effect for time, F(1, 20) = 70.25, p = 0.01, indicating that both conditions' phonological awareness skills, as measured by the Blending subtest, changed positively over time. The treatment condition performed lower than the control condition at pretest and outperformed the control condition at posttest.

Figure 3.8. Blending Mean Scores at Pre- and Post-Intervention by Condition for ESL Group

3.3.3 Elision

For the Elision subtest, there was no statistically significant effect for condition (treatment and control), F(1, 20) = 1.24, p = 0.28. There was a statistically significant effect for time, F(1, 20) = 5.17, p = 0.03.
In Figure 3.9, the treatment and control conditions' slopes are both increasing, indicating that both conditions' phonological awareness skills, as measured by the Elision subtest, changed positively over time. There was no statistically significant interaction effect between time and condition, F(1, 20) = 1.17, p = 0.30.

Figure 3.9. Elision Mean Scores at Pre- and Post-Intervention by Condition for ESL Group

3.3.4 Letter-Word Identification

For the Letter-Word Identification subtest, there was a statistically significant interaction effect between time and condition, F(1, 20) = 6.09, p = 0.02, indicating that the treatment and control conditions changed overall, and changed in different ways. The treatment condition performed lower than the control condition at pretest, and higher than the control condition at posttest. There was no statistically significant effect for condition (treatment and control), F(1, 20) = 0.05, p = 0.82. There was a statistically significant effect for time, F(1, 20) = 81.01, p = 0.01. Figure 3.10 indicates both conditions' performance on the Letter-Word Identification subtest increased and changed positively over time.

Figure 3.10. Letter-Word Identification Mean Scores at Pre- and Post-Intervention by Condition for ESL Group

3.3.5 Sight Word Efficiency

There was no statistically significant effect for condition (treatment and control), F(1, 20) = 0.74, p = 0.40. There was a statistically significant effect for time, F(1, 20) = 6.65, p = 0.01. In Figure 3.11, although the treatment condition performed lower than the control condition at both pre- and posttest, both conditions' slopes are increasing, indicating that both conditions' word reading skills, as measured by the Sight Word Efficiency subtest, changed positively over time. For this subtest, there was no statistically significant interaction effect between time and condition, F(1, 20) = 0.58, p = 0.46.
Figure 3.11. Sight Word Efficiency Mean Scores at Pre- and Post-Intervention by Condition for ESL Group

3.3.6 Phonetic Decoding Efficiency

For the Phonetic Decoding Efficiency subtest, there was no statistically significant effect for condition (treatment and control), F(1, 20) = 1.02, p = 0.32. There was a statistically significant effect for time, F(1, 20) = 25.46, p = 0.01. In Figure 3.12, although the treatment condition performed lower than the control condition at both pre- and posttest, both conditions' slopes increased, indicating that both conditions' word reading skills, as measured by the Phonetic Decoding Efficiency subtest, changed positively over time. There was no statistically significant interaction effect between time and condition, F(1, 20) = 0.80, p = 0.38.

Figure 3.12. Phonetic Decoding Efficiency Mean Scores at Pre- and Post-Intervention by Condition for ESL Group

3.4 Social Validity

Students participating in the experimental conditions (at-risk and ESL) of the study, along with their teachers and parents, responded to questionnaires about ABRA immediately following the posttests. Each questionnaire contained questions that aligned with the three study questions: (1) Does ABRA improve the phonics, phonological awareness, and word reading skills of students at risk? (2) Does ABRA improve the phonics, phonological awareness, and word reading skills of ESL students? (3) To what extent do grade 1 and ESL teachers, parents, and students perceive the ABRA reading program to be an appropriate technological reading program? The questionnaires below were administered to gain insight into stakeholders' perceptions of ABRA's effectiveness for improving children's reading skills.

3.4.1 Teachers' Perceptions of ABRA's Effectiveness for At-Risk Students

Seven teachers (70%) filled out a seven-item questionnaire using a five-point Likert scale (e.g., 1 = Strongly Disagree, to 5 = Strongly Agree) for the at-risk condition.
Additional written comments from the teachers were welcomed. The results did not show differences of opinion among the teachers from the four schools.

Question 1 asked if phonological awareness was an important factor when teaching a student how to read. Five of the seven teachers (71%) indicated a rating of five (Strongly Agree).

For question 2, four of seven teachers (57%) indicated a rating of four, agreeing with the statement "it is important for students to learn how to use a computer". One teacher provided an additional comment: "I think a reading intervention program on a computer is one of the effective ways to remediate literacy skills. Guided instruction from the teacher, and hopefully the parents, are also essential".

For question 3, four teachers (57%) selected five, indicating they strongly agree that delivering a reading intervention program on a computer is an effective way to remediate literacy skills. A further comment from one teacher indicated that "it is an interesting and motivating method for students".

For question 4, three teachers (43%) rated a four, agreeing that there was a noticeable improvement in the students' reading skills post-intervention; three teachers strongly agreed, and one teacher did not provide a rating. The teacher who did not select a rating wrote the following comment: "All students reading levels went up, but I can't say that it was as a result of ABRA, or in-class work".

Question 5 asked if there was an improvement in the students' interest in reading post-intervention. Three teachers (43%) strongly agreed, two teachers (29%) agreed, and two teachers (29%) were undecided. One further written comment reported, "This is difficult to say for sure because children who are five and six years old are always developing skills".

Question 6 asked if teachers would like to implement ABRA with future students.
Five teachers (71%) said they were undecided, and two teachers said they strongly agreed. Question 7 asked if teachers thought ABRA was an effective intervention program. Four teachers (57%) said they strongly agreed, two teachers agreed, and one teacher did not answer. A further comment from this teacher said, "When students are first learning reading strategies, all tools are helpful to meet their needs".

Mean scores on the teacher questionnaire ranged from 3.57 to 4.71, indicating that, on average, the teachers somewhat agreed to agreed with the seven statements. This suggests that the teachers perceived ABRA to be an effective literacy intervention program. Table 3.3 below presents the questionnaire items for the teachers in the at-risk group.

Table 3.3
Results of Teacher Survey for At-Risk Group

Teacher Questionnaire Questions                                      Number*   Mean**
1. Phonological awareness is important to the reading                7         4.71
   achievement of your students
2. It is important for students to learn how to use a computer       7         4.57
3. Delivering a reading intervention program on a computer           7         4.57
   is an effective way to remediate literacy skills
4. There is an improvement with the student's reading skills         6         4.00
   after the intervention
5. There is an improvement with the student's interest to read       7         4.14
   after the intervention
6. I would like to implement ABRA with future students               7         3.57
7. Overall, ABRA is an effective tool to help meet the range         6         4.33
   of needs of the students in my class

*Number = the total number of respondents. **Mean for 5-point Likert scale

3.4.2 Parent Perceptions of ABRA's Effectiveness for At-Risk Students

There were fifteen (36%) complete parent surveys for the at-risk condition. Question 1 asked if phonics is important to the reading achievement of their child.
Twelve (80%) parents selected a rating of five, indicating that they strongly agree. Three parents said they agree.

Question 2 asked if the parent thought it was important for their child to learn how to use a computer. Eleven (73%) parents strongly agreed, selecting a rating of five. Four parents agreed, with a rating of four. No parent indicated that they did not feel it was important.

Question 3 asked parents if there was a noticeable improvement in their child's reading skills post-intervention. Ten (67%) selected five, indicating they strongly agree, and five (33%) selected a rating of four, indicating they agree. One parent wrote that they questioned whether the improvement in reading was due to the intervention or perhaps natural development. No parent indicated that they did not notice an improvement in their child's reading skills post-intervention.

Question 4 asked if the parents thought there was an improvement in their child's interest in reading post-intervention. Three (20%) parents strongly agreed, nine parents (60%) agreed, one parent was undecided, and two parents disagreed with this statement. These responses suggest that most parents perceived an increase in their child's general interest in reading after the ABRA program.

The fifth question asked the parents if they thought ABRA was an easy program to use. Fourteen parents (93%) strongly agreed. One parent left the question unanswered, which may indicate that the parent missed the question.

Question 6 asked whether, overall, ABRA is an effective reading intervention. Eight (53%) parents strongly agreed, five (33%) parents agreed, and two (13%) parents were undecided. Additional comments written by parents include, "My son thinks this program helped him" and, "Thank you for this opportunity and for helping my son". Seven parents requested a meeting via telephone to discuss the intervention in detail.
The majority of these parents were excited for their child to be a part of the program, and to use the program at home with their child.

Mean scores on the parent questionnaire ranged from 3.87 to 4.93, indicating that parents perceived ABRA to be an appropriate technological reading intervention program for their child. See Table 3.4 below.

Table 3.4
Results of Parent Survey for At-Risk Group

Parent Questionnaire Questions                                       Number*   Mean**
1. Phonics is important to the reading achievement of my child       15        4.80
2. It is important for my child to learn how to use a computer       15        4.73
3. There is an improvement with my child's reading skills            15        4.67
   after the intervention
4. There is an improvement with my child's interest to read          15        3.87
   after the intervention
5. The ABRA program is easy to use                                   15        4.93
6. Overall, ABRA is an effective reading intervention program        15        4.40
   for my child

*Number = the total number of respondents. **Mean for 5-point Likert scale

3.4.3 At-Risk Student Perceptions of ABRA's Effectiveness

A student survey was conducted to explore the impact of the ABRA reading program on every student in the treatment condition. For the at-risk group, 41 students completed the five-item questionnaire. Every student was read five questions from a script. The students rated their answers on a Likert-type scale from one to five (e.g., 1 = Not at all [strongly disagree], to 5 = A whole lot [strongly agree]).

The first question asked the students: "Think about sounds and words, how much do they help you become a better reader?" Twenty-three (56%) students strongly agreed, nine (22%) agreed, seven (17%) were undecided, and two disagreed.
The second question asked the students: "Think about computers, how much did it help you to become a better reader?" Twenty-one (51%) students rated "A Whole Lot" (strongly agreed), ten (24%) students were undecided, seven (17%) agreed, and four (9%) disagreed. This indicates that over half of the ABRA participants believed that a computer is an effective tool to deliver literacy instruction.

Question 3 asked the students, "How much do you think ABRA helped you to become a better reader?" Twenty-nine (71%) students strongly agreed, six (15%) agreed, four (9%) were undecided, and two disagreed.

Question 4 asked, "Overall, how much did you like doing ABRA?" Thirty-three (80%) students rated a five (A Whole Lot, or strongly agreed), four (10%) students were undecided, two (5%) students agreed, and two (5%) students disagreed, suggesting that they did not enjoy the program.

The fifth question asked, "Would you like to use ABRA again?" Twenty-five (61%) students strongly agreed, while 11 (27%) students disagreed and said they would not use it again. The following are students' comments on this question: "I won't do it again because I will miss my reading buddies on reading day," "It's a lot of work for the teachers," "It was hard," "I already know how to use it." Three (7%) students agreed that they would use it again, and two (5%) students were undecided.

The additional comments provided by the students suggest that their answers reflected individual interpretations of the question. For example, one student wrote that they had already investigated the program and would prefer to remain with their friends in the classroom. Another student wrote that it would give the teachers extra work. It is noteworthy that the same two students reported that they disagreed on questions one to four, and both reported that they were undecided about their future use of ABRA. See Table 3.5.
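The tabled questionnaire means can be checked against the response counts above. As a small sketch for student question 5, and assuming the 11 dissenting students selected a rating of one (Not at all), the counts reproduce the mean of 3.76 reported in Table 3.5:

```python
def likert_mean(counts):
    # counts maps each Likert rating (1-5) to the number of respondents
    total = sum(counts.values())
    return sum(rating * n for rating, n in counts.items()) / total

# Question 5: 25 rated five, 3 rated four, 2 rated three, and (assumed)
# 11 rated one; 154 / 41 = 3.76, matching Table 3.5.
print(round(likert_mean({5: 25, 4: 3, 3: 2, 1: 11}), 2))  # 3.76
```

The exact rating chosen by the dissenting students is an assumption here; the text reports only that they "disagreed".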
Table 3.5
Results of Student Survey for At-Risk Group

Student Questionnaire Questions                                      Number*   Mean**
1. Think about sounds and words. How much did they help you          41        4.24
   become a better reader?
2. Think about computers. How much did it help you to become         41        4.02
   a better reader?
3. Overall, how much do you think ABRA helped you to become          41        4.46
   a better reader?
4. Overall, how much did you like doing ABRA?                        41        4.58
5. Would you like to use ABRA again?                                 41        3.76

*Number = the total number of respondents. **Mean for 5-point Likert scale

3.4.4 Teachers' Perceptions of ABRA's Effectiveness for ESL Students

Two of the four ESL teacher questionnaires were returned. The question that yielded the biggest difference was question 4 ("There is an improvement with the student's reading skills after the intervention"); the two ratings were two (disagree) and four (agree). The teacher who provided the lower rating said, "This student is indicating that she wants to read, however, it is challenging to know whether or not it is due to the ABRA program". Because only two questionnaires were returned, the sample is too small to include in the results chapter.

3.4.5 Parent Perceptions of ABRA's Effectiveness for ESL Students

All parents of the 22 ESL participants in both the treatment and control conditions were asked to fill out an 11-item language questionnaire. Twenty-one of the 22 parents (95%) returned the questionnaire. The parents of the students in the treatment condition were also asked to fill out a questionnaire with two open-ended questions post-intervention; all 11 were returned. The opening letter stated that all correspondence could be provided in the first language requested by the parent. Three letters were sent home in Spanish. All responses were written in English.
The parents of the students who speak ESL were asked two open-ended questions: 1) Do you think ABRA helped your child to learn how to read English? 2) What did you like about ABRA? The overall response indicated that ABRA did improve the literacy skills of the students who speak ESL, and that the animated, game-like aspect of the program was an asset.

3.4.6 Language Questionnaire

Seventeen (81%) parents indicated that they have more than two children. Thirteen (62%) parents indicated that a language other than English is spoken predominantly in the home. Of those families, six (46%) speak Spanish, three (23%) speak Russian, two (15%) speak Tagalog, and two (15%) speak Croatian. The other eight parents (38%) reported speaking English at home as the predominant language, and also reported that either the mother or the father spoke a language other than English in the home some of the time. Five of these eight parents spoke another language some of the time (one parent spoke Tagalog, one Polish, one Croatian, and two French); the other three spoke Italian, French, and Tagalog some of the time. Two of the 21 households indicated that the families live with their grandparents; both of these grandparents predominantly speak a language other than English (Tagalog and Russian, respectively).

Considering that over half of the parents of the participants in the ESL condition reported predominantly speaking a language other than English in the home, and the rest predominantly speak English alongside another language, a second open-ended questionnaire was provided to the 11 parents of the students in the treatment condition post-intervention, in order to understand their views on ABRA.
The first question asked, "Do you think ABRA helped your child to learn how to read English?" and the second question asked, "What specifically did you like about the ABRA program?" The majority of the parents wrote their answers in English and responded positively toward the program. Three students were asked by their parents to share their answers verbally. Collectively, the parents were thankful for the extra support for their children, and appreciated that the program could be used in their home. All parents agreed that the program was fun to play; the animals made it exciting and inviting. Most of the parents commented on how they noticed their children starting to take the initiative to read at home.

3.4.7 ESL Students' Perceptions of the Effectiveness of ABRA

Table 3.6 shows the results from the student survey for the ESL group. Eleven students (100%) from the treatment condition completed the survey. Overall, the students enjoyed ABRA. They would ask to "play it" throughout the day, in addition to their scheduled ABRA sessions. They also enjoyed the printable resources and adapted lessons. Specifically, after the ESL teacher read an ABRA story, the students were asked to draw a picture that predicted the next part of the story, and to write creative sentences that explained the picture. They wrote the sentences in their own language. Under the student's sentence, the ESL teacher would write the same sentence in English, and the student then copied the ESL teacher's sentence in English. Upon completion of the intervention, one student wrote a letter saying, "Thank you Miss. Flis for putting me in this program. It really helped me. I really liked it. My favourite game was the disco one".

Table 3.6
Results of Student Survey for ESL Group

Student Questionnaire Questions                                      Number*   Mean**
1. Think about sounds and words. How much did they help you          11        4.45
   become a better reader?
2. Think about computers. How much did it help you to become         11        4.45
   a better reader?
3. Overall, how much do you think ABRA helped you to become          11        4.54
   a better reader?
4. Overall, how much did you like doing ABRA?                        11        4.64
5. Would you like to use ABRA again?                                 11        4.54

*Number = the total number of respondents. **Mean for 5-point Likert scale

Chapter 4: Discussion

4.1 Overview of Chapter 4

The intention of this study was to determine the efficacy of a computerized reading program, as a Tier 2 intervention (within the Response to Intervention model of instruction), on the reading skills of students at risk for reading difficulty, as well as ESL students. ABRA was selected as the literacy tool due to the significant results published in earlier studies involving at-risk students (Savage & Abrami, 2007). All students in this study were either screened for eligibility by two evidence-based measures of the DIBELS measurement tool, or automatically eligible for participation because they were assessed as Stage 1 or 2 within their ESL program. Upon eligibility, all students were administered an RRST composed of 14 subtests. Subsequently, six standardized measures were administered to assess each student's phonics, phonological awareness, and word reading skills pre- and post-intervention.

For the at-risk group, there were no statistically significant interaction effects between condition and time. These results indicate that ABRA did not influence the reading skills of this sample of students. For the ESL group, there was a statistically significant interaction effect between condition and time for three measures (Word Attack, Blending, and Letter-Word Identification). This finding indicates that ABRA influenced this group's phonics, phonological awareness, and word reading skills, although modestly.
Classroom teachers, parents, and students responded to questionnaires to measure their perceptions of ABRA's effectiveness, and all rated the program as socially valid.

The following sections will interpret the analysis in three ways. First, I will examine how the current research study is consistent with past (and present) research conducted on ABRA. Second, I will compare how the current study differs from this body of existing research. Third, since significant results were found within the ESL group only, I will discuss the points of comparison between the at-risk group and the ESL group. Finally, the limitations, implications for practice and research, and recommendations for future research will be discussed in the subsequent sections.

4.2 Findings Consistent with Previous Research

Early intervention for students at risk for reading failure is crucial; it appears that once students fall behind in their ability to read, they rarely catch up with their peers (Juel, 1988, 1991). It is widely researched and reported that systematic instruction in word recognition skills, fluency development, and meaning-based literacy experiences is important for reading successfully (Bishop et al., 1990; Chard et al., 1995; Smith et al., 1995; Stanovich, 1988, 1991). ABRA was chosen and implemented as a Tier 2 intervention program for students at risk for reading failure, and for ESL students, because it focuses on these components of reading. This study is consistent with the research on ABRA in the following ways. First, the students in the at-risk and ESL groups in this study did not show significant gains in their reading fluency skills as measured by two standardized subtests: Phonetic Decoding Efficiency (PDE) and Sight Word Efficiency (SWE).
A meta-analysis of the effects of ABRA on reading outcomes (Abrami, Borohkovski, & Lysenko, 2015) summarized research studies conducted across Canada, Australia, and Kenya and found similar results with respect to reading fluency. Data were taken from studies conducted from 2008 to 2014 in formal education settings. The results concentrated on phonics, phonemic awareness, reading fluency, reading comprehension, listening comprehension, and vocabulary knowledge. The studies reviewed did not find that the ABRA intervention improved reading comprehension or reading fluency skills. Similarly, the at-risk group in the current study did not show significant improvement in reading fluency as measured by the PDE and SWE subtests.

A possible reason why students in both groups did not achieve significant results in fluency skills is that there is only one fluency-related activity within the ABRA program. This activity, High Frequency Words, is found under the Reading category. The student is asked to read words aloud within a short timeframe. The activity is presented as a game-like task, as the student must read each word before the next word appears on the screen. This format is appealing to students, and might have produced more engagement over the 10 weeks had there been different sets of words or different fluency activities. Since fluency is barely targeted within the program, this seems a likely explanation for the students' non-significant gains.

This study also found significant interaction effects within the ESL group for the Word Attack subtest, which measures phonics skills; the Blending subtest, which measures phonological awareness skills; and the Letter-Word Identification subtest, which measures word reading skills. This finding is consistent with other research.
Specifically, a study conducted in Hong Kong (Cheung et al., 2015) noted that at posttest, the treatment students scored significantly higher than the controls on phonics measures.

4.3 Findings Divergent from Previous Research

The following section outlines possible explanations as to why the current study differs from previous research on ABRA.

None of the six subtests measuring the phonics, phonological awareness, and word reading skills of the at-risk students produced significant interaction effects. This finding is inconsistent with the majority of other research (which does not include screened, at-risk students), which reports significant results for all literacy measures. Studies analyzing the effectiveness of ABRA often use whole-class samples that include, but are not limited to, students who experience difficulty with reading. In these studies, there are no clear distinctions between at-risk and typically developing learners. In section 1.3, I noted the contributions the current study has made to research. Specifically, this study used ABRA within an RTI model, screening all participants. This screening process ensured that only students experiencing genuine difficulties with reading participated in the at-risk group. At the time of the study, there were few (if any) research studies that analyzed the effectiveness of ABRA with such a population.

A second difference between the current study and other research on ABRA is that this study incorporated an ESL group with its own ESL control group. At the time of the study, ABRA had not been used as an intervention specifically for ESL students.

4.3.1 Findings Involving Comparisons Across Groups

There were statistically significant interaction effects between condition (treatment and control) and time (pre- and posttest) within the ESL group for the Word Attack, Blending, and Letter-Word Identification subtests.
Since significant interaction effects were found for the ESL group but not the at-risk group, possible explanations are discussed.

First, the ratio of facilitator to students (group size) plays an important part when implementing an intervention program. Vaughn et al. (2003) noted that the smaller the ratio between the facilitator and the group of students, the greater the impact of the intervention. Their study examined three grouping formats: 1:1 (one teacher with one student), 1:3 (one teacher with three students), and 1:10 (one teacher with ten students). The results indicated that all groups made gains in reading scores after a supplemental reading intervention program, and that both the 1:1 and 1:3 groups made greater gains than the 1:10 group. Although the 1:1 group yielded statistically significantly higher scores on a phonological awareness measure (phoneme segmentation fluency) than the 1:10 group, it did not perform better than the 1:3 group on any measure at posttest (Vaughn et al., 2003).

It is important to note that in this study, ABRA sessions were conducted within four separate schools. The smallest group size was 1:8 (one teacher with eight students); the largest was 1:10 (one teacher with ten students). At this particular school, the principal joined the sessions to offer assistance for the majority of the 10-week intervention period. Within the ESL group, four ESL teachers facilitated the ABRA sessions. Because of the nature of the ESL withdrawal programming within schools in this district, teachers withdrew the ESL students from their regular classroom to receive the ABRA sessions. The smallest group was 1:1 (one teacher with one student); the largest was 1:2 (one teacher with two students). The discrepancy in ratios among groups may have contributed to the lack of significant results for the at-risk group.
Second, the students within the at-risk group who participated in the treatment condition scored lower than the control condition at pretest on all six literacy measures, despite best efforts to equalize the literacy profiles (specifically targeting phonological awareness skills) through a matching process. The RRST screen was used in the matching process. Students had to obtain eligibility (score below benchmark on the DIBELS screen, an accuracy- and fluency-focused tool) prior to the matching process.

Students in the treatment condition had lower pretest scores on all subtests in both the at-risk and ESL groups. Since the participants were screened using the Reading Readiness Screening Tool (RRST) and matched to allow for similar reading profiles among conditions within each classroom, further investigation by group provides possible explanations for the discrepancy among pretest scores.

At-Risk Group

First, it is important to note that the RRST screen used to match and distribute participants into the treatment and control conditions measured only phonological awareness skills. Two (out of six) pretests (Elision and Letter-Word Identification) assessed phonological awareness skills.

Second, as noted in section 2.4.4, if the difference in total RRST scores within a matched pair exceeded 2 points, a comparison of subtest scores was done to distribute the participants into the treatment or control condition. This was the case for 12 pairs of students. Comparison of subtests within these pairs found that the largest variance of scores was recorded within the Rhyme Deletion, Word Recognition, and Medial Sound Isolation subtests. The students were then placed into either the treatment or control conditions. This process may have contributed to the lower pretest scores in the treatment conditions.
Third, although there were equal numbers of participants in the overall treatment and control conditions for the at-risk group (N = 41 for both conditions), Schools 1 and 3 had more participants in the treatment condition. The average total RRST score for the treatment condition within the two classrooms was 74.8, while the average for the control condition was 83.8. This discrepancy may have contributed to the lower pretest scores on the Elision and Letter-Word Identification subtests for the treatment conditions. School 3 (one classroom) had one more participant in the treatment condition and an average total RRST score of 94, compared with 107.3 for the control condition. This discrepancy of 13.3 may have contributed to the lower pretest scores on the Elision and Letter-Word Identification subtests for the treatment condition.

ESL Group

Students in the ESL group did not receive DIBELS. They were automatically eligible to participate in the study if they were enrolled in Stage 1 or 2 of their school's ESL program, and they were matched based on RRST scores. Given that both conditions were matched, they should have had similar pretest scores on all subtests. However, the results show that the control condition outperformed the treatment condition on all pretest measures. For the Elision subtest, the mean difference between conditions was 2.7; for the Letter-Word Identification subtest, the mean difference was 2.3. An investigation of the matching procedure found that the total RRST scores of the 22 ESL participants varied greatly. There were four pairs of students whose total RRST scores differed by more than two points; these differences ranged from 5 to 11 points. A comparison of subtest scores was completed in order to complete the matching process. The largest ranges of subtest scores were found within the Sound Identification and Word Recognition subtests.
These variances could be explained by the students' limited and varied experience with the English language and literacy development, considering that they were placed in Stage 1 or 2 of their ESL program. Perhaps more effective screening tools could have been implemented. It is important to note that at least nine studies authored by the creators of ABRA (from 2007 onward, including cluster randomized controlled trials with populations exceeding 1,000) did not include a screening process. In other words, other researchers have conducted studies using ABRA with students of varying literacy profiles, while the current study focuses only on students who are at risk for reading disabilities and on ESL students. The ESL sample within this study can be seen as similar in profile to the typical ABRA sample in prior research.

4.4 Limitations

A number of considerations are important when interpreting the results of this study. Because I was unable to randomly select schools and classrooms, the participants were not randomly assigned to conditions. A random sample avoids bias because every member of the population has an equal opportunity to be selected; no section of the population is favored or excluded from the selection process. In addition, due to the small sample size of the ESL group (N = 22), the power to detect treatment impact is limited. This could account for the non-significant differences on three of the six subtests.

The screening process used assessments that were unfamiliar to the classroom teachers. DIBELS was used to measure the students' reading levels using two subtests: Nonsense Word Fluency and Phoneme Segmentation Fluency. Teachers use PM Benchmarks or the Diagnostic Reading Assessment (DRA) in the classroom to assess reading levels, and they could not relate DIBELS scores to these assessments.
It is worth exploring whether using teacher-friendly assessments would increase teacher motivation to participate in reading intervention programs.

For the purpose of obtaining equal literacy profiles in both conditions for the at-risk and ESL groups, a matching process was implemented, yet it was unsuccessful. This was evident in the clear differences in the two conditions' reading scores at pretest for both groups: the treatment condition scored lower than the control condition on all subtests. These differences at pretest may have resulted from the screen focusing only on phonological awareness skills, while the six standardized subtests assessed a range of literacy skills: phonics, phonological awareness, and word reading.

The reading program selected for this study (ABRA) is a computer-based, free-access program designed for children of preschool to primary school age. The animated characters and graphics may be perceived as too immature for those in the junior division (grades 4 and higher). Although the program contributed to the significant results on three measures for the ESL participants in Stage 1 or 2 of their ESL program, this may be an explanation for the non-significant results on the other three measures. Later in this chapter, this point supports an important implication for practice: that students should use age-appropriate, high-interest materials when participating in reading programs.

The implementation of the ABRA reading program took place according to the daily timeframe provided by each school's administrator. As noted earlier, research suggests that increasing the length of intervention sessions and the number of intervention days increases the likelihood of achieving improvements from a reading intervention program. School 1 delivered the program during the regular literacy block.
Schools 2 and 3 delivered the program in addition to regular classroom literacy instruction. School 4 delivered the program before and after school. It is important to consider that when a program is delivered in addition to regular classroom instruction (e.g., literacy), the classroom teachers must be prepared to have those participants miss another subject entirely, thus having to (a) compensate for missed instruction, or (b) notify the parents and indicate the absence appropriately on the District's Board-issued report cards.

The dosage of ABRA was less than the authors' recommended dosage (two hours per week for no less than 13 weeks, for a total of 26 hours). In this study, students received ABRA for 13 hours over a ten-week period, and this reduced dosage could have contributed to the lack of significant outcomes. Each intervention session was 20 minutes in length, which accounted for "transit time": the time it takes for students to walk from their classroom to the computer lab, set up, and walk back to their classrooms.

This study allotted two weeks for administration meetings, three weeks for screening, two weeks for pretesting, ten weeks for the intervention, one week for posttesting, and four days for survey completion; the entire process took six months. The administrators may not have preferred a longer intervention period, considering there were only two months left in the school year, so the chosen timeframe could not have been extended.

As noted earlier in this chapter, the at-risk treatment condition teacher-to-student ratio (smallest group: one teacher and eight students) was not comparable to the ESL treatment condition ratio (smallest group: one teacher and one student). Researchers note that smaller intervention groups (1:1 or 1:3) produce more significant results than larger groups (1:10) (Vaughn et al., 2003).

The chosen reading intervention program contained only one fluency activity.
This activity was incorporated into the intervention lesson plans. The lack of repeated oral reading practice with feedback and guidance from the facilitator may explain why there were no significant interaction effects on the Sight Word Efficiency (SWE) or Phonetic Decoding Efficiency (PDE) measures for either group.

Implementing the reading intervention sessions with fidelity means adhering to the guidelines set out by the authors, administering the various aspects of the program in a specific, structured way in order to meet the needs of the students. A set consultation time of one hour per week was offered to all facilitators of the program. At the beginning of the program, consultation with facilitators was ongoing; however, during the latter half of the program the facilitators chose not to meet, opting instead for communication via electronic mail. I visited the schools and observed the implementation of the sessions on a minimal basis (as I was a SERT and the facilitator at one of the participating schools, I was unable to be present at the other sites when facilitators were implementing the program). The measure of the fidelity of implementation was a simple checklist containing very few meaningful questions. Therefore, I cannot say for certain that the intervention was delivered with 100% fidelity. One teacher commented on the teacher survey that they could not tell whether the students' gains in literacy scores were truly due to ABRA or a result of natural development. Despite the low-quality measure of the fidelity of implementation, researchers note that fidelity measurement alone does not determine significant results; rather, it is the way the intervention program is integrated with literacy instruction that makes the difference in reading scores. Comaskey et al. (2009), Savage et al. (2009), Savage et al. (2013), and Wolgemuth et al.
(2013) note that significant results from using ABRA are attributable to quality differentiated instruction, teachers' technical competence, good lesson planning, and adequate student exposure to the program. ABRA offers professional development to teachers in many different forms (e.g., well-organized information sessions offered by the team members, the electronic Teacher Zone online, printable resources, assessment tools, and a chat room for communicating with colleagues). Teachers and parents also have the ability to log on to the website and navigate through resources that describe all ABRA content (e.g., activities and lessons). These levels of support were shared in conversations with the classroom teachers who were involved with the study. However, I did not guide the teachers through the necessary exploration of these supports; consequently, the teachers may not have received the full benefits of ABRA.

4.5 Implications

Examining results from this study and other research that used ABRA as a reading tool offers insights to guide both research and practice.

4.5.1 Implications for Research

There is a vast amount of research focusing on the development and implementation of reading intervention programs, especially for students who are at risk for reading disabilities. The National Reading Panel (2000) states that successful reading programs require five essential components of reading: phonics, phonological awareness, reading fluency, vocabulary, and reading comprehension. This guidance is valuable for the design, implementation, and successful outcomes of programs. Although research studies that used ABRA as the primary literacy tool purported to show a positive impact on students' reading ability, the program was not universally effective for all students (as in this study).
Although the treatment condition did score higher at posttest than at pretest, its trajectory remained parallel to the control condition's. This indicates that the impact on achievement for the treatment condition was not great enough to conclude that ABRA was the cause of the improvement in reading scores. Since previous research establishes fluency as a necessary component of reading, and ABRA contains only one fluency-related activity, it is recommended that ABRA develop more fluency-related activities. Given that this study differs from previous research that used ABRA with whole-class populations, in that it examined the effectiveness of ABRA for at-risk and ESL students, it may be worthwhile to establish a research-based dosage specific to students who are at risk for reading failure, or specific to ESL students.

ABRA is a reading program that is flexible in terms of dosage. The authors provide a guideline for teachers to use within a daily one-hour literacy block, including a detailed suggested breakdown of literacy components and time allocation. Research has indicated the effectiveness of ABRA used as a 13-hour intervention program (Savage et al., 2009). Using ABRA as an intervention program for students at risk for reading failure and ESL students, I created an agenda to suit the schools' needs and the timeline of the study (e.g., Monday = Alphabetic Principle activities, Tuesday = Phonological Awareness activities, Wednesday = Fluency activities, Thursday = Writing activities). I also developed this agenda and the daily lesson plans as a method to address the fundamental literacy skills required for successful reading, as identified by the National Reading Panel (2000). Further research identifying the most effective way to implement ABRA as an intervention tool would therefore be beneficial; such a research-based dosage may benefit students who are at risk for reading failure and ESL students.
In addition to the need for a research-based intervention dosage, longitudinal studies using ABRA (specifically with at-risk and ESL students) would be beneficial to demonstrate the effectiveness of the reading program on students' literacy skills over longer periods of time.

The data from this study reveal several practical applications worthy of future study. First, it would be valuable to expand ABRA as a reading tool for older students (e.g., students above the age of seven or eight) in order to engage them in their reading. Research has examined multiple aspects of reading engagement among children. Au (1997) referred to engagement as students' ownership of reading and writing through self-confidence. Oldfather et al. (1994) and Turner (1995) noted that students' intrinsic motivation and enjoyment of reading relate strongly to engaged reading. Across the numerous studies on reading engagement and motivation, it is widely accepted that readers are decision makers who choose to engage in the reading process. In other words, students comprehend a text not only because they are able to read, but because they are motivated to read (Guthrie et al., 2000).

4.5.2 Implications for Practice

At-Risk Group

The results from this study, in particular the overall non-significant results on student reading outcomes for the at-risk group, offer a starting point for other schools to consider using different reading intervention programs within an RTI service delivery model. Although the treatment condition's scores on all subtests at pretest were lower than the controls' for both groups, they remained parallel and on an upward trajectory. This indicates that there was some growth in the treatment condition; however, the cause is undetermined.
Possible reasons for the lower treatment pretest scores lead to an investigation of the procedure used with the screening tool (RRST) to match participants into treatment and control conditions. The explanations discussed in section 4.3.1 suggest that additional participants placed in the treatment condition within their own schools, with lower total RRST scores than their control counterparts, might have contributed to the low treatment pretest scores. Non-random assignment may have allowed for an unequal distribution of student profiles among conditions for both groups, and therefore could have contributed to the non-significant results in the treatment condition. It is also noted in section 4.3.1 that the RRST is a screen that focuses on phonological awareness skills, and that two pretests measured phonological awareness skills (Elision and Letter-Word Identification). For the Elision subtest within the at-risk group, the treatment condition (M = 7.32) scored 2.0 points lower than the control condition (M = 9.32). For the Letter-Word Identification subtest, the treatment condition (M = 22.46) scored lower than the control condition (M = 27.44). Although these pretest differences seem minimal, they might have been reduced by random assignment of participants.

ESL Group

For the Elision subtest within the ESL group, the treatment condition (M = 5.82) scored 2.7 points lower than the control condition (M = 8.55). For the Letter-Word Identification subtest, the treatment condition (M = 22.73) scored 2.3 points lower than the control condition (M = 25.09). Although these differences in pretest means seem minimal, they may have been caused by the matching system. Due to the wide variance of total RRST scores, subtest comparisons were made in order to place students into conditions.
This may have contributed to the non-significant outcomes in literacy scores for the treatment condition.

Improving literacy skills at an early age is an effective and crucial practice, yet teachers often have students in junior grades (grades 4 and 5) who demonstrate poor reading skills. In addition, ESL students placed in early stages (e.g., Stage 1 or Stage 2) at their school may be older than students in grade 1. Researchers have found that engaged reading is strongly associated with reading achievement: students who read actively and frequently improve their comprehension of text as a natural consequence (Cipielewski & Stanovich, 1992). ABRA captures the attention of young students with visually appealing cartoon characters, scenes, and game-like activities while enhancing their literacy skills. Capturing the attention of older students while teaching them essential literacy skills is equally critical. This study involved a group of ESL students ranging in age from six to ten years. Through conversations with the ESL teachers throughout the study, I learned that a small number of ESL students (two out of 11) demonstrated a lack of motivation to participate on two days (out of 40). Although the students did not specifically identify the cause of their disengagement, it is relevant to consider that it may have been due to a lack of age-appropriate material. Incorporating high-interest material into reading intervention programs for older students is important for future practice. Students who are older (e.g., ages eight and older) and who demonstrate the same level of poor reading achievement may require a computerized program that is more mature visually and contextually.
A third issue that would be valuable for further examination relates to the integration of technology in the classroom, and thus professional development for teachers who wish to implement technology-based literacy programs. In this study, the facilitators of the ABRA program consisted of ESL teachers, a SERT, a principal, a pre-service teacher, and only one classroom teacher. The reason for this is twofold. First, the ESL teachers, SERT, principal, and pre-service teacher were not responsible for teaching classes and were therefore available at any time to ensure smooth operation of the technology. Second, the classroom teacher volunteered to be a facilitator on her planning time in order for the students to receive the intervention. The lack of teacher facilitation (without professional development) can be seen as a major issue moving forward.

Researchers have noted that educators have been intrigued by the potential of technology to strengthen student learning (Hew & Brush, 2007). In this study, the results of the teacher survey indicate that, overall, teachers agree that ABRA is an effective literacy intervention tool and that they would use it in the future with their students. Nevertheless, I believe the teachers would require professional development in the form of learning workshops in order to become familiar with the implementation process. Although no specific data were collected on this point, it is essential to consider, since only one classroom teacher was involved in facilitating ABRA. Technology integration can be difficult without professional development. Hew and Brush (2007) reviewed 48 studies of technology integration within kindergarten to grade 12 classrooms.
They report specific barriers that contribute to the difficulty of implementing technology in the classroom, such as educators' knowledge and skills and their associated attitudes and beliefs about technology as a learning tool. Lack of specific technology knowledge and skills is one of the most common reasons teachers give for not using technology in the classroom (Snoeyink & Ertmer, 2001; Williams et al., 2000). From the teacher survey, it is encouraging to know that teachers' perceptions of the effectiveness of ABRA are positive and that they are willing to implement it with future students. However, I believe it is necessary to provide an ongoing information-sharing forum for teachers to explore and learn the technology in order to adopt a successful technology integration model within their classrooms.

When planning an intervention, teachers should consider smaller teacher-to-student ratios. Smaller groups allow the teacher to engage with every student more effectively, allocating more time to each student's specific needs. This potentially leads to increased engagement of both student and teacher, which is of paramount importance to the success of a program (Vaughn et al., 2002). Finally, smaller ratios allow the teacher to detect more easily when the efficacy of instructional interventions needs to be increased. When students do not respond to the intervention, one way for teachers to increase the efficacy of the instructional intervention is to modify specific components of the program to suit the needs of the student (Fuchs et al., 2010).

4.5.3 Conclusion

Given the importance of learning to read in grade 1, the severity of outcomes if students move past grade 3 and continue to be poor readers (Juel, 1988), and the need for intervention programs for ESL students, the demand for quality intervention programs continues to increase.
The need for effective, early intervention programs within a tiered model is essential.  121 Through a collaborative combination of research and practice, we attempt to reach the goal of intervening early in hopes to close literacy achievement gaps.                        122 References American Psychiatric Association. (2013). Diagnostic and statistical manual of mental disorders   (5th ed.). doi:10.1176/appi.books.9780890425596.744053  August, D., & Shanahan, T. (2006). Developing literacy in second-language learners: Report of   the national literacy panel on language minority children and youth. Mahwah, NJ:   Lawrence Erlbaum Associates. August, D., & Shanahan, T. (Eds.). (2008). Developing reading and writing in second-language  learners: lessons from the report of the National Literacy Panel on language-minority  children and youth. New York, NY: Routledge. Abrami, P.C., Savage, R., Wade, C. A., Hipps, G., & Lopez, M. (2008). Using technology to   assist children learning to read and write. In T. Willougby & E. Wood (Eds.), Children’s  learning in a digital world (pp. 129-172). Oxford, UK: Blackwell Publishing. Abrami, P. C., Bernard, R. M., Bures, E. M., Borokhovski, E., & Tamim, R. M. (2011).  Interaction in distance education and online learning: Using evidence and theory to improve practice. Journal of Computing in Higher Education, 23(2), 82-103. doi:10.1007/s12528-011-9043-x Abrami, P., Borohkovski, E., & Lysenko, L. (2015). The Effects of ABRACADABRA on  Reading Outcomes: A Meta-Analysis of Applied Field Research. Journal of Interactive  Learning Research, 26(4), 337-367. Abrami, P. C., Borokhovski, E., & Lysenko, L. (2015). The effects of ABRACADABRA on  reading outcomes: A meta-analysis of applied field research data. Journal of Interactive  Learning Research. Abrami, P. C., Savage, R., Deleveaux, G., Wade, A., Meyer, E., & LeBel, C. (2010). 
The learning toolkit: The design, development, testing and dissemination of evidence-based educational software. In The learning toolkit (pp. 168-187). Montreal, Quebec: IGI Global. doi:10.4018/978-1-61520-781-7.ch012
Abrami, P. C., Wade, A., Lysenko, L., Marsh, V., & Gioko, J. (2014). Using educational technology to develop early literacy skills in Sub-Saharan Africa. Education and Information Technologies, 1-20. doi:10.1007/s10639-014-9362-4
Abrami, P. C., White, B., & Wade, A. (2010). ABRACADABRA LTK teacher guide. Retrieved January 6, 2018, from http://grover.concordia.ca/abracadabra/resources/download/LTK_ABRA_lr.pdf
Adams, M. J., Stahl, S. A., Osborne, J., & Lehr, F. (1990). Beginning to read: The new phonics in context. Heinemann Educational.
Allington, R. L., & Walmsley, S. A. (1995). No quick fix: Rethinking literacy programs in America's elementary schools. Language and Literacy Series. New York, NY: Teachers College Press.
Archer, K., Savage, R., Sanghera-Sidhu, S., Wood, E., Gottardo, A., & Chen, V. (2014). Examining the effectiveness of technology use in classrooms: A tertiary meta-analysis. Computers & Education, 78, 140. doi:10.1016/j.compedu.2014.06.001
Au, K. H. (1997). Ownership, literacy achievement, and students of diverse cultural backgrounds. In Reading engagement: Motivating readers through integrated instruction (pp. 168-182). doi:10.1207/s15548430jlr3703_1
Bateman, B. D. (1965). An educational view of a diagnostic approach to learning disabilities. In J. Hellmuth (Ed.), Learning disorders (Vol. 1, pp. 219-239). Seattle, WA: Special Child Publications.
Baddeley, A. D. (1983). Working memory. Philosophical Transactions of the Royal Society of London B: Biological Sciences, 302(1110), 311-324. doi:10.1098/rstb.1983.0057
Ball, E. W., & Blachman, B. A. (1991).
Does phoneme awareness training in kindergarten make a difference in early word recognition and developmental spelling? Reading Research Quarterly, 26(1), 49-66. doi:10.1598/RRQ.26.1.3
Barr, R., Sadow, M. W., & Blachowicz, C. L. Z. (1995). Reading diagnosis for teachers: An instructional approach (3rd ed.). White Plains, NY: Longman Publishers USA.
Barwick, M., & Siegel, L. S. (1996). Learning difficulties in adolescent clients of a shelter for runaway and homeless street youths. Journal of Research on Adolescence, 6(4), 649-670.
Beaver, J., & Celebration Press. (2002). DRA: Developmental Reading Assessment. Parsippany, NJ: Celebration Press.
Ben-Dror, I., Bentin, S., & Frost, R. (1995). Semantic, phonologic, and morphologic skills in reading disabled and normal children: Evidence from perception and production of spoken Hebrew. Reading Research Quarterly, 30(4), 876-893. doi:10.2307/748202
Bentin, S., Deutsch, A., & Liberman, I. Y. (1990). Syntactic competence and reading ability in children. Journal of Experimental Child Psychology, 49(1), 147-172. doi:10.1016/0022-0965(90)90053-B
Bereiter, C., & Scardamalia, M. (2003). Learning to work creatively with knowledge. In Powerful learning environments: Unravelling basic components and dimensions (pp. 55-68). Retrieved October 2015 from http://ikit.org/fulltext/2003_KBE.pdf
Bereiter, C., & Egnatoff, W. J. (2004). Education and mind in the knowledge age. Vancouver: Association for Media and Technology in Education in Canada.
Bishop, D. V. M., & Adams, C. (1990). A prospective study of the relationship between specific language impairment, phonological disorders and reading retardation. Journal of Child Psychology and Psychiatry, 31(7), 1027-1050. doi:10.1111/j.1469-7610.1990.tb00844.x
Blok, H., Oostdam, R., Otter, M. E., & Overmaat, M. (2002). Computer-assisted instruction in support of beginning reading instruction: A review.
Review of Educational Research, 72(1), 101-130. doi:10.3102/00346543072001101
Boetsch, E. A., Green, P., & Pennington, B. F. (1996). Psychosocial correlates of dyslexia across the life span. Development and Psychopathology, 8(3), 539-562. doi:10.1017/S0954579400007264
Brady, S. A. (1997). Ability to encode phonological representations: An underlying difficulty of poor readers. In Foundations of reading acquisition and dyslexia: Implications for early intervention (pp. 21-33). New York, NY: Routledge.
Branum-Martin, L., Fletcher, J. M., & Stuebing, K. K. (2013). Classification and identification of reading and math disabilities: The special case of comorbidity. Journal of Learning Disabilities, 46(6), 490-499. doi:10.1177/0022219412468767
Burstein, J., Sabatini, J., Shore, J., Moulder, B., & Lentini, J. (2013). Technology to increase teachers' linguistic awareness to improve instructional language support for English language learners. Association for Computational Linguistics, 1-10.
Byrne, B., & Fielding-Barnsley, R. (1989). Phonemic awareness and letter knowledge in the child's acquisition of the alphabetic principle. Journal of Educational Psychology, 81(3), 313-321. doi:10.1037/0022-0663.81.3.313
Catts, H. W., & Weismer, S. E. (2006). Language deficits in poor comprehenders: A case for the simple view of reading. Journal of Speech, Language, and Hearing Research, 49(2), 278-293. doi:10.1044/1092-4388(2006/023)
Carroll, J. (1963). A model of school learning. The Teachers College Record, 64(8), 723-733.
Centre for the Study of Learning and Performance. (2009). The learning toolkit. Retrieved January 6, 2018, from http://doe.concordia.ca/clsp/ICT-LTK.php
Chambers, B., Slavin, R. E., Madden, N. A., Abrami, P., Logan, M. K., & Gifford, R. (2011). Small-group, computer-assisted tutoring to improve reading outcomes for struggling first and second graders. The Elementary School Journal, 111(4), 625-640.
doi:10.1086/659035
Chambers, B., Slavin, R. E., Madden, N. A., Abrami, P. C., Tucker, B. J., Cheung, A., & Gifford, R. (2008). Technology infusion in Success for All: Reading outcomes for first graders. The Elementary School Journal, 109(1), 1-15. doi:10.1086/592364
Chapman, J. W. (1988). Learning disabled children's self-concept. Review of Educational Research, 58(3), 347-371. doi:10.3102/00346543058003347
Chard, D. J., Simmons, D. C., & Kameenui, E. J. (1995). Understanding the primary role of word recognition in the reading process: Synthesis of research on beginning reading. In D. Simmons & E. Kameenui (Eds.), What reading research tells us about children with diverse learning needs (pp. 180-191). Portland, Oregon: Routledge.
Cheung, A., Mak, B., Abrami, P. C., Wade, A., & Lysenko, L. (2015, in preparation). The effectiveness of the ABRACADABRA (ABRA) web-based literacy program on primary school students in Hong Kong.
Cheung, A. C. K., & Slavin, R. E. (2012). How features of educational technology applications affect student reading outcomes: A meta-analysis. Educational Research Review, 7(3), 198-215. doi:10.1016/j.edurev.2012.05.002
Chiappe, P., & Siegel, L. S. (1999). Phonological awareness and reading acquisition in English- and Punjabi-speaking Canadian children. Journal of Educational Psychology, 91(1), 20. doi:10.1037/0022-0663.91.1.20
Chiappe, P., Siegel, L. S., & Gottardo, A. (2002). Reading-related skills of kindergartners from diverse linguistic backgrounds. Applied Psycholinguistics, 23(1), 95-116. doi:10.1017/S014271640200005X
Cipielewski, J., & Stanovich, K. E. (1992). Predicting growth in reading ability from children's exposure to print. Journal of Experimental Child Psychology, 54(1), 74-89. doi:10.1016/0022-0965(92)90018-2
Cisero, C. A., & Royer, J. M. (1995). The development and cross-language transfer of phonological awareness.
Contemporary Educational Psychology, 20(3), 275-303. doi:10.1006/ceps.1995.1018
Clarke, B., Doabler, C., & Strand Cary, M. (2014). Preliminary evaluation of a tier 2 intervention for first-grade students: Using a theory of change to guide formative evaluation activities. School Psychology Review, 43(2), 160-177.
What Works Clearinghouse. (2008). Procedures and standards handbook (Version 3.0). Retrieved May 7, 2017.
Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates.
Cohen, R. L., & Netley, C. (1978). Cognitive deficits, learning disabilities, and WISC Verbal-Performance consistency. Developmental Psychology, 14(6), 624. doi:10.1037/0012-1649.14.6.624
Comaskey, E. M., Savage, R. S., & Abrami, P. (2009). A randomized efficacy study of web-based synthetic and analytic programmes among disadvantaged urban kindergarten children. Journal of Research in Reading, 32(1), 92-108. doi:10.1111/j.1467-9817.2008.01383.x
Cowen, J. E. (2003). A balanced approach to beginning reading instruction: A synthesis of six major U.S. research studies. Newark, DE: International Reading Association. Retrieved July 6, 2016, from http://ubc.summon.serialsolutions.com
Crosson, A. C., & Lesaux, N. K. (2010). Revisiting assumptions about the relationship of fluent reading to comprehension: Spanish-speakers' text-reading fluency in English. Reading and Writing, 23(5), 475-494. doi:10.1007/s11145-009-9168-8
Cuban, L., Kirkpatrick, H., & Peck, C. (2001). High access and low use of technologies in high school classrooms: Explaining an apparent paradox. American Educational Research Journal, 38(4), 813-834. doi:10.3102/00028312038004813
Cunningham, A. E., & Stanovich, K. E. (1998). What reading does for the mind. American Educator, 22(1/2), 1-8.
Cunningham, A. E., Perry, K. E., Stanovich, K. E., & Stanovich, P. J. (2004).
Disciplinary knowledge of K-3 teachers and their knowledge calibration in the domain of early literacy. Annals of Dyslexia, 54(1), 139-167. doi:10.1007/s11881-004-0007-y
D'Angiulli, A., & Siegel, L. S. (2003). Cognitive functioning as measured by the WISC-R: Do children with learning disabilities have distinctive patterns of performance? Journal of Learning Disabilities, 36(1), 48-58. doi:10.1177/00222194030360010601
Da Fontoura, H. A., & Siegel, L. S. (1995). Reading, syntactic, and working memory skills of bilingual Portuguese-English Canadian children. Reading and Writing, 7(1), 139-153. doi:10.1007/BF01026951
Daniel, S. S., Walsh, A. K., Goldston, D. B., Arnold, E. M., Reboussin, B. A., & Wood, F. B. (2006). Suicidality, school dropout, and reading problems among adolescents. Journal of Learning Disabilities, 39(6), 507-514. doi:10.1177/00222194060390060301
Deault, L., Savage, R., & Abrami, P. (2009). Inattention and response to the ABRACADABRA web-based literacy intervention. Journal of Research on Educational Effectiveness, 2(3), 250-286. doi:10.1080/19345740902979371
Dede, C. (1996). The evolution of distance education: Emerging technologies and distributed learning. American Journal of Distance Education, 10(2), 4-36. doi:10.1080/08923649609526919
Denckla, M. B., & Rudel, R. G. (1976). Rapid 'automatized' naming (RAN): Dyslexia differentiated from other learning disabilities. Neuropsychologia, 14(4), 471-479. doi:10.1016/0028-3932(76)90075-0
Dickens, R. H., Meisinger, E. B., & Tarar, J. M. (2015). Test review: Comprehensive Test of Phonological Processing-2nd ed. (CTOPP-2) by Wagner, R. K., Torgesen, J. K., Rashotte, C. A., & Pearson, N. A. Canadian Journal of School Psychology, 30(2), 155-162. doi:10.1177/0829573514563280
Durgunoğlu, A. Y., Nagy, W. E., & Hancin-Bhatt, B. J. (1993). Cross-language transfer of phonological awareness. Journal of Educational Psychology, 85(3), 453.
doi:10.1037/0022-0663.85.3.453
Dusenbury, L., Brannigan, R., Falco, M., & Hansen, W. (2003). A review of research on fidelity of implementation: Implications for drug abuse prevention in school settings. Health Education Research, 18, 237-256. doi:10.1093/her/18.2.237
Dynarski, M., Agodini, R., Heaviside, S., Novak, T., Carey, N., Campuzano, L., Means, B., Murphy, R., Penuel, W., Javitz, Emery, D., & Sussex, W. (2007). Effectiveness of reading and mathematics software products: Findings from the first student cohort. Report number NCEE 2007-4005.
Education Act, R.S.O. (1990, c. E-2). Retrieved June 9, 2016, from the Department of Education Ontario: https://www.ontario.ca/laws/statute/90e02
Ehri, L. C. (2005). Learning to read words: Theory, findings, and issues. Scientific Studies of Reading, 9(2), 167-188. doi:10.1002/9780470757642.ch8
Ehri, L. C., Nunes, S. R., Willows, D. M., Schuster, B. V., Yaghoub-Zadeh, Z., & Shanahan, T. (2001). Phonemic awareness instruction helps children learn to read: Evidence from the National Reading Panel's meta-analysis. Reading Research Quarterly, 36(3), 250-287. doi:10.1598/RRQ.36.3.2
Elbaum, B., Vaughn, S., Hughes, M. T., & Watson Moody, S. (2000). How effective are one-to-one tutoring programs in reading for elementary students at risk for reading failure? A meta-analysis of the intervention research. Journal of Educational Psychology, 92(4), 605-619. doi:10.1037/0022-0663.92.4.605
Faggella-Luby, M. N., & Deshler, D. D. (2008). Reading comprehension in adolescents with LD: What we know; what we need to learn. Learning Disabilities Research & Practice, 23(2), 70-78. doi:10.1111/j.1540-5826.2008.00265.x
Fiedorowicz, C., Craig, M. J., Phillips, M., Price, A., & Bullivant, M. G. (2015). Retrieved May 6, 2017, from the Learning Disabilities Association of Canada: www.ldac-acta.ca
Foorman, B. R., & Moats, L. C. (2004). Conditions for sustaining research-based practices in early reading instruction.
Remedial and Special Education, 25(1), 51-60. doi:10.1177/07419325040250010601
Foster, S. L., Bell-Dolan, D. J., & Burge, D. A. (1988). Behavioral observation. In A. S. Bellack & M. Hersen (Eds.), Behavioral assessment: A practical handbook (3rd ed.). New York, NY: Pergamon.
Francis, D. J., Fletcher, J. M., Stuebing, K. K., Lyon, G. R., Shaywitz, B. A., & Shaywitz, S. E. (2005). Psychometric approaches to the identification of LD: IQ and achievement scores are not sufficient. Journal of Learning Disabilities, 38(2), 98-108. doi:10.1177/00222194050380020101
Fuchs, D., & Fuchs, L. S. (2006). Introduction to response to intervention: What, why, and how valid is it? Reading Research Quarterly, 41(1), 93-99. doi:10.1598/RRQ.41.1.4
Fuchs, L. S., Geary, D. C., Compton, D. L., Fuchs, D., Hamlett, C. L., & Bryant, J. D. (2010). The contributions of numerosity and domain-general abilities to school readiness. Child Development, 81, 1520-1533. doi:10.1111/j.1467-8624.2010.01489.x
Geva, E., & Siegel, L. S. (2000). Orthographic and cognitive factors in the concurrent development of basic reading skills in two languages. Reading and Writing: An Interdisciplinary Journal, 12(1-2), 1-30. doi:10.1023/A:1008017710115
Good, R. H., & Kaminski, R. A. (1996, 2007). Dynamic indicators of basic early literacy skills (6th ed.). Eugene, OR: University of Oregon.
Guthrie, J. T., Wigfield, A., & VonSecker, C. (1996). Effects of integrated instruction on motivation and strategy use in reading. Journal of Educational Psychology, 92(2), 331-341. doi:10.1037/0022-0663.92.2.331
Hedges, L. (1981). Distribution theory for Glass's estimator of effect size and related estimators. Journal of Educational Statistics, 6(2), 107-128.
Hew, K. F., & Brush, T. (2007). Integrating technology into K-12 teaching and learning: Current knowledge gaps and recommendations for future research. Educational Technology Research and Development, 55(3), 223-252.
doi:10.1007/s11423-006-9022-5
Hipps, G., Abrami, P. C., & Savage, R. (2005). ABRACADABRA: The research, design and development of web-based early literacy software. In S. Pierre (Ed.), DIVA. Innovations et tendances en technologies de formation et d'apprentissage (pp. 89-112). Montreal: Presses Internationales Polytechnique.
Hurford, D., & Wright, C. (2003). Test review of the Test of Word Reading Efficiency. In B. S. Plake, J. C. Impara, & R. A. Spies (Eds.), The fifteenth mental measurements yearbook (pp. 226-232). Lincoln, NE: Buros Institute of Mental Measurements.
Individuals With Disabilities Education Act, 20 U.S.C. § 1400 (2004).
Johnston, S. S., McDonnell, A. P., & Hawken, L. S. (2008). Enhancing outcomes in early literacy for young children with disabilities: Strategies for success. Intervention in School and Clinic, 43(4), 210. Retrieved May 3, 2016, from http://journals.sagepub.com/doi/abs/10.1177/1053451207310342
Juel, C. (1991). Beginning reading. In R. Barr (Ed.), Handbook of reading research (pp. 759-788). Beaverton: Ringgold Inc.
Juel, C. (1988). Learning to read and write: A longitudinal study of 54 children from first through fourth grades. Journal of Educational Psychology, 80(4), 437. doi:10.1037/0022-0663.80.4.437
Juel, C., & Minden-Cupp, C. (2000). Learning to read words: Linguistic units and instructional strategies. Reading Research Quarterly, 35(4), 458-492. doi:10.1598/RRQ.35.4.2
Kamhi, A., & Catts, H. (1986). Toward an understanding of developmental language and reading disorders. Journal of Speech and Hearing Disorders, 51, 337-347. doi:10.1044/jshd.5104.337
Kavale, K. A., Holdnack, J. A., & Mostert, M. P. (2005). Responsiveness to intervention and the identification of specific learning disability: A critique and alternative proposal [corrected version]. Learning Disability Quarterly, 29(2), 113-127. Retrieved February 16, 2016, from http://journals.sagepub.com/doi/abs/10.2307/4126970
Lane, H., & Cullen Pullen, P. (2015). Blending wheels: Tools for decoding practice. Teaching Exceptional Children, 48(2), 86-92. doi:10.1177/0040059915594791
Learning Disability Association of Alberta. (2015). Reading readiness screening tool.
Learning Disability Association of Ontario. (2001). Learning Disability memo.
Lesaux, N. K., Geva, E., Koda, K., & Siegel, L. S. (2008). In D. August & T. Shanahan (Eds.), Developing reading and writing in second language learners (pp. 27-59).
Lesaux, N. K., & Siegel, L. S. (2003). The development of reading in children who speak English as a second language. Developmental Psychology, 39(6), 1005.
Liberman, A. M., & Mattingly, I. G. (1985). The motor theory of speech perception revised. Cognition, 21, 1-36.
Liberman, I. Y., & Shankweiler, D. (1985). Phonology and the problems of learning to read and write. Remedial and Special Education, 6(6), 8-17. doi:10.1177/074193258500600604
Lichtenstein, & Zantol-Wiener. (1988). Current issues in new directions in the integration of cognition, neurobiology and genetics of reading. In Pugh & McCardle, How children learn to read (p. 24). New York, NY: Psychology Press.
Lipka, O., & Siegel, L. S. (2007). The development of reading skills in children with English as a second language. Scientific Studies of Reading, 11(2), 105-131. doi:10.1080/10888430709336555
Lipka, O., & Siegel, L. S. (2010). The improvement of reading skills of L1 and ESL children using a response to intervention (RtI) model. Psicothema, 22(4), 963-969. Retrieved September 13, 2014, from http://www.redalyc.org/html/727/72715515064/
Lipka, O., & Siegel, L. S. (2011; 2012). The development of reading comprehension skills in children learning English as a second language. Reading and Writing, 25(8), 1873-1898. doi:10.1007/s11145-011-9309-8
Lovett, M. W., Barron, R. W., & Frijters, J. C. (2013).
Word identification difficulties in children and adolescents with reading disabilities. The handbook of learning disabilities, 329-360.
Lovett, M. W., De Palma, M., Frijters, J., Steinbach, K., Temple, M., Benson, N., & Lacerenza, L. (2008). Interventions for reading difficulties: A comparison of response to intervention by ELL and EFL struggling readers. Journal of Learning Disabilities, 41(4), 333-352. doi:10.1177/0022219408317859
Lovett, M. W., Steinbach, K. A., & Frijters, J. C. (2000). Remediating the core deficits of developmental reading disability: A double-deficit perspective. Journal of Learning Disabilities, 33(4), 334-358. doi:10.1177/002221940003300406
Lyon, G. R., Shaywitz, S. E., & Shaywitz, B. A. (2003). A definition of dyslexia. Annals of Dyslexia, 53(1), 1-14. doi:10.1007/s11881-003-0001-9
Lysenko, L. V., & Abrami, P. C. (2014). Promoting reading comprehension with the use of technology. Computers & Education, 75, 162-172. doi:10.1016/j.compedu.2014.01.010
Ministry of Education. (2001). The Ontario curriculum grades 1-8: English as a second language and English literacy development: A resource guide.
MacArthur, C. A., Ferretti, R. P., Okolo, C. M., & Cavalier, A. R. (2001). Technology applications for students with literacy problems: A critical review. The Elementary School Journal, 273-301. doi:10.1086/499669
Majsterek, D. J., & Ellenwood, A. E. (1990). Screening preschoolers for reading learning disabilities: Promising procedures. LD Forum, 16, 6-14.
Mayer, R. E. (2003). The promise of multimedia learning: Using the same instructional design methods across different media. Learning and Instruction, 13(2), 125-139. doi:10.1016/S0959-4752(02)00016-6
McBride, H., & Siegel, L. S. (1997). Learning disabilities and adolescent suicide. Journal of Learning Disabilities, 30(6), 652-659. doi:10.1177/002221949703000609
McLeskey, J., & Grizzle, K. L. (1992).
Grade retention rates among students with learning disabilities. Exceptional Children, 58(6), 548.
Ministry of Ontario, Education Division. (1982). Policy/Program Memorandum No. 11: Early identification of children's learning needs. Retrieved June 16, 2016, from http://www.edu.gov.on.ca/extra/eng/ppm/11.html
Ministry of Ontario, Education Division. (2014). Policy/Program Memorandum No. 8: Identification of and program planning for students with learning disabilities. Retrieved June 16, 2016, from http://www.edu.gov.on.ca/extra/eng/ppm/ppm8.pdf
Morin, A. (2014, Dec 10). Understanding dyscalculia. Retrieved January 29, 2017, from www.understood.org
Morris, D., Shaw, B., & Perney, J. (1990). Helping low readers in grades 2 and 3: An after-school volunteer tutoring program. The Elementary School Journal. Retrieved July 12, 2016, from http://psycnet.apa.org/doi/10.1086/461642
National Centre on Response to Intervention. (2010). Developing an RTI guidance document. Washington, DC: U.S. Department of Education, Office of Special Education Programs, National Center on Response to Intervention. Retrieved January 4, 2017, from http://www.rti4success.org
National Education Goals Panel (U.S.). (1999). Reading achievement state by state, 1999. Retrieved May 2, 2016, from https://eric.ed.gov/?id=ED428332
National Institute of Child Health and Human Development (U.S.), & National Reading Panel (U.S.). (2000). National Reading Panel: Teaching children to read: An evidence-based assessment of the scientific research literature on reading and its implications for reading instruction: Reports of the subgroups (No. 00-4754). Washington, D.C.: National Institute of Child Health and Human Development, National Institutes of Health. Retrieved September 6, 2016, from http://onlinelibrary.wiley.com/doi/10.1598/RRQ.36.3.1/pdf
Norton, E. S., & Wolf, M. (2012).
Rapid automatized naming (RAN) and reading fluency: Implications for understanding and treatment of reading disabilities. Annual Review of Psychology, 63, 427-452. doi:10.1146/annurev-psych-120710-100431
Oldfather, P., & Dahl, K. (1994). Toward a social constructivist reconceptualization of intrinsic motivation for literacy learning. Journal of Literacy Research, 26(2), 139-158. doi:10.1080/10862969409547843
Olson, R. K., & Wise, B. W. (1992). Reading on the computer with orthographic and speech feedback. Reading and Writing, 4(2), 107-144. doi:10.1007/BF01027488
Ontario Ministry of Education. (2008). Supporting English language learners: A practical guide for Ontario educators, grades 1 to 8 (No. 2010-01432). Toronto: Ministry of Education. Retrieved June 8, 2016, from http://www.edu.gov.on.ca/eng/
Ontario Ministry of Education. (2010). Growing success: Assessment, evaluation, and reporting in Ontario schools: Covering grades 1 to 12.
Ontario Ministry of Education. (2013). Demographics of Halton Catholic District School Board. Retrieved October 4, 2015, from http://www.edu.gov.on.ca/eng/
Ontario Ministry of Education. (2015). Steps to English Proficiency. Retrieved January 6, 2018, from http://www.edugains.ca/resourcesELL/Assessment/STEP/STEP_InitialLanguageAssessment/STEPUserGuide_IntialAssessment_June2012.pdf
Partanen, M., & Siegel, L. S. (2014). Long-term outcome of the early identification and intervention of reading disabilities. Reading and Writing, 27(4), 665-684. doi:10.1007/s11145-013-9472-1
Pennington, B. F. (2006). From single to multiple deficit models of developmental disorders. Cognition, 101(1), 385-413.
Pennington, B. F., & Bishop, D. V. (2009). Relations among speech, language, and reading disorders. Annual Review of Psychology, 60, 283-306. doi:10.1146/annurev.psych.60.110707.163548
Patino, E. (2014, Jan 1). Understanding dysgraphia. Retrieved January 29, 2017, from https://www.understood.org
Pikulski, J. (1994). Preventing reading failure: A review of five effective programs. The Reading Teacher, 48(1), 30-39. Retrieved August 3, 2016, from http://www.jstor.org.ezproxy.library.ubc.ca/stable/20201362
Piquette, N. A., Savage, R. S., & Abrami, P. C. (2014). A cluster randomized control field trial of the ABRACADABRA web-based reading technology: Replication and extension of basic findings. Frontiers in Psychology, 5.
Pugh, K., & McCardle, P. (Eds.). (2011). How children learn to read: Current issues and new directions in the integration of cognition, neurobiology and genetics of reading and dyslexia research and practice. Taylor & Francis.
Qi, S., & O'Connor, R. (2000). Comparison of phonological training procedures in kindergarten classrooms. The Journal of Educational Research, 93(4), 226-233. doi:10.1080/00220670009598711
RAND Reading Study Group. (2002). Reading for understanding: Toward an R&D program in reading comprehension. Washington, DC: RAND Education.
Florida Center for Reading Research. (2007). Retrieved October 10, 2015, from http://www.fcrr.org
Sabornie, E. (1994). Social-affective characteristics in early adolescents identified as learning disabled and nondisabled. Learning Disability Quarterly, 17(4), 268-279. doi:10.2307/1511124
Sandvik, J. M., van Daal, V. H., & Adèr, H. J. (2013). Emergent literacy: Preschool teachers' beliefs and practices. Journal of Early Childhood Literacy. doi:10.1177/1468798413478026
Savage, R., & Abrami, P. (2007). ABRACADABRA: Progress in the development, implementation and effectiveness of a web-based literacy resource. In World Conference on E-Learning in Corporate, Government, Healthcare, and Higher Education (Vol. 1, pp. 6530-6536).
Savage, R. S., Abrami, P., Hipps, G., & Deault, L. (2008). A randomized controlled trial study of the ABRACADABRA reading intervention program in grade 1. Journal of Educational Psychology, 101(3), 590-604.
doi:10.1037/a0014700
Savage, R. S., Abrami, P., Hipps, G., & Deault, L. (2009). A randomized controlled trial study of the ABRACADABRA reading intervention program in grade 1. Journal of Educational Psychology, 101(3), 590. Retrieved September 4, 2015, from http://psycnet.apa.org/doi/10.1037/a0014700
Savage, R. S., Erten, O., Abrami, P., Hipps, G., Comaskey, E., & van Lierop, D. (2010). ABRACADABRA in the hands of teachers: The effectiveness of a web-based literacy intervention in grade 1 language arts programs. Computers & Education, 55(2), 911-922. doi:10.1016/j.compedu.2010.04.002
Savage, R., Abrami, P. C., Piquette, N., Wood, E., Deleveaux, G., Sanghera-Sidhu, S., & Burgos, G. (2013). A (pan-Canadian) cluster randomized control effectiveness trial of the ABRACADABRA web-based literacy program. Journal of Educational Psychology, 105(2), 310.
Scanlon, D. M., & Vellutino, F. R. (1997). A comparison of the instructional backgrounds and cognitive profiles of poor, average, and good readers who were initially identified as at risk for reading failure. Scientific Studies of Reading, 1(3), 191-215.
Scarborough, H., & Fowler, A. (1993). The relationship between language disorders and reading disabilities. Neurophysiology Speech and Language Disorders, 3(1), 12-15.
Scardamalia, M., & Bereiter, C. (2005). Does education for the knowledge age need a new science. European Journal of School Psychology, 3(1), 21-40. Retrieved October 10, 2015, from http://www.ikit.org/fulltext/2005_DoesEducation.pdf
Constitution Act, 1982, being Schedule B to the Canada Act 1982 (U.K.), 1982, c. 11.
Schmid, R. F., Bernard, R. M., Borokhovski, E., Tamim, R. M., Abrami, P. C., Surkes, M. A., & Woods, J. (2014). The effects of technology use in postsecondary education: A meta-analysis of classroom applications. Computers & Education, 72, 271-291. doi:10.1016/j.compedu.2013.11.002
Share, D. L., Jorm, A. F., Maclean, R., & Matthews, R.
(1984). Sources of individual differences in reading acquisition. Journal of Educational Psychology, 76(6), 1309-1324. doi:10.1037/0022-0663.76.6.1309
Siegel, L. S. (1989). IQ is irrelevant to the definition of learning disabilities. Journal of Learning Disabilities, 22(8), 469-478. doi:10.1177/002221948902200803
Siegel, L. S. (2003). IQ-discrepancy definitions and the diagnosis of LD: Introduction to the special issue. Journal of Learning Disabilities, 36(1), 2-3. doi:10.1177/00222194030360010101
Siegel, L. S. (2009). Introduction to the special issue. Reading and Writing, 22(7), 753-754. doi:10.1007/s11145-009-9171-0
Siegel, L. S., Chiappe, P., & Wade-Woolley, L. (2002). Linguistic diversity and the development of reading skills: A longitudinal study. Scientific Studies of Reading, 6(4), 369-400. doi:10.1207/S1532799XSSR0604_04
Siegel, L. S., & Heaven, R. K. (1986). Categorization of learning disabilities. Handbook of cognitive, social, and neuropsychological aspects of learning disabilities, 1, 95-121.
Siegel, L. S., & Linder, B. A. (1984). Short-term memory processes in children with reading and arithmetic learning disabilities. Developmental Psychology, 20(2), 200-207. doi:10.1037/0012-1649.20.2.200
Siegel, L. S., & Ryan, E. B. (1984). Reading disability as a language disorder. Remedial and Special Education, 5(3), 28-33. doi:10.1177/074193258400500308
Smith, S. B., Simmons, D. C., & Kameenui, E. J. (1995). Synthesis of research on phonological awareness: Principles and implications for reading acquisition. In D. Simmons & E. J. Kameenui (Eds.), What reading research tells us about children with diverse learning needs (pp. 230-245). Portland, Oregon: Routledge.
Snoeyink, R., & Ertmer, P. A. (2001). Thrust into technology: How veteran teachers respond. Journal of Educational Technology Systems, 30(1), 85-111. doi:10.2190/YDL7-XH09-RLJ6-MTP1
So, D., & Siegel, L. S. (1997).
Learning to read Chinese: Semantic, syntactic, phonological and working memory skills in normally achieving and poor Chinese readers. Reading and Writing, 9(1), 1-21. doi:10.1023/A:1007963513853
Solis, M., Miciak, J., Vaughn, S., & Fletcher, J. M. (2014). Why intensive interventions matter: Longitudinal studies of adolescents with reading disabilities and poor reading comprehension. Learning Disability Quarterly, 37(4), 218-229.
Sprenger-Charolles, L., Siegel, L. S., & Bonnet, P. (1998). Reading and spelling acquisition in French: The role of phonological mediation and orthographic factors. Journal of Experimental Child Psychology, 68(2), 134-165. doi:10.1006/jecp.1997.2422
Stanovich, K. E. (1986). Matthew effects in reading: Some consequences of individual differences in the acquisition of literacy. Reading Research Quarterly, 360-407. Retrieved June 8, 2016, from http://www.jstor.org/stable/747612
Stanovich, K. E. (1988). Explaining the differences between the dyslexic and the garden-variety poor reader: The phonological-core variable-difference model. Journal of Learning Disabilities, 21(10), 590-604. doi:10.1177/002221948802101003
Stanovich, K. (1991). Word recognition: Changing perspectives. In Handbook of reading research (Vol. 2, pp. 419-425). New Jersey: Lawrence Erlbaum Associates.
Stanovich, K. E., & Siegel, L. S. (1994). Phenotypic performance profile of children with reading disabilities: A regression-based test of the phonological-core variable-difference model. Journal of Educational Psychology, 86(1), 24. doi:10.1037/0022-0663.86.1.24
Statistics Canada. (2006, 2011). Ontario census of population. Oakville: Statistics Canada.
Stevens, J. (2009). Applied multivariate statistics for the social sciences (5th ed.). New York: Routledge.
Stuebing, K. K. (2008).
A response to recent reanalyses of the National Reading Panel report: Effects of systematic phonics instruction are practically significant. Journal of Educational Psychology, 100(1), 123-134. doi:10.1037/0022-0663.100.1.123
Sussel, T. A. (1995). Canada's legal revolution: Public education, the Charter, and human rights. Emond Montgomery Publications.
Swanson, H. L. (1999). Reading research for students with LD: A meta-analysis of intervention outcomes. Journal of Learning Disabilities, 32(6), 504-532. doi:10.1177/002221949903200605
Tamim, R. M., Bernard, R. M., Borokhovski, E., Abrami, P. C., & Schmid, R. F. (2011). What forty years of research says about the impact of technology on learning: A second-order meta-analysis and validation study. Review of Educational Research, 81(1), 4-28. doi:10.3102/0034654310393361
Tannock, R. (2014). DSM-5 changes in diagnostic criteria for Specific Learning Disabilities (SLD): What are the implications? International Dyslexia Association. Retrieved November 2015 from http://dyslexiahelp.umich.edu
Tarar, J. M., Meisinger, E. B., & Dickens, R. H. (2015). Test review: Test of Word Reading Efficiency-Second Edition (TOWRE-2) by Torgesen, J. K., Wagner, R. K., & Rashotte, C. A. Canadian Journal of School Psychology. doi:10.1177/0829573515594334
Taub, G. E., & McGrew, K. S. (2014). The Woodcock-Johnson Tests of Cognitive Abilities III's cognitive performance model: Empirical support for intermediate factors within CHC theory. Journal of Psychoeducational Assessment, 32(3), 187-201.
Thurlow, M. L., Ysseldyke, J. E., Wotruba, J. W., & Algozzine, B. (1993). Instruction in special education classrooms under varying student-teacher ratios. The Elementary School Journal, 93(3), 305-320. doi:10.1086/461727
Tindal, G., & Vacca, J. J. (2003). Review of the Test of Word Reading Efficiency. The fifteenth mental measurements yearbook. Lincoln, NE: Buros Institute of Mental Measurements.
Torgesen, J. K.
(1997). Preventive and remedial interventions for children with severe reading disabilities. Learning Disabilities: A Multidisciplinary Journal, 8(1), 51-61.
Torgesen, J. K. (2000). Individual differences in response to early interventions in reading: The lingering problem of treatment resisters. Learning Disabilities Research and Practice, 15(1), 55-64. doi:10.1207/SLDRP1501_6
Torgesen, J. K. (2002). The prevention of reading difficulties. Journal of School Psychology, 40(1), 7-26. doi:10.1016/S0022-4405(01)00092-9
Torgesen, J. K. (2009). The response to intervention instructional model: Some outcomes from a large-scale implementation in Reading First schools. Child Development Perspectives, 3(1), 38-40. doi:10.1111/j.1750-8606.2009.00073.x
Torgesen, J. K., Alexander, A. W., Wagner, R. K., Rashotte, C. A., Voeller, K. K. S., & Conway, T. (2001). Intensive remedial instruction for children with severe reading disabilities: Immediate and long-term outcomes from two instructional approaches. Journal of Learning Disabilities, 34(1), 33-58. doi:10.1177/002221940103400104
Torgesen, J. K., Wagner, R. K., & Rashotte, C. A. (1999). Test of Word Reading Efficiency (TOWRE). Austin, TX: Pro-Ed.
Torgerson, C., & Zhu, D. (2003). A systematic review and meta-analysis of the effectiveness of ICT on literacy learning in English, 5-16. In: Research Evidence in Education Library. London: EPPI-Centre, Social Science Research Unit, Institute of Education.
Turner, J., & Paris, S. G. (1995). How literacy tasks influence children's motivation for literacy. The Reading Teacher, 48(8), 662-673.
Retrieved September 10, 2015, from http://www.jstor.org/stable/747624
Van Santen, F. W. (2012). Cognitive processing profiles of school-age children who meet low-achievement, IQ-discrepancy, or dual criteria for underachievement in oral reading accuracy (Doctoral dissertation, Northwestern University).
Vaughn, S., Gersten, R., & Chard, D. J. (2000). The underlying message in LD intervention research: Findings from research syntheses. Exceptional Children, 67(1), 99-114. doi:10.1177/001440290006700107
Vaughn, S., & Fuchs, L. S. (2003). Redefining learning disabilities as inadequate response to instruction: The promise and potential problems. Learning Disabilities Research & Practice, 18(3), 137-146. doi:10.1111/1540-5826.00070
Vaughn, S., Linan-Thompson, S., Kouzekanani, K., Pedrotty Bryant, D., Dickson, S., & Blozis, S. A. (2003). Reading instruction grouping for students with reading difficulties. Remedial and Special Education, 24(5), 301-315. doi:10.1177/07419325030240050501
Vaughn, S., Wanzek, J., Woodruff, A. L., & Linan-Thompson, S. (2007). Prevention and early identification of students with reading disabilities (pp. 11-27). Baltimore, MD: Paul H. Brookes Publishing.
Vellutino, F. R., Scanlon, D. M., & Spearing, D. (1995). Semantic and phonological coding in poor and normal readers. Journal of Experimental Child Psychology, 59(1), 76-123. doi:10.1006/jecp.1995.1004
Verhoeven, L. T. (1994). Transfer in bilingual development: The linguistic interdependence hypothesis revisited. Language Learning, 44(3), 381-415. doi:10.1111/j.1467-1770.1994.tb01112.x
Wagner, R. K., Francis, D. J., & Morris, R. D. (2005). Identifying English language learners with learning disabilities: Key challenges and possible approaches. Learning Disabilities Research & Practice, 20(1), 6-15. doi:10.1111/j.1540-5826.2005.00115.x
Wagner, R. K., Torgesen, J. K., & Rashotte, C. A. (1999).
The Comprehensive Test of Phonological Processing (CTOPP). Austin, TX: Pro-Ed.
Wagner, R. K., & Torgesen, J. K. (1987). The nature of phonological processing and its causal role in the acquisition of reading skills. Psychological Bulletin, 101(2), 192-212. doi:10.1037/0033-2909.101.2.192
Wanzek, J., & Vaughn, S. (2008). Response to varying amounts of time in reading intervention for students with low response to intervention. Journal of Learning Disabilities, 41(2), 126-142. doi:10.1177/0022219407313426
Weiner, J., & Schneider, B. H. (2002). A multisource exploration of the friendship patterns of children with and without learning disabilities. Journal of Abnormal Child Psychology, 30(2), 127-141. doi:10.1023/A:1014701215315
Williams, D., Coles, L., Wilson, K., Richardson, A., & Tuson, J. (2000). Teachers and ICT: Current use and future needs. British Journal of Educational Technology, 31(4), 307-320. doi:10.1111/1467-8535.00164
Willoughby, T., & Wood, E. (2008). Children's learning in a digital world (1st ed.). Malden, MA: Blackwell Pub.
Wise, B. W., Ring, J., & Olson, R. K. (2000). Individual differences in gains from computer-assisted remedial reading. Journal of Experimental Child Psychology, 77(3), 197-235. doi:10.1006/jecp.1999.2559
Wise, J. C., Sevcik, R. A., Morris, R. D., Lovett, M. W., & Wolf, M. (2007). The growth of phonological awareness by children with reading disabilities: A result of semantic knowledge or knowledge of grapheme-phoneme correspondences? Scientific Studies of Reading, 11(2), 151-164. doi:10.1080/10999430709336557
Wolf, M., & Katzir-Cohen, T. (2001). Reading fluency and its intervention. Scientific Studies of Reading, 5(3), 211-239. doi:10.1207/S1532799XSSR0503_2
Wolgemuth, J. R., Abrami, P. C., Helmer, J., Savage, R., Harper, H., & Lea, T. (2014). Examining the impact of ABRACADABRA on early literacy in Northern Australia: An implementation fidelity analysis.
The Journal of Educational Research, 107(4), 299-311. doi:10.1080/00220671.2013.823369
Wolgemuth, J., Savage, R., Helmer, J., Harper, H., Lea, T., Abrami, P. C., ... Louden, W. (2013). ABRACADABRA aids indigenous and non-indigenous early literacy in Australia: Evidence from a multisite randomized controlled trial. Computers & Education, 67, 250-264.
Woodcock, R. W., McGrew, K. S., & Mather, N. (2001). Woodcock-Johnson III Tests of Achievement. Rolling Meadows, IL: Riverside.
Zorzi, M., Houghton, G., & Butterworth, B. (1998). The development of spelling-sound relationships in a model of phonological reading. Language and Cognitive Processes, 13(2-3), 337-371. doi:10.1080/016909698386555

Appendices

Appendix A Screen Measures

A.1 Phoneme Segmentation Fluency Measure

Say these specific directions to the student: I am going to say a word. After I say it, you tell me all the sounds in the word. So, if I say "sam," you would say /s/ /a/ /m/. Let's try one (one-second pause). Tell me the sounds in "mop."

CORRECT RESPONSE: If the student says /m/ /o/ /p/, you say: Very good. The sounds in "mop" are /m/ /o/ /p/.

INCORRECT RESPONSE: If the student gives any other response, you say: The sounds in "mop" are /m/ /o/ /p/. Your turn. Tell me the sounds in "mop."

Ok. Here is your first word. Give the student the first word and start your stopwatch.

Leaned /l/ /ea/ /n/ /d/
Worm /w/ /ir/ /m/
Porch /p/ /or/ /ch/
Grabbed /g/ /r/ /a/ /b/ /d/
Lit /l/ /i/ /t/
Get /g/ /e/ /t/
Roared /r/ /or/ /d/
Broke /b/ /r/ /oa/ /k/
Raise /r/ /ai/ /z/
Worth /w/ /ir/ /th/
That /th/ /a/ /t/

Total: __________  Error Pattern: _________

A.2 Nonsense Word Fluency Measure

Say these directions to the student: Look at this word (point to the first letter of the word). It's a make-believe word. Watch me read the word: /s/ /i/ /m/, "sim" (point to each letter, then run your finger fast beneath the whole word).
I can say the sound of the letters, /s/ /i/ /m/ (point to each letter), or I can read the whole word, "sim". Your turn to read a make-believe word. Read this word the best you can. Make sure you say any sound you know.

CORRECT RESPONSE: If the child responds "lut" or with some or all of the sounds, say: That's right. The sounds are /l/ /u/ /t/ or "lut".

INCORRECT RESPONSE: If the child does not respond within 3 seconds or responds incorrectly, say: Remember, you can say the sounds or you can say the whole word. Watch me. The sounds are /l/ /u/ /t/ or "lut". Let's try again. Read this word the best you can.

Um  jac  zoj  oc  kom
Kic  raj  lon  zeb  ig
Mes  juk  et  noj  vin
Jic  wuj  om  hul  mid
Bes  pek  moz  um  ut
Pej  waj  rej  jul  nej
Lat  puz  des  ud  nam

Total correct letter sounds (CLS): __________
Total words recoded completely and correctly (WRC): __________

Appendix B Pre- and Posttest Measures

B.1 Elision Measure

Materials: None
Ceiling: Stop after the child misses 3 test items in a row.
Feedback: Give feedback on all practice items and test items 1-5 only.
Scoring: Record correct answers as 1 and incorrect answers as 0. The total raw score for this subtest is the total number of correct test items up to the ceiling.

Directions: Say, "Let's play a word game."

Practice Items (correct response in parentheses):
a. Say toothbrush. Now say toothbrush without saying tooth. (brush)
If correct say, "That's right. Let's try the next one."
If incorrect say, "That's not quite right. Toothbrush without saying tooth is brush."
Continue to give correct/incorrect feedback as before. Say, "Let's try some more."
b. Say airplane. Now say airplane without saying plane. (air)
c. Say doughnut. Now say doughnut without saying dough. (nut)

Test Items: Continue to give correct/incorrect feedback as before. Score (1/0)
1. Say popcorn. Now say popcorn without saying corn. (pop)
2. Say baseball.
Now say baseball without saying base. (ball)
3. Say spider. Now say spider without saying der. (spy)

Practice Items: Say, "Okay, now let's try some where we take away smaller parts of the words." Continue to give correct/incorrect feedback. Use the phoneme, not the letter name (e.g., /k/ is the sound of k).
d. Say cup. Now say cup without saying /k/. (up)
If correct say, "That's right. Let's try the next one."
If incorrect say, "That's not quite right. Cup without saying /k/ is up."
e. Say meet. Now say meet without saying /t/. (me)
f. Say farm. Now say farm without saying /f/. (arm)

Test Items: Continue to give correct/incorrect feedback as before. Score (1/0)
4. Say bold. Now say bold without saying /b/. (old)
5. Say mat. Now say mat without saying /m/. (at)

Remaining Test Items: Provide no feedback on remaining items.
6. Say tan. Now say tan without saying /t/. (an)
7. Say mike. Now say mike without saying /k/. (my)
8. Say time. Now say time without saying /m/. (tie)
9. Say tiger. Now say tiger without saying /g/. (tire)
10. Say powder. Now say powder without saying /d/. (power)
11. Say winter. Now say winter without saying /t/. (winner)
12. Say snail. Now say snail without saying /n/. (sail)
13. Say faster. Now say faster without saying /s/. (fatter)
14. Say sling. Now say sling without saying /l/. (sing)
15. Say driver. Now say driver without saying /v/. (dryer)
16. Say silk. Now say silk without saying /l/. (sick)
17. Say flame. Now say flame without saying /f/. (lame)
18. Say strain. Now say strain without saying /r/. (stain)
19. Say split. Now say split without saying /p/. (slit)
20. Say fixed. Now say fixed without saying /k/. (fist)

Total Raw Score: __________

B.2 Blending Measure

Materials: Audiocassette recorder and "Blending Words" section of cassette
Ceiling: Stop after the child misses 3 test items in a row.
Note: If the child asks you to repeat the sounds, you may play the taped sounds one more time.
Prompt: If the child says the sounds separately (e.g., m-e rather than me), prompt by saying, "Try to say the sounds all together as a real word." This prompt can be used as often as needed on practice words only.
Feedback: Give feedback on all practice items and the first 3 test items only.
Scoring: Record correct answers as 1 and incorrect answers as 0. The total raw score for this subtest is the total number of correct items up to the ceiling.

DIRECTIONS

Practice Items: Say, "Listen to the audiocassette recorder. You will hear some words in small parts, one part at a time. I want you to listen carefully, and then put these parts together to make a whole word. Ready? Let's try one."

a. Play the taped instruction that says, "What word do these sounds make? Can-dy." Then pause to allow the child to answer.
If correct say, "That's right. Let's try the next one." If incorrect say, "That's not quite right. When you put can-dy together, it makes candy. You try it: can-dy makes _________? (Pause). Let's try the next one."
(Continue the taped items listed below, pausing after each item, and give corrective feedback as above. Correct response in parentheses.)
b. What word do these sounds make? Ham-er (hammer)
c. What word do these sounds make? S-un (sun)
d. What word do these sounds make? T-ak (take)
e. What word do these sounds make? N-o (no)
f. What word do these sounds make? M-a-d (mad)

TEST ITEMS: Say, "Let's try some more words. Each time you will hear the word one part at a time. Listen carefully and put the parts together to make a whole word." (Continue the taped items below, pausing after each item.) Items may be repeated once.

FEEDBACK on items 1-3 only: If correct say, "That's right." If incorrect say, "When you put num-ber together, it makes number."

Correct response in parentheses. Score (1/0)
1. num-ber (number)
2. pen-sel (pencil)
3.
an-ser (answer)
4. i-t (it)
5. t-oi (toy)
6. s-o (saw)
7. sh-e (she)
8. n-ap (nap)
9. m-i-s (miss)
10. b-o-n (bone)
11. m-oo-n (moon)
12. s-t-a-m-p (stamp)
13. j-u-m-p (jump)
14. m-i-s-t-a-k (mistake)
15. s-ur-k-e-s (circus)
16. o-l-m-o-s-t (almost)
17. g-r-a-s-h-o-p-er (grasshopper)
18. t-e-s-t-e-f-i (testify)
19. u-n-d-er-s-t-a-n-d (understand)
20. m-a-th-e-m-a-t-i-k-s (mathematics)

Total Raw Score: __________

B.3 Letter-Word Identification Measure

Scoring
1. 1 = correct response
2. 0 = incorrect response
3. Do not penalize the subject for mispronunciations resulting from articulation errors, dialect variations, or regional speech patterns.
4. If you do not hear the subject's response clearly, allow the subject to complete the entire page and then ask him or her to repeat all the items on that page. Score only the item in question; do not rescore the other items.

Basal: Test by complete pages until the 6 lowest-numbered items administered are correct, or until you have administered the page with Item 1.
Ceiling: Test by complete pages until the 6 highest-numbered items administered are incorrect, or until you have administered the page with Item 76.

Example: 1. L  2. A  3. W  4. S  5. I  6. Y  7. R  8. N  9. K  10. Red  11. P  12. Q  13. B  14. U  15. See  16. The  17. Is  18. And  19. Go  20. Will  21. Not  22. But  23. From  24. Had  25. Keep

B.4 Word Attack Measure

Scoring
1. 1 = correct response
2. 0 = incorrect response
3. Do not penalize the subject for mispronunciations resulting from articulation errors, dialect variations, or regional speech patterns.

Basal: Test by complete pages until the 6 lowest-numbered items administered are correct, or until you have administered the page with Item 1.
Ceiling: Test by complete pages until the 6 highest-numbered items administered are incorrect, or until you have administered the page with Item 32.

Example:
1. points to r    29. cythe
2. /s/            30. cimp
3. /m/            31. depnonlel
4. hap            32. querpostonious
5. mell
6. fim
7. ven
8.
jop
9. floxy
10. leck
11. pawk
12. diestrum
13. chur
14. vorse
15. gradly
16. loast
17. blighten
18. wreet
19. yerdle
20. koodoo
21. baunted
22. splaunch
23. gnobe
24. centizen
25. quog
26. wroutch
27. phintober

B.5 Sight Word Efficiency Measure

Test of Word Reading Efficiency (TOWRE): Real Words

Student Name: ____________  Teacher: _______________  School: _____________

Materials: Stopwatch; Forms A and B Sight Word Efficiency reading cards
Ceiling: Administer all items
Scoring:
1) Mark all the words the examinee reads incorrectly and draw a line after the last word read.
2) The examinee's raw score is the total number of words correctly read within 45 seconds. If the examinee finishes all the words before the time is up, note the time required to read all the words. If the examinee skips a word, simply count it as an error.
3) If he or she hesitates for more than 3 seconds on a word and is instructed to go to the next word, mark the word as incorrect.

Practice: Present the practice words on the Form A card.
Say: "I want you to read some lists of words as fast as you can. Let's start with this practice list. Begin at the top, and read down the list as fast as you can. If you come to a word you cannot read, just skip it and go on to the next word. Use your finger to help you keep your place if you want to."
Have the examinee read the words. If the examinee skips around a lot, ask him or her to read the words from top to bottom, without jumping around.

Note: If you are giving Form A immediately after Form B, omit the practice instructions and proceed to Form A by saying, "Now we will do it one more time. Remember to read all the words as fast as you can without making errors.
Skip any words that you cannot read."

Practice Words: On, My, Bee, Old, Warm, Bone, Most, Spell

B.6 Phonetic Decoding Efficiency Measure

Test of Word Reading Efficiency (TOWRE): Phonetic Decoding Efficiency

Student Name: ___________  Teacher: _______________  School: _____________

Materials: Stopwatch; Forms A and B Phonemic Decoding Efficiency reading cards
Ceiling: Administer all items
Scoring: Mark all the nonwords the examinee reads incorrectly on each form and draw a line after the examinee's last nonword. The examinee's raw score is the total number of nonwords read correctly within 45 seconds. If the examinee skips a nonword, simply count it as an error. If the examinee hesitates for more than 3 seconds on a nonword, mark it as incorrect, point to the next item, and say, "Go on." Some items have more than one correct pronunciation of the vowel. Score the item correct if the examinee gives any of the correct pronunciations. Alternative correct pronunciations are indicated with real-word examples, with the vowel in question underlined. For words with more than two syllables, alternative pronunciations are given separately for each syllable.

Practice: Present the practice items on the Form A card. Say, "Now I want you to read some words that are not real words. Just tell me how they sound. I want you to read them as fast as you can. Let's start with this practice list. Begin at the top, and read down the list as fast as you can. If you come to a made-up word you cannot read, just skip it and go on to the next word. Use your finger to keep your place if you want to." Have the examinee read the nonwords. If the examinee skips around a lot, ask him or her to read the words from top to bottom, without jumping around. If the examinee tries to substitute real words for the nonwords, remind him or her that these are made-up words, not real words.

Directions: Say, "Now you will read some longer lists of nonwords.
The words start out pretty easy, but they get harder as you go along. Just read as many of these nonwords as fast as you can until I say stop. You will begin as soon as I turn over the card."

Stimulus  Pronunciation         Stimulus  Pronunciation
1. Ip     tip                   44. Trober    sober
2. Ga     gap, gate             45. Depate    de (dee, deck), pate
3. Ko     code, cot             46. Glant     plant
4. Ta     tack, tape            47. Sploosh   loose, book
5. Om     on                    48. Dreker    dre (met, meet), ker
6. Ig     pig                   49. Rittun    rit (sit), lun (bun)
7. Ni     nip, nice             50. Hedfert   hed (bed), fert (fern)
8. Pim    him                   51. Bremick   bre (tree, bed), mick
9. Wum    sum                   52. Nifpate   nif (sniff), pate (late)
10. Lat   fat                   53. Brinbert  brin (fin), bert (her)
11. Baf   bat                   54. Clabom    cla (clay, clap), bom
12. Din   pin                   55. Drepnort  drep (pep), nort (fort)
13. Nup   cup                   56. Shratted  matted
14. Fet   met                   57. Plofent   plo (toe, mop), fent
15. Bave  save                  58. Smuncrit  smun (fun), crit (bit)
16. Pate  fat)                  59. Petnador  pel (fell), na (nap, nut,
17. Herm  term                  60. Fornalask forn (born), a (at),
18. Dess  mess                  61. Fermabalt ferm (firm), a (at),
19. Chur  blur                  62. Crenldmoke cre (hen), nid (lid),
20. Knap  nap                   63. Emulbatate e (eel), mul (hull),
21. Tive  hive
22. Barp  tarp
23. Stip  tip
24. Plin  fin
25. Frip  trip
26. Poth  moth
27. Casp  clasp
28. Meest feast
29. Shlee flee
30. Guddy muddy
31. Skree tree
32. Felly jelly
33. Clirt shirt
34. Sline line
35. Dreef reef
36. Prain pain
37. Zint  lint
38. Bloot loot, book
39. Trisk brisk
40. Kelm  helm
41. Strone stone
42. Lunaf lu (tune, bun), naf (after)
43. Cratty fatty

Number of Words Read Correctly: ______
If examinee finishes list before 45 seconds, note time of finish: __________________

Appendix C Social Validity

C.1 Teacher Survey

Date: ________________  School: _________________

Dear (Classroom Teacher's name),

Congratulations! We have completed the 10-week ABRA reading intervention program!
Please take a couple of minutes to complete this survey about your student's progress through the program and the improvement of his/her literacy skills as a result of using ABRACADABRA. Please return it to your facilitator at your earliest convenience. On behalf of the ABRA study team, it has been a pleasure working with you and your class.

We wish you all the success in the future,
Sincerely,
Miss A. Flis
Resource Teacher, HCDSB
PhD Candidate, UBC

Please circle the number that best answers the question (1 = Strongly Disagree, 3 = Undecided, 5 = Strongly Agree):
1. Phonological awareness is important to the reading achievement of my students. 1 2 3 4 5
2. It is important for students to learn how to use a computer. 1 2 3 4 5
3. Delivering a reading intervention program on a computer is an effective way to remediate literacy skills. 1 2 3 4 5
4. There is an improvement in the student's reading skills after the intervention. 1 2 3 4 5
5. There is an improvement in the student's interest in reading after the intervention. 1 2 3 4 5
6. I would like to implement ABRACADABRA with future students. 1 2 3 4 5
7. Overall, ABRACADABRA is an effective tool to help meet the range of needs of the students in my class. 1 2 3 4 5

C.2 Parent Survey

Date:

Dear Parent(s) and/or Guardian(s):

Congratulations! Your child has completed a 10-week reading intervention program at (school name here). For ten weeks, your child has worked very hard and diligently on improving his/her literacy skills. Please take a couple of minutes to complete this survey about your child's progress through the program and the improvement of his/her literacy skills as a result of using ABRACADABRA.
Below you will find a space to indicate whether you would like to set up a meeting to discuss your child's literacy results, or have the results sent home. Please note that by returning the completed survey to the school, it is implied that consent has been given to complete the survey. On behalf of the ABRA study team, it was a pleasure to work with your child, and we wish you all the success in the future.

Sincerely,
Miss A. Flis
Resource Teacher, HCDSB
PhD Candidate, UBC

Please circle the number that best represents your answer (1 = Strongly Disagree, 3 = Undecided, 5 = Strongly Agree):
1. Phonics is important to the reading achievement of my child. 1 2 3 4 5
2. It is important for my child to learn how to use a computer. 1 2 3 4 5
3. There is an improvement in my child's reading skills after the intervention. 1 2 3 4 5
4. There is an improvement in my child's interest in reading after the intervention. 1 2 3 4 5
5. ABRA is easy to use. 1 2 3 4 5
6. Overall, ABRACADABRA is an effective reading intervention program for my child. 1 2 3 4 5

C.3 Student Survey

1. Think about SOUNDS and WORDS. How much did ABRA help you become a better reader?
1 (Not at all)  2  3 (Kind of)  4  5 (A whole lot)

2. Think about computers. How much did it help you become a better reader?
1 (Not at all)  2  3 (Kind of)  4  5 (A whole lot)

3. Overall, how much do you think ABRACADABRA™ helped you become a better reader?
1 (Not at all)  2  3 (Kind of)  4  5 (A whole lot)

4.
Overall, how much did you like doing ABRACADABRA™?
1 (Not at all)  2  3 (Kind of)  4  5 (A whole lot)

5. Would you want to play with ABRACADABRA™ in the future?
1 (Not at all)  2  3 (Kind of)  4  5 (A whole lot)

Appendix D ESL Questionnaire

D.1 ESL Parent Language Questionnaire

The University of British Columbia

Date:

Dear Parent(s) and/or Guardian(s):

As stated in the information letter sent along with this package, the request for information in this questionnaire is designed to assist the ABRA research team in determining whether the reading program is effective for students who speak a language other than English. Please take a couple of minutes to complete this questionnaire. A translation into your preferred language is available upon request. Please note that by returning the completed questionnaire, it is implied that consent has been provided to complete this questionnaire. Thank you for returning this questionnaire to the school at your earliest convenience.

Sincerely,
Miss A. Flis
Resource Teacher, HCDSB
PhD Candidate, UBC

More information about this intervention program can be found by visiting: grover.concordia.ca/abracadabra/

Please fill out the following questionnaire and return it to school.

Child's name: __________________  Child's age: __________________
Parent(s) and/or Guardian(s) name: ______________________  ______________________

Siblings
Sibling #1 Name: __________________________  Sex: M/F  Age: ______
Sibling #2 Name: __________________________  Sex: M/F  Age: ______
Sibling #3 Name: __________________________  Sex: M/F  Age: _______

Language
Language(s) spoken in the home: ______________________
What language is spoken predominantly to your child/children in the home?
_____________________
What other language does the father know: __________________________  Spoken in home? Y / N
What other language does the mother know: _________________________  Spoken in home? Y / N
What other language does Sibling #1, 2 or 3 know: _____________________  Spoken in home? Y / N
Is your child learning a new language at home? _____ (yes or no)
If yes, what is it? ________________________________
Do your child's grandparents live in the home? _____ (yes or no)
If they speak another language, what is it? __________________

Birthplace
Was your child born outside Canada? ____________ (yes or no)
If yes, what country? ________________________
Is your child receiving ESL instruction from an ESL teacher in the school? _____________ (yes or no)

Sincerely,
Ms. A. Flis
