Investigating the effects of a repeated reading intervention for increasing oral reading fluency with… Forster, Erika M. 2009

INVESTIGATING THE EFFECTS OF A REPEATED READING INTERVENTION FOR INCREASING ORAL READING FLUENCY WITH PRIMARY, BRAILLE-READING STUDENTS USING CURRICULUM-BASED MEASUREMENT WITHIN A RESPONSE TO INTERVENTION FRAMEWORK

by

ERIKA M. FORSTER

B.A., The University of British Columbia, 1993
B.Ed., The University of British Columbia, 1994
M.A., The University of British Columbia, 1997

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in THE FACULTY OF GRADUATE STUDIES (School Psychology)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

March 2009

© Erika M. Forster, 2009

ABSTRACT

Given the predictive validity of early reading skills for future reading proficiency, early assessment and intervention in the primary grades are of vital importance. The stakes are particularly high for those students who are deemed "at risk" for current and future reading problems. Students who are blind and read braille may be at enhanced risk for literacy problems relating, for example, to reading speed and accuracy, or oral reading fluency (ORF) (Coppins & Barlow-Brown, 2006). However, the field of visual impairment lacks a "body of empirically-based, experimental research" to inform the development and use of interventions to address such reading challenges (Ferrell, Mason, Young, & Cooney, 2006, p. 4). The purpose of this study is to build on the limited repeated reading research that shows promise for improving ORF for students with visual impairments, targeting braille-reading students evidencing ORF-related challenges in the critical primary grades. The intervention design was informed by the Instructional Hierarchy's (Haring, Lovitt, Eaton, & Hansen, 1978) stage model of learning such that the intervention was matched to the skill-based needs of the participants (Daly & Martens, 1994).
Accordingly, the intervention drew heavily on empirically validated best practices, employing curriculum-based measurement (CBM) and user-friendly assessment materials to investigate the effects of a repeated reading intervention on ORF within a Response to Intervention (RTI) framework. A nonconcurrent multiple baseline design was used to investigate whether there was a functional relationship between the implementation of the repeated reading intervention and ORF and comprehension. Participants' response to the intervention was measured using assessment materials designed as High Content Overlap (HCO) passages and Low Content Overlap (LCO) passages. The study also investigated the social validity of the intervention for teachers for students with visual impairments (TVIs). Additionally, the study evaluated the effects of undertaking the intervention on participants' self-perception as readers. TVIs were trained to implement the intervention with their students in their respective schools. Results indicated tentative support for the continued investigation of the repeated reading intervention as a socially valid means of improving ORF and comprehension for primary braille readers.
TABLE OF CONTENTS

ABSTRACT
TABLE OF CONTENTS
LIST OF TABLES
LIST OF FIGURES
ACKNOWLEDGEMENTS
CHAPTER 1: INTRODUCTION
  Rationale for the Current Study
  Background of the Problem
  Response to Intervention (RTI)
  Research Questions
  Definition of Terms
CHAPTER 2: REVIEW OF THE LITERATURE
  Literacy for Students with Visual Impairments
  Learning Media Assessment (LMA)
  Framework for Literacy for Students with Visual Impairments
  Oral Reading Fluency (ORF)
  Theoretical Models for ORF
  Early Braille Skill Development: Unique Implications for ORF
  Reading Rates in Braille
  ORF and Comprehension for Braille Readers
  Assessment of ORF (and Comprehension) for Braille Readers
  RTI, Curriculum-based Measurement (CBM), and ORF
  Framework for RTI
  Repeated Reading for Braille Readers
CHAPTER 3: METHODOLOGY
  Participants and Setting
  Selection Criteria
  Materials
  Measurement
  Dependent Variables
  Treatment Integrity
  Interobserver Agreement
  Independent Variable: Repeated Reading
  Research Design: Nonconcurrent Multiple Baseline across Participants
  Procedures
  Recruitment
  Participant Screening and Selection
  Participant Screening and Selection Assessments
  Baseline
  Intervention
  Follow-up
  Data Analyses
  Visual Analysis
  Therapeutic Criteria
  Social Validity Evaluation
  Evaluation of Participants' Self-perception as Readers
CHAPTER 4: RESULTS
  Summary of Findings
  Correct Words per Minute (CWPM)
  Errors per Minute
  Oral Retell Fluency (Comprehension)
  CWPM for the Third Read of Instructional Passages and High Content Overlap (HCO) Passages
  Errors for the Third Read of Instructional Passages and High Content Overlap (HCO) Passages
  CWPM and Errors for Low Content Overlap (LCO; Progress Monitoring) Passages
  Oral Retell Fluency (Comprehension) for Low Content Overlap (LCO; Progress Monitoring) Passages
  Social Validity and Participants' Self-perception as Readers
CHAPTER 5: DISCUSSION
  Summary of Results
  Correct Words per Minute (CWPM)
  Errors per Minute
  Oral Retell Fluency (Comprehension)
  CWPM, Errors and Oral Retell Fluency for Low Content Overlap (LCO) Passages
  Analysis of a Functional Relationship between the Dependent Variables and the Repeated Reading Intervention
  CWPM for the Third Read of Instructional Passages and High Content Overlap (HCO) Passages
  Errors for the Third Read of Instructional Passages and High Content Overlap (HCO) Passages
  Social Validity
  Participants' Self-Perception as Readers
  Findings in Relation to the Literature
  Unique Contributions and Clinical Implications
  Limitations and Future Directions
  Conclusion
REFERENCES
APPENDICES
  Appendix A: Recruitment Email
  Appendix B: Child Assent Form
  Appendix C: Parental Consent Form
  Appendix D: Teacher Consent Form
  Appendix E: School District Administrator Consent Form
  Appendix F: Student Participant Information Form
  Appendix G: Sample Treatment Integrity Checklist (Baseline)
  Appendix H: Social Validity Questionnaire for Teachers for Students With Visual Impairments
  Appendix I: Self-perception Questionnaire for Participants
  Appendix J: UBC Ethics Review Board Certificate of Approval

LIST OF TABLES

Table 1 Participant Information for the First and Second Cohort
Table 2 Materials and Assessment Schedule
Table 3 First Grade Dynamic Indicators of Basic Early Literacy Skills (DIBELS) Benchmarks
Table 4 Second Grade DIBELS Benchmarks
Table 5 Third Grade DIBELS Benchmarks
Table 6 Expected Rate of Improvement for Correct Words per Minute (CWPM)
Table 7 Interobserver Agreement (and Range) across Dependent Measures
Table 8 Direct Assessment Placement Criteria

LIST OF FIGURES

Figure 1 Cohort 1: Correct Words per Minute for the First, "Cold" Read of Instructional Passages
Figure 2 Cohort 2: Correct Words per Minute for the First, "Cold" Read of Instructional Passages
Figure 3 Cohort 1: Errors per Minute for the First, "Cold" Read of Instructional Passages
Figure 4 Cohort 2: Errors per Minute for the First, "Cold" Read of Instructional Passages
Figure 5 Cohort 1: Oral Retell Fluency (Comprehension) for Instructional Passages
Figure 6 Cohort 2: Oral Retell Fluency (Comprehension) for Instructional Passages
Figure 7 Cohort 1: Correct Words per Minute for the First, "Cold" Read and the Third Read of Instructional Passages and for High Content Overlap (HCO) Passages
Figure 8 Cohort 2: Correct Words per Minute for the First, "Cold" Read and the Third Read of Instructional Passages and for High Content Overlap (HCO) Passages
Figure 9 Cohort 1: Correct Words per Minute and Errors per Minute for DIBELS, Low Content Overlap (LCO) Passages
Figure 10 Cohort 2: Correct Words per Minute and Errors per Minute for DIBELS, Low Content Overlap (LCO) Passages
Figure 11 Cohort 1: Oral Retell Fluency (Comprehension) for DIBELS, Low Content Overlap (LCO) Passages
Figure 12 Cohort 2: Oral Retell Fluency (Comprehension) for DIBELS, Low Content Overlap (LCO) Passages

ACKNOWLEDGEMENTS

The completion of this dissertation was made possible because of the ongoing, incredible support shown to me by my professors, colleagues, friends, and family, and I thank God for the many blessings these people have brought to my life. I would like to extend my sincere appreciation to my doctoral committee. I am most grateful to Dr. Cay Holbrook for her steadfast faith in my ability to undertake a PhD and for her unflagging resolve to improve the education and, thereby, the lives of students with visual impairments.
I continue to be inspired by your ardent commitment to the field and our most worthy students. I am indebted to Dr. Ruth Ervin for helping me to realize my dream of becoming a school psychologist. Your integrity, expertise, kindness, ingenuity, and strong work ethic buoyed me along throughout this challenging degree. Dr. Joe Lucyshyn brought single-subject research to life, providing me with a powerful, effective, highly rigorous methodology that was uniquely well suited to answering my research questions. Your expertise and painstaking attention to detail are reflected in my scholarship. My research interests converged beautifully in Dr. Kent McIntosh's areas of expertise. I am fully cognizant of the pivotal role you played in bringing this dissertation to its successful conclusion, and I sincerely thank you for your willingness to share your wealth of knowledge.

I was the beneficiary of financial support from several key organizations, and this support greatly facilitated the completion of my research study. In particular, I appreciate the generous support I received from the Canadian National Institute for the Blind, the Social Sciences and Humanities Research Council of Canada, and the Canadian Language and Literacy Research Network.

I am also very thankful for the support I received from my colleagues and previous supervisors. Thank you, Dr. John Carter, for your perfectly timed words of encouragement, invigorating marathon analogies, and calming and empowering perspective. In addition, I am ever so grateful for the kindness and mentorship I received while on internship from my two immensely talented supervisors, Patti Weiss and Dr. Ying Hoh. Throughout the doctorate, I was thankful to know Dr. Sally Rogow, a pioneer in the field of visual impairments and my master's supervisor. Your warmth and kindness were ever present. Thank you, Dr. Elizabeth Thompson, for your optimism, practical support, and "open door policy".
I will always be thankful for the extraordinary friendships I made while pursuing this degree, and the healing and restoration they brought to my life. I would like to thank Suretha, Brent, Surita, and Junie for being such amazing and special people, for sharing of themselves, and for helping things make sense when I was at a loss. I did not anticipate the emotional costs associated with completing this degree, and I am eternally grateful to my husband and children for their help and patience as I fulfilled my dream. And to my Margaret, it is obvious to me that the completion of this degree is the culmination of your many years of motherly love and support.

CHAPTER 1: INTRODUCTION

Current Canadian and American educational systems face heightened scrutiny as demands increase for improved educational outcomes for all students, particularly in key areas such as literacy. An estimated 20% of all American children experience reading difficulties within their first three years of formal schooling (Lyon & Moats, 1997; National Reading Panel [NRP], 2000) and, while Canadian data are unavailable, it is thought that Canadian children are faring similarly. These statistics are worrisome, given that early success in reading is deemed critical (Hasbrouck & Tindal, 2006; Snow, Burns, & Griffin, 1998). Poor literacy outcomes in the area of reading are concerning because reading is a pivotal skill, foundational to many other skills and overall life success (NRP, 2000). The process of learning to read, or to "make meaning from text" (Commission on Reading, 1985, p. 7), has long been evaluated in an effort to maximize student success by grade three because early reading skill levels are highly predictive of later reading skills (Juel, 1988; Stanovich, 1986).
In the absence of appropriate, research-validated interventions, students who fail to read fluently by the end of their primary school years are very likely to continue to underachieve compared to their peers for the duration of their school careers (Francis, Shaywitz, Stuebing, Shaywitz, & Fletcher, 1996; Good, Simmons, & Smith, 1998; Juel, 1988; Juel & Leavell, 1988; Shapiro, 2004; Shaywitz, 1997). For example, the often cited research by Juel showed that there is an approximate 88% chance that a poor reader in grade one will also be a poor reader at the end of grade four. Further, research by Shaywitz, Escobar, Shaywitz, Fletcher, and Makuch (1992) revealed that 75% of poor readers in grade three continued to underachieve in high school.

Oral reading fluency (ORF), a "lower-order reading skill" (Therrien, Wickstrom, & Jones, 2006, p. 90), has been advanced as a barometer of reading competence for typically sighted students, deemed the "single-best indicator of reading proficiency for younger students" (Daly, Chafouleas, & Skinner, 2005, p. 10). ORF also has high predictive validity, particularly in the primary grades and before grade five (Brown-Chidsey, Johnson, & Fernstrom, 2005; Shinn, Good, Knutson, Tilly, & Collins, 1992). ORF is considered a "prerequisite to independent comprehension of text" (Daly, Chafouleas, et al., p. 10), and comprehension is generally considered the "essence of reading" (Durkin, 1993, p. 11). The relationship, albeit correlational, of ORF to reading comprehension is well documented (Allington, 1983; Dowhower, 1987; Fuchs, Fuchs, Hosp, & Jenkins, 2001; Levy, Abello, & Lysynchuk, 1997; Pikulski & Chard, 2005; Samuels, 1988). For the purpose of this study, ORF refers to reading speed and accuracy (Rashotte & Torgesen, 1985; Samuels, 1979).
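As an illustration of the speed-and-accuracy definition of ORF above, the standard CBM metrics can be computed directly from a timed oral reading sample. This sketch is not part of the study's materials; the function name and the one-minute timing convention are illustrative assumptions.

```python
# Illustrative CBM-style scoring of an oral reading sample.
# score_orf and its example values are hypothetical, not study materials.

def score_orf(words_attempted: int, errors: int, seconds: float) -> dict:
    """Return correct words per minute (CWPM) and errors per minute
    from a timed oral reading sample."""
    if seconds <= 0:
        raise ValueError("reading time must be positive")
    minutes = seconds / 60.0
    correct = words_attempted - errors
    return {
        "cwpm": correct / minutes,
        "errors_per_minute": errors / minutes,
    }

# Example: 48 words attempted with 6 errors in a one-minute read.
result = score_orf(words_attempted=48, errors=6, seconds=60)
print(result)  # {'cwpm': 42.0, 'errors_per_minute': 6.0}
```

Reporting both values separately, as the study's dependent variables do, keeps speed and accuracy from masking each other: a fast but error-prone read and a slow but accurate read can yield the same raw word count.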
The increasingly widespread recognition of the importance of ORF has spurred considerable literacy research and encouraged the development of empirically validated interventions to effect improvements in ORF. One such intervention, repeated reading, is at the forefront of these interventions, enjoying considerable empirical support in the literature (Daly, Bonfiglio, Mattson, Persampieri, & Foreman-Yates, 2006; Dowhower, 1987; Rashotte & Torgesen, 1985; Therrien, 2004).

Students with visual impairments, such as those who are blind and read braille, may be at enhanced risk for literacy problems relating to reading speed and accuracy (Coppins & Barlow-Brown, 2006) and may, as a result, be most in need of empirically validated reading interventions. There is, for example, some current evidence of reading underachievement among braille-reading students (Ferrell, Mason, Young, & Cooney, 2006). It has been suggested in the literature that braille readers may take, on average, longer to read than their sighted counterparts (Duckworth & Caton, 1986; Kusajima, 1974; Nolan & Kederis, 1969). When they occur, discrepancies between print and braille-reading rates are attributed, in large part, to the tactile nature of the reading medium and the quality of learning opportunities afforded early braille readers (Coppins & Barlow-Brown; Kusajima; Ponchillia & Ponchillia, 1996). Reading underachievement among braille students is concerning as, for example, research has demonstrated that reading proficiency is critical to their social and economic well-being (Ferrell et al.; Ryles, 1996).

Researchers face many challenges as they attempt to undertake scientifically-based research to improve braille-reading competency. For example, research is made complicated because many research designs are unsuitable for use with the low incidence, exceptionally heterogeneous, and widely dispersed population of braille readers (Ferrell et al., 2006).
Additionally, research focusing on students with visual impairments is chronically underfunded (Corn & Ferrell, 2000; Ferrell et al.). Consequently, the field of visual impairment currently lacks contextualizing, normative data and a "body of empirically-based, experimental research" to guide the development and use of assessments and interventions that promote braille-reading skill development (Ferrell et al., p. 4).

Recent developments within the larger field of education, however, hold great promise for those who seek to build the research foundation in braille literacy. Response to Intervention (RTI) epitomizes a new paradigmatic approach to conceptualizing, preventing, and addressing academic and social concerns (Fairbanks, Sugai, Guardino, & Lathrop, 2007; Kame'enui, 2007). This new approach may encompass braille reading achievement. RTI in practice involves attempting to maximize student success by providing high quality instruction and intervention in keeping with student needs. Student progress is monitored frequently (i.e., collecting data regarding students' responsiveness to instruction) so as to enable and inform decision making regarding changes to instruction, goals, and nature of supports at the individual student level. In addressing individualized student needs, RTI embraces the use of both curriculum-based measurement (CBM) methodologies for assessment and intervention and single-subject research methods. Both of these rigorous, scientific methodologies accommodate "individual differences" within the population of braille readers and offer a means of developing research-based interventions, even in the current absence of braille-reading norms. The significance ascribed to ORF, regardless of whether the student reads print and/or braille, and an acknowledgment of the need for empirically validated interventions to improve ORF for braille readers has led to the current study.
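The progress-monitoring logic described above, where frequent CBM data inform decisions about changing instruction, is often operationalized as an aimline plus a decision rule. The sketch below is a hypothetical illustration of that general practice; the three-consecutive-points rule and all numbers are invented for the example and are not the decision rules of this study.

```python
# Hypothetical sketch of an RTI-style progress-monitoring decision rule.
# The aimline and three-point rule are common CBM conventions; the
# specific values here are illustrative only.

def aimline(baseline_cwpm: float, weekly_gain: float, week: int) -> float:
    """Expected CWPM at a given week, from a baseline score and an
    expected rate of improvement (CWPM gained per week)."""
    return baseline_cwpm + weekly_gain * week

def needs_instructional_change(scores: dict, baseline_cwpm: float,
                               weekly_gain: float, consecutive: int = 3) -> bool:
    """Flag an instructional change when the last `consecutive` weekly
    data points all fall below the aimline."""
    recent = list(scores.items())[-consecutive:]
    if len(recent) < consecutive:
        return False  # not enough data to decide
    return all(cwpm < aimline(baseline_cwpm, weekly_gain, week)
               for week, cwpm in recent)

# Example: baseline of 30 CWPM with an expected gain of 1.5 CWPM/week.
weekly_scores = {1: 32.0, 2: 31.0, 3: 33.0, 4: 34.0}
print(needs_instructional_change(weekly_scores, 30.0, 1.5))  # prints True
```

In the example, weeks 2-4 all fall below the aimline (33, 34.5, and 36 CWPM respectively), so the rule signals that the current instruction is not producing the expected response.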
The focus of this research study was on investigating the utility of a repeated reading intervention to improve ORF when employed with primary braille readers. The study examined whether there was a functional relationship between the use of the intervention and changes in ORF and levels of comprehension. The study also investigated the social validity of the intervention from the perspective of teachers for students with visual impairments (TVIs), and the effects of undertaking the intervention on participants' self-perception as readers. The results of this study add to the emerging literature regarding the need to develop empirically validated reading interventions that enhance the reading competency of primary braille readers.

Rationale for the Current Study

Despite the importance ascribed to ORF skill development and the use of empirically validated interventions for ORF-related problems, there is a lack of research regarding reading interventions for braille readers. Arguably, students with visual impairments are particularly in need of high quality interventions because they may be at heightened risk for literacy problems (Bigelow, 1990; Koenig & Holbrook, 2000; Steinman, LeJeune, & Kimbrough, 2006; Tompkins & McGee, 1986). These problems include challenges relating to ORF and reading disabilities. Available statistics strongly suggest that the majority of children with special needs, such as those students with visual impairments (Ferrell et al., 2006), are among the one-fifth of students who are struggling readers (Lyon & Moats, 1997). For example, preliminary data in the United States from statewide assessments in grades three to eight mandated under No Child Left Behind (2001) reveal that the majority of students with visual impairments are currently underperforming in terms of reading achievement (Ferrell et al.). According to the literature, there are long-term consequences for reading underachievement.
Those students who are underachieving face a worrisome long-term prognosis with respect to their future reading competency, education, and employment potential (Amato, 2000; Kirchner, Johnson, & Harkins, 1997; Rex, Koenig, Wormsley, & Baker, 1994; Ryles, 1996, 1998; Schroeder, 1989). Ryles (1996) found that approximately 70-80 percent of American adults who had visual impairments were un- or underemployed. Canadian adults with visual impairments also report low levels of employment, as 25 percent of adults aged 21-63 are unemployed (Canadian National Institute for the Blind [CNIB], 2005). These grim statistics have been linked to poor or delayed literacy skill development (Amato; Ryles, 1996).

Ryles' (1996) research revealed, for example, that among adults with visual impairments, those who were congenitally legally blind and had begun reading and writing braille as their primary medium attained higher rates of postgraduate education than those who had begun reading with print. Further, those adult braille readers who had exclusively read braille also demonstrated higher rates of employment and enjoyed greater levels of financial independence. Additionally, Kirchner and colleagues (1997) studied the attributes of successfully employed persons without useful vision and found that 93 percent used braille effectively as their reading medium.

In light of the importance of braille-reading competency for life outcomes, the literature focusing on issues related to visual impairment makes specific reference to ORF rates for braille readers and reading fluency as an important prognostic measure. Wolffe's (2000) recommendation of a minimum reading rate of 150 words per minute for success in any employment involving literacy is gaining support among educational practitioners in the field of vision impairment and is cited in the research literature.
Albeit without providing empirical support, Wolffe argues that "both children with low vision and blind children need to use this benchmark to determine if the medium they are using is appropriate for them to be competitive when they leave school and attempt to secure work" (p. 22). Lusk and Corn (2006) stressed that "[w]hile reading speed should not be the only criterion used, it is important that a student develop a functional and competitive reading speed in either print or braille" (p. 654). However, Lusk and Corn also state that "[v]arious studies have found that braille readers do not acquire reading rates that meet Wolffe's estimate" (p. 654).

Ultimately, then, there is an expectation that students with visual impairments, such as braille readers with no known additional disabilities, will advance in reading skill development alongside their typically sighted peers. It is anticipated that they will demonstrate satisfactory rates of progress when following literacy programs that reflect the unique implications associated with a visual impairment and a tactile medium (Koenig, 1992). Therefore, notable underachievement by braille readers in areas such as ORF is a longstanding concern among researchers and practitioners alike (Flanagan, 1966; Koenig & Holbrook, 2000; MacCuspie, 2002; Millar, 1988). Further exploration into the efficacy of reading interventions for braille readers is long overdue.

The repeated reading intervention, as defined and validated in the literature, had the potential to work well with those braille readers who had been identified as struggling readers, assuming appropriate adaptations to the intervention were implemented. The current study focused on braille readers in the primary grades, as it was theorized that their early ORF levels were, as they are for their sighted counterparts, predictive of future reading competency.
Further, it has been hypothesized, but as yet poorly researched, that intervention in the early years is essential for students with visual impairments. For example, in the absence of appropriate interventions, young struggling braille readers may experience chronic problems with reading speed and accuracy, which may culminate in reduced reading practice and a loss of interest in reading because it is laborious. Thus, a cycle of ongoing fluency underachievement may be effected (Barlow-Brown & Connelly, 2002). The study was designed, therefore, to investigate the usefulness of the repeated reading intervention for this unique, at-risk population such that their ORF-related reading challenges might be overcome and/or mitigated.

Background of the Problem

Investigation into braille literacy underachievement has been hampered by the lack of a scientific body of research to guide literacy instruction, assessment, and intervention (Ferrell et al., 2006). This phenomenon may result in potentially ineffective special education programming (Shinn, 1989, 2002). A number of interrelated issues have hindered the advance of scientifically based research to inform "best practices" for braille readers (Ferrell et al.). Given issues of low incidence and heterogeneity, for example, it is problematic and/or inadvisable to apply most large group research designs and related statistical procedures to the study of the low-incidence, diverse population of braille readers (Ferrell et al.). Further, educational research relating to students with visual impairments is both expensive and chronically underfunded (Corn & Ferrell, 2000; Ferrell et al.; Mason & Davidson, 2000). Ultimately, little research exists regarding the efficacy of methods designed to teach, assess, and remediate braille-reading and writing (Herzberg, Stough, & Clark, 2004; Koenig, 1992; Ponchillia & Durant, 1995).

Explanations of braille-reading underachievement.
Owing to this context of limited research, causal explanations for reading underachievement among children with visual impairments vary considerably. A lack of reading practice and instruction is currently thought to be among the most important causal factors. Research has shown, for example, that young students who are blind and read braille receive minimal pre-school literacy instruction, typically enter kindergarten unable to recognize any letters (Barlow-Brown & Connelly, 2002), unlike their sighted counterparts, who know an average of 15 letters (Treiman & Rodriguez, 1999), and, consequently, often show limited ability on measures of phonological awareness in their early schooling (Barlow-Brown & Connelly). Conditions of limited early access to brailled reading materials, coupled with extremely reduced incidental learning during the preschool years, have the potential to severely hamper early literacy skill development. Whereas for the typically sighted student "visible language" (Frith, 1985) is an almost constant feature of the environment, many children who are blind may only come into contact with braille at the start of formal schooling (Coppins & Barlow-Brown, 2006). Braille-reading underachievement has also been associated with inadequate literacy instruction (Koenig & Holbrook, 1995) due, for example, to shortages of qualified TVIs and the use of an itinerant model for literacy skill instruction, which may fail to provide sufficient amounts of instructional time (Koenig, 1992; Koenig & Holbrook). These long-standing issues continue to receive attention in the visual impairment literature (MacCuspie, 2002). Further, there are limitations inherent in the tactile perception of braille compared to the visual perception of print, which also have implications for braille-reading speed.
For example, one difference between learning to read in braille as opposed to print has been attributed to the much narrower scope of the "perceptual window" for the braille reader (Chailman, 1978; Nolan & Kederis, 1969). Braille readers' perception is thought to be limited to those parts of the text that their fingers contact. Contrastingly, print readers can visually scan across letters, words, and sections of text.

Braille is a tactile code that, like print, is a symbolic representation of words and ideas. The Braille Authority of North America (1994) defines braille as a "system of touch reading for the blind which employs embossed dots evenly arranged in quadrangular letter spaces or cells. In each cell, it is possible to place six dots, three high and two wide" (p. 1). Jolley (2006) argues that braille remains "the bedrock of literacy for blind people, and despite the ubiquity of digitalisation remains paramount in the same way that sighted people with their keyboards and screens have not yet discarded pen and paper" (p. 1). The focus of this study is the literary braille code; however, there are braille codes using the same six-dot configuration for mathematics, science, music, and modern languages.

Depending on whether they have functional vision (i.e., have and are able to use their residual vision effectively), students with visual impairments can access literacy in two media, namely print or braille. Often these students can access technological supports to expand the options available to them in meeting their literacy skill development goals. Students with low vision often gain useful information visually and use print, enlarged print, and/or print with optical aids. In addition, there are technology options for enlarging print so that it is accessible to students with low vision.
Alternatively, braille readers, the focus of the current study, are those students who are unable to obtain reliable information visually or access the print medium effectively and, therefore, use braille and braille-related technology to read and write. The braille code is complex and governed by many rules, and braille readers typically learn these rules during the primary years. The braille code features numerous rules because the 63 possible dot combinations based on the 6-dot cell are used in multiple ways. Additionally, there are two forms of the braille code, namely uncontracted braille and contracted braille. Uncontracted braille, also called alphabetic or Grade 1 braille, encompasses the letters of the braille alphabet, punctuation symbols, and the number sign (Adkins, 2004; Miller & Rash, 2001). Contrastingly, contracted braille, or Grade 2 braille, is composed of the alphabet in braille and 189 one- and two-cell contractions standing for a variety of letter combinations (Koenig & Holbrook, 1995). Contracted braille is governed by 450 rules (Miller & Rash).

Beyond issues relating to braille, the etiology of reading underachievement is often complicated further by the co-morbidity of additional challenges associated with a visual impairment. Along with any educational needs stemming directly from the visual impairment, students with visual impairments may also have additional academic challenges. It is, for example, possible for a braille reader to have a dual diagnosis of a visual impairment and a learning disability (LD) such as a reading disability (Erin & Koenig, 1997; Layton & Lock, 2001; Loftin, 2006). For the purpose of this study, a learning disability is defined as a "general term that refers to a heterogeneous group of disorders manifested by significant difficulties in the acquisition and use of listening, speaking, reading, writing, reasoning, or mathematical abilities" (National Joint Committee on Learning Disabilities, 2001, p. 28).
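The figure of 63 possible dot combinations mentioned above follows directly from the six-dot cell: each of the six positions is either raised or flat, giving 2^6 = 64 patterns, and excluding the empty cell leaves 63. A quick enumeration (purely illustrative, not a study tool) confirms this:

```python
# Enumerate all non-empty six-dot braille cell patterns.
# Each cell is modeled as a tuple of six 0/1 values (flat/raised).
from itertools import product

patterns = [cell for cell in product((0, 1), repeat=6) if any(cell)]
print(len(patterns))  # prints 63
```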
Intrinsic factors such as having a visual impairment or a learning disability relating to reading must be differentiated from extrinsic factors such as a lack of high-quality instruction (Layton & Lock, 2001). Further, as Erin and Koenig (1997) state, “A visual impairment seems easier to define, comprehend, and treat; therefore, it may be the only disability diagnosed, when in truth both conditions or a learning disability only are present” (p. 311). For these reasons, the incidence of learning disabilities within this population of students with visual impairments is unknown (Layton & Lock).

Response to Intervention (RTI)

There are many potential explanations for dysfluency across the student population. It is increasingly thought that interventions should stem from assessment results that reflect the underlying reasons for the presenting problems (Chard, Vaughn, & Tyler, 2002; Wolf & Bowers, 2000). The process of literacy assessment and intervention for braille readers, however, is made more problematic because of the dearth of empirically validated assessments or interventions (Ferrell et al., 2006) and nonexistent normative data for key indicators of reading health such as ORF. Hence, while it is often standard practice to conduct a variety of assessments of braille reading competence (Erin & Koenig, 1997; Koenig, 1996a; Koenig & Holbrook, 1989, 1995; Lewis & Russo, 1998; Loftin, 2006), these assessments require guidance from a stronger scientific research base.

The Learning Media Assessment (LMA) is arguably foremost among literacy-related assessments for students with visual impairments, yet it is also limited in potential efficacy by the weak research base in visual impairments available to inform it. The LMA is an “objective process of systematically selecting learning and literacy media for students with visual impairments” (Koenig & Holbrook, 1995, p. 22) to guide their educational programming.
The LMA includes assessments of reading competency, such as ORF (e.g., as part of an Informal Reading Inventory). The LMA process of determining reading health is tailored to the specific needs of students with visual impairments. An LMA is undertaken by a TVI and includes input from all members of a student’s educational team. Diagnostic teaching is at the core of the LMA (Koenig & Holbrook). Diagnostic teaching is an efficient, educationally valid combination of assessment and instruction, formative and summative assessment, data collection, and problem-solving (Koenig & Holbrook). Diagnostic teaching helps ensure that “assessment findings are educationally valid since strategies used during assessment will continue in instruction” (Koenig & Holbrook, p. 13).

Although not specifically linked within the literature, the concept of Response to Intervention (RTI) appears to encompass the intent behind the ongoing assessment and diagnostic teaching within the LMA process. RTI epitomizes a new paradigmatic approach to conceptualizing and addressing academic and social concerns (Fairbanks et al., 2007; Kame’enui, 2007). The Individuals with Disabilities Education Improvement Act of 2004 (IDEA) in the United States stipulates that “RTI is essentially and instrumentally an assessment and instructional process that is dynamic, recursive, and based on rigorous scientific research” (Kame’enui, p. 7). Scientifically based research is defined as “research that involves the application of rigorous, systematic, and objective procedures to obtain reliable and valid knowledge relevant to educational activities and programs” (No Child Left Behind Act [NCLB], 2001). The IDEA indicates that a local agency may elect to “use a process that determines if the child responds to scientific, research-based interventions as part of the evaluation procedures” (Public Law, 2004, No. 108-446-614, 118 Stat.
2706) involved in special education eligibility and assessment for specific learning disabilities. Despite RTI’s primary conceptualization as a model of specific learning disabilities identification (Gresham, Gansle, & Noell, 1993), many researchers see its broad applicability and utility (Kame’enui, 2007). Kame’enui argues that “RTI, at least the essence of it as a concept, is thoroughly consistent with the statutory intent and practice of special education, which requires a process of continuous evaluation and determining how a child responds as an individual to ‘specially designed’ instruction or interventions” (p. 7). RTI intervention necessitates the use of scientific, research-based interventions and measures that are “reliable and valid for the purposes of assessment” and “implemented with fidelity” (Kame’enui, p. 7). RTI methodology continues to evolve.

Sanctioned interventions are the lynchpin within this proactive, preventative RTI model (Fairbanks et al., 2007; Kame’enui, 2007) and the focus of this study. Tilly and Flugum (1995) defined intervention as “a planned modification in a specified way” (p. 485). Intervention has also been defined “as a set of procedures and strategies designed to improve student performance with the intent of reducing the student’s problem” (Upah & Tilly, 2002, p. 483). Implementing these empirically validated interventions as part of RTI facilitates an early, preventative, research-based approach to assessment and intervention (instruction) for underachieving children. RTI is informed by the evidence that most typically sighted, struggling readers who are assessed early and receive research-based intervention can be brought up to and maintained at grade level in key areas such as ORF, suggesting an early, crucial period for intervention (Good et al., 1998; Fletcher & Lyon, 1998; Foorman et al., 1997).
It is thought, but as yet not proven, that, as with their typically sighted peers, braille readers are most at risk during the period from kindergarten to grade three and, conversely, likely most able to benefit from the specialized early assessment and intervention, and the improved instruction, that this process affords their sighted counterparts (Barlow-Brown & Connelly, 2002). Though additional research is needed, the extant literature indicates that certain distinct aspects of the process of learning to read using braille also support the need for early, targeted ORF assessment and intervention. Braille-reading rates, for example, have been significantly correlated with the length of time spent learning braille and with when (formal) braille instruction had begun, irrespective of the nature of that instruction or school placement (e.g., residential or public) (Pring, 1984). Further, there is support in the literature for the necessity of reaching a threshold level of automaticity with the braille code to facilitate phonological awareness among young braille readers (Pring, 1982, 1984).

Curriculum-based measurement (CBM). RTI models typically employ a form of curriculum-based measurement (CBM) methodology to monitor the effects of interventions targeted at a number of critical components of reading such as ORF as well as phonemic awareness, phonics, vocabulary, and comprehension (NRP, 2000). Curriculum-based measurement refers to “a set of standardized and validated short duration tests that are used by special education and general education teachers for the purpose of evaluating the effects of their instructional programs” (Shinn, 2002, p. 671) with respect to basic skills relating to reading, for example. CBM is “dynamic in that measures are designed to be sensitive to the short-term effects (i.e., 4-6 weeks) of instructional interventions; they are designed to assess change” (Shinn, p.
675), serving as “indicators in that they were validated to be correlates of key behaviors indicative of overall performance in an academic area” (Shinn, p. 675). CBM can be conceived of as “educational thermometers” for these key indicators (Shinn, p. 675). A modicum of preliminary research suggests that CBM has promise for students with visual impairments (Morgan & Bradley-Johnson, 1995), yet this research is in its infancy. Morgan and Bradley-Johnson investigated the technical adequacy of CBM for elementary school braille readers, conducting one-, two-, and three-minute ORF assessments, and concluded that a two-minute oral reading sample was preferable for the assessment and ongoing monitoring of ORF for braille readers, rather than the one-minute metric used with sighted students (Morgan & Bradley-Johnson). High internal consistency coefficients, alternate-form reliability coefficients, and test-retest reliability were also reported. Concurrent validity was established with high correlations between the CBM measure and the Diagnostic Reading Scales (Spache, 1981; Morgan & Bradley-Johnson). Moreover, the CBM data correlated significantly with overall teacher ratings of individual student reading competency (Morgan & Bradley-Johnson).

Oral reading fluency (ORF). As previously mentioned, CBM has been used extensively to assess ORF. Meyer and Felton (1999) defined fluency as “the ability to read connected text rapidly, smoothly, effortlessly, and automatically with little conscious attention to the mechanics of reading, such as decoding” (p. 284). Hudson, Lane, and Pullen (2005) asserted that ORF “captures a variety of processes related to reading: using sound-symbol relationships to translate text to sound, accessing word meanings, making connections between words and sentences, relating textual meaning to prior knowledge, and making inferences” (p. 704).
ORF is most commonly measured with respect to oral reading rate, or the number of words read aloud correctly in one minute. Research suggests that “contextual reading” as opposed to reading lists of words (Jenkins, Fuchs, van den Broek, Espin, & Deno, 2003) and oral reading as opposed to silent reading (Fuchs, Fuchs, Eaton, & Hamlett, 2000) are the best measures of ORF (Hudson et al., 2005, p. 705).

Studies have shown support for the connection between improvements in ORF and improvement in comprehension. However, the connection is correlational (Shinn et al., 1992; Dowhower, 1987) and the relationship between fluency and comprehension is far from understood (Meyer & Felton, 1999; Pikulski & Chard, 2005). As Stecker, Roser, and Martinez (1998) argue, “The issue of whether fluency is an outgrowth [of] or a contributor to comprehension is unresolved. There is empirical evidence to support both positions” (p. 300). Ultimately, Stecker and colleagues state that “[f]luency has been shown to have a ‘reciprocal relationship’ with comprehension, with each fostering the other” (p. 306). It has also been argued that fluency may be related to comprehension for braille readers (Harley, Truan, & Sanford, 1987; Koenig & Holbrook, 1989); however, only limited research has investigated this potential link. In the end, ORF has been shown to have more predictive validity in terms of comprehension than questioning, cloze, and retelling (Fuchs et al., 2001). Ultimately, however, despite its importance, ORF is often a neglected component of reading competency and effective reading instruction across the grades (Kame’enui & Simmons, 2001; Pikulski & Chard, 2005). Moreover, despite the overwhelming evidence to support ORF as a technically adequate diagnostic tool for overall reading competency (Fuchs et al.), some serious concerns among practitioners regarding face validity exist.
Hence, an accompanying measure of comprehension such as an oral retell fluency technique, particularly one based on the passage used to assess ORF (Good & Kaminski, 2002; Layton & Koenig, 1998), is often needed to enhance face validity for practitioners. In buttressing the face validity of ORF, there are a number of ways that comprehension can be assessed (Parker, Hasbrouck, & Tindal, 1992); however, this serves to complicate the investigation of the connections between fluent reading and comprehension. For the purpose of this study, oral retell fluency was chosen to assess comprehension, and this process is defined as the “percentage of content words voluntarily provided by the student through an oral retelling of the passage following the last reading of the session” (Layton, 1994, p. 39; see also Good & Kaminski, 2002). This common process was desirable given that it was in keeping with a variety of assessment protocols used in the study (e.g., the standardized DIBELS oral retell fluency assessment), time efficient, standardized, and did not require the young participants to read questions and/or braille out answers to questions. Further, there is evidence in the literature that oral retelling as a comprehension measure correlates well with other comprehension measures (e.g., Stanford Achievement Tests; see Fuchs, Fuchs, & Maxwell, 1988) and with alternative informal measures of comprehension such as the cloze procedure or question-and-answer formats (Fuchs et al.; Layton, 1994; Salvia & Hughes, 1990). Additionally, Good and Kaminski reported a correlation of 0.59 between Correct Words per Minute (CWPM) and oral retell fluency scores.

Repeated reading. A wide variety of empirically validated approaches to improving ORF exist for typically sighted students, many of which employ CBM methodologies and are implemented within an RTI framework.
Repeated reading (Samuels, 1979) is arguably preeminent among them (Meyer & Felton, 1999; NRP, 2000; Rashotte & Torgesen, 1985; Therrien, 2004). Chard and colleagues (2002) reported that “intervention research on fluency development for students with LD had been dominated by research on repeated reading” (p. 403). According to Samuels, repeated reading, based on an information processing model (Meyer & Felton), involves the rereading of “a short, meaningful passage several times until a satisfactory level of fluency is reached” (p. 404). Repeated reading is said to provide, through repetition, the necessary practice and heightened opportunities to respond (Daly, Martens, Hamler, Dool, & Eckert, 1999; Daly, Persampieri, McCurdy, & Gortmaker, 2005). Repeated reading is considered a simple and efficient intervention (Nelson, Alber, & Gordy, 2004) for improving the lower order reading skill of ORF. Repeated reading boasts an “extensive research base” (Therrien, 2004, p. 253) and proven efficacy for students without disabilities and those with learning disabilities (Therrien; see also Daly, Murdoch, Lillenstein, Webber, & Lentz, 2002; Vaughn et al., 2000).

While there are many variations of Samuels’s (1979) original repeated reading technique (Hudson et al., 2005), repeated reading typically involves the oral rereading of a short reading passage, often for one minute at a time (Daly, Chafouleas, et al., 2005; Glazer, 2007; Vaughn et al., 2000), during each reading session until the student reads the passage approximately three to four times (Samuels), attempting to improve his or her ORF with each rereading (Daly, Chafouleas, et al.) and/or reaching a predetermined criterion ORF reading rate. A new passage is read in each subsequent repeated reading session. If the repeated reading intervention involves goal-setting, a new, higher criterion ORF may be set after the participant has consistently reached criterion (Layton & Koenig, 1998).

Instructional Hierarchy (IH).
There are many potential explanations for dysfluency within the wider student population (Chard et al., 2002; Wolf & Katzir-Cohen, 2001), such as phonological problems, a lack of practice, and motivational issues, and it is increasingly thought that interventions such as repeated reading should be tailored to address the underlying reasons for the specific presenting problems (Chard et al., 2002; Wolf & Bowers, 2000). For example, in terms of designing and implementing an intervention for ORF, it is important to consider whether the participant has a skill and/or a performance (e.g., motivational) deficit. Overall, repeated reading is best suited to students who have moved beyond the acquisition stage of learning to read, such that they are accurate but slow readers (Therrien & Kubina, 2006).

A brief examination of Haring, Lovitt, Eaton, and Hansen’s (1978) stage-based model of learning, called the “Instructional Hierarchy (IH),” sheds light on the nature of the repeated reading intervention and its applicability to skill deficits relating to ORF. The IH is a “conceptual framework” that can inform the selection of interventions when remediation is required such that interventions are well matched with student learning needs (McCurdy, Daly, Gortmaker, Bonfiglio, & Persampieri, 2007). The IH comprises four stages: acquisition, fluency, generalization, and adaptation (Haring et al.). During acquisition, the first stage, a student is beginning to learn a target skill and, therefore, lacks accuracy and fluency in the said skill. According to the IH model, interventions designed for students in this stage of learning would focus upon improving accuracy (Haring et al., 1978). During fluency, the second stage in the IH model, a student is able to demonstrate a skill accurately but performs it slowly or inefficiently. Students at this stage of learning would benefit most from interventions that target improvements in speed.
Students in the third stage, generalization, need assistance with demonstrating the target skill across settings and in a variety of situations and with discriminating the skill from other similar skills. Students who are both accurate and fluent in the target skill across many settings, as per the fourth stage in the IH model, would benefit from interventions that promote the adaptation or modification of the skill in response to novel situations and/or task-demands (Haring et al.).

In keeping with the IH, it appears that repeated reading is best suited to helping students whose skills fall largely within the fluency stage of learning. Repeated reading offers struggling readers the structured opportunity to systematically drill and practice reading with feedback and encouragement regarding accuracy and speed. Repeated reading-induced improvements in ORF can be thought to enable improvements in comprehension. Contrastingly, repeated reading appears to be least effective in remediating comprehension problems stemming from lower-order accuracy problems (e.g., decoding) or from a combination of lower- (e.g., decoding) and higher-order skills problems (e.g., integration of content information) (Therrien, Gormley, & Kubina, 2006). However, many researchers recommend flexibility regarding the interpretation of the IH’s “linear or stage-like thinking” because the existing literature suggests that an emphasis upon “a single dimension of responding (e.g., accuracy) as a prerequisite to working on another dimension of responding (e.g., fluency and/or generalization) is a narrow interpretation of the IH and is unsupported by the extant literature” (McCurdy et al., 2007, p. 23; see also Martens & Eckert, 2007). Rather, numerous studies show how combinations of instructional variables and motivational variables around the core repeated reading intervention effect the most pronounced changes (Martens & Eckert; McCurdy et al.).
In addition to assisting researchers and practitioners in developing interventions based on optimal combinations of instructional and motivational factors, “[t]he IH may find its greatest utility and application as a heuristic that expands our understanding of more basic behavioral principles of behavior” (McCurdy et al., 2007, p. 8). For example, ORF can be conceived of as a type of academic response governed by stimulus control, “in which the response comes under the control of an instructional antecedent (e.g., a word in a text)” (McCurdy et al., p. 8). According to McCurdy and colleagues, “Stimulus control occurs when differential reinforcement is consistently applied to responding” (p. 8). ORF development, like other academic learning, “occur[s] as a function of the number of learning trials delivered during instructional sessions” (McCurdy et al., p. 8; see also Heward, 1994; Skinner, Fletcher, & Hennington, 1996).

Repeated reading can be interpreted from the IH theoretical perspective as a skill-based intervention (Chafouleas, Martens, Dobson, Weinstein, & Gardner, 2004) that serves as a means of “strengthening the learning trials” provided to readers, particularly those struggling with proficiency. Specifically, repeated reading offers a potentially more potent response prompt (e.g., a cued response in that the student is told what to do beforehand), coupled with “stronger contingencies for behavior” (McCurdy et al., 2007, p. 8) such as performance feedback in the form of immediate error correction (Barbetta, Heron, & Heward, 1993; Barbetta, Heward, & Bradley, 1993). As Chafouleas and colleagues argued:

from a theoretical perspective . . . the goal of [instruction and reinforcement] is ultimately to bring rapid and accurate reading under the stimulus control of printed text.
As a general case, development of stimulus control involves the strategic use of assistance and reinforcement to promote the occurrence of increasingly more efficient forms of behavior in the presence of discriminative stimuli (p. 70; see also Alberto & Troutman, 2003).

Specifically with respect to ORF, Chafouleas and colleagues stated that “stimulus control may best be promoted using different combinations of modeling, practice, feedback, and reinforcement, depending on a student’s reading ability” (p. 70), as there is evidence in the literature that “particular combinations are likely to be differentially effective depending on the types of readers” (p. 70). Hence, by providing multiple opportunities to practice reading a particular passage coupled with performance feedback and reinforcement, this intervention can help students resolve their ORF and ORF-related comprehension problems (Chard et al., 2002; Levy et al., 1997; Sindelar, Monda, & O’Shea, 1990; Therrien, Wickstrom, et al., 2006).

Braille readers. The limited extant research suggests that ORF may be a critical component of reading competency among braille readers, and that there is most often scope for improvement for these students (Wallace, 1973). It also appears that CBM can be used effectively to assess ORF. In keeping with the LMA purpose and process, RTI logic (Fairbanks et al., 2007) has intuitive appeal for braille-reading students with or at risk for dysfluency. RTI offers a framework for the prevention or remediation of academic failure that hinges upon the use of well-researched interventions. Repeated reading is represented in the very limited reading intervention research pertaining to students with visual impairments. Two repeated reading studies have been conducted to date, and there is some very tentative, limited, and preliminary evidence of the effectiveness of this practice for this population.
One study employed a repeated reading strategy and reported improvements in ORF across all participants with low vision who read print (Layton & Koenig, 1998), while another study included one braille reader among its participants with visual impairments and again reported the effectiveness of the intervention for all participants (Pattillo, Heller, & Smith, 2004). Pattillo and colleagues and Layton and Koenig employed an oral retell fluency measure to monitor comprehension for students with visual impairments, as have those working with other populations (Young, Bowers, & MacKinnon, 1996; O’Shea, Sindelar, & O’Shea, 1985).

Based on the available literature, it was hypothesized that the implementation of the repeated reading intervention may effect meaningful improvements in ORF, specifically CWPM and errors. It was also hypothesized that the use of the intervention may lead to improvements in comprehension. This study’s experimental and descriptive analyses were designed to initiate formal inquiry into the efficacy of the repeated reading intervention to improve braille-reading proficiency.

Research Questions

The study addressed three experimental and three descriptive research questions:

1. Is there a functional relationship between the implementation of a repeated reading intervention and an improvement in oral reading fluency (ORF), operationalized as correct words per minute (CWPM) and errors per minute, for braille-reading students in grades one, two, and three?

2. Is there a functional relationship between the implementation of a repeated reading intervention and an improvement in comprehension, operationalized as oral retell fluency, for braille-reading students in grades one, two, and three?

3. Is there a functional relationship between the implementation of a repeated reading intervention and an improvement in CWPM for Low Content Overlap (LCO; progress monitoring) passages?

4.
Are any gains in CWPM during the repeated reading intervention associated with generalized improvement in untrained, High Content Overlap (HCO) passages?

5. Are the repeated reading intervention and implementation process socially valid (i.e., important, acceptable, and useful) from the point of view of teachers for students with visual impairments (TVIs)?

6. Does the implementation of the repeated reading intervention change the participants’ self-perceptions as readers?

Definition of Terms

The following key terms have been defined in order to clarify the concepts and terminology involved in this study.

Curriculum-based Measurement (CBM)

Curriculum-based measurement is a standardized method employed to assess reading fluency, and this methodology typically involves the use of curricular materials (Daly, Chafouleas, et al., 2005; Deno, 1985).

High Content Overlap (HCO)

High Content Overlap (HCO) refers to the degree to which a passage shares the same or similar words with another passage. Given the word overlap, HCO passages can function as a means of estimating a reading intervention’s ability to affect generalization (Gortmaker, Daly, McCurdy, Persampieri, & Hergenrader, 2007).

Instructional Hierarchy (IH)

The Instructional Hierarchy is a stage-based model of skill learning that includes the following levels of responding: accuracy, fluency, generalization, and adaptation (Haring et al., 1978; Daly, Lentz, & Boyer, 1996). The IH model can be used to inform the selection of the most effective, parsimonious interventions and the nature of instruction.

Repeated Reading

Repeated reading is a reading intervention intended to improve ORF. Numerous variations of repeated reading exist, but all involve students rereading stories aloud several times, usually while receiving guidance and feedback from a teacher.
Response to Intervention (RTI)

Response to Intervention (RTI) epitomizes a new paradigmatic approach to conceptualizing, preventing, and addressing academic and social concerns (Fairbanks et al., 2007; Kame’enui, 2007). In practice, RTI involves attempting to maximize student success by providing high-quality instruction and intervention in keeping with student needs and monitoring progress frequently (i.e., collecting data regarding students’ responsiveness to instruction) so as to enable and inform decision making regarding changes to instruction, goals, and levels of support.

Oral Reading Fluency (ORF)

Oral Reading Fluency (ORF) refers to the speed and accuracy of reading aloud. ORF is typically measured as correct words per minute.

Social Validity

Social validity refers to the social importance, acceptability, and value of treatment goals, procedures, and effects (Cooper, Heron, & Heward, 2007; Gresham & Lopez, 1996; Wolf, 1978).

Treatment Integrity

Treatment integrity refers to the “extent to which the independent variable [was] implemented or carried out as planned” (Cooper et al., 2007, p. 235).

Visual Impairment

In British Columbia (BC), students with visual impairments are defined as those students who experience a “range of difficulties with vision and includes the following categories: blind, legally blind, partially sighted, low vision, and cortically visually impaired” (BC Ministry of Education [MOE], 2006, p. 127). The BC MOE stipulates that the visual impairment must be educationally significant in that the student’s “visual acuity is not sufficient for the student to participate with ease in everyday activities” (p. 127).

CHAPTER 2

Review of the Literature

A number of interrelated topics inform this study, and each is addressed in turn. First, selected aspects of literacy for students with visual impairments, particularly those who read braille, are reviewed insofar as they relate to oral reading fluency (ORF) in braille.
Research regarding ORF as a construct is then reviewed in relation to braille readers, as is the framework of Response to Intervention (RTI) and the use of curriculum-based measurement (CBM) to measure and monitor ORF. Finally, repeated reading research that informs the design of the intervention for braille readers is reviewed.

Literacy for Students with Visual Impairments

Students with visual impairments in the province of British Columbia (BC) receive educational services in both public and private school systems. In either system, eligibility for supplemental funding is predicated on demonstrating an educationally significant visual impairment in addition to meeting a number of conditions. These conditions include, for example, that students with visual impairments must receive frequent and regular specialized services “directly related to the student’s visual impairment” from a qualified teacher for students with visual impairments (TVI) and have a “current Individualized Education Plan [IEP] in place that includes individualized goals with measurable objectives, adaptations and/or modifications where appropriate, the strategies to meet these goals, and measures for tracking student achievement in relation to the goals” (BC MOE, 2006, p. 127). These IEP-related parameters reflect both the current provincial and international educational climate of increasing emphasis on individualized educational programming and heightened accountability, and they provide the rationale both for educational interventions that have the potential to enhance student achievement and for the methodologies designed to assess the effectiveness of these interventions (Richards, Taylor, Ramasamy, & Richards, 1999). For the purpose of this study, literacy was defined as “the ability to read and write in braille and/or print” (Ferrell et al., 2006, p. 5).
Students’ communication or literacy needs are seen to depend in large part on the degree of useful vision, the presence of additional disabilities, and the nature of specific tasks (Hatlen, 1996). With respect to the criterion of degree of useful vision, a student with a visual impairment who has low vision has needs that reflect a “difficulty accomplishing visual tasks, even with prescribed corrective lenses” (Corn & Koenig, 1996, p. 4) and the capacity to “enhance his or her ability to accomplish these tasks with the use of compensatory visual strategies, low vision and other devices, and environmental modifications” (Corn & Koenig, p. 4). Corn and Koenig (2000) indicate that “while some students with low vision are able to use standard print and see some or all portions of information printed at a distance in classrooms (e.g., on dry-erase boards), others use optical devices, large print, and/or braille to become efficient readers and writers” (p. 305). This study focuses on students who are unable to obtain functional, reliable, useful visual information and for whom braille was the preferred medium.

Learning Media Assessment (LMA)

Due to their varying degrees of functional vision, students with visual impairments have a wide variety of literacy needs and several available options such as braille and/or print. In some cases print will be enlarged or read with an optical device. The decision to read braille and/or print is currently thought to be best informed as part of a formal Learning Media Assessment (LMA; Koenig & Holbrook, 1995). A formal LMA is required by law and consistently administered in a number of American states such as Texas, Florida, and Colorado (C. Holbrook, personal communication, October 10, 2007). While generally considered best practice and recommended within British Columbia, undertaking a formal LMA is currently not legally required in Canada.
Use of Learning Media Assessment involves an examination of a student's literacy skills in an effort to determine the student's most effective medium (e.g., print or braille) or media (e.g., both print and braille) for accessing instruction (Koenig & Holbrook, 1995), and to monitor student needs and progress. Hence, LMAs necessarily involve a measure of reading speed and comprehension, which may be achieved through an informal reading inventory (IRI) or other means. The need for LMA stems from the complexity of the decisions regarding the appropriate reading medium for students with visual impairments (Koenig, Sanspree, & Holbrook, 1991). Koenig and colleagues state: there can be no predetermined reading medium for all students within an arbitrary category [e.g., for students who are legally blind] and still uphold the principle of educating each student according to his or her individual capabilities and needs... The task of educators is to provide instruction in the reading mediums which will allow a child to ultimately become a literate adult, not to restrict opportunities for achieving literacy by failing to match a child's existing abilities with the appropriate learning mediums (p. 1). Examination of the results of an ongoing LMA is fundamental to the information-gathering process that should occur prior to the use of other informal or formal, standardized assessments (e.g., those designed to test for reading proficiency or additional disabilities). LMA findings help educators, along with other members of the student's educational team such as parents, test adapters, and others, determine the student's preferences regarding reading media in order that they may assess fluency and comprehension in the student's preferred, optimal medium (Koenig et al., 2000).
As part of the LMA process, educators may also investigate reasons for unexplained low levels of academic achievement that may be unrelated to the implications of the student's vision and report their findings in the LMA. The LMA may also include information about whether the student has been observed and evaluated while carrying out tasks in non-testing environments that mirror those used in the formalized testing process. For example, prior to a reading performance assessment, educators may evaluate a student's capacity in academic tasks such as reading aloud in class and taking timed tests that will ultimately be included in the assessment protocols. Limited empirical (Lusk & Corn, 2006) and anecdotal evidence suggests that the practice of conducting LMAs and the ORF assessments therein is not widespread in Canada and abroad. For example, of the 95 TVIs surveyed by Lusk and Corn from 21 American states and one Canadian province, serving a total of 108 students aged 4 to 21 (preschool to grade 12) who were learning or using print and braille, only one third were able to provide oral and/or silent reading rates for their students. The consequences of this lack of assessment and progress monitoring are worrisome. As Lusk and Corn argue, "If teachers are not taking measures of their students' reading rates, they have no way to tell how the students are progressing in their acquisition of literacy skills or how competitive the students can be expected to be with their peers at the present time or in future employment situations" (p. 662). Trent and Truan (1997) argue that professionals serving braille readers "must continue to determine how students who read braille are actually performing. With this data in hand, they can continue to document effective ways to increase reading speeds, build vocabulary, and help braille-reading students to become competitive, functioning adults" (p. 497). The causes of these low rates of LMA completion are unknown.
Many explanations have been suggested. A lack of scientifically-based assessment and intervention tools (Ferrell et al., 2006), TVI qualifications, and service delivery models (e.g., braille literacy instruction on an itinerant basis; Koenig, 1992) are just a few of the factors proposed as potential explanations.

Framework for Literacy for Students with Visual Impairments

Koenig's (1992) framework for understanding the literacy of persons with visual impairments recognizes the distinctive challenges presented by this sensory impairment (Hatlen, 1996). Koenig argues that literacy is manifested in "(1) written communication, (2) through communication with an intended audience, (3) through the successful application of reading and writing skills, and (4) at various levels throughout the life span" (Koenig). Koenig argues that, given the unique challenges involved in accessing print media, it is critical for individuals with a visual impairment to "go beyond the basic level of literacy to gain access to materials in regular print independently" (p. 277) in order "to meet the demands of a society that is immersed in and driven by print" (p. 281). Students may, for example, need specific skills and techniques such as those involving braille and devices such as braille technology to enhance their receptive communication (e.g., ability to read printed text) and/or to adapt their medium of expressive communication to share their written material with the intended audience (Koenig). With respect to literacy, reading is of central concern. The three levels of literacy, emergent, basic, and functional, as proposed by Koenig (1992) are tailored to reflect the additional literacy-related demands faced by individuals with visual impairments across their lifetimes. Of the three levels, the emergent level is of particular relevance to this study. According to Smith (1989), "Emergent literacy refers to a child's early experiences with reading and writing" (p.
528) and includes the time between the child's birth and the initiation of conventional reading and writing (Craig, 1996). The analysis of ORF has historically played a part in determining and monitoring how reading with braille develops among emergent readers. ORF, often called reading rate in the visual impairment research (Koenig, Holbrook, & Layton, 2001), has inspired and informed research into braille-reading skill development (Lorimer, 1990; Knowlton & Wetzel, 1996; Nolan & Kederis, 1969; Olson, Harlow, & Williams, 1975). ORF in braille is conceptualized largely in the same way as ORF for print readers with typical vision.

Oral Reading Fluency (ORF)

Within literature focusing on students with typical vision, an impressive array of empirical studies provides support for the construct of ORF as an effective and appropriate measure of reading competence at the individual level (Deno, Mirkin, & Chiang, 1982; Fuchs et al., 1988; Fuchs et al., 2001; Hosp & Fuchs, 2000; Marston, 1989). As a technically adequate diagnostic tool, ORF is strongly supported within research related to students who are typically sighted (Chard et al., 2002; Fuchs et al., 2001) and is connected to decisions regarding reading disabilities (Breznitz, 1991; Chard et al.; Fuchs et al., 1988; Hudson et al., 2005). ORF is also associated with levels of comprehension (Breznitz, 1987; Deno, Marston, Shinn, & Tindal, 1983; Dowhower, 1987; Rasinski, 1989, 1990). Additionally, ORF is highly correlated with teacher judgments of proficiency (Deno et al.; Hudson et al.). In addition to reading speed and accuracy, ORF is said to involve prosody, "the linguistic term to describe the rhythmic and tonal aspects of speech" (Hudson et al., 2005, p. 704) or the "'music' of oral language" (Hudson et al., p. 704).
Despite the strong face validity of a link between prosodic reading and reading comprehension (Kuhn & Stahl, 2000), research attempting to link prosody (e.g., intonation, stress patterns, overall expression) to comprehension has been equivocal. It is, for example, difficult to determine whether prosodic improvements are the cause or result of improved comprehension (Dowhower, 1987; Kuhn & Stahl; Pinnell et al., 1995). Dowhower reported that the introduction of a repeated reading intervention with grade two students coincided with improvements in prosodic reading accompanied by improvements in reading rates, accuracy, and comprehension. Yet a causal relationship between prosodic improvement and the other key indicators of reading proficiency was unclear (Hudson et al.). Despite the acknowledged importance of prosody, this study focused upon ORF in terms of speed and accuracy while monitoring comprehension.

Theoretical Models for ORF

A variety of studies have shown that "both rapid reading of high-frequency words and rapid decoding as a means to enhance text understanding appear critical for typical reading development" (Chard et al., 2002, p. 386; see also Fuchs et al., 2001; Kuhn & Stahl, 2000; Meyer & Felton, 1999). Three popular theoretical models offer conceptual frameworks to explain how readers respond to text at the word recognition level and what results when word recognition is ineffective. These models also help with conceptualizing ORF as a performance-based gauge of overall reading proficiency that encompasses comprehension (Fuchs et al.; Potter & Wamre, 1990). LaBerge and Samuels' (1974) often-cited "automaticity model of reading" framework advances a "bottom-up serial-stage model of reading" (Fuchs et al., 2001, p. 239). This perspective suggests that in order to read fluently, individuals must be able to engage automatically in lower level reading processes such as decoding.
This automaticity, in turn, allows readers to focus their attention upon the more complex skills involved in the multifaceted skill of reading, such as comprehension (Chard et al., 2002; Fuchs et al.; LaBerge & Samuels; Pikulski & Chard, 2005). LaBerge and Samuels' theory of automatic information processing led to research designed to target the improvement of reading speed and inspired the development of the repeated reading intervention (Chard et al., p. 386; Samuels, 1979). Samuels (1979) discovered that, while implementing the repeated reading intervention, ORF increased while error rates (e.g., word recognition errors) decreased with each rereading of the same passage. Moreover, the "cold read" scores, or the ORF scores for the first reading of a passage, improved across repeated reading sessions. Also in accordance with his automaticity theory, Samuels argued that improvements in comprehension attended improvements in ORF. While the goal of repeated reading is ultimately enhanced reading comprehension, the focus of the intervention is on automatic decoding to increase speed; hence, errors and the nature of comprehension checks are comparatively less emphasized. LaBerge and Samuels' (1974) model is akin to the "verbal efficiency model" proffered by Perfetti (1977, 1985). Perfetti's model suggests that slow, laborious word processing and slow word reading impede automaticity in reading and tax working memory, thereby hampering comprehension (Chall, 1979; Chard et al., 2002). Chall and colleagues depicted such readers as "glued to print" (p. 41). Contrastingly, Stanovich's (1986, 2000) theoretical model reflects a more interactive conceptualization of reading, one that suggests that lower level automaticity need not completely precede higher level processing and that readers employ prior contextual knowledge to facilitate word identification and offset poor word-level skill (Fuchs et al., 2001).
Stanovich's (1986) work also seeks to explain the relationship between reading fluency and the total time readers engage in reading (Pikulski & Chard, 2005). Higher proficiency in terms of fluency is related to higher amounts of time spent reading, as more proficient readers enjoy reading and set more time aside to practice, thus further improving their skills (Pikulski & Chard). Alternatively, nonfluent readers are more likely to avoid reading, and thus increasingly underperform compared to their fluent peers (Pikulski & Chard). Ultimately, the NRP's (2000) report concluded unequivocally that "fluency develops from reading practice" (p. 3-1), and "repeated oral reading practice or guided oral reading practice" were identified as empirically validated procedures for improving fluency. Among those procedures, the repeated reading intervention (Samuels, 1979) was identified as a highly credible approach (NRP). Theoretical models applied to braille-reading performance. Ultimately, the ORF-related assessments and interventions involved in this study are informed by the aforementioned theories, having been designed to address a lack of automaticity or proficiency in reading connected text. It has been suggested, but not yet fully confirmed empirically (Steinman et al., 2006), that aspects of the process of learning to read and write using print and braille may be reasonably analogous (Barlow-Brown & Connelly, 2002; Kusajima, 1974; Nolan & Kederis, 1969; Pring, 1994; Wetzel & Knowlton, 2000). As Rex and colleagues (1994) assert, "The way children who are blind learn - both general learning and literacy learning - is more similar to the learning of children with normal vision than it is different" (p. 32). For example, Trent and Truan (1997) argue that early word-recognition skills, such as the use of phonological and contextual cues to facilitate whole word identification, are "parallel for braille and print readers" (p. 494).
Based on the results of her research studies, Pring (1982) argues for a similar "reciprocal interaction between phonological awareness and reading acquisition" (p. 47) irrespective of tactile or visual input. Pring (1982, 1984) reports evidence that braille readers use phonological awareness skills to facilitate letter and word recognition of tactile representations. Given the perceived similarities between reading in print and reading in braille, the limited reading research pertaining to students with visual impairments subscribes to the aforementioned information processing models and argues for the relevance and importance of automaticity and, accordingly, of fluency-building strategies such as repeated reading (Layton, 1994; Pattillo et al., 2004). Research by Carreiras and Alvarez (1999) offers support for an information processing approach, showing that higher word frequency and repetition were associated with higher reading speeds for the 26 high school braille-reading participants (with no additional disabilities). Carreiras and Alvarez reported that other word-level factors such as word length also were associated with word-reading times; longer words took their participants longer to read. Based on these and other findings, Carreiras and Alvarez concluded that "it seems that print and braille-reading are similar at the level of word processing" (p. 589). Further, research by Pring (1984) indicated that braille readers, like their sighted counterparts, identify high-frequency, familiar words more quickly than less familiar words. She advanced the concept of tactile, "sensory perceptual decoding" that was minimized or deemphasized when words were familiar, such that the student had the potential to take better advantage of contextual facilitation. More attentional resources could, thereby, be devoted to semantic content, to the benefit of comprehension.
Unfortunately, no (information processing) model of oral reading fluency, or of general literacy skill development, has been specifically proposed for those using the braille code (Steinman et al., 2006). Similarities between print and braille-reading development aside, the current theoretical models of reading development were designed with typically sighted readers in mind and, understandably, imperfectly capture the process for braille readers. In the end, then, all of the previously described theories fail to address specific, key considerations intrinsic to braille readers' processes of developing as readers. Additional research is still needed, therefore, to more fully understand literacy skill development for students using braille (Trent & Truan, 1997). In recognition of the unique factors that are necessarily involved in learning to read using braille, this study is also conceptualized within an interactive theory of learning ability (Lipson & Wixson, 1986). According to this interactive theory, external variables (e.g., the quality and nature of instruction, assessment, and intervention) are said to interact with variables intrinsic to the learner (e.g., tactile methods of reading) as sources of variance that affect literacy skill development (Lipson & Wixson). Accordingly, ORF challenges may be attributable to the effects of variables external and internal to the braille reader. Additionally, the study applies a skills deficit theoretical model (Francis et al., 1996) to explain ORF problems in terms of a deficiency or weakness in key braille-reading skills rather than necessarily a developmental lag. Challenges with respect to ORF that may be experienced by a braille reader are further conceptualized as a type of cumulative learning disadvantage (Francis et al.), one that may be due, for example, to the absence of the unifying, mediating influence of vision on learning (Lowenfeld, 1973; Pring, 1994). For example, without specific educational supports, braille-reading students may struggle to read words that are
For example, without specific educational supports, braille-reading students may struggle to read words that are  37  beyond their direct experience (e.g., words relating to the circus or the zoo), which has implications for ORF. Early Braille Skill Development: Unique Implications for ORF Several unique and interrelated factors, fundamental to the process of using braille to read, are thought to have particular implications for ORE (Pring, 1984). These factors include braille readers’ lack of functional vision (Lowenfeld, 1973), the nature of the braille tactile script itself, and braille readers’ reliance on an alternative, tactual modality (i.e., touch rather than vision) (Dodd & Corm, 2000; Greaney & Reason, 2000; Pring; 1984; Rex et al., 1994). Aspects of these factors, in so far as they pertain to ORE, are described further below. Lack offunctional vision. The literature pertaining to braille readers often references the influence of a lack of functional vision on these students’ literacy skill development, emphasizing the severely reduced incidental learning that may result (Dodd & Conn, 2000; Pring, 1994). For example, some research suggests that this lack of vision may delay blind children’s development of object permanence and semantic representations (e.g., of objects) and the corresponding vocabulary development may reflect their unique experiences (Bigelow, 1990; Steinman et al., 2006). For example, children who are blind may encode and store information gained from their remaining senses about a banana that does not include the colour yellow given the lack of conceptualization of color (Bigelow; Steinman et al.). Reading skill development for children who are blind may also be influenced in other ways by the nature and range of their experiences (Koenig & Farrenkopf, 1997). For example, direct experience on a farm would likely enrich the enjoyment and understanding of a book about life on a farm. 
Further, students who read braille often face compromised early exposure to this, their preferred medium. Dodd and Conn (2000) argue that "[b]lind children are disadvantaged in the acquisition of literacy" (p. 2) owing, for example, to "limited access to braille before they start school" (p. 2; see also Greaney & Reason, 2000). Pre-school braille readers generally experience far less incidental exposure to braille words compared to sighted children's exposure to print (Craig, 1996; Steinman et al., 2006). Further, Trent and Truan (1997) and others argue that braille readers have reduced opportunities to learn words informally, "in the context of everyday life" (p. 494), and to have this word learning reinforced through parents' modeling of braille-reading. Their word learning largely begins with the onset of formal schooling, unlike that of their typically sighted peers (Greaney & Reason; Trent & Truan). This limited exposure may, consequently, affect their ability to develop a functional understanding of the social purpose of reading and may have a negative impact on their emergent literacy processes (Tompkins & McGee, 1986; Craig). Insufficient early literacy instruction is thought to have additional implications for braille readers. For example, although additional research is certainly needed, it has been suggested that braille readers experience "a critical period, or window of opportunity, during which learning to read is easiest, most natural, and most reinforcing" (Trent & Truan, 1997, p. 494), as is true for their typically sighted counterparts (Adams, 1990; Chall, Jacobs, & Baldwin, 1990). Reduced exposure to braille in the early years may also result in poorer associations between graphemes and graphemic combinations and their corresponding concepts (Steinman et al., 2006).
Additional research is needed regarding phonological awareness for braille readers, but there is some evidence that emergent braille-reading is "dependent upon phonological skills and require[s] the development of the same graphemic knowledge that is used by readers who are sighted to learn sound-symbol relationships" (Steinman et al., p. 41; see also Gillon & Young, 2002; Wormsley & D'Andrea, 1997). Due to factors like compromised incidental learning and limited pre-school exposure to braille, Trent and Truan (1997) assert that braille readers may fail to master early word recognition and decoding skills commensurate with their typically sighted peers, with potential implications for fluency skill development. If their trajectories diverge, it may become increasingly difficult for slower braille readers to "catch up" to their typically sighted peers, who have long enjoyed immersion in their medium of print (Trent & Truan). Further, Trent and Truan hypothesize that braille readers may therefore be slowed in making the transition to reading to learn and in gaining automaticity with content-related vocabulary, as are any typically sighted emergent readers whose print reading speeds similarly fail to improve (Adams, 1990; Trent & Truan). Further, there is evidence that the length of time spent reading braille has implications for levels of competency. Trent and Truan (1997) reported that the fastest readers among their 30 high school braille-reading participants (from a residential school for the blind) were those who had experienced the most braille-reading practice (i.e., had always read braille exclusively). Those participants who had begun using braille later in their schooling (due to changes in their vision that made print reading no longer functional) read more slowly than those who learned braille early and exclusively. The nature of braille tactile script.
Researchers and practitioners have long noted the impact of the nature of the braille code on braille-reading skill development, including ORF (Dodd & Conn, 2000; Greaney & Reason, 2000). The current braille code was adopted in the early 1900s without being rigorously researched (Mangold, 2000), and this lack of an empirical base of information is reflected in the many questions and challenges facing the field of visual impairment today. Some have suggested that learning braille may be challenging for emergent readers (Greaney & Reason, 2000; McCall, 1997). According to Greaney and Reason and others, "Braille is more complicated than print reading" (p. 35). Young braille readers must learn this extensive and complex symbolic code, relative to the print code, while simultaneously learning the letter sounds and developing their tactile perception for symbol discrimination (Barraga & Erin, 1992; Millar, 1997). According to Greaney and Reason, the medium of braille "poses different challenges and makes different demands upon the child's perceptual and cognitive abilities" (p. 54), and braille readers face an "extended period of time during which they are still learning their code" (Wormsley, 1997, p. 8). Some of the challenges facing braille readers stem from the fact that many letter shapes within this involved code are poorly differentiated, such as the letters d, f, h, and j (McCall, 1997). These letters are essentially spatially rotated versions of the same form (McCall). The tactual perceptual discrimination of braille influences the order and nature of letter introduction and, therefore, the making of letter-sound connections (Pring, 1994). Additionally, the limited number of possible dot configurations within a six-dot braille cell results in multiple meanings for the same dot configuration, which may be confusing for young readers (Caton, Bradley, & Pester, 1982).
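The limited number of possible dot configurations noted above follows directly from the structure of the braille cell: six dot positions, each raised or flat, yield 2^6 = 64 patterns, one of which (the empty cell) serves as the space, leaving 63 distinct tactile characters. The following short sketch, offered purely as an illustration of this combinatorial constraint (the dot numbering assumes the standard convention of dots 1-3 in the left column and 4-6 in the right), enumerates the possibilities:

```python
from itertools import product

# Each braille cell has six dot positions (numbered 1-6); each dot is raised or flat.
# Enumerate every possible cell as a tuple of its raised-dot numbers.
cells = [
    tuple(dot for dot, raised in zip(range(1, 7), bits) if raised)
    for bits in product([False, True], repeat=6)
]

print(len(cells))       # 64 patterns in total, including the empty cell
non_empty = [c for c in cells if c]
print(len(non_empty))   # 63 distinct tactile characters

# With only 63 characters available, literary braille must assign multiple
# meanings to some configurations, depending on context and code rules.
```

Because 63 characters must cover letters, numerals, punctuation, and the many contractions of literary braille, the reuse of configurations described by Caton and colleagues is a structural inevitability rather than an incidental design choice.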
Lorimer (1990) points out that, contingent on the context and braille code rules, the braille sign for "was" also means "by", the sign signalling the end of a quote, and the sign for degrees. Additionally, some words such as "in" or "was" may appear in their uncontracted or contracted forms depending on the context of the sentence (Caton, 1979; Rex, 1971). Caton and colleagues referred to these multifunctional dot shapes as "confusers." Moreover, "an 'easy' word in print may be a difficult one in braille or, more likely, a word in braille that can be confused with another braille word or similar configuration" (p. 309). Tactual modality and ORF. In addition to these specific structural aspects of the braille code that may affect ORF, the reliance upon tactual perception has fundamental implications and perhaps limitations for reading rate and accuracy in braille. For example, tactual perception is "less accurate" than vision (Dodd & Conn, 2000; Millar, 1997). Lorimer (as cited in Dodd & Conn, p. 11) stated that "because braille is perceived in moving, as opposed to static fixations of attention, the images fed to the brain tend to be incomplete and lacking in clarity, as evidenced by the frequent occurrence of missed and added dot errors." Henderson (1967) reported a particular error pattern or "clusters of characters" (p. 15) that are typically problematic for braille readers from a tactual standpoint due to the similarities among them. Common errors involve reversals, transpositions, and missed dots. As Pring (1984) indicates, orthographic differences may account for many types of errors in braille because the differentiation among braille symbols involves fine (tactual) discrimination. Tactual limitations compound the complexities of the code, such as the previously mentioned "confusers." Further, a "minor change in the position of dots in braille can crucially alter the meaning of the text" (p. 1866), with implications for comprehension.
Contrastingly, reading print is a more "robust" process in that "a large part of the [print] display can be missing (e.g., the bottom half), but the identification of letters remains fairly accurate" (p. 1866). Unsurprisingly, the most common oral reading errors are orthographic errors linked to contractions, particularly those infrequently used contractions (Ashcroft, 1960). In fact, there are movements within the field of visual impairment to simplify the contractions and standardize the braille code, particularly in terms of developing a Unified Braille Code across English-speaking countries (Mangold, 2000). It is hoped that the results of the longitudinal Alphabetic Braille and Contracted (ABC) braille study (2000-2007) will shed light on the merits of learning uncontracted and contracted braille. Additional research is required. Reading rates in braille are also thought to be affected by the much narrower scope of the "perceptual window" or "perceptual unit" (Challman, 1978; Nolan & Kederis, 1969) afforded by tactual reading. The visual system allows for the discrimination of many small individual and "spatially extended patterns" of symbols (e.g., words; Foulke, 1979a) within an impressive field of view (Greaney & Reason, 2000). In contrast, while the tactual perceptual system also offers rich access to symbolic, spatially presented information, it is unable to offer the same degree of discrimination and field of view (Foulke; Simon & Huertas, 1998). The tactual field of view is limited to the amount of braille that is felt through "tactile fixations" (Simon & Huertas, 1998, p. 322). A number of researchers have attempted to determine precisely the way in which braille is perceived tactually (Carreiras & Alvarez, 1999; Foulke, 1982; Millar, 1997; Nolan & Kederis, 1969; Simon & Huertas, 1998), particularly with respect to delimiting the unit of braille recognition, as this is thought to have implications for reading speed.
It is as yet unclear from the available evidence whether braille readers engage in serial perception of individual braille symbols (Nolan & Kederis) or whether larger patterns such as an entire word are perceived (Challman, 1978; Foulke). Millar suggests, however, that there may not be a "single perceptual form which determines the perceptual unit of braille-reading" (p. 87). More research is needed, particularly for primary school, emergent braille readers. Some researchers advance a sequential, serial symbol-by-symbol perceptual processing of braille. In this approach, the braille cell is the "sensorial unit" (Daneman, 1988; Simon & Huertas, 1998) or the "perceptual unit" (Simon & Huertas). Pring (1984, 1994) argued that braille involves "tactual input [that] tends to be successive while with print, visual encoding of several letters may take place almost simultaneously" (Pring, 1994, p. 68). Should that be the case, braille readers are required to remember the initial parts of words until they have decoded the latter letters (Steinman et al., 2006). Research by Nolan and Kederis (1969) provided evidence for serial perception of braille symbols prior to integration into whole words, as the total time it takes to read a word ("coverage time") exceeded the total recognition time for each of the individual symbols making up the word ("synthesis time"). Daneman's (1988) study found that 31 adult braille readers did not "chunk" redundant words or superficially scan frequently occurring words (see also Healy, 1976). With respect to the study by Nolan and Kederis (1969), a tachistotactometer device was used to find the minimum exposure time needed for 36 elementary and high school participants to identify each of the 55 braille dot configurations that represent a letter, small groupings of letters, or selected punctuation marks.
Relative legibility values for the 55 individual configurations were based on the mean of all participants' identification times for each character. Nolan and Kederis reported that the total time needed to recognize a word in braille was greater than the sum of the recognition times for the single characters making up the word. Letter and word retrieval times arguably confound these studies. Contrastingly, print readers' letter and word identification times are generally equivalent (Foulke & Wirth, 1973). Research by the Uniform Type Committee (as cited in Foulke, 1979a), Kederis (as cited in Caton, 1979), and Nolan and Kederis (1969) suggested that, at the time of these studies, the number of dots within a braille character or a word in braille affected its legibility and, therefore, the rate at which it was recognized by readers of all ages. Overall, those braille symbols with the most dots were the most time-consuming to recognize. Related research conducted by Nolan (as cited in Nolan & Ashcroft, 1969) involved a synopsis of a six-year perceptual braille-reading project (by the American Association of Instructors of the Blind). Results indicated that braille word identification was based in a "sequential integrative [process] in which word recognition is the result of the accumulation of pieces of information over a temporal interval" (Nolan & Ashcroft, p. 61). From this perspective, the perceptual unit was the braille character, rather than the entire word. However, Nolan (1966) reported that readers used a variety of "perceptual clues" to recognize words before all of the individual letters were touched and/or identified. It has been hypothesized that braille readers operate from a type of "probabilistic model" (Nolan & Ashcroft, p. 61) in order to predict the word from the first letter or set of letters (Kusajima, 1974). The effectiveness of this model may be enhanced by context clues (Kusajima; Martin & Alonso, 1967).
There is limited, early research suggesting that higher frequency braille characters are easier to identify (Pring, 1984). However, other researchers advance the whole word or “chunking” perception model of braille-reading. Pring (1984) reports evidence for braille readers’ ability to “recognize words on the basis of their tactual pattern alone without needing to phonologically recode each letter cluster as it is perceived” (p. 1865). Based on her study of word recognition in braille with nine congenitally blind children (mean age of 12.2, all of whom began braille at 5 years of age), Pring suggests that the braille readers’ “‘touch-say’ method/strategy of reading” (Pring, p. 1865) is comparable to sighted children’s “look-say method” (p. 1865). Pring deemed that “the internal tactual code used to represent words for the blind appears to be available for word recognition and other information-processing purposes” (p. 1865). Pring argued that the braille-reading participants “relied more on whole-word information (i.e., a touch-say strategy) than on letter-sound information” (p. 1865). It is also unclear, based on the extant research, whether or how contractions facilitate the perception of extended patterns like full words. Nolan and Kederis (1969) reported that braille-reading participants could identify 10-20 percent of the presented words more quickly than the individual letters making up the words. Results of Krueger’s (1982) study, in which braille readers could find a target letter embedded in a word more readily than in a nonword, are often cited as evidence of an “extended spatiotemporal pattern to emerge from a sequential exposure” (p. 498). Reading Rates in Braille The analysis of the differential between the reading rates for the two media of print and braille is very complex because much of the research regarding reading rates using braille is outdated, “sparse and inconsistent” (Trent & Truan, 1997, p.
494), contradictory, and poorly reported (Hong & Erin, 2004; Lowenfeld, Abel, & Hatlen, 1969; Trent & Truan). Oftentimes, for example, authors present reading rates without reporting the age and other key specifics about the participants. Additionally, authors commonly fail to disaggregate students who read both print and braille (Lusk & Corn, 2006) and those with additional disabilities, or to differentiate clearly between silent and oral reading rates or the level of difficulty of the text read. Moreover, much of the available research covers only specific grades and ages (Lowenfeld et al.; Truan, 1978; Trent & Truan; Wormsley, 1996). Additionally, most studies on braille ORF neglect to address whether students have received adequate braille literacy instruction from a qualified TVI. Further, methodological challenges are common among braille-reading studies, as many studies neglected to state whether the materials were presented in uncontracted (i.e., more cells per word), partially contracted, or (fully) contracted braille. There is also evidence that braille (and print) reading rates are a function of the particular reading task (Knowlton & Wetzel, 1996). For the 23 adult braille-reading participants in research by Knowlton and Wetzel, reading rates for oral reading and for two more demanding silent reading tasks were found to be significantly different. The oral reading task without a comprehension check resulted in the fastest rates (135.9 words per minute on average), and these rates ranged widely, from 64.94 to 185.29 words per minute (Knowlton & Wetzel). Knowlton and Wetzel discovered that braille and print reading participants tailored their reading behavior to meet the specific task demands. Further, Knowlton and Wetzel showed that braille and print rates were impacted similarly by task. For the aforementioned reasons and others, additional research regarding braille-reading rates is certainly required.
While there is debate regarding braille readers’ relative speed compared to typically sighted print readers, there is considerable evidence in the literature that braille readers, generally speaking, read more slowly than do their age-peer print readers (Foulke, 1979b; Hampshire, 1981; Nolan & Kederis, 1969; Simon & Huertas, 1998; Trent & Truan, 1997). The limited available research suggests that braille readers may read approximately one-third to one-half as fast as their typically sighted age peers reading in print (Nolan, as cited in Nolan & Ashcroft, 1969). It is commonly agreed that braille readers’ reading rates may often be insufficient for success in a general education, academic program and for positive employment outcomes (Lusk & Corn, 2006). Reports of children’s braille-reading rates vary. Nolan’s (as cited in Nolan & Ashcroft, 1969) research indicated a braille-reading rate of 59 words per minute for participants in grades 4 to 6. Lowenfeld and colleagues (1969) reported fourth grade braille-reading rates of 84 words per minute at local, public schools and 72 words per minute for residential students in grade four. According to Nolan, the average braille high school reading rate was 83 words per minute. In the studies by Nolan and Kederis (1969), eighth grade public school braille readers read at 149 words per minute while their same-grade counterparts in residential schools read 116 words per minute. Reading rates for adolescents reported by Trent and Truan (1997) (as measured with the Gilmore Oral Reading Test transcribed in contracted braille for an entire residential school population of 30) added confirming evidence that “even the fastest braille readers were slower than their sighted peers” (p. 497). According to their findings, age of onset of blindness was the most critical factor in determining reading speed.
Those participants who were congenitally blind were, overall, the fastest braille readers, and their superior rates were attributed primarily to more years of braille instruction (Trent & Truan). The fastest braille-reading participant who was congenitally blind read 103 words per minute, twice the rate of the fastest participant who had become blind after completing primary school (Trent & Truan). Surprisingly, Trent and Truan reported that “there did not appear to be a direct relationship between speed and comprehension, degree of vision, methods of instruction reported by the students, reading for pleasure, attitude toward braille, attendance at public or residential school, or age” (p. 497). However, additional, critical information is needed to effectively appraise the results of the study by Trent and Truan (1997). For example, half of the participants had additional known disabilities including learning disabilities, physical disabilities, and hearing impairments, yet the impact of these additional disabilities was not fully addressed. Moreover, the authors did not fully delineate their statistical analyses. Further, the authors failed to report ages or grades to accompany the words per minute results for each of the 30 participants. Overall, however, these authors ultimately concluded that their adolescent participants, all of whom had received “a significant amount of braille instruction” (Trent & Truan, p. 497), demonstrated reading rates insufficient for successful integration in a regular classroom without a heavy reliance on audiotaped materials. Approaches to improving braille-reading rates. Numerous researchers and practitioners have proposed strategies to help improve braille-reading rates (McBride, 1974; Olson, Harlow, & Williams, 1975; Wallace, 1973).
Approaches to this concern have generally focused on altering the student’s reading behaviors (Kusajima, 1974; Mangold, 1978; Wormsley, 1979), modifying the way braille characters are presented (e.g., columnar display), and/or changing the braille code itself (Foulke, 1979a, 1979b). Braille-reading necessitates the use of fingers and hands in motion over the dots; hence, the first of Foulke’s (1979a) three proposals may be a promising approach (Kusajima). Hand and finger movement patterns have received attention in the literature (Foulke, 1979a; Rex et al., 1994; Wormsley, 1979, 1997). Although more up-to-date empirical research is needed, there is some evidence that specific hand and finger movements appear to be associated with faster reading speeds (Schiff & Foulke, 1982; Wormsley, 1997). In summarizing aspects of proper reading mechanics, Wormsley (1997) describes that the fastest, most efficient braille readers adopt a “scissor-like” pattern whereby the “left hand reads to the middle of the line, then the right hand takes over and reads to the end of the line while the left hand returns to the next line and begins to read independently of the right” (p. 63). Wormsley (1997) explained that, in this hand pattern, “the hands meet in the middle of the line and then separate” (p. 63). In addition to using their hands independently in this manner, competent braille readers generally “maintain contact with the braille with all four fingers of both hands” (Wormsley, p. 64). Alternatively, it has been suggested that fast braille readers tend to read with both index fingers, the left index finger taking the lead for the first half of the braille line before it is moved to the beginning of the next line while the reading of the current line is transferred to the right index finger (Schiff & Foulke, 1982).
There is support in the literature that braille-reading speeds can be improved by teaching poor readers to emulate the hand and finger movements of strong braille readers (Kurzhals & Caton, 1973; Lowenfeld et al., 1969); however, further scientific research is needed (Schiff & Foulke, 1982). The limited available research has shown that guided, specific, and frequent practice in learning unfamiliar or difficult symbols can help effect modest improvements in reading speeds (Henderson, 1967; Umstead, 1970). Those braille readers (grades three to six) in Henderson’s study who received character recognition training improved their accuracy by 85 percent and reduced their mean character reading time by 30 percent, while demonstrating a mean increase of 12 CWPM compared to control participants who did not receive the training. McBride (1974) reported reading rate increases for the adults in his study from 138 to 710 words per minute on “informal measures” as a result of his 2-week “rapid reading training” program. His results were met with skepticism by many teachers and researchers (Olson et al., 1975), as were his use of “informal” measures of reading rate and comprehension in braille. McBride’s informal comprehension check involved asking participants to orally recall the passage, perhaps similar to the current retell methodology. Participants were said to have reached a “book report” level of comprehension when they were able to remember at least 80 percent of the content. Procedures in McBride’s (1974) rapid reading training program involved self-efficacy enhancement regarding potential to improve, and braille-specific techniques and mechanics to improve reading speed (e.g., quicker return sweeps with hands, use of more fingers and both hands).
Additional techniques were emphasized in McBride’s study, and these included minimizing subvocalizing, reading purely for speed or quick line coverage with less emphasis upon comprehension, daily goal setting, and sharing tips and strategies with fellow research participants. Wallace (1973) sought to replicate the McBride (1974) study by employing an adapted version of McBride’s methodology over a shorter time frame (approximately 16 hours of “rapid reading training”) with 22 adult American and Canadian braille and large print readers. The age ranges for the two braille groups were 19 to 62 and 10 to 65, but the results needed to scrutinize the performance of the 10-year-old, or anyone near that age, were unavailable. While unable to match McBride’s results, Wallace reported average gains from 79 to 120 words per minute and from 93 to 121 words per minute for the two braille-reading adult cohorts. The Diagnostic Reading Scales (Spache, 1981) served as Wallace’s formal measure. Rate gains were significant, but there was pronounced intra-group variability (e.g., a 30-338 words per minute range in one of the braille groups), and the gains also did not appear to be linked to total time spent in the practice training conditions (Olson et al., 1975). Overall, pre- and post-intervention comprehension rates did not change significantly for either braille cohort; however, there was high intra-group variability, with some braille participants experiencing impressive losses (e.g., a 33 percent drop) and gains (e.g., a 50 percent improvement). Retention of reading rate gains was not evaluated. The self-efficacy component of Wallace’s (1973) training program speaks to the importance of motivation in improving braille-reading rates, yet this factor has received minimal attention in the literature (Chailman, 1978; Henderson, 1967; Nolan & Kederis, 1969; Olson et al., 1975).
The study by Kederis, Nolan, and Morris (1967) reported that reading speeds were increased by an average of 25%, and by up to 100%, through the manipulation of motivational variables. It has been hypothesized that braille readers’ motivation to read (or write) may be low (Pring, 1994; Spungin, 1989) because of the lack of modeling (both as a function of the lack of access to visible role models due to the visual impairment and due to the lack of resources and knowledge of parents; Craig, 1996), the lack of suitable materials in braille, and the lack of reinforcement because teachers and others are unable to read and spontaneously, naturally reinforce the child’s efforts. ORF and Comprehension for Braille Readers Comprehension is commonly accepted as the purpose of reading, whether print and/or braille is the desired medium (Ashcroft, 1960; NRP, 2000). For the purpose of this study, comprehension is defined as a complex cognitive process of making meaning from text and has been deemed the reason for and “essence of reading” (Durkin, 1993, p. 11). The previously mentioned theoretical models of ORF (e.g., LaBerge & Samuels, 1974) submit that greater word recognition speed allows a reader to concentrate on the higher level, integrative, comprehension processing of text (Fuchs et al., 2001). Ultimately, according to both theoretical perspectives, it is theorized that the “fluency with which an individual translates text into spoken words should function as an indicator not only of word recognition skill but also of an individual’s comprehension of that text” (Fuchs et al., p. 242). For example, ORF was more highly correlated with comprehension outcomes than were several direct measures of reading comprehension (e.g., “question answering” and a “cloze” measure) (Fuchs et al.). Moreover, research supports the assessment of ORF of words in context rather than isolated words (Fuchs et al.; Jenkins & Jewell, 1993).
Additionally, ORF is more highly correlated to reading comprehension performance than silent reading fluency (Fuchs et al.). The available evidence regarding the relationship between reading rates in braille and comprehension, and braille readers’ comprehension levels relative to their print-reading counterparts, is unclear and often contradictory. Trent and Truan (1997) reported no direct relationship between reading speed and comprehension, with a number of the fastest reading participants scoring low in reading comprehension and, conversely, a number of the slowest readers scoring highest in comprehension. There is some evidence that braille readers’ reading comprehension levels may be behind those of their sighted peers (Dodd & Conn, 2000; Lorimer, 1992; Nolan & Kederis, 1969). For example, the study by Dodd and Conn reported comprehension levels for their 7- to 12-year-old braille-reading participants that were nine months behind those of their sighted peers, as per the results on the Neale Analysis of Reading Ability (Greaney, Hill, & Tobin, 1998). However, there is disconfirming evidence of differential comprehension rates as well (Williams, 1971). Additional research specific to primary braille readers must be conducted to characterize the relationship between reading rates in braille and comprehension more fully. This is of particular importance because of the potential for “word-calling” (Hamilton & Shinn, 2003) as a result of braille readers’ reduced incidental learning. Students with visual impairments may evidence a large working vocabulary, for example, but lack sufficient background knowledge or “broader life experiences” (Koenig & Farrenkopf, 1997) fundamental to the process of comprehension (Loftin, 2006). Hence, their knowledge may, at times, be superficial or incomplete, despite their spurious perception of understanding.
For example, more abstract or remote experiences (e.g., an African safari) may be largely inaccessible to these students, as they are unable to take advantage of indirect experiences through television and other highly visual media (Koenig & Farrenkopf, 1997). Koenig and Farrenkopf analyzed 254 children’s stories in an effort to identify those experiences children, including those with visual impairments, would need to understand the text. These researchers advocated for direct, explicit instruction with respect to the 22 vitally important “global areas of experience” their analysis revealed. These areas included “doing or making things” and “experiences with friends” (Koenig & Farrenkopf, p. 17). Understanding is facilitated and made possible by relating the content of reading materials directly to students’ life experiences (Koenig & Farrenkopf). Assessment of ORF (and Comprehension) for Braille Readers Braille readers comprise a unique group of students with a complex reading profile and literacy medium. Braille readers may be at enhanced risk for dysfluency and need appropriate assessment and interventions, particularly in the early primary grades (Coppins & Barlow-Brown, 2006). The use of standardized, published norm-referenced assessments to determine areas of need among braille readers is problematic for a variety of reasons (Erin & Koenig, 1997; Hall, Scholl, & Swallow, 1986). It is thought that every instrument that is presently available evidences some weaknesses in terms of its ability to properly evaluate students with visual impairments given the unique implications of a visual impairment on learning (Loftin, 2007). A master list of formal, commercially available published norm-referenced tests of aptitude and achievement (PNTs) (Shinn, 2002) determined to be appropriate for the assessment of students with visual impairments does not currently exist (Loftin, 1997).
Work by Bradley-Johnson (1986), Erin and Koenig (1997), and Loftin was aimed at analyzing the merits of a variety of psychoeducational instruments to assist educators in choosing among and interpreting various tests with more confidence. The tests they evaluated, however, are now largely outdated (e.g., Woodcock-Johnson Revised). Ultimately, choosing among the tests should be based on what Loftin (1997) describes as a “relative comparison of strengths and weaknesses of such instruments” (p. 150). Loftin cites examples of test strengths that include “allowing opportunities to demonstrate tasks,” “allowing physical assistance during demonstration,” and “using a large sample for standardization” (p. 150). Contrastingly, tests or subtests that concentrate on one particular skill area, such as motor development or visual perceptual skills, or that emphasize rote memory or recitation and/or speed, are considered poorly designed to accurately assess examinees with visual impairments (Loftin). Loftin asserts that “[i]nterpretation would be simpler if all the weaknesses resulted in either an under- or an overestimation of abilities. Unfortunately, this is not the case” (p. 150). Ultimately, Silberman and Sowell (1998) recommended against overtesting and suggested administering only relevant subtests from (a few) batteries of tests depending on the particular area of concern (see also Loftin). According to the literature, the more severe a student’s visual impairment, and the subsequent reliance on tactile and/or auditory methods of learning, the more difficult it is to derive meaningful results from even the most carefully adapted standardized assessments (Duckworth, 1993; Erin & Koenig, 1997; Rex et al., 1994).
Many steps and procedures are involved in facilitating the proper use of standardized assessments, and all standardized tests should ultimately be administered, and their results interpreted, with care given the complex ways visual impairment influences development and test performance (Erin & Koenig, 1997; Loftin, 1997; Silberman & Sowell, 1998). Standardized test modifications for a braille reader may include transcribing the test content into braille and providing the test individually for the student (Duckworth, 1993; Erin & Koenig). Modifying the test in any of these ways, however, often fails to guarantee that the process is “nondiscriminatory” (Erin & Koenig). Alternatively, by eliminating visually based questions, the test’s validity is compromised. Furthermore, in an effort to make an assessment more accessible for a student with a visual impairment, assessors can drastically alter the nature of the task requirements and create more unknowns or problems for the student. For example, reading multiple-choice questions to students involves assessing, in part, the student’s memory skills as opposed to reading and decoding skills (Erin & Koenig). Loftin argues that assessors should refrain from rejecting specific formal tests primarily because the standardization sample failed to incorporate students with visual impairments. Instead, Loftin insists that “[t]he critical value . . . in test selection is its ability to measure what it purports to measure” (p. 149). Ultimately, the treatment validity of assessment data derived from PNTs is questionable, even for typically sighted examinees (Shinn, 2002). PNTs offer “global measures of general abilities (e.g., reading)” (Shinn, p. 672) with broad-band content based on a limited range of items, which are detached from the curriculum. PNTs use response formats largely unknown in the classroom and may reveal little regarding the examinees’ skills or the strategies they employ (Howell & Nolet, 1999; Shinn).
Curriculum-based measurement (CBM) as an alternative to published tests. CBM applied to literacy assessments and interventions for braille readers in the critical primary grades appears to be a promising alternative to commercial, standardized tests. CBM offers a variety of advantages with respect to the assessment, intervention, and progress monitoring of braille readers. Increasingly, experts are calling for evaluations of dysfluency and other academic problems, through CBM, for example, that acknowledge the complex ecological considerations (e.g., the instructional environment and student materials) in addition to the individual student variables (Shapiro, 2004). Assessment processes informed by an ecological perspective are particularly important for students with visual impairments, as their visual capacity to engage with the environment and text may be severely compromised (Jan & Groenveld, 1993). Moreover, as a “narrow band” assessment (Shinn, 2002), CBM is authentic in deriving the curriculum from the classroom or grade level (“maximizing the testing/teaching overlap”; Shinn, p. 673). Further, CBM is based on “production-type, authentic response modes” (Shinn, p. 673), such as reading out loud from a passage, that have great potential to reveal “how and why students answered in specific ways” (Shinn, p. 673). CBM methodology accommodates the lack of oral fluency and other reading proficiency norms for braille readers, for example. Further, CBM offers an alternative to the use of commercially available, standardized, norm-referenced assessments, whose administration and interpretation for students with visual impairments are fraught with the problems outlined above (Erin & Koenig, 1997; Loftin, 1997; Silberman & Sowell, 1998).
CBM can be viewed from a normative point of view to identify students who may be in need of special help (Fuchs, Fuchs, & Compton, 2004); however, it is typically used as a criterion-referenced means of progress monitoring. For example, with respect to ORF, CBM involves the repeated use of reading passages selected on the basis of a student’s individual reading level (e.g., grade level) to assess a student’s oral reading speed and accuracy across time. Student proficiency data can help in the decision-making and in the assessment of student progress toward long-range goals (Hintze & Pelle-Petitte, 2001). CBM monitoring is one strongly supported means of fulfilling the intent of the IEP in that CBM formative data can serve to inform the development of individualized goals, measurable objectives, suitable adaptations and/or modifications as necessary, the procedures adopted to meet these goals, and the means of tracking student achievement with respect to the goals (Fuchs, 1989; Shinn, 2002). Further, CBM has been shown to be highly sensitive to intra- and inter-individual differences and changes in reading proficiency, for example, as a result of a wide variety of instructional practices and interventions (Fuchs, Fuchs, Hamlett, & Ferguson, 1992). Extensive research supports the utility of CBM in both general and special education contexts (Deno, 1989; Marston, 1989; Shinn & Good, 1993), and the CBM ORF metric as a measure of student performance can be generalized across the special and general education service delivery models and vice versa (Hintze & Pelle-Petitte, 2001). Researchers in the area of CBM have generated a tremendous amount of research supporting the merit of ORF as indicative of general reading competency, and CBM data have been shown to effectively inform intervention strategies to improve student outcomes (Fuchs et al., 2001; Shapiro, 2004).
Leaders in the field (Deno, 1985; Fuchs & Fuchs, 1999; Shinn, 1989) have studied the psychometric properties of obtaining ORF data by counting the number of words a student reads aloud correctly within one minute (Shinn, 2002). According to Fuchs and colleagues (2001), “Research shows how this simple method for collecting ORF data produces a broad dispersion of scores across individuals of the same age, with rank orderings that correspond well to important external criteria, and that represent an individual’s global level of reading competence” (p. 251). Examples of empirically supported CBM such as the Dynamic Indicators of Basic Early Literacy Skills (DIBELS) measures include numerous alternative test materials, which makes this form of academic assessment suitable for the repetitive measurement over time inherent in single-subject methodology. This capacity is particularly important given the predictive power of “early performance trajectories” (i.e., fluency in one minute timed tests) in predicting future academic performance (Fuchs et al., 2001). Dynamic indicators of basic early literacy skills (DIBELS) and braille readers. The DIBELS measure meets the generally agreed-upon criteria as an example of a Reading-General Outcome Measure (R-GOM) or Reading Curriculum-Based Measure (R-CBM) (Deno, 1985; Fuchs & Deno, 1992, 1994; Fuchs & Fuchs, 1993; Silberglitt & Hintze, 2007) and is frequently used as the curriculum-based methodology in RTI studies. The DIBELS CBM measure is free, valid and reliable, and quick and easy to administer and score (Madelaine & Wheldall, 1999; Wagner, McComas, Bollman, & Holton, 2006). R-CBMs are also sensitive to changes that may result from the implementation of an intervention, for example (Eckert, Ardoin, Daisey, & Scarola, 2000). DIBELS is well suited to this investigation given its psychometrically robust nature and the adaptability and availability of its standardized, grade-levelled reading probes.
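The ORF metric described here reduces to a simple computation: words read aloud correctly, scaled to one minute. As an illustrative sketch only (the helper function below is hypothetical and not drawn from the cited studies or from the DIBELS scoring protocols), the calculation can be expressed as:

```python
def words_correct_per_minute(words_attempted: int, errors: int, seconds: float = 60.0) -> float:
    """Correct words per minute (CWPM): words read correctly, scaled to one minute.

    In a standard one-minute probe the scaling term is a no-op (CWPM is simply
    words_attempted - errors); it matters only if timing is stopped early,
    for example when a student finishes the passage before the minute ends.
    """
    if seconds <= 0:
        raise ValueError("seconds must be positive")
    correct = words_attempted - errors
    return correct * 60.0 / seconds

# A student who attempts 54 words with 6 errors in a full one-minute probe:
print(words_correct_per_minute(54, 6))  # 48.0

# The same accuracy over a 30-second partial timing is prorated to a minute:
print(words_correct_per_minute(30, 0, seconds=30.0))  # 60.0
```

Actual DIBELS administration specifies its own rules for what counts as an error (e.g., hesitations and substitutions), so this sketch captures only the arithmetic of the metric, not the scoring conventions.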
For example, the DIBELS CBM approves many accommodations relating to changes in materials, such as altering print size and text color and producing materials in braille, that do not alter the validity of the measure. Moreover, the aforementioned DIBELS progress monitoring fluency assessment probes are constructed to be administered repeatedly over time, providing an opportunity to measure changes in participants’ fluency rates. In addition, the DIBELS includes well-delineated and streamlined protocols, and the inter-observer agreement requirements are well known and highly acceptable to the researcher and to many school districts’ staff. Adapting the DIBELS fluency probes is a relatively straightforward process compared to that involved in adapting assessment tools that rely on pictures, for example (Tobin, 1994). However, DIBELS’ ORF results in braille should be interpreted cautiously because the “scores may not be directly comparable” (Good & Kaminski, 2002, p. 46) because, for example, braille readers were not included in the norm group. RTI, CBM, and ORF The study is designed to address ORF concerns for braille readers using CBM within an RTI framework because of the difficulties inherent in the use of published norm-referenced tests (PNTs) and given the advantages CBM and RTI methodologies afford. Researchers in the area of CBM have generated a tremendous amount of evidence supporting the merit of ORF as indicative of general reading competency and helped inform intervention strategies to improve student outcomes (Fuchs et al., 2001; Shapiro, 2004). Framework for RTI RTI is “essentially and instrumentally an assessment process and instructional process that is dynamic, recursive, and based on rigorous scientific research” (Kame’enui, 2007, p. 7).
RTI involves two primary goals, namely “to identify risk early so that students participate in prevention prior to the onset of severe deficits, which can be difficult to remediate, and to identify students with LD who prove unresponsive to validated, standardized forms of instruction and instead require an individualized form of instruction” (Fuchs & Fuchs, 2007, p. 15). The underlying premise of RTI “is that students are identified as LD when their response to validated intervention is dramatically inferior to that of peers” (Fuchs & Fuchs, p. 15). The rationale behind this thinking is that nonresponse to a proven intervention may indicate the need for specialized treatment to effect acceptable learning outcomes and/or serve as a proactive measure. Fuchs and Fuchs argue, “In this way, a central assumption is that RTI can differentiate between two explanations for low achievement: inadequate instruction versus disability. If the child responds poorly to instruction that benefits most students, then the assessment eliminates instructional quality as a viable explanation for poor academic growth and instead provides evidence of disability” (p. 14). American education policy “preceded and drives research and development” in RTI (Bradley, Danielson, & Doolittle, 2007, p. 11). According to IDEA (2004), in the United States, “a local educational agency (LEA) may use a process that determines if the child responds to scientific, research-based interventions as a part of the evaluation procedures” (§ 614(b)(6)(A-B), IDEA, 2004). This RTI process, however, is as yet undefined, and the U.S. Department of Education is endorsing no single RTI model with respect to the identification of SLD (Bradley et al., p. 9). However, a “multi-tiered prevention model” (Bradley et al., p. 9; Fuchs & Fuchs, 2007) with a minimum of three tiers is a front-runner among frameworks for RTI (Bradley et al., p. 9).
The nature or quality of the student’s response to the intervention guides and informs the movement from one level or tier up to another (Bradley et al., p. 9). Each level in the multi-tiered model is differentiated in terms of intensity or duration, time, and frequency of the intervention, “size of student groupings, and skill level of the service provider” (Bradley et al., p. 9).

First tier. The first level or tier in the RTI framework, or primary intervention, is general education, a “universal, core program” (Fuchs & Fuchs, 2007, p. 14). This first level typically refers to “high-quality, research-based instruction in the general education setting, universal screening to identify at-risk students, and progress monitoring to detect those students who might not be responding to this primary intervention as expected” (Bradley et al., 2007, p. 9). Failure to respond to general education instruction suggests the need for secondary prevention or the beginning of the RTI LD identification process (Fuchs & Fuchs).

Second tier. The second tier involves secondary interventions “involving one or more rounds of research-based small-group tutoring” (Fuchs & Fuchs, 2007, p. 14) that are commonly undertaken for 8 to a maximum of 12 weeks (Bradley et al., 2007, p. 9). Progress monitoring for secondary or tertiary interventions is ongoing and takes place weekly or biweekly (Bradley et al., p. 10). Secondary interventions are more intense than primary interventions, but less intense than those characterizing the tertiary level.

Third tier. Those students who respond poorly or demonstrate “unexpected failure” in response to secondary prevention are candidates for the next, tertiary, level of intervention (Fuchs & Fuchs, 2007, p. 14). This final level involves “individualized and intensive interventions and services, which might or might not be similar to traditional special education services” (Bradley et al., 2007, p. 9).
It is common at this level to recommend a “full and individual evaluation under IDEA” either before (Fuchs & Fuchs) or after the initiation of another, more intense, individualized intervention (Bradley et al., p. 9).

RTI and repeated reading. RTI is appealing in the assessment and intervention of braille-reading students for generally the same reasons it is well received in other areas of special education: RTI is early, preventative, responsive, and research-based. RTI explicitly links assessment, intervention, and instruction (Bradley et al., 2007; Vaughn & Fuchs, 2003) and stands in stark contrast to the “wait to fail” aptitude/achievement discrepancy approach. According to current, leading authorities on RTI, scientific, research-based interventions are at the heart of RTI (Kame’enui, 2007, p. 7). However, there are currently no empirically validated ORF assessments for braille readers in the primary grades. Fortunately, there is an opportunity to adapt and profit from the wide variety of empirically validated approaches to improving ORF that exist for sighted students. Repeated reading, or readings (Samuels, 1979), is arguably pre-eminent among these approaches (NRP, 2000; Meyer & Felton, 1999; Rashotte & Torgeson, 1985; Therrien, 2004). In fact, “intervention research on fluency development for students with LD had been dominated by research on repeated reading” (Chard et al., 2002, p. 403). Repeated reading enjoys an “extensive research base” (Therrien, p. 253) and proven efficacy both for students without disabilities and for those with learning disabilities. The NRP (2000) recommended guided repeated oral reading as an essential component of reading programming after conducting a meta-analysis of fluency enhancement research (Chard et al., p. 387). NRP findings included a moderate (Cohen, 1988) mean weighted effect size for guided oral reading of 0.41 on reading achievement for participants across the extensive studies reviewed (NRP, p. 3-17).
In his meta-analysis of fluency and comprehension gains for sighted students reading print in experiments using repeated reading protocols, Therrien (2004) reports that participants in experimental conditions involving repeated reading “obtained a moderate mean increase in fluency,” with effect sizes of 0.75 and 0.77 for nondisabled students and students with LD, respectively (p. 257). Therrien also reported mean increases in comprehension with effect sizes of 0.48 and 0.49 for nondisabled students and students with LD, respectively (p. 257). Drawing on the extensive literature from 1975 to 2000 inclusive, Chard et al. (2002) conclude that “in general, the findings from this synthesis suggested that repeated reading interventions for students with LD are associated with improvements in reading rate, accuracy, and comprehension” (p. 402), in keeping with the theoretical models of LaBerge and Samuels (1974) and Perfetti (1977, 1985). A number of the most empirically validated aspects of the repeated reading intervention (Chard et al.; Therrien) are discussed in turn.

Multiple readings of the text. Repetition, drill, or practice under criterion stimulus conditions has been shown to target fluency and generalization (Daly & Martens, 1994). O’Shea and colleagues (1985) investigated the impact on ORF of rereading text one, three, and seven times. With respect to ORF, these researchers identified main effects with significant differences between each of these levels of intervention. Ultimately, seven readings yielded better performance than three readings, which was, in turn, significantly more effective than reading once. Their results for story retelling indicated no significant difference between rereading text three or seven times, but significantly higher scores for these rereading conditions than for the single reading condition (Chard et al., 2002).
Research by Sindelar, Monda, and O’Shea (1990) indicated significantly better ORF gains from three rereadings than from a single reading. Therrien (2004) argues that three to four rereadings of the selected text result in mean fluency effect size increases that are 30 percent greater than those gained from rereading the passage twice (p. 257). Differential comprehension gains are insignificant beyond the second rereading (Therrien, p. 257; see also O’Shea et al., 1985; Stoddard, Valcante, Sindelar, O’Shea, & Algozzine, 1993).

The research suggests that repeated reading interventions should involve participants reading passages aloud to an adult (Therrien, 2004). Therrien explains that “adult implementation is recommended because the fluency and comprehension effect sizes for students in transfer interventions conducted by adults were more than three times larger (mean fluency ES = 1.37, mean comprehension ES = 0.71) than those obtained by peers” (p. 257).

Cued response. The use of cues is a strongly recommended practice within the repeated reading protocol. Potential cues include prompting students to attend specifically to speed, accuracy, or comprehension, or to a combination such as speed and comprehension. Therrien (2004) writes, however, that “a definitive answer as to the type of cue to provide ... could not be determined because differences in fluency and comprehension gains based on the type of cue received were negligible” (p. 257). Participants in a number of studies appeared to be able to adapt their reading rate in keeping with the assigned task (DiStefano, Noel, & Valencia, 1981; Therrien). Cues typically involve introducing some kind of performance criterion.
Interventions that included a performance criterion (e.g., reading to beat a previous ORF score) were typically more effective, resulting in a mean fluency effect increase (1.70) that was over four times greater than that associated with interventions involving a fixed number of rereadings (0.38) (Therrien, 2004).

Corrective feedback. It is hoped that the implementation of a repeated reading intervention will help participants achieve gains in fluency, which involve improvements in both speed and accuracy. The research points to the importance of corrective feedback in effecting this improvement, particularly with respect to accuracy (Chard et al., 2002; Therrien, 2004). Therrien argues, “Corrective feedback on word errors seems to be essential because all students involved in adult-run interventions were given corrective feedback and obtained a large mean fluency effect size (1.37)” (p. 257).

Contingencies. Many research studies use contingencies to increase or decrease behaviors (e.g., a reduction in math homework for increasing the number of completed addition questions to criterion levels). In the literature regarding repeated reading, however, the use of contingencies is less consistent; typically, the intervention itself and/or moderate social reinforcement (e.g., “Good job!”) when the participant achieved the desired rate were seen as sufficient to effect improvements in ORF (Layton & Koenig, 1998).

High content overlap (HCO) passages. Research has shown that participants frequently meet or exceed ORF goals in instructional passages after multiple rereadings, but that this change is poorly generalized (Daly et al., 2002). The repeated reading literature suggests that evidence of treatment effects is strengthened by signs of generalized improvement in untrained, High Content Overlap (HCO) passages (Glazer, 2007; Gortmaker, 2006).
HCO passages share many of the same words as their corresponding passages, yet arrange those words into a different story. Changes in CWPM for HCO passages are regarded as relatively stronger evidence of progress than within-session improvements in CWPM (Ardoin, McCall, & Klubnik, 2007).

Repeated Reading for Braille Readers

The repeated reading intervention was conceived as a preventative and/or remedial intervention based on one of the two primary models of prevention within RTI, namely “standardized protocols” (Fuchs & Fuchs, 2007, p. 16). Standardized protocols involve clear, regularized instructions and treatment integrity checks and are more easily conducted by those without training in school psychology (Fuchs & Fuchs). The standardized protocols for the repeated reading intervention model feature the aspects of repeated reading found to be most effective with typically sighted participants (Therrien, 2004), namely multiple opportunities to read the same text independently, the provision of specific cues, and systematic corrective feedback by an adult. The use of contingencies and of materials with a high overlap of words is often part of repeated reading research designs.

Repeated reading is represented in the limited reading intervention research pertaining to students with visual impairments, and there is some tentative, preliminary evidence of the effectiveness of this practice for this population. One study employed a repeated reading strategy to effect improvements in ORF across all participants with low vision who read print (Layton & Koenig, 1998), while another, similar study included one braille reader among its participants and reported the effectiveness of the intervention for all participants (Pattillo et al., 2004).
In keeping with the LaBerge and Samuels (1974) theoretical model, Layton and Koenig (1998) studied the effects of repeated readings on reading fluency with four students who had low vision and whose primary medium was print, using a changing criterion design (Alberto & Troutman, 2003). All students demonstrated an improvement in ORF as a function of the changing criterion. Within each session, participants were asked to reread a short passage until they met a predetermined criterion reading rate based on “professional estimate” (Layton, 1994), coupled with Alberto and Troutman’s guideline of placing the first intervention criterion at a rate 50% greater than the mean rate demonstrated during baseline. Within each subphase, individual reading selections were reread until the participant reached the criterion rate or indicated fatigue and a desire to conclude the session. The criterion was discussed with the student prior to each session, and assistance was offered after each passage was read in its entirety. At the conclusion of the session, the participant was asked to orally retell the passage to the investigator. A social validity analysis involving the classroom teacher was ongoing on a weekly basis.

Pattillo and colleagues (2004) used a modified repeated reading strategy coupled with computer-assisted (OCR) software with five participants aged 11 to 14, one of whom used braille. A brief social validity analysis was administered to participants. All participants demonstrated improvements in ORF and more favorable attitudes towards reading.

The empirical literature pertaining to literacy for students with visual impairments is relatively limited and often outdated. This is particularly true with respect to ORF for braille readers in grades one, two, and three. A variety of related topics converge in this study and, as discussed above, inform the topic and research methodology of the current study.
CHAPTER 3

Methodology

The literature regarding Oral Reading Fluency (ORF) is extensive, and much attention has also been paid to using curriculum-based measurement (CBM) to monitor ORF within a Response to Intervention (RTI) framework. The available research in these areas suggested that the repeated reading intervention would likely be effective in improving ORF rates for braille readers. This chapter presents the procedures and materials designed to answer the research questions.

Participants and Setting

Primary-age braille-reading students constitute a very low incidence group. For example, there are currently only about six primary-age braille readers in all of British Columbia (BC). Hence, in order to find enough participants for this study, students were recruited from schools throughout Canada and the United States.

Selection Criteria

Repeated reading is conceptualized primarily as a skill-based intervention. Eligible candidates for the study were those braille-reading students who would, it was hypothesized, benefit most from participation because they were experiencing skill-based, ORF-related challenges. Eligibility was based on the following criteria:

1. enrolment in grades one, two, or three;
2. a documented visual impairment (as per the BC Ministry of Education guidelines);
3. exclusive use of the braille-reading medium;
4. no documented evidence of additional disabilities (e.g., formal screening for a learning disability);
5. no documented evidence of English as a Second Language;
6. the ability to read connected text;
7. a current difficulty with ORF that is unrelated to motivation, according to the TVI or parents, which was later confirmed by the investigator;
8. frequent and regular direct service by a TVI (i.e., a minimum of approximately three 30-minute lessons per week);
9.
frequent and regular TVI supervision of (braille) literacy instruction provided by additional staff who work with the student (e.g., a classroom teacher or paraprofessional);
10. willingness on the part of the TVIs (and other staff) to be audiotaped during the study-related sessions with the participant; and
11. willingness on the part of the TVI, participant (and parent), and additional supervised staff to participate in an approximately five- to eight-week study involving three to five 30-minute weekly experimental sessions.

Eligibility was not based on participants’ gender, ethnicity, or socioeconomic status.

Participant information. Participants were recruited through TVI email listservs in Canada and the United States (see Appendix A). TVIs interested in participating contacted the researcher via email. TVIs put forward the names of students they had previously identified as demonstrating poor reading achievement. Of the students put forward by their TVIs, seven ultimately did not participate because they did not meet the eligibility criteria, had moved out of district, or had a parent who expressed concern regarding the time required to participate in the study.

Ultimately, eight students, along with their TVIs and the other staff working with them, took part in the study. Five of the eight participants were in grade one, one was in grade two, and two were in grade three. Four participants were female and four were male. Every participant attended a public school and received support from a TVI on an itinerant basis. All eight participants continued with the study until school ended for the summer in their respective school districts. Descriptive information about the participants is summarized in Table 1 (all names are pseudonyms). The eight participants were assigned to research cohorts of four in the order in which they completed the participant screening and selection phase.
The first cohort included Kelly, Kevin, Tabitha, and Carrie, with baselines of three, four, five, and seven respectively. The second cohort included John, Linda, Mark, and Tom, with baselines of three, five, six, and eight respectively. Data were collected over a period of five to nine weeks depending on when participants started the study and when their school year ended. Three participants underwent a follow-up phase.

Table 1

Participant Information for the First and Second Cohorts

First Cohort

Kelly: female, age 7, grade 2. Visual impairment: bilateral microphthalmia with corneal opacities; partial deletion of X chromosome. Years of formal braille instruction: 3. Grade level of reading materials: 1 (uncontracted). Braille instruction per week by a TVI: 60 minutes x 3 lessons. Additional braille instruction per week by a teacher, braillist, or paraprofessional: none.

Kevin: male, age 6, grade 1. Visual impairment: optic nerve hypoplasia. Years of formal braille instruction: 1. Grade level of reading materials: 1 (uncontracted). Braille instruction per week by a TVI: 60 minutes x 1 lesson. Additional braille instruction: 20-30 minutes x 2 lessons.

Tabitha: female, age 7, grade 1. Visual impairment: optic nerve hypoplasia. Years of formal braille instruction: 1.5. Grade level of reading materials: 1 (uncontracted). Braille instruction per week by a TVI: 45 minutes x 7 lessons. Additional braille instruction: daily literacy support from a braillist/paraprofessional for language arts tasks.

Carrie: female, age 9, grade 3. Visual impairment: ganglioglioma (tumor). Years of formal braille instruction: 4. Grade level of reading materials: 1 (uncontracted). Braille instruction per week by a TVI: 60 minutes x 5 lessons. Additional braille instruction: none.

Second Cohort

John: male, age 7, grade 1. Visual impairment: retinal degeneration. Years of formal braille instruction: 1.5. Grade level of reading materials: 1 (uncontracted). Braille instruction per week by a TVI: 30 minutes x 15 lessons. Additional braille instruction: none.

Linda: female, age 7, grade 1. Visual impairment: optic nerve hypoplasia. Years of formal braille instruction: 2. Grade level of reading materials: 3 (contracted). Braille instruction per week by a TVI: 45 minutes x 4 lessons. Additional braille instruction: none.

Mark: male, age 10, grade 3. Visual impairment: optic atrophy (linked to craniopharyngioma). Years of formal braille instruction: 5. Grade level of reading materials: 1 (contracted). Braille instruction per week by a TVI: 90 minutes x 2 lessons. Additional braille instruction: none.

Tom: male, age 6, grade 1. Visual impairment: optic nerve hypoplasia. Years of formal braille instruction: 2. Grade level of reading materials: 2 (contracted). Braille instruction per week by a TVI: 30-45 minutes x 5 lessons. Additional braille instruction: 60 minutes x 4 lessons from a resource teacher.

Setting. The study was conducted remotely by the investigator. Participant screening and all other assessment and intervention sessions occurred in the participants’ respective schools.
All aspects of the study involving direct contact with participants were conducted primarily by their TVIs and, at times, by their classroom teachers and/or paraprofessionals. The study took place during the spring and summer school terms in 2008.

Materials

This study involved reading materials from two different sources for the experimental analysis of the effectiveness of the intervention. One source was the Dynamic Indicators of Basic Early Literacy Skills (DIBELS; http://dibels.uoregon.edu/) program. The other materials were adapted versions of passages used in peer-reviewed reading intervention studies with typically sighted students (Ardoin, Eckert, & Cole, 2008; Eckert, Ardoin, Daly, & Martens, 2002; Gortmaker, 2006). Each of the three phases of the study (baseline, intervention, and follow-up) involved separate reading passages for screening, assessment, intervention, and/or progress monitoring. Each passage was used only once with an individual participant (Wagner et al., 2006, p. 42), and each passage was a complete narrative (Therrien et al., 2006). All of the materials used in this study were informed by CBM methodologies (Deno, 1985; Good & Kaminski, 2002).

The DIBELS and research study probes were chosen over generic passages from trade books or classroom-specific material because generic passages can vary widely (e.g., within a single book) in terms of their readability (Shapiro, 2004). Shapiro recommends the use of “generic passages already carefully controlled for difficulty level” (p. 124), such as those from DIBELS. CBM research shows that controlled CBM measures such as DIBELS can be sensitive to student progress across curricula (Fuchs & Deno, 1994; Tilly, 1999). The estimated average readability (and range) for each type of reading passage within every phase was determined using the Spache (1974) readability formula (Daly et al., 2006).
The Spache analysis was completed using the Intervention Central website (http://www.interventioncentral.com). According to Shapiro (2004), this formula is useful in estimating the readability of text for grades one, two, and three; both vocabulary and sentence length are evaluated. DIBELS passages were presented in numerical order throughout the study across participants. Instructional passages were administered in the same order for all participants in each grade. However, in response to concerns raised by researchers (Rex et al., 1994), all passages that emphasized content based on visual experience were adapted or eliminated. This procedure was thought to help avoid penalizing braille readers for their lack of experiential background (Salvia & Ysseldyke, 1988) and to enhance the validity of the assessments, particularly the retell comprehension component. A summary description of the type and purpose of each kind of reading material and the assessment schedule for their use are described further in the following section and summarized in Table 2.

Table 2

Materials and Assessment Schedule

Phase: Participant Screening and Selection
Stimulus material: Benchmark screening and assessment passages
Description and purpose: Participants read the DIBELS in order to determine eligibility for inclusion in the study. These passages were used as a screening measure to assess the participant’s risk for current and future reading (fluency) problems, helping to identify students “at risk,” “at some risk,” or at “low risk” for fluency problems within an informal reading inventory process. Speed and accuracy scores on these probes were used to help determine the instructional level for all materials to be used by a participant for the duration of the study. DIBELS winter benchmark probes (3) were first used by the investigator to train the TVIs how to conduct ORF assessments. Participants were assessed with benchmark stories from earlier grades if their ORF and/or error scores fell in the frustrational range. No instruction was provided for these passages.
Assessor: TVI (or teacher and/or paraprofessional)
Time involved: 20 minutes per session

Phase: Baseline
Stimulus materials: Instructional passages; DIBELS progress monitoring passages (Low Content Overlap)
Description and purpose: The instructional passages were taken and/or adapted from other reading intervention studies and were used to identify typical, pre-intervention ORF and oral retell fluency rates. The DIBELS progress monitoring passages were estimated to be at the student’s instructional level based on DIBELS ORF screening assessment data and clinical judgment; they were used to monitor the student’s progress with respect to ORF and oral retell fluency in unrelated text over the course of the entire study, and they help to identify students “at risk,” “at some risk,” or at “low risk” for fluency problems. No instruction was provided for any of these passages.
Assessor: TVI (or teacher and/or paraprofessional)
Time involved: 20 minutes per session

Phase: Intervention
Stimulus materials: Instructional passages; High Content Overlap passages (generalization passages); DIBELS progress monitoring passages (Low Content Overlap)
Description and purpose: The instructional passages were taken and/or adapted from other reading intervention studies. Participants read these passages three times in an attempt to improve their ORF, error rates, and oral retell fluency, and the passages were used to identify ORF rates and oral retell fluency during the intervention; instruction was provided for these passages. The High Content Overlap passages were instructional passages re-written to retain a high percentage of the same words; they were used to assess the efficacy of the intervention, or the generalization of learning from repeated reading of the corresponding instructional passage, with respect to ORF, error rates, and oral retell fluency. The DIBELS progress monitoring passages were used to monitor the student’s generalized reading progress with respect to ORF and oral retell fluency in unrelated text (every second assessment session). No instruction was provided for the High Content Overlap or DIBELS passages.
Assessor: TVI (or teacher and/or paraprofessional)
Time involved: 30-40 minutes per session

Phase: Follow-up
Stimulus materials: Instructional passages; DIBELS progress monitoring passages (Low Content Overlap)
Description and purpose: The instructional passages were taken and/or adapted from other reading intervention studies and were used to identify post-intervention ORF and oral retell fluency rates. The DIBELS progress monitoring passages were used to monitor the student’s generalized reading progress with respect to ORF and oral retell fluency in unrelated text over the course of the entire study (every second day of assessment). No instruction was provided for these passages.
Assessor: TVI (or teacher and/or paraprofessional)
Time involved: 20 minutes per follow-up session (up to 3 sessions)

DIBELS passages. This study involved the use of two different types of DIBELS passages, benchmark and progress monitoring passages, both at the grades one to three levels (Therrien, Wickstrom, et al., 2006). DIBELS benchmark passages for the winter term were used to screen potential participants. DIBELS progress monitoring passages were used for a brief assessment of performance factors such as motivation during screening and to monitor the progress made by participants across all phases of the study. No instruction was provided for any of the DIBELS passages at any point in the study.

The first type of DIBELS passage, the benchmark probes, served as a screening tool to gauge ORF proficiency and to help determine eligibility for inclusion in the study. Specifically, these measures helped identify participants who were at risk for current and future fluency problems. DIBELS benchmark assessments were designed for administration at three times of the year, namely fall, winter, and spring. Each benchmark assessment consists of three stories.
To assess participants’ ORF using a DIBELS benchmark assessment, each participant individually read aloud from three grade-level benchmark passages for winter, each for one minute. The median (middle) score was recorded as the ORF score. DIBELS offers two benchmark screening assessments (i.e., winter and spring) for grade one students and three benchmark screening assessments for grade two and three students. This study involved the winter benchmark probes for all of the participants.

The second type of DIBELS reading passage used in this study, the DIBELS progress monitoring probes, served two purposes, namely a brief, one-time assessment of performance factors such as motivation and the longitudinal monitoring of students’ ORF. DIBELS includes 20 progress monitoring stories calibrated to each specific grade. The participant screening and selection phase involved the use of one of these progress monitoring stories to help determine whether the participant’s ORF challenges were likely attributable to a skill deficit (e.g., lack of speed) or to performance (e.g., lack of motivation). The DIBELS progress monitoring passages were also used to monitor the student’s generalized reading progress with respect to ORF and oral retell fluency in unrelated text over the course of the entire study (every second day of assessment). Using grade-level progress monitoring probes can help teachers determine to what extent instruction in instructional-level materials translates into progress in the more challenging grade-level probes (Hosp, Hosp, & Howell, 2007).

Students’ ORF performances for all DIBELS stories were compared to criteria for expected levels of fluency for their particular grade and the winter term ORF criteria (see Tables 3, 4, and 5). DIBELS provides research-based criteria for placing students into one of three categories of risk for reading problems (low risk, some risk, or at risk).
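The benchmark scoring rule described above (three one-minute passage readings, with the median recorded) can be sketched as follows; the function name is illustrative and not part of DIBELS:

```python
# Sketch of the DIBELS benchmark scoring rule: the student reads three
# grade-level passages for one minute each, and the median (middle) of
# the three ORF scores is recorded as the benchmark score.

def benchmark_orf(scores):
    """Return the median of exactly three one-minute ORF scores."""
    if len(scores) != 3:
        raise ValueError("A DIBELS benchmark uses exactly three passages.")
    return sorted(scores)[1]  # middle value of the three

# Example: passage scores of 14, 9, and 17 correct words per minute
print(benchmark_orf([14, 9, 17]))  # 14
```

Using the median rather than the mean makes the benchmark score robust to a single unusually easy or hard passage.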
Criteria are based on an American normative sample of typically sighted children. Criteria increase over time to reflect expected reading growth, and benchmark goals are available for each grade level. Students who receive ORF scores within the “low risk” range for experiencing later reading problems are considered to be “on target.” Additional intervention is recommended for students whose ORF scores fall within the “some risk” range for current and future reading problems. Students who obtain ORF scores within the “at risk” range are believed to be in need of substantial intervention.

Table 3

First Grade DIBELS ORF Benchmarks (Good & Kaminski, 2002)

Winter (middle of the year):  ORF < 8, at risk;   8 ≤ ORF < 20, some risk;   ORF ≥ 20, low risk
Spring (end of the year):     ORF < 20, at risk;  20 ≤ ORF < 40, some risk;  ORF ≥ 40, low risk

Table 4

Second Grade DIBELS ORF Benchmarks (Good & Kaminski, 2002)

Fall (beginning of the year): ORF < 26, at risk;  26 ≤ ORF < 44, some risk;  ORF ≥ 44, low risk
Winter (middle of the year):  ORF < 52, at risk;  52 ≤ ORF < 68, some risk;  ORF ≥ 68, low risk
Spring (end of the year):     ORF < 70, at risk;  70 ≤ ORF < 90, some risk;  ORF ≥ 90, low risk

Table 5

Third Grade DIBELS ORF Benchmarks (Good & Kaminski, 2002)

Fall (beginning of the year): ORF < 53, at risk;  53 ≤ ORF < 77, some risk;  ORF ≥ 77, low risk
Winter (middle of the year):  ORF < 67, at risk;  67 ≤ ORF < 92, some risk;  ORF ≥ 92, low risk
Spring (end of the year):     ORF < 80, at risk;  80 ≤ ORF < 110, some risk; ORF ≥ 110, low risk

DIBELS passages are available online at no cost. Brailling the DIBELS passages is among the approved accommodations. As of the spring of 2008, DIBELS benchmark passages for grades one, two, and three are available for purchase in contracted or uncontracted braille. The average DIBELS passage length is approximately 200 words. There is a low level of word overlap across all DIBELS passages (i.e., a low percentage of similar words among passages).
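The winter-term risk categories in Tables 3 through 5 amount to a simple cutoff lookup, sketched below. The cutoff values come from Good and Kaminski (2002) as cited above; how a score falling exactly on a cutoff is classified is an assumption here, since the printed tables express the bands with inequalities that leave the boundaries ambiguous.

```python
# Hypothetical lookup of the winter-term DIBELS ORF risk categories
# (grades one to three). Cutoffs from Good & Kaminski (2002); treating
# a score at the upper cutoff as "low risk" is an assumption.

WINTER_CUTOFFS = {  # grade: (at-risk below this, low-risk at or above this)
    1: (8, 20),
    2: (52, 68),
    3: (67, 92),
}

def winter_risk_category(grade, orf):
    """Classify a winter ORF score (in CWPM) for grades one to three."""
    at_risk_below, low_risk_from = WINTER_CUTOFFS[grade]
    if orf < at_risk_below:
        return "at risk"
    if orf >= low_risk_from:
        return "low risk"
    return "some risk"

print(winter_risk_category(1, 6))   # at risk
print(winter_risk_category(2, 60))  # some risk
print(winter_risk_category(3, 95))  # low risk
```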
Hence, all DIBELS passages are considered Low Content Overlap (LCO) passages. The DIBELS stories were transcribed into two braille versions, contracted and uncontracted. TVIs selected either contracted or uncontracted braille-reading passages for their students before beginning the participant screening and selection process, and over the course of the study, participants continued to use only the form of braille that their TVI had initially selected. The level of difficulty of these passages was approximated using the previously mentioned online Spache (1974) readability formula. Estimates of the mean readability scores for the DIBELS progress monitoring passages for grades one, two, and three were 2.64 (range of 2.30 to 2.83), 3.08 (range of 2.66 to 3.43), and 3.50 (range of 3.06 to 3.77), respectively.

Instructional passages. In addition to the DIBELS passages, the study involved stories taken and/or adapted from other research studies. These stories were used as instructional passages to identify ORF rates and oral retell fluency during the baseline, intervention, and follow-up phases. The level of difficulty of these passages was estimated based on online Spache (1974) readability ratings (Glazer, 2007). The mean length of the instructional passages was 109 words, with a range of 98 to 123 words. The mean readability scores for instructional passages at a grade one, two, and three instructional level were 1.68 (range of 1.35 to 1.94), 2.43 (range of 2.00 to 2.91), and 3.32 (range of 3.03 to 3.62), respectively.

The High Content Overlap (HCO) passages used in this study were created from the instructional passages used for the intervention. Most of the HCO passages were taken and/or adapted from the same research studies from which the instructional passages were derived.
To create the HCO passages, the instructional passages were rewritten to retain many of the same words (i.e., a minimum of 80%; Daly, Persampieri, et al., 2005), yet tell a different story (i.e., the order of the words had been changed) (Daly, Martens, Kilmer, & Massie, 1996; Gortmaker, 2006; McCurdy et al., 2007). Inevitably, given the high word overlap, there was also considerable content overlap with respect to theme and plot (Rashotte & Torgesen, 1985). These HCO passages served as "generalization" passages used to assess the efficacy of the intervention, or the generalization of newly learned words from repeated reading of the parallel instructional passage (Daly, Chafouleas, et al., 2005). These HCO passages (McCurdy et al., 2007) were used to measure the "generalization of instructional effects" (McCurdy et al., p. 10) of the repeated reading intervention. Both word and content overlap have been shown to facilitate transfer of reading speed when passages are beyond an independent level of difficulty (Faulkner & Levy, 1994). Ultimately, each instructional passage used in the intervention phase had a single corresponding HCO passage. The level of difficulty of these HCO passages was again calculated through an online Spache readability analysis. On average, the HCO stories for grades one and two were very slightly more challenging than the instructional passages. The mean readability scores for HCO passages at a grade one, two, and three instructional level were 1.76 (range of 1.54 to 1.98), 2.50 (range of 2.20 to 2.85), and 3.30 (range of 3.13 to 3.41), respectively. Word overlap is expressed as a percentage (Gortmaker, 2006; McCurdy et al., 2007), determined by the number of words occurring in both the instructional and HCO passages divided by the total number of words in the instructional passage (McCurdy et al.). Words occurring in only one passage were not considered as overlapping.
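As a minimal sketch, assuming simple whitespace tokenization and case-insensitive matching (details the cited studies do not specify; the function name is illustrative), the overlap calculation can be expressed as:

```python
def word_overlap_percentage(instructional: str, hco: str) -> float:
    """Percentage of instructional-passage words that also occur in the HCO passage.

    Words occurring only in one passage are not counted as overlapping.
    """
    inst_words = instructional.lower().split()
    hco_words = set(hco.lower().split())
    shared = sum(1 for word in inst_words if word in hco_words)
    return 100.0 * shared / len(inst_words)
```

For example, a rewritten passage reusing four of an instructional passage's five words would yield 80%, the minimum overlap used in this study.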
Mean word overlap between the instructional passages and the HCO passages was calculated and expressed as a percentage, along with the range of overlap between these two passage types (McCurdy et al.). The mean percentages of word overlap between instructional passages and HCO passages for grades one, two, and three were 84% (range of 80 to 92%), 82% (range of 80 to 87%), and 84% (range of 82 to 89%), respectively.

Additional materials. In addition to the brailled reading passages, TVIs were provided with an audiotape recorder, tapes, and batteries, as all sessions with the participants were recorded. Stopwatches were also provided given the need to collect standardized data on words read correctly in a two-minute time frame. Incentive items for the performance assessment were also provided. In addition, the investigator supplied braille graph paper and adhesive dots that served as tactile data points.

Measurement

Dependent Variables

This study involves three dependent variables relating to reading proficiency, namely (a) correct words per minute (CWPM), (b) error rate, and (c) comprehension. Social validity for the TVIs was also assessed, as were participants' self-perceptions as readers, through the use of brief questionnaires. The primary dependent variable was CWPM. CWPM, error rate, and comprehension were assessed during each assessment session across every phase. Social validity was formally assessed four times and informally throughout the study. The investigator reviewed treatment integrity for TVIs for every assessment session. Interobserver agreement across dependent variables was assessed for 34% of the assessment sessions, randomly chosen across the phases. Each dependent variable is operationally defined as follows.

Oral reading fluency (ORF). ORF, primarily operationalized as CWPM, is a specific and observable target behavior (Fuchs et al., 2001).
As the primary dependent variable, CWPM was selected for its clear relationship to the described conceptual theories and its social significance for emergent readers, particularly those who read braille (Layton & Koenig, 1998). CWPM is the number of words a participant read correctly in one minute, regardless of the length of the individual words (Wagner et al., 2006, p. 42). CWPM was calculated using curriculum-based measurement procedures. Correctly read words were those pronounced correctly and stated within six seconds, double the typical wait time (Begeny, Daly, & Valleley, 2006). Words that a participant self-corrected within six seconds were also scored as accurate. Insertions were not counted as errors. Several studies with students who have visual impairments call for the use of digits per second or Carver's (1989) six-letter word equivalent (Layton & Koenig, 1998). However, the conventional CWPM metric makes it possible to compare braille readers' results with those of their sighted counterparts (e.g., against DIBELS "at risk" criteria). While the standard one-minute probe was adapted to two minutes for braille readers (Morgan & Bradley-Johnson, 1998), the participants' total words read correctly was ultimately halved to create a one-minute CWPM ORF score, again to allow for comparison to sighted norms. The focus of the ORF analysis was on participants' CWPM scores for the first, or "cold", read of the instructional passage and for HCO passages. It is thought that CWPM scores for the first readings more directly reflect participants' ORF skill development than the ORF scores for the second and third readings. Alternatively, CWPM scores for the HCO reading passages are thought to reflect the participants' ability to demonstrate generalization more accurately than the CWPM scores for the second or third readings (E. Daly, personal communication, September 18, 2007).
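The two-minute adaptation described above reduces to a simple computation. The following sketch (function name illustrative) mirrors the scoring steps:

```python
def cwpm_two_minute_probe(words_attempted: int, errors: int) -> float:
    """CWPM for a two-minute braille probe: subtract errors from the
    words attempted, then halve the result to yield a per-minute rate
    comparable to sighted one-minute norms."""
    words_correct = words_attempted - errors
    return words_correct / 2
```

A participant who attempted 90 words with 10 errors over the two minutes would thus score 40 CWPM.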
Published rates of improvement (Fuchs & Fuchs, 1982, 1993; Hasbrouck & Tindal, 2006) were used as therapeutic criteria to evaluate the practical value of any improvements in CWPM. Expectations regarding anticipated rates of reading growth for sighted students vary depending on the researcher, however, thus complicating this analysis. For the purpose of this study, Hasbrouck and Tindal's fluency norms were used. They have been used to evaluate growth rates within the repeated reading literature (Therrien & Kubina, 2007; Therrien, Wickstrom, et al., 2006) and were recently adopted by the leading braille-reading researchers, which will facilitate comparison of research data (C. Holbrook, personal communication, September 5, 2007). Hasbrouck and Tindal summarized the American normative data for 2000 through 2004 from numerous school districts in 23 states. The norms indicate percentiles from the 10th to the 90th percentile. Hasbrouck and Tindal's norms are shown in Table 6.

Table 6
Expected Rates of Improvement for CWPM (Hasbrouck & Tindal, 2006)

Grade   Percentile   Fall CWPM   Winter CWPM   Spring CWPM   Average Weekly Improvement
1       90           --          81            111           1.9
        75           --          47            82            2.2
        50           --          23            53            1.9
        25           --          12            28            1.0
        10           --          6             15            0.6
2       90           106         125           142           1.1
        75           79          100           117           1.2
        50           51          72            89            1.2
        25           25          42            61            1.1
        10           11          18            31            0.6
3       90           128         146           162           1.1
        75           99          120           137           1.2
        50           71          92            107           1.1
        25           44          62            78            1.1
        10           21          36            48            0.8

Note. There is no fall assessment for grade one.

Hasbrouck and Tindal (2006) calculated historical rates of improvement for typically sighted students at each percentile rank for each grade level. The rates were determined by subtracting the CWPM for the fall from the spring, and then dividing this difference by the usual number of weeks between these two assessments (i.e., 32 weeks).
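The rate calculation just described can be sketched as follows (function name illustrative), with the usual 32-week fall-to-spring interval as the default:

```python
def average_weekly_improvement(start_cwpm: float, end_cwpm: float,
                               weeks: int = 32) -> float:
    """Average weekly CWPM gain between two benchmark assessments."""
    return (end_cwpm - start_cwpm) / weeks
```

For the grade two 50th percentile row of Table 6, (89 - 51) / 32 gives approximately 1.2 CWPM per week, which matches the published value once rounded to one decimal place.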
In the case of grade one, ORF is not assessed in the fall; hence, the typical weekly expected rate of improvement was determined by subtracting the winter score from the spring score and dividing the difference by 16, the number of weeks that usually separate the winter and spring assessments (Hasbrouck & Tindal).

Error rates. ORF involves word accuracy (Rashotte & Torgesen, 1985), operationalized as error rate. DIBELS error criteria were applied across passages to allow for comparison of results across passages. The DIBELS error wait time was doubled to six seconds (Morgan & Bradley-Johnson, 1998), but all other error parameters were preserved. In keeping with DIBELS error criteria, the error rate is defined as the number of incorrect words per minute. Incorrect words are those that are mispronounced, omitted or substituted, transposed, or hesitated upon for over six seconds (Begeny et al., 2006), double the typical time allowed. Errors made in reference to proper names and places were assessed as one error, regardless of the incidence within the passage (Pattillo et al., 2004). If the participant skipped a line, only one error was counted and the participant was redirected to the start of the appropriate line. The number of correctly read words was calculated by subtracting the number of errors from the total words attempted by the participant.

Comprehension. Oral retell fluency was chosen as a measure of comprehension based on its support in the literature (Fuchs et al., 2001; Johnston, 1982) and its suitability for young, beginning readers at risk for literacy problems. For example, it was highly probable that, because participants find reading difficult, they would be unable to read brailled comprehension questions or Maze formats effectively, and this would compromise their ability to demonstrate their understanding.
Further, the method was chosen because it was probable that the participants may have been unable to read sufficient amounts of text to answer orally presented comprehension questions. Oral retell fluency involved the participants telling the TVI what they could remember about the story they had just read. TVIs were permitted to prompt participants once more to tell them everything they knew following a five-second pause in their retelling. The conventional time limit of one minute for oral retell fluency was used. The oral retell fluency process was tape-recorded each time across phases, and the tapes were sent to the investigator for scoring. Comprehension was assessed across all phases of the study and for all passages other than HCO passages. ORF was the key concern regarding the HCO passages, and it was hypothesized that the oral retell fluency measure for the HCO passages would be confounded by their high content overlap with the instructional passages. The oral retell fluency measure was operationalized as described by Fuchs and colleagues (1988), Layton and Koenig (1998), and Salvia and Hughes (1990). The content words told by the participant were tallied. Content words included proper and common nouns, adjectives, verbs, and adverbs (Layton & Koenig). Slight variations of the content words (e.g., "go" and "going") and reasonable synonyms (e.g., "little" and "small") were accepted as correct and included in the oral retell fluency score. Oral retell fluency is expressed as a percentage. The scoring process was relatively complex and, hence, was carried out by the investigator, who reviewed all of the audiotapes. The investigator compared the number of content words stated in the participant's retelling against the total number of content words in the passage (Layton & Koenig, 1998). Precise matches or synonyms were accepted.
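A minimal sketch of this content-word matching, assuming a prepared master list of passage content words and a hand-built map of accepted synonyms and inflectional variants (the names here are illustrative, not taken from the scoring protocol):

```python
def retell_content_word_count(retold_words, passage_content_words, variants=None):
    """Count the distinct passage content words matched in a retelling.

    `variants` maps accepted synonyms/inflections (e.g., 'going', 'small')
    to their passage form (e.g., 'go', 'little'). Matching is case-insensitive.
    """
    variants = variants or {}
    passage = {word.lower() for word in passage_content_words}
    matched = set()
    for word in retold_words:
        word = variants.get(word.lower(), word.lower())
        if word in passage:
            matched.add(word)
    return len(matched)
```

Dividing this count by the number of content words in the passage and multiplying by 100 yields the oral retell fluency percentage.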
The oral retell score was presented as a percentage, calculated by dividing the number of content words in the retelling by the number of content words in the passage and multiplying by 100 (Layton & Koenig).

Social validity evaluation. For the purpose of this study, social validity is defined as the social importance, acceptability, and value of treatment goals, procedures, and effects (Cooper et al., 2007; Gresham & Lopez, 1996; Wolf, 1978). There is solid empirical support for the relationship between social validity and treatment integrity and the adoption of an intervention extraexperimentally (Ehrhardt, Barnett, Lentz, Stollar, & Reifin, 1996; Gresham & Lopez). TVIs evaluated the social validity of the repeated reading intervention using the same questionnaire at two times, once near the beginning of the intervention phase and again during the follow-up phase. TVIs who were unable to proceed to the follow-up phase because summer holidays had begun completed the questionnaire on the day of their last assessment session. Comparisons were drawn between the results for the first and second iterations of the questionnaire. TVIs rated the 10 statements that comprised the questionnaire using a 5-point Likert scale (1 = disagree; 5 = agree). The statements were designed to determine the TVIs' perceptions regarding the appropriateness, acceptability, and value of the intervention (Cooper et al., p. 238) (see Appendix H). Likert scores for each item were added and then divided by the number of TVIs responding to yield an average rating score. TVI scores for the first questionnaire were compared to scores for the second questionnaire. TVIs were also invited to provide anecdotal feedback after each assessment session with a participant. This option was noted in the treatment integrity checklists.

Evaluation of participants' self-perception as readers.
Participants completed a simplified questionnaire designed to help investigate their self-perceptions of their reading skills (see Appendix I). They completed the same short questionnaire once during baseline and again at the end of the study. The questions were designed to have participants evaluate their own reading (e.g., their speed) and the importance and effectiveness they ascribed to practice for effecting improvements in their reading skills. The questionnaire was in a 5-point Likert scale format (1 = disagree; 5 = agree). Likert scores for each item were added and then divided by the number of participants responding to yield an average rating score of self-perception of reading ability. Given that many of these students were in the early stages of their literacy skill development, the questions were read to them by the TVI. The TVI then recorded participants' answers on the questionnaire. Several TVIs audiotaped the process of completing the questionnaires. The investigator later reviewed the tapes, and anecdotal responses were noted.

Treatment Integrity

For the purpose of this study, treatment integrity was defined as the "extent to which the independent variable [was] implemented or carried out as planned" (Cooper et al., 2007, p. 235). TVIs followed scripted procedural protocols or treatment integrity checklists (Cooper et al., p. 236; McCurdy et al., 2007; Therrien, Wickstrom et al., 2006) developed by the investigator for participant screening and selection and each phase of the study. TVIs checked off the steps outlined in the appropriate checklist during each session with the participant. The use of these checklists was intended to help minimize observer drift, enhance reliability, and maximize the study's internal validity. TVIs taped all of the assessment sessions in their entirety, and the checklists were returned to the investigator.
The investigator evaluated treatment integrity by using the same checklists completed by the TVIs. The investigator reviewed all of the audiotapes of the sessions and evaluated the TVIs' accuracy in carrying out the procedures in all of the assessments across all phases (e.g., retimed and rescored each reading assessment). Treatment integrity was expressed as a percentage of the total number of steps that the TVI completed correctly during an assessment session. In the event of deviations from protocols (e.g., an error in timing), the investigator adjusted the scores accordingly (e.g., prorated the ORF scores). Overall TVI treatment integrity for instructional passages, HCO passages, and DIBELS progress monitoring passages was 89.7% (range of 84.3 to 100%), 91.4% (range of 87.1 to 100%), and 87.2% (range of 82.2 to 100%), respectively. Two tapes were lost in the mail, however, and treatment integrity for these reading assessments could not be verified. Given the high treatment integrity and the lack of outliers among the scores that would have been recorded on these tapes, the results that the TVI emailed the investigator were counted verbatim in the results.

Interobserver Agreement

For the purpose of this study, interobserver agreement was defined as "the degree to which two or more independent observers report the same observed values after measuring the same events" (Cooper et al., 2007, p. 113). TVIs taped all of the assessment sessions in their entirety to allow for the evaluation of interobserver agreement across dependent variables. Interobserver agreement for CWPM and errors,
The external evaluator was a doctoral student and master resource room teacher with formal curriculum-based measurement (CBM) training and comprehensive experience. The external evaluator was trained to an objective, minimum standard of 98% competency across dependent variables over five consecutive ORF assessments. The training was conducted for four hours. Following the training, the external evaluator also had ongoing access to the investigator via email and phone. The investigator followed up with the external evaluator after the completion of the interobserver agreement assessments in order to provide performance feedback as necessary. The external evaluator was kept uninformed with respect to the intent of the study. Interobserver agreement was calculated between the investigator and the external evaluator, rather than the eight TVIs (and other staff), for consistency and enhanced validity of the measure across participants and phases. The external evaluator listened to 34% (Cooper et al., 2007) of the audiotaped reading assessments that were randomly chosen across participants and phases of the study. Interobserver agreement between the investigator and the external evaluator for the dependent variables was expressed as a percentage of agreements calculated by dividing the number of agreements and disagreements, multiplied by a 100. As in other similar studies that analyzed audiotaped sessions (Gortmaker, 2006), disagreements between the investigator and the external evaluator were due largely to poor tape quality. The results for interobserver agreement  92  for each dependent variable and for treatment integrity are summarized in Table 7 reported as follows. 
Table 7
Interobserver Agreement (and Range) across Dependent Variables

Dependent Variable      Interobserver Agreement (range)
CWPM and errors         97.8% (95.3 to 100%)
Comprehension           97.2% (94.7 to 100%)
Treatment Integrity     96.4% (93.2 to 100%)

CWPM and errors. Interobserver agreement for CWPM was calculated on a point-by-point evaluation between the CWPM score determined by the investigator and that determined by the external evaluator for each of the interobserver agreement sessions (Alber-Morgan, Ramp, Anderson, & Martin, 2007; Nelson et al., 2004). Interobserver agreement for CWPM between the investigator and the external evaluator is expressed as a percentage and was calculated by dividing the number of agreements by the number of agreements and disagreements, multiplied by 100 (Alber-Morgan et al.). Agreements were words counted as either an error or as a correctly read word by both the investigator and the external evaluator (Alber-Morgan et al.). Disagreements were those words that were scored differently by the investigator and the external evaluator (Alber-Morgan et al.). The mean percentage of agreement for CWPM between the investigator and the external evaluator was 97.8% (range, 95.3% to 100%).

Comprehension. This same point-by-point evaluation was used to determine interobserver agreement with respect to comprehension. The investigator developed a master list of the total potential content words for each individual passage. The investigator and the external evaluator listed the content words provided in each participant's oral retelling. The interobserver agreement between the investigator and the external evaluator for comprehension was assessed for each content word and expressed as a percentage of agreements, calculated by dividing the number of agreements by the number of agreements and disagreements, multiplied by 100. The mean percentage of agreement for comprehension was 97.2% (range of 94.7 to 100%).

Treatment integrity.
A variety of procedures and strategies were put in place to address treatment integrity in anticipation of the challenges involved in conducting a study remotely. Observer drift was addressed through regular email and occasional phone contact with the TVI (e.g., an email exchange between the TVI and the investigator after every assessment session), the use of a simplified repeated reading protocol provided in written format (i.e., the treatment integrity checklists), and the audiotaping of each reading session. The investigator reviewed every audiotape, evaluating for treatment integrity. The investigator trained the external evaluator to assess treatment integrity by first verbally describing the procedures, then providing the same treatment integrity checklists as those used by the investigator and the TVIs (see Appendix G for a copy of the treatment integrity checklists), and finally by modeling the steps involved in each type of checklist. The mean percentage of interobserver agreement for properly implemented procedures was 96.4% (range, 93.2% to 100%).

Independent Variable: Repeated Reading

The repeated reading intervention used in this study incorporated practices that were reported to be most effective according to the literature (Therrien, 2004; Therrien & Kubina, 2006). For example, the intervention involved systematic performance feedback and error correction. Additionally, participants were encouraged to "beat" their previous scores with each reading during intervention. Repeated reading offers the advantage of more opportunities for participants to respond than other research-validated approaches (Gortmaker, 2006). The intervention was implemented over an approximately 30-minute time frame, a minimum of three, and preferably five, times per week (Therrien & Kubina). Participants' response to this intervention is conceptualized as reflecting potential risk for current or future reading problems.
For example, those participants who score low on the initial participant screening and selection assessment, fail to demonstrate satisfactory growth over the baseline phase, and are unresponsive to the intervention may be at risk and may benefit from additional assessment and intervention.

Research Design: Nonconcurrent Multiple Baseline across Participants

Single subject research is a scientific method used to study behavioral change in individuals (Kazdin, 1982). This method allows researchers to conduct true experiments involving only one or a small number of participants. This method also controls for threats to internal validity such as history and maturation (Kazdin). This study employed a particular kind of single subject research design, namely the nonconcurrent multiple baseline design across participants, to evaluate the effects of the repeated reading intervention on CWPM (Richards et al., 1999). This single subject design involved the staggered introduction of the intervention in a pre-planned manner. The CWPM data were not collected simultaneously, as each participant was not simultaneously available for the baseline, intervention, and follow-up phases. Instead, the investigator randomly determined the baselines a priori for each research group of participants. This procedure ensured that baselines were unequal (e.g., 3, 4, 5, and 7).
For example, the unequal, randomly assigned baseline lengths supported the planned initiation of the intervention and allowed for the demonstration of experimental effect (Barlow et al., 2009). The design maximized flexibility and individualization of the implementation of each phase. In keeping with this single subject design, conclusions were made with respect to the effects of the repeated reading intervention on CWPM by comparing the results of the intervention across participants. Procedures This study involved the following procedures: (a) recruitment; (b) participant screening and selection; (c) collection of baseline data; (d) the implementation of the intervention, and (e) a follow-up phase. However, in this study, follow-up was not always feasible due to the end of the school term. Follow-up phases, therefore, often did not occur or did not have three data points as is typically recommended (Barlow & Hersen, 1984). TVIs undertook the assessments and interventions consistent with the research phases. The investigator was both a teacher of students with visual impairments and a  96  school psychologist. The investigator’s school psychological training included formal CBM training and extensive experience with respect to formative and summative assessments, in addition to the development and implementation of reading interventions. At the end of every assessment session across phases, TVIs first emailed the results and then sent the tapes and treatment integrity checklists to the investigator. For simplicity, the use of the term TVI for the duration of this study is meant to refer to the TVI and any other staff also working with the participant (i.e., under the supervision of the TVI). Recruitment All participants were contacted through their TVIs as students with visual impairments are typically assigned a TVI by their districts or local education authority. 
Following approval from the UBC Behavioural Research Ethics Board, the investigator posted a recruitment email that briefly outlined the study on the BC Provincial Resource Centre for the Visually Impaired (PRCVI) listserv for TVIs. Given the low number of British Columbian responses, the recruitment email was then sent out to all Canadian provincial listservs and nation-wide American listservs for TVIs. Those TVIs wanting to participate in the study contacted the investigator via email. The investigator responded via email with additional information about the study and mailed them consent, assent, and release of information forms (see Appendices B, C, D, & E). Once all of the forms had been signed and returned, the investigator began the participant screening and selection process. The recruitment phase spanned a total of seven weeks.

Participant Screening and Selection

The participant screening and selection phase involved a variety of procedures designed to identify eligible participants. These screening procedures mirrored those recommended for identifying students in need of additional supports within an RTI model (Fuchs & Fuchs, 2007) and included (a) background data collection, (b) TVI training, (c) participant screening and selection assessments, and (d) a brief analysis for performance factors such as motivation. This phase of the study was completed within approximately one week.

Background data collection. Preliminary background data were collected by way of the Student Information Form (see Appendix F). For example, information was gathered regarding the participants' eye conditions and prognosis and the nature of their literacy instruction to date. TVIs indicated whether they would prefer participant assessment materials in either contracted or uncontracted braille, and the participant was required to continue using those materials for the duration of the study.
TVIs used portions of the Assessment of Braille Literacy Skills (ABLS) (Koenig, 1996b) to describe participants' reading competency. The ABLS is an informal, criterion-referenced assessment tool, in the form of a consumable record book, offering ongoing and meaningful braille literacy skill assessment. The ABLS is organized, according to Koenig's literacy framework, into the areas of emergent, academic, and functional literacy (Koenig, 1992, 1996b). This instrument is appropriate for preschool to high school braille readers and targets both braille code related skills and higher-level literacy skills (Koenig, 1996b). Using the ABLS, TVIs commented on their braille students' emergent skills relating to the unique features of braille (e.g., discriminating common objects and braille words, tracking lines of braille). TVIs also reported on participants' "Emergent Literacy Indicators." These indicators reflect early literacy behaviors and skills that are not unique to braille readers, such as selecting a preferred braille book, pretending to read, and brailling random letters to convey meaning (Koenig, 1996b). TVIs also used the ABLS to assess their braille readers' academic, formal school literacy associated with basic reading and writing skill development. Aspects of this development that are particular to braille, such as hand movements and the use of braille writers, are reflected in the instrument. Additionally, basic reading and writing skills such as using structural cues and writing are addressed. TVIs also commented on the variety of strategies their students employ to accomplish age-appropriate literacy tasks required in their daily lives at home, at school, and in the community.

TVI training. Given that TVIs were spread out geographically over Canada and the US, the investigator trained the TVIs to implement the procedures using an individual telephone conference call before the baseline and intervention phases were begun.
The investigator used verbal instructions, modeling, rehearsal, and performance feedback to improve levels of treatment integrity (Cooper et al., 2007; Gresham et al., 1993). TVIs were trained to an objective, minimum standard of 90 percent competency over at least two consecutive ORF assessments (Cooper et al.). Following the training, TVIs also had ongoing access to the investigator via email and phone. The investigator followed up with TVIs after the completion of the participant screening and selection assessment phase in order to provide performance feedback as necessary. TVIs then trained other staff, namely teachers and paraprofessionals, if those staff would be conducting some of the assessments. The investigator was able to check the treatment integrity for all sessions by way of the audiotaped assessment sessions. TVIs, like the external evaluator, were kept uninformed of the hypotheses of the study. Further, the investigator did not share evaluations of participants' reading competence (e.g., "at risk" status according to DIBELS criteria) with the TVIs until the end of the study. Measurement confounds such as observer bias were minimized by keeping the TVIs naïve about the expected outcomes of the study (Cooper et al., 2007).

Participant Screening and Selection Assessments

Participants were screened based on the aforementioned eligibility criteria. Various assessments, in addition to professional judgement on the part of the investigator, were used to assess eligibility for participation. The participants underwent brief assessments of (a) risk for current or future reading problems, (b) instructional levels, (c) levels of comprehension, and (d) performance factors such as motivation. All reading by participants across phases was done orally to ensure that they actually read the passages (Chard et al., 2002; Daly, Chafouleas, et al., 2005).

DIBELS assessment of risk for current or future reading problems.
Speed and accuracy scores for the DIBELS winter benchmark probes, in addition to clinical judgement, were used to assess risk for current or future fluency problems. Median scores were utilized in an effort to control for variability in passage difficulty (Shapiro, 2004). Each student’s TVI conducted the three DIBELS winter benchmark assessments. The three passages were read in one session on one day or over two (preferably consecutive) days. Each participant’s median ORF score was divided by two for a “per minute” rate and compared to the DIBELS norm group to help determine his or her degree of risk for current or future reading problems.

In addition to administering the benchmark assessments, TVIs followed the step-by-step directions delineated in the treatment integrity checklists (see Appendix G). TVIs followed these same directions in administering the DIBELS passages across the different phases. TVIs organized the study materials, turned on the audiotape recorder, and read out verbatim the following instructions to the participant: “I will be asking you to read and timing you with my stopwatch. You may ask questions after you are finished reading the story.” The TVI then placed the copy of the reading passage in front of the participant, and stated verbatim, “Please read this story out loud. If you get stuck, I will tell you the word so you can keep reading. When I say ‘stop,’ I may ask you to tell me about what you read, so do your best reading. Start here.” The TVI then helped the participant find the beginning of the passage, and said, “Begin.” The TVI started the stopwatch after the student read the first word and began timing for two minutes. If the participant did not respond for six seconds, the TVI stated the first word (counting it as an error) and then started the stopwatch. The TVI followed along with the participant using the teacher (print) copy of the reading passage, marking any errors with a slash (“/”).
If a participant struggled with a word for six seconds, the TVI said the word for the participant and marked it incorrect. The participant was not required to repeat this word, although most did. At the end of the two minutes, the TVI placed a bracket (]) after the last word spoken by the participant and said, “Stop.” The TVI then removed the participant copy of the reading passage. The TVI then tallied the total words attempted and the total errors, computed the total correct words (total words read minus total errors), and divided that score by two to obtain a per minute CWPM score. If the participant asked for feedback, the TVI stated something like, “Good work” or “Keep trying your best.”

The TVI then completed a comprehension check. The TVI stated verbatim, “Please tell me all about what you just read. Try to tell me everything you can. Begin.” The TVI began timing for one minute and prompted the participant (“Tell me everything you can.”) if he or she did not say anything for three seconds. That prompt could be used only once during the session. The TVI said, “Stop” after one minute, or earlier if the student did not say anything or veered off topic for five seconds. The TVI concluded by thanking the participant for doing his or her best reading.

Assessment of instructional level. Brief ORF screening assessments, such as the DIBELS winter benchmarks, provide one piece of valuable information with respect to the student’s overall reading skill level (Hasbrouck & Tindal, 2006; Hosp & Fuchs, 2005). Screening materials, such as the DIBELS benchmarks, should be at grade level (Hasbrouck & Tindal; Hosp et al., 2007). However, the use of text at an instructional level is recommended for instruction, the diagnosis of reading problems, or for use with interventions (Hasbrouck & Tindal).
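Purely as an illustration of the scoring arithmetic described above (total words attempted, minus errors, halved to convert the two-minute braille read into a one-minute rate), the computation can be sketched as follows. The function name and sample numbers are illustrative only; they are not part of the study’s materials.

```python
def cwpm_per_minute(total_words_attempted: int, total_errors: int,
                    read_minutes: float = 2.0) -> float:
    """Correct words per minute: (words attempted - errors) / minutes read.

    Braille readers in this study read for two minutes, so the raw score
    is divided by two to allow comparison against one-minute norms.
    """
    correct_words = total_words_attempted - total_errors
    return correct_words / read_minutes

# Example: 96 words attempted with 4 errors in two minutes
print(cwpm_per_minute(96, 4))  # 46.0
```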
Providing text at an instructional level of difficulty is thought to optimize learning (Shapiro, 2004), ORF skill development, and comprehension (Billingsley & Wildman, 1988; Nolte & Singer, 1985; Therrien, Wickstrom, et al., 2006; Vadasy & Sanders, 2008). However, Fuchs and Deno (1982) indicated a lack of consensus on the definition of an instructional level. Definitions of instructional level generally hinge on the student’s proficiency with respect to word recognition and comprehension (Hasbrouck & Tindal, 2006). Word recognition scores are typically expressed as a percentage of words read accurately. Comprehension scores are often expressed as a percentage of comprehension questions answered correctly. The construct of instructional level is often further defined by a measure of reading speed (i.e., CWPM; Shapiro, 2004). Criteria for reading speed put forward by Fuchs and Deno (1982) are often cited (Alber-Morgan et al., 2007; Shapiro, 2004) and are shown in Table 8. The discussion, when applied to braille readers, is further complicated by a lack of norms for braille readers. Fuchs and Deno’s criteria were used as guidelines in this study with the acknowledgement that, even for typically sighted students, “the criteria provided are not specific cut-offs, but should be viewed as gradual changes” (Shapiro, p. 133).

Table 8

Direct Assessment Placement Criteria (Fuchs & Deno, 1982)

Grade level of materials   Level of difficulty   Words correct per minute   Errors per minute
1-2                        frustration           <40                        >4
                           instructional         40-60                      4 or less
                           mastery               >60                        4 or less
3-6                        frustration           <70                        >6
                           instructional         70-100                     6 or less
                           mastery               >100                       6 or less

For the purpose of this study, instructional level is defined as the grade level of text that was challenging, yet manageable, for participants based primarily upon the criterion of word recognition.
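As an illustration only, the placement criteria in Table 8 can be expressed as a simple decision rule. Fuchs and Deno do not specify how to treat a passage whose speed and error rate fall in different categories, so this sketch makes the conservative assumption that an excessive error rate places the passage at the frustration level regardless of speed.

```python
def placement(grade: int, cwpm: float, errors_per_minute: float) -> str:
    """Classify a passage per Fuchs and Deno's (1982) criteria (Table 8).

    Grades 1-2: instructional at 40-60 CWPM with 4 or fewer errors per minute.
    Grades 3-6: instructional at 70-100 CWPM with 6 or fewer errors per minute.
    Assumption: too many errors -> frustration, whatever the speed.
    """
    low, high, max_errors = (40, 60, 4) if grade <= 2 else (70, 100, 6)
    if errors_per_minute > max_errors or cwpm < low:
        return "frustration"
    if cwpm <= high:
        return "instructional"
    return "mastery"

print(placement(2, 50, 3))   # a grade two passage read at 50 CWPM, 3 errors
print(placement(4, 110, 2))  # a grade four passage read at 110 CWPM, 2 errors
```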
With teacher or tutor guidance and assistance, participants could read text that was at an instructional, or “teaching,” level with a minimum of approximately 90% accuracy (Betts, 1946; see also Caldwell, 2002; Hasbrouck & Tindal, 2006). The instructional level was determined for participants in this study by way of an Informal Reading Inventory (IRI) process (Caldwell, 2002). In brief, the IRI process involves asking students to read text from different grade levels while evaluating their accuracy (and levels of comprehension). According to Caldwell, the IRI process may be applied to any passage (e.g., grade leveled CBM passages, trade books, published IRIs, basal readers) to investigate the extent to which students can read and understand it. CBM methodologies were employed in the IRI process (Hasbrouck & Tindal, 2006). In this study, oral retell fluency was chosen to assess levels of comprehension as part of the IRI process.

The study’s IRI process involved untrained, graded DIBELS benchmark passages. While the DIBELS passages did not necessarily mirror participants’ classroom materials, the use of these free and user-friendly materials allowed for comparison across participants, comparison against norms for sighted students, and standardized procedures. Further, the DIBELS passages are controlled for difficulty level (Shapiro, 2004). With respect to assessing reading speed, the IRI process was adapted for the braille readers in this study. The process of administering the DIBELS benchmark passages is in keeping with the process for administering instructional passages throughout the study, thus potentially enhancing treatment integrity. For example, participants were given twice as long to read the passages (i.e., 2 minutes rather than 1 minute) based on previously cited findings by Morgan and Bradley-Johnson (1995). Ultimately, the final ORF scores were halved to allow for comparison against one-minute norms for typically sighted children.
Given the lack of norms for braille readers, clinical judgment also factored into the decision-making process of estimating each participant’s instructional level. For example, participants whose median, grade level scores for accuracy or speed did not fall within the instructional range for their grade level were re-tested using DIBELS benchmark assessments for earlier grades (Shapiro, 2004) until median results for accuracy and/or speed fell closer to or within the instructional range recommended for typically sighted students. John, a grade three student (who was grade four age), was closest to instructional level in speed at the grade one level. Once the instructional level (grade level) of text had been estimated for participants, they received materials only at that level for the duration of the study.

All participants, save one (Tabitha), were instructional for accuracy and/or speed at at least the grade one level using DIBELS benchmark materials. In Tabitha’s case, as a lower grade level of DIBELS ORF materials was not available, it became necessary to assess her pre-reading skills (Shapiro, 2004). Using an informal, locally developed measure, her TVI determined that Tabitha could consistently demonstrate a sound understanding of the alphabetic principle and phonemic awareness. The decision to include Tabitha in the study was a complicated one. Following consultation with her TVI, the investigator ultimately believed that Tabitha had sufficient phonological awareness that, provided she attended regularly, she could benefit from the intensive reading practice characterizing the intervention. It was expected that her performance would be closer to instructional level using instructional passages, as the DIBELS passages were more difficult than the instructional passages. The scripted attention and feedback was expected to be reinforcing (Daly, Chafouleas, et al., 2005).

Assessment of comprehension.
Additionally, an oral retell fluency assessment was conducted after participants read through each passage, and a median DIBELS oral retell fluency assessment score was recorded. Participants with a significant reading comprehension problem were ineligible for participation in the study (i.e., those with oral retell fluency scores of 25 percent or less of their ORF when the latter exceeded 40 words per minute; Good & Kaminski, 2002). The procedures for comprehension checks were described above.

Brief assessment for motivation: Reward contingency. Motivational factors have been referred to within the literature pertaining to braille readers (Pattillo et al.; Greaney & Reason, 2000). In an early study, Kederis and colleagues (1967) improved the reading rates of braille readers in grades 5 to 12 by 25 to 100 percent by manipulating motivational factors. Eligible participants for the current study were those who, for example, experienced poor ORF due to a skill deficit rather than a motivation-related performance deficit (McCurdy et al., 2007). In keeping with the IH framework, each participant was assessed to determine the effects, if any, of adding a motivational component to the learning trial. Participants who previously performed within the DIBELS “at risk” or “some risk” levels yet were able to reach instructional level in connected text (i.e., instructional level was achieved at at least a grade one level) underwent a brief assessment for performance factors such as motivation.

All participants were offered a preferred reward (e.g., an incentive such as a small toy, tactile or smelly stickers, or an extra privilege) for beating their best ORF score from the three DIBELS benchmark assessment probes. In order to receive the preferred reward, participants were required to improve or “beat” their best score on the previously administered DIBELS benchmark reading passage by one word while not making more than six errors.
A DIBELS progress monitoring story was used as the text in this assessment. If participants were unable to beat their previous score on the first read of this assessment, they were given one more attempt. In terms of the contingent reinforcement consequence, the investigator informally interviewed the TVI to ensure the reinforcing functional value of the reward items. Participants were shown the reward prior to the assessment.

The directions for this brief assessment were as follows. The TVI noted the highest benchmark CWPM score for the participant, and stated verbatim, “I will be asking you to read another story now and timing you with my stopwatch. Our goal is to beat your old score. When you read the other stories, your best score was ____ (insert highest previous CWPM score) words per minute. Try to beat this score without making more than 6 errors when you read this next story. If you beat your score, I will give you a(n) ____ (name incentive).” The TVI then gave the participant the brailled story and stated verbatim, “Please read this story out loud. If you get stuck, I will tell you the word so you can keep reading. When I say ‘stop,’ I may ask you to tell me about what you read, so do your best reading. Start here (show with finger on braille page). Begin.” The TVI started the stopwatch and, after the student read the first word, began timing for two minutes. If the participant did not read the first word after six seconds, the TVI told him or her the word (marking it as an error) and then started the stopwatch. At the end of the two minutes, the TVI placed a bracket (]) after the last word spoken by the participant and said, “Stop.” The TVI took back the participant’s story. The TVI then tallied the total words attempted and the total errors, computed the ORF score (total words read minus total errors), and divided that score by two to obtain a per minute ORF score. The TVI then thanked the participant for participating in the story reading.
To beat the old score and obtain the incentive, the participant only had to read one more word than the time before without making more than six errors. Participants who did not obtain the incentive were allowed to read the story and try to beat their previous score one more time. Participants who were unable to beat their score after the first or second reading were told that they would be given the incentive at some point in the study. Participants who exceeded their best benchmark ORF score by 30 percent with six or fewer errors (Daly, Persampieri, et al., 2005) would have been ineligible to participate in the study, because such a fluency gain would be considered indicative of a performance (motivational) concern as opposed to a skill deficit (Daly et al., p. 402; see also Carnine, Silbert, & Kame’enui, 1990). Those participants who improved their ORF scores by less than 30 percent, or who again scored in the “at risk” or “some risk” levels following this assessment, were eligible to participate in the study. All participants improved their scores by less than 30% and, therefore, were eligible to participate in the study.

Baseline

Baseline procedures involved (a) the collection of baseline ORF, error, and comprehension data, (b) progress monitoring, and (c) the administration of the participants’ self-perception assessments. The eight participants were organized into two cohorts, each with four participants. The first four participants to complete the participant screening process were assigned to the first cohort and assigned an a priori baseline according to the order in which their screening was completed. For example, the first participant to join the first cohort received a baseline of three, and the next three participants received baselines of four, five, and seven, respectively. The next four participants were assigned to the second cohort and given baselines of three, five, six, and eight in the same manner.

Collection of baseline ORF data.
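The eligibility rule above (a gain of 30 percent or more with six or fewer errors under reward conditions suggests a performance rather than a skill deficit) can be sketched as follows. This is an illustrative reconstruction, not the study’s instrument; the function name is hypothetical, and the treatment of a gain of exactly 30 percent as disqualifying is an assumption, since the text describes the boundary only as “less than 30 percent” versus “exceeded by 30 percent.”

```python
def suggests_performance_deficit(best_benchmark_cwpm: float,
                                 incentive_read_cwpm: float,
                                 incentive_read_errors: int) -> bool:
    """True if a rewarded reading reached at least 130% of the best benchmark
    CWPM with six or fewer errors, suggesting a motivational (performance)
    concern rather than a skill deficit (Daly, Persampieri, et al., 2005)."""
    improved_enough = incentive_read_cwpm >= 1.30 * best_benchmark_cwpm
    return improved_enough and incentive_read_errors <= 6

# A jump from 40 to 55 CWPM with few errors would have excluded a participant;
# a jump from 40 to 45 would not.
print(suggests_performance_deficit(40, 55, 2))  # True
print(suggests_performance_deficit(40, 45, 2))  # False
```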
TVIs assessed participants’ ORF (in addition to error rates and comprehension) once each day for a minimum of three (preferably consecutive) days. TVIs used instructional passages and followed the same procedures used for the DIBELS passages in the participant screening and selection phase (see Appendix G for a sample checklist). No repeated reading was undertaken during the baseline phase, nor did participants receive feedback with respect to any of their scores (e.g., regarding ORF). TVIs thanked the participants for partaking in each session. TVIs scored the ORF and error rate, wrote the scores on the procedures checklist for the particular session, and mailed the checklist and tape to the investigator in a pre-paid courier envelope. The investigator determined the comprehension score, treatment integrity, and interobserver agreement for sessions in the baseline phase.

Progress monitoring. Following every second baseline assessment, TVIs also administered instructional level DIBELS progress monitoring passages. TVIs followed the same procedures used for the DIBELS passages in the participant screening and selection phase (see Appendix G).

Participants’ self-perception as readers. Participants responded to a 5-question Likert scale questionnaire. The scale was designed to help assess their self-perceptions as readers (see Appendix I). Several TVIs audiotaped the process of completing the questionnaire. Anecdotal responses from these audiotaped sessions are included in the analysis section.

Intervention

The intervention procedures involved implementing the repeated reading intervention. TVIs’ social validity data were collected during this phase, as were the data for participants’ self-perception as readers.

Repeated reading intervention procedures. The intervention was implemented in 30 minute sessions an average of three times per week over a range of approximately five to nine weeks (Therrien & Kubina, 2006).
The intervention protocol was delineated in a checklist format (see Appendix G) and included three opportunities to read the same text and immediate corrective feedback for word errors, as informed by DIBELS protocols and the dimensions of repeated reading reported to be most effective in the literature (Therrien, 2004; Therrien & Kubina). The repeated reading intervention involved participants rereading a short, instructional level passage three times (Rashotte & Torgesen, 1985) within every session (Therrien, Wickstrom, et al., 2006; Therrien, Gormley, et al., 2006). Participants read as much of the passage as they could within a two minute period (Daly, Chafouleas, et al., 2005). Participants then reread the passage two more times (again for two minutes each time), trying to beat their first, “cold” read with each successive reading. TVIs used instructional passages and followed procedures similar to those used for the DIBELS passages in the participant screening and selection phase, except that for the second and third rereadings, participants were encouraged to beat their previous score (see Appendix G for checklist). The treatment integrity checklists provided step-by-step repeated reading directions and scripted verbal feedback and encouragement (e.g., should participants improve their ORF over the previous reading). No consequences for improved ORF or error performance were provided other than (scripted) verbal praise.

The intervention included a performance feedback component, as recommended in the literature (Daly, Chafouleas, et al., 2005). Participants were told their CWPM and error scores after each reading. In addition, after each reading of the passage, participants noted their ORF rate on a tactile graph, with assistance from the TVI. Graphing scores as a means of providing performance feedback has been shown to improve student motivation and self-confidence.
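The per-session feedback described above (three timed reads, a per-minute CWPM score after each, and praise contingent on beating the previous read) can be sketched as a small scoring helper. The function and its inputs are illustrative only; they are not the study’s checklist.

```python
def session_feedback(reads):
    """Given (words_attempted, errors) for each of the three timed reads in a
    session, return the per-minute CWPM for each read (two-minute reads,
    halved) and whether each reread beat the read before it."""
    cwpm = [(words - errors) / 2.0 for words, errors in reads]
    beat_previous = [cwpm[i] > cwpm[i - 1] for i in range(1, len(cwpm))]
    return cwpm, beat_previous

# Example session: the participant improves on both rereads.
scores, improved = session_feedback([(60, 5), (70, 4), (78, 3)])
print(scores)    # [27.5, 33.0, 37.5]
print(improved)  # [True, True]
```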
Immediately after the third and last rereading of the passage, participants orally retold the story as a measure of comprehension. TVIs offered systematic corrective feedback by providing the correct word if the participant was unable to read the word within six seconds. This strategy was likely less effective for word learning than a phrase drill type of error correction strategy (Daly, Chafouleas, et al., 2005). Nonetheless, it helped to minimize the repetition of errors across rereadings while adhering to the DIBELS protocol so as to allow for comparison of CWPM and error rates across phases and passages.

TVIs were also instructed to cue participants for both fluency and comprehension as per DIBELS instructions, as this practice, while using instructional level materials, was found to enhance both fluency and comprehension (O’Shea et al., 1985).

Social validity assessment for TVIs and self-perception assessments for participants. On the last day of the intervention, TVIs and participants completed their respective social validity questionnaires. The self-perception questionnaires were read to participants, and the TVIs recorded the participants’ responses on the questionnaire. Several TVIs chose to audiotape their participants’ responses. Anecdotal responses from both the TVIs and participants are included in the analysis section.

Follow-up

Follow-up phase procedures involved (a) the collection of post-intervention ORF rates, (b) progress monitoring, and (c) obtaining social validity data from TVIs and participants. TVIs implemented this follow-up phase a week after concluding the intervention phase. The phase was completed within one week.

Collection of follow-up ORF data. TVIs assessed participants’ ORF (in addition to error rates and comprehension) once each day for three (preferably consecutive) days.
TVIs used instructional passages and followed the same procedures used for the DIBELS passages in the participant screening and selection phase (see Appendix G for checklist). No repeated reading was undertaken during the follow-up phase, nor did participants receive feedback with respect to any of their scores (e.g., regarding ORF). TVIs thanked the participants for partaking in each session. TVIs scored the ORF and error rate, completed the treatment integrity checklists, and sent them, along with the tapes, to the investigator.

Progress monitoring. Following every second follow-up assessment, TVIs also administered instructional level DIBELS progress monitoring passages. TVIs followed the same procedures used for the DIBELS passages in the participant screening and selection, baseline, and intervention phases. See Appendix G for treatment integrity checklists.

Data Analyses

The current study sought to investigate a functional relationship between the implementation of a repeated reading intervention and improvement in reading proficiency by way of a nonconcurrent multiple baseline design. Visual analysis was used to analyze the data generated during the study. The degree to which therapeutic criteria were met (i.e., the extent to which the change demonstrated was meaningful) (Richards et al., 1999, p. 266; see also Alberto & Troutman, 2003) was assessed by comparing any improvements in CWPM to the aforementioned criteria set out by Hasbrouck and Tindal (2006). The importance and acceptability of the intervention and the implementation process were assessed by examining data from TVI social validity questionnaires and anecdotal responses. Participants’ self-perceptions as readers were also assessed by examining questionnaire results. Each analysis, as applied to the current study, is described in turn.
Visual Analysis

Visual analysis and interpretation remain the primary and most prevalent method of data interpretation in single subject research (Cooper et al., 2007; Kazdin, 1982). Visual analysis is deemed uniquely suited to the identification of socially important, strong, and reliable variables (Cooper et al.). Baer (1977) argued, “If a problem has been solved, you can see that; if you must test for statistical significance, you do not have a solution” (p. 171). Visual inspection is linked to low rates of Type I errors, identifying variables that “contribute to an effective, robust technology of behavior change” (Cooper et al., p. 249). Visual analysis for this study involved the examination of level, trend, and variability, and each is described as follows for the purpose of this study.

Level is defined as “mean performance” (Horner et al., 2005). Level was assessed within and across phases with respect to its “absolute value,” looking specifically at the mean and range relative to the y-axis scale and the degree of change from one level to another (Cooper et al., 2007, p. 150). Mean values for CWPM and errors were determined for each participant in each phase of the research design. Differences among mean values across participants, phases, and passages were evaluated as well, in part to evaluate the practical significance of any changes observed. The immediacy of effects after the introduction of the intervention also was interpreted (Horner et al.). Trend was defined as the overall direction of the data path (Cooper et al., 2007; Richards et al., 1999, p. 272). Trend was analyzed with respect to its direction (e.g., increasing), magnitude, and stability (Cooper et al.). For the purpose of this study, variability was defined as the “fluctuation around a mean or slope during a phase” (Horner et al., 2005, p. 171).
Therapeutic Criteria

Therapeutic criteria offer another important means of evaluating the strength of the relationship between the intervention and dependent variables. Practically significant improvement (Kirk, 1996) in reading proficiency, primarily in terms of CWPM, was the goal of this study. The literature confirms the social importance of the target behavior, ORF, operationalized as CWPM. According to the research (Amato, 2000; Rex et al., 1994; Ryles, 1996, 1998), improvements in CWPM that approximate research based therapeutic criteria (e.g., Hasbrouck & Tindal, 2006) should improve the projected academic, social, and career outcomes for braille-reading students. Results for changes in CWPM (e.g., mean gains from baseline to intervention) were evaluated according to the therapeutic criteria put forward by Hasbrouck and Tindal.

Social Validity Evaluation

Emphasis upon evaluating results in terms of social validity is a hallmark of single subject research (Kazdin, 1982). The study involved what was intended to be a user-friendly intervention to effect practically meaningful improvements in reading proficiency that could be easily used extraexperimentally by TVIs and other practitioners in the field. The independent variable, the repeated reading intervention, was designed and continually assessed with respect to its “social acceptability, complexity, practicality, and cost” (Cooper et al., 2007, p. 250) as well as its perceived effectiveness. The acceptability of the process of implementing the intervention was also assessed during the course of the study. TVI social validity questionnaires were presented in Likert scale format. Results for this measure are presented as means and ranges. Additionally, results for the questionnaire administered following the first week of intervention are compared with the results for the questionnaire that was completed at the end of the intervention. Any anecdotal responses provided by TVIs are summarized and noted.
Evaluation of Participants’ Self-perception as Readers

Participants completed a short questionnaire designed to investigate their self-perceptions of their reading skills. Participants’ self-perception questionnaires were presented in Likert scale format. Results for this questionnaire are presented as means and ranges. Pre- and post-intervention questionnaire results were compared. Participants’ anecdotal responses are summarized.

CHAPTER 4

Results

The purpose of this study was to investigate the effects of the repeated reading intervention to determine whether a functional relationship exists between the implementation of the intervention and an improvement in oral reading fluency (ORF), operationalized as correct words per minute (CWPM) and errors per minute, and comprehension for primary braille-reading students. Results of the implementation of the repeated reading intervention are presented in this chapter. Data were analyzed in order to answer the three experimental and three descriptive research questions addressed by this study:

1. Is there a functional relationship between the implementation of a repeated reading intervention and an improvement in oral reading fluency (ORF), operationalized as correct words per minute (CWPM) and errors per minute, for braille-reading students in grades one, two, and three?

2. Is there a functional relationship between the implementation of a repeated reading intervention and an improvement in comprehension, operationalized as oral retell fluency, for braille-reading students in grades one, two, and three?

3. Is there a functional relationship between the implementation of a repeated reading intervention and an improvement in CWPM for Low Content Overlap (LCO; progress monitoring) passages?

4. Are any gains in CWPM during the repeated reading intervention associated with generalized improvement in untrained, High Content Overlap (HCO) passages?

5.
Are the repeated reading intervention and implementation process socially valid (i.e., important, acceptable, and useful) from the point of view of teachers for students with visual impairments (TVIs)?

6. Does the implementation of the repeated reading intervention change the participants’ self-perceptions as readers?

Summary of Findings

A nonconcurrent multiple baseline design across participants was employed to investigate the functional relationship between the independent variable and the dependent variables related to reading proficiency. The eight participants were assigned to research cohorts of four in the order in which they completed the participant screening and selection phase. The first cohort included Kelly, Kevin, Tabitha, and Carrie, and the second cohort included John, Linda, Mark, and Tom. Data were collected over a period of five to nine weeks, depending on when participants started the study and when their school year ended. Three participants underwent a follow-up phase.

The data for the dependent variables (correct words per minute [CWPM], errors per minute, and comprehension) were visually displayed in graphic form. A multiple baseline design across participants was used to display data for both cohorts one and two. To facilitate visual analysis, dependent variables for each cohort were generally graphed separately. The data were then analyzed visually according to rules of evidence for single subject research (Cooper et al., 2007; Kazdin, 1982; Johnston & Pennypacker, 1993) to evaluate whether there was a functional relationship between the implementation of the repeated reading intervention and changes in the dependent variables across participants and phases. Visual analyses for this study involved the examination of level, trend, and variability, as previously defined, within and across baseline, intervention, and follow-up phases.
The presence of a functional relationship between the intervention and the dependent variables was evaluated by looking for stable improvements in CWPM (the primary dependent variable) and comprehension, in addition to a reduction in errors per minute, from the baseline to the intervention phase and continuing into the follow-up phase. Experimental control was evaluated by looking for improvements in each dependent variable in terms of level and trend at the point of intervention for at least three of four participants in each cohort. Computer generated least squares trend lines were used to facilitate analysis of data paths.

Results for CWPM also were compared to therapeutic criteria, that is, the extent to which any changes were therapeutic or meaningful for participants. Therapeutic criteria were based on Hasbrouck and Tindal’s (2006) expected weekly growth rates for typically sighted students undergoing only Tier 1 level intervention (i.e., classroom instruction) within a Response to Intervention (RTI) framework (Therrien et al., 2006). Therapeutic criteria were depicted graphically as growth or aim lines (Sutherland & Snyder, 2007). An aim line was computed for each participant according to his or her mean CWPM during baseline (Sutherland & Snyder). The aim line was calculated according to the equation y = mx + b. The expected weekly growth rate of the participant, according to his or her percentile in Hasbrouck and Tindal’s normative data for the winter term, served as the slope (m) of each aim line (Francis et al., 2008). Aim lines were extended into the follow-up phases because, when these phases occurred, they were conducted immediately after or within five days of the conclusion of intervention.

Social validity data based on the questionnaires completed by the teachers for students with visual impairments were summarized using descriptive statistics.
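The aim line computation above (y = mx + b, with the mean baseline CWPM as the intercept b and the expected weekly growth rate as the slope m) can be illustrated as follows. This is a sketch, not the study’s plotting procedure, and the sample baseline mean and growth rate are made-up numbers, not values from Hasbrouck and Tindal’s norms.

```python
def aim_line(baseline_mean_cwpm, weekly_growth, weeks):
    """Expected CWPM at each week: y = m*x + b, where b is the participant's
    mean baseline CWPM and m is the expected weekly growth rate drawn from
    normative data. Values are rounded to one decimal for display."""
    return [round(baseline_mean_cwpm + weekly_growth * week, 1)
            for week in range(weeks + 1)]

# Hypothetical example: baseline mean of 30 CWPM, expected growth of
# 1.2 CWPM per week, over four weeks of intervention.
print(aim_line(30, 1.2, 4))  # [30.0, 31.2, 32.4, 33.6, 34.8]
```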
Results for the participant questionnaires regarding their self-perceptions as readers also were summarized using descriptive statistics. Additionally, anecdotal responses from the TVIs and participants are reported.

Results of the analyses for the three dependent variables (CWPM, errors per minute, and comprehension) are summarized below according to cohort and in the following order: (a) CWPM, (b) errors per minute, and (c) comprehension. Results for the three descriptive questions are then presented according to cohort in the following order: (a) CWPM for the third read of instructional passages and HCO passages, (b) errors for the third read of instructional passages and HCO passages, and (c) CWPM and errors for LCO passages. Finally, social validity data are presented, first with respect to the TVIs and then the participants. Prior to the presentation of the individual results, information relevant to the participant is discussed.

Correct Words per Minute (CWPM)

The results for the first and second cohorts’ oral reading fluency, primarily conceptualized as CWPM, are shown in Figures 1 and 2 respectively. These figures are presented below in sequence. The data for CWPM are based on the first “cold” read of instructional passages across phases. Trend lines and aim lines for CWPM during baseline and intervention phases are also presented in these figures.

Figure 1. Cohort 1: Correct words per minute for the first, “cold” read of instructional passages.
Figure 2. Cohort 2: Correct words per minute for the first, “cold” read of instructional passages.

First cohort: CWPM for the first, “cold” read of instructional passage. Figure 1 presents the results for CWPM across the four participants in cohort one.

Kelly, the first participant, was an eight-year-old female student in grade two who had a visual diagnosis of optic nerve hypoplasia. Prior to the study, she had received approximately two years of formal braille instruction. On a weekly basis, she received five 30-45 minute literacy lessons from a TVI and two 20-30 minute braille literacy lessons from another staff person under the supervision of the TVI. According to her TVI, Kelly read contracted braille. Hence, for the purpose of this study, Kelly was given contracted reading materials. These materials were at a grade two instructional level.

The phase sequence for Kelly was baseline, intervention, and follow-up, and data collection took place for seven weeks after screening was completed. Baseline data for Kelly show a decreasing trend and an average of 30 CWPM (range, 26 to 34). During intervention, a moderate decrease in CWPM was evidenced with an average of 21 CWPM (range, 15 to 23.5). During the follow-up phase, CWPM increased to an average of 26 (range, 25 to 27).

An aim line was computed in order to evaluate Kelly’s results for CWPM against therapeutic criteria.
With a mean of 30 CWPM in baseline, her expected weekly growth rate according to Hasbrouck and Tindal (2006) was between 0.6 CWPM (students in the 10th percentile, reading 18 CWPM) and 1.1 CWPM (students in the 25th percentile, reading 42 CWPM). The study adopted a conservative approach, and an aim line was computed based on an expected weekly growth rate of 1.1 CWPM. The data show that Kelly’s results for CWPM clearly did not match expected growth rates across the intervention and follow-up phases.

Kevin, the second participant, was a six-year-old male student in grade one who had a visual diagnosis of retinal degeneration. Prior to the study, he had received approximately one year of formal braille instruction. On a weekly basis, he received two 90-minute literacy lessons from a TVI and daily literacy support from a braillist during language arts. According to his TVI, Kevin read uncontracted braille. Hence, for the purpose of this study, Kevin was given uncontracted reading materials. These materials were at a grade one instructional level.

The phase sequence for Kevin was baseline and intervention, and data collection took place for five weeks after screening was completed. Baseline data for Kevin show a stable baseline and an average of 11 CWPM (range, 9 to 13.5). A slight increase in the last data point for baseline may have been associated with his TVI’s offering of an incentive for improved reading, which was a deviation from protocol. During intervention, a slightly increasing trend for CWPM was evidenced with a slight increase to an average of 13 CWPM (range, 9 to 15.5).

An aim line was computed in order to evaluate Kevin’s results for CWPM against therapeutic criteria. With a mean of 11 CWPM in baseline, his expected weekly growth rate according to Hasbrouck and Tindal (2006) was approximately 1.0 CWPM (students in the 25th percentile, reading 12 CWPM). An aim line was computed based on this expected weekly growth rate.
The data show that Kevin’s results for CWPM appeared to track the expected growth rates until the middle of the intervention phase (i.e., approximately three weeks into intervention), but then showed some deterioration from the aim line.

Tabitha, the third participant, was a seven-year-old female student in grade one who had a visual diagnosis of optic nerve hypoplasia. Prior to the study, she had received approximately 1.5 years of formal braille instruction. On a weekly basis, she received one 90-minute literacy lesson from a TVI and four 60-minute lessons from a resource teacher supervised by a TVI. According to her TVI, Tabitha read uncontracted braille. Hence, for the purpose of this study, Tabitha was given uncontracted reading materials. These materials were at a grade one instructional level.

The phase sequence for Tabitha was baseline and intervention, and data collection took place for seven weeks after screening was completed. Baseline data for Tabitha show a stable baseline and an average of 2 CWPM (range, 0.5 to 2). During intervention, a slightly increasing trend for CWPM was evidenced with an increase to an average of 4 CWPM (range, 0.5 to 7.5).

An aim line was computed in order to evaluate Tabitha’s results for CWPM against therapeutic criteria. With a mean of 2 CWPM in baseline, her expected weekly growth rate according to Hasbrouck and Tindal (2006) was approximately 0.6 CWPM (students in the 10th percentile, reading 6 CWPM). An aim line was computed based on an expected weekly growth rate of 0.6 CWPM. The data show that Tabitha’s results for CWPM generally tracked the expected growth rates.

Carrie, the fourth participant, was a six-year-old female student in grade one who had a visual diagnosis of bilateral microphthalmia with corneal opacities. Prior to the study, she had received approximately two years of formal braille instruction. On a weekly basis, she received three 60-minute literacy lessons from a TVI.
According to her TVI, Carrie read uncontracted braille. Hence, for the purpose of this study, Carrie was given uncontracted reading materials. These materials were at a grade one instructional level.

The phase sequence for Carrie was baseline, intervention, and follow-up, and data collection took place for nine weeks after screening was completed. Baseline data for Carrie show a stable baseline and an average of 8 CWPM (range, 6 to 9). During intervention, a moderately increasing trend for CWPM was evidenced with an increase to an average of 13 CWPM (range, 6 to 21.5).

An aim line was computed in order to evaluate Carrie’s results for CWPM against therapeutic criteria. With a mean of 8 CWPM in baseline, her expected weekly growth rate according to Hasbrouck and Tindal (2006) was between 0.6 CWPM (students in the 10th percentile, reading 6 CWPM) and 1.0 CWPM (students in the 25th percentile, reading 12 CWPM). An aim line was computed based on an expected weekly growth rate of 0.6 CWPM. During the intervention phase, the data show that Carrie’s results for CWPM exceeded the expected growth rates represented in the aim line. During the follow-up phase, her results for CWPM continued to exceed the aim line.

Overall, visual analysis across the four participants in the first cohort indicated few to no changes in CWPM from baseline to intervention. The four participants in the first cohort demonstrated an average CWPM in baseline of 12.75 (range, 2 to 30). This average remained the same during intervention (range, 4 to 21). In terms of therapeutic criteria, one of the participants (Kelly) did not track her aim line at any point during intervention. Two of the four participants (Kevin and Tabitha), however, approached or tracked their respective aim lines. Additionally, one participant (Carrie) exceeded her aim line for the duration of the intervention and follow-up phases.

Second cohort: CWPM for the first, “cold” read of instructional passage.
John, the first participant, was a 10-year-old male student in grade three who had a visual diagnosis of optic nerve hypoplasia. Prior to the study, he had received approximately five years of formal braille instruction. He was of grade four age, as he had repeated an earlier grade. On a weekly basis, he received seven 45-minute literacy lessons from a TVI. According to his TVI, John read uncontracted braille. Hence, for the purpose of this study, John was given uncontracted reading materials. These materials were at a grade one instructional level.

The phase sequence for John was baseline and intervention, and data collection took place for seven weeks after screening was completed. Baseline data for John show a stable baseline and an average of 17 CWPM (range, 16 to 19). During intervention, a slightly increasing trend for CWPM was evidenced with an increase to an average of 23 CWPM (range, 19.5 to 35).

An aim line was computed in order to evaluate John’s results for CWPM against therapeutic criteria. While John had previously been held back a grade and was currently a grade three student, he was instructional in grade one level reading material and, therefore, was compared to therapeutic criteria for grade one typically sighted students. With a mean of 17 CWPM in baseline, his expected weekly growth rate according to Hasbrouck and Tindal (2006) was between 1.0 CWPM (students in the 25th percentile, reading 12 CWPM) and 1.9 CWPM (students in the 50th percentile, reading 23 CWPM). Given his age and grade level, an aim line was computed based on a conservative expected weekly growth rate of 1.9 CWPM. The data show that John’s results for CWPM tracked the expected growth rates for the first two-thirds of the intervention phase (i.e., approximately four weeks into intervention), but then showed deterioration from the aim line.
Linda, the second participant, was a seven-year-old female student in grade one who had a significant visual impairment resulting from the treatment of a ganglioglioma (tumor). Prior to the study, she had received approximately two years of formal braille instruction. On a weekly basis, she received five 60-minute literacy lessons from a TVI. According to her TVI, Linda read uncontracted braille. Hence, for the purpose of this study, Linda was given uncontracted reading materials. These materials were at a grade one instructional level.

The phase sequence for Linda was baseline, intervention, and follow-up, and data collection took place for seven weeks after screening was completed. Baseline data for Linda show a stable baseline and an average of 28 CWPM (range, 24 to 31). During intervention, CWPM showed variability with a slightly increasing trend and an improvement from baseline to an average of 35 CWPM (range, 28 to 37).

An aim line was computed in order to evaluate Linda’s results for CWPM against therapeutic criteria. With a mean of 28 CWPM in baseline, her expected weekly growth rate according to Hasbrouck and Tindal (2006) was between 1.9 CWPM (students in the 50th percentile, reading 23 CWPM) and 2.2 CWPM (students in the 75th percentile, reading 47 CWPM). An aim line was computed based on an expected weekly growth rate of 1.9 CWPM. During the intervention phase, the data show that Linda’s results for CWPM exceeded the expected growth rates. During the follow-up phase, CWPM remained at or above the aim line.

Mark, the third participant, was a nine-year-old male student in grade three with a visual diagnosis of optic nerve hypoplasia. Prior to the study, he had received approximately four years of formal braille instruction. On a weekly basis, he received four 45-minute literacy lessons from a TVI. According to his TVI, Mark read contracted braille. Hence, for the purpose of this study, Mark was given contracted reading materials.
These materials were at a grade three instructional level.

The phase sequence for Mark was baseline and intervention, and data collection took place for six weeks after screening was completed. Baseline data for Mark show a stable baseline and an average of 38 CWPM (range, 28 to 42). During intervention, a slightly increasing trend for CWPM was evidenced with a slight increase to an average of 39 CWPM (range, 28 to 43).

An aim line was computed in order to evaluate Mark’s results for CWPM against therapeutic criteria. With a mean of 38 CWPM in baseline, his expected weekly growth rate according to Hasbrouck and Tindal (2006) was between 0.8 CWPM (students in the 10th percentile, reading 36 CWPM) and 1.1 CWPM (students in the 25th percentile, reading 62 CWPM). An aim line was computed based on an expected weekly growth rate of 0.8 CWPM. During the intervention phase, the data show that Mark’s results for CWPM tracked the expected growth rates represented in the aim line until the final data point collected during this phase, which fell precipitously below the aim line.

Tom, the fourth participant, was a seven-year-old male student in grade one with a visual diagnosis of optic atrophy. Prior to the study, he had received approximately 1.5 years of formal braille instruction. On a weekly basis, he received two 90-minute literacy lessons from a TVI and daily literacy support from a braillist during language arts. According to his TVI, Tom read contracted braille. For the purpose of this study, he received contracted materials. These materials were at a grade one instructional level.

The phase sequence for Tom was baseline and intervention, and data collection took place for six weeks after screening was completed. Baseline data for Tom show a slightly increasing trend and an average of 15 CWPM (range, 13 to 17). During intervention, the slightly improving trend for CWPM continued with an increase to an average of 26 CWPM (range, 17 to 34).
An aim line was computed in order to evaluate Tom’s results for CWPM against therapeutic criteria. With a mean of 15 CWPM in baseline, his expected weekly growth rate according to Hasbrouck and Tindal (2006) was between 1.0 CWPM (students in the 10th percentile, reading 12 CWPM) and 1.9 CWPM (students in the 25th percentile, reading 23 CWPM). An aim line was computed based on an expected weekly growth rate of 1.0 CWPM. During the intervention phase, the data show that Tom’s results for CWPM exceeded the expected growth rates represented in the aim line.

Overall, visual analysis across the four participants in the second cohort indicates moderate to no changes in CWPM from baseline to intervention. In baseline, the four participants in the second cohort demonstrated an average CWPM of 24.5 (range, 15 to 38). During intervention, this increased to an average of 30.75 (range, 23 to 39). In terms of therapeutic criteria, one of the four participants (John) tracked his aim line for the first four weeks of intervention, falling away from the aim line as the intervention progressed. Another participant (Linda) exceeded her aim line during the intervention and follow-up phases. One participant (Mark) tracked his aim line for the duration of the intervention, only falling away from the aim line on the final assessment. Finally, one participant (Tom) exceeded his aim line for the duration of the intervention phase.

Errors per Minute

The results for the first and second cohorts’ errors for the first “cold” read of instructional passages across phases are shown in Figures 3 and 4 respectively. These figures are presented below in sequence. Trend lines for errors during baseline and intervention phases are also presented in these figures to facilitate analysis.
Figure 3. Cohort 1: Errors per minute for the first, “cold” read of instructional passages.

Figure 4. Cohort 2: Errors per minute for the first, “cold” read of instructional passages.

First cohort: Errors for the first, “cold” read of instructional passage. Figure 3 presents the results for errors per minute for the four participants in cohort one.

Baseline data for the first participant, Kelly, show a steadily increasing trend and an average of 1 error per minute (range, 0.5 to 2). During intervention, Kelly demonstrated an initial increase in errors in the first, “cold” read following the initiation of intervention (3 errors per minute). Overall, during intervention a slightly decreasing trend was evidenced, but the error rate increased somewhat compared to baseline, with an average of 2 errors per minute (range, 0.5 to 3).
During the follow-up phase, however, errors decreased to an average of 0.2 per minute (range, 0 to 0.5).

Baseline data for the second participant, Kevin, show an increasing trend and an average of 3 errors per minute (range, 2 to 3.5). During intervention, a relatively stable level of errors with some variability was evidenced, with a decrease, compared to baseline, to an average of 2 errors per minute (range, 0.5 to 3).

Baseline data for the third participant, Tabitha, show an increasing trend and an average of 3 errors per minute (range, 2 to 4). During intervention, error rates showed variability with an increase, compared to baseline, to an average of 4 errors per minute (range, 2.4 to 6).

Baseline data for the fourth participant, Carrie, show some variability and an increasing trend with an average of 3 errors per minute (range, 2 to 4.5). During intervention, variability around a decreasing trend was evidenced with an average again of 3 errors per minute (range, 0.5 to 4). During the follow-up phase, however, errors decreased to an average of 1 per minute (range, 1 to 1.5).

Overall, visual analyses across the four participants in the first cohort indicate no improvement in error rates from baseline to intervention. In baseline, the four participants in the first cohort demonstrated an average of 2.5 errors per minute (range, 1 to 3) on the first, “cold” read of instructional passages. During intervention, this increased slightly to an average of 2.8 (range, 2 to 4). The two participants who underwent a follow-up phase (Kelly and Carrie) demonstrated a decrease in their respective errors from baseline levels to an average of 0.5 (range, 0 to 1). Based on these results for errors, all participants read quite accurately across all phases, if slowly. These results confirmed the likely suitability of the skill-based repeated reading intervention for these participants.
Additionally, the results indicate that the participants appeared to have been reading passages at their instructional level in terms of errors per minute, according to criteria advanced by Fuchs and Deno (1982).

Second cohort: Errors for the first, “cold” read of instructional passage. Figure 4 presents the results for errors for the four participants in cohort two. This figure shows the rate of errors for the first, “cold” read of instructional passages across the four participants.

Baseline data for the first participant, John, show an increasing trend during baseline and an average of 2 errors per minute (range, 1 to 2). During intervention, a relatively stable level of errors with some variability was evidenced, with a decrease from baseline to an average of 1 error per minute (range, 0.5 to 3).

Baseline data for the second participant, Linda, show variability and a slightly increasing trend with an average of 1 error per minute (range, 0 to 2). During intervention, a relatively stable level of errors with some variability was evidenced, with an increase in errors from baseline to an average of 2 errors per minute (range, 0.5 to 3). Follow-up phase data show a decreasing trend, returning to baseline rates, with an average of 1 error per minute (range, 0 to 1.5).

Baseline data for the third participant, Mark, show some variability around a slightly decreasing trend with a mean of 1 error per minute (range, 0 to 2.5). During intervention, variability continued with a slightly increasing trend for errors and an average of 1 error per minute (range, 0 to 1.5).

Baseline data for the fourth participant, Tom, show high variability around a mean of 4 errors per minute (range, 2.5 to 5). During intervention, variability around a stable level was evidenced, with a decrease in errors from baseline to an average of 3 errors per minute (range, 2 to 4).
Overall, visual analysis across the four participants in the second cohort shows little to no improvement in error rates across phases. Two participants (John and Tom) slightly improved their error rates from baseline to intervention, while one participant (Linda) demonstrated a slight increase from baseline to intervention but a return to baseline levels during follow-up. One participant (Mark) demonstrated no change in error rates from baseline to intervention. In baseline, the four participants in the second cohort demonstrated an average of 2 errors per minute (range, 1 to 4) on the first, “cold” read of instructional passages. During intervention, this decreased slightly to an average of 1.8 (range, 1 to 3). The one participant who underwent a follow-up phase (Linda) demonstrated no change in error rate compared to baseline.

As with the first cohort, these results for errors indicate that all participants read reasonably accurately, if slowly. These results also confirmed the likely suitability of the skill-based repeated reading intervention for these participants. Additionally, the results indicate that the participants appeared to have been reading passages at their instructional level in terms of errors per minute (Fuchs & Deno, 1982).

Oral Retell Fluency (Comprehension)

The results for the first and second cohorts’ comprehension, operationalized as oral retell fluency, are shown in Figures 5 and 6, respectively. These figures are presented below in sequence. Oral retell fluency was calculated for the first, “cold” read of instructional passages in baseline, the third re-reading of the instructional passages during intervention, and then in the first, “cold” read during follow-up, when the latter phase occurred.
Figure 5. Cohort 1: Oral retell fluency (comprehension) for instructional passages.

Figure 6. Cohort 2: Oral retell fluency (comprehension) for instructional passages.

First cohort: Oral retell fluency. Figure 5 presents the results for oral retell fluency for the four participants in cohort one.

Baseline data for the first participant, Kelly, show a stable, slightly decreasing trend in baseline with an average oral retell fluency of 64% (range, 60 to 69%).
During intervention, variability within the data was demonstrated, yet little change in level was evidenced, with an average oral retell fluency of 65% (range, 47 to 86%). During the follow-up phase, there was a slight drop in level compared to intervention, with an average oral retell fluency of 52% (range, 50 to 53%).

Baseline data for the second participant, Kevin, show a variable, increasing trend with an average oral retell fluency of 34% (range, 17 to 53%). During intervention, a decreasing trend yet little change in level was evidenced, with an average oral retell fluency of 33% (range, 20 to 69%). The analysis of Kevin’s data is compromised by a confounding variable introduced by his TVI across all observation sessions. His TVI consistently added additional, unscripted prompts during the oral retell fluency assessments. Specifically, she asked leading questions (e.g., “Well, what kind of a dog was it?”). Kevin’s results for oral retell fluency should be interpreted with this in mind.

Baseline data for the third participant, Tabitha, show high variability, with a decreasing trend and an average oral retell fluency of 78% (range, 25 to 100%). During intervention, a change in level and a slightly decreasing trend were evidenced, with a decrease in average oral retell fluency to 54% (range, 45 to 83%).

Baseline data for the fourth participant, Carrie, show a variable, slightly decreasing trend in baseline with an average oral retell fluency of 41% (range, 27 to 63%). During intervention, variability and a slightly decreasing trend were evidenced again, with an average oral retell fluency of 40% (range, 14 to 65%).
During the follow-up phase, Carrie demonstrated an increasing trend in oral retell fluency, but with an average level of 33% (range, 45 to 64%), which was lower than that evidenced in baseline and intervention.

Overall, visual analysis of all results for the first cohort suggests that oral retell fluency scores did not improve during the intervention phase. In baseline, the four participants in the first cohort demonstrated an average oral retell fluency of 54% (range, 34 to 78%) on the first, “cold” read of instructional passages. During intervention, this decreased slightly to an average of 48% (range, 33 to 54%). The one participant who underwent a follow-up phase (Carrie) demonstrated an increasing trend in oral retell fluency, but her average level was lower for that phase than for baseline and intervention.

Second cohort: Oral retell fluency. Figure 6 presents the results for oral retell fluency for the four participants in cohort two.

Baseline data for the first participant, John, show an increasing trend with an average oral retell fluency of 59% (range, 40 to 69%). During intervention, a slightly decreasing trend, with some variability, was evidenced, with a slight decrease in average oral retell fluency to 58% (range, 41 to 71%).

Baseline data for the second participant, Linda, show an increasing trend, with an average oral retell fluency of 68% (range, 52 to 80%). During intervention, variability around an increasing trend with little change in level was evidenced, with an average of 65% (range, 40 to 92%). During the follow-up phase, a decreasing trend and drop in level were evidenced, with an average oral retell fluency of 48% (range, 42 to 54%).

Baseline data for the third participant, Mark, show an increasing trend, with variability, and an average oral retell fluency of 58% (range, 39 to 83%).
During intervention, an increasing trend and a drop in level were evidenced, with a decrease in average oral retell fluency to 47% (range, 32 to 70%).

Baseline data for the fourth participant, Tom, show variability around a relatively stable level, with an average oral retell fluency of 38% (range, 26 to 55%). During intervention, variability with an improvement in level and an increasing trend were evidenced, with an average of 57% (range, 44 to 90%).

Visual analysis of results for three participants within the second cohort (John, Linda, and Mark) suggests that, overall, oral retell fluency scores did not improve and/or decreased during the intervention phase. One participant (Tom) demonstrated an increase in oral retell fluency during the course of the intervention. In baseline, the four participants in the second cohort demonstrated an average oral retell fluency of 56% (range, 38 to 68%) on the first, “cold” read of instructional passages. During intervention, this increased slightly to an average of 57% (range, 33 to 54%). The one participant who underwent a follow-up phase (Linda) demonstrated a decreasing trend in oral retell fluency and a drop in average to 48% (range, 42 to 54%), compared to baseline and intervention.

CWPM for the Third Read of Instructional Passages and High Content Overlap (HCO) Passages

CWPM data were collected for the third read of instructional passages and HCO passages during the intervention phase, and the results for the first and second cohorts are shown in Figures 7 and 8, respectively. These figures are presented below in sequence. The figures also show the CWPM for the first “cold” read of instructional passages across phases, for the purpose of comparison. Trend lines and aim lines for CWPM for the first
Figure 8. Cohort 2: Correct words per minute for the first “cold” read and the third read of the instructional passages and for High Content Overlap (HCO) passages.

For the analysis of participants’ CWPM during the intervention phase for the third read and for High Content Overlap (HCO) passages, across-phase (baseline to intervention) comparisons and within-intervention-phase comparisons were made. First, participants’ baseline phase first “cold” reads were compared to their intervention phase third reads (of the same content). Second, participants’ intervention phase first “cold” reads were compared to their intervention phase third reads. Third, participants’ intervention phase first “cold” reads were compared to their intervention phase HCO passage reads. This third comparison offered a measure of modest generalization of reading to different text but with similar content. These results are summarized below within cohorts for each participant. First cohort: CWPM for the third read and for HCO passages.
Figure 7 presents the results of CWPM for the first “cold” read, the third read, and the HCO passage read for the four participants in cohort one. Kelly’s baseline data for the first “cold” read show a decreasing trend with an average level of 30 CWPM (range, 26 to 34). During intervention, the first “cold” read shows a decrease in CWPM to an average of 21 (range, 15 to 23.5). However, during the third read of the same passages in intervention, Kelly’s CWPM increased to an average of 39 (range, 31 to 47). This exceeded the average baseline level of the first “cold” read by 9 CWPM, and the average intervention level of the first “cold” read by 18 CWPM. Although no improvement was observed in CWPM for Kelly when comparing baseline to intervention for the first “cold” read, moderate to substantial improvement was observed when comparing CWPM in baseline to the third read in intervention, and when comparing CWPM during the first “cold” read in intervention to the third read in intervention. These data suggest that Kelly’s reading ability improved with the repeated reading of the same text. Kelly’s CWPM for the first “cold” read during intervention also was compared to her CWPM for HCO passages during intervention. This measure was gathered after the third read. While the first “cold” read in intervention yielded an average CWPM of 21 (range, 15 to 23.5), Kelly’s read of HCO passages evidenced an average of 28 CWPM (range, 17.5 to 35.5). This represented an average within intervention phase improvement of 7 CWPM when comparing the first “cold read” passages to HCO passages. However, because CWPM for HCO passages was not measured in the baseline phase, an interpretation of modest generalization of reading ability to HCO passages is necessarily constrained. Kevin’s baseline data for the first, “cold” read show a stable level with an average of 11 CWPM (range, 9 to 13.5).
During intervention, the first, “cold” read shows a slight improvement to an average of 13 CWPM (range, 9 to 15.5). With respect to the third read of the same passages in intervention, Kevin’s CWPM increased to an average of 24 (range, 15 to 32). Modest to substantial improvement was observed in CWPM for Kevin from the baseline first, “cold” read to the third read of intervention (mean gain of 13 CWPM) and from the first to the third read during intervention (mean gain of 11 CWPM). These data suggest that Kevin’s reading ability improved with the repeated reading of the same text. Kevin’s CWPM for the first, “cold” read during intervention also was compared to his CWPM for HCO passages during intervention. While the first “cold” read in intervention yielded an average of 13 CWPM (range, 9 to 15.5), Kevin’s read of HCO passages evidenced an average of 18 CWPM (range, 13 to 22.5). This represented an average within intervention phase improvement of 5 CWPM when comparing the first “cold read” passages to HCO passages. However, because CWPM for HCO passages was not measured in the baseline phase, an interpretation of modest generalization of reading ability to HCO passages is, again, constrained. Tabitha’s baseline data for the first, “cold” read show a stable level with an average of 2 CWPM (range, 0.5 to 2). During intervention, the first, “cold” read shows a slight improvement to an average of 4 CWPM (range, 0.5 to 7.5). With regard to the third read of the same passages in intervention, Tabitha’s CWPM increased to an average of 8 CWPM (range, 4 to 12). Small to moderate improvement was observed in CWPM for Tabitha from the baseline first, “cold” read to the third read of intervention (mean gain of 6 CWPM) and from the first to the third read during intervention (mean gain of 4 CWPM). These data suggest that Tabitha’s reading ability improved with the repeated reading of the same text.
Tabitha’s CWPM for the first, “cold” read during intervention also was compared to her CWPM for HCO passages during intervention. While the first “cold” read in intervention yielded an average of 4 CWPM (range, 0.5 to 7.5), Tabitha’s read of HCO passages evidenced an average of 5 (range, 1.5 to 7). This represented an average within intervention phase improvement of 1 CWPM when comparing the first “cold read” passages to HCO passages. Again, these results suggest little to no generalization to HCO passages; however, this interpretation is necessarily constrained by the absence of comparative data in the baseline phase. Carrie’s baseline data for the first, “cold” read show a stable level with an average of 8 CWPM (range, 6 to 9). During intervention, the first, “cold” read shows a moderate improvement to an average of 13 CWPM (range, 6 to 21.5). With respect to the third read of the same passages in intervention, Carrie’s CWPM increased to an average of 23 (range, 15 to 30.5). Moderate to substantial improvement was observed in CWPM for Carrie from the baseline first, “cold” read to the third read of intervention (mean gain of 15 CWPM) and from the first to the third read during intervention (mean gain of 10 CWPM). These data suggest that Carrie’s reading ability improved with the repeated reading of the same text. Carrie’s CWPM for the first, “cold” read during intervention also was compared to her CWPM for HCO passages during intervention. While the first “cold” read in intervention yielded an average of 13 CWPM (range, 6 to 21.5), Carrie’s read of HCO passages evidenced an average of 17 CWPM (range, 10 to 24). This represented an average within intervention phase improvement of 4 CWPM when comparing the first “cold read” passages to HCO passages. However, because CWPM for HCO passages was not measured in the baseline phase, an interpretation of modest generalization of reading ability to HCO passages is again constrained.
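Each of the per-participant comparisons reported above reduces to a difference between phase means. The following sketch illustrates that arithmetic; the session values and variable names are invented for illustration and are not the participants’ data.

```python
from statistics import mean

def phase_summary(scores):
    """Return (mean, min, max) for a list of per-session CWPM scores."""
    return round(mean(scores), 1), min(scores), max(scores)

# Hypothetical per-session CWPM values (invented for illustration only).
baseline_cold = [11, 9, 13.5, 10]       # first "cold" reads, baseline phase
intervention_cold = [13, 9, 15.5, 14]   # first "cold" reads, intervention phase
intervention_third = [24, 15, 32, 25]   # third reads of the same passages
intervention_hco = [18, 13, 22.5, 19]   # High Content Overlap (HCO) passages

# Comparison 1: baseline "cold" read vs. intervention third read (same content)
gain_baseline_to_third = mean(intervention_third) - mean(baseline_cold)
# Comparison 2: within intervention, first "cold" read vs. third read
gain_first_to_third = mean(intervention_third) - mean(intervention_cold)
# Comparison 3: within intervention, "cold" read vs. HCO passages (generalization)
gain_cold_to_hco = mean(intervention_hco) - mean(intervention_cold)

print(phase_summary(baseline_cold), gain_baseline_to_third,
      gain_first_to_third, gain_cold_to_hco)
```

Each gain is a mean difference in CWPM between two conditions; positive values indicate faster accurate reading in the later condition.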
Overall, visual analysis of the results for all participants in cohort one indicates slight to substantial improvements for CWPM from the first, “cold” read in baseline to the third read of instructional passages, with an average improvement of 10.8 CWPM (range, 6 to 15). The results again indicate an average improvement of 10.8 CWPM (range, 4 to 18) from the first to the third reading. In addition, an average improvement of 4.3 CWPM (range, 1 to 7) was demonstrated from the first, “cold” read in intervention to the HCO passages. These results indicate that CWPM for all participants in cohort one increased after completing three rereads. Additionally, these participants demonstrated higher CWPM scores for HCO passages compared to the first reading of instructional passages in intervention. Second cohort: CWPM for the third read and HCO passages. Figure 8 presents the results for CWPM for the first, “cold” read, the third read, and the HCO passages for the four participants in cohort two. John’s baseline data for the first, “cold” read show a stable level with an average of 17 CWPM (range, 16 to 19). During intervention, the first, “cold” read shows a moderate improvement to an average of 23 CWPM (range, 19.5 to 35). During the third read of the same passages in intervention, John demonstrated an increase to an average of 34 CWPM (range, 26 to 45.5). Modest to substantial improvement was observed in CWPM for John from the baseline first, “cold” read to the third read of intervention (mean gain of 17 CWPM) and from the first to the third read during intervention (mean gain of 11 CWPM). These data suggest that John’s reading ability improved with the repeated reading of the same text. John’s CWPM for the first, “cold” read during intervention also was compared to his CWPM for HCO passages during intervention. This measure was obtained after the third read. While the first “cold” read in intervention yielded an average of 23 CWPM (range, 19.5 to 35),
John’s read of HCO passages evidenced an average of 27 CWPM (range, 11.5 to 34.5). This represented an average within intervention phase improvement of 4 CWPM when comparing the first “cold read” passages to HCO passages. However, because CWPM for HCO passages was not measured in the baseline phase, an interpretation of modest generalization of reading ability to HCO passages is necessarily constrained. Linda’s baseline data for the first, “cold” read show a stable level with an average of 28 CWPM (range, 25.5 to 31). During intervention, the first, “cold” read shows a moderate improvement to an average of 35 CWPM (range, 21.5 to 46.5). During the third read of the same passages in intervention, Linda’s CWPM increased to an average of 52 CWPM (range, 37 to 69). Substantial improvement was observed in CWPM for Linda from the baseline first, “cold” read to the third read of intervention (mean gain of 24 CWPM) and from the first to the third read during intervention (mean gain of 17 CWPM). These data suggest that Linda’s reading ability improved with the repeated reading of the same text. Linda’s CWPM for the first, “cold” read during intervention was also compared to her CWPM for HCO passages during intervention. While the first “cold” read in intervention yielded an average of 35 CWPM (range, 21.5 to 46.5), Linda’s read of HCO passages evidenced an average of 41 CWPM (range, 28 to 51). This represented an average within intervention phase improvement of 6 CWPM when comparing the first “cold read” passages to HCO passages. However, as with the other participants, because CWPM for HCO passages was not measured in the baseline phase, an interpretation of modest generalization of reading ability to HCO passages is, again, constrained. Mark’s baseline data for the first, “cold” read show a stable level with an average of 38 CWPM (range, 29 to 45). During intervention, the first, “cold” read shows a slight improvement to an average of 39 CWPM (range, 30 to 46.5).
During the third read of the same passages in intervention, Mark’s CWPM markedly increased to an average of 61 CWPM (range, 52.5 to 72.5). Substantial improvement was observed in CWPM for Mark from the baseline first, “cold” read to the third read of intervention (mean gain of 23 CWPM) and from the first to the third read during intervention (mean gain of 22 CWPM). These data suggest that Mark’s reading ability improved substantially with the repeated reading of the same text. Mark’s CWPM for the first, “cold” read during intervention also was compared to his CWPM for HCO passages during intervention. While the first “cold” read in intervention yielded an average of 38 CWPM (range, 29 to 45), Mark’s read of HCO passages evidenced an average of 46 CWPM (range, 35 to 53). This represented an average within intervention phase improvement of 8 CWPM when comparing the first “cold read” passages to HCO passages. However, because CWPM for HCO passages was not measured in the baseline phase, an interpretation of modest generalization of reading ability to HCO passages is constrained for Mark as well. Tom’s baseline data for the first, “cold” read show a slightly increasing trend with an average of 15 CWPM (range, 11.5 to 24). During intervention, the first, “cold” read shows a substantial improvement to an average of 26 CWPM (range, 19.5 to 38.5). With respect to the third read of the same passages in intervention, Tom’s CWPM increased further to an average of 43 CWPM (range, 36 to 52.5). Substantial improvement was observed in CWPM for Tom from the baseline first, “cold” read to the third read of intervention (mean gain of 28 CWPM) and from the first to the third read during intervention (mean gain of 17 CWPM). These data suggest that Tom’s reading ability improved substantially with the repeated reading of the same text. Tom’s CWPM for the first, “cold” read during intervention also was compared to his CWPM for HCO passages during intervention.
While the first “cold” read in intervention yielded an average of 26 CWPM (range, 19.5 to 38.5), Tom’s read of HCO passages evidenced an average of 37 CWPM (range, 26.5 to 52). This represented an average within intervention phase improvement of 11 CWPM when comparing the first “cold read” passages to HCO passages. However, because CWPM for HCO passages was not measured in the baseline phase, an interpretation of modest generalization of reading ability to HCO passages is necessarily constrained. Overall, visual analysis of the results for all participants in cohort two indicates slight to substantial improvements for CWPM from the first, “cold” read in baseline to the third read of instructional passages, with an average improvement of 23 CWPM (range, 17 to 28). The results indicate an average improvement of 16.8 CWPM (range, 11 to 22) from the first to the third reading. In addition, an average improvement of 7.3 CWPM (range, 4 to 11) was demonstrated from the first, “cold” read in intervention to the HCO passages. These results indicate that CWPM for all participants in cohort two increased after completing three repeated reads. Additionally, these participants demonstrated higher CWPM scores for HCO passages compared to the first reading of instructional passages in intervention.

Errors for the Third Read of Instructional Passages and High Content Overlap (HCO) Passages

Data regarding error rates were collected for the third read of instructional passages and HCO passages during the intervention phase. For the analysis of participants’ errors during the intervention phase for the third read and for High Content Overlap (HCO) passages, across-phase (baseline to intervention) comparisons and within-intervention-phase comparisons were made. First, participants’ errors in baseline for the first “cold” reads were compared to their errors for intervention third reads (of the same content).
Second, participants’ intervention phase errors for the first “cold” reads were compared to their errors for intervention phase third reads. Third, participants’ intervention phase errors for first “cold” reads were compared to their errors for intervention phase HCO passage reads. This third comparison offered a measure of modest generalization of accuracy to different text but with similar content. These results are summarized below within cohorts for each participant. First cohort: Errors for the third read and HCO passages. Baseline data for Kelly show a steadily increasing trend for errors and an average of 1 error per minute (range, 0.5 to 2). During intervention, Kelly demonstrated a slight decreasing trend in errors for the first “cold” read, but an overall higher average of 2 errors per minute (range, 1 to 3). During the third read of the same passages in intervention, Kelly’s error rate decreased to an average of 1 error per minute (range, 0 to 1.5). This equaled the average baseline level of errors for the first “cold” read, and showed a reduction from the average intervention level of the first “cold” read by 1 error per minute. Although no improvement was observed in error rates for Kelly when comparing baseline to intervention for the first “cold” read, she returned to baseline levels for the third read in intervention. These data suggest that Kelly’s reading accuracy improved during intervention with the repeated reading of the same text. Kelly’s errors for the first “cold” read during intervention also were compared to her errors for HCO passages during intervention. This measure was gathered after the third read. While the first “cold” read in intervention yielded an average of 2 errors per minute (range, 1 to 3), Kelly’s read of HCO passages evidenced an average of 1 error per minute (range, 0 to 1).
This represented an average within intervention phase improvement of 1 error per minute when comparing the first “cold read” passages to HCO passages. However, because error rates for HCO passages were not measured in the baseline phase, an interpretation of modest generalization of reading ability to HCO passages is necessarily constrained. The results across phases indicate that Kelly generally evidenced very low error rates. Baseline data for the second participant, Kevin, show an increasing trend and an average of 3 errors per minute (range, 2 to 3.5). During intervention, a relatively stable level of errors with some variability was evidenced, with a decrease, compared to baseline, to an average of 2 errors per minute (range, 0.5 to 3). With respect to the third read of the same passages in intervention, Kevin’s error rate decreased further to an average of 0.5 errors per minute (range, 0 to 1.5). Modest improvement was observed in errors for Kevin from the baseline first, “cold” read to the third read of intervention (mean reduction of 2.5 errors per minute) and from the first to the third read during intervention (mean reduction of 1.5 errors per minute). These data suggest that Kevin’s accuracy improved with the repeated reading of the same text. Kevin’s error rate for the first, “cold” read during intervention also was compared to his error rate for HCO passages during intervention. Kevin evidenced a mean error rate for HCO passages of 1 error per minute (range, 0 to 2), which represented a mean within intervention phase improvement of 1 error per minute when comparing the first “cold read” passages to HCO passages. However, because error rates for HCO passages were not measured in the baseline phase, an interpretation of modest generalization of reading ability to HCO passages is necessarily constrained. The results across phases indicate that Kevin generally evidenced very low error rates.
Baseline data for the third participant, Tabitha, show an increasing trend and an average of 3 errors per minute (range, 2 to 4). During intervention, error rates showed variability with an increase, compared to baseline, to an average of 4 errors per minute (range, 2.4 to 6). With respect to the third read of the same passages in intervention, Tabitha’s error rate decreased to baseline levels with an average of 3 errors per minute (range, 1.5 to 4.5). No improvement was observed in errors for Tabitha from the baseline first, “cold” read to the third read of intervention. Slight improvement was observed from the first to the third read during intervention (mean reduction of 1 error per minute). These data are equivocal and suggest that Tabitha’s accuracy did not necessarily improve with the repeated reading of the same text. Tabitha’s error rate for the first, “cold” read during intervention also was compared to her error rate for HCO passages during intervention. Tabitha evidenced a mean error rate for HCO passages of 4 errors per minute (range, 2.5 to 7.5), which represented no within intervention phase improvement when comparing the first “cold read” passages to HCO passages. As error rates for HCO passages were not measured in the baseline phase, the interpretation of a lack of generalization of reading ability to HCO passages is again constrained. These results indicate that Tabitha generally evidenced moderately high and very variable error rates for the third read and HCO passages during intervention. Baseline data for the fourth participant, Carrie, show some variability and an increasing trend with an average of 3 errors per minute (range, 2 to 4.5). During intervention, variability around a decreasing trend was evidenced with an average of 3 errors per minute (range, 0.5 to 4). During the follow-up phase, however, errors decreased to an average of 1 error per minute (range, 1 to 1.5).
With regard to the third read of the same passages in intervention, Carrie’s error rate decreased to an average of 1 error per minute (range, 0 to 2). A modest improvement of 2 errors per minute was observed in errors for Carrie both from the baseline first, “cold” read to the third read of intervention and from the first to the third read during intervention. These data suggest that Carrie’s accuracy improved with the repeated reading of the same text. Carrie’s error rate for the first, “cold” read during intervention also was compared to her error rate for HCO passages during intervention. Carrie evidenced a mean error rate for HCO passages of 2 errors per minute (range, 0.5 to 3.5), which represented a mean within intervention phase improvement of 1 error per minute when comparing the first “cold read” passages to HCO passages. However, because error rates for HCO passages were not measured in the baseline phase, an interpretation of slight generalization of reading ability to HCO passages is, again, constrained. The results across phases indicate that Carrie generally evidenced very low error rates. Overall, visual analysis of the results for all participants in cohort one indicates little to no improvement in errors from the first, “cold” read in baseline to the third read of instructional passages, with an average improvement of 1 error per minute (range, 0 to 2). The results indicate an average improvement of 1 error per minute (range, 1 to 2) from the first to the third reading. In addition, an average improvement of 1 error per minute (range, 0 to 1) was demonstrated from the first, “cold” read in intervention to the HCO passages. These results indicate that all participants in cohort one decreased their error rates after completing three repeated reads. Additionally, three of four participants demonstrated lower error rates for HCO passages compared to the first reading of instructional passages in intervention.
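Cohort-level summaries of this kind are simply the mean and range of the per-participant mean changes. A minimal sketch of that aggregation, using invented gain values rather than the study’s data:

```python
from statistics import mean

def cohort_summary(gains):
    """Summarize per-participant mean gains as (cohort mean, min, max)."""
    return round(mean(gains), 1), min(gains), max(gains)

# Hypothetical mean error-rate reductions (errors per minute), one value per
# participant in a four-member cohort; invented for illustration only.
baseline_to_third = [0, 2.5, 0, 2]   # baseline "cold" read -> intervention third read
first_to_third = [1, 1.5, 1, 2]      # within intervention, first -> third read
cold_to_hco = [1, 1, 0, 1]           # within intervention, "cold" read -> HCO passages

for label, gains in [("baseline to third", baseline_to_third),
                     ("first to third", first_to_third),
                     ("cold to HCO", cold_to_hco)]:
    avg, low, high = cohort_summary(gains)
    print(f"{label}: average improvement {avg} (range, {low} to {high})")
```

Reporting the range alongside the cohort mean, as above, preserves the between-participant variability that a single average would otherwise hide.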
Second cohort: Errors for the third read and HCO passages. Baseline data for the first participant, John, show an increasing trend during baseline and an average of 2 errors per minute (range, 1 to 2). During intervention, a relatively stable level of errors with some variability was evidenced, with a decrease from baseline to an average of 1 error per minute (range, 0.5 to 3). With respect to the third read of the same passages in intervention, John’s error rate remained at an average of 1 error per minute (range, 0 to 1.5). Slight improvement was observed in errors for John from the baseline first, “cold” read to the third read of intervention (mean reduction of 1 error per minute). No improvement was demonstrated from the first to the third read during intervention. These data suggest that John’s accuracy remained high and did not improve with the repeated reading of the same text. John’s error rate for the first, “cold” read during intervention also was compared to his error rate for HCO passages during intervention. John evidenced a mean error rate for HCO passages of 1 error per minute (range, 0 to 2), which represented a mean within intervention phase improvement of 1 error per minute when comparing the first “cold read” passages to HCO passages. Again, because error rates for HCO passages were not measured in the baseline phase, an interpretation of little to no generalization of reading ability to HCO passages is constrained. The results across phases indicate that John generally evidenced very low error rates. Baseline data for the second participant, Linda, show variability and a slightly increasing trend with an average of 1 error per minute (range, 0 to 2). During intervention, a relatively stable level of errors with some variability was evidenced, with an increase in errors from baseline to an average of 2 errors per minute (range, 0.5 to 3).
With regard to the third read of the same passages in intervention, Linda’s error rate remained at an average of 2 errors per minute (range, 0 to 4). A slight increase in errors per minute was observed for Linda from the baseline first, “cold” read to the third read of intervention (increase of 1), and no improvement from the first to the third read during intervention. These data suggest that Linda’s already high accuracy was not improved with the repeated reading of the same text. Linda’s error rate for the first, “cold” read during intervention also was compared to her error rate for HCO passages during intervention. Linda evidenced a mean error rate for HCO passages of 1 error per minute (range, 0.5 to 2.5), which represented a mean within intervention phase improvement of 1 error per minute when comparing the first “cold read” passages to HCO passages. However, because error rates for HCO passages were not measured in the baseline phase, an interpretation of slight generalization of reading ability to HCO passages is, again, constrained. The results across phases indicate that Linda generally demonstrated very low error rates. Baseline data for the third participant, Mark, show some variability around a slightly decreasing trend with a mean of 1 error per minute (range, 0 to 2.5). During intervention, variability continued with a slightly increasing trend for errors and an average of 1 error per minute (range, 0 to 1.5). With respect to the third read of the same passages in intervention, Mark’s error rate remained at 1 error per minute (range, 0 to 2). No improvement was observed in errors for Mark from the baseline first, “cold” read to the third read of intervention or from the first to the third read during intervention. These data suggest that Mark’s already high accuracy was unaffected by the repeated reading of the same text.
Mark’s error rate for the first, “cold” read during intervention also was compared to his error rate for HCO passages during intervention. Mark evidenced a mean error rate for HCO passages of 0.4 errors per minute (range, 0 to 1.5), which represented a mean within intervention phase improvement of 1 error per minute when comparing the first “cold read” passages to HCO passages. Again, however, because error rates for HCO passages were not measured in the baseline phase, an interpretation of slight generalization of reading ability to HCO passages is constrained. The results across phases indicate that Mark generally evidenced very low error rates. Baseline data for the fourth participant, Tom, show high variability around a mean of 4 errors per minute (range, 2.5 to 5). During intervention, variability around a stable level was evidenced, with a decrease in errors from baseline to an average of 3 errors per minute (range, 2 to 4). With regard to the third read of the same passages in intervention, Tom’s error rate decreased further to an average of 1 error per minute (range, 0.5 to 2.5). Modest improvement was observed in errors for Tom from the baseline first, “cold” read to the third read of intervention (mean reduction of 3 errors per minute) and from the first to the third read during intervention (mean reduction of 2 errors per minute). These data suggest that Tom’s accuracy improved with the repeated reading of the same text. Tom’s error rate for the first, “cold” read during intervention also was compared to his error rate for HCO passages during intervention. Tom evidenced a mean error rate for HCO passages of 2 errors per minute (range, 0.5 to 3.5), which represented a mean within intervention phase improvement of 1 error per minute when comparing the first “cold read” passages to HCO passages.
However, because error rates for HCO passages were not measured in the baseline phase, an interpretation of modest generalization of reading ability to HCO passages is necessarily constrained. The results across phases indicate that Tom generally evidenced very low error rates and improved his accuracy with the rereading of text. Overall, visual analysis of the results for all participants in cohort two indicates few to no improvements in errors from the first, “cold” read in baseline to the third read of instructional passages, with an average improvement of 1 error per minute (range, 0 to 3). The results indicate an average improvement of 0.5 errors per minute (range, 0 to 2) from the first to the third reading. In addition, an average improvement of 1 error per minute (range, 1) was demonstrated from the first, “cold” read in intervention to the HCO passages. These results indicate that one participant in cohort two improved his error rate after completing three repeated reads. Additionally, three of four participants demonstrated lower error rates for HCO passages compared to the first reading of instructional passages in intervention.

CWPM and Errors for Low Content Overlap (LCO; Progress Monitoring) Passages

The results for the first and second cohort for CWPM and errors for the LCO, or progress monitoring, passages are shown in Figures 9 and 10, respectively. These figures are presented below in sequence. The figures show the CWPM and errors for the LCO passages across phases.
Figure 9. Cohort 1: Correct words per minute and errors per minute for Low Content Overlap (LCO) passages across phases.

Figure 10. Cohort 2: Correct words per minute and errors per minute for Low Content Overlap (LCO) passages across phases.

First cohort: CWPM and errors for the LCO passages. Baseline data for the first participant, Kelly, show a very slightly decreasing trend and an average of 24 CWPM (range, 24.5 to 26). During intervention, a slight decrease in level for CWPM was evidenced with an average of 23 CWPM (range, 20 to 25.5). During the follow-up phase, CWPM increased to an average of 30 (range, 26.5 to 34.5). These visual results for novel, untrained LCO passages suggest a slight decline in CWPM from baseline to intervention and moderate improvement in CWPM in the follow-up phase. With respect to errors for LCO passages, Kelly’s baseline data show a low, stable level with an average of 1 error per minute. During intervention, no change in level, trend, or mean errors was evidenced (M = 1; range, 0.5 to 1). Data for the follow-up
Data for the follow-up phase show a decrease in errors with an average of 0.2 (range, 0 to 0.5). These results indicate that Kelly maintained a very low, stable level of errors for LCO passages across baseline and intervention phases.

Baseline data for the second participant, Kevin, show a stable level with an average of 10 CWPM (range, 9.5 to 10.5). During intervention, a gradually increasing trend was evidenced for an average of 11 CWPM (range, 8.4 to 14). These results suggest little improvement in CWPM for untrained, novel LCO passages over the course of the study.

With respect to errors for LCO passages, Kevin’s baseline data show a low, stable level with an average of 3 errors per minute (range, 2.5 to 3). During intervention, no change in level, trend, or mean errors was demonstrated (M = 3; range, 1.5 to 4.5). These results indicate that Kevin also demonstrated a very low, stable level of errors for LCO passages across baseline and intervention phases.

Baseline data for the third participant, Tabitha, show a low, stable level with an average of 2 CWPM (range, 2 to 2.5). During intervention, a very slightly increasing trend was evidenced with an average of 3 CWPM (range, 2 to 5.5). These results suggest little improvement in CWPM for untrained, novel LCO passages over the course of the study.

With respect to errors for LCO passages, Tabitha’s baseline data show a low and stable level with an average of 4 errors per minute (range, 2.5 to 4.5). This mean error rate was double the mean CWPM rate for LCO passages during baseline. During intervention, a slightly increasing trend was evidenced for an average of 5 errors per minute (range, 4 to 5.5). This mean error rate exceeded the mean CWPM for LCO passages during intervention. These results indicate that while Tabitha demonstrated a low, relatively stable level of errors (i.e., within instructional levels), her mean error rate was higher than her mean CWPM for LCO passages across phases.
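The phase summaries reported above (mean level and direction of trend) can be computed directly from session scores. A minimal sketch, using illustrative session values (not any participant’s actual session-by-session data):

```python
# Sketch of "level" (phase mean) and "trend" (least-squares slope per session)
# as used in visual analysis of single-case data; session scores are made up.

def level(scores):
    """Phase mean."""
    return sum(scores) / len(scores)

def trend(scores):
    """Ordinary least-squares slope of score on session index."""
    n = len(scores)
    mean_x = (n - 1) / 2
    mean_y = level(scores)
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(scores))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

baseline = [10.5, 10.0, 9.5]            # stable level, slight downward trend
intervention = [8.4, 10.0, 12.0, 14.0]  # gradually increasing trend

print(round(level(baseline), 2), round(trend(baseline), 2))          # 10.0 -0.5
print(round(level(intervention), 2), round(trend(intervention), 2))  # 11.1 1.88
```

A negative slope with a stable mean corresponds to the “slightly decreasing trend” language used in the visual analyses; a clearly positive slope to a “gradually increasing trend.”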
Baseline data for the fourth participant, Carrie, show a stable level with an average of 9 CWPM (range, 5.5 to 11). During intervention, an increasing trend was evidenced for an average of 12 CWPM (range, 10 to 11.5) and a gain of 3 CWPM. During the follow-up phase, an improvement in level was evidenced with a mean of 15 (range, 12 to 17). These results suggest improvement in CWPM for untrained, novel LCO passages over the course of the study.

With respect to errors for LCO passages, Carrie’s baseline data show a low, stable level with an average of 3 errors per minute (range, 2.5 to 3). During intervention, no change in level, trend, or mean errors was demonstrated (M = 3; range, 2.5 to 4.5). During the follow-up phase, a decrease in level was evidenced with a mean of 2 errors per minute (range, 2 to 2.5). These results indicate that Carrie also demonstrated a very low, stable level of errors for LCO passages across baseline and intervention phases.

Overall, visual analysis of the results for all participants in cohort one indicates slight improvements for CWPM in LCO passages across phases. In baseline, the four participants in the first cohort demonstrated an average of 11.3 CWPM (range, 2 to 24). During intervention, this increased only slightly to 12.4 CWPM (range, 3 to 23). The two participants (Kelly and Carrie) who underwent a follow-up phase demonstrated moderate improvement, each evidencing a gain of 6 CWPM over baseline levels. These results indicate that all four participants in cohort one were able to effect slight improvements in their mean CWPM for LCO passages across phases.

Visual analysis of the results for all participants in cohort one indicates little change in error rates for LCO passages across phases. In baseline, the four participants in the first cohort demonstrated an average of 2.8 errors per minute (range, 1 to 4). During intervention, this increased slightly to an average of 3 errors per minute (range, 1 to 5).
The two participants (Kelly and Carrie) who underwent a follow-up phase reduced their error rates by 1 over baseline levels. These results indicate that the four participants demonstrated very low, stable error rates with little change across phases.

Second cohort: CWPM and errors for the LCO passages. Baseline data for the first participant, John, show a decreasing trend and an average of 21 CWPM (range, 16.5 to 26). During intervention, although an increasing trend was evidenced in CWPM, the mean level decreased relative to baseline, to an average of 18 CWPM (range, 10.5 to 27). These visual results for novel, untrained LCO passages suggest a decline in CWPM from baseline to intervention.

With respect to errors for LCO passages, John’s baseline data show a slightly increasing trend with an average of 1 error per minute (range, 0.5 to 2.5). During intervention, a stable trend with no change in level or mean errors was evidenced (M = 1; range, 1 to 2). These results indicate that John demonstrated a very low, stable level of errors for LCO passages across baseline and intervention phases.

Baseline data for the second participant, Linda, show a slightly decreasing trend with an average of 27 CWPM (range, 23.5 to 29.5). During intervention, a slightly increasing trend was evidenced, however, with no change in mean level, for an average of 27 CWPM (range, 20 to 37). During the follow-up phase, an increasing trend was evidenced for an average of 30 CWPM (range, 20 to 39.5). These visual results for novel, untrained LCO passages suggest no change in mean CWPM from baseline to intervention and moderate improvement in CWPM in the follow-up phase.

With respect to errors for LCO passages, Linda’s baseline data show a low, stable level with an average of 2 errors per minute (range, 1.5 to 2.5). During intervention, no change in level, trend, or mean errors was demonstrated (M = 2; range, 1 to 4).
Data for the follow-up phase again show no change in level, trend, or mean errors (M = 2; range, 1 to 2.5). These results indicate that Linda demonstrated a very low, stable level of errors for LCO passages across baseline, intervention, and follow-up phases.

Baseline data for the third participant, Mark, show a slightly increasing trend and an average of 39 CWPM (range, 37.5 to 41). During intervention, a slightly increasing trend was evidenced with an average of 40 CWPM (range, 35.5 to 44). These visual results for novel, untrained LCO passages suggest little improvement in CWPM from baseline to intervention.

With respect to errors for LCO passages, Mark’s baseline data show a very low, stable level with an average of 0.3 errors per minute (range, 0 to 1). During intervention, little change in level was evidenced with an average of 1 error per minute (range, 0.5 to 1). These results indicate that Mark demonstrated a very low, stable level of errors for LCO passages across baseline and intervention phases.

Baseline data for the fourth participant, Tom, show an increasing trend and an average of 16 CWPM (range, 11 to 24). During intervention, a slightly decreasing trend was evidenced with an average of 23 CWPM (range, 17.5 to 26.5). These visual results for novel, untrained LCO passages suggest improvement in CWPM from baseline to intervention.

With respect to errors for LCO passages, Tom’s baseline data show a stable level with an average of 4 errors per minute (range, 2.5 to 4). During intervention, a slightly increasing trend was evidenced with no change in mean errors from baseline (M = 4; range, 2.5 to 6). These results indicate that Tom demonstrated a stable level of errors for LCO passages across baseline and intervention phases.

Overall, visual analysis of the results for all participants in cohort two indicates slight improvements for CWPM in LCO passages across phases.
In baseline, the four participants in the second cohort demonstrated an average of 25.8 CWPM (range, 16 to 39). During intervention, this increased slightly to 27 CWPM (range, 18 to 40). The one participant (Linda) who underwent a follow-up phase demonstrated some improvement, evidencing a gain of 3 CWPM in comparison to baseline levels. These results indicate that three of four participants in cohort two (Linda, Mark, and Tom) showed slight improvements in their mean CWPM for LCO passages across phases. One participant (John), however, evidenced a decrease in mean CWPM for these passages from baseline to intervention.

Visual analysis of the results for all participants in cohort two indicates little change in error rates for LCO passages across phases. In baseline, the four participants in the second cohort demonstrated an average of 1.8 errors per minute (range, 0 to 4). During intervention, this increased slightly to an average of 2 errors per minute (range, 1 to 4). The one participant (Linda) who underwent a follow-up phase evidenced no change in mean errors over baseline levels. These results indicate that the four participants demonstrated very low, stable error rates with little change across phases.

Oral Retell Fluency (Comprehension) for Low Content Overlap (LCO; Progress Monitoring) Passages

The results for the first and second cohorts for oral retell fluency for the LCO, or progress monitoring, passages are shown in Figures 11 and 12, respectively. These figures are presented below in sequence. The figures show the oral retell fluency for the LCO passages across phases.
[Figure 11. Cohort 1: Oral retell fluency (comprehension) for DIBELS, Low Content Overlap (LCO) passages.]

[Figure 12. Cohort 2: Oral retell fluency (comprehension) for DIBELS, Low Content Overlap (LCO) passages.]

First cohort: Oral retell fluency for the LCO passages. Figure 11 presents the results for oral retell fluency for the four participants in cohort one. Baseline data for the first participant, Kelly, show an increasing trend in baseline with an average oral retell fluency of 65% (range, 55 to 74%).
During intervention, variability within the data was demonstrated, yet little change in level was evidenced, with an average oral retell fluency of 65% (range, 50 to 78%). During the follow-up phase, there was a drop in level compared to baseline and intervention, with an average oral retell fluency of 44% (range, 36 to 50%).

Baseline data for the second participant, Kevin, show a stable, slightly increasing trend with an average oral retell fluency of 62% (range, 60 to 63%). During intervention, a drop in level and a slightly increasing trend were evidenced, with an average oral retell fluency of 43% (range, 23 to 58%). The analysis of Kevin’s data was again compromised by a confounding variable introduced by his TVI across all observation sessions. His TVI consistently added additional, unscripted prompts (i.e., leading questions) during the oral retell fluency assessments. Kevin’s results for oral retell fluency should be interpreted cautiously.

Baseline data for the third participant, Tabitha, show high variability, with an increasing trend and an average oral retell fluency of 57% (range, 33 to 80%). During intervention, a decreasing trend was evidenced with a drop in average oral retell fluency to 39% (range, 20 to 57%).

Baseline data for the fourth participant, Carrie, show a stable trend in baseline with an average oral retell fluency of 51% (range, 50 to 55%). During intervention, a decreasing trend was evidenced with a drop to an average oral retell fluency of 33% (range, 24 to 47%). During the follow-up phase, Carrie demonstrated an increasing trend in oral retell fluency, with an average level of 69% (range, 55 to 82%), which was higher than that evidenced in baseline and intervention.

Overall, visual analysis of all results for the first cohort suggests that oral retell fluency scores either did not improve or decreased during the intervention phase.
In baseline, the four participants in the first cohort demonstrated an average oral retell fluency of 59% (range, 51 to 65%) on the first, “cold” read of instructional passages. During intervention, this decreased to an average of 45% (range, 33 to 65%). The one participant who underwent a follow-up phase (Carrie) demonstrated an increasing trend in oral retell fluency.

Second cohort: Oral retell fluency for the LCO passages. Figure 12 presents the results for oral retell fluency for the four participants in cohort two. Baseline data for the first participant, John, show a stable baseline with an average oral retell fluency of 68%. During intervention, a decreasing trend was evidenced, again with an average oral retell fluency of 68% (range, 50 to 85%).

Baseline data for the second participant, Linda, show a slightly decreasing trend, with variability, and an average oral retell fluency of 57% (range, 48 to 71%). During intervention, a stable trend with little change in level was evidenced, with an increase to an average of 62% (range, 50 to 70%). During the follow-up phase, a decreasing trend was evidenced, with an average oral retell fluency of 63% (range, 57 to 68%).

Baseline data for the third participant, Mark, show a decreasing trend, with variability, and an average oral retell fluency of 64% (range, 54 to 74%). During intervention, stability with no improvement in level was evidenced, with a decrease in average oral retell fluency to 61% (range, 49 to 68%).

Baseline data for the fourth participant, Tom, show a decreasing trend, with an average oral retell fluency of 40% (range, 33 to 47%). During intervention, little variability with an improvement in level and a slightly decreasing trend were evidenced, with an average of 55% (range, 46 to 61%).

Visual analysis of results for three participants within the second cohort (John, Linda, and Mark) suggests that, overall, oral retell fluency scores showed moderate to no improvement during the intervention phase.
In baseline, the four participants in the second cohort demonstrated an average oral retell fluency of 57% (range, 40 to 68%) on the first, “cold” read of instructional passages. During intervention, this increased slightly to an average of 61% (range, 55 to 62%). The one participant who underwent a follow-up phase (Linda) demonstrated a decreasing trend in oral retell fluency, but an increase in mean oral retell fluency, with an average of 63% (range, 57 to 68%), compared to that of baseline and intervention.

Social Validity and Participants’ Self-perception as Readers

Social validity data were collected to help determine how important, acceptable, and useful the repeated reading intervention and implementation process were from the point of view of the TVIs. In addition, a questionnaire that assessed self-perception as readers was administered to the primary braille readers. TVIs and participants were asked to complete the same version of their respective questionnaires at two different times during the study. Questions for both the TVIs and participants were completed using a 5-point Likert scale with a rating of “5” representing “agree very much” and “1” representing “disagree very much”. The social validity data are presented for TVIs, followed by the data regarding participants’ self-perception as readers. Scores for each question on the respective questionnaires were averaged and are reported below. Additionally, questionnaire data collected at the two intervals during the study were summed across each TVI and participant, and then compared by way of a Wilcoxon signed-rank test (Max & Caruso, 1998; Wilcoxon, 1945).

Social validity results for TVIs. Each participant’s TVI completed an 11-item questionnaire after completing the first week of the intervention and again at the end of the intervention phase.
The questions were designed to investigate the degree of acceptability of the intervention, the importance the TVIs ascribed to determining and monitoring ORF skill development, and their experiences with the investigator, both before and after they employed the intervention with their students. Six of the eight TVIs completed the questionnaires at both time points (75% return rate).

Questionnaire results show that after the first week of conducting the intervention, most TVIs believed strongly that improving ORF was an important goal for their students (M = 4.8, range, 4 to 5). Following the completion of the intervention, they ascribed even more importance to improving ORF (M = 5). Before conducting the repeated reading intervention as part of the study, most TVIs perceived it to be an effective intervention to improve ORF (M = 4.8). This endorsement was strengthened at the conclusion of the study (M = 5). After some initial exposure to the intervention and after completing the intervention, the TVIs also reported that the outcomes of the intervention were beneficial to their student’s reading skill development (week 1: M = 4.4, range, 3 to 5; post-intervention: M = 4.2, range, 3 to 5). Approximately half of the TVIs reported that the use of the intervention improved their reading program (M = 3.6, range, 3 to 5).

However, at the onset of the study, TVIs indicated that carrying out the intervention had caused some unanticipated problems in their work with students (M = 2.8, range, 1 to 4). After completing the intervention, they reported slightly more unanticipated problems (M = 3.4, range, 1 to 4). One TVI reported, for example, that carrying out the intervention was ultimately more time consuming than she had anticipated. Another TVI reported that undertaking the intervention took time away from other literacy skill instruction.
A third TVI indicated that she was unable to continue to work on tracking skill development with her student, and the TVI noticed deterioration in the student’s skills.

All TVIs reported that the training activities were quite helpful and well-organized (M = 4.6, range, 3 to 5), and they indicated that the investigator showed respect for their school programs and their students (M = 4.6, range, 3 to 5; M = 5). Nonetheless, two TVIs found the intervention challenging to carry out after conducting it during the first week. By the end of the study, however, only one of the TVIs found it difficult to implement the intervention. Most TVIs reported their intention to use the intervention again to help improve their student’s ORF (M = 4.6, range, 4 to 5).

A Wilcoxon signed-rank test, a nonparametric statistical test for repeated measurements on a single, small sample, was performed on the results of the social validity questionnaires. This process generated mean summed scores of 47.7 (range, 41 to 54) after the first week of the intervention had been completed and 48.6 (range, 38 to 54) post-intervention. The statistical significance between these two scores was examined using the Wilcoxon signed-rank test. This difference was found to be statistically nonsignificant (Z = -.67, p = .500), indicating that TVIs reported similarly high levels of social validity for the intervention’s goals, procedures, and outcomes at the beginning and end of the intervention.

Participants’ self-perception as readers. Participants were asked to complete a short questionnaire designed to shed light on their self-perceptions as readers (e.g., ability to improve). The same questionnaire was administered before baseline had begun and again at the conclusion of the intervention. Questions were completed using a 5-point Likert scale with a rating of “5” representing “agree very much” and “1” representing “disagree very much”.
Four participants completed the questionnaire at both times during the study (50% return rate).

Before baseline was conducted, most participants reported that they liked reading (M = 4.4, range, 3 to 5) and that it was fun to read (M = 4.7, range, 2 to 5). After completing the study, participants reported slightly lower levels of reading enjoyment (M = 4.3, range, 3 to 5). Prior to the start of the study, participants reported that they were good readers (M = 3.9, range, 3 to 5), and this rating increased slightly following the intervention (M = 4, range, 3 to 5). They ranked their reading speed higher before beginning the intervention (M = 3.1, range, 2 to 5) than after the study was concluded (M = 2.5, range, 1 to 5). However, they reported that it was easier for them to read quickly following the conclusion of the study (M = 4.5, range, 1 to 5) than before undergoing the intervention (M = 3.3, range, 2 to 5). They reported it was more difficult for them to read before beginning the intervention (M = 2.6, range, 1 to 4) than following the study (M = 3.3, range, 1 to 5). There was little change in their perception before (M = 4.4, range, 2 to 5) and after (M = 4.5, range, 3 to 5) the intervention that practice would help them improve their reading. However, they reported a greater likelihood that they could learn to read faster and ultimately become a better reader after the intervention (M = 4.5, range, 4 to 5; M = 4.7, range, 4 to 5) than before (M = 3.9, range, 2 to 5; M = 4.4, range, 2 to 5). Participants reported higher levels of comprehension after the study (M = 4.5, range, 2 to 5) than before undertaking the intervention (M = 4, range, 3 to 5).

Most participants frequently reported that they really enjoyed being part of a study and having the chance to hear themselves on tape. They all embraced the challenge of improving their reading speed and were very keen to find out if they had “beaten” their scores during the intervention. They were generally very enthusiastic about graphing their scores, as they enjoyed the tactile representation of their progress. They routinely spoke directly into the tape recorder to relate personal stories, questions, and jokes to the investigator. Additionally, participants found humorous ways to deal with what they eventually indicated was the monotony of the directions (e.g., some would recite the directions verbatim, keeping time with the TVI). They often helped the TVI remember steps in the intervention (e.g., “Don’t I have to tell you everything I remember about the story?” “Aren’t we gonna graph today?”). However, every participant periodically expressed some fatigue during the study, particularly on days when they were asked to read the progress monitoring story after the intervention.
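The questionnaire comparisons reported in this section, which pair each respondent’s summed score at the two administrations and apply a Wilcoxon signed-rank test, can be reproduced with an exact, small-sample version of the test. The sketch below is a pure-Python implementation using illustrative paired scores, not the study’s raw data, and assumes the conventional treatment of dropping zero differences:

```python
from itertools import product

def wilcoxon_signed_rank(pre, post):
    """Exact two-sided Wilcoxon signed-rank test for small paired samples.
    Zero differences are dropped (Wilcoxon's original treatment of ties)."""
    diffs = [b - a for a, b in zip(pre, post) if b != a]
    n = len(diffs)
    # Rank the absolute differences, averaging ranks over ties.
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg = (i + j + 2) / 2  # mean of the 1-based ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    total = n * (n + 1) / 2
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    stat = min(w_plus, total - w_plus)
    # Exact p value: under the null, every pattern of signs is equally likely.
    extreme = sum(
        min(w, total - w) <= stat
        for signs in product((0, 1), repeat=n)
        for w in [sum(r for s, r in zip(signs, ranks) if s)]
    )
    return stat, extreme / 2 ** n

# Illustrative summed Likert scores for four respondents at two time points.
pre, post = [34, 38, 40, 49], [35, 37, 43, 49]
print(wilcoxon_signed_rank(pre, post))  # (1.5, 0.75)
```

With samples this small the exact p value is necessarily coarse; for larger samples a library routine such as scipy.stats.wilcoxon would normally be used instead.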
They were generally very enthusiastic about graphing their scores, as they enjoyed the tactile representation of their progress. They routinely spoke directly into the tape recorder to relate personal stories, questions, and jokes to the investigator. Additionally, participants found humorous ways to deal with what they eventually indicated was the monotony of the directions (e.g., some would recite the directions verbatim keeping time with the TVI). They often helped the TVI remember steps in the intervention (e.g., “Don’t I have to tell you everything I remember about the story?” “Aren’t we gonna graph today?”). However, every participant periodically expressed some fatigue during the study, particularly on days when they were asked to read the progress monitoring story after the intervention. A Wilcoxon signed-rank test was performed on the results of the pre- and post intervention participant questionnaires. As part of this analysis, scores for the first questionnaire were summarized across all participants, as were scores for the second  179  questionnaire. This process generated overall average scores of 40.2 (range, 34 to 49) pre-intervention and 41 (range, 35 to 49) post-intervention. The statistical significance between these two scores was examined using the Wilcoxon signed-rank test. This difference was not at the level of statistical or clinical significance (Z= -.74, p  =  .461),  indicating that there was no significant effect of doing the intervention on participants’ already strong self-perception as readers.  180  CHAPTER 5 Discussion This study was designed to build on the limited repeated reading research relating to students with visual impairments, exclusively targeting braille-reading students demonstrating oral reading fluency (ORF) challenges in the critical early grades (i.e., primary grades one, two, and three). 
The study addressed three experimental and three descriptive research questions about the efficacy of this intervention to improve ORF, operationalized as correct words per minute (CWPM), errors, and comprehension. The study also addressed social validity from the perspective of teachers of students with visual impairments (TVIs), and the effects of the intervention on participants’ self-perception as readers.

Summary of Results

A nonconcurrent, multiple baseline design across two cohorts of four participants (for a total of eight participants) was used to experimentally determine the effects of the intervention. The results indicated no functional relationship between the implementation of a repeated reading intervention and changes in ORF or comprehension. The results suggest that the intervention was socially valid from the perspective of TVIs. The results also indicated that participating in the intervention did not significantly change participants’ already high self-perceptions of reading competency. A discussion of the multiple research outcomes points to the study’s unique contributions to the literature, cautions and limitations, and areas for further research.

Correct Words per Minute (CWPM)

Experimental control was evaluated by looking for improvements in CWPM in terms of level and trend at the point of intervention for at least three of four participants in each cohort. The multiple baseline design across four students, repeated with a second cohort of four students, showed moderate to no changes in CWPM from baseline to intervention. Experimental control was not demonstrated for either cohort. The four participants in the first cohort demonstrated an average of 12.8 CWPM in baseline (range, 2 to 30). This average remained the same during intervention (range, 4 to 21). The four participants in the second cohort demonstrated an average of 24.5 CWPM in baseline (range, 15 to 38).
During intervention, this increased to an average of 30.8 (range, 23 to 39). Ultimately, any documented gains in CWPM notwithstanding, at the conclusion of the study all participants’ CWPM rates (derived from the mean of the last three first, “cold” read scores during intervention) remained either within the “at risk” (Kelly, Kevin, Carrie, Tabitha, and Mark) or “some risk” (John, Linda, and Tom) levels for the spring term according to Dynamic Indicators of Basic Early Literacy Skills (DIBELS) criteria.

Results for CWPM also were compared to therapeutic criteria, that is, the extent to which any changes were therapeutic or meaningful for participants. Therapeutic criteria were based on Hasbrouck and Tindal’s (2006) expected weekly growth rates for typically sighted students undergoing only Tier I level intervention (i.e., general education classroom instruction) within a Response to Intervention (RTI) framework. A comparison of the actual gains in CWPM over the duration of the intervention to weekly predicted gains reported by Hasbrouck and Tindal indicated that the intervention did not provide therapeutically significant improvements for three of the eight participants.

Errors per Minute

Experimental control was evaluated by examining change in error rates per minute for the first (“cold”) read of instructional passages in terms of level and trend at the point of intervention for at least three of four participants in each cohort. The multiple baseline design across four students, repeated with a second cohort of four students, showed little to no changes in errors from baseline to intervention. Experimental control was not demonstrated for either cohort. Overall, with one exception (Tabitha), visual analysis showed that the participants read quite accurately during baseline and continued to do so throughout the study; there was little room for improvement with respect to error rates.
Error rates were within instructional or mastery levels as reported by Fuchs and Fuchs (1982), and largely unaffected by changes in CWPM or passage type.

Oral Retell Fluency (Comprehension)

Experimental control was evaluated by examining change in oral retell fluency rates for the first, “cold” read of instructional passages in terms of level and trend at the point of intervention for at least three of four participants in each cohort. The multiple baseline design across four students, repeated with a second cohort of four students, showed few to no changes in oral retell fluency scores from baseline to intervention. Experimental control was not demonstrated for either cohort. Visual analysis of all results for both cohorts suggests that oral retell fluency scores either did not improve or decreased during the intervention phase. Overall comprehension rates, while variable across participants and passages, maintained reasonably acceptable levels (Good et al.,
In regard to comprehension, visual analysis of oral retell fluency for LCO passages for both cohorts suggests that oral retell fluency did not improve and/or decreased somewhat during the intervention phase. The equivocal findings for CWPM and the few to no improvements in comprehension that accompanied these results, again suggest that there may be a link between CWPM and comprehension for braille readers. Overall comprehension rates for LCO passages, although variable across participants, phases, and passages, maintained reasonably high levels (Good et al., 2002). Analysis ofa Functional Relationship between the Dependent Variables and the Repeated Reading Intervention Three experimental questions were addressed by this study to determine whether a functional relationship existed between the dependent variables and the repeated  184  reading intervention. Experimental control was not demonstrated for any of the dependent variables of CWPM, errors, or oral retell fluency. An analysis of the data clearly indicated that a functional relationship was not demonstrated between the implementation of a repeated reading intervention and an improvement in ORF, operationalized as CWPM and errors per minute, for braille-reading students in grades one, two, and three. The data also failed to demonstrate a functional relationship between the implementation of a repeated reading intervention and an improvement in comprehension, operationalized as oral retell fluency, for these same students. Further, no functional relationship was demonstrated between the implementation of the repeated reading intervention and gains in untrained, LCO (progress monitoring) passages. 
CWPM for the Third Read of Instructional Passages and High Content Overlap (HCO) Passages

The study investigated whether any gains in CWPM during the repeated reading intervention were associated with generalized improvements in untrained, High Content Overlap (HCO) passages by looking for improvements in terms of level and trend for CWPM for HCO passages. From a descriptive standpoint, visual analysis of the results for all eight participants indicated consistently strong within-session improvements for CWPM from the first to the third read of instructional passages. Additionally, these participants demonstrated higher CWPM scores for HCO passages than for the first reading of instructional passages in intervention.

Errors for the Third Read of Instructional Passages and High Content Overlap (HCO) Passages

From a descriptive standpoint, visual analysis of the results for the eight participants indicates low error rates for the third read of instructional passages and HCO passages. The results indicated that participants were generally able to reduce their errors on instructional passages that were reread three times. Additionally, participants demonstrated lower error rates for HCO passages than for the first reading of instructional passages in intervention.

Analysis of a Functional Relationship between the Dependent Variables and the Repeated Reading Intervention

Three descriptive questions were addressed by this study as part of the investigation of the repeated reading intervention. The data clearly indicated that gains in CWPM during the repeated reading intervention were associated with gains in HCO passages. These gains were demonstrated across all participants.

Social Validity

The study investigated whether the repeated reading intervention and its implementation process were socially valid from the point of view of TVIs. They were asked to complete the same social validity questionnaire early in the study and at the conclusion of the intervention.
At both junctures, they reported strong social validity for the goals, procedures, and outcomes of the intervention. As TVIs already reported high social validity ratings after completing one week of the intervention, completing the intervention had no significant effect on the social validity data.

Participants’ Self-perception as Readers

The study investigated whether undergoing the repeated reading intervention changed participants’ self-perception as readers. Participants were asked to complete the same questionnaire during baseline and at the conclusion of the intervention. At both times, they indicated high levels of self-efficacy as readers. As participants already had high ratings, doing the intervention had no significant effect on participants’ self-perception as readers.

Findings in Relation to the Literature

The study explicitly links and demonstrates the relevance of the Response to Intervention (RTI) paradigm to braille-reading primary students, specifically the emphasis within RTI upon the need to design and empirically validate reading interventions specific to the nature of different reading difficulties (Wanzek & Vaughn, 2007). Further, this study extends the use of the Instructional Hierarchy (IH; Haring et al., 1978) in applying it to a new target student population, primary braille readers. As described previously, the IH is a “conceptual framework for guiding intervention selection when remediation is needed” (McCurdy et al., 2007, p. 342) such that interventions are well suited to student learning needs. The IH espouses a stage-based conceptual framework to help identify the most potentially effective intervention and/or intervention components depending on the students’ level of skill development (Daly, Lentz, et al., 1996).
Matching interventions to students’ specific skill level and the nature of their deficits has been empirically validated by many researchers for students without visual impairments (Daly, Lentz, et al., 1996; Daly, Chafouleas, et al., 2005). Hence, the IH informed the design and selection of the repeated reading intervention used in this study. Accordingly, the repeated reading intervention was selected because it is primarily a skill-based intervention which, in keeping with the IH and reading research, was expected to enhance reading speed (i.e., CWPM) for participants who were accurate, yet slow readers. Given the equivocal nature of the results and the fact that the repeated reading intervention has yet to be empirically validated for braille readers, additional research is required to explore the utility of the IH more fully. Based on the findings in this study, it is currently unclear whether the IH can be used effectively to design and select interventions. Analysis of participants’ results for errors may lend some limited, tentative support for the relevance of the IH for braille readers. The IH stipulates the need for interventions well matched to student needs, and the repeated reading intervention may not have been the best fit for Tabitha, who demonstrated an acquisition deficit early on, with accompanying ORF challenges (i.e., volatile error rates). As a result, in terms of her error rates, she did not benefit as much from the skill-based repeated reading intervention relative to most other participants, who were more accurate at the beginning of the study. Based on her screening results for ORF, Tabitha began the study at an initial, acquisition stage of learning to read; her errors often outnumbered her words read correctly. She fell within the “at risk” range for current and future reading problems based on DIBELS mid-year (winter) ORF benchmarks (median score of 1.5 CWPM; median errors of 6).
Further, because she spelled out words as she encountered them, she needed consistent prompting to sound them out and frequent reminders to then say the words in their entirety. Despite the skill-based nature of the intervention and eligibility requirements that accordingly emphasized accuracy, Tabitha was included in the study because chronic absenteeism during kindergarten and the first half of grade one was perceived by her school team as the primary reason behind her ORF challenges. It was believed that, while the intervention highlighted both accuracy and speed in fluency development, participation could still be beneficial because the intervention involved specific expectations, sustained, intensive reading practice, immediate error correction, and consistent, scripted feedback, in addition to the novel opportunity to hear herself later on tape. Further, it was hoped, and later reported by her TVI, that her enthusiasm for participating in this study would effect dramatic improvements in her attendance and desire to learn to read. The results show that Tabitha’s error rates were variable and high relative to her total CWPM, and that she was unable to show a meaningful reduction in errors during the course of the six weeks of intervention. In keeping with the IH paradigm (Haring et al., 1978), the volatility of her error data reflects her current status as a new reader, in the early, acquisition stages of learning to read (i.e., compromised accuracy and speed). Given her current level of skill, she was perhaps not yet ready to take full advantage of an intervention primarily designed for students at a higher level of proficiency who are accurate, yet need help building automaticity or speed. The other participants, in contrast, demonstrated high word accuracy at the onset of the study and, as in other studies (Pattillo et al., 2004; Rashotte & Torgesen, 1985), maintained high word accuracy for the duration of the study.
However, it is important to note that, overall, participants did not significantly improve their CWPM as a result of the intervention, which would be anticipated for typically sighted students based on the IH. Further investigation is required. Nonetheless, at the conclusion of the study, Tabitha anecdotally reported her enjoyment of the reading process, her pride at the improvements she had made, and the increased sense of self-efficacy she would carry onwards as she continues to develop as a reader. Her teacher of students with visual impairments (TVI) commented throughout the study about Tabitha’s continued ORF development, her new-found ability to sustain her focus while reading, and improvements in her hand movements (e.g., increasing bilateral hand use, locating the title more independently, etc.). At the conclusion of the study, however, Tabitha’s CWPM scores continued to fall within the “at risk” range based on DIBELS criteria for winter and spring.

A conservative approach to the experimental analysis of repeated reading. In investigating the efficacy of the repeated reading intervention, this study takes a conservative approach to the experimental analysis of the impact of the intervention on CWPM. Consistent with this conservative approach, the effect of the intervention was evaluated primarily upon evidence of changes to the initial readings for untrained, novel LCO passages across participants and phases. Specifically, experimental control in this study hinged upon seeing evidence of improvements in CWPM for “cold” or initial reads in terms of level and trend at the point of intervention for at least three of four participants in each cohort. The study also investigated CWPM for the DIBELS, LCO progress monitoring passages across phases.
The study relied upon visual analysis within a single subject experimental design to detect clear, meaningful changes in CWPM that would identify an “effective, robust technology of behavior change” (Cooper et al., 2007, p. 249). As suggested by Gortmaker and colleagues (2007), generalization in terms of the current study was “conceptualized in terms of proximity to original stimulus conditions during training, [and] these forms of stimulus generalization represent two points along a continuum of conditions in which newly acquired responses may appear” (p. 205). For the purpose of this study, generalization for LCO passages, namely the first, “cold” read of instructional passages across phases and the DIBELS progress monitoring passages, was seen as a more stringent standard for judging the efficacy of the intervention because participants did not receive instruction for words within this type of text (Daly et al., 1999; Gortmaker et al., 2007; Therrien, Wickstrom, et al., 2006). Generalization to LCO passages was conceived of as reflecting the participants’ ability to demonstrate gains beyond recognition of trained words to other words that may, for example, “come from a stimulus set that is functionally equivalent (e.g., along the lines of difficulty level, frequency of usage in curricula, or predictable phonetic properties)” (Gortmaker et al., p. 204). A number of studies report increases in these first, “cold” reads or “starting rates” of instructional passages (Samuels, 1979) and progress monitoring passages (Layton & Koenig, 1998; Therrien, Wickstrom, et al., 2006). Layton and Koenig reported that their participants with low vision demonstrated “steady increases” in CWPM for LCO, progress monitoring passages (e.g., a basal reader). However, interpretation of these results is made somewhat difficult as no LCO data were obtained during baseline or follow-up to compare with intervention LCO data.
Therrien and colleagues reported statistically and therapeutically significant improvements for CWPM on DIBELS LCO generalization passages as a result of undergoing a combined repeated reading and question generation intervention. Their participants (in grades four to eight) demonstrated a gain of 13 CWPM over a 16-week intervention, exceeding expected growth rates for their grades and those attained by the control group (i.e., 2.3 CWPM over the course of the intervention). Although they evaluated the success of the repeated reading intervention based primarily upon the within-session gains in CWPM, Layton and Koenig (1998) and Pattillo et al. (2004) also reported mean improvements in CWPM for first, “cold” reads from baseline to the final three to four assessments in intervention of 20% (range, 7 to 37%) and 41.8% (range, 19.2 to 57.8%), respectively. The sole braille-reading participant (age 12, grade 7) to have been included in a previous repeated reading study (i.e., Pattillo et al.) demonstrated a mean gain over 12 sessions (spanning what appeared to be approximately 4 to 5 weeks) of 14.3%, or 5 CWPM, for first, “cold” reads from baseline (M = 35) to the last changing criterion phase during intervention (M = 40). The current study reported comparable levels of mean improvement for first, “cold” reads, despite ultimately determining the absence of a functional relationship between the implementation of the intervention and changes in CWPM for the first, “cold” reads and a lack of improvement in CWPM for the DIBELS, LCO progress monitoring passages. This study evidenced overall mean improvements in CWPM for first, “cold” reads from baseline to the final three to four assessments in intervention of 50.6% (range, -24.5 to 200%).
As previously mentioned, other studies have evaluated the success of the repeated reading intervention largely in terms of within-session improvements in CWPM, in keeping with a changing criterion design (Layton & Koenig, 1998; Pattillo et al., 2004). For example, Pattillo and colleagues indicated that their study “did not aim to demonstrate progress with initial reading rates” (p. 44) for their five participants with visual impairments (in grades six to eight). Instead, these researchers reported a functional relationship between their modified repeated reading approach (i.e., repeated reading and a version of listening passage preview) and improvements in participants’ within-session CWPM rates that met changing criteria for CWPM. Layton and Koenig also reported this same functional relationship based on within-session improvements derived from participants’ re-reading the same story up to eight times in order to track the stepwise changes in criterion. The current study joins the already numerous studies reporting strong within-session improvements for each successive rereading of text (Levy et al., 1997; Meyer & Felton, 1999; O’Shea et al., 1985; Rashotte & Torgesen, 1985; Samuels, 1979; Sindelar et al., 1990). The current study did not emphasize changes in HCO passages because CWPM for HCO passages was not assessed during the baseline phase, and any interpretation regarding the generalization of reading ability to HCO passages is necessarily constrained. However, it may be valuable to note that strong, within-session gains (i.e., beyond mean scores for the first, “cold” read in baseline and intervention) were demonstrated consistently by all participants. Further, these gains were in keeping with those described in other studies that did ascribe importance to them.
This evidence may suggest that the intervention shows promise and warrants additional research and investigation because it appeared to operate similarly for primary braille readers as with other participants with or without visual impairments. However, variations in methods of reporting within-session improvements for CWPM (e.g., providing no raw scores for each re-reading, only percentage change) and in the nature of the intervention (e.g., instructional components such as the number of re-readings per session; type of single subject design) make it very difficult to compare this study’s within-session growth directly to that reported by other repeated reading studies. Pattillo and colleagues (2004), for example, reported their gains as a percentage of within-session improvement in CWPM between the mean of scores exceeding the specified criterion in the final phase (three to four assessments) over the baseline mean CWPM. During the last phase of the intervention, they reported that their braille-reading participant demonstrated a 94% improvement between her mean rate above criterion during the final phase (68 CWPM) and her mean baseline first, “cold” read (M = 35). The participants in the study by Pattillo and colleagues evidenced an overall mean improvement for CWPM of 59.8% (range, 37 to 94%) between their mean rate above criterion during the final phase and their mean baseline first, “cold” read. Based on the aforementioned way that these researchers determined percentage of mean gain, participants in the current study demonstrated an overall mean improvement for CWPM of 145.2% (range, 32.2 to 316.7%) between the mean first, “cold” reads in baseline and the mean of the third read for the final three assessments during intervention.
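The percentage-of-mean-gain comparisons above all reduce to the same computation: the difference between an intervention-phase mean and a baseline mean, expressed as a percentage of the baseline mean. A small illustrative sketch (the function name is my own):

```python
def percent_gain(baseline_mean, final_mean):
    """Percentage improvement of a final-phase mean CWPM over a baseline mean."""
    return (final_mean - baseline_mean) / baseline_mean * 100.0

# Pattillo et al.'s (2004) braille-reading participant: mean rate above
# criterion in the final phase (68 CWPM) vs. mean baseline "cold" read (M = 35):
round(percent_gain(35, 68))     # 94, the reported 94% improvement

# The same participant's first, "cold" reads: baseline M = 35 vs. final M = 40:
round(percent_gain(35, 40), 1)  # 14.3, the reported 14.3% (or 5 CWPM) gain
```

Because the denominator is always the baseline mean, very low starting rates can yield large percentage gains from small absolute improvements, which is one reason the current study's 145.2% mean improvement is not directly comparable across participants or studies.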
Still other studies have evaluated the success of the repeated reading intervention by looking, in part, at improvements (“transfer”) in CWPM evidenced for HCO passages compared to first, “cold” reads in intervention (Gortmaker et al., 2007). These results for generalization of improvement to HCO text are reflected in the repeated reading literature for typically sighted students (Daly et al., 1998; Daly, Persampieri, et al., 2000; Gortmaker et al.). In the repeated reading literature, such generalization gains have been deemed evidence of “overall improvement over the course of the investigation” (Glazer, 2007, p. 46). CWPM for HCO passages is regarded as relatively stronger evidence of progress than within-session improvements in CWPM (Ardoin et al., 2007). All participants in the current study demonstrated a consistent improvement for CWPM for HCO passages compared to their mean CWPM for the first, or “cold”, read in both baseline and intervention. For the purpose of this study, however, improvements from reading untrained, HCO passages were seen as a less rigorous demonstration of fluency gains because participants were taught most of the words within the training or intervention condition (Gortmaker et al.).

Factors involved in the transfer of CWPM gains to novel text. According to Daly, Chafouleas, and colleagues’ (2005) review of repeated reading studies, however, “there is no guarantee that generalized improvements will be observed” (p. 94) on the aforementioned types of text. For example, as in previous research (Therrien & Hughes, 2008), improvements from the rereading of the same text in this study were not associated with clear or stable improvements in CWPM in novel, LCO passages. Meyer and Felton (1999) and numerous others argue that it is complicated to determine which factors improve the chances that within-session gains from repeated reading will transfer to untrained passages.
A number of factors have been advanced from a theoretical and empirical standpoint to account for the degree of “transfer” (Meyer & Felton), and several of these factors are considered below. First, CWPM rates improve incrementally and slowly over time (Chard et al., 2002; Hasbrouck & Tindal, 1992; Meyer & Felton, 1999). When ORF challenges are skill based (rather than performance based) in nature, the skill of quick and accurate reading, even with intensive doses of intervention, takes some time to develop. Concerns regarding dose and response have been raised in other studies (Pattillo et al., 2004; Rashotte & Torgesen, 1985; Therrien & Hughes, 2008; Vadasy & Sanders, 2008). Pattillo et al. voiced concerns about the short duration of their study (an average of 12 intervention sessions, conducted on every other school day) and how this “dose” may have negatively affected their results for ORF. Therrien and Hughes hypothesized that the failure of their participants to transfer within-session gains to new text may have been attributable to the two-week duration (for a total of 5 sessions) of their study. The repeated reading research lacks consensus regarding the optimal intensity (e.g., duration, frequency, and assessment session length; student-teacher ratio) needed to carry out the intervention (Chard et al., 2002; Daly, Martens, Barnett, Witt, & Olson, 2007; Fuchs & Fuchs, 2007; Meyer & Felton, 1999). For example, in a list of repeated reading studies compiled by Meyer and Felton, the length of the intervention ranged from 1 to 54 sessions. An example of the range in dosages within the repeated reading literature can be seen by comparing the seminal work by Rashotte and Torgesen (1985), which involved seven days of repeated reading intervention (for a total of seven 15-minute sessions), to a more recent, well-received study by Therrien, Wickstrom, and colleagues (2006), which spanned four months (for a total of fifty 10- to 15-minute sessions).
Both studies reported that their intervention effected desired improvements in CWPM. There is some agreement within the literature, however, that the intervention’s effectiveness is largely derived from “the practice time it affords,” and thus, it “can help to correct situations in which there is a lack of sufficient practice, which may be the greatest weakness of many reading curricula” (Daly, Chafouleas, et al., 2005, p. 89). Issues of dose-response come into play with this study, as the intervention was in place for a period of three to seven weeks, depending on the end of the school year. Additionally, intervention sessions in the current study took about 30 to 40 minutes to complete, but involved only 8 to 10 minutes of focused reading each time. Further, due to scheduling conflicts and availability of the TVIs, an average of three assessment sessions occurred each week, but the range was between two and four sessions in any given week. Second, text characteristics have been shown to mediate fluency gains, specifically whether a functional relationship can be demonstrated between repeated reading and an improvement in CWPM for novel text (Vadasy & Sanders, 2008). Two important text characteristics include the level of passage difficulty and word overlap or frequency. O’Connor and colleagues (2002) and others have reported that participants undergoing the intervention demonstrated greater fluency gains using instructional, rather than grade-level, reading materials. Beyond the general recommendation to use text at an instructional level of difficulty for interventions (Vadasy & Sanders), the literature is inconclusive regarding the nature of optimal reading materials (e.g., length of sentences or passages; Therrien, 2004; Therrien & Hughes, 2008). As stated by Therrien and Hughes, “This is unfortunate, as the effectiveness of any reading intervention may hinge on the material used” (p. 11).
Word frequency, another key textual feature, has implications for the efficacy of repeated reading (Martin-Chang & Levy, 2005; Rashotte & Torgesen, 1985). Rashotte and Torgesen, Martin-Chang and Levy, and others argue that word overlap is essential to the successful transfer of within-session fluency gains to novel passages. As Gortmaker and colleagues (2007) and others have suggested, “one might expect limited generalization of repeated reading to materials that are not similar in words and/or content” (p. 56; see also Therrien, 2004). Results for the current study indicated that gains for CWPM were maximized when the number of familiar words was maximized. Gains for within-session and HCO passages far exceeded those associated with LCO passages. These types of results suggest that, within the parameters of the study (e.g., dosage), the repeated reading conferred “specific processing advantages at the word level” (Martin-Chang & Levy, p. 367) that facilitated the automatization of word reading (LaBerge & Samuels, 1974; Martin-Chang & Levy). The transfer of these advantages was largely contingent upon the degree of word overlap between passages (Rashotte & Torgesen; Therrien & Kubina, 2007). However, the analysis of these findings is made more complicated because the DIBELS LCO passages were more difficult than all other passages according to the Spache readability analysis. Consequently, additional research is required to determine the generalizability of CWPM within-session gains to LCO text for braille readers. Third, variations in the conceptualization and implementation of the intervention itself have been shown to affect the transfer of gains made by typically sighted students to novel passages (Therrien, 2004).
The nature of the repeated reading intervention adopted in this study was parsimonious and reflective of both the logistical constraints characterizing the study and the demands placed upon the TVIs who ultimately conducted the intervention. Accordingly, the intervention included many, but not all, of the most empirically validated components identified in meta-analyses (Chard et al., 2002; Therrien), and largely the same multi-component procedure was followed across passages and phases in an attempt to enhance treatment integrity. For example, this study’s intervention involved reading to an adult, the use of a “combined speed and comprehension cue” (Therrien, p. 257) prior to (re)reading each of the passages, and three rereadings. However, the study’s intervention did not include other well-validated components or procedures such as adult modeling (listening passage preview) or phrase drill error correction (Begeny et al., 2006). For example, the study’s provision of corrective feedback was limited to telling the participant the word if he or she hesitated for more than six seconds. Further, the current study did not require participants to repeatedly read until they reached a performance criterion, another facet of the intervention that has been shown to be quite effective (Therrien, 2004) and compatible with a changing criterion design (Layton & Koenig, 1998; Pattillo et al., 2004). In the current study, the number of readings was preset at three, and participants were encouraged to “beat” their previous scores on each of the three rereadings. The study adopted a nonconcurrent multiple baseline design, rather than a changing criterion design, for the experimental analyses of this relatively parsimonious, user-friendly intervention. The latter design would have been difficult to implement remotely, as the investigator would have needed to wait until the session tapes had been received and reviewed to determine if criterion was met.
Additionally, criterion levels are often set based on consultation with the participants and their teachers (Layton & Koenig), which would have been relatively problematic from a distance. Moreover, attempts to reach criterion can take up to seven or eight rereadings (Layton & Koenig), but the literature indicates that rereading text more than three or four times confers little additional benefit for CWPM (Therrien).

Repeated reading and comprehension. This study extends the repeated reading research to include braille readers in examining the efficacy of the repeated reading intervention to improve comprehension, the ultimate goal of reading (Durkin, 1993). Literature reviews (Faulkner & Levy, 1994; Therrien, 2004) provide support for the association of repeated reading gains with the enhancement of comprehension; however, the research results are often equivocal. Martin-Chang and Levy (2005) and others argue that improved reading rates associated with repeated reading may lead to “more efficient text processing” (p. 369); however, these gains do not necessarily lead to commensurate gains in text interpretation. As previously mentioned, the intervention used in this study involved cueing participants to read for comprehension, as recommended in the literature (Therrien; Meyer & Felton, 1999). Overall, however, the study’s findings indicated that the intervention failed to enhance (or erode) comprehension for braille-reading participants. Further research is needed; however, these results suggest that the intervention did not improve CWPM enough to free up sufficient cognitive resources for comprehension to be enhanced. These results are in keeping with those found by a number of other repeated reading studies (Martin-Chang & Levy; Therrien & Hughes, 2008).
The research investigating the reasons behind a lack of improvement in comprehension is, thus far, inconclusive and reflects the complex relationship between fluency and comprehension (Meyer & Felton, 1999). Markell and Deno (1997) report that small improvements in CWPM (i.e., less than a 15 or 20 word improvement) for first, “cold” reads are likely insufficient to bring about enhanced comprehension. Hence, as Markell and Deno explain, given expected weekly rates (e.g., as suggested by Hasbrouck and Tindal, 2006), improvements in comprehension may not be seen until the reading intervention has been in place for 10 to 20 weeks. The aforementioned 16-week study by Therrien, Wickstrom, and colleagues (2006), for example, reported both improvements in reading fluency that surpassed expected weekly growth rates advanced by Deno, Fuchs, Marston, and Shin (2001) and significant improvements in measures of inferential comprehension. From a theoretical standpoint then, while lower level processing need not necessarily be fully automatized (Stanovich, 1986, 2000), improvements in these lower level processes via repeated reading (e.g., more automatized decoding) must be sufficiently large to free up enough cognitive resources to facilitate measurable improvements in higher level processing (i.e., comprehension) (LaBerge & Samuels, 1974). However, the interpretation of this study’s results for comprehension is complicated because of the way in which comprehension was operationalized as oral retell fluency. Comprehension was measured as oral retell fluency for all passages and phases to maximize simplicity so as to enhance treatment integrity. However, because oral retell fluency reflects the percentage of content words retold, participants’ level of comprehension during baseline may have been overestimated, particularly for the slowest readers. For example, Tabitha demonstrated very low CWPM scores in baseline (M = 2, range, 0.5 to 2).
During intervention, a slightly increasing trend for CWPM was evidenced, with an increase to an average of 4 CWPM (range, 0.5 to 7.5). Although her CWPM rates were low across phases, it may have been easier to remember more of the up to two content words read during baseline than the 7.5 read during intervention.

Unique Contributions and Clinical Implications

This study makes a number of unique contributions to the literature. The contributions, and associated clinical implications, are discussed in turn.

The relevance of RTI and IH to primary braille readers. The study strongly links the Response to Intervention (RTI) paradigm to primary braille readers, particularly with respect to RTI’s emphasis upon and conceptualization of empirically validated interventions (Wanzek & Vaughn, 2007). Further, this study helps initiate the discussion regarding the potential applicability of the Instructional Hierarchy (IH; Haring et al., 1978) to inform the design and selection of interventions within a RTI framework to match braille readers’ academic needs. Consistent with the tenets of the IH, the repeated reading intervention was chosen in an attempt to address issues of poor reading speed, because it is primarily a skill-based intervention. As such, the current study extends research on the repeated reading intervention by applying it exclusively to primary students who read braille. This study addressed three experimental and three descriptive questions regarding the efficacy of a repeated reading intervention for improving reading proficiency for primary braille-reading students. The intervention featured fluency-enhancing procedures empirically validated for typically sighted students, adapted to reflect the unique needs of braille readers struggling with fluency.
The potential utility of the IH and experimental analysis for struggling braille readers, coupled with the strong degree of procedural fidelity in this study, suggests a very clear role for school psychologists to play in meeting the diverse needs of braille readers. The investigator had expertise in both visual impairment and school psychology, which facilitated her ability to undertake this study. However, school psychologists without this dual training who partnered with TVIs, who have specific expertise with respect to visual impairment and braille, would be well positioned to guide data-based decision making and facilitate capacity building and knowledge transfer at a local, provincial, state, or national level. Based on the results of this study, it appears that school psychologists could provide this type of data collection and evaluation even from a distance, assuming some procedural safeguards (e.g., audiotaped reading sessions). Given the general lack of norms for students with visual impairments, school psychologists could collect ORF data from TVIs in order to begin to develop local and national norms or ranges of expectation for ORF development and student responsiveness to the repeated reading intervention, particularly in the critical primary years.

Repeated reading and primary braille readers. This study serves as the first attempt to evaluate the efficacy of repeated reading for primary braille readers and contributes information about this intervention previously unknown in the literature. In investigating the efficacy of the repeated reading intervention, this study demonstrated the merits of adopting a conservative approach to the experimental analysis of the impact of the intervention on CWPM in an effort to find a clearly effective reading intervention.
In keeping with this conservative approach, the effect of the intervention (i.e., generalization or learning) was evaluated largely upon evidence of changes to the first, “cold” readings of untrained LCO passages across participants and phases. No experimental control was demonstrated for any of the dependent variables, and all participants’ CWPM rates remained either within the “at risk” (Kelly, Kevin, Carrie, Tabitha, and Mark) or “some risk” (John, Linda, and Tom) levels at the end of the school year according to DIBELS criteria.

There are several clinical implications of these findings for TVIs, school psychologists, and other stakeholders. The primary implication is that while the method of repeated reading may be easy to use and appears to be well liked by TVIs, it may fail to improve ORF or comprehension, particularly as conceptualized and implemented in this study. Within-session gains did not transfer to LCO passages. Further, results of the current study suggest that, within the study’s time frame, the intervention may require data-driven modification after two to three weeks of implementation.

The results of this study thereby provide guidance for further applied research on repeated reading techniques. Greater improvements in CWPM may have been possible if the intervention had been administered more intensively and for a longer time period. Additionally, despite its popularity, repeated reading has yet to be standardized (Vadasy & Saunders, 2008). Hence, efficacy might have been improved had the intervention involved a different combination of components, such as phrase drill error correction. Although the intervention included a selection of the most promising aspects of the intervention for typically sighted readers, the equivocal and idiosyncratic results, particularly as the intervention progressed beyond a few weeks, suggest that the intervention may need further tailoring for braille readers. Additional research is clearly needed.
A model for conducting single-subject research remotely. The study makes another contribution to the literature in that it successfully employed a model of conducting single-subject, repeated reading research remotely. The study procedures were designed to maximize treatment integrity, as undertaking this type of study remotely, across two countries, is a daunting task, particularly in terms of procedural fidelity. It was assumed that TVIs and other staff could be effective interventionists with good training, and results for treatment integrity were favorable. Fidelity of implementation was facilitated through structured training sessions over the telephone, which involved “verbal instructions, a scripted protocol, practice, feedback, and reinforcement” (Gortmaker, 2006, p. 112). TVIs audiotaped all of their reading assessment sessions and checked off the steps outlined in the detailed treatment integrity checklists. Both the tapes and the checklists were sent to the investigator, which may have also enhanced the quality of treatment integrity. The success of this model of research may be welcome news for the chronically under-funded field of visual impairments (Ferrell et al., 2006) because of the fiscal challenges and logistical barriers intrinsic to the research of such low-incidence, geographically dispersed populations.

High rates of treatment integrity speak to the general acceptability of the intervention (Ehrhardt et al., 1996; Gresham & Lopez, 1996), as the results suggest that ORF and the intervention are socially valid from the perspective of TVIs. This is an important finding because, for example, treatment acceptability has been empirically shown to influence the extent to which an intervention is used, and used with procedural fidelity (Ehrhardt et al.).
It was hoped that the intervention would be user-friendly and parsimonious in order to enhance treatment integrity, the chances of its use extraexperimentally, and the probability of spurring more research about repeated reading for braille readers. The use of two social validity assessments for TVIs helped gauge and ensure early and post-treatment acceptability and perceived effectiveness (Gortmaker et al., 2007). Acceptability ratings for these two social validity assessments were high at the beginning of the intervention and following the conclusion of the study. These ratings suggest that the TVIs generally liked the intervention and believed it was a worthwhile intervention to employ. Further, it is noteworthy that there was no TVI or participant attrition in this study, to the benefit of social validity. At their request, two of the TVIs continued on with the study beyond the end of the school year (up to the end of June) in an effort to complete a follow-up phase. Many of the TVIs indicated their intention to use the intervention again to try to improve their students’ ORF skill development. All but one TVI asked to keep the timer, stopwatch, and other materials provided to them for the study.

Conducting research in natural settings affords additional benefits, namely the potential to maximize and optimize the ecological validity of the intervention and capacity building. For example, the study took place in the participants’ respective schools with their TVIs. This type of training has the potential to enhance capacity building, helping to develop staff who can act more effectively as “intervention agents” (Gortmaker, 2006) for struggling braille readers. The research model may also make it feasible for other professionals, such as school psychologists, to play an active role in conducting assessment and progress monitoring, training, and research that serves these students and their TVIs in distant locales.
School psychologists, and other stakeholders, could, for example, collect assessment data remotely in keeping with new demands for accountability regarding student reading progress (Hasbrouck & Tindal, 2006; NCLB, 2001).

Limitations and Future Directions

Although the results of this study provide some early guidance for further applied research on repeated reading, several salient limitations of the study warrant discussion. For example, limitations were inherent in the nature of the materials used, as this study did not investigate the impact of text difficulty on ORF and comprehension. Although the impact of text difficulty on the effectiveness of repeated reading has received attention in the literature (Faulkner & Levy, 1994; Kuhn & Stahl, 2003; Therrien, 2004), the findings are inconclusive. A readability analysis was undertaken for each passage in an effort to control passage difficulty, as done in other studies (Fuchs & Deno, 1992). However, readability analysis formulae, such as that associated with Spache’s (1974) analysis, are under increasing scrutiny (Ardoin, Suldo, Witt, Aldrich, & McDonald, 2005). Additional research regarding passage difficulty in general, and specifically with respect to braille readers, is needed.

The issue of determining the impact of text difficulty during a repeated reading intervention gains a dimension, and arguably becomes more challenging, when the materials are brailled. Text may be more or less difficult for braille readers depending on the complexity and familiarity of the braille contractions included in the text. For example, reading materials such as the DIBELS passages used for this study were adapted only by transcribing them directly into braille. For those three participants in this study who used contracted reading materials, the text was fully contracted, and the contractions used were not in any way controlled for level of difficulty.
Participants using uncontracted braille materials were learning contractions in their school reading programs, and it can be hypothesized that they may have been practicing uncontracted words less often as a result. During intervention, one TVI emailed the investigator wondering whether her continuous introduction of new contractions during schoolwork would make it more difficult for her student to improve speed and accuracy in uncontracted text. Anecdotally, none of the TVIs or participants who used contracted braille made any comments regarding the nature of the contractions featured in the passages (e.g., level of difficulty or novelty), but the impact of the use of contractions on the results is unknown. It may be reasonable to assume, however, that the opportunities for intense practice afforded by repeated reading would be equally available and beneficial regardless of whether students were repeatedly reading a contextualized word in its contracted or uncontracted form. Further, as the decision regarding the impact of the intervention was based on multiple data points, some degree of variability among probes “is allowable” (Ardoin et al., 2005, p. 15).

Limitations associated with the research design and procedures. Another limitation of the current study stems from the way in which the HCO passages were included in the study. Most of the current studies that feature repeated reading include HCO passages as a way of investigating a more robust form of generalization (Daly et al., 1999). As in this study, HCO passages are administered following the third rereading of the instructional passage. For the purpose of this study, HCO passages were not administered in baseline because they would effectively introduce aspects of the repeated reading intervention into that phase, as they overlap considerably with the baseline instructional passages.
However, conclusions regarding the meaningfulness of CWPM results for HCO passages in intervention may have been clearer had these passages also been administered in baseline. Future studies may consider evaluating HCO passages during baseline.

Limitations also were inherent in the nature of the procedures. Although procedural integrity was high, conducting a study via email and phone was not without its challenges. For example, there were delays due to problems with the mail. Two tapes were lost in the mail, and treatment integrity for these reading assessments could not be verified. Delays may have posed threats to internal validity, such as history and maturation, and so, given the time-sensitive nature of the intervention, costs were incurred to expedite braille and print materials and equipment quickly to TVIs.

Issues of intervention dosage. The research is inconclusive regarding optimal dosages of the repeated reading intervention, yet the dosage of the intervention received by this study’s participants likely represents another limitation of the current study. The length of the intervention phase was largely determined by the onset of summer holidays. Further, TVIs faced many competing demands for their valuable time and were, therefore, unable to conduct the intervention on a daily basis.

Conceptualizing the participants’ response to the intervention. Given that the repeated reading intervention has yet to be empirically validated for braille readers, interpretation of participants’ responses to this intervention, in terms of both ORF and comprehension, is limited. The intervention was conceived within an RTI paradigm, and for typically sighted students, when the primary index of reading competency, CWPM, is low, the repeated reading intervention is often administered as a first line of prevention.
This process is based on a general consensus that repeated reading for typically sighted students is an effective, empirically validated intervention, well matched to students evidencing fluency concerns. When typically sighted students fail to respond to this intervention, they are often considered non-responders, and their non-responsiveness is cause for concern and a signal to initiate another level of intervention intensity (Fuchs & Fuchs, 2007). The extent to which braille-reading participants in this study responded to this intervention cannot yet be conceptualized in the same manner. Extensive research is needed to investigate the parameters of the most effective reading strategies for braille readers such that a braille reader’s nonresponsiveness may be interpreted as a valid signal of inadequate response to intervention (Vaughn & Fuchs, 2003) and guide performance-enhancing programming accordingly (Glover & DiPerna, 2007; Gortmaker et al., 2007).

Unfortunately, logistical and financial challenges to conducting reading intervention research with braille readers abound, yet additional research is required to empirically validate the repeated reading intervention. The present study extends support for the usefulness of single-subject research methodology for continuing this investigation. The use of single-subject designs is perhaps uniquely well suited to the investigation of effective reading interventions, particularly for the highly heterogeneous, low-incidence population of braille readers. As this study is the first of its kind in its exclusive focus on braille readers and the repeated reading intervention, replication is needed to enhance the external validity of its results (Horner et al., 2005).
The “boundaries” or generality of the repeated reading intervention should be established “through systematic replication of effects across multiple studies conducted in multiple locations and across multiple researchers” (Horner et al., p. 171).

Future research. To be considered an evidence-based practice, Horner and colleagues (2005) argue for the intervention to be the focus of “a minimum of five single-subject studies that meet minimally acceptable methodological criteria and document experimental control, [and] have been published in peer-reviewed journals” (p. 176). Further, Horner and colleagues recommend that these same studies should be “conducted by at least three different researchers across at least three different geographical locations” and, further, that these studies include a “total of at least 20 participants” (p. 176).

Beyond the identification and validation of interventions, single-subject methodology also has the potential to allow researchers to “test conceptual theory” (Horner et al., 2005, p. 171). Further research is required, but the results of this study suggest that the Instructional Hierarchy (IH) may warrant additional study. In keeping with the IH model, students with the most errors will benefit the least from this skill-based intervention. This highly rigorous methodology has great potential to inform and define educational practices at the level of the individual student with visual impairments (Horner et al.).

The focus of this study was on repeated reading, but this emphasis does not imply a “one-size-fits-all” approach (Deno, 1990) to reading interventions for struggling braille readers. Rather, the research suggests idiosyncratic effects of oral reading interventions across participants (Daly, Martens, Dool, & Hintze, 1998; Daly et al., 1999), in keeping with the variable effects of the intervention on the eight participants in the current study.
It appears, for example, that even when the intervention was initially effective in improving CWPM rates at therapeutic levels, it was, at times, insufficient to effect continued gains as the study progressed. Hence, it is hoped that this research will spur the development and empirical validation of a wide variety of effective reading interventions for braille readers, and that practitioners will consider tailoring these interventions specifically to individual students’ needs, for example, based on the IH.

Experimental analysis, rooted in the IH, is emerging as a promising way of finding the most parsimonious, effective, individualized intervention package for typically sighted students (Chafouleas et al., 2004; Daly & Martens, 1994; Daly et al., 1998; Daly et al., 1999; Gortmaker et al., 2007). With respect to ORF, the process involves combining and dismantling different combinations of instructional and motivational intervention components (e.g., repeated reading, adult modeling, use of rewards) to find those component(s) that effect the greatest fluency gains with maximal efficiency. The intent is to choose among empirically validated interventions in order to experimentally derive the optimum interventions for an individual student (Gortmaker, 2006). Multi-component interventions have shown great promise for typically sighted students (Daly et al., 1999; Daly, Persampieri, et al., 2005; Gortmaker et al.). Multi-component designs, however, necessitate far more research, first into the individual interventions and components thereof, and then into different intervention combinations.

Conclusion

Empirical evidence of the power of interventions to ameliorate reading problems and promote more functional levels of proficiency is growing across a wide spectrum of students. However, the field of visual impairment currently lacks a well-established research base upon which to develop evidence-based practices.
The purpose of this study was to investigate empirically whether there was a functional relationship between the implementation of the repeated reading intervention and correct words per minute (CWPM), errors, and comprehension. Given the predictive validity of early reading skills for future reading proficiency, early assessment and intervention in the primary grades are of vital importance. The stakes are particularly high for those students who are deemed “at risk” for current and future reading problems. Students who are blind and read braille may be at enhanced risk for literacy problems relating, for example, to reading speed and accuracy (Coppins & Barlow-Brown, 2006).

The study’s repeated reading intervention design was informed by the IH’s stage-based model of learning (Haring et al., 1978) such that the intervention was matched to the skill-based needs of the participants (Daly & Martens, 1994). Accordingly, the intervention drew heavily on empirically validated best practices, employing curriculum-based measurement (CBM) and user-friendly assessment materials to investigate the effects of a repeated reading intervention on oral reading fluency within a Response to Intervention framework. This study adds to the limited extant repeated reading research that shows promise for improving oral reading fluency for students with visual impairments. The study’s findings indicated an absence of experimental control for any of the dependent variables of CWPM, errors, or oral retell fluency, and these results merit caution. However, the results suggest that the use of repeated reading with primary braille readers evidencing ORF challenges warrants additional study.

The literature in visual impairment attests to the challenges faced by researchers attempting to advance the scientific knowledge in this field.
The results of this study highlight the particular utility of single-subject research methodology, promoting it as uniquely suited to investigating the effectiveness of reading interventions for braille readers. Further, this study provides one creative example of how collaboration between diverse personnel, such as school psychologists and TVIs, can translate into the development of new knowledge for these very deserving students.

REFERENCES

Adams, M. J. (1990). Beginning to read: Thinking and learning about print. Cambridge, MA: MIT Press.

Adkins, A. (2004, Spring). Advantages of uncontracted braille. SEE/HEAR, 9, 38-44.

Alber-Morgan, S. R., Ramp, E. M., Anderson, L. L., & Martin, C. M. (2007). Effects of repeated readings, error correction, and performance feedback on the fluency and comprehension of middle school students with behavior problems. Journal of Special Education, 41, 17-30.

Alberto, P. A., & Troutman, A. C. (2003). Applied behavior analysis for teachers (6th ed.). Upper Saddle River, NJ: Prentice-Hall.

Allington, R. L. (1983). Fluency: The slighted goal. Reading Teacher, 36, 556-561.

Amato, S. S. (2000). Descriptive study of standards and criteria for competence in braille literacy within teacher preparation programs in the United States and Canada. Dissertation Abstracts International, 61(09), 3518A. (UMI No. 9989267)

Ardoin, S. P., Eckert, T. L., & Cole, C. A. (2008). Promoting generalization of reading: A comparison of two fluency-based interventions for improving general education students’ oral reading rate. Journal of Behavioral Education, 17, 237-252.

Ardoin, S. P., McCall, M., & Klubnik, C. (2007). Promoting generalization of oral reading fluency: Providing drill versus practice opportunities. Journal of Behavioral Education, 16, 55-70.

Ardoin, S. P., Suldo, S. M., Witt, J. E., Aldrich, S., & McDonald, E. (2005). Accuracy of readability estimates’ prediction of CBM performance. School Psychology Quarterly, 20, 1-20.
Ashcroft, S. C. (1960). Errors in oral reading of braille at elementary grade levels. Unpublished doctoral dissertation, University of Illinois.

Baer, D. M. (1977). Perhaps it would be better not to know everything. Journal of Applied Behavior Analysis, 10, 167-172.

Barbetta, P. M., Heron, T. E., & Heward, W. L. (1993). Effects of active student response during error correction on the acquisition, follow-up, and generalization of sight words by students with developmental disabilities. Journal of Applied Behavior Analysis, 26, 111-119.

Barbetta, P. M., Heward, W. L., & Bradley, D. M. C. (1993). Relative effects of whole-word and phonetic-prompt error correction on the acquisition and follow-up of sight words by students with developmental disabilities. Journal of Applied Behavior Analysis, 26, 99-110.

Barlow, D. H., & Hersen, M. (1984). Single case experimental designs: Strategies for studying behavior change. New York: Pergamon Press.

Barlow, D. H., Nock, M. K., & Hersen, M. (2009). Single-case experimental designs: Strategies for studying behavior change (3rd ed.). Boston, MA: Allyn & Bacon.

Barlow-Brown, F., & Connelly, V. (2002). The role of letter knowledge and phonological awareness in young braille readers. Journal of Research in Reading, 25, 259-270.

Barraga, N. C., & Erin, J. N. (1992). Visual handicaps and learning (3rd ed.). Austin, TX: PRO-ED.

Begeny, J. C., Daly, E. J., III, & Valleley, R. J. (2006). Improving oral reading fluency through response opportunities: A comparison of phrase drill error correction with repeated readings. Journal of Behavioral Education, 15, 229-235.

Betts, E. A. (1946). Foundations of reading instruction. New York: American Book.

Bigelow, A. (1990). Relationship between the development of language and thought in young blind children. Journal of Visual Impairment and Blindness, 84, 414-419.

Billingsley, B. S., & Wildman, T. M. (1988). The effects of prereading activities on the comprehension of learning disabled adolescents.
Learning Disabilities Research, 4, 36-44.

Bradley, R., Danielson, L., & Doolittle, J. (2007). Responsiveness to intervention: 1997 to 2007. Teaching Exceptional Children, 39, 8-12.

Bradley-Johnson, S. (1986). Psychoeducational assessment of visually impaired and blind students. Austin, TX: Pro-Ed Publications.

Braille Authority of North America. (1994). English braille American edition. Louisville, KY: American Printing House for the Blind.

Breznitz, Z. (1987). Increasing first graders’ reading accuracy and comprehension by accelerating their reading rate. Journal of Educational Psychology, 79, 236-242.

Breznitz, Z. (1991). The beneficial effect of accelerating the reading rate of dyslexic readers on their reading comprehension. In M. Snowling & M. Thompson (Eds.), Dyslexia: Integrating theory and practice (pp. 236-244). London: Whurr Publishers.

British Columbia Ministry of Education. (2006). Special education services: Manual of policies, procedures, and guidelines. Retrieved December 1, 2008, from http://www.bced.gov.bc.ca/specialed/ppandg/toc.htm

Brown-Chidsey, R., Johnson, P., Jr., & Fernstrom, R. (2005). Comparison of grade-level controlled and literature-based maze CBM reading passages. School Psychology Review, 34, 387-394.

Caldwell, J. S. (2002). Reading assessment: A primer for teachers and tutors. New York: Guilford Press.

Canadian National Institute for the Blind. (2005). An unequal playing field: Report on the needs of people who are blind or visually impaired living in Canada. Retrieved August 31, 2007, from http://cnib.ca/en/about/publications/research/Needs%20Study%20Executive%20Summary

Carnine, D., Silbert, J., & Kame’enui, E. (1990). Direct instruction reading (2nd ed.). Columbus, OH: Merrill.

Carreiras, M., & Alvarez, C. J. (1999). Comprehension processes in braille-reading. Journal of Visual Impairment and Blindness, 93, 589-595.

Carver, R. P. (1989). Silent reading rates in grade equivalents. Journal of Reading Behavior, 21, 158-161.

Caton, H.
(1979). A primary reading program for beginning braille readers. Journal of Visual Impairment and Blindness, 73, 309-313.

Caton, H., Bradley, E. J., & Pester, E. (Eds.). (1982). Patterns readiness level teachers edition. Louisville, KY: APH.

Chafouleas, S. M., Martens, B. K., Dobson, R. L., Weinstein, K. S., & Gardner, K. B. (2004). Fluent reading as the improvement of stimulus control: Additive effects of performance-based interventions to repeated reading on students’ reading and error rates. Journal of Behavioral Education, 13, 67-81.

Chall, J. (1979). Reading research: For whom? Curriculum Inquiry, 9, 37-43.

Chall, J., Jacobs, V. A., & Baldwin, L. E. (1990). The reading crisis: Why poor children fall behind. Cambridge, MA: Harvard University Press.

Challman, B. E. (1978). Variables influencing the identification of single braille characters. Unpublished master’s thesis, University of Louisville.

Chard, D. J., Vaughn, S., & Tyler, B. (2002). A synthesis of research on effective interventions for building reading fluency with elementary students with learning disabilities. Journal of Learning Disabilities, 35, 386-406.

Cohen, A. L. (1988). An evaluation of the effectiveness of two methods for providing computer-assisted repeated reading training to reading disabled students. Unpublished doctoral dissertation, Florida State University, Tallahassee.

Commission on Reading. (1985). A nation of readers: The report of the Commission on Reading. Washington, DC: The National Institute of Education.

Cooper, J. O., Heron, T. E., & Heward, W. L. (2007). Applied behavior analysis (2nd ed.). Columbus, OH: Merrill.

Coppins, N., & Barlow-Brown, F. (2006). Reading difficulties in blind, braille-reading children. The British Journal of Visual Impairment, 24, 37-39.

Corn, A. L., & Ferrell, K. A. (2000). External funding for training and research in visual disabilities at colleges and universities, 1997-1998. Journal of Visual Impairment and Blindness, 94, 372-384.

Corn, A.
L., & Koenig, A. J. (1996). Perspectives on low vision. In A. L. Corn & A. J. Koenig (Eds.), Foundations of low vision: Clinical and functional (pp. 3-25). New York: AFB Press.

Corn, A. L., & Koenig, A. J. (2000). Literacy for students with low vision: A framework for delivering instruction. Journal of Visual Impairment and Blindness, 97, 305-321.

Craig, C. J. (1996). Family support of the emergent literacy of children with visual impairments. Journal of Visual Impairment & Blindness, 90, 194-200.

Daly, E. J., III, Bonfiglio, C. M., Mattson, T., Persampieri, M., & Foreman-Yates, K. (2006). Refining the experimental analysis of academic skills deficits: Part II. Use of brief functional analysis to evaluate reading fluency treatments. Journal of Applied Behavior Analysis, 39, 323-331.

Daly, E. J., III, Chafouleas, S., & Skinner, C. H. (2005). Interventions for reading problems: Designing and evaluating effective strategies. New York, NY: Guilford Press.

Daly, E. J., III, Lentz, F. E., & Boyer, J. (1996). The instructional hierarchy: A conceptual model for understanding the effective components of reading interventions. School Psychology Quarterly, 11, 369-386.

Daly, E. J., III, & Martens, B. K. (1994). A comparison of three interventions for increasing oral reading performance: Application of the instructional hierarchy. Journal of Applied Behavior Analysis, 27, 459-469.

Daly, E. J., III, Martens, B. K., Barnett, D., Witt, J. C., & Olsen, S. C. (2007). Varying intervention delivery in response to intervention: Confronting and resolving challenges with measurement, instruction, and intensity. School Psychology Review, 36, 562-581.

Daly, E. J., III, Martens, B. K., Dool, E. J., & Hintze, J. M. (1998). Using brief functional
A brief experimental analysis for identifying instructional components needed to improve oral reading fluency. Journal ofApplied Behavior Analysis, 32, 83-94. Daly, E. 3., III, Martens, B. K., Kilmer, A., & Massie, D. (1996). The effects of instructional match and content overlap on generalized reading performance. Journal ofApplied Behavior Analysis, 29, 507-518. Daly, E. J., III, Murdoch, A., Lillenstein, L., Webber, L., &Lentz, F. E. (2002). An examination of methods for testing treatments: Conducting brief experimental analyses of the effects of instructional components on oral reading fluency. Education and Treatment of Children, 25, 288-316. Daly, E. J., III, Persampieri, M., McCurdy, M., & Gortmaker, V. (2005). Generating reading interventions through experimental analysis of academic skills: Demonstration and empirical evaluation. School Psychology Review, 34, 395-414. Daneman, M. (1988, November). How reading braille is both like and unlike reading print. Memoiy and Cognition, 497-504. Deno, S. L. (1985). Curriculum-based measurement: The emerging alternative. Exceptional Children, 52, 219-232. Deno, S. L. (1989). Curriculum-based measurement and special education services: A fundamental and direct relationship. In M. R. Shinn (Ed.), Curriculum-based measurement: Assessing special children (pp. 1-17). New York: Guilford Press. Deno, S. L. (1990). Individual differences and individual difference: The essential  221  difference of special education. Journal ofSpecial Education, 24, 160-173. Deno, S. L., Fuchs, L. S., Marston, D., Shin, J. (2001). Using curriculum-based measurement to establish growth standards for students with disabilities. School Psychology Review, 30, 507-524. Deno, S. L., Marston, D., Shinn, M., & Tindal, J. (1983). Oral reading fluency: A simple datum for scaling reading disability. Topics in Learning and Learning Disabilities, 2, 53-59. Deno, S. L., Mirkin, P., & Chiang, B. (1982). Identifying valid measures of reading. 
Exceptional Children, 49, 36-45.

DiStefano, P., Noel, M., & Valencia, S. (1981). Measurement of the effects of purpose and passage difficulty on reading flexibility. Journal of Educational Psychology, 73, 602-606.

Dodd, B., & Conn, L. (2000). The effect of Braille orthography on blind children’s phonological awareness. Journal of Research in Reading, 23, 1-11.

Dowhower, S. L. (1987). Effects of repeated reading on second-grade transitional readers’ fluency and comprehension. Reading Research Quarterly, 22, 389-406.

Duckworth, B. J. (1993). Adapting standardized academic tests in braille and large type. Journal of Visual Impairment and Blindness, 87, 405-407.

Duckworth, B. J., & Caton, H. (1986). Basic reading rate scale: Braille edition. Louisville, KY: American Printing House for the Blind.

Durkin, D. (1993). Teaching them to read (6th ed.). Boston, MA: Allyn & Bacon.

Eckert, T. L., Ardoin, S. P., Daisey, D. M., & Scarola, M. D. (2000). Empirically
How text difficulty and reader skill interact to produce differential reliance on word and content overlap in rereading transfer. Journal of Experimental Child Psychology, 58, 1-24.
Ferrell, K. A., Mason, L., Young, J., & Cooney, J. (2006). Forty years of literacy research in blindness and visual impairment (Tech. Rep.). Greeley, CO: University of Northern Colorado, National Center on Low-Incidence Disabilities.
Flanagan, P. J. (1966). Automated training and braille-reading. New Outlook for the Blind, 60, 141-146.
Fletcher, J. M., & Lyon, G. R. (1998). Reading: A research-based approach. In W. M. Evers (Ed.), What's gone wrong in America's classrooms (pp. 49-90). Stanford, CA: Hoover Institution Press.
Foorman, B. R., Francis, D. J., Winikates, D., Mehta, P., Schatschneider, C., & Fletcher, J. M. (1997). Early interventions for children with reading disabilities. Scientific Studies of Reading, 1, 255-276.
Foulke, E. (1979a). Increasing the braille-reading rate. Journal of Visual Impairment and Blindness, 73, 318-323.
Foulke, E. (1979b). Investigative approaches to the study of braille-reading. Journal of Visual Impairment and Blindness, 73, 298-308.
Foulke, E. (1982). Reading braille. In W. Schiff & E. Foulke (Eds.), Tactual perception: A source book (pp. 168-208). Cambridge, UK: Cambridge University Press.
Foulke, E., & Wirth, E. M. (1973). The role of identification in the reading of braille and print. In E. Foulke (Ed.), The development of an expanded reading code for the blind: Part II. Louisville: Perceptual Alternatives Laboratory, University of Louisville.
Francis, D. J., Shaywitz, S. E., Stuebing, K. K., Shaywitz, B. A., & Fletcher, J. M. (1996). Developmental lag versus deficit models of reading disability: A longitudinal individual growth curves analysis. Journal of Educational Psychology, 88, 3-17.
Frith, U. (1985). Beneath the surface of developmental dyslexia. In K. E. Patterson, J. C. Marshall, & M. Coltheart (Eds.), Surface dyslexia. London: Erlbaum.
Fuchs, L. S. (1989). Evaluating solutions: Monitoring progress and intervention plans. In M. R. Shinn (Ed.), Curriculum-based measurement: Assessing special children (pp. 153-181). New York: Guilford Press.
Fuchs, L. S., & Deno, S. L. (1982). Developing goals and objectives for educational programs [Teaching guide]. U.S. Department of Education Grant, Institute for Research in Learning Disabilities, University of Minnesota, Minneapolis.
Fuchs, L. S., & Deno, S. L. (1994). Must instructionally useful performance assessment be based in the curriculum? Exceptional Children, 61, 15-24.
Fuchs, L. S., & Fuchs, D. (1992). Identifying a measure for monitoring student reading progress. School Psychology Review, 21, 45-58.
Fuchs, L. S., & Fuchs, D. (1993). Formative evaluation of academic progress: How much growth can we expect? School Psychology Review, 22, 1-30.
Fuchs, L. S., & Fuchs, D. (1999). Monitoring student progress toward the development of reading competence: A review of three forms of classroom-based assessment. School Psychology Review, 28, 659-671.
Fuchs, L. S., & Fuchs, D. (2007). A model for implementing responsiveness to intervention. Teaching Exceptional Children, 39, 14-20.
Fuchs, L. S., Fuchs, D., & Compton, D. L. (2004). Monitoring early reading development in first grade: Word identification fluency versus nonsense word fluency. Exceptional Children, 71, 7-21.
Fuchs, L. S., Fuchs, D., Eaton, S., & Hamlett, C. L. (2000). [Relationship between reading fluency and reading comprehension as a function of silent versus oral reading mode]. Unpublished raw data.
Fuchs, L. S., Fuchs, D., Hamlett, C. L., & Ferguson, C. (1992). Effects of expert system consultation with curriculum-based measurement, using a reading maze task. Exceptional Children, 58, 436-450.
Fuchs, L. S., Fuchs, D., Hosp, M. K., & Jenkins, J. R. (2001). Oral reading fluency as an indicator of reading competence: A theoretical, empirical, and historical analysis.
Scientific Studies of Reading, 5, 239-256.
Fuchs, L. S., Fuchs, D., & Maxwell, L. (1988). The validity of informal measures of reading comprehension. Remedial and Special Education, 9, 20-28.
Gillon, G. T., & Young, A. A. (2002). The phonological awareness skills of children who are blind. Journal of Visual Impairment and Blindness, 10, 38-49.
Glazer, A. D. (2007). The effects of a skill-based intervention package including repeated reading and error correction on the oral reading fluency of at-risk readers. Unpublished doctoral dissertation, University of Connecticut.
Glover, T. A., & DiPerna, J. C. (2007). Service delivery models for Response to Intervention: Core components and directions for future research. School Psychology Review, 36, 526-540.
Good, R. H., III, & Kaminski, R. A. (Eds.). (2002). Dynamic indicators of basic early literacy skills (6th ed.). Eugene, OR: Institute for the Development of Educational Achievement.
Good, R. H., III, Simmons, D., & Smith, S. (1998). Effective academic intervention in the United States: Evaluating and enhancing the acquisition of early reading skills. School Psychology Review, 27, 740-753.
Gortmaker, V. J. (2006). Improving reading outcomes for children with learning disabilities: Incorporating strategic and sequential experimental validation in the development of parent tutoring interventions for reading deficiencies. Dissertation Abstracts International, 67(05). (UMI No. 3219056)
Gortmaker, V. J., Daly, E. J., III, McCurdy, M., Persampieri, M. J., & Hergenrader, M. (2007). Improving reading outcomes for children with learning disabilities: Using brief experimental analysis to develop parent tutoring interventions. Journal of Applied Behavior Analysis, 40, 203-221.
Greaney, J., Hill, E., & Tobin, M. (1998). Neale Analysis of Reading Ability (braille version). University of Birmingham: Nelson Publishing Company, Ltd.
Greaney, J., & Reason, R. (2000).
Braille-reading by children: Is there a phonological explanation for their difficulties? The British Journal of Visual Impairment, 18, 35-40.
Gresham, F. M., Gansle, K. A., & Noell, G. H. (1993). Treatment integrity in applied behavior analysis with children. Journal of Applied Behavior Analysis, 26, 257-263.
Gresham, F. M., & Lopez, M. F. (1996). Social validation: A unifying concept for school-based consultation research and practice. School Psychology Quarterly, 11, 204-227.
Hall, A., Scholl, G. T., & Swallow, R. M. (1986). Psychoeducational assessment. In G. Scholl (Ed.), Foundations of education for blind and visually handicapped children and youth (pp. 187-214). New York: AFB.
Hamilton, C., & Shinn, M. R. (2003). Characteristics of word callers: An investigation of the accuracy of teachers' judgments of reading comprehension and oral reading skills. School Psychology Review, 32, 228-240.
Hampshire, B. (1981). Working with braille: A study of braille as a medium of communication. Paris: The Unesco Press.
Haring, N. G., Lovitt, T. C., Eaton, M. D., & Hansen, C. L. (1978). The fourth R: Research in the classroom. Columbus, OH: Charles E. Merrill Publishing Company.
Harley, R. K., Truan, M. B., & Sanford, L. D. (1987). Communication skills for visually impaired learners. Springfield, IL: Charles C. Thomas.
Hasbrouck, J., & Tindal, G. A. (2006). Oral reading fluency norms: A valuable assessment tool for reading teachers. Reading Teacher, 59, 636-644.
Hatlen, P. (1996). The expanded core curriculum for students with visual impairments including those with additional disabilities. RE:view, 28, 25-32.
Healy, A. F. (1976). Proofreading errors on the word the: New evidence on reading units larger than letters. Journal of Experimental Psychology: Human Perception and Performance, 2, 235-242.
Henderson, F. M. (1967). The effect of character recognition on braille-reading. Unpublished specialist in education thesis, George Peabody College for Teachers, Nashville.
Herzberg, T. S., Stough, L. M., & Clark, M. C. (2004). Teaching and assessing the appropriateness of uncontracted braille. Journal of Visual Impairment and Blindness, 98, 773-779.
Heward, W. L. (1994). Three "low-tech" strategies for increasing the frequency of active student response during group instruction. In R. Gardner III, D. M. Sainato, J. O. Cooper, T. E. Heron, W. L. Heward, J. W. Eshelman, & T. A. Grossi (Eds.), Behavior analysis in education: Focus on measurably superior instruction (pp. 283-320). Pacific Grove, CA: Brooks/Cole Publishing Co.
Hintze, J. M., & Pelle-Petitte, H. A. (2001). The generalizability of CBM of oral reading fluency measures across general and special education. Journal of Psychoeducational Assessment, 19, 158-170.
Hong, S., & Erin, J. N. (2004). The impact of early exposure to uncontracted braille-reading on students with visual impairments. Journal of Visual Impairment and Blindness, 98, 325-340.
Horner, R. H., Carr, E. G., Halle, J., McGee, G., Odom, S., & Wolery, M. (2005). The use of single-subject research to identify evidence-based practice in special education. Exceptional Children, 71, 165-179.
Hosp, M. K., & Fuchs, L. S. (2000). The relation between word reading measures and reading comprehension: A review of the literature. Manuscript in preparation.
Hosp, M. K., & Fuchs, L. S. (2005). Using CBM as an indicator of decoding, word reading, and comprehension: Do the relations change with grade? School Psychology Review, 34, 9-26.
Hosp, M. K., Hosp, J. L., & Howell, K. (2007). The ABCs of CBM: A practical guide to curriculum-based measurement. New York, NY: The Guilford Press.
Howell, K., & Nolet, V. (1999). Curriculum-based evaluation: Teaching and decision making. Pacific Grove, CA: Brooks and Cole.
Hudson, R. F., Lane, H. B., & Pullen, P. C. (2005). Reading fluency assessment and instruction: What, why, how? The Reading Teacher, 58, 702-714.
Individuals with Disabilities Education Improvement Act (IDEA).
(2004). Pub. L. No. 108-446, 20 U.S.C. § 1400.
Jan, J. E., & Groenveld, M. (1993). Visual behaviors and adaptations associated with cortical and ocular impairment in children. Journal of Visual Impairment and Blindness, 87, 101-105.
Jenkins, J. R., Fuchs, L. S., van den Broek, P., Espin, C., & Deno, S. L. (2003). Accuracy and fluency in list and context reading of skilled and RD groups: Absolute and relative performance levels. Learning Disabilities Research and Practice, 18, 237-245.
Jenkins, J. R., & Jewell, M. (1993). Examining the validity of two measures for formative teaching: Reading aloud and mazes. Exceptional Children, 59, 421-432.
Johnston, P. B. (1982). Implications of basic research for the assessment of reading comprehension (Tech. Rep. No. 206). Urbana-Champaign: University of Illinois, Center for the Study of Reading. (ERIC Document Reproduction Service No. ED 201 987)
Johnston, J. M., & Pennypacker, H. S. (1993). Strategies and tactics of behavioral research (2nd ed.). Hillsdale, NJ: Erlbaum.
Jolley, W. (2006, June). Unified English Braille: A literacy bedrock in the digital age. Paper presented at the 12th International Council for the Education of People with Visual Impairments, Kuala Lumpur, Malaysia.
Juel, C. (1988). Learning to read and write: A longitudinal study of fifty-four children from first through fourth grade. Journal of Educational Psychology, 80, 437-447.
Juel, C., & Leavell, J. A. (1988). Retention and non-retention of at-risk readers in first grade and their subsequent reading achievement. Journal of Learning Disabilities, 21, 571-580.
Kame'enui, E. J. (2007). A new paradigm. Teaching Exceptional Children, 39, 6-7.
Kame'enui, E. J., & Simmons, D. (2001). The DNA of reading fluency. Scientific Studies of Reading, 5, 203-210.
Kazdin, A. E. (1982). Single-case research designs: Methods for clinical and applied settings. New York: Oxford.
Kederis, C. J., Nolan, C. Y., & Morris, J. B.
(1967). The use of controlled exposure devices to increase braille-reading rates. Unpublished manuscript, The American Printing House for the Blind.
Kirchner, C., Johnson, G., & Harkins, D. (1997). Rehabilitation: Employment barriers and strategies for clients who are blind or visually impaired. Journal of Visual Impairment and Blindness, 91, 377-392.
Kirk, R. E. (1996). Practical significance: A concept whose time has come. Educational and Psychological Measurement, 56, 746-759.
Knowlton, M., & Wetzel, R. (1996). Braille-reading rates as a function of reading tasks. Journal of Visual Impairment and Blindness, 90, 227-236.
Koenig, A. J. (1992). A framework for understanding the literacy of individuals with visual impairments. Journal of Visual Impairment and Blindness, 86, 277-284.
Koenig, A. J. (1996a). Selection of learning and literacy media for children and youths with low vision. In A. L. Corn & A. J. Koenig (Eds.), Foundations of low vision: Clinical and functional perspectives (pp. 246-279). New York: AFB Press.
Koenig, A. J. (1996b, Spring). Assessment of Braille Literacy Skills (ABLS). DOTS for Braille Literacy, 2. Retrieved July 30, 2007, from http://www.afb.org/Section.asp?SectionID=6&TopicID=19&SubTopicID=19&DocumentID=164
Koenig, A. J., & Farrenkopf, C. (1997). Essential experience to undergird the early development of literacy. Journal of Visual Impairment and Blindness, 91, 14-24.
Koenig, A. J., & Holbrook, M. C. (1989). Determining the reading medium for students with visual impairments: A diagnostic teaching approach. Journal of Visual Impairment and Blindness, 83, 296-302.
Koenig, A. J., & Holbrook, M. C. (1995). Learning Media Assessment of Students with Visual Impairments (2nd ed.). Austin, TX: Morgan Printing.
Koenig, A. J., & Holbrook, M. C. (2000). Ensuring high-quality instruction for students in braille literacy programs. Journal of Visual Impairment and Blindness, 94, 677-694.
Koenig, A. J., Holbrook, M. C., Corn, A.
L., DePriest, L. B., Erin, J., & Presley, I. (2000). Specialized assessments for students with visual impairments. In A. J. Koenig & M. C. Holbrook (Eds.), Foundations of education: Vol. 2. History and theory of teaching children and youths with visual impairments (2nd ed., pp. 103-172). New York: AFB Press.
Koenig, A., Holbrook, M. C., & Layton, C. (2001). Fluency and comprehension strategies for students with low vision. Texas Focus 2001 Conference: Looking at Low Vision. Ft. Worth, TX.
Koenig, A., Sanspree, M. J., & Holbrook, M. C. (1991). Determining the reading medium for students with visual handicaps. In Division for the Visually Handicapped statements of position. Reston, VA: Division for the Visually Impaired, Council for Exceptional Children.
Kuhn, M. R., & Stahl, S. A. (2003). Fluency: A review of developmental and remedial practices. Journal of Educational Psychology, 95, 3-21.
Kurzhals, I., & Caton, H. R. (1973). A tactual road to reading. Louisville, KY: American Printing House for the Blind.
Kusajima, T. (1974). Visual reading and braille-reading: An experimental investigation of the physiology and psychology of visual and tactual reading. New York: AFB.
LaBerge, D., & Samuels, S. (1974). Toward a theory of automatic information processing in reading. Cognitive Psychology, 6, 293-323.
Layton, C. A. (1994). Effects of repeated readings for increasing reading fluency in elementary students with low vision (Doctoral dissertation, Texas Tech University, 1993). Dissertation Abstracts International, 55, 70.
Layton, C. A., & Koenig, A. J. (1998). Increasing reading fluency in elementary students with low vision through repeated readings. Journal of Visual Impairment and Blindness, 92, 276-293.
Layton, C. A., & Lock, R. (2001). Determining learning disabilities in students with low vision. Journal of Visual Impairment and Blindness, 95, 288-299.
Levy, B. A., Abello, B., & Lysynchuk, L. (1997).
Transfer from word training to reading in context: Gains in reading fluency and comprehension. Learning Disabilities Quarterly, 20, 173-188.
Lewis, S., & Russo, R. (1998). Educational assessment for students who have visual impairments and other disabilities. In S. Z. Sacks & R. K. Silberman (Eds.), Educating students who have visual impairments with other disabilities. Baltimore, MD: Paul H. Brookes.
Lipson, M. Y., & Wixson, K. K. (1986). Reading disability research: An interactionist perspective. Review of Educational Research, 56, 111-136.
Loftin, M. (1997). Critical factors in the assessment of students with visual impairments. RE:view, 28, 149-160.
Loftin, M. (2006). Making evaluation meaningful: Determining additional eligibilities and appropriate instructional strategies for blind and visually impaired students. Austin, TX: Texas School for the Blind.
Lorimer, J. (1990). Improving braille-reading skills: The case for extending the teaching of braille-reading to upper primary and lower senior classes. British Journal of Visual Impairment, 8, 87-89.
Lowenfeld, B. (Ed.). (1973). The visually handicapped child in school. New York: John Day.
Lowenfeld, B., Abel, G. L., & Hatlen, P. H. (1969). Blind children learn to read. Springfield, IL: Charles C. Thomas.
Lyon, G. R. (1997, July 10). Report on learning disabilities research. Congressional testimony.
Lyon, G. R., & Moats, L. C. (1997). Critical conceptual and methodological considerations in reading intervention research. Journal of Learning Disabilities, 30, 578-588.
Lusk, K. E., & Corn, A. L. (2006). Learning and using print and braille: A study of dual media learners, Part 2. Journal of Visual Impairment and Blindness, 100, 653-665.
MacCuspie, P. A. (2002, August). Access to literacy instruction for students who are blind or visually impaired: A discussion paper. Toronto: CNIB. Retrieved June 10, 2004, from http://www.cnib.ca/eng/publications/access_to_literacy.htm
Madelaine, A., & Wheldall, K.
(1999). Curriculum-based measurement of reading: A critical review. International Journal of Disability, Development and Education, 46, 71-85.
Mangold, S. (1978). Tactile perception and braille letter recognition: Effects of developmental teaching. Journal of Visual Impairment and Blindness, 72, 259-266.
Mangold, S. S. (2000, October). Trends in the use of braille contractions in the United States: Implications for UBC decisions. Braille Monitor, 43, 12-16.
Markell, M. A., & Deno, S. L. (1997). Effects of increasing oral reading: Generalization across reading tasks. Journal of Special Education, 31, 233-250.
Martin, C. J., & Alonso, L. (1967). Comprehension of full length and telegraphic material among blind children: Final report. Education Research Series No. 42.
Marston, D. (1989). A curriculum-based measurement approach to assessing academic performance: What is it and why do it. In M. R. Shinn (Ed.), Curriculum-based measurement: Assessing special children (pp. 18-78). New York: Guilford.
Martens, B. K., & Eckert, T. L. (2007). The Instructional Hierarchy as a model of stimulus control over student and teacher behavior: We're close, but are we close enough? Journal of Behavioral Education, 16, 83-91.
Martin-Chang, S. L., & Levy, B. A. (2005). Fluency transfer: Differential gains in reading speed and accuracy following isolated word and context training. Reading and Writing, 18, 343-376.
Mason, C., & Davidson, R. (2000). National plan for training personnel to serve children with blindness and low vision. Reston, VA: The Council for Exceptional Children.
Max, L., & Caruso, A. J. (1998). Adaptation of stuttering frequency during repeated readings: Associated changes in acoustic parameters of perceptually fluent speech. Journal of Speech, Language, and Hearing Research, 41, 1265-1281.
McCall, S. (1997). The development of literacy through touch. In H. Mason & S. McCall (Eds.), Visual impairment (pp. 149-158). London: David Fulton Publishers.
McBride, V. G. (1974).
Exploration in rapid reading in braille. New Outlook for the Blind, 68, 8-12.
McCurdy, M., Daly, E., Gortmaker, V., Bonfiglio, C., & Persampieri, M. (2007). Use of brief instructional trials to identify small group reading instructional strategies: A two experiment study. Journal of Behavioral Education, 16, 7-26.
Meyer, M. S., & Felton, R. H. (1999). Repeated reading to enhance fluency: Old approaches and new directions. Annals of Dyslexia, 49, 283-306.
Millar, S. (1988). Prose reading by touch: The role of stimulus quality, orthography, and context. British Journal of Psychology, 79, 87-103.
Millar, S. (1997). Reading by touch. London: Routledge.
Miller, C., & Rash, A. (2001, Summer). Reading for everyone: Expanding literacy options. SEE/HEAR, 6, 22-26.
Morgan, S. K., & Bradley-Johnson, S. (1995). Technical adequacy of a curriculum-based measure for visually impaired braille readers. School Psychology Review, 24, 94-103.
National Joint Committee on Learning Disabilities. (2001). Learning disabilities: Issues of definition. In National Joint Committee on Learning Disabilities (Ed.), Collective perspectives on issues affecting learning disabilities: Position papers, statements, and reports (2nd ed., pp. 27-32). Austin, TX: Pro-Ed.
National Reading Panel. (2000). Report of the National Reading Panel: Teaching children to read: An evidence-based assessment of the scientific research literature on reading and its implications for reading instruction. Washington, DC: US Department of Health and Human Services.
Nelson, J. S., Alber, S. R., & Gordy, A. (2004). Effects of systematic error correction and repeated readings on the reading accuracy and proficiency of second graders with disabilities. Education and Treatment of Children, 27, 186-198.
No Child Left Behind Act of 2001, Pub. L. No. 107-110, § 1001, 115 Stat. 1425 (2002).
Nolan, C. Y., & Ashcroft, S. (1969). The visually handicapped. Review of Educational Research, 39, 52-70.
Nolan, C. Y., & Kederis, C. J. (1969).
Perceptual factors in braille recognition. New York: AFB Press.
Nolte, R. Y., & Singer, H. (1985). Active comprehension: Teaching a process of reading comprehension and its effects on reading achievement. The Reading Teacher, 39, 24-31.
O'Connor, R. E., Bell, K. M., Harty, K. R., Larkin, L. K., Sackor, S. M., & Zigmond, N. (2002). Teaching reading to poor readers in the intermediate grades: A comparison of text difficulty. Journal of Educational Psychology, 94, 474-485.
Olson, M., Harlow, S. D., & Williams, J. (1975). Rapid reading in braille and large print: An examination of McBride's procedures. New Outlook for the Blind, 68, 392-395.
O'Shea, L. J., Sindelar, P. T., & O'Shea, D. J. (1985). The effects of reading and attentional cues on reading fluency and comprehension. Learning Disabilities Research, 2, 103-109.
Parker, R., Hasbrouck, J. E., & Tindal, G. (1992). The maze as a classroom-based reading measure: Construction, methods, reliability, and validity. Journal of Special Education, 26, 195-218.
Pattillo, S. T., Heller, K. W., & Smith, M. (2004). The impact of a modified repeated reading strategy paired with optical character recognition on the reading rates of students with visual impairments. Journal of Visual Impairment and Blindness, 98, 28-46.
Perfetti, C. A. (1977). Language comprehension and fast decoding: Some psycholinguistic prerequisites for skilled reading comprehension. In J. T. Guthrie (Ed.), Cognition, curriculum, and comprehension (pp. 20-41). Newark, DE: International Reading Association.
Perfetti, C. A. (1985). Reading ability. New York: Oxford University Press.
Pikulski, J. J., & Chard, D. J. (2005). Fluency: Bridge between decoding and reading comprehension. The Reading Teacher, 58, 510-519.
Pinnell, G. S., Pikulski, J. J., Wixson, K. K., Campbell, J. R., Gough, P. B., & Beatty, A. S. (1995). Listening to children read aloud. Washington, DC: US Department of Education, National Center for Educational Statistics.
Ponchillia, P. E., & Durant, P. A. (1995). Teaching behaviors and attitudes of braille instructors in adult rehabilitation centers. Journal of Visual Impairment and Blindness, 90, 227-236.
Ponchillia, P. E., & Ponchillia, S. V. (1996). Foundations of rehabilitation teaching: With persons who are blind or visually impaired. New York: American Foundation for the Blind.
Potter, M. L., & Wamre, H. M. (1990). Curriculum-based measurement and developmental reading models: Opportunities for cross validation. Exceptional Children, 57, 16-25.
Pring, L. (1982). Phonological and tactual coding of Braille by blind children. British Journal of Psychology, 73, 351-359.
Pring, L. (1984). A comparison of the word recognition processes of blind and sighted children. Child Development, 55, 1865-1877.
Pring, L. (1994). Touch and go: Learning to read Braille. Reading Research Quarterly, 29, 66-74.
Rashotte, C. A., & Torgesen, J. K. (1985). Repeated reading and reading fluency in learning disabled children. Reading Research Quarterly, 20, 180-188.
Rasinski, T. V. (1989). Fluency for everyone: Incorporating fluency instruction in the classroom. The Reading Teacher, 42, 690-693.
Rasinski, T. V. (1990). Effects of repeated reading and listening-while-reading on reading fluency. Journal of Educational Research, 83, 147-150.
Rex, E. J. (1971). A study of basal readers and experimental supplementary materials for teaching primary reading in braille. Education of the Visually Handicapped, 3, 17.
Rex, E. J., Koenig, A. J., Wormsley, D. P., & Baker, R. L. (1994). Foundations of braille literacy. New York: AFB Press.
Richards, S. B., Taylor, R. L., Ramasamy, R., & Richards, R. Y. (1999). Single subject research. San Diego, CA: Singular Publishing Group, Inc.
Ryles, R. (1996). The impact of braille-reading skills on employment, income, education, and reading habits. Journal of Visual Impairment and Blindness, 90, 219-226.
Ryles, R. (1998).
The relationship of reading medium with the literacy skills of high school students who are visually impaired. Dissertation Abstracts International, 58, 4616. (UMI No. 9819296)
Salvia, J., & Hughes, C. (1990). Curriculum-based assessment. New York: Macmillan Publishing Company.
Salvia, J., & Ysseldyke, J. E. (1988). Assessment (7th ed.). Boston: Houghton Mifflin Company.
Samuels, S. J. (1979). The method of repeated readings. The Reading Teacher, 50, 376-381.
Samuels, S. J. (1988). Decoding and automaticity: Helping poor readers become automatic at word recognition. The Reading Teacher, 41, 756-760.
Schiff, W., & Foulke, E. (Eds.). (1982). Tactual perception: A sourcebook. New York: Cambridge University Press.
Schroeder, F. K. (1989). Literacy: The key to opportunity. Journal of Visual Impairment and Blindness, 83, 290-293.
Shapiro, E. S. (2004). Academic skills problems: Direct assessment and intervention (3rd ed.). New York: Guilford Press.
Shaywitz, B. A. (1997). The Yale Center for the Study of Learning and Attention: Longitudinal and neurobiological studies. Learning Disabilities: A Multidisciplinary Journal, 8, 21-30.
Shaywitz, S. E., Escobar, M. D., Shaywitz, B. A., Fletcher, J. M., & Makuch, R. (1992). Distribution and temporal stability of dyslexia in an epidemiological sample of 414 children followed longitudinally. New England Journal of Medicine, 326, 145-150.
Shinn, M. R. (1989). Curriculum-based measurement: Assessing special children. New York: Guilford.
Shinn, M. R. (2002). Best practices in using curriculum-based measurement in a problem-solving model. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology IV (pp. 671-697). Bethesda, MD: National Association of School Psychologists.
Shinn, M. R., & Good, R. H., III. (1993). CBA: An assessment of its current status and prognosis for its future. In J. J. Kramer (Ed.), Curriculum-based measurement (pp. 139-178). Lincoln, NE: Buros Institute of Mental Measurements.
Shinn, M.
R., Good, R. H., III, Knutson, N., Tilly, W. D., & Collins, V. (1992). Curriculum-based measurement of oral reading fluency: A confirmatory factor analysis of its relation to reading. School Psychology Review, 21, 459-479.
Silberman, R. K., & Sowell, V. (1998). Educating students who have visual impairments with learning disabilities. In S. Sacks & R. Silberman (Eds.), Educating students who have visual impairments with other disabilities (pp. 161-185). Baltimore, MD: Paul H. Brookes Publishing.
Silberglitt, B., & Hintze, J. M. (2007). How much growth can we expect? A conditional analysis of R-CBM growth rates by level of performance. Exceptional Children, 74, 71-84.
Simon, C., & Huertas, J. A. (1998). How blind readers perceive and gather information written in braille. Journal of Visual Impairment and Blindness, 92, 322-330.
Sindelar, P. T., Monda, L. E., & O'Shea, L. J. (1990). Effects of repeated readings on instructional- and mastery-level readers. Journal of Educational Research, 83, 220-226.
Skinner, C. H., Fletcher, P. A., & Henington, C. (1996). Increasing learning rates by increasing student response rates: A summary of research. School Psychology Quarterly, 11, 313-325.
Snow, C. E., Burns, M. S., & Griffin, P. (Eds.). (1998). Preventing reading difficulties in young children. Washington, DC: National Academy Press.
Smith, C. B. (1989). Emergent literacy: An environmental concept. Reading Teacher, 42, 528.
Smith, D. D. (1979). The improvement of children's oral reading through the use of teacher modeling. Journal of Learning Disabilities, 12, 172-175.
Spache, G. D. (1974). The Spache readability formula. Good reading for poor readers (rev. ed.). Champaign, IL: Garrard.
Spache, G. D. (1981). Diagnostic Reading Scales. Monterey, CA: McGraw Hill.
Spungin, S. J. (1989). Braille literacy: Issues for blind persons, families, professionals, and producers of braille. New York: AFB.
Stanovich, K. E. (1986).
Matthew effects in reading: Some consequences of individual differences in the acquisition of literacy. Reading Research Quarterly, 21, 360-407.
Stanovich, K. E. (2000). Progress in understanding reading: Scientific foundations and new frontiers. New York: Guilford.
Stecker, S. K., Roser, N. L., & Martinez, M. G. (1998). Understanding oral reading fluency. In T. Shanahan & F. V. Rodriguez-Brown (Eds.), 47th Yearbook of the National Reading Conference (pp. 295-310). Chicago: National Reading Conference.
Steinman, B. A., LeJeune, B. J., & Kimbrough, B. T. (2006). Developmental stages of reading processes in children who are blind and sighted. Journal of Visual Impairment and Blindness, 100, 36-46.
Stoddard, K., Valcante, G., Sindelar, P., O'Shea, L., & Algozzine, B. (1993). Increasing reading rate and comprehension: The effects of repeated reading, sentence segmentation, and intervention training. Reading Research and Instruction, 32, 53-65.
Sutherland, K. S., & Snyder, A. (2007). Effects of reciprocal peer tutoring and self-graphing on reading fluency and classroom behavior of middle school students with emotional or behavioral disorders. Journal of Emotional and Behavioral Disorders, 15, 103-118.
Therrien, W. J. (2004). Fluency and comprehension gains as a result of repeated reading: A meta-analysis. Remedial and Special Education, 25, 252-261.
Therrien, W. J., Gormley, S., & Kubina, R. (2006). Boosting fluency and comprehension to improve reading achievement. Teaching Exceptional Children, 38, 22-26.
Therrien, W. J., & Hughes, C. (2008). Comparison of repeated reading and question generation on students' reading fluency and comprehension. Learning Disabilities: A Contemporary Journal, 6, 1-16.
Therrien, W. J., & Kubina, R. M., Jr. (2006). Developing reading fluency with repeated reading. Intervention in School and Clinic, 41, 156-160.
Therrien, W. J., & Kubina, R. M., Jr. (2007). The importance of context in repeated reading. Reading Improvement, 44, 179-188.
Therrien, W. J., Wickstrom, K., & Jones, K. (2006). Effect of a combined repeated reading and question generation intervention on reading achievement. Learning Disabilities Research and Practice, 21, 89-97.
Tilly, D. W. (1999). The effect of passage difficulty on reading curriculum-based measurement data: A generalizable theory approach. Presentation at Pacific Coast Research Council, La Jolla, California.
Tilly, D. W., III, & Flugum, K. R. (1995). Best practices in ensuring quality interventions. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology (3rd ed., pp. 485-500). Washington, DC: National Association of School Psychologists.
Tobin, M. (1994). Assessing visually handicapped people. London: David Fulton Publishers.
Tompkins, G. E., & McGee, L. M. (1986). Visually impaired and sighted children's emerging concepts about written language. In D. Yaden & S. Templeton (Eds.), Metalinguistic awareness and beginning literacy (pp. 259-275). Portsmouth, NH: Heinemann.
Treiman, R., & Rodriguez, K. (1999). Young children use letter names in learning to read words. Psychological Science, 10, 334-338.
Trent, S. D., & Truan, M. B. (1997). Speed, accuracy and comprehension of adolescent braille readers in a specialized school. Journal of Visual Impairment and Blindness, 91, 494-500.
Truan, M. B. (1978). The effect of instructional feedback on the correct oral reading rate of visually impaired students. Unpublished doctoral dissertation, George Peabody College for Teachers, Vanderbilt University, Nashville, TN.
Umstead, R. G. (1970). Improvement of braille-reading through training. Unpublished doctoral dissertation, George Peabody College for Teachers, Nashville.
Upah, K. R. F., & Tilly, W. D., III. (2002). Best practices in designing, implementing, and evaluating quality interventions. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology (4th ed., pp. 483-501). Bethesda, MD: National Association of School Psychologists.
Vadasy, P.
F., & Sanders, E. A. (2008). Repeated reading intervention: Outcomes and interactions with readers' skills and classroom instruction. Journal of Educational Psychology, 100, 272-290.

Vaughn, S. R., Chard, D. J., Pedrotty Bryant, D., Coleman, M., Tyler, B., Linan-Thompson, S., et al. (2000). Fluency and comprehension interventions for third-grade students. Remedial and Special Education, 21, 325-335.

Vaughn, S. R., & Fuchs, L. S. (2003). Redefining learning disabilities as inadequate response to treatment: Rationale and assumptions. Learning Disabilities Research and Practice, 18, 33-58.

Wagner, D., McComas, J. J., Bollman, K., & Holton, E. (2006). The use of functional reading analysis to identify effective reading interventions. Assessment for Effective Intervention, 32, 40-49.

Wallace, D. (1973). The effect of rapid reading instruction and recognition training on the reading rate and comprehension of adult legally blind print and braille readers. Unpublished doctoral dissertation, Brigham Young University.

Wanzek, J., & Vaughn, S. R. (2007). Research-based implications for extensive early reading interventions. School Psychology Review, 36, 541-561.

Wetzel, R., & Knowlton, M. (2006). Studies of braille-reading rates and implications for the Unified English Braille Code. Journal of Visual Impairment and Blindness, 100, 275-284.

Wilcoxon, F. (1945). Individual comparisons by ranking methods. Biometrics Bulletin, 1, 80-83.

Williams, M. (1971). Braille-reading. Teacher of the Blind, 59, 103-116.

Wolf, M., & Bowers, P. G. (2000). The question of naming-speed deficits in developmental reading disabilities: An introduction to the double-deficit hypothesis. Journal of Learning Disabilities, 33, 322-324.

Wolf, M., & Katzir-Cohen, T. (2001). Reading fluency and its intervention. Scientific Studies of Reading, 5, 211-239.

Wolf, M. M. (1978). Social validity: The case for subjective measurement or how applied behavior analysis is finding its heart.
Journal of Applied Behavior Analysis, 11, 203-214.

Wolffe, K. (2000). Making it! Successful transition competencies for youth with visual disabilities. SEE/HEAR, 5, 19-24.

Wormsley, D. P. (1979). The effects of a hand movement training program on the hand movements and reading rates of young braille readers. Ann Arbor, MI: University Microfilms International.

Wormsley, D. P. (1996). Reading rates of young braille-reading children. Journal of Visual Impairment and Blindness, 90, 278-282.

Wormsley, D. P. (1997). Learning to read, reading to learn: Teaching braille-reading and writing. In D. P. Wormsley & F. M. D'Andrea (Eds.), Instructional strategies for braille literacy (pp. 1-16). New York: AFB Press.

Wormsley, D. P., & D'Andrea, F. M. (Eds.). (1997). Instructional strategies for braille literacy. New York: AFB Press.

Young, A. R., Bowers, P. G., & MacKinnon, G. E. (1996). Effects of prosodic modeling and repeated reading on poor readers' fluency and comprehension. Applied Psycholinguistics, 17, 59-84.

APPENDICES

Appendix A
Recruitment Email

The University of British Columbia
Educational and Counselling Psychology and Special Education

Recruitment email

Investigating the effects of a repeated reading intervention for increasing oral reading fluency with primary, braille-reading students using curriculum-based measurement within a Response to Intervention framework

Principal Investigator: Dr. Cay Holbrook, Associate Professor, Educational and Counselling Psychology and Special Education, University of British Columbia (UBC), (604) 822-2235

Co-Investigator: Ms. Erika Forster, Doctoral student, Educational and Counselling Psychology and Special Education, UBC, Teacher for Students with Visual Impairments, (604)

February 15, 2008

YOUNG BRAILLE READERS CAN BE EFFICIENT BRAILLE READERS!

DO YOU WORK WITH A BRAILLE READER IN GRADE ONE, TWO OR THREE WHO IS HAVING DIFFICULTY WITH READING SPEED? THIS PROJECT MIGHT INTEREST YOU!
Dear Teacher for Students with Visual Impairments:

We are currently recruiting participants for a study on increasing oral reading fluency for young braille readers through a process of Repeated Reading. The purpose of this email is to tell you about our study and to invite you to participate.

If you are currently working with a student in grades one, two, or three who reads braille exclusively, has no known additional disabilities, and appears to be reading slowly, please contact Erika Forster at to find out how to participate in our study.

Participation in the study would involve your using the Repeated Reading intervention with your struggling braille reader over a period of five weeks (i.e., three 15-minute Repeated Reading sessions per week for a total of approximately 45 minutes per week).

The Repeated Reading intervention would take 15 minutes to complete and involves the following steps:

1. Students would reread a short story three times while you provide some specific corrective feedback if they make errors. Students would read the short story for 2 minutes each time, and you would record the number of words they read correctly.

2. The students' comprehension of the passage would also be assessed by asking students to orally recall what they can remember about the story, and this process would take one minute.

3. The Repeated Reading session would conclude with the students reading another short story once for 2 minutes.

Two weeks after the 5-week intervention period, the intervention would be carried out 3 more times (i.e., 45 minutes in total) to help determine if students are able to maintain any improvements gained during the intervention.

Any training or help you need in carrying out the Repeated Reading intervention will be provided by Erika Forster at

Thank you in advance for considering our request.

Kind regards,

Dr.
Cay Holbrook
Erika Forster

Appendix B
Child Assent Form

THE UNIVERSITY OF BRITISH COLUMBIA
Department of Educational & Counselling Psychology, & Special Education
2125 Main Mall
Vancouver, B.C. Canada V6T 2B5
Tel: (604) 822-4602 Fax: (604) 822-3302

Braille-reading speed study
Student assent form for participant screening and selection and the full study

Parents: Please read this to your child as part of helping him or her understand what this study is about.

Study Title: Investigating the effects of a repeated reading intervention for increasing oral reading fluency with primary, braille-reading students using curriculum-based measurement within a Response to Intervention framework

My name is Erika Forster, and I am trying to learn about how to help braille readers read faster. If you would like, you can be in my study.

If you decide you want to be in my study, you will be asked to spend some time reading with your Teacher of Students with Visual Impairments, (his or her name). She/He will ask you to read some interesting short stories over again four times. (Teacher of Students with Visual Impairment's name) will time you as you read, and he/she will tape record your reading. You will also be asked to tell him/her what you remember about the story.

By taking part in my research project you will help me learn more about how to help students who get to read braille and who find it difficult to read quickly. So, if you choose to participate in the research project, you will be able to help me learn things that may help you, and may also help other kids with their reading.

Other people will not know if you are in our study. The reading that you do for the study will not affect your marks in school.

You don't have to be in this study. If you don't want to be in the study, no one will be mad at you. It is up to you.
You will still get help with your reading from (Teacher of Students with Visual Impairment's name) even if you decide that you don't want to be in the study.

If you say okay now, but change your mind later, that's okay too. You can stop at any time. Just tell your parents or (Teacher of Students with Visual Impairment's name) that you would like to stop.

This study was explained to your parents, and they said that you could be in it. You can talk this over with them before you decide whether or not you want to be part of this study.

You can call Erika if you have questions about the study or if you decide you don't want to be in the study any more. My telephone number is (604)

I will give you a copy of this form in case you want to ask questions later.

Thank you!

Assent: Yes, I have decided to be part of the study even though I know that I don't have to.

Date _______________  Your signature _______________

(If you would like to participate, please DETACH HERE AND RETURN THIS PORTION to the researchers in the enclosed postage-paid envelope. Please KEEP the rest of this document for your records)

STUDENT ASSENT FORM

I have read and understand the attached letter regarding the study entitled "Investigating the effects of a repeated reading intervention for increasing oral reading fluency with primary, braille-reading students using curriculum-based measurement within a Response to Intervention framework".

I have also kept copies of both the letter describing the study and this permission slip.

Assent: Yes, I have decided to be part of the study even though I know that I don't have to.
Date _______________  Your signature _______________

Appendix C
Parental Consent Form

The University of British Columbia
Educational and Counselling Psychology and Special Education

Parent Consent Form for participation in the participant screening and selection and full study

Investigating the effects of a repeated reading intervention for increasing oral reading fluency with primary, braille-reading students using curriculum-based measurement within a Response to Intervention framework

Principal Investigator: Dr. Cay Holbrook, Associate Professor, Educational and Counselling Psychology and Special Education, University of British Columbia (UBC), (604) 822-2235

Co-Investigator: Ms. Erika Forster, Doctoral student, Educational and Counselling Psychology and Special Education, UBC, (604)

Dear Parent or Guardian:

Many people often express concerns about braille readers' reading speeds. With these concerns in mind, we have developed this reading study designed to try to find ways of helping to improve their braille-reading skills. We are interested in assessing the effects of using a reading intervention called "repeated reading" on oral reading fluency (i.e., reading speed and accuracy). The purpose of this letter is to ask your permission to include your child in the participant selection process for our study to determine if the repeated reading intervention may be helpful for him or her. Listed below are several aspects of this project that you need to know.

Purpose:
The purpose of this research is to evaluate the usefulness of this reading intervention in helping braille-reading students improve their oral reading fluency. The purpose of the initial screening and participant selection for this research study is to see whether this intervention would be suitable for your son or daughter. If this intervention appears to be suitable for your son or daughter, he or she will be eligible for participation in this study.
Study Procedures:
Participation in the initial screening and participant selection for this research study would involve your son or daughter doing some brief reading assessments with your current Teacher of Students with Visual Impairments (TVI) to determine whether the repeated reading intervention would be suitable for him or her. These assessments would involve asking your son or daughter to read approximately four short stories for two minutes each and then tell his or her TVI about what happened in each story. Your son or daughter's reading will be timed, and the whole process will be audiotaped. Your TVI would also be asked to complete two short questionnaires regarding your child's reading skill and literacy background and the nature of his or her visual impairment (e.g., ophthalmological diagnosis) to help us decide if the repeated reading intervention would be helpful. Should the repeated reading intervention appear suitable for your child, he or she will be eligible to participate in the full study.

Participation in the full study would involve implementing the repeated reading intervention. This intervention would involve your child reading short passages several times in one 15-minute session to his or her TVI, 3 times per week for approximately 5 weeks (the intervention would take approximately 45 minutes per week in total). All of the sessions would be audiotaped and take place in a quiet room at your child's school.

Each session of the Repeated Reading intervention would take 15 minutes to complete and involves the following steps:

1. Your child would reread a short story aloud three times while the TVI would provide some specific corrective feedback if your child made any errors. Your child would read the short story for 2 minutes each time, and the TVI would record the number of words your child read correctly.

2.
Your child's comprehension of the passage would also be assessed by asking him/her to orally recall what he/she can remember about the story, and this process would take 1 minute.

3. The Repeated Reading session would conclude with your child reading another short story aloud once for 2 minutes.

Two weeks after the 5-week intervention period, the intervention would be carried out 3 more times (i.e., 45 minutes in total) to help determine if your child was able to maintain any improvements gained during the intervention.

Any training or help your TVI or child needs in carrying out the Repeated Reading intervention will be provided by Erika Forster, either over the phone, via the internet (e.g., through online demonstrations), or, if necessary, during face-to-face meetings at your child's school.

Potential risks:
Potential risks of participating in the repeated reading intervention may include your child becoming more aware of his or her reading challenges.

In the event that your child is ineligible for the proposed full study, the Principal Investigator and Co-investigator will, at your invitation, offer additional, more individualized resources to assist in enhancing your child's reading abilities.

Following participation in the full study, should the repeated reading intervention not help to improve your child's reading speed, the Principal Investigator and the Co-investigator will, at your invitation, offer additional, more individualized supports to assist in enhancing your child's reading abilities.

Potential benefits:
It is hoped that the process involved in this research study will encourage your child and improve his or her reading skills in response to the guided reading practice. If you choose to participate, you will receive a report of your child's results on all of the assessments that are part of the research study.
Confidentiality:
We recognize your right to privacy should you decide to give consent for your child to participate in this process. Your identity, and the identity of your child, his or her school, and TVI, will be kept strictly confidential. No identifying information will appear in any written or oral presentation of this research. Should you decide to participate in this study, all information collected for research purposes will be identified only by code number and kept in a locked filing cabinet in Dr. Holbrook's office at UBC. Only Dr. Holbrook and Erika Forster will have access to the information stored at UBC.

Remuneration/Compensation:
There are no costs or remuneration associated with participation in this study.

Contact for information about the study:
If you have any questions or desire further information with respect to this process, you may, at any time, contact Dr. Holbrook at (604) 822-2235 or cay.holbrook@ubc.ca or Erika Forster at (604)

Contact for concerns about the rights of research subjects:
If at any time you have any concerns about your treatment or rights as a research participant, you may contact the Research Subject Information Line in the UBC Office of Research Services at 604-822-8598 or at RSIL@ors.ubc.ca.

Consent:
Participation in the research is completely voluntary, and you may refuse to have your child participate or withdraw your child from the study at any time, even after signing the consent form. Your child may also choose to withdraw from this study at any time. If you or your child choose not to participate or withdraw from this study, there will be no consequences relating to his or her educational program. The results from this study will be presented in a dissertation at UBC and potentially at conferences and in a journal article. However, your identifying information and that of your child and his or her TVI will remain confidential.
Also, we will respect your child's wishes as to whether or not he or she wants to participate. Refusing to participate will not jeopardize your child's education in any way.

In addition to your consent and your child's assent, we will need to receive signed consent forms from your TVI and school district representative before we can begin (if consent is given, they will send their forms directly to Erika Forster).

Please sign and date the attached slip if you give your permission for your son or daughter to participate in this research study. If your child also assents to participate in this study, would you then kindly send this signed parental consent form and the signed child assent form directly to Erika Forster in the addressed, stamped envelope?

Thank you very much for considering this request.

Sincerely,

Cay Holbrook, Ph.D.
Associate Professor

Erika Forster (doctoral student)

PARENT CONSENT FORM

Study Title: "Investigating the effects of a repeated reading intervention for increasing oral reading fluency with primary, braille-reading students using curriculum-based measurement within a Response to Intervention framework"

Principal Investigator: Dr. Cay Holbrook, Associate Professor, Department of Educational and Counselling Psychology, and Special Education, University of British Columbia, 2125 Main Mall, Vancouver, B.C. V6T 1Z4, Phone: (604) 822-2235, e-mail: cay.holbrook@ubc.ca

I have read and understand the attached letter regarding the study entitled "Investigating the effects of a repeated reading intervention for increasing oral reading fluency with primary, braille-reading students using curriculum-based measurement within a Response to Intervention framework". I have also kept copies of both the letter describing the study and this permission slip.

Yes, my son/daughter has my permission to participate.
Parent's signature: _______________________________
Parent's name: _______________________________
Son/Daughter's name: _______________________________
Date: _______________________________

(Please RETURN THIS PORTION directly to Erika Forster in the postage-paid envelope. Please KEEP the rest of the document for your records)

I have read and understand the attached letter regarding the study entitled "Investigating the effects of a repeated reading intervention for increasing oral reading fluency with primary, braille-reading students using curriculum-based measurement within a Response to Intervention framework". I have also kept copies of both the letter describing the study and this permission slip.

Yes, my son/daughter has my permission to participate.

Parent's signature: _______________________________
Parent's name: _______________________________
Son/Daughter's name: _______________________________
Date: _______________________________

Appendix D
Teacher Consent Form

The University of British Columbia
Educational and Counselling Psychology and Special Education

Teacher Consent Form for both the participant screening and selection process and participation in the full study

Investigating the effects of a repeated reading intervention for increasing oral reading fluency with primary, braille-reading students using curriculum-based measurement within a Response to Intervention framework

Principal Investigator: Dr. Cay Holbrook, Associate Professor, Educational and Counselling Psychology and Special Education, University of British Columbia (UBC), (604) 822-2235

Co-Investigator: Ms. Erika Forster, Doctoral student, Educational and Counselling Psychology and Special Education, UBC, (604)

Dear Teacher for Students with Visual Impairments:

Professionals and parents often express concerns about braille readers' reading speeds.
With these concerns in mind, we have developed this reading study designed to try to find ways of helping to improve their braille-reading skills. We are interested in assessing the effects of using a reading intervention called "repeated reading" on oral reading fluency (i.e., reading speed and accuracy). The purpose of this letter is to invite you to participate in the initial screening and participant selection for this research study and in the full study if it appears that your student will benefit from the use of this intervention. You are being invited to participate because of your experience teaching braille to primary school students. This research is part of Erika Forster's graduate degree and will be part of her dissertation. Listed below are several aspects of this project that you need to know.

Purpose:
The purpose of this research is to evaluate the usefulness of a reading intervention designed to help braille-reading students in the primary grades improve their oral reading fluency.

Study Procedures:

Participation in the participant screening and selection part of the study:
The participant screening and selection part of the study involves determining if the repeated reading intervention may be suitable for your student. Participation in this initial screening and participant selection for this research study would involve your completing two very short questionnaires (about your student's reading skills and the nature of his or her visual impairment) and your carrying out brief preliminary reading assessments to determine whether the repeated reading intervention would likely be suitable for the student. These brief preliminary assessments involve approximately four timed two-minute oral reading fluency assessments and four timed one-minute assessments in which the student orally retells what happened in the reading passage.
The Co-investigator, Erika Forster, would assist you in developing the skills necessary to administer these assessments. Should the repeated reading intervention appear suitable for your student, he or she will be eligible to participate in the full study.

Before we can consider including your student in the study, we need to obtain consent from his or her parents and assent from the child. If you are interested in participating in this research study, we would appreciate it if you would:

1. forward the attached consent form to your district representative for his or her signature. If he or she consents to allowing us to conduct the study in your district, he or she will send us the signed consent form directly.

2. forward the parent consent form and assent form to your student's parents. If the parents and child would like to participate in this study, they will send us the signed (consent and assent) forms directly.

If we receive signed consent forms from you, the parents, and your district representative and a signed assent form from the child, then the child may be included in the study. You are eligible to participate with more than one student; however, we must receive signed consent forms from you, the parents, and the district representative, and a signed assent form from the child, with respect to each child.

Participation in the full study:
The repeated reading intervention is the focus of the full study, and this intervention would involve your student reading short passages several times in one session over a 15-minute period, 3 times per week for approximately 5 weeks (i.e., three 15-minute Repeated Reading sessions per week for a total of approximately 45 minutes per week).

The Repeated Reading intervention would take 15 minutes to complete each time and involves the following steps:

1. Students would reread a short story aloud three times while you correct them if they make errors.
Students would read the short story for 2 minutes each time, and you would record the number of words they read correctly.

2. The students' comprehension of the passage would also be assessed by your asking students to orally recall what they can remember about the reading passage. This process would take 1 minute.

3. The Repeated Reading session would conclude with the students reading another short story aloud once for 2 minutes.

All of the sessions would be audiotaped and take place in a quiet room within your student's school. Two weeks after the 5-week intervention period, the intervention would be carried out 3 more times (i.e., 45 minutes in total) to help determine if students are able to maintain any improvements gained during the intervention.

Any training or help you or your student need in carrying out the Repeated Reading intervention will be provided by Erika Forster, either over the phone, via the internet (e.g., through online demonstrations), or, if necessary, during face-to-face meetings at your student's school.

Potential risks:
In the event that your student is ineligible for the proposed full study, the Principal Investigator and Erika Forster will, at your discretion, offer additional, more individualized supports to assist in enhancing your student's reading abilities. If your student participates in the full study, and the repeated reading intervention does not improve your student's oral reading fluency, the Principal Investigator and Co-investigator will, at your invitation, offer additional, more individualized supports to assist in enhancing your student's reading abilities.

Potential benefits:
It is hoped that the process involved in both the initial screening and participant selection and the full study will encourage your student and improve his or her reading skills.
If you choose to participate in any part of the study, you will receive a report of your student's results on the assessments that are part of this research study.

Confidentiality:
We recognize your right to privacy should you decide to give consent to participate in this process. Your identity, and the identity of your student and his or her school, will be kept strictly confidential. No identifying information will appear in any written or oral presentation of this research. Should you decide to participate in this research study, all information collected for research purposes will be identified only by code number and kept in a locked filing cabinet in Dr. Holbrook's office at UBC. Only Dr. Holbrook and the Co-investigator will have access to the information stored at UBC.

Remuneration/Compensation:
There are no costs or remuneration associated with participation in this study.

Contact for information about the study:
If you have any questions or desire further information with respect to this process, you may, at any time, contact Dr. Holbrook at (604) 822-2235 or cay.holbrook@ubc.ca or Erika Forster at (604)

Contact for concerns about the rights of research subjects:
If at any time you have any concerns about your treatment or rights as a research participant, you may contact the Research Subject Information Line in the UBC Office of Research Services at 604-822-8598 or at RSIL@ors.ubc.ca.

Consent:
Participation in the research is completely voluntary, and you may withdraw from the study at any time, even after signing the consent form. The results from this study will be presented in a dissertation at UBC and potentially at conferences and in a journal article. However, your identifying information and that of your student will remain confidential. Your signature below indicates that you consent to participate in this study and have received a copy of this consent form for your own records.
Please sign and date the attached slip if you would like to participate in this research study, and kindly send the signed forms to Erika Forster in the pre-paid envelope.

Thank you very much for considering this request.

Sincerely,

Cay Holbrook, Ph.D.
Associate Professor

Erika Forster (doctoral student)

TEACHER CONSENT FORM

I have read and understand the attached letter regarding the study entitled "Investigating the effects of a repeated reading intervention for increasing oral reading fluency with primary, braille-reading students using curriculum-based measurement within a Response to Intervention framework". I have also kept copies of both the letter describing the study and this permission slip.

Yes, I would like to participate in this research study.

Please print your name: _______________________________
Participant's signature: _______________________________
Date: _______________________________
Please print your name: Participant’s  signature:  Date:  262  Appendix E School District Administrator Consent Form The University of British Columbia Educational and Counselling Psychology and Special Education School District Administrator Representative Consent Form for both the participant screening and selection process and participation in the full study Investigating the effects of a repeated reading intervention for increasing oral reading fluency with primary, braille-reading students using curriculum-based measurement within a Response to Intervention framework  Principal Investigator: Dr. Cay Holbrook, Associate Professor Educational and Counselling Psychology and Special Education, University of British Columbia (UBC), (604) 822-2235 Co-Investigator: Ms. Erika Forster, Doctoral student, Educational and Counselling Psychology and Special Education, UBC, (604) Dear School District Administrator Representative: Administrators, parents, and Teachers for Students with Visual Impairments (TVIs) often express concerns about braille readers’ reading speeds. With these concerns in mind, we have developed this reading study designed to try to find ways of helping to improve their braille-reading skills. We are interested in assessing the effects of using a reading intervention called “repeated reading” on oral reading fluency (i.e., reading speed and accuracy). This research is part of Erika Forster’s graduate degree and will be part of her dissertation. A TVI and a primary braille-reading child in your school district are being invited to participate in this reading study. The purpose ofthis letter is to askfor your consent to allow this study to take place in your school district. The TVI will be invited to participate in the initial screening and participant selection for this research study and in the full study if it appears that his or her student will benefit from the use of this intervention. 
The TVI is being invited to participate because of his or her experience teaching braille to primary school students. The child is being invited to participate in the study because, according to the TVI, he or she is reading slowly and/or making numerous errors. Listed below are several aspects of this project that you need to know.

Purpose:
The purpose of this research is to evaluate the usefulness of a reading intervention designed to help braille-reading students in the primary grades improve their oral reading fluency.

Study Procedures:

Participation in the participant screening and selection part of the study:
The participant screening and selection part of the study involves determining if the repeated reading intervention may be suitable for the student. Participation in this initial screening and participant selection for this research study would involve the TVI completing two very short questionnaires about his or her student's reading skills and the nature of his or her visual impairment. In addition, the TVI would be asked to carry out brief preliminary reading assessments to determine whether the repeated reading intervention would likely be suitable for the student. These brief preliminary assessments involve approximately four timed two-minute oral reading fluency assessments and four timed one-minute assessments in which the student orally retells what happened in the reading passage. Any training or help the TVI or the participant may need in carrying out the Repeated Reading intervention will be provided by Erika Forster, either over the phone, via the internet (e.g., through online demonstrations), or, if necessary, during face-to-face meetings at the child's school. Should the repeated reading intervention appear suitable for the student, he or she will be eligible to participate in the full study.
TVIs are eligible to participate with more than one student; however, the researchers must receive signed consent forms from you, the parents, and the TVI and a signed assent form from the child with respect to each child.

Participation in the full study:

The repeated reading intervention is the focus of the full study, and this intervention would involve the student reading short passages several times in one session over a 15-minute period, 3 times per week for approximately 5 weeks (i.e., three 15-minute Repeated Reading sessions per week for a total of approximately 45 minutes per week). The Repeated Reading intervention would take 15 minutes to complete each time and involves the following steps:

1. Students would reread a short story aloud three times while the TVI provides some specific corrective feedback if they make errors. Students would read the short story for 2 minutes each time, and the TVI would record the number of words they read correctly.

2. The students' comprehension of the passage would also be assessed by the TVI asking students to orally recall what they can remember about the reading passage, and this process would take one minute.

3. The Repeated Reading session would conclude with the students reading another short story aloud once for 2 minutes.

All of the sessions would be audiotaped and take place in a quiet room within the student's school. Two weeks after the 5-week intervention period, the intervention would be carried out 3 more times (i.e., 45 minutes in total) to help determine if students are able to maintain any improvements gained during the intervention. Any training or help you need in carrying out the Repeated Reading intervention will be provided by Erika Forster. Before we can consider including the student in the study, we need to obtain consent from you (the district administrator) and the child's parents, and assent from the child.
If you give your permission for this research to be conducted at this school, we would appreciate it if you would sign and forward the bottom portion of this consent form to the researcher in the addressed, prepaid envelope. If we receive signed consent forms from you, the parents, and the TVI and a signed assent form from the child, then the child may be included in the study.

Potential risks:

In the event that the student is ineligible for the proposed full study or the repeated reading intervention does not improve the student's oral reading fluency, the Principal Investigator and Erika Forster will, at the invitation of the TVI or parent, offer additional, more individualized resources to assist in enhancing the student's reading abilities.

Potential benefits:

It is hoped that the process involved in both the initial screening and participant selection and the full study will encourage the student and improve his or her reading skills. If the child participates in any part of the study, the TVI and parents will receive a report of the student's results on the assessments that are part of this research study.

Confidentiality:

We recognize your right to privacy should you decide to give consent to allow this research study to be conducted in your district. Your identity, and that of your district, TVI, student, and his or her parents and school, will be kept strictly confidential. No identifying information will appear in any written or oral presentation of this research. Should you decide to allow this research study to be conducted in your district, all information collected for research purposes will be identified only by code number and kept in a locked filing cabinet in Dr. Holbrook's office at UBC. Only Dr. Holbrook and the Co-Investigator will have access to the information stored at UBC.

Remuneration/Compensation:

There are no costs or remuneration associated with participation in this study.
Contact for information about the study:

If you have any questions or desire further information with respect to this process, you may, at any time, contact Dr. Holbrook at (604) 822-2235 or cay.holbrook@ubc.ca or Erika Forster at (604)

Contact for concerns about the rights of research subjects:

If at any time you have any concerns about your treatment or rights as a research participant, you may contact the Research Subject Information Line in the UBC Office of Research Services at 604-822-8598 or at RSIL@ors.ubc.ca.

Consent:

Participation in the research is completely voluntary, and you may withdraw from the study at any time, even after signing the consent form. The results from this study will be presented in a dissertation at UBC and potentially at conferences and in a journal article. However, your identity, and that of your district, TVI, student, and his or her parents and school, will be kept strictly confidential.

Your signature below indicates that you consent to allow this study to take place in your school district and have received a copy of this consent form for your own records. Please sign and date the attached slip if you consent to allow this research to be conducted in your school district, and mail it directly to Erika Forster in the addressed, stamped envelope. If you, the parents, and the TVI all consent with respect to this study, the child will be eligible to participate in the study.

Thank you very much for considering this request.

Sincerely,

Cay Holbrook, Ph.D., Associate Professor
Erika Forster (doctoral student)

SCHOOL DISTRICT ADMINISTRATOR REPRESENTATIVE

I have read and understand the attached letter regarding the study entitled "Investigating the effects of a repeated reading intervention for increasing oral reading fluency with primary, braille-reading students using curriculum-based measurement within a Response to Intervention framework". I have also kept copies of both the letter describing the study and this permission slip.
Yes, I give permission for this research study to be conducted in this school district.

Yes, I also give permission for Erika Forster to work directly with the TVI and his or her student and/or provide additional reading intervention resources as part of this research study, as requested by the TVI and as necessary.

Please print your name: _______________________

Position: _______________________

School district: _______________________

Your signature: _______________________

Date: ____________

(If you would like to give permission for this research study to take place in your district, please DETACH HERE AND RETURN THIS PORTION in the enclosed postage-paid envelope. Please KEEP the rest of this document for your records.)

SCHOOL DISTRICT ADMINISTRATIVE REPRESENTATIVE

I have read and understand the attached letter regarding the study entitled "Investigating the effects of a repeated reading intervention for increasing oral reading fluency with primary, braille-reading students using curriculum-based measurement within a Response to Intervention framework". I have also kept copies of both the letter describing the study and this permission slip.

Yes, I give permission for this research study to be conducted in this school district.

Yes, I also give permission for Erika Forster to work directly with the TVI and his or her student and/or provide additional reading intervention resources as part of this research study, as requested by the TVI and as necessary.
Please print your name: _______________________

Position: _______________________

School district: _______________________

Your signature: _______________________

Date: ____________


Appendix F

Student Participant Information Form

Name of Teacher of Students with Visual Impairments (TVI) completing the form: _______________________

Student name: _______________________

Date of birth: ____________  Age: ______

Grade: ______  Date form completed: ____________

Type of visual impairment (please provide the ophthalmological diagnosis): _______________________

Most recent corrected distance acuity: _______________________

Additional disabilities (please describe any additional disabilities that have been confirmed by an educational or medical diagnosis): _______________________

Years of formal braille-reading and writing instruction received by the student: ______

Does the student currently read braille exclusively?  yes ___  no ___

Does the student currently use print materials in school?  yes ___  no ___

If no, was there any time when the student used print?  yes ___  no ___

If yes, please indicate when he/she began to read braille exclusively: _______________________

Frequency of braille-reading and writing instruction (e.g., number of lessons per week) by a TVI: ______ per week

Average length of each lesson: ______

Does another staff member also provide instruction in braille-reading and writing?  yes ___  no ___

If yes, please indicate this person's position: _______________________

Frequency of braille instruction by this other person: ______ per week

Average length of each lesson: ______

Many thanks,
Erika Forster


Appendix G

Sample Treatment Integrity Checklist (Baseline)

Student: ____________  TVI: ____________  Date: ____________

Story title: ____________

STEP 1. READING ASSESSMENT PROCEDURES: FIRST STORY

yes  no

1. Organize the materials in a quiet room.

Materials:
□ Examiner's print copy of the story: FIRST STORY
□ Student's braille copy of the story: FIRST STORY
□ Stopwatch
□ Pen or Pencil
□ Tape Recorder and tape
□ Calculator

2. Turn ON the tape recorder. Locate it for optimal recording.

3. Greet the student.
Provide a general outline of what will happen by stating the following exactly as it is written: "I will be asking you to read and timing you with my stopwatch. (Allow the student to explore the watch briefly if he/she requests.) You may ask questions after you are finished reading the story."

4. Give the student the braille story.

5. Say the following directions exactly as they are written: "Please read this story out loud. If you get stuck, I will tell you the word so you can keep reading. When I say 'stop,' I may ask you to tell me about what you read, so do your best reading. Start here (show with finger on the braille page). Begin."

6. START the stopwatch (you will be timing for 2 MINUTES) when the student says the FIRST word of the passage.
   o The title is NOT counted.

**If the student fails to say the first word after 6 seconds, TELL them the word and MARK it as incorrect, THEN start your stopwatch.

7. Follow along on the print copy of the story.
   • On your print copy of the story, put a slash ( / ) through any ERRORS.
   • ERRORS are words you helped the student say after seconds, words the student read incorrectly, or words the student omitted.
   • REPEATED or INSERTED words are OK and are NOT counted as errors.

8. At the end of TWO MINUTES, place a bracket ( ] ) after the last word stated by the student, and say "Stop".

9. Take back the student's story.

10. Record the TOTAL number of correct words at the bottom of your print copy of the story.
   • TOTAL NUMBER OF CORRECT WORDS = total words read minus errors
   • DIVIDE the total words by 2 to find the words per MINUTE
   • OR, if the student finishes the story before the 2 minutes are up, PRORATE the reading speed score as per the formula on your print copy of the story

11. DO NOT tell the student his/her number of words read correctly. If asked for feedback, say something like: "Good work" OR "Keep trying your best".

COMPREHENSION CHECK:

1.
Say the following directions exactly as they are written: "Please tell me all about what you just read. Try to tell me everything you can. Begin."

2. START your stopwatch and begin timing for ONE MINUTE.
   • Prompt the student if he/she does not say anything for 3 seconds, by saying: "Try to tell me everything you can" (you can only use this prompt ONCE).
   • Say "STOP" if the student does not say anything or gets off track for 5 seconds.

3. At the end of ONE MINUTE, say "STOP".

4. GO on to STEP 2.

Student: ____________  TVI: ____________  Date: ____________

Story title: ____________

STEP 2. READING ASSESSMENT PROCEDURES: SECOND STORY

yes  no

1. Organize the materials in a quiet room.

Materials:
□ Examiner's print copy of the story: SECOND STORY
□ Student's braille copy of the story: SECOND STORY
□ Stopwatch
□ Pen or Pencil
□ Tape Recorder and tape
□ Calculator

2. If not still on, TURN ON the tape recorder. Locate it for optimal recording.

3. (Greet the student.) Provide a general outline of what will happen by stating the following exactly as it is written: "I will be asking you to read and timing you with my stopwatch. (Allow the student to explore the watch briefly if he/she requests.) You may ask questions after you are finished reading the story."

4. Give the student the braille story.

5. Say the following directions exactly as they are written: "Please read this story out loud. If you get stuck, I will tell you the word so you can keep reading. When I say 'stop,' I may ask you to tell me about what you read, so do your best reading. Start here (show with finger on the braille page). Begin."

6. START the stopwatch (you will be timing for 2 MINUTES) when the student says the FIRST word of the passage.
   o The title is NOT counted.

**If the student fails to say the first word after 6 seconds, TELL them the word and MARK it as incorrect, THEN start your stopwatch.

7. Follow along on the print copy of the story.
   • On your print copy of the story, put a slash ( / ) through any ERRORS.
   • ERRORS are words you helped the student say after seconds, words the student read incorrectly, or words the student omitted.
   • REPEATED or INSERTED words are OK and are NOT counted as errors.

8. At the end of TWO MINUTES, place a bracket ( ] ) after the last word stated by the student, and say "Stop".

9. Take back the student's story.

10. Record the TOTAL number of correct words at the bottom of your print copy of the story.
   • TOTAL NUMBER OF CORRECT WORDS = total words read minus errors
   • DIVIDE the total words by 2 to find the words per MINUTE
   • OR, if the student finishes the story before the 2 minutes are up, PRORATE the reading speed score as per the formula on your print copy of the story

11. DO NOT tell the student his/her number of words read correctly. If asked for feedback, say something like: "Good work" OR "Keep trying your best".

COMPREHENSION CHECK:

1. Say the following directions exactly as they are written: "Please tell me all about what you just read. Try to tell me everything you can. Begin."

2. START your stopwatch and begin timing for ONE MINUTE.
   • Prompt the student if he/she does not say anything for 3 seconds, by saying: "Try to tell me everything you can" (you can only use this prompt ONCE).
   • Say "STOP" if the student does not say anything or gets off track for 5 seconds.

3. At the end of ONE MINUTE, say "STOP".

4. GO on to STEP 3.

Student: ____________  TVI: ____________  Date: ____________

Story title: ____________

STEP 3. READING ASSESSMENT PROCEDURES: THIRD STORY

yes  no

1. Organize the materials in a quiet room.

Materials:
□ Examiner's print copy of the story: THIRD STORY
□ Student's braille copy of the story: THIRD STORY
□ Stopwatch
□ Pen or Pencil
□ Tape Recorder and tape
□ Calculator

2. If not still on, TURN ON the tape recorder. Locate it for optimal recording.

3. (Greet the student.) Provide a general outline of what will happen by stating the following exactly as it is written: "I will be asking you to read and timing you with my stopwatch.
(Allow the student to explore the watch briefly if he/she requests.) You may ask questions after you are finished reading the story."

4. Give the student the braille story.

5. Say the following directions exactly as they are written: "Please read this story out loud. If you get stuck, I will tell you the word so you can keep reading. When I say 'stop,' I may ask you to tell me about what you read, so do your best reading. Start here (show with finger on the braille page). Begin."

6. START the stopwatch (you will be timing for 2 MINUTES) when the student says the FIRST word of the passage.
   o The title is NOT counted.

**If the student fails to say the first word after 6 seconds, TELL them the word and MARK it as incorrect, THEN start your stopwatch.

7. Follow along on the print copy of the story.
   • On your print copy of the story, put a slash ( / ) through any ERRORS.
   • ERRORS are words you helped the student say after seconds, words the student read incorrectly, or words the student omitted.
   • REPEATED or INSERTED words are OK and are NOT counted as errors.

8. At the end of TWO MINUTES, place a bracket ( ] ) after the last word stated by the student, and say "Stop".

9. Take back the student's story.

10. Record the TOTAL number of correct words at the bottom of your print copy of the story.
   • TOTAL NUMBER OF CORRECT WORDS = total words read minus errors
   • DIVIDE the total words by 2 to find the words per MINUTE
   • OR, if the student finishes the story before the 2 minutes are up, PRORATE the reading speed score as per the formula on your print copy of the story

11. DO NOT tell the student his/her number of words read correctly. If asked for feedback, say something like: "Good work" OR "Keep trying your best".

COMPREHENSION CHECK:

1. Say the following directions exactly as they are written: "Please tell me all about what you just read. Try to tell me everything you can. Begin."

2. START your stopwatch and begin timing for ONE MINUTE.
   • Prompt the student if he/she does not say anything for 3 seconds, by saying: "Try to tell me everything you can" (you can only use this prompt ONCE).
   • Say "STOP" if the student does not say anything or gets off track for 5 seconds.

3. At the end of ONE MINUTE, say "STOP".

4. GO on to STEP 4.

Step 4. REPORTING THE DATA CHECKLIST

After the FIRST, SECOND, OR THIRD STORY:

yes  no

1. EMAIL Erika at erika.forster@shaw.ca
   • List the number of CORRECT WORDS read and the number of ERRORS for the FIRST STORY.
   • List the number of CORRECT WORDS read and the number of ERRORS for the SECOND STORY.
   • List the number of CORRECT WORDS read and the number of ERRORS for the THIRD STORY.
   • Note any additional questions or comments about the student's reading or anything else about the reading experience.

2. MAIL the following materials AS SOON AS POSSIBLE to Erika in the self-addressed, stamped envelope.

Materials to enclose:
□ 3 completed "Reading Assessment Procedures" checklists (for the FIRST, SECOND, and THIRD stories)
□ 1 completed "Reporting the Data" checklist
□ 1 tape of the assessment (enclose a 2nd tape if it was needed)

Thank you!


Appendix H

Social Validity Questionnaire for TVIs

Please rate these responses using a 5-point Likert scale:

strongly disagree  1   2   3   4   5  strongly agree

1. Improving oral reading fluency is an important goal for my student.

2. It is not important to monitor oral reading fluency on a regular basis (e.g., at least monthly).

3. The repeated reading intervention goals are/were appropriate.

4. Repeated reading is an effective intervention to improve oral reading fluency.

5. The goals were consistent with the student's Individual Education Plan.

6. The repeated reading intervention was difficult to carry out.

7. Training activities were well organized and helpful to me.

8.
Carrying out the intervention caused unanticipated problems in my work with the student.

9. The UBC investigator showed respect for my school program for the student.

10. The outcomes of the repeated reading intervention were beneficial to my student's reading skill development.

11. The outcomes of the repeated reading intervention were not beneficial to my student's overall learning and development.

12. Overall, the repeated reading intervention improved our reading program.


Appendix I

Participants' Self-perception as Readers Questionnaire

Dear Teacher for Students with Visual Impairments,

A) Please read these directions aloud to the student:

The researchers for this study would like to know more about how you feel about your reading. There is no right or wrong answer. They just want to know what you think. I am going to read you some sentences about being a reader. Please use numbers from 1 to 5 to tell me how much you agree or disagree with any of the sentences that I read to you.

After I read a sentence:
• If you say 1, that means that you disagree with the sentence.
• If you say 2, that means that you disagree with the sentence, but just not as much.
• If you say 3, that means that you don't really agree or disagree with the sentence (neutral).
• If you say 4, that means that you kind of agree with the sentence.
• If you say 5, that means that you agree with the sentence.

(Please provide additional help and instruction as needed.)

B) Please circle his/her responses:

How do you feel about your reading?        disagree            agree

1. I like reading.                          1   2   3   4   5
2. I am a good reader.                      1   2   3   4   5
3. I am a fast reader.                      1   2   3   4   5
4. Reading is difficult.                    1   2   3   4   5
5. It is hard for me to read quickly.       1   2   3   4   5
6. Practice helps me read better.           1   2   3   4   5
7. I can learn to read faster.              1   2   3   4   5
8. I understand what I read.                1   2   3   4   5
9. I can learn to be a better reader.       1   2   3   4   5
10.
Reading is fun.                             1   2   3   4   5

Please return this questionnaire to Erika in the postage-paid envelope provided. Thank you!


Appendix J

UBC Ethics Review Board Certificate of Approval

The University of British Columbia
Office of Research Services
Behavioural Research Ethics Board
Suite 102, 6190 Agronomy Road, Vancouver, B.C. V6T 1Z3

CERTIFICATE OF APPROVAL - MINIMAL RISK RENEWAL

PRINCIPAL INVESTIGATOR: Cay Holbrook
DEPARTMENT: UBC/Education/Educational & Counselling Psychology, and Special Education
UBC BREB NUMBER: H07-02501

INSTITUTION(S) WHERE RESEARCH WILL BE CARRIED OUT:
Institution: N/A   Site: N/A

Other locations where the research will be conducted:
On site at the participants' respective schools. TVIs in any and all BC school districts will be approached through the PRCVI list serve. TVIs throughout Canada and the US will be approached through publicly available list serves.

CO-INVESTIGATOR(S):
Ruth Ervin
Erika M. Forster

SPONSORING AGENCIES: N/A

PROJECT TITLE: Investigating the effects of a repeated reading intervention for increasing oral reading fluency with primary, braille-reading students using curriculum-based measurement within a Response to Intervention framework

APPROVAL DATE: December 16, 2008
EXPIRY DATE OF THIS APPROVAL: December 16, 2009

The Annual Renewal for Study has been reviewed and the procedures were found to be acceptable on ethical grounds for research involving human subjects.

Approval is issued on behalf of the Behavioural Research Ethics Board:
Dr. M. Judith Lynam, Chair
Dr. Ken Craig, Chair
Dr. Jim Rupert, Associate Chair
Dr. Laurie Ford, Associate Chair
Dr. Daniel Salhani, Associate Chair
Dr. Anita Ho, Associate Chair
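The scoring rule used throughout the Appendix G checklists (correct words = total words read minus errors; divide by the 2-minute timing to get words per minute, prorating if the student finishes early) can be sketched as follows. This is a minimal illustration only: the checklists leave the actual proration formula on the examiner's print copy of each story, so the proration shown here (scaling correct words by elapsed seconds) is an assumption based on common curriculum-based measurement practice, and the function name is hypothetical.

```python
def words_correct_per_minute(total_words_read, errors, seconds_elapsed, timing_seconds=120):
    """Score one timed oral reading as words correct per minute (WCPM).

    ASSUMPTION: when the student finishes before the full timing window,
    the score is prorated by scaling to 60 seconds of reading time.
    """
    correct = total_words_read - errors
    # Never credit more time than the timing window allows.
    elapsed = min(seconds_elapsed, timing_seconds)
    return correct * 60.0 / elapsed

# Full 2-minute timing: equivalent to dividing correct words by 2.
print(words_correct_per_minute(80, 4, 120))  # 38.0
# Early finish after 90 seconds: prorated to a per-minute rate.
print(words_correct_per_minute(50, 2, 90))   # 32.0
```

Either way, the TVI would report only the raw counts (correct words and errors) as described in Step 4, and the per-minute conversion is a simple follow-on calculation.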
