Effects of individual components of school-wide positive behaviour support on rates of problem behaviour Turri, Mary Gwendolyn 2013

Your browser doesn't seem to have a PDF viewer, please download the PDF to view this item.

Item Metadata

Download

Media
24-ubc_2014_spring_turri_mary.pdf [ 364.52kB ]
Metadata
JSON: 24-1.0165689.json
JSON-LD: 24-1.0165689-ld.json
RDF/XML (Pretty): 24-1.0165689-rdf.xml
RDF/JSON: 24-1.0165689-rdf.json
Turtle: 24-1.0165689-turtle.txt
N-Triples: 24-1.0165689-rdf-ntriples.txt
Original Record: 24-1.0165689-source.json
Full Text
24-1.0165689-fulltext.txt
Citation
24-1.0165689.ris

Full Text

EFFECTS OF INDIVIDUAL COMPONENTS OF SCHOOL-WIDE POSITIVE BEHAVIOUR SUPPORT ON RATES OF PROBLEM BEHAVIOUR

by

MARY GWENDOLYN TURRI
B.A. (Honours), The University of Calgary, 2010

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF ARTS in THE FACULTY OF GRADUATE AND POSTDOCTORAL STUDIES (School Psychology)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

December 2013

© Mary Gwendolyn Turri, 2013

Abstract

The purpose of this study was to examine the outcomes of the implementation of the individual components of School-wide Positive Behaviour Support (SWPBS) on rates of office discipline referrals. Data from 415 schools at varying years of implementation were drawn from a database of U.S. schools implementing SWPBS and using the School-wide Information System (SWIS; May et al., 2008) to enter and manage office discipline referrals. The implementation levels of each component of SWPBS were analyzed in relation to their effect on school-level rates of office discipline referrals, using multilevel modeling. In contrast to previous research, analyses did not indicate statistically significant associations between office discipline referrals and any of the measured SWPBS components. Possible reasons for these results include a restricted range of implementation in the sample and the lack of control for number of years implementing, as possible decreases in previous years were not captured in these analyses. Results are discussed in terms of implications for future research.

Preface

This thesis is the original, unpublished work of Mary G. Turri, under the supervision of Dr. Kent McIntosh, with methodological and statistical expertise provided by Dr. Sterett Mercer. Personnel from the Educational and Community Supports research unit at the University of Oregon retrieved the archived School-Wide Information System (SWIS) data.
This study was conducted with approval from the University of British Columbia's Behavioural Research Ethics Board (H11-01800-A001).

Table of Contents

Abstract
Preface
Table of Contents
List of Tables
Acknowledgements
Chapter 1: Introduction
1.1 Components of SWPBS
1.2 Evidence Base for SWPBS
1.3 The Present Study
Chapter 2: Method
2.1 Participants and Settings
2.2 Measures
2.3 Procedure
2.4 Design and Analyses
Chapter 3: Results
Chapter 4: Discussion
4.1 Limitations
4.2 Implications and Future Research
References

List of Tables

Table 1 Descriptive statistics for evaluation year variables
Table 2 Correlation matrix for the continuous study variables
Table 3 Parameter estimates for models of the predictors and covariates of ODR rates

Acknowledgements

I would like to thank my supervisor, Dr. Kent McIntosh, for his expertise and insight, and for his invaluable mentorship and guidance throughout my Masters program; I feel very fortunate to have had the opportunity to work with him. I am grateful also to Dr. Sterett Mercer for his methodological and statistical expertise, exceptional patience, and for providing support over and above that expected of a committee member throughout this project. Many thanks also to Dr. Laura Grow for her insight and support of this project as a member of my research committee, and to Dr. William McKee for his encouragement along the way. I would like to acknowledge my esteemed colleagues and fellow members of the SUPPORT Lab, Christina Moniz, Sara Greflund, and Sophie Ty, as well as Leigh Yang, Marita Partanen, Karissa Martens, Meghan Topjian, Teira Stauth, Jennifer Prentice, Robyn Goplen, and Tessa Johnson.
Their steadfast support, friendship, and encouragement throughout this process have been essential to me. Lastly, I am above all else grateful to my family. They have taught me, either directly or indirectly, to see the value in hard work, to look for the good in every situation, and to always be myself. I thank them for their unwavering belief in me; it is an endless source of encouragement and a reminder of their unconditional love.

Chapter 1: Introduction

Managing problem behaviour continues to be a top priority for schools. In the 2009-2010 school year, 85% of American public schools reported school-based crime incidents, and in 2011, 7% of high school students indicated that they had been threatened or injured with a weapon while at school (Robers, Kemp, Truman, & Snyder, 2013). The amount of time teachers must spend addressing problem behaviour disrupts learning and may lead to decreased instructional time (Scott & Barrett, 2004; Sprague et al., 2001). Moreover, displays of problem behaviour are associated with poor student outcomes (Malecki & Elliot, 2002; Wentzel, 1993). As a result, effective and efficient interventions are needed to prevent problem behaviour and encourage prosocial behaviour in schools.

The presence of these types of behaviours in schools has often been met with punitive and exclusionary measures, such as suspensions, expulsions, verbal reprimands, and "zero tolerance" campaigns (Skiba & Peterson, 1999). One assumption behind these techniques is that the punitive measure will subsequently encourage more adaptive behaviour. Although such policies are widespread, there is a lack of evidence regarding their effectiveness (Skiba & Peterson, 1999; Sugai & Horner, 2002, 2006).

In light of this challenging climate, there is a need for schools to develop a more constructive atmosphere, valuing corrective and positive over punitive approaches to addressing problem behaviour (Day, Golench, MacDougal, & Beals-Gonzalez, 2002).
Placing an emphasis on prevention, targeting the school as a whole, and focusing on specific environmental changes are more promising techniques for supporting students (Mayer, 1995; Sugai & Horner, 2002). One viable approach that has received extensive support in the literature is School-wide Positive Behaviour Support (SWPBS; Horner, Sugai, Todd, & Lewis-Palmer, 2005).

SWPBS is a framework for establishing a positive school climate through the implementation of evidence-based strategies and organizational systems for supporting valued behavioural and academic outcomes for all students (Sugai & Horner, 2002). Embedded in the framework is a continuum of support based on student need, conceptualized as a three-tiered prevention model (Sugai & Horner, 2006). The first tier, universal support, encompasses the whole school, emphasizes prevention, and is predominately characterized by the direct teaching and acknowledgement of school-wide behaviour expectations (Sugai & Horner, 2002). The intensity of support increases at each tier to provide effective and efficient levels of support for students who may not be reached by school-wide initiatives (Horner et al., 2005). Achieving these levels of support requires the use and integration of several intervention and systematic features, making SWPBS a multicomponent approach.

1.1 Components of SWPBS

The foundation of the SWPBS approach is the implementation of seven components that characterize the universal (tier 1) system of support, which are grounded in longstanding principles of effective behaviour support for students.
These components are: 1) defining a small number of memorable, positively stated school-wide expectations for socially appropriate behaviour, 2) designing a system to actively teach these expectations to all students, 3) using systematic procedures for acknowledging students for displaying the expected behaviour, 4) designing a continuum of consistently implemented consequences for responding to problem behaviour, 5) developing a system of data collection, and using these data to monitor student progress and make decisions about program development, 6) obtaining active support from the school administrator, who views SWPBS as a priority for the school, and 7) obtaining support from the school district (Horner et al., 2004).

1.1.1 Defining Expectations

The first component of implementation is defining school-wide expectations for socially appropriate behaviour. McKevitt and Braaksma (2004) describe a process for defining expectations and emphasize that the expectations should be 1) limited to a small number, typically three to five, 2) positively stated, 3) memorable for the students, 4) relevant to all locations in the school, and 5) used consistently by all staff members. Typical school-wide expectations may be: Be Respectful, Be Responsible, Be Safe. These expectations are positively stated, in that they tell students what to do instead of what not to do (e.g., don't yell, no bullying), and are relevant to any school location. Importantly, the chosen expectations should be relevant to the school culture and the specific needs of the student population (H. P. George, Kincaid, & Pollard-Sage, 2009). For example, schools that emphasize religious values may include expectations regarding reverence. Schools with a high population of students identifying as Aboriginal may define their expectations to be responsive to the students' culture.
The ability to design expectations that can be adapted to fit the specific needs of a school is a feature of SWPBS that differentiates it from more structured, manualized school-based interventions.

1.1.2 Teaching Expectations

The second component is the direct teaching of the defined expectations, so that students know what is expected of them. Providing clear and direct instruction in behaviour skills is not unlike the procedures used to teach academic skills; students should be taught directly, with practice and opportunities to test their knowledge, and they should receive feedback as the developing use of their skills is supervised by school personnel (Sugai & Horner, 2002). A first step to teaching the skills is to operationalize the expectations for each location in the school; for example, school personnel must define what being respectful "looks like" in the hallway, versus the cafeteria or classroom (H. P. George et al., 2009). To do so, schools can design an expectations matrix that outlines examples of appropriate behaviour for each expectation, for each location in the school. An example of being respectful in the hallway may be using whisper voices; in the cafeteria it might be saying please and thank you to the lunch staff; and in the classroom it might be raising one's hand before asking a question.

An effective and efficient way of expanding the matrix as a teaching tool is to design lesson plans for each skill. Schools can then keep a database of the plans that can be used from year to year and adapted for different classrooms or student groups, without having to create brand new plans. Langland, Lewis-Palmer, and Sugai (1998) provided a template for designing lessons for teaching social behaviours in a manner similar to teaching academic skills that can easily be used at the school-wide or classroom level.
For these lesson plans, the first step is to clearly define the name of the skill using consistent language, with the intent of developing a common vocabulary among students and staff. Second, clear and observable examples of the behaviour are given, along with a range of nonexamples, which describe situations where the particular skill would not be used; in doing so, students can distinguish between the appropriate and inappropriate uses of a skill, and can reinforce the appropriate use by changing the nonexample to a positive example. Third, students engage in practice (e.g., role-play) activities of situations involving the use of the skill. Finally, procedures are put in place to help students as they develop their skills moving forward, by providing prompts for expected behaviours before situations with a high probability of problem behaviour, reminding students of the expectations during situations when a particular skill should be in use but is not, and praising students when they are appropriately using the skills. The use of this teaching format was investigated using a multiple baseline design across two middle school classrooms. Using lesson plans for teaching respectfulness, the results demonstrated an immediate and sustained reduction in disrespectful behaviours in both classrooms following the implementation of the strategy.

1.1.3 Acknowledgement Systems

Designing a system to acknowledge students for displaying desired behaviour is a critical component for reinforcing the school's behaviour expectations, congratulating students for appropriate behaviour, and encouraging future use. Typically, school personnel establish systems to increase rates of acknowledgement through rewards systems using tangible rewards, in addition to the use of any verbal praise strategies. The structure of a tangible rewards system provides a consistent way for all school personnel to acknowledge desired behaviour across settings.
Moreover, rewards systems assist in encouraging the desired behaviour through increasing positive student-staff interactions within an instructional model whereby the student's display of appropriate behaviour is paired with a positive social consequence (Lewis & Sugai, 1999). Although rewards systems may vary based on the school context and can be tailored to the needs and resources of individual schools, typical systems within a SWPBS framework take the form of a token economy. This framework involves students receiving tokens contingent upon exhibiting desired behaviours and then exchanging the tokens for prizes or privileges. The most common token economy used in SWPBS implementation is a ticket system. In these systems, staff provide tickets when they witness students following one of the school's behaviour expectations. The tickets are small sheets of paper, typically with the school's behaviour expectations written on them, as well as a space for the student's and referring staff member's names. Offering a ticket is contingent upon the student displaying the behaviour, and a key component is for the staff member to indicate the behaviour expectation that was observed by providing specific verbal feedback to the student as to why they are receiving the ticket (Taylor-Greene et al., 1997; Turnbull et al., 2002). Students exchange the tickets, either through accumulation or a lottery, for tangible rewards or privileges. Ticket systems have been effectively used in many documented school-wide efforts to manage challenging behaviour (Metzler, Biglan, Rusby, & Sprague, 2001; Taylor-Greene et al., 1997; Turnbull et al., 2002).

1.1.4 Responding to Problem Behaviour

Although defining, teaching, and acknowledging expectations are core components for preventing occurrences of problem behaviour, it is essential to have procedures in place to respond to behavioural violations when they do occur.
Sugai and Horner (2009) provide six key guidelines for developing a continuum of consequences within a school-wide approach to discipline for responding to problem behaviour. First, it is important to define and teach, with clear and varied examples, what behavioural violations are, just as behavioural expectations are taught. Second, at a systems level, it is essential to determine the types of behaviours that should be managed in the classroom or by a school staff member, versus behaviours that should be managed by the administrator, and to determine consistent strategies for how the behaviours are managed. Third, in order to facilitate progress monitoring, school teams should develop a record form to document occurrences of behavioural violations. Fourth, it is important to initiate supports for students who violate rules more frequently, through increased monitoring, precorrections, additional instruction, and adult support. Fifth, the consequences for behavioural violations should be chosen using a functional approach, and sixth, additional supports at the secondary and tertiary levels (tiers 2 and 3) should be established for students who require support in excess of that provided at the school-wide level. In general, occurrences of problem behaviour within a SWPBS framework are viewed similarly to errors made in an academic context; the error is an indication that the student needs to be re-taught the expected behaviour and provided additional feedback as they learn the skill (Simonsen, Sugai, & Negron, 2008).

1.1.5 Monitoring and Decision Making

The use of data is a hallmark of the SWPBS approach; collecting and tracking student data is essential for monitoring student progress and making informed decisions about program development. However, to be confident in data-based decisions, several conditions, outlined by Sugai and Horner (2002), must be met.
First, the data chosen must be reliable and valid, and must also serve a clear purpose for collection. Second, there must be procedures and systems in place to effectively enter and summarize the data in a way that maximizes the information but minimizes effort on the part of school personnel. Finally, processes must be implemented to ensure the use of the data for decision making, such as team meetings where data are reviewed.

There are several types of data that a school may choose to collect to help make decisions regarding student behaviour. Common sources of data include attendance, lateness, suspensions, and standardized achievement tests. In schools implementing SWPBS, the most common data collected are office discipline referrals (ODRs). A single ODR represents a school staff member witnessing a student violating one of the school's behaviour expectations and formally documenting the incident and corresponding consequence (Sugai, Sprague, Horner, & Walker, 2000). ODRs are a key metric for measuring occurrences of problem behaviour, and reduced ODRs are associated with additional positive school-level outcomes. For example, in their report of the effects of SWPBS implementation in an elementary school, Scott and Barrett (2004) reported a baseline calculation (i.e., pre-SWPBS implementation) of an average of 20 minutes of lost instructional time for every ODR. Following the second year of SWPBS implementation, the reported decrease in ODRs yielded an average gain of 10,620 minutes (equal to 29.5 school days) of instructional time compared to baseline, implying that lower levels of problem behaviour may result in increased access to instructional time, in addition to the administrative time and resources gained by spending less time managing office referrals.

1.1.6 Administrator Support

The school administrator is a key figure in SWPBS implementation efforts.
It is essential that the administrator is seen as an integral member of the SWPBS leadership team, bears responsibility for the allocation of time and resources to the practice, and establishes the priority of the practice amongst other school initiatives (Horner et al., 2004). The administrator sets the tone for the school team in terms of their commitment to the practice, and so it is important that they have a good knowledge base of SWPBS practices, and that they help encourage staff during implementation, through modeling and reinforcement (Handler et al., 2007). Active administrator support has been identified as a critical component of the successful implementation of SWPBS (Kincaid, Childs, Blase, & Wallace, 2007), and lack of administrator support has been identified as a contributor to staff resistance to SWPBS (Lohrmann, Forman, Martin, & Palmieri, 2008).

1.1.7 District Support

As with administrator support, the support of the school district is an integral component of successful and sustained implementation of SWPBS. Establishing a team at the district level to oversee and support school teams helps to ensure consistent and ongoing training; access to funding, resources, and technical assistance; and the continued priority of the practice (H. P. George & Kincaid, 2008). There must be an understanding at the district level that the adoption of SWPBS requires long-term development, a commitment to change (Handler et al., 2007), and collecting data regarding school-level problem behaviour (Horner et al., 2004).

1.2 Evidence Base for SWPBS

As a whole, SWPBS has demonstrated effectiveness related to positive student outcomes, including reduced problem behaviour and suspensions as well as increased academic and social-emotional outcomes. George, White, and Schlaffer (2007) presented a case study of an urban elementary school with high safety needs for its students, being located in a high-crime area with low community support.
The school's implementation of SWPBS grew gradually from the initial location target of the school cafeteria to full implementation at the whole-school level. After the first year of implementation, office referrals dropped from 1717 at baseline to 702, and suspensions dropped dramatically from 845 at baseline to 85. Additionally, increased community support was observed following the first year of implementation. Another pre-post intervention case study, of a rural middle school, from Taylor-Greene et al. (1997) revealed a 42% reduction in office discipline referrals following SWPBS implementation.

SWPBS has also shown positive effects in studies using more rigorous experimental procedures, such as longitudinal randomized controlled designs. Horner and colleagues (2009) conducted a randomized wait-list control analysis of the effectiveness of SWPBS implemented by school personnel in elementary schools with no previous exposure to the SWPBS approach. The results indicated that, compared to the wait-list control group, schools implementing SWPBS had higher perceived levels of school safety. Additionally, preliminary evidence suggested increased reading achievement and lower rates of office discipline referrals compared to non-implementing schools. Also using a randomized control design, Bradshaw, Mitchell, and Leaf (2010) examined the effectiveness of SWPBS implemented in elementary schools across a 5-year period. Results indicated that schools implementing SWPBS had reduced office discipline referrals and suspensions.

In addition to behavioural outcomes such as decreased office discipline referrals, several studies have demonstrated the relation between SWPBS implementation and additional student outcomes, including social-emotional outcomes and academic behaviours and achievement. Waasdorp, Bradshaw, and Leaf (2012) examined the effectiveness of SWPBS implementation on bullying and peer rejection outcomes in elementary schools.
Results indicated that schools implementing SWPBS had lower levels of teacher-reported bullying and peer rejection compared to non-implementing schools. Luiselli, Putnam, Handler, and Feinberg (2005) described results from the implementation of SWPBS in an urban elementary school. Descriptive results showed increases in standardized test percentile ranks (increases of 18 and 25 percentage points for reading comprehension and mathematics, respectively) compared to pre-intervention scores. These effects were seen in addition to decreased office discipline referrals and suspensions. Algozzine and Algozzine (2007) showed increased on-task academic behaviour (hand raising and paying attention), as well as decreased off-task behaviour, in classrooms within a school implementing SWPBS compared to classrooms in a school without a systematic school-wide approach to behaviour support.

Although the majority of the literature on SWPBS comes from the United States, the research base for SWPBS in Canada is continuing to grow. For example, McIntosh, Bennett, and Price (2011) presented a case study of schools implementing SWPBS in an urban school district in British Columbia, Canada. Schools implementing SWPBS and with available office discipline referral data showed a reduction in referrals and decreased numbers of students at risk for behavioural infractions, compared to the year previous to the evaluation year. Analysis of academic outcomes revealed that schools implementing SWPBS with moderate to high fidelity met or exceeded outcomes of schools implementing with low fidelity on standardized provincial achievement tests, also exceeding the district average on the majority of the tests. Finally, students from moderate to high implementing schools had higher perceived levels of school safety on the majority of assessed domains, compared to schools implementing with low fidelity and the school district average.
The evidence base of SWPBS shows that when implemented with fidelity as a whole, SWPBS is effective in achieving valued student outcomes. However, there is little research to suggest whether all components are needed to produce these outcomes. For instance, it may be that the components are differentially effective, with some being core components or "active ingredients," and others being supporting or perhaps even extraneous features (Backer, 2002; Embry & Biglan, 2008).

1.3 The Present Study

Given the widespread use of SWPBS, it is worthwhile to isolate and examine each component of the framework. Indeed, determining the active ingredients in an intervention framework such as SWPBS is essential for strengthening the intervention and maximizing the potential for achieving valued outcomes (Embry & Biglan, 2008). In their report of the seven implementation features of SWPBS described above, Horner and colleagues (2004) indicated the importance of empirically determining the relationship between the implementation fidelity of each specific component and the range of student outcomes. In a recent study, Molloy and colleagues (2013) conducted such an analysis, examining the active ingredients of SWPBS by comparing three types of ODRs (i.e., three problem behaviour categories) at the student level between schools implementing SWPBS at various levels of implementation, using multilevel modeling. The results indicated that ODRs were consistently lower among schools that had higher levels of implementation on the Expectations Taught, Reward System, and Violation System components. Specifically, defiance-related ODRs were associated with all three components, aggression-related ODRs were related to the Reward System and Violation System components, and drug-related ODRs were associated with the Expectations Taught and Reward System components. However, mixed findings were reported for the other components.
Although the results indicate consistency among the SWPBS components most related to reduced problem behaviour, the nature of the analysis was such that only students receiving at least one ODR in the school year were included; with these inclusion criteria, the authors noted that the number of students included represented 32% of the schools' populations, meaning that the majority of students (68%) were excluded from the analysis. Although this approach has merits in terms of analyzing the relative importance of the SWPBS components for this particular population of students, it is important to consider the entire student population, given that SWPBS is a whole-school approach. Additionally, although the analysis controlled for student and school demographics in the evaluation year, it was not possible to control for previous-year SWPBS implementation or ODRs, given the difficulties of tracking individual student data from year to year. Despite these limitations, this emerging evidence supports the notion of differential effectiveness of the SWPBS components, and also suggests that the effectiveness of each component may vary by the measured outcome.

The purpose of the present study was to extend the research by Molloy and colleagues (2013) by examining the differential effectiveness of each individual component of SWPBS in reducing problem behaviour. The present study extends previous research by utilizing two consecutive years of data to control for SWPBS implementation and ODRs in the year previous to the year evaluated. The following research question was evaluated: 1) Controlling for school demographic variables and previous-year ODR rates and implementation fidelity, do the individual SWPBS components show differential effectiveness in reducing problem behaviour?
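This research question implies a two-level regression of school-level ODR rates on component fidelity, with schools nested in districts. One plausible specification is sketched below; the symbols and the set of covariates shown are illustrative assumptions, not the thesis's actual model, which is specified in the Design and Analyses section.

```latex
% School i in district j; one SWPBS component's fidelity as predictor,
% with previous-year ODR rate and fidelity as covariates (illustrative).
\begin{aligned}
\text{ODRrate}_{ij} &= \beta_{0j}
  + \beta_{1}\,\text{Component}_{ij}
  + \beta_{2}\,\text{ODRrate}^{\,\text{prev}}_{ij}
  + \beta_{3}\,\text{Fidelity}^{\,\text{prev}}_{ij}
  + \beta_{4}\,\text{Demographics}_{ij} + e_{ij} \\
\beta_{0j} &= \gamma_{00} + u_{0j}
\end{aligned}
```

The district-level random intercept $u_{0j}$ captures the clustering of schools within the 161 districts in the sample.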
It was hypothesized that some components of the framework would show a greater relation to reduced office discipline referral rates than others, thus suggesting greater importance for certain factors in terms of implementation.

Chapter 2: Method

2.1 Participants and Settings

The sample was composed of 415 schools, representing 161 school districts within 21 U.S. states. Demographic data from the Institute of Education Sciences' National Center for Education Statistics (NCES) Common Core of Data were available for 97% of the sample. School-reported grade level indicated that the majority of the sample were elementary schools (grades K-6; n = 286, 69%), followed by middle schools (grades 6-9; n = 60, 15%), elementary-high schools (grades K-12; n = 45, 11%), and high schools (grades 9-12; n = 24, 6%). Mean school enrolment was 548 students (SD = 316, range = 34-2367, n = 415). NCES-reported school urbanicity information indicated that 20% (n = 82) of schools were located in a rural area, 19% (n = 78) in a town, 29% (n = 121) in a suburb, and 29% (n = 120) in a city (3%, n = 14, were unreported). Additionally, the average proportion of students receiving free or reduced-price lunch was 56% (SD = 24%, range = 5%-100%, n = 401).

2.2 Measures

2.2.1 Office Discipline Referrals

Office discipline referrals (ODRs) were used as a measure of school-level problem behaviour. Although there are some limitations to the use of ODRs in research, particularly variability in referral rates among schools (Wright & Dusek, 1998) and among individual personnel within schools (Kern & Manz, 2004), there is literature to support the use of ODRs as a viable index of problem behaviours in schools.
In an extensive review by Irvin and colleagues (2004), evidence was documented regarding the use of ODRs as a valid index of a school's behavioural climate, based on reviewed associations between observed indicators of problem behaviour in schools and levels of ODRs.  For example, the results of the review revealed the consistency of teacher perceptions of behaviour with actual numbers of ODRs across time; similarly, ODRs were positively associated with student perceptions of and attitudes towards misbehaviour (e.g., attitudes regarding drug use, types of rebellious behaviour), and negatively associated with adaptive outcomes (e.g., believing in rules, being committed to education).  ODRs were also found to be more likely to occur when preceded by higher numbers of negative student-teacher interactions.  In terms of academic achievement, GPA was found to be negatively associated with certain ODR behaviours for sixth grade boys, including fighting, harassment, violent threats, and general non-violent misbehaviour.  Taken together, the studies reviewed by Irvin and colleagues demonstrated that levels of ODRs were strongly correlated with measures of problem behaviour.  Correspondingly, ODRs were negatively associated with adaptive behaviours, thus supporting the use of ODRs as a viable index of school-level problem behaviour.

In the current study, the rate of ODRs, calculated as the number of ODRs per 100 students per school day, was used as the dependent variable, representing an index of school-level problem behaviour that accounts for differences in enrolment and school days.  These data were calculated in the School-wide Information System (SWIS; May et al., 2008).  SWIS is a web-based data management system for schools and is used to record and monitor office discipline referrals and suspensions.
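The rate metric described above (ODRs per 100 students per school day) can be sketched in a few lines; the function name and the figures in the usage example are illustrative, not drawn from the study data:

```python
def odr_rate_per_100(total_odrs, enrolment, school_days):
    """Office discipline referrals per 100 students per school day.

    Dividing by enrolment and by school days makes rates comparable
    across schools of different sizes and calendar lengths.
    """
    return total_odrs / enrolment / school_days * 100

# e.g., a hypothetical school with 300 ODRs, 500 students, 180 school days
rate = odr_rate_per_100(300, 500, 180)  # ~0.33 ODRs per 100 students per day
```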
For 33 schools in the sample, this calculation was not available because of missing data for the number of school days in 2010-2011, 2011-2012, or both.  To retain these schools, the ODR rate was calculated using the formula: (Total Number of ODRs / Total Enrolment / SWIS average number of school days) × 100.

2.2.2 SWPBS Fidelity of Implementation.

Fidelity of implementation of specific components of SWPBS was assessed through the School-wide Evaluation Tool (SET; Sugai, Lewis-Palmer, Todd, & Horner, 2001).  The SET is an evaluation of the seven key components of SWPBS: 1) school-wide expectations are defined (Expectations Defined), 2) these expectations are taught to all children in the school (Behavioral Expectations Taught), 3) rewards are provided for following the expectations (On-going System for Rewarding Behavioral Expectations), 4) a consistently implemented continuum of consequences for problem behaviour is put in place (System for Responding to Behavioral Violations), 5) problem behaviour patterns are monitored and the information is used for ongoing decision-making (Monitoring & Decision-Making), 6) an administrator actively supports and is involved in the effort (Management), and 7) the school district provides support to the school in the form of functional policies, staff training opportunities, and data collection options (District Support) (Horner et al., 2004).  The 28-item measure is completed by an external evaluator and includes observations, permanent product reviews, and interviews with administrators, teachers, and students.  Each item is scored on a scale from 0 to 2, for a possible total of 56 points.  To meet criteria for adequate implementation fidelity, it is suggested that schools must score at or above 80% on both the total score and the Behavioral Expectations Taught subscale (Horner et al., 2004).
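As a rough illustration, the dual 80% criterion can be expressed as a small check over subscale percentages; the helper below is hypothetical (not part of the SET materials) and follows the text's convention of averaging the seven subscale percentages rather than summing raw points:

```python
def meets_set_criterion(subscale_pcts, taught_pct, threshold=0.80):
    """SET fidelity criterion: the mean of the seven subscale percentages
    AND the Expectations Taught percentage must both reach the threshold."""
    overall = sum(subscale_pcts) / len(subscale_pcts)
    return overall >= threshold and taught_pct >= threshold

# A school with 0% on the rewards subscale but high scores elsewhere
scores = [1.00, 0.90, 0.00, 0.88, 1.00, 0.94, 1.00]
meets_set_criterion(scores, taught_pct=0.90)  # True: overall average is ~81.7%
```

Note how a 0% subscale can still be offset by the remaining six, which is exactly the property discussed in the surrounding text.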
Therefore, it is possible for a school to meet the fidelity criterion with a low score on one or more of the individual components.  For example, if a school has not established a rewards system, it would receive a score of 0% on the associated subscale.  However, if it receives an adequate summed percentage across the remaining six subscales, it would still reach the 80% criterion for fidelity.  For example, a school could score 100% on the Expectations Defined subscale, 90% on Behavioural Expectations Taught, 0% on On-going System for Rewarding Behavioural Expectations, 88% on System for Responding to Behavioural Violations, 100% on Monitoring & Decision-Making, 94% on Management, and 100% on District Support, for a total summed percentage of 572; divided by the total number of subscales, 572/7 = 81.7%.

The psychometric properties of the SET, reported by Horner et al. (2004), are strong, with an internal consistency of α = .96, mean test-retest reliability of 97.3%, mean interobserver agreement of 99%, and construct-related evidence for validity, determined through a correlation of r = .75 with a similar measure, the Effective Behaviour Support Self-assessment Survey (EBSSAS; Sugai, Horner, & Todd, 2000).  An analysis of the sensitivity of the SET, comparing scores pre- and post-SWPBS implementation, revealed that the SET is sensitive to training in SWPBS (t = 7.63, p < .001).  Therefore, the SET is seen as both a reliable and valid indicator of SWPBS implementation fidelity.  The SET was used in the current study to determine the level of implementation for each of the SWPBS components, while controlling for implementation of the other components.

2.2.3 School demographic data.
Five variables were used in the study to control for a range of school demographic factors, drawn from either NCES or school-reported in SWIS: 1) school-level poverty, indicated by the percentage of students in each school receiving free or reduced price lunch (NCES), 2) whole-school enrolment (NCES; 14 cases school-reported due to missing NCES data), 3) urbanicity (e.g., city, suburb, town, rural; NCES), 4) grade levels served (school-reported), and 5) school district (school-reported).  The most recent data available from NCES (2010-2011 school year) were used in analyses.

2.3 Procedure

The data for the study were drawn from a database located at the University of Oregon, consisting of a set of schools implementing SWPBS across the United States and contributing data to SWIS.  Publicly available data from the NCES Common Core of Data program were accessed to provide demographic information about the schools.  Schools in the database were included in the study if they reported both SET and SWIS data for the 2010-2011 and 2011-2012 (evaluation year) school years.  First, from the entire dataset across these two years, any duplicate SET entries for each school were removed; the SET entry retained was either 1) the entry with the most recent date, or 2) the entry with the highest SET ID number, if all were entered on the same date.  Two further cases were removed due to missing enrolment information, because without this information an ODR rate (the dependent variable) could not be calculated.  In total, 422 schools were retained following this process and before the removal of outliers.

2.4 Design and Analyses

Prior to analysis, the data were screened for outliers on the ODR rate calculation variables for both 2010-2011 and 2011-2012 (evaluation year).  Using a z-score ≥ 3 criterion, 7 cases were deleted.
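A minimal sketch of this kind of z-score screen, assuming a numeric vector of ODR rates (the function name is illustrative):

```python
import numpy as np

def zscore_outliers(x, cutoff=3.0):
    """Flag cases whose absolute z-score meets or exceeds the cutoff."""
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std()  # population SD (ddof=0, NumPy's default)
    return np.abs(z) >= cutoff

# Cases flagged here would be inspected (and possibly deleted) before modeling
flags = zscore_outliers([0.2, 0.3, 0.25, 0.28, 4.9], cutoff=3.0)
```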
For one case identified as an outlier, an analysis of the case and corresponding variables indicated that it was not a true outlier, but rather a data entry error.  To retain this case, the ODR rate from SWIS was replaced with the calculated ODR rate, as described above.  Following this process, 415 cases were retained, representing the final dataset.

Tests of assumptions indicated that the data violated the assumption of normality.  To account for this violation, a Box-Cox transformation (Box & Cox, 1964) was conducted to determine the optimum lambda value to apply as a power transformation to the dependent variable.  Following procedures by Osborne (2010), a lambda value of -1.40 was deemed optimal, yielding a skew value of -.020, compared to the original skew value of 2.080.  All analyses were conducted with this transformation applied to the dependent variable.  Additionally, grand-mean centering was applied to each of the continuous independent variables and covariates prior to analysis, with the exception of urbanicity.  The urbanicity variable, originally containing twelve levels, was collapsed into four groups coded from least to most urban (rural, town, suburb, city) and analyzed as an ordered continuous variable for ease of interpretation.

Because the data were nested (schools within school districts), multilevel modeling was used to test the effects of implementation fidelity of each specific SWPBS component on rates of ODRs, while controlling for implementation of the other components and school demographic variables.  A school district ID variable was included as the sole level-2 variable to account for nesting.  The independent variables were the 7 SET subscales, assessed during the evaluation year.
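The transformation and centering steps described above can be sketched as follows; the Box-Cox formula follows Box and Cox (1964), and the variable names and sample values are illustrative:

```python
import numpy as np

def boxcox_transform(x, lmbda):
    """Box-Cox power transformation: (x**lmbda - 1) / lmbda,
    or log(x) in the limiting case lmbda = 0. Requires positive x."""
    x = np.asarray(x, dtype=float)
    return np.log(x) if lmbda == 0 else (x ** lmbda - 1.0) / lmbda

def grand_mean_center(x):
    """Subtract the grand mean so that 0 represents the sample average."""
    x = np.asarray(x, dtype=float)
    return x - x.mean()

# Apply the lambda reported in the text to hypothetical, positively skewed rates
rates = np.array([0.10, 0.25, 0.50, 0.80, 2.50])
transformed = boxcox_transform(rates, lmbda=-1.40)
```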
The covariates were the previous year's ODR rates, SET implementation average score, and school demographic variables: total enrolment, percentage of students receiving free or reduced price lunch, grade levels served, and urbanicity.  All analyses were conducted using SPSS (v.21).  Effect size was measured using pseudo-R2, which represents the proportion of variance in the outcome accounted for in each model, compared to the baseline model (Raudenbush & Bryk, 2002).  To determine the proportion of variance accounted for by the 7 SET subscales incremental to the covariates, three models were fitted.  First, the baseline model was fitted, containing only the dependent variable and the level-2 variable (school district).  In Model 2, the covariates (previous year school demographic variables, ODR rates, and SET overall average score) were added.  In Model 3, the final fitted model, the 7 SET subscales were added.

Chapter 3: Results

Descriptive statistics for the predictors of ODR rates are reported in Table 1.  Pearson's correlation coefficients between each of the continuous variables are reported in Table 2.  There were no statistically significant associations between ODR rates and any of the SET subscales or the previous year SET overall average score.  ODR rates showed a statistically significant positive correlation with previous year ODR rates (r = .640, p < .01), and a statistically significant but small negative correlation with Enrolment (r = -.134, p < .01).  Enrolment was also negatively associated with previous year ODR rates, although the correlation was weak (r = -.111, p < .05).  Correlations among the 7 SET subscales and previous year SET overall average score were statistically significant and positive (r range = .117 to .466, all p < .05), with the exception of the correlation between District Support and Monitoring & Decision-Making, which was not statistically significant.
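The intraclass correlation and pseudo-R2 effect sizes used in these analyses can be computed directly from the estimated level-1 and level-2 variance components; a minimal sketch, with illustrative helper names:

```python
def icc(tau00, sigma2):
    """Intraclass correlation: proportion of total variance at level 2
    (here, the school district level)."""
    return tau00 / (tau00 + sigma2)

def pseudo_r2(tau00_base, sigma2_base, tau00_model, sigma2_model):
    """Proportional reduction in total variance relative to the baseline
    model (Raudenbush & Bryk, 2002)."""
    base_total = tau00_base + sigma2_base
    return (base_total - (tau00_model + sigma2_model)) / base_total
```

Because published variance components are rounded, back-of-envelope values computed this way (e.g., icc(0.007, 0.016) ≈ 0.30) will approximate but not exactly reproduce the reported figures.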
Percent free or reduced price lunch showed a statistically significant negative correlation with Enrolment (r = -.261, p < .01), but was not statistically significantly correlated with any of the other variables.

Parameter estimates for all covariates and independent variables in each model are reported in Table 3.  The intraclass correlation coefficient, calculated using the estimated variance components of the baseline model, was 0.31, indicating that 31% of the variance in ODR rates was at the school district level (level-2).  The pseudo-R2 for the covariates model, Model 2, indicated that previous year ODR rates, SET overall average score, and the included school demographic variables accounted for 48% of the variance in ODR rates.  After including the 7 SET subscales in the final fitted model, Model 3, 49% (an additional 1%) of the variance in ODR rates was accounted for.

For the final fitted model (Model 3), no statistically significant associations were observed between the dependent variable and any of the SET subscales: Expectations Defined, F(1, 374) = .033, ns; Expectations Taught, F(1, 373) = .083, ns; Rewards System, F(1, 390) = 0.834, ns; Violations System, F(1, 390) = 1.195, ns; Monitoring & Decision-Making, F(1, 395) = 1.249, ns; Management, F(1, 384) = .044, ns; and District Support, F(1, 391) = 1.571, ns.

Table 1
Descriptive statistics for evaluation year variables

Variable                           N    Min    Max   Mean  SD    % of schools at full (100%) implementation
SET Expectations Defined           415  .000   1.00  .931  .143  78%
SET Expectations Taught            415  .200   1.00  .928  .121  62%
SET Rewards System                 415  .000   1.00  .922  .168  74%
SET Violations System              415  .125   1.00  .895  .153  54%
SET Monitoring & Decision-Making   415  .500   1.00  .961  .098  82%
SET Management                     415  .1875  1.00  .908  .122  45%
SET District Support               415  .000   1.00  .908  .203  82%
SET Overall Average                415  .360   1.00  .923  .084  -
ODR Rates                          415  .020   4.60  .816  .762  -

Table 2
Correlation matrix for the continuous study variables

Variable                          1        2       3       4       5       6       7       8       9      10
1. ODR Rates                      __
2. Prev. Yr ODR Rates             .640**   __
3. SET: Exp. Defined              -.076    -.059   __
4. SET: Exp. Taught               -.046    -.054   .410**  __
5. SET: Violations System         -.031    .000    .367**  .305**  __
6. SET: Monitoring & Dec. Making  .008     .003    .218**  .305**  .153**  __
7. SET: Mgmt                      .017     .011    .273**  .413**  .255**  .466**  __
8. SET: District Support          -.067    -.010   .137**  .130**  .117*   .009    .160**  __
9. SET Prev. Yr: Overall Avg.     -.046    .022    .167**  .373**  .231**  .189**  .367**  .175**  __
10. Enrolment                     -.134**  -.111*  -.055   -.199** .042    -.069   -.071   -.025   -.072  __
11. Free/Reduced Price Lunch      .052     .061    .009    .029    -.071   -.041   -.092   -.069   .018   -.261**

**Correlation is significant at the 0.01 level (2-tailed); *Correlation is significant at the 0.05 level (2-tailed)
Note: Correlations were calculated with the non-transformed dependent variable (ODR rates)

Table 3
Parameter estimates for models of the predictors and covariates of ODR rates

Parameter                             Model 1           Model 2            Model 3
Fixed Effects
Intercept                             0.352** (0.010)   0.385** (0.023)    0.388** (0.023)
Previous Year ODR Rates                                 0.103** (0.007)    0.104** (0.007)
Previous Year SET Overall Average                       -0.009 (0.056)     -0.033 (0.065)
Enrolment                                               -0.000 (0.000)     -0.000 (0.000)
Percent Free/Reduced Lunch                              0.073** (0.028)    0.072* (0.028)
Urbanicity                                              -0.001 (0.006)     -0.001 (0.006)
Grade Level: Elementary School                          -0.060** (0.019)   -0.062* (0.019)
Grade Level: Middle School                              0.038 (0.023)      0.036 (0.023)
Grade Level: High School                                -0.025 (0.037)     -0.015 (0.038)
Grade Level: Elementary-High School                     0                  0
SET: Expectations Defined                                                  -0.008 (0.042)
SET: Expectations Taught                                                   0.015 (0.053)
SET: Rewards System                                                        0.034 (0.037)
SET: Violations System                                                     -0.044 (0.040)
SET: Monitoring & Decision-Making                                          0.069 (0.061)
SET: Management                                                            0.011 (0.054)
SET: District Support                                                      -0.034 (0.027)
Random Parameters
Intercept (School District)           0.007** (0.002)   0.004** (0.001)    0.004** (0.001)
Residual                              0.016** (0.001)   0.009** (0.001)    0.009** (0.001)
Model Fit Indices
-2 Log Likelihood                     -421.20           -667.93            -673.45
Pseudo-R2                                               0.479              0.490

Note: Standard errors are in parentheses. *p < .05; **p < .01

Chapter 4: Discussion

The purpose of the present study was to examine the differential effectiveness of each individual component of SWPBS in reducing problem behaviour.
It was hypothesized that some components of the framework would show a greater relation to reduced problem behaviour than others, thus suggesting greater importance for certain factors in terms of implementation.  The sample was composed of 415 schools, representing 161 school districts within 21 U.S. states.  Multilevel modeling was used to determine the effectiveness of each of the seven components of SWPBS in reducing school-level problem behaviour, using the metric of the number of office discipline referrals per 100 students per day, while controlling for district effects, school-level demographic variables, and previous ODRs and SWPBS implementation.  The results did not indicate any significant associations between any of the seven SWPBS components and rates of ODRs.

The results of the present study were not as expected, based on either the present hypothesis or previous research.  The evidence base for SWPBS when implemented as a whole is substantial, indicating that SWPBS is related not only to reduced ODRs, but also to a range of positive student outcomes.  Over a decade of research has shown reduced ODRs (e.g., Bradshaw et al., 2010; M. P. George et al., 2007; McIntosh et al., 2011; Taylor-Greene et al., 1997) and teacher-reported bullying incidents (Waasdorp et al., 2012), as well as increased perceptions of school safety (Horner et al., 2009; McIntosh et al., 2011), on-task student behaviour (Algozzine & Algozzine, 2007), and academic achievement (Luiselli et al., 2005).  Given the substantial evidence for the effectiveness of SWPBS as a comprehensive framework, it is logical to hypothesize that at least some of the individual components of the practice would also be related to similar outcomes, though perhaps to a lesser degree, and some perhaps more so than others.
Moreover, the individual components of SWPBS long predate the present conception of the framework; the components are not new ideas, and they are based on longstanding principles of effective behaviour support.

However, despite the theoretical underpinnings and logical hypotheses regarding the individual components of SWPBS, research on these specific components' relations to student outcomes within a SWPBS context is scarce.  Even so, the relative importance of some of the features has been implied.  For example, the School-wide Evaluation Tool (Horner et al., 2004), the fidelity of implementation measure used in the present analysis, indicates that the criterion for adequate fidelity requires a score of at least 80% on both the overall measure (including each of the 7 SWPBS components) and the Expectations Taught component.  The authors based their rationale for this criterion on case study observations indicating the low likelihood of behaviour change without explicit instruction on expected behaviours.  This implies a greater importance of the Expectations Taught component relative to the other individual components; however, the authors indicated that this criterion had yet to be empirically validated.

Results from the Molloy and colleagues (2013) study provided some evidence to empirically substantiate the SET criterion for adequate fidelity, indicating a relatively consistent finding that the Expectations Taught subscale was associated with lower ODR rates for two of the three measured types, defiance and drug use or possession.  The findings also suggested the relative importance of the Violations System component for aggressive and defiant type ODRs.  The Rewards System component was found to be the most consistent, relating to lower ODR rates for all three measured types.
Overall, the findings suggest the relative importance of teaching and acknowledging expectations, as well as the implementation of systematic methods for responding to problem behaviour when it does occur.  However, given the mixed findings reported for the additional components, further research is still warranted.

The present study was designed with several methodological and statistical strengths.  First, the data were drawn from a national dataset of schools implementing SWPBS across the U.S., with schools spanning 161 districts, maximizing the generalizability of the findings.  Second, the requirements that schools must meet to use SWIS maximize the reliability and validity of the collected data.  Schools must meet ten requirements, including allocating staff time for data entry and review, and commitment to training from a certified facilitator.  Third, the use of the SET as the measure of implementation fidelity was consistent across schools, and the measure has been shown to be psychometrically strong, with excellent inter-rater reliability averaging 99% agreement (Horner et al., 2004).  An additional benefit of using the SET is that the measure is completed by a trained external evaluator, minimizing any bias from school team members.  Statistically, using two years of consecutive data allowed for the control of SET scores and ODRs in the year previous to the evaluation.  Finally, the use of multilevel modeling accounted for nesting effects, allowing for a more precise analysis.

Despite the strengths of the present study, the results were not as hypothesized.  Given the strong evidence base for SWPBS, the difference in results compared to the Molloy and colleagues (2013) study, and the strong design of the present study, it was important to investigate potential reasons for why these results emerged.
First, as seen in Table 1, the obtained mean scores on each of the seven SET subscales were extremely high; all of the scores were above or close to 90%, with an overall SET implementation average score of 92% (SD = .084).  These patterns indicate a restriction of range for each of the assessed components, greatly limiting the variability between schools.  The SET scores from the Molloy and colleagues study also revealed range restriction, with a reported mean overall SET score of 90%; however, their standard deviation of 0.11 indicates slightly greater variability than the present study's value of 0.084, suggesting that the restricted range may have been further magnified in the current study.  Additionally, higher overall implementation levels were obtained in the present study, with 89% of schools meeting the SET criteria for adequate implementation (80% on the overall measure and 80% on the Expectations Taught component), compared to 83% of schools from Molloy and colleagues.  Finally, the percentage of schools in the present study achieving full (100%) implementation on each individual component was higher for the majority of components; the percentage of schools was 6 to 17% higher for five of the seven SET components in the present study compared to Molloy and colleagues.

One potential reason for the higher scores yielded in the present analysis is the years of data that were selected.  The data from the present analysis and the Molloy and colleagues (2013) study were drawn from the same database of schools; however, the data from Molloy and colleagues were drawn from the 2007-2008 school year, and the present analysis used data from 2011-2012, a difference of four years.  It is possible that a number of schools continued contributing data to the database throughout the years and concurrently worked towards higher levels of implementation over time, thus increasing their SET scores.
Similarly, with increased or sustained high implementation fidelity over time, ODRs may have decreased, leading to null results.  To illustrate, McIntosh, Horner, and Sugai (2009) reported ODR data across 12 years from a middle school in the United States.  In the first year of SWPBS implementation, ODRs dropped 47% from the previous year.  This reduction was not only sustained across the evaluated years as SWPBS implementation fidelity remained high, but ODRs continued to decline as years passed.

In terms of methodology, there were relative strengths and weaknesses of the present study and the Molloy and colleagues study.  As previously noted, Molloy and colleagues' analysis included only students with at least 1 ODR in the evaluation year, thus eliminating the majority of the student population.  Although it could be argued that including only students exhibiting problem behaviour is a strength in terms of the focus of SWPBS on supporting behaviour, including the entire school population, as was done in the present study, could also be seen as a strength, given that SWPBS is meant to target the school as a whole and not solely specific populations of students.  Additionally, including two consecutive years of SWPBS implementation fidelity and ODR data was a strength of the present study; without these data, the Molloy and colleagues study was unable to control for ODRs and SET scores in the year previous to the evaluation year.

4.1 Limitations

There are several limitations of the present study that are worth noting.  First, no data were available regarding the year of SWPBS implementation for schools.  Without this information, it is not possible to control for how many years schools had been implementing.  In terms of the structure of the SET, the nature of how the measure is scored limits the potential for variability in scores.
The facets of each component are scored on a 3-point scale (0 to 2), and there are a limited number of facets for each component.  For example, the Expectations Defined and District Support components have only two facets, with a maximum score of 4 points on each.  Although this scoring method lends itself to ease, efficiency, and objectivity for the examiner, it limits the range of possible scores, thus limiting the potential for variability between schools.  Finally, the design of this study was non-experimental.  The addition of a control group and experimental manipulation of the individual SET components would be the only way to ascertain causal relations between SWPBS and rates of ODRs, even if significant results had been obtained in the present study.

4.2 Implications and Future Research

Determining the active ingredients of SWPBS remains an important area of study, yet work in this area is scarce.  The present study and the Molloy and colleagues (2013) study are the only ones known to date, and the results of these analyses differed substantially.  As such, further research in this area is essential in terms of the implications for practice.  A larger active-ingredients literature base will allow for commonalities and differences to be observed between samples, with the potential of observing consistency between studies, allowing for greater confidence regarding the effects of specific components for implementation.  For example, replicating Molloy and colleagues' finding regarding rewards systems will be important for determining whether rewards systems should become a mandatory component of implementation, much like the SET Expectations Taught criterion.  As SWPBS continues to grow, determining the active ingredients of the practice will be essential for strengthening its current form and informing decisions regarding adaptation.
The adaptation of intervention frameworks like SWPBS is inevitable, whether in response to changes in resources or in an attempt to better fit a particular school or environment (Backer, 2002).  In light of this, it is important to draw on research to determine which elements of the practice are core components that cannot be adapted; that is, the elements whose removal or significant adaptation would be expected to decrease positive effects.  Additionally, this research would help identify which components are supporting features that further strengthen the core components but are not necessarily essential.  Lastly, it is possible that further research may reveal that some aspects of the practice are extraneous, which is valuable information to consider for streamlining the practice and increasing efficiency for educators.

It would be beneficial for future studies to utilize different measurement tools to assess SWPBS implementation with a wider range of scoring procedures to enhance variability.  Similarly, it may be beneficial to consider adapting the SET scoring methods to include a greater range of scoring options, allowing for more precise measurement and enhanced variability, consistent with recommendations noted by Molloy and colleagues (2013).  Schools with high and sustained implementation may benefit from an adapted version of the SET targeted towards higher-level outcomes for schools that have reached full implementation on the basic components.

In terms of outcomes, both the present study and the Molloy and colleagues (2013) study examined ODRs, but other outcomes should also be examined.  For example, it is important to determine the differential effectiveness of SWPBS components for suspensions, academic achievement, school safety, bullying, and other social-emotional outcomes, all of which have been shown to be positively influenced by the implementation of SWPBS as a whole.
Similarly, determining differential effectiveness by population is also warranted.  For example, studies may examine outcomes for schools at different levels of implementation, such as high-implementing versus low-implementing schools, and schools that have sustained implementation versus newly implementing schools.  Analyses by school type may also be beneficial for determining whether, for example, certain components of the practice are more important in an elementary versus high school context.

References

Algozzine, K., & Algozzine, B. (2007). Classroom instructional ecology and school-wide positive behavior support. Journal of Applied School Psychology, 24, 29-47. doi: 10.1300/J370v24n01_02

Backer, T. E. (2002). Finding the balance: Program fidelity and adaptation in substance abuse prevention: Executive summary of a state-of-the-art review. Rockville, MD: Center for Substance Abuse Prevention, Substance Abuse and Mental Health Services Administration.

Box, G. E., & Cox, D. R. (1964). An analysis of transformations. Journal of the Royal Statistical Society. Series B (Methodological), 211-252.

Bradshaw, C. P., Mitchell, M. M., & Leaf, P. J. (2010). Examining the effects of schoolwide positive behavioral interventions and supports on student outcomes. Journal of Positive Behavior Interventions, 12, 133-148. doi: 10.1177/1098300709334798

Day, D. M., Golench, C. A., MacDougal, J., & Beals-Gonzalez, C. A. (2002). School-based violence prevention in Canada: Results of a national survey of policies and programs, 1995-02. Ottawa, ON.

Embry, D. D., & Biglan, A. (2008). Evidence-based kernels: Fundamental units of behavioral influence. Clinical Child and Family Psychology Review, 11(3), 75-113. doi: 10.1007/s10567-008-0036-x

George, H. P., & Kincaid, D. (2008). Building district-level capacity for positive behavior support. Journal of Positive Behavior Interventions, 10(1), 20-32. doi: 10.1177/1098300707311367

George, H. P., Kincaid, D., & Pollard-Sage, J. (2009). Primary-tier interventions and supports. In W. Sailor, G. Dunlap, G. Sugai & R. Horner (Eds.), Handbook of positive behavior support (pp. 375-394). New York: Springer.

George, M. P., White, G. P., & Schlaffer, J. J. (2007). Implementing school-wide behavior change: Lessons from the field. Psychology in the Schools, 44, 41-51. doi: 10.1002/pits.20204

Handler, M. W., Rey, J., Connell, J., Thier, K., Feinberg, A., & Putnam, R. (2007). Practical considerations in creating school-wide positive behavior support in public schools. Psychology in the Schools, 44, 29-39. doi: 10.1002/pits.20203

Horner, R. H., Sugai, G., Smolkowski, K., Eber, L., Nakasato, J., Todd, A. W., & Esperanza, J. (2009). A randomized, wait-list controlled effectiveness trial assessing school-wide positive behavior support in elementary schools. Journal of Positive Behavior Interventions, 11, 133-144. doi: 10.1177/1098300709332067

Horner, R. H., Sugai, G., Todd, A. W., & Lewis-Palmer, T. (2005). School-wide positive behavior support. In L. Bambara & L. Kern (Eds.), Individualized supports for students with problem behaviors: Designing positive behavior plans (pp. 359-390). New York: Guilford.

Horner, R. H., Todd, A. W., Lewis-Palmer, T. L., Irvin, L. K., Sugai, G., & Boland, J. B. (2004). The School-Wide Evaluation Tool (SET): A research instrument for assessing school-wide positive behavior support. Journal of Positive Behavior Interventions, 6, 3-12. doi: 10.1177/10983007040060010201

Irvin, L. K., Tobin, T. J., Sprague, J. R., Sugai, G., & Vincent, C. G. (2004). Validity of office discipline referral measures as indices of school-wide behavioral status and effects of school-wide behavioral interventions. Journal of Positive Behavior Interventions, 6, 131-147. doi: 10.1177/10983007040060030201

Kern, L., & Manz, P. (2004). A look at current validity issues of school-wide behavior support. Behavioral Disorders, 30, 47-59.

Kincaid, D., Childs, K. E., Blase, K. A., & Wallace, F. (2007). Identifying barriers and facilitators in implementing schoolwide positive behavior support. Journal of Positive Behavior Interventions, 9, 174-184. doi: 10.1177/10983007070090030501

Langland, S., Lewis-Palmer, T. L., & Sugai, G. (1998). Teaching respect in the classroom: An instructional approach. Journal of Behavioral Education, 8(2), 245-262.

Lewis, T. J., & Sugai, G. (1999). Effective behavior support: A systems approach to proactive schoolwide management. Focus on Exceptional Children, 31(6), 1-24.

Lohrmann, S., Forman, S., Martin, S., & Palmieri, M. (2008). Understanding school personnel's resistance to adopting schoolwide positive behavior support at a universal level of intervention. Journal of Positive Behavior Interventions, 10, 256-269. doi: 10.1177/1098300708318963

Luiselli, J. K., Putnam, R. F., Handler, M. W., & Feinberg, A. B. (2005). Whole-school positive behaviour support: Effects on student discipline problems and academic performance. Educational Psychology, 25(2-3), 183-198. doi: 10.1080/0144341042000301265

Malecki, C. K., & Elliot, S. N. (2002). Children's social behaviors as predictors of academic achievement: A longitudinal analysis. School Psychology Quarterly, 17, 1-23. doi: 10.1521/scpq.17.1.1.19902

May, S., Ard, W. I., Todd, A. W., Horner, R. H., Glasgow, A., & Sugai, G. (2008). School-wide Information System. Educational and Community Supports, University of Oregon, Eugene, OR.

Mayer, G. R. (1995). Preventing antisocial behavior in the schools. Journal of Applied Behavior Analysis, 28, 467-478. doi: 10.1901/jaba.1995.28-467

McIntosh, K., Bennett, J. L., & Price, K. (2011). Evaluation of social and academic effects of school-wide positive behaviour support in a Canadian school district. Exceptionality Education International, 21, 46-60.

McIntosh, K., Horner, R., & Sugai, G. (2009). Sustainability of systems-level evidence-based practices in schools: Current knowledge and future directions. In W. Sailor, G. Dunlap, G. Sugai & R. Horner (Eds.), Handbook of positive behavior support (pp. 327-352). Springer US.

McKevitt, B., & Braaksma, A. (2004). Best practices in developing a positive behavior support system at the school level. In A. Thomas & J. P. Grimes (Eds.), Best practices in school psychology V (pp. 735-748). Bethesda, MD: National Association of School Psychologists.

Metzler, C. W., Biglan, A., Rusby, J. C., & Sprague, J. R. (2001). Evaluation of a comprehensive behavior management program to improve school-wide positive behavior support. Education & Treatment of Children, 24(4), 448-479.

Molloy, L. E., Moore, J. E., Trail, J., Van Epps, J. J., & Hopfer, S. (2013). Understanding real-world implementation quality and "active ingredients" of PBIS. Prevention Science, 1-13. doi: 10.1007/s11121-012-0343-9

Osborne, J. W. (2010). Improving your data transformations: Applying the Box-Cox transformation. Practical Assessment, Research & Evaluation, 15(12), 1-9.

Raudenbush, S. W., & Bryk, A. S. (2002). Hierarchical linear models: Applications and data analysis methods (2nd ed.). Newbury Park: Sage.

Robers, S., Kemp, J., Truman, J., & Snyder, T. D. (2013). Indicators of school crime and safety: 2012 (NCES 2013-038/NCJ 241448). National Center for Education Statistics, U.S. Department of Education, and Bureau of Justice Statistics, Office of Justice Programs, U.S. Department of Justice, Washington, DC.

Scott, T. M., & Barrett, S. B. (2004). Using staff and student time engaged in disciplinary procedures to evaluate the impact of school-wide PBS. Journal of Positive Behavior Interventions, 6, 21-27. doi: 10.1177/10983007040060010401

Simonsen, B., Sugai, G., & Negron, M. (2008). Schoolwide positive behavior supports: Primary systems and practices. Teaching Exceptional Children, 40(6), 32-40.

Skiba, R., & Peterson, R.
(1999). The dark side of zero tolerance. Phi Delta Kappan, 80(5), 372-376.  Sprague, J., Walker, H., Golly, A., White, K., Myers, D. R., & Shannon, T. (2001). Translating research into effective practice: The effects of a universal staff and student intervention on indicators of discipline and school safety. Education & Treatment of Children 24, 495-511.  Sugai, G., & Horner, R. H. (2002). The evolution of discipline practices: School-wide positive behavior supports. Child & Family Behavior Therapy, 24(1-2), 23-50. doi: 10.1300/J019v24n01_03 Sugai, G., & Horner, R. H. (2006). A promising approach for expanding and sustaining school-wide positive behavior support. School Psychology Review, 35, 245-259.  Sugai, G., & Horner, R. H. (2009). Defining and describing schoolwide positive behavior support. In W. Sailor, G. Dunlap, G. Sugai & R. H. Horner (Eds.), Handbook of positive behavior support (pp. 307-326). New York: Springer. Sugai, G., Horner, R. H., & Todd, A. W. (2000). Effective Behavior Support Self-Assessment Survey 2.0. Eugene, OR: Educational and Community Supports. Retrieved from http://www.pbis.org. Sugai, G., Lewis-Palmer, T. L., Todd, A. W., & Horner, R. H. (2001). School-wide Evaluation Tool (SET). Eugene, OR: Educational and Community Supports. Retrieved from http://www.pbis.org Sugai, G., Sprague, J. R., Horner, R. H., & Walker, H. M. (2000). Preventing school violence: The use of office discipline referrals to assess and monitor school-wide      41  discipline interventions. Journal of Emotional & Behavioral Disorders, 8, 94-101. doi: 10.1177/106342660000800205 Taylor-Greene, S., Brown, D., Nelson, L., Longton, J., Gassman, T., Cohen, J., . . . Hall, S. (1997). School-wide behavioral support: Starting the year off right. Journal of Behavioral Education, 7(1), 99-112. doi: 10.1023/a:1022849722465 Turnbull, A., Edmonson, H., Griggs, P., Wickham, D., Sailor, W., Freeman, R., . . . Warren, J. (2002). 
A blueprint for schoolwide positive behavior support: Implementation of three components. Exceptional Children, 68, 377-402.  Waasdorp, T. E., Bradshaw, C. P., & Leaf, P. J. (2012). The impact of schoolwide positive behavioral interventions and supports on bullying and peer rejection. Archives of Pediatrics & Adolescent Medicine, 166, 149-156.  Wentzel, K. R. (1993). Does being good make the grade? Social behavior and academic competence in middle school. Journal of Educational Psychology, 85, 357-364. doi: 10.1037/0022-0663.85.2.357 Wright, J. A., & Dusek, J. B. (1998). Compiling school base rates for disruptive behaviors from student disciplinary referral data. School Psychology Review, 27, 138-147.    