UBC Theses and Dissertations


Development and validity assessment of a measure of social information processing within an online context… Maghsoudi, Rose 2015

DEVELOPMENT AND VALIDITY ASSESSMENT OF A MEASURE OF SOCIAL INFORMATION PROCESSING WITHIN AN ONLINE CONTEXT AMONG ADOLESCENTS

by

Rose Maghsoudi

Ph.D., The University of Tehran, 2007

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF

MASTER OF ARTS

in

The Faculty of Graduate and Postdoctoral Studies

(Measurement, Evaluation, and Research Methodology)

THE UNIVERSITY OF BRITISH COLUMBIA
(Vancouver)

February 2015

© Rose Maghsoudi, 2015

Abstract

The Online Social Information Processing scale (OSIP) is a 116-item measure developed on the basis of the Social Information Processing (SIP) model. The OSIP measures six social information processing skills, with a focus on how these skills are used in the face of online aggression. The goal of this study was to examine the validity of the OSIP for measuring how adolescents process social information in online settings. After the items were developed, validity evidence was collected through experts' evaluations and through adolescents' assessments using a think-aloud protocol. Evidence for validity from the item development process emerged through content definition, test specification, and item editing. Evidence from the expert evaluation concerned the alignment of the test items with the content domain and the appropriateness of their language. Finally, evidence from the student assessment came from the think-aloud protocol, which helped evaluate the language of the test items and helped me understand how engaging adolescents found the measure.

Preface

This thesis is original, unpublished, and independent work by the author, Rose Maghsoudi. The associated methods in Chapter 3 were approved by the University of British Columbia's Behavioural Research Ethics Board (certificate #H14-00148).
I was the principal investigator, responsible for receiving parents'/guardians' consent forms and students' assent forms, for data collection and analysis, and for the transcriptions. Chapters 4 and 5 are my original work, and I created the online aggression assessment tool based on social information processing (Appendices A, B, and C) in consultation with Dr. Jennifer Shapka.

Table of Contents

Abstract .......... ii
Preface .......... iii
Table of Contents .......... iv
List of Tables .......... vi
List of Figures .......... vii
Acknowledgements .......... viii
Dedication .......... ix
CHAPTER 1: INTRODUCTION .......... 1
1.1 Social Information Processing Skills .......... 2
1.1.1 Defining Social Information Processing Skills .......... 2
1.2 Validity Evidence .......... 3
1.2.1 Item Development Process .......... 5
1.2.2 Expert Evaluation .......... 6
1.2.3 Think Aloud Protocols (TAPs) .......... 6
1.3 Significance and Purpose of the Study .......... 6
1.4 Research Design .......... 7
CHAPTER 2: LITERATURE REVIEW .......... 9
2.1 A Brief History of Social Information Processing .......... 9
2.1.1 Measuring Social Information Processing (SIP) .......... 14
2.1.2 Measuring Social Information Processing in Online Settings .......... 20
2.2 Evidence-based Construct Validity .......... 21
2.2.1 Review of Item Development Process .......... 22
2.2.2 Review of Expert Evaluation .......... 24
2.2.3 Review of Think Aloud Protocols (TAPs) .......... 26
2.3 The Current Study .......... 27
CHAPTER 3: METHODOLOGY .......... 29
3.1 Item Development Process (Phase 1) .......... 29
3.2 Expert Evaluation (Phase 2) .......... 30
3.2.1 Participants .......... 30
3.2.2 Procedure .......... 30
3.2.3 Coding of the Experts' Evaluation .......... 31
3.3 Adolescents' Think Aloud Protocols (Phase 3) .......... 31
3.3.1 Participants .......... 31
3.3.2 TAPs Procedure .......... 33
3.3.3 Transcription of TAPs .......... 33
3.3.4 Scoring .......... 33
3.3.5 Coding and Analysis .......... 34
CHAPTER 4: RESULTS .......... 35
4.1 Is the OSIP Closely Aligned with the Theoretical Definition of SIP? .......... 35
4.1.1 Evidence from the Item Development Process .......... 35
4.1.2 Evidence from Experts' Evaluation .......... 43
4.2 Is the Language of the Measurement Appropriate for the Targeted Adolescent Group and Does It Accurately Measure SIP in an Online Context? .......... 52
4.2.1 Evidence from Expert Evaluation .......... 52
4.2.2 Evidence from TAPs' Participants .......... 57
4.3 Do Items Engage Participants to Respond Accurately about the Social Information Processes? .......... 64
4.3.1 Evidence from TAPs' Participants .......... 64
CHAPTER 5: DISCUSSION .......... 69
5.1 Major Findings of the Research .......... 69
5.1.1 Outcomes of the OSIP Item Development Process .......... 70
5.1.2 Outcomes of Evaluating the OSIP Items and Its Language Appropriateness by Reviewing Experts' Analysis .......... 71
5.1.3 Outcomes of the OSIP Participants by TAPs .......... 72
5.2 Implications .......... 72
5.3 Contributions .......... 73
5.4 Limitations .......... 74
5.5 Future Directions .......... 74
5.6 Summary .......... 75
References .......... 76
Appendices .......... 97
Appendix A: OSIP Assessment Tool (Original) .......... 97
Appendix B: OSIP Assessment Tool (Based on Expert Evaluation) .......... 104
Appendix C: OSIP Assessment Tool (Based on Participant Evaluation, Using TAPs) .......... 124
Appendix D: OSIP Expert Invitation Letter .......... 144
Appendix E: OSIP Expert Evaluation Form .......... 145
Appendix F: OSIP Parent Information Letter and Consent Form .......... 146
Appendix G: OSIP Student Assent Form .......... 148
Appendix H: OSIP Participant Assessment Form .......... 149

List of Tables

Table 3.1 Demographic information for TAP samples .......... 32
Table 4.1 Proposed sources of validity evidence for each potential misuse for the OSIP measurement .......... 37
Table 4.2 Test specification for the OSIP .......... 40
Table 4.3 Item editing of the OSIP measurement based on peer group feedback .......... 42
Table 4.4 Experts' evaluation about OSIP items in terms of alignment with construct and domain .......... 50
Table 4.5 Experts' evaluation about the language appropriateness of the OSIP .......... 56
Table 4.6 Percent of students who provided evidence about OSIP items in terms of clarity, complexity, and offensiveness .......... 60
Table 4.7 Overview of proportion of participants' responding to each point on the scale, averaged for each skill .......... 67

List of Figures

Figure 4.1 Scatter plot of the participants' responses to each point on the OSIP response scale .......... 68

Acknowledgements

I would like to express my gratitude to my supervisors, Dr. Kadriye Ercikan and Dr. Jennifer Shapka, who supported me throughout this process.

Dedication

To my supervisor: Dr. Jennifer Shapka.

Chapter 1: Introduction

Currently, there is no measure that assesses how adolescents process social information derived from an online setting. Developing and validating the Online Social Information Processing scale (OSIP) is the overarching goal of this research, pursued through item development, expert review, and evaluation via adolescent think-aloud protocols. The OSIP is a new measure designed for adolescents between the ages of 14 and 18. The initial version of this scale had 46 items; after the experts' and students' evaluations, however, the final version included 116 items. This new measure has six subsections, which are consistent with the six stages of Social Information Processing (e.g., Crick & Dodge, 1994). Each item has a seven-point Likert scale ranging from zero to six. The OSIP offers a more comprehensive model, as it includes measurement of all six stages of the SIP model, is designed for adolescents, is self-report, and focuses on an online context.
As such, it is hoped that it will be a viable alternative to the following currently available measures: Crick and Dodge's vignette-based social information processing measure (1994); the adapted vignette-based intent attribution instrument of Dell Fitzgerald and Asher (Dodge, 1980; Dodge & Frame, 1982); Crick and Ladd's vignette-based response decision instrument (1990); Perry, Perry, and Rasmussen's (1986) and Wheeler and Ladd's (1982) vignette-based self-efficacy measures for aggression; and Dodge and Crick's vignette-based social goal instrument (1996). All of these measures assess social information processing in offline contexts, whereas the purpose of this study is to measure social information processing skills in online contexts.

1.1 Social Information Processing

Over the past few decades, research on social information processing skills has provided a rich foundation in the fields of developmental psychology and education. The term social information processing refers to "how mental operations affect behavioural responding in social situations" (Dodge & Rabiner, 2004, p. 1003), and has been used to understand the social lives of children, including youth who have exhibited aggressive (Crick & Dodge, 1994), delinquent (Nas, De Castro, & Koops, 2007), and substance abuse behaviours (Rain et al., 2006), as well as students who have cognitive and social communication deficits (Bryan, Sullivan-Burstein, & Mathur, 1998; Dodge & Frame, 1982). Accordingly, the assessment of social information processing is critical for understanding how individuals deal with social information, as well as for determining where deficits occur (Dodge & Coie, 1987).
For example, regarding aggression in children, assessments allow educators and psychologists to evaluate whether deficits in the processing of social information can explain aggression (Akhtar & Bradley, 1991) that may arise within a dysfunctional family (see Loeber & Dishion, 1983), particularly in connection with harsh discipline (Weiss, Dodge, Bates, & Pettit, 1992), academic difficulties (see Hawkins & Lishner, 1987), or poor interpersonal relations (see Cantrell & Prinz, 1985; Pettit, Lansford, Malone, Dodge, & Bates, 2010).

1.1.1 Defining Social Information Processing Skills. The literature surrounding social information processing skills is filled with varying definitions and models to explain the nature of social cognitive behaviours. Among such definitions and models, Crick and Dodge's is the most influential, particularly for child and youth development (Crick & Dodge, 1994; Dodge, 1980, 1986). According to this definition, social information processing (SIP) patterns can identify unique social responses based on the connection between socio-emotional reactions and cognitive responses. Stated simply, these patterns can be used to predict social behaviours. As a result, researchers have used SIP patterns to assess the cognitive skills that individuals use to construct competent behavioural responses to social stimuli (Crozier et al., 2008; Fontaine & Dodge, 2006). The SIP skills consist of a sequential series of mental operations that contribute to behavioural outcomes, including: (a) encoding of internal and external cues, (b) interpretation of these cues, (c) goal selection and clarification, (d) construction of possible responses, (e) response decision, and (f) behavioural enactment (see Crick & Dodge, 1994, 1996; Dodge, 1986; Dooley, Pyzalski, & Cross, 2009; Fontaine & Dodge, 2006).
Following this perspective, a substantial body of empirical evidence has linked unique patterns of SIP to social maladjustment, most specifically to aggressive behaviour (Coie & Dodge, 1988; Dodge & Price, 1994; Dodge et al., 2013; Huesmann, 1988; Lansford et al., 2006). As such, the SIP model has been used to measure and understand children's and youths' aggressive behaviours in different offline settings and environments. Given the rapid development of computer technology, which provides new opportunities for online socializing, it is important to understand how adolescents process social information in online settings, particularly when they are engaged in online aggression (e.g., cyberaggression/cyberbullying). The purpose of this study, therefore, was to develop the Online Social Information Processing (OSIP) scale, a new self-report measure that builds upon Crick and Dodge's (1994) social information processing model and is designed for online settings. To date, no research has developed an integrated, comprehensive self-report measure of online aggression for adolescents between 14 and 18 years of age.

1.2 Validity Evidence

Traditionally, validity has been defined as a criterion determining whether "a test measures what it is supposed to measure" (Lado, 1961, p. 321). If this criterion was met, the measurement was deemed valid. However, more recent definitions argue that validity is "the degree to which evidence and theory support the interpretations of test scores entailed by proposed uses of tests" (AERA et al., 1999, p. 9). Following this perspective, Kane (2006) explains that validation "involves the development of evidence to support the proposed interpretations and uses… [it] is associated with an evaluation of the extent to which the proposed interpretations and uses are plausible and appropriate" (p. 17).
Recent views of validity are much more comprehensive, and focus on providing validity evidence from different aspects of the assessment, including content, response process, internal structure, and relationships to other variables, as well as on the consequences of using the assessment (AERA, APA, & NCME, 1999). Accordingly, each form of validity evidence is based on different sources. For example, validity evidence for content comes from existing definitions (usually based on current theory; Bordignon & Lam, 2004), external subject matter experts' evaluations (Downing, 2006), documentation of the systematic process employed during item development (Downing, 2006), the item evaluation process (Downing, 2003), and expert evaluation of each item in the assessment, judged according to its specificity, representativeness, and relevance to the construct (Messick, 1995). Validity evidence for response process comes from documenting and determining whether all sources of error are controlled during administration of the assessment (Downing, 2003), and from Think Aloud Protocols (TAPs), in which participants read the assessment and describe their reactions and thoughts out loud (AERA et al., 1999; Ericsson & Simon, 1993; Cook & Beckman, 2006). Validity evidence for internal structure comes from documentation about the intended use and interpretation of the measurement scores (AERA et al., 1999). Validity evidence for the assessment's relationships to other variables comes from analyzing convergent and discriminant relationships (Campbell & Fiske, 1959), as well as predictive relationships (Allen & Yen, 2002). Finally, validity evidence for the consequences of using an assessment comes from documenting the potential intended and unintended consequences of the measurement (AERA et al., 1999). As this brief overview of the different methods and sources of validity evidence shows, current views on validity require a much more comprehensive, multi-faceted approach.
As such, the current study approached the development and validation of the OSIP from this perspective. More specifically, the current study uses multiple sources of validity evidence, including the item development process and expert review of the items to assess content validity, as well as participant TAPs to assess the response process validity of the OSIP. Below is a brief overview of each of these sources.

1.2.1 Item Development Process. The item development process is an initial step in providing validity evidence in terms of test content and response construction, with the goal of ensuring that the evidence gathered supports the validity of the test. This process can be used to gather sufficient and reasonable information about the extent to which the construct being measured is represented by the assessment being developed. It can also clarify the rationale for why the assessment is being developed, how it is developed, what the response options should be, and which process should be followed for data collection (Haynes, Richard, & Kubany, 1995). A wide variety of factors are important to item development validity, including item content, test specifications, item writer training, cognitive behaviour, item content verification, item editing, bias-sensitivity review, item piloting and pretesting, key validation and verification, and test security planning (Downing & Haladyna, 1997). This study focused on three of these, namely item content validity, test specification, and item editing, each of which is described more fully in the next chapter.

1.2.2 Expert Evaluation. Expert evaluation refers to a situation in which an identified content expert makes a judgment about the nature of the items in the assessment (Messick, 1990).
Although Messick (1989) argues that expert evaluation is fallible and should be supported by other evidence, this method does provide important evidence about the appropriateness of the items included in the measurement (Kane & Case, 2003). More specifically, this method is ideal for addressing the extent to which the construct domain is relevant to the proposed interpretations of the scores of the measurement (Messick, 1995). Additionally, expert evaluation can be used to assess the specified content domain (Goodwin & Leech, 2003). As such, expert review can provide evidence about the measurement's characteristics in terms of "sufficiency, clarity, relevancy, comprehensiveness, and commonality between the items and tasks and the definitions of the construct being measured" (Brown, 2010, pp. 34-37).

1.2.3 Think Aloud Protocols. As noted above, a think-aloud protocol (TAP), or concurrent verbalization, is a method that asks individuals to verbalize their thoughts during task performance without changing the course of their thinking (Ericsson & Simon, 1984, 1993). This method provides an opportunity to assess many aspects of a person's thought process (Ericsson & Simon, 1993). Research has shown that TAPs can provide detailed evidence that highlights: (1) item difficulties (Ercikan et al., 2010); (2) links between measurement tasks and an individual's performance, by addressing the relationship between evaluation and measurement models (Ercikan, 2006); and (3) differences between the different tasks associated with the assessment (Lyons-Thomas, 2014).

1.3 Significance and Purpose of the Study

This study aimed to develop and validate the Online Social Information Processing (OSIP) measure for adolescents. It documents the initial process of OSIP validation by providing evidence from the item development process, expert review, and TAPs, which allows researchers to build upon validity research within the context of online aggression.
The OSIP item development process validity evidence in the current study was based on the key stages of content validity, including content definition, test specification, and item editing; this is unique given that most research on the validity of measures of online aggressive behaviours has concentrated on statistical evidence (Webster et al., 2014). The development of the OSIP measurement tool will improve research efforts and enable researchers to establish a clear understanding of online aggressive behaviours among youth. The purpose is to build upon the existing research by providing the OSIP as a tool for understanding online aggression. In summary, this research will validate the psychometric properties of the OSIP and increase our understanding of online aggression among adolescents.

1.4 Research Design

The current research was undertaken in three phases. In the first, the Item Development Phase, I followed Downing's (2006) test development procedure, with specific attention to content definition, test specification, and item editing. To this end, the OSIP's construction was based on the features that researchers have demonstrated to be most representative of the construct in question. Accordingly, a list of potential self-report items was developed for the OSIP, representing each of the six SIP skills: encoding, interpretation, goal clarification, response construction, response decision, and enactment. These skills and the corresponding items were developed by performing a content analysis of the relevant literature and identifying shared themes and concepts. In addition, after reviewing published and unpublished sources on cyberbullying and adolescent internet use, initial items were developed to reflect the content domains of online retaliation and online social support. The response scale and time frame were then defined, and final edits and/or modifications were made based on an informal peer review process.
In preparation for Phase 2, the Expert Evaluation Phase, an invitation email was sent to two experts who were identified as having expertise on the topics of online aggression and social information processing. Upon agreeing to participate, these experts were asked to assess the test items against the purpose of the test. In addition, they were asked to report their comments and suggestions for each item. They then returned the forms to the researcher. This data collection took place over four weeks. Finally, for Phase 3, the Think Aloud (TAP) Phase, seven adolescents were invited to complete the measure while providing oral feedback through a TAP as they responded to the questions. Participants were also asked to identify whether each item was clear, complex, or offensive.

The outline of the three phases above reflects both deductive and inductive approaches. More specifically, Phase 1, which included item writing, content definition, and test specification, used a top-down, deductive method and was primarily based on the existing literature. In contrast, Phase 2 (expert review) and Phase 3 (TAP) used a bottom-up, inductive approach, relying on expert and participant assessment (Hinkin, 1995).

Chapter 2: Literature Review

This chapter is divided into two main sections. The first section focuses on a brief review of social information processing and its measurement. The second section presents a review of the evidence-based validity methods used in this study: the item development process, expert evaluation, and think aloud protocols.

2.1 A Brief History of Social Information Processing

Social cognitive skills/patterns among adolescents have long been of interest to academics and educators (Arsenio, Adams, & Gold, 2009; Crick & Dodge, 1994; Nelson & Crick, 1999; Shahinfar, Kupersmidt, & Matza, 2001). Indeed, it is increasingly being recognized that adolescent development affects well-being over the life-span (cf.
National Institute of Child Health and Human Development & National Institute of Mental Health). As such, there are consequent demands for national and international programs that support improving the social behaviours of all adolescents, as well as increasing emphases on social skills development programs (e.g., Conduct Problems Prevention Research Group, 1999; UNICEF, 2011).

A number of studies have articulated the conceptualization of social cognitive skills and the resultant approaches to their measurement, including what should be measured, who should be the informant (e.g., the adolescent or an outside observer such as a teacher), and whether the assessment should treat the construct as a complete set of distinct mental steps or use different measures for different skills/patterns (see Dodge et al., 2002). According to the SIP model (Crick & Dodge, 1994; Dodge et al., 1986; Huesmann, 1988), individuals respond rapidly to socially challenging situations with a sequence of mental operations. Often this response pattern can lead to aggressive behaviours. SIP measures reflect the assumptions that (1) there are social encoding deficits in human interaction, (2) humans have a tendency towards attribution bias, (3) we tend to focus on instrumental goals rather than social goals, and (4) we have a tendency towards aggressive responses to social problems (see Dodge, 1993; Huesmann, 1988). Together, these assumptions reflect the view that individual differences in these social cognitive skills/patterns correspond to individual differences in social adjustment. Moreover, any deficit in such skills and patterns among adolescents can lead to increasing social maladjustment, and in particular, aggressive behaviour problems (Coie & Dodge, 1998; Huesmann, 1988). Below is an overview of the specific steps of the SIP model. In total, there are six steps, which start with the encoding of social information and end with an act or behaviour.

Encoding.
Encoding is the first step in responding to a social event (Larkin, Jahoda, & MacMahon, 2013). It refers to attending to a set of social cues that guide social communication, including facial, nonverbal, gestural, intonational, and social contextual cues (Runions et al., 2013). In this step, individuals selectively choose a stimulus and pay more attention to its cues, which might be situational or internal (Crick & Dodge, 1994). Evidence has shown that people engaging in aggressive behavior tend to miss certain cues or seem to attend only to a very narrow cue set (Benson, 1994; Rojahn, Lederer, & Tasse, 1995). Another study found that missing certain social cues might lead the person to react in a hostile and violent manner (Babcock, Green, & Webb, 2008). The second step is the interpretation of cues. Interpretation of Cues. This refers to the meaning-making process (Larkin, Jahoda, & MacMahon, 2013; Runions et al., 2013). More specifically, when a person encodes information, he/she uses different techniques to give meaning to the information. Along these lines, Crick and Dodge (1994) assert that individuals interpret the situation in one of the following ways: (1) filter or personalize the situational cues; (2) infer the causal or correlational relationships, as well as the meaning of the event; (3) rely on schemata or scripts as a guide to understand the social relationships; (4) assess the situation in terms of similar previous experiences; (5) focus on social cues that occurred at the end of a social interaction; or (6) store or recall social information inadequately. In some cases, individuals may also move one step back (e.g., to social encoding) in order to clarify their interpretations. Given this, several studies have examined the association between this processing step and adolescents’ aggressive attitudes.
In fact, it has been found that individuals who misinterpret social information based on their previous reactions, experiences, and environment are more likely to engage in aggressive behaviors (Dodge & Price, 1994; Huesmann & Guerra, 1997; Randall, 1997). Several studies have also explored hostile attribution bias, an interpretive bias that may lead individuals to misinterpret cues, and have asserted that it is associated with aggressive behaviors (Basquill, Nezu, Nezu, & Klein, 2004; Crick & Dodge, 1994; Dodge, 1980; Dodge & Frame, 1982; Dodge & Newman, 1981; Dodge & Somberg, 1987; Dodge et al., 1990; Jahoda, Pert, & Trower, 2006; Pert, Jahoda, & Squire, 1999). Further to these results, researchers have also studied mediators and moderators of the relationship between hostile attribution bias and aggressive acts. For example, MacMahon, Jahoda, Espie, and Broomfield (2006) found that anger can play a significant role as a mediator in the relationship between attribution of hostile intent and aggressive behaviors. However, despite a corresponding increase in research exploring acts of aggression from this perspective, most of this research has focused on understanding the consequences of misinterpreted cues for aggression rather than the other steps (e.g., encoding). The third step is the clarification of one’s goals. Clarification and Selection of Goals. This refers to the step of the process where an individual decides what should be accomplished. The goal of this step is to help individuals understand the situation in a more comprehensive way. Individuals tend to clarify their goals by redesigning the previous plan, or by making a new plan if they change their decisions. Goal clarification includes evaluating different interpretations, considering moral and social issues, looking at referent audiences, and creating and weighting agent and community goals (Runions et al., 2013).
In some cases, individuals may take one or more steps back to reformulate their goals based on internal needs, such as self-survival, or external needs, such as a desire to develop social relationships (Nigoff, 2008). The evidence shows that clarification of goals can be affected by several factors; one of the most significant is the individual’s current emotional state (Crick & Dodge, 1994). Response Construction. The fourth step is response construction, which refers to creating different responses to a stimulus. Individuals may either construct a new response, or apply a response from memory and previous experience to a novel situation. Each response may or may not follow the identified goals from the previous step (Crick & Dodge, 1994). Individuals often make their decisions by appraising and weighting potential responses, prioritizing options, and evaluating the pros and cons of each option. Response Evaluation. The fifth step is response evaluation, which refers to evaluating possible responses and selecting the preferred response. Response decision is an outcome-focused process (Fontaine, Burks, & Dodge, 2002) that results in real, observable change in an individual’s behaviour. Crick and Dodge (1994) have stated that there are several variables that affect an individual’s decision making, including outcome expectations, response effectiveness, and personal characteristics. Fontaine et al. (2002) explored two main domains that play a significant role in adolescents’ decision making: response valuation and outcome expectation. In this work, they found that adolescents who either expected or valued aggressive responses were more likely to make aggressive decisions.
In addition to these studies, a number of researchers have focused on understanding the effects and consequences of personal characteristics, such as moral disengagement, on adolescents’ response decisions during aggressive interactions. For example, Gini, Pozzoli, and Hymel (2013) found that there is a positive correlation between moral disengagement and aggressive behaviors. Similarly, Ang and Goh (2010) found that deviant behaviors are more likely to be seen among adolescents who have a lower level of empathy. Some studies have also examined the effects and consequences of self-efficacy judgments on adolescents’ response decisions during aggressive interactions (Bandura, 1994; Dodge, 1993). For instance, Crick and Dodge (1989) found that aggressive children are more likely to report high efficacy for acting aggressively. Similarly, Fontaine and Dodge (2006) demonstrated that aggressive youth are more likely to judge violent retaliation as a justifiable response if they think that others are intending to cause harm. Based on this literature, it appears that the response evaluation phase is very closely linked with aggressive outcomes. Behavioral Enactment. The last step is behavioral enactment. This refers to how individuals act in social communication when they are engaged in aggressive behaviors. Research has shown that biased or deficient processing of social information at each step of SIP is significantly associated with increases in the likelihood of adolescents engaging in aggressive behaviors (Dodge & Crick, 1990; Dodge, Coie, & Lynam, 2006; Dodge & Schwartz, 1997).
In sum, the literature across all six steps of the SIP model shows that deficits in social encoding, a tendency towards attribution bias, a focus on instrumental rather than social goals, a tendency to construct more aggressive responses, a willingness to evaluate aggressive responses positively, and the enactment of aggressive behaviours can all be associated with aggression. Regarding aggression that occurs online (cf. cyberaggression, cyberbullying), SIP has been put forth as a viable model for understanding this behaviour (Runions et al., 2013), owing in part to debates regarding increases in cyberaggressive behaviour problems for adolescents who are interacting online. By drawing on SIP theory and its skill classification, item content validity (item classification) was supported by the alignment of the six social information processing skills (encoding, interpretation, goal clarification, decision construction, decision evaluation, and enactment) with the six classes of items developed in the OSIP. 2.1.1 Measuring Social Information Processing (SIP). Much of the research on SIP is based on Dodge’s study (1986) and has focused on one aspect of the SIP process, with hostile attribution bias (see Dodge, Price, Bachorowski, & Newman, 1990; Dodge & Somberg, 1987; Steinberg & Dodge, 1983) and decision making (see Dodge & Newman, 1981; Fontaine & Dodge, 2006; Fontaine, Yang, Dodge, Pettit, & Bates, 2009) receiving the most research attention. A smaller body of research has simultaneously explored two aspects of the SIP model, such as hostile attribution bias and response decision (Fontaine et al., 2010; Stickle, Kirkpatrick, & Brush, 2009), cue encoding and interpretation (Horsley, Orobio de Castro, & Van der Schoot, 2010), or goal clarification and response decision (Harper, Lemerise, & Caverly, 2010).
An even smaller body of work has looked at three or four aspects of SIP (Orobio de Castro, Veerman, Koops, Vosch, & Monshouwer, 2002; Yoon, Hughes, Gaur, & Thompson, 1999). For example, Schultz et al. (2010) focused on cue interpretation, response access, and response decision patterns. In fact, only Dodge and his colleagues have attempted to explore and measure the SIP model in a comprehensive fashion, by developing the SIP assessment tool (e.g., Dodge & Price, 1994). Moreover, although the SIP assessment tool has been used extensively for research by Dodge and his colleagues (e.g., Crick & Dodge, 1994; Dodge, 1986; Dodge et al., 1990; Dodge et al., 2002; Dodge & Price, 1994; Kupersmidt, Stelter, & Dodge, 2011; Lansford et al., 2006; Schultz & Shaw, 2003; Zelli, Dodge, Lochman, Laird, & Conduct Problems Prevention Research Group, 1999), only one study has focused on the validation of a comprehensive SIP measure (Dodge et al., 2002); several studies, however, have focused on assessing the psychometric properties of one or two aspects of the tool (Hughes, Meehan, & Cavell, 2004; Leff, Cassano, MacEvoy, & Costigan, 2010; Schultz et al., 2010; Yoon, Hughes, Cavell, & Thompson, 2000; Zelli et al., 1999). This work, as well as the common tools for describing SIP, is described below. Historically, Zelli and colleagues (1999) were the first to publish an assessment of the SIP tool, using a sample of 387 children in grades 4 and 5. The study was conducted over three successive years at four sites. Children’s interpretation of peers’ motives, their behavioral responses, and their evaluation of aggressive solutions were assessed based on various hypothetical scenarios. The scenarios illustrated ambiguous situations of harm or provocation, problematic peer-group interactions, or situations where a child had been wrongly accused of something. Home interviews with the children were used to measure their internal attribution and response access.
Two different instruments were used to measure the children’s response-evaluation pattern: Things That Happen to Me (Crick & Dodge, 1996) and What Do You Think (WYT; see Dodge, Murphy, & Buchsbaum, 1984). For each scenario, children were asked to report a possible way to respond to the hypothetical problem. Children’s aggressive beliefs were measured through their ratings of an aggressive solution for a hypothetical problem, on a five-point Likert scale. The results of the study showed that there were four cognitive constructs that impacted SIP: normative beliefs legitimizing aggression, hostile attributional biases, mental accessing of aggressive responses, and evaluation of aggressive solutions. More importantly, the study provided evidence for the discriminant validity of the four constructs: aggression beliefs, hostile attribution, response access, and response evaluation. Following this work, Dodge et al. (2002) examined the multidimensional latent factor structure of the SIP model. In this study, they postulated that there are latent knowledge structures, associated with the processing of social information across different situations, that may affect aggressive behaviors. To test this hypothesis, they recruited 332 elementary school children (grades 1 to 3). Children were interviewed for two hours in order to collect data about their four SIP processing patterns (attribution of peer intent, generation of responses to peer relationship dilemmas, evaluation of presented responses to peer relationship dilemmas, and endorsement of instrumental as opposed to social goals) and their knowledge about emotions. In this study, three instruments were used to assess SIP patterns, either through ambiguous peer provocation or problematic group entry situations.
The Home Interview With Child (Dodge, 1986) was used to assess hostile attribution, the Social Problem Solving Scale (Dodge, Bates, & Pettit, 1990; Rubin & Krasnor, 1986) was used to assess accessing of responses to peer relational dilemmas, and Things That Happen to Me was used to assess both goal orientation in peer relationships and children’s evaluations of assertive and aggressive responses. In this study, Dodge and colleagues used coefficient alpha to evaluate the internal consistency of item responses, and examined correlations among the five constructs of the SIP model through structural equation modeling to evaluate convergent and discriminant validity. The results of this study, first, provided evidence of the psychometric characteristics of SIP assessment item responses. Second, the results of the confirmatory factor analysis supported the convergent and discriminant validity of the four SIP constructs. Third, the study supported the idea that SIP patterns differ according to particular situations. Yoon et al. (2000) studied SIP patterns among a sample of 152 children in grades 2 to 4 at an elementary school in a small southwestern city. In this study, they used the Social Cognitive Assessment Profile (SCAP; see Hughes, Hart, & Grossman, 1993), an interview measure, to assess three patterns of SIP (attribution, generation of solutions, and outcome expectations) as well as self-efficacy. The study illustrated peer provocation situations (either relational or overt provocations) with a distinct format for boys and girls, using seven vignettes. The study provided further evidence, through descriptive discriminant analysis, for the criterion validity of the SCAP, although the SCAP showed low internal consistency reliability. It has been noted by Hughes et al. (2004) that existing SIP measures are limited in distinguishing between boys and girls, as well as between African American and non-African American children.
They argue that this is because most published SIP measures have focused on overt aggression among boys, or on a limited number of SIP patterns. As a result, Hughes and her colleagues explored the psychometric properties of a revised version of the Social-Cognitive Assessment Profile (SCAP). The test considered four of the six patterns of SIP, including interpretation of cues, clarification of goals, response access, and response decision, and used eight hypothetical events based on relational and overt aggression. The researchers excluded the encoding of cues and behavioral enactment patterns from the study due to the difficulty of assessing them in an interview format. The study used a sample of 371 students in 2nd to 4th grade. Results indicated that there were four latent factors, with the data showing good fit for both boys and girls, as well as for different ethnic groups. Regression analysis provided evidence for the SIP measure’s convergent and discriminant validity, and hierarchical regression analyses revealed the criterion-related validity of the SCAP. In a sample of 125 preschool children (3- to 5-year-olds), Schultz et al. (2010) assessed the reliability and predictive validity of a new video-based assessment of SIP for young children from lower income families. This new method had three subtests (Emotions, Provocation, and Goal Acquisition) that were related to three SIP patterns: cue interpretation, response access, and response decision. The test included 62 vignette videos, with 42 videos representing scenarios about children’s interpretation and response access, and 20 videos representing children’s goal acquisition.
The results of the study showed that the Schultz Test of Emotion Processing-Preliminary Version (STEP-P) scale, which they developed for the purpose of this study, had adequate reliability and was significantly correlated with children’s social cognitive functions, accounting for 23% of the variance in children’s adaptive behavior and 17% of the variance in their disruptive behaviors. More recently, Leff et al. (2010) developed a new assessment tool for an aggression prevention program that utilizes a SIP model for urban elementary school populations. This measure, the Knowledge of Anger Processing Scale (KAPS), evaluates children’s knowledge of social and emotional processing. Components of the KAPS included physiological arousal, the importance of staying calm in reaction to an event, attribution of intentionality, and decision making. The 224 students, who were in 3rd and 4th grade, were recruited from an urban elementary school located in a low-income residential area. Results from this study showed that the KAPS had strong psychometric properties and could discriminate between more and less knowledgeable individuals. The findings also showed that overall scores on the KAPS were moderately associated with attributions of intentionality in relational and instrumental situations. In a special journal issue about SIP assessment, Ziv and Sorongon (2011) developed a modified version of the Social Information Processing Interview (SIPI; Dodge & Price, 1994) for preschoolers who are 48-61 months of age (SIPI-P). The scale was first piloted with 26 children. The results of the pilot study showed that the SIPI-P had good psychometric properties, except for the social encoding questions; based on this result, the open-ended social encoding question was excluded from the main study.
Consequently, in the final version of the questionnaire, the open-ended questions were replaced by closed-ended questions and the story’s characters were replaced by cartoon characters. A version of the scale was developed for each gender group. A sample of 196 children was recruited to participate in the study, and the data were collected at two time points. The results showed that the SIPI-P had good reliability and correlated significantly with socio-demographic risk factors and teacher ratings of children’s behavior. However, the authors recommended that more research be done to improve the instrument’s assessment of encoding, hostile attribution, and response generation skills among preschool children. The above review of how SIP has traditionally been measured clearly reveals that although the construct is well-studied among children and early adolescents, it has yet to be adequately tested with older adolescents. Additionally, neither a self-assessment scale nor a multidimensional scale for measuring SIP (with a comprehensive approach to assessing all six skills) in online settings has been developed in the literature. Before any interpretations can be made from such a measure, however, it is necessary to provide validity evidence that supports its inferences. Thus, the purpose of the current work was to develop and validate such a measure. 2.1.2 Measuring Social Information Processing in Online Settings. With the increased use of Information Communication Technologies (ICTs), there has been increased documentation of individuals engaging in online aggression (Law, Shapka, Hymel, Olson, & Waterhouse, 2012b; Li, 2006; Tokunaga, 2010).
It has been postulated that there are specific characteristics of ICTs that may be contributing to the negative outcomes associated with cyberbullying (e.g., Runions et al., 2013), including the ability of the perpetrator to maintain anonymity, the permanency of social cues, the potentially infinite audience, as well as a lack of social cues, such as the aggressor’s inability to observe the target’s immediate reaction (Runions et al., 2013; Slonje & Smith, 2008; Tokunaga, 2010). It has been argued that each of these characteristics may alter how a person processes social information. For example, Runions et al. (2013) have noted that the permanence of social cues on ICTs may negatively affect the encoding and interpretation phases of SIP, which may lead to increased hostile and/or depressive ruminations. They have argued that this, in turn, may increase the likelihood that the individual will engage in retaliatory aggression (cf. Anestis, Anestis, Selby, & Joiner, 2009; Collins & Bell, 1997). Runions and colleagues (2013) have also suggested that the lack of social cues inherent to an online environment may increase the likelihood of hostile attribution errors. Emoticons and internet slang (smiley faces, slang terms, acronyms, and abbreviations used in online communication) may further add to the misinterpretation of online communication. Indeed, Derks, Bos, and Grumbkow (2008) have illustrated that emoticon and abbreviation usage is positively associated with message ambiguity. As noted above, the increased use of ICTs is unprecedented, and it is an important social context for developing adolescents. From a young age, adolescents are in regular contact with each other through mobile devices, instant messaging, and social networking websites (e.g., Facebook).
Indeed, recent reports have indicated that the use of social networking sites is highest among adolescents (Lenhart, Purcell, Smith, & Zickuhr, 2010b), and that by age thirteen, 73% of teens have their own mobile phone (Lenhart et al., 2010a). Given this, it is important that we begin to understand how adolescents process social information that is garnered through online social contexts. The current study will begin to fill this gap by developing a psychometrically sound assessment tool for SIP within an online context. 2.2 Evidence-based Construct Validity Any new measurement requires a significant investment in development to ensure that it meets the required level of validity. According to the Standards for Educational and Psychological Testing (AERA, APA, & NCME, 1999), validity requires an accumulation of evidence about a specific test score interpretation and use, as stated: Validity refers to the degree to which evidence and theory support the interpretations of test scores entailed by proposed uses of tests. Validity is, therefore, the most fundamental consideration in developing and evaluating tests. The process of validation involves accumulating evidence to provide a sound scientific basis for the proposed score interpretations (p. 9). From this definition it can be seen that several types of validity evidence, based on test content, response processes, internal structure, relations to other variables, and the consequences of testing, should be collected. Content validity is one of the main validity considerations in test construction, and one that is a primary focus in this study. Content validity provides evidence to support “the domain relevance and representativeness of the test instrument” in a way that ratifies the “inferences to be made from test scores” (Messick, 1989, p. 17).
Content validity evidence includes evaluating the assessment’s construct definition, its intended purpose, the process of developing and selecting items, the wording of each item, and the qualifications of item writers and reviewers (Cook & Beckman, 2006). For the current study, this evidence was collected through the systematic process of item development and through expert review (Downing & Haladyna, 1997). More specifically, the item development process collected evidence to identify whether each item is associated with its domain and content. The experts’ evaluation collected evidence to determine whether each item proposed an appropriate response option. Beyond evaluating response options, experts are uniquely positioned to evaluate each item in terms of its clarity, complexity, and offensiveness (Cook & Beckman, 2006; Downing, 2003). This was an important consideration for the current study, given the complexity of processing social information in an online context. The final source of validity evidence focused on in this study was response process. A primary way to gather this evidence is to have participants engage in a think aloud protocol (TAP). By enabling the researcher to review respondents’ response processes, TAPs allow researchers to understand whether each item fits the underlying construct of the test (AERA et al., 1999; Cook & Beckman, 2006; Downing, 2003). This source of evidence can also help researchers explain differences between and within groups (Olmsted-Hawala & Bergstrom, 2012). In sum, three sources of validity evidence were used to support the OSIP measure: the item development process, expert review, and think aloud protocols. 2.2.1 Review of Item Development Process. Developing a measurement for a specific purpose raises a question about construct validity. The test developer often has concerns about the psychometric quality of the measurement and tries to provide evidence for its construct validity.
According to Downing and Haladyna (1997), one way to collect validity evidence is through the item development process. The first step in this process is to provide an overall plan. The overall plan includes identifying: the target group, the purpose of the test, the details of the test design, plans for assembly, production, and administration, plans for scoring, plans for reporting results, how the item bank has been created, as well as plans for post hoc analysis and the logistical requirements for completing the assessment (Hochlehnert et al., 2012). Once this plan has been developed, the researcher focuses on content definition. Evidence for content definition can come from several methods, such as content-defining task analysis, practice analysis, expert evaluation, classifying items, and setting standards using item data (Downing & Haladyna, 1997). After defining the purpose of the test, one of the most important steps of test development is to verify what specifically the test should measure (test specification); the subsequent steps are item writer training, adherence to item writing principles, cognitive behavior analysis, item content verification, item editing, bias-sensitivity review, item tryout and pretesting, key validation and verification, and a test security plan (Downing & Haladyna, 1997). For the current study, the item development process was based on content definition, test specification, and item editing. Each is described below. 2.2.1.1 Content Definition as Validity Evidence. Research has shown that articulating the purpose of a test, along with a rationale for the relevance of the interpretation to the proposed use, can provide rich evidence for validating the test (AERA et al., 1999; Sireci, 2013). As such, demonstrating a clear statement about the purpose of the measurement is the first logical step in developing a test. In addition to this, the test developer should consider the potential misuses of the test scores.
Any misuse of the test scores may lead to misinterpretation of those scores. Moreover, exploring potential cross-test purposes is another consideration that may prevent unintentional uses of the test (Sireci, 2013). 2.2.1.2 Test Specification as Validity Evidence. Test specification refers to specifying the tasks of a test. One way to do this is to create a test specification table, which provides detailed information about the tasks of the test, as well as the number of test questions that can be selected for each task or skill. The test specification table is generally formed as a matrix, which describes tasks/skills in rows and the number of test items and domains in columns. In addition, each cell presents the relative weight of that subject, expressed as the number of items for that subject relative to the total number of items in the test (see Downing, 2003). 2.2.1.3 Item Editing as Validity Evidence. Item editing is one of the most important aspects of the item development process, but it is also usually the most neglected. Item editing mostly involves attending to editorial style, item formats, acceptable acronyms, proofreading, clarity, and the appearance of the items, as well as ensuring that the exact meaning of each item is what the writer intended (Baranowski, 2006). Documentation about who edited the measurement, how items were edited, what changes were made, what questions were raised, and how the editing process was followed provides validity evidence for the test (Baranowski, 2006). The item editing procedure for the OSIP measurement was applied to the texts, items, and response options. 2.2.2 Review of Expert Evaluation. A number of studies have emphasized the importance of expert evaluation for gathering validity evidence for a measurement (Ercikan, Arim, Law, Domene, Gagnon, & Lacroix, 2010).
However, Messick (1989) noted that documentation of the assessment of items provided by these experts is seldom included in the literature. Like much construct validity evidence, expert evaluation documentation tends to be limited to traditional psychometric methods based on Classical Test Theory (CTT) or generalizability (G) theory (Gajewski, Price, Coffland, Boyle, & Bott, 2011). Typically, researchers use the expert evaluation method after a test has been developed to assess the underlying construct of test items, using quantitative measures of agreement, a content validity index, or multivariate methods. For example, Martuza (1977) introduced a content validity index, which has been successfully used in many nursing studies (Polit & Beck, 2006). Multivariate methods are also used for investigating and summarizing expert evaluation assessments, including factor analysis and multidimensional scaling methods (Markus & Smith, 2010). However, too little attention has been paid to presenting validity evidence gathered through the expert evaluation method (Berk, 1990; Goodwin & Leech, 2003). More specifically, Berk (1990) has demonstrated that the expert evaluation method should be used in the initial stages of test construction, including during the domain specification, item development, and item, subscale, and scale content validation stages. For example, in domain specification, experts are ideally suited to assess the appropriateness of the content of the test, the accuracy of the content domain structure, and the representativeness of the content coverage in relation to the domain (Berk, 1990). Similarly, during item development, experts can effectively assess the item format, item content, and the method for generating the items (Berk, 1990). Finally, for the item, subscale, and scale validation stages, experts are able to identify problematic items and suggest clarifications (Berk, 1990).
This, for example, means that they would likely be able to examine (1) the degree of congruence between an item and its indicator, (2) the level of readability of each item, (3) the degree of accuracy of the information provided for each item, (4) the offensiveness of the language used in any item, and (5) the degree to which the sample item is representative of each domain. The current study fills this gap in the literature by focusing on expert review at the measure development stage. More specifically, the expert evaluators in the current study assessed the OSIP after it had gone through its item development process, and were asked to evaluate each item against the five criteria above. 2.2.3 Review of Think Aloud Protocols (TAPs). The think aloud protocol is a data collection method used to evaluate whether a test is aligned with its construct and use. The think aloud method was introduced mainly by Ericsson and Simon (1984, 1993) for use in psychological and other social sciences research (van Someren, Barnard, & Sandberg, 1994). It is one of the most direct methods for examining the ongoing decision-making processes of individuals completing an assessment (Ericsson & Simon, 1993). The method involves participants thinking aloud, without any semantic changes to their thought processes, as they respond to the items in an assessment (Ericsson, 2003; Ericsson & Simon, 1993). It can help researchers access firsthand data, via audio or video recording, about how the assessment is perceived (Bernardini, 2002). The purpose of this method is to make explicit what is implicitly going on for subjects who are completing a specific task.
The TAP method can be implemented through two different experimental techniques: concurrent and retrospective (Ericsson & Simon, 1993; Kuusela & Paul, 2000). The first technique collects data during the decision task, while the second collects the information after the decision task. The current study focused on the former, as I was interested in how adolescents thought about and processed the OSIP items.

2.3 The Current Study

SIP refers to cognitive-emotional networks interacting in social settings to explain behavior, including aggressive behavior (Anderson & Huesmann, 2003). SIP in adolescence undergoes great developmental changes (see Nelson, Leibenluft, McClure, & Pine, 2005) through internal (e.g., personal characteristics; see Hessels et al., 2014) and environmental stimuli (e.g., social communication; see Dodge et al., 2013; Fontaine et al., 2010). As such, it is important to effectively assess factors that influence the manifestation of aggressive behavior in adolescence. This is particularly important as we move into an information age, when an increasing amount of adolescent socialization is happening in an online context. To achieve this goal, a new measure of online social cognitive processing patterns (the OSIP), with a particular focus on cyberaggression and cybervictimization, was developed based on a reformulated SIP model of aggression (Crick & Dodge, 1989). If the validity evidence and arguments justify the interpretations and uses of the OSIP scores, the OSIP measure has the potential to shed light on the role of social information processing for adolescents who are engaged in aggressive communications online. Three research questions guide this work:

1. Is the OSIP measurement closely aligned with the theoretical definition of the construct? How are the OSIP items supported by evidence collected through the item development process, including item content, item specifications, and item editing?
Is this documentation related to social information processing skills in online aggressive contexts?

2. How do expert reviewers provide validity evidence above and beyond psychometric evidence for assessments of social information processing skills in online aggressive contexts? That is, to what extent do they provide evidence that is collected for the assessment, and what kinds of additional information, if any, do they offer? In particular, does expert review provide validity evidence about whether items capture social information processing skills in online aggressive contexts?

3. How do TAPs provide validity evidence above and beyond psychometric evidence for assessments of social information processing skills? In particular, do TAPs provide validity evidence about whether items capture social information processing skills?

To justify the interpretations and uses of the OSIP measure, the current study used interpretivism theory and applied Kane's (2013) argument-based framework. Interpretivism theory was chosen for its ability to reach the deep meaning of social cognitive processing functions in cyberaggressive contexts by exploring interpretation, reflection, and relevant dialogue (see Laverty, 2003). This theory is used in this study to tap the OSIP's meaning, as well as the score interpretations and uses. Kane's argument-based approach was chosen for its ability to evaluate those interpretive arguments under a construct validity umbrella (see Kane, 2013; Messick, 1995).

Chapter 3: Methodology

This chapter provides a description of the methods of the current study. Information is provided about the participants, recruitment strategies, sampling method, and data collection procedures. Following the description of the procedure, information about data preparation, scoring, coding, and coding analysis is also presented.
3.1 Item Development Process (Phase 1)

The item development phase involved both deductive (construct definition and item specification) and inductive (item editing) approaches. To develop the OSIP, for each skill, domain sampling was used with purposive sampling (Shadish, Cook, & Campbell, 2002) of homogeneous items (Dawis, 1987; DeVellis, 2003; Polit & Beck, 2008). As noted earlier, I followed a deductive approach to review the literature and to provide evidence for defining the purpose, uses, and misuses of the OSIP, as well as to identify the sources of validity evidence that should be considered to protect against misinterpretation of the test scores and misuses of the test. Following this approach, I specified the OSIP test tasks/skills and domains by reviewing the SIP literature. After this, I used an inductive approach to edit the OSIP items. Four graduate students, including the author, as well as the research supervisor, were provided with an electronic copy of the OSIP to edit. A 45-minute meeting was then held to discuss the OSIP. At that point, a hard copy of the test was provided to everyone for reference. Issues of word meanings and grammar were raised and discussed, and the OSIP was adjusted accordingly. This OSIP editing was coded for the number of language difficulties and the number of grammar problems per item, as well as the number of typos.

3.2 Expert Evaluation (Phase 2)

3.2.1 Participants. Specific guidelines used for the selection and exclusion of experts included: (1) having more than five years' experience in the field of social information processing, cyberbullying, or both; and (2) some experience in the area of measure development/validation. The two experts used in this study were identified based on their published work in the field. One had ten years' experience in the field of social information processing skills and more than three years in the field of aggression.
In addition, he had expertise in the field of validation. The other expert had about eight years' experience in the area of cyberbullying and two years in the area of SIP, as well as expertise in measure development. One of the experts was female and the other was male, and both were professors at a Canadian university.

3.2.2 Procedure. After receiving ethics approval for the study from the University of British Columbia Behavioral Research Ethics Board, the evaluation form and the OSIP measure were sent via email, with an introductory cover letter, to each of the expert reviewers. They were asked to send the completed evaluation form back to the author. The experts were asked to assess each item of the OSIP measure through structured, open-ended questions. Specifically, the experts were asked to report any concerns they had about the content, language, or response options, along with any other feedback that came to mind for each item. They were also requested to provide recommendations for improving the measure. The expert evaluation form consisted of six sections corresponding to the subsections of the SIP model: encoding, interpretation, goal clarification, decision construction, decision evaluation, and enactment or final response. At the end of each section, there was a space for additional comments about any omissions the experts had noticed. Once the expert evaluations were returned to me, I categorized all the information for each item and recorded all feedback. The data from this evaluation helped to identify elements of the assessment instrument that might require modification, or might suggest that an item or items should be deleted if they were not relevant.

3.2.3 Coding of the Experts' Evaluation. Expert evaluations were coded for language difficulties and alignment with content and domain (note that suggestions for additions and deletions were considered language and content difficulties for this study).
Each category is described below:

Language difficulties: When the experts mentioned that students would probably have a problem understanding a word or phrase of an item clearly, the item was coded as 'language difficulty';

Alignment with content: When the experts mentioned that an item aligned with the content of the scale, the item was coded as 'alignment with content';

Alignment with domain: When the experts mentioned that an item aligned with the domain of the scale, the item was coded as 'alignment with domain'.

Interrater reliability, which involved the author as well as another graduate student, was 90% or greater.

3.3 Adolescent Think Aloud Protocol (Phase 3)

3.3.1 Participants. Seven adolescents from the Lower Mainland of British Columbia were involved in this phase. The TAP procedure involved having participants verbalize their thoughts about the OSIP as they responded to each item. The recruitment process followed specific guidelines for the selection and exclusion of students: (1) participants needed to be between 14 and 18 years old, (2) participants needed to live in BC, (3) participants needed to be active in online social networking, and (4) participants needed to read and write English fluently. Three participants were girls and four were boys. The characteristics of the sample are presented in Table 3.1.
Table 3.1
Demographic information for the TAP sample

Demographic characteristic                    TAP sample
Gender
  Male                                        4 (57%)
  Female                                      3 (43%)
Age
  15 years old or younger                     2 (29%)
  16 years old                                2 (29%)
  17 years old                                3 (43%)
  18 years old                                0 (0%)
Grade
  8                                           1 (14%)
  9                                           1 (14%)
  10                                          2 (29%)
  11                                          3 (43%)
Place of birth
  Canada                                      4 (57%)
  Others                                      3 (43%)
Ethnic background
  Asian                                       3 (43%)
  Caucasian                                   2 (29%)
  African                                     1 (14%)
  Others                                      1 (14%)
Most commonly used language at home
  English                                     4 (57%)
  Others                                      3 (43%)
Religious affiliation
  No religious affiliation                    3 (43%)
  Muslim                                      2 (29%)
  Christian                                   2 (29%)
Adults lived with most of the time
  Father                                      1 (14%)
  Mother                                      0 (0%)
  Father and mother                           4 (57%)
  Father, mother, and other family members    1 (14%)
  Half with mom, half with dad                1 (14%)
  Others                                      0 (0%)

Note. Percentages are rounded to the nearest whole number.

3.3.2 TAPs Procedures. Participants were recruited primarily through word of mouth. Adolescents who expressed an interest in the study were emailed a letter that described the study, as well as a copy of the parental consent and participant assent forms. After obtaining verbal or written consent from participants and their parents, participants were asked to set a time to evaluate the OSIP at one of the public libraries in the Vancouver area. The evaluation sessions took no more than one hour to complete. Upon meeting the participant, and finding a comfortable, quiet place in the library, the researcher introduced herself and explained the nature of the study. The researcher then gave a copy of the OSIP and the evaluation form to the participant and asked the participant to carefully read each question aloud, to directly verbalize their thoughts, and to interpret the questions in their own words.
The researcher acted out what she meant by thinking aloud to make sure participants understood what was expected of them. Participants were also asked to assess the items of the OSIP through structured, close-ended questions. In addition, they were requested to provide recommendations or suggestions on ways to improve the measure. Students were also asked to complete a demographic questionnaire, which consisted of nine questions about their age, grade, gender, living status, religion, and ethnic background. Background factors for this study relate to participants' online socialization experience and are included for the purpose of describing and assessing the study population's online aggressive experience.

3.3.3 Transcription of TAPs. After recording the testing sessions, the author transcribed all the sessions, wrote down each session verbatim, and recorded instances of pauses or filler words such as "um" or "ah."

3.3.4 Scoring. As the students completed the OSIP, they were also asked to verbally assess the language difficulties of each item. The response scale, which was created by the author, ranged from "1" to "4", where 1 = "Item is not clear", "Not complex at all", or "Not offensive at all" and 4 = "Item is clear", "Complex", or "Offensive".

3.3.5 Coding and Analyses. The TAPs were coded for language difficulties, including clarity, complexity, and offensiveness; for example, when students understood an item clearly, the item was coded as clear. The TAPs were also coded for comments pertaining to the purpose and domain of the study, as well as the level of engagement with the OSIP. For example, when students explained or commented on the purpose of an item, the item was coded as purpose; similarly, when students' experience and interpretation of an item was relevant to a specific domain (e.g., aggressive or non-aggressive), the item was coded as domain. Finally, engagement was based on how many items students responded to.
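Interrater reliability expressed as a percentage is typically simple percent agreement: the share of items on which two coders assigned the same code. A minimal sketch of that computation, using hypothetical codes rather than data from this study:

```python
# Hypothetical codes assigned by two independent raters to the same ten items.
# The codes and counts are illustrative only, not data from this study.
rater_1 = ["clear", "clear", "complex", "clear", "offensive",
           "clear", "complex", "clear", "clear", "clear"]
rater_2 = ["clear", "clear", "complex", "clear", "offensive",
           "clear", "clear", "clear", "clear", "clear"]

def percent_agreement(a, b):
    """Proportion of items on which the two raters assigned the same code."""
    if len(a) != len(b):
        raise ValueError("both raters must code the same set of items")
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return matches / len(a)

print(percent_agreement(rater_1, rater_2))  # 0.9, i.e., 90% agreement
```

Percent agreement does not correct for chance agreement; chance-corrected statistics such as Cohen's kappa are an alternative when codes are unevenly distributed.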
Interrater reliability was 90% or greater for these codes.

Chapter 4: Results

Results are organized around the three research questions, each of which is answered by providing construct validity evidence. Appendices A through C contain several versions of the OSIP, starting with the original (Appendix A), followed by the version that incorporated expert feedback (Appendix B), and finally the version that incorporated adolescent feedback from the TAPs (Appendix C).

4.1 Is the OSIP Closely Aligned with the Theoretical Definition of SIP?

To answer this research question, evidence was drawn from both the Item Development Process and the Expert Review phases.

4.1.1 Evidence from the Item Development Process

4.1.1.1 Content Definition. Collecting evidence about the content of each item in the OSIP definition was one of the primary sources of evidence in the OSIP item development, and involved three steps: articulating the testing purpose, considering test misuses, and exploring the relationship between purposes and misuses. For the first step, I clarified that the purpose of the OSIP was to measure the extent to which an adolescent responds to aggressive behaviour online, for example, by retaliating or by seeking social support.
As part of this, I clarified the potential uses of the OSIP measure, which were to: 1) predict adolescents' aggressive behavior in online contexts; 2) measure adolescents' deviant processing patterns in online aggressive contexts; 3) measure adolescents' deficient encoding functions in online aggressive contexts; 4) measure adolescents' biased interpretations in online aggressive contexts; 5) determine adolescents' goal orientations in online aggressive contexts; 6) measure the extent to which adolescents' decision constructions are relevant to aggression in online contexts; 7) measure the extent to which adolescents' decision evaluations are relevant to aggression in online contexts; 8) provide information for understanding the extent to which adolescents behave or act aggressively in online contexts; 9) predict social information processing deficits in two situations: peer provocation and peer support; and 10) predict deviant processing for the same situational domain.

For the second step, I explained the potential misuses of the test in order to address misinterpretations of test scores and to prevent unintended results of the test (Sireci, 2013). The potential misuses of the OSIP were: 1) the OSIP test scores should not be used to make inferences regarding any social cognitive domains; 2) the test scores should not be used to assess aggression that is not occurring online; 3) the OSIP should not be used to validate preschool children's aggressive behaviors and should not be administered to preteen school students, except under special circumstances; 4) the OSIP should not be used to gauge the level of online aggression of adolescents or students who do not use electronic devices, such as cell phones, or the Internet for their social communications; and 5) the OSIP is culturally sensitive and cannot be assumed to apply to all cultural groups.
For the third step, I developed a table to explore the sources of validity evidence that could be used to combat the OSIP's potential misuses (content, response process, internal structure, relations with other variables, and test consequences; see Table 4.1). This table helped the researcher to identify which source(s) of validity evidence should be the focus. As can be seen in Table 4.1, content and response evidence are the most significant sources of validity evidence for the OSIP. For this reason, this study focused on collecting evidence for the OSIP using experts' evaluation and think aloud protocols.

Table 4.1
Proposed Sources of Validity Evidence for each Potential Misuse of the OSIP Measurement

Potential misuse                                   Content  Response  Internal  External  Consequences
1 Measure students' aggressive behaviour and
  make inferences for any social cognitive
  domains                                          √        √                   √
2 Assess aggression that is not occurring
  online                                           √        √         √                   √
3 Validate preschool children's aggressive
  behaviors                                        √        √                             √
4 Measure aggressive behaviours among
  adolescents or students who do not use
  electronic devices, such as cell phones, or
  the Internet, for their social communications    √        √         √                   √
5 Use the measure for any cultural groups          √        √                   √         √

In addition to articulating the OSIP purposes, uses, and misuses, as well as the potential sources of validity for each potential misuse, the following evidence was collected about the content of each item in the OSIP definition. This information was used to develop the list of initial test items:

Encoding. The first section of the measure focuses on encoding and consists of seven questions. As mentioned above, encoding in this study refers to the focus of an individual's selective attention on a few cues that might be internal or external (Crick & Dodge, 1994). In this study, social encoding deficits are appraised by attention and perception constructs.
To measure how individuals encode social information, I created items 1, 4, 5, and 7 based on the definition described above. Items 1 (positive saying or doing) and 2 (negative saying or doing) are adopted from Dodge and Price (1994), item 4 (remember what was said or done) is adopted from Dodge and Newman (1981), and item 6 (focused on the last saying or doing) is adopted from Dodge and Tomlin (1983).

Interpretation. This section consisted of 14 questions. As mentioned above, interpretation in this study refers to giving meaning to social events and can be appraised using four subscales based on 14 items. The first subscale is about the method that participants use to understand the provocateur's words/actions. This construct was measured by asking participants to indicate how often they use different methods to understand the other person's intention when a provocateur's words/actions hurt their feelings. The second subscale is based on a situation in which an individual moves one step back to clarify his or her interpretation, as mentioned above. In fact, individuals in this circumstance may think more about the event and provide more interpretations for such a conflict. Therefore, the ability to provide more interpretations (positive or negative) may affect his or her final decision toward action. The third subscale is about understanding the cause of the event. This item is adopted from Zelli et al. (1999) and Kupersmidt et al. (2011). The fourth subscale is about attribution bias. Ideas for creating items 4 through 14 are adopted from Halligan and Philips (2010), as well as from the Social Information Processing-Attribution and Emotional Response Questionnaire (SIP-AEQ), which was developed by Coccaro et al. (2009).

Goal clarification. This construct was assessed through five items. In this study goal clarification refers to a process where an individual decides what should be accomplished.
It includes evaluating different interpretations, considering moral and social issues, looking at referent audiences, and creating and weighting agentic and communal goals (Runions et al., 2013). Goal clarification items were developed based on the desired outcomes of the social event, including wanting revenge, wanting dominance, or prosocial outcomes. Such items are adopted from different sources, including Kupersmidt et al. (2011) and Schippell et al. (2003).

Response construction. This section consisted of five questions. In this study response construction refers to creating different responses to a stimulus. Individuals often make their decisions by appraising potential responses, weighting potential responses, prioritizing options, and evaluating the pros and cons of each option. These items were created based on cyberaggression, online dominance, relational cyberaggression, forgiveness, and cybervictimization. Such items are adopted from different sources, including Kupersmidt et al. (2011) and Dodge et al. (2013).

Response evaluation. This section consisted of five questions. In this study response evaluation refers to evaluating possible options and selecting the preferred response. Here, response evaluation was assessed by evaluating potential cyberaggressive and cyberbullying responses in terms of adolescents' expectations or values regarding aggressive decisions. Fontaine et al. (2002) found that adolescents who either expected or valued more aggressive responses were more likely to make more aggressive decisions. Fontaine and Dodge (2006) demonstrated that aggressive youths are more likely to judge violent retaliation as a justifiable response if they think that others are intending to cause harm. Response evaluation items are adopted from Kupersmidt et al. (2011) and Dodge et al. (2013).

Enactment. This section consisted of five questions.
In this study enactment refers to how individuals act in social communication when they are engaged in aggressive behaviors. Research has shown that, in adolescents, biased or deficient processing of social information at each step of SIP is significantly associated with an increased likelihood of engaging in aggressive behaviors (Dodge & Crick, 1990; Dodge, Coie, & Lynam, 2006; Dodge & Schwartz, 1997). The first four items are adopted from Keil and Price (2009).

4.1.1.2 Test Specification. While developing the OSIP, this study specified the test tasks/skills and the number of items in each skill and domain based on the above review of the research. The results of this analysis are shown in Table 4.2. As can be seen in the table, the interpretation skill has the highest number of items, followed by the encoding skill.

Table 4.2
Test Specification for the OSIP

Skill (# of items, %)           Sub-skill                    # of sub-items (%)   Retaliate   Social support
Encoding (7, 15%)               Attention                    5 (11%)              3 (6%)      2 (4%)
                                Memory                       2 (4%)               1 (2%)      1 (2%)
Interpretation (19, 42%)        Ways of interpretation       3 (6%)               2 (4%)      1 (2%)
                                Rational/reasoning           5 (11%)              3 (6%)      2 (4%)
                                Feels about words or actions 11 (24%)             9 (20%)     2 (4%)
Goal clarification (5, 11%)     Anti-social                  2 (4%)               2 (4%)      0 (0%)
                                Get along                    1 (2%)               0 (0%)      1 (2%)
                                Time and information         2 (4%)               0 (0%)      2 (4%)
Response generation (5, 11%)    Social support               1 (2%)               0 (0%)      1 (2%)
                                Anti-social                  4 (11%)              4 (11%)     0 (0%)
Response evaluation (5, 11%)    Social support               2 (4%)               0 (0%)      2 (4%)
                                Anti-social                  3 (6%)               3 (7%)      0 (0%)
Enactment (5, 11%)              Social support               2 (4%)               2 (4%)      0 (0%)
                                Anti-social                  3 (6%)               1 (2%)      2 (4%)
Total                           46                           46                   30          16

4.1.1.3 Item Editing. Item editing is another important source of validity evidence to support the consistency among the items.
To collect such evidence, the initial version of the OSIP was presented to a group of graduate students and one of the supervisors for further editing. This section describes the changes made to the text and test items based on this process. Table 4.3 below provides a summary of the item editing that was done for each skill of the OSIP. The edits that each person suggested are described below:

Supervisor. The supervisor's item editing addressed item content, structure, format, and grammar. She made 21 edits, and revised her changes in a few items. She pointed out that a few phrases, such as logical order and too much kindness, were confusing. She corrected questions with grammatical mistakes. She deleted one phrase and added a new one to give the text more richness and depth.

Graduate Student 1. This student's comments addressed a few (three) grammatical changes. Her changes were relatively few. She mentioned that some of the items were straightforward, but others were more difficult.

Graduate Student 2. Although this student did not have a lot of expertise in online aggression, her comments were fruitful given her experience in editing. She made a few basic changes to some phrases, such as logical order. She noted some confusion over two items and suggested that it might be appropriate to look beyond these items' phrasing and consider whether a different form of the same word could be used. She also addressed 10 grammatical issues and completely revised the stem question for one of the sections.

Graduate Student 3. This student made some edits to the stem question. He pointed out that the stem should not only clearly define the participants' tasks, but also concisely include all specifications necessary to perform the tasks. He also found that the items were at an appropriate level of difficulty, but that the encoding and interpretation items were not reader-friendly.
Overall, these editors raised the following main issues with the OSIP skills and test items: Some of the encoding and interpretation items were too hard to understand, meaning that participants might infer something different from the purpose of the measurement. The goal clarification, decision construction, decision evaluation, and enactment items were easy to read and understand, but needed proofreading.

Table 4.3
Item Editing of the OSIP Measurement based on Peer Group Feedback (Number of Items = 46)

Skill (item #)          # of language difficulties   Problematic term     # of grammar problems
Encoding                3                                                 10
  Item 1                2                            Unpleasant
  Item 2                1                            Pleasant
  Item 3                2                            Pay attention
  Item 4                2                            Shift
Interpretation          6                                                 15
  Item 5                2                            Guess
  Item 6                3                            Logical
Goal clarification      1                                                 5
  Item 7                1                            More information
  Item 8                3                            Need time
Construct decision      5                                                 10
  Item 9                4                            Provocateur
Evaluate decision       2                                                 6
  Item 10               4                            Provocateur
Enactment               1                                                 2
  Item 11               1                            Frozen

The editors identified various words that posed language difficulty. For instance, when reading the word provocateur, one editor commented, "provocateur….what? I don't really know …pro…voca…teur", while another editor said, "Guess, I am not sure…guess what...." Editors also addressed several grammar problems while proofreading the items. In general, feedback from the peer group provided an additional source of validity evidence for the OSIP items. By collecting evidence about content definition and item specification, and by engaging in item editing, the results show that the OSIP measure is closely aligned with the theoretical definition of SIP.

4.1.2 Evidence from Experts' Evaluation. The experts' evaluation was centered around each of the six sections of the OSIP. Each of these sections is described, in turn, below.

Encoding. This section had seven items before the expert evaluation. According to the experts' feedback, item 1 needed content modifications.
One of the two experts explained that, "the "pay[ing] attention" [phrase] may be a trickier phrase than you'd like it to be here … It's colloquial use has as much to do [with?] memory and rumination as it does to encoding, I suspect … "I don't pay attention to what they say about me" doesn't imply that they didn't encode, but that they are coping by trying to forget what was said … Encoding has always been the most challenging SIP step to address via self-report. Research on encoding has now moved on to trying to capture the allocation of attention via eye movement etc."

From my point of view, this statement highlighted three points. First, 'paying attention' cannot be considered the only way a person encodes information, as there are different ways of encoding information. The expert implies that this item should consider the other facets of encoding information. Based on this, a new sub-question was added: In your last online aggressive engagement, to what extent did you give it your full attention?

The second point that the expert emphasized was the allocation of attention via eye movement. He believes that understanding an individual's attention through eye movement is a new area of research. In fact, eye movement is a behavior for processing visual information that can be "acquired only during the fixation" (Rayner & Castelhano, 2007, p. 3649). This point implies that the author should capture the allocation of attention via eye movement by adding a new question. Based on this point, new sub-questions were added:

In your last online aggressive engagement, how much were your eyes fixated on your screen?

How long were your eyes closed or off the screen during your real or imagined online fight?

To what extent were your eyes moving fast and processing what was being said or done on the screen?
The third point that the expert stressed was that "paying no attention" to someone's doing or saying does not help the author find the underlying concept of encoding information, because working actively to forget disturbing events or experiences is considered a coping strategy: it helps the person cope with the problem by trying to forget what was said or done to him or her. However, the purpose of writing this item was not to understand underlying coping strategies or how a person copes with a problem; rather, it was to understand how a person encodes the information. According to the definition of attention mentioned above, an individual in an aggressive engagement may (unintentionally) focus on specific words/actions (for any reason) and either not notice or quickly forget the others, and this ability may affect his or her encoding of the information. Thus, the forgetting function can be viewed as a part of the encoding process. As a result, item 1 was divided into three questions, each of which focused on one of the above points: attention, eye movement, and the forgetting function:

1. In your last online aggressive engagement, to what extent did you give it your full attention?

2. In your last online aggressive engagement, to what extent have you experienced any physical change to your face or body (e.g., your eyes were 'hooked' or stopped moving)?

3. In your last online aggressive engagement, to what extent did you focus on one specific action or word for a long period of time and forget other things?

In addition to the above comments, the experts had item-specific feedback. For example, it was suggested that item 2 needed content modification: "this [item; item 2] may be a more valid attempt as it acknowledges that if it isn't remembered, it wasn't deeply encoded …. Perhaps this approach will be more fruitful".
In addition to this, the other expert added, "sometimes people can remember things forever but it no longer bothers them – which is different from "dwelling" on the words/actions…" This point implies that the item could focus on memories that bother an individual. To clarify the item, more description was added to the question by providing an example.

The experts' feedback about item 4 (remember what was said or done) was related to the item domain. One expert mentioned that "[you] may want to say, "after someone has said something mean to you, how often do you…"" This point implies that this item should focus on a specific circumstance. To clarify this item, more description was added to the question by providing an example.

The experts reported their feedback about item 7 (shift attention) in terms of domain modification. One of the experts stated that "clarification on what "situation" [the author is] talking about [is necessary]." This point implies that this item should focus on a specific circumstance. To this end, the phrase "in your last online aggressive engagement" was added to clarify the situation the item is asking about. The other expert pointed out that "this is a post-fact coping strategy." He implied that this item helps researchers understand how a person copes with the problem. This interpretation could be true; however, selective attention is defined as an individual's tendency to focus on or select out a few cues in the environment. In this study, I tried to partially assess attention in terms of an individual's ability to focus on a few selected cues, because research has shown that aggressive adolescents' attention is more likely to be captured by aggressive social interactions and is less likely to be shifted away from such circumstances (Gouze, 1987). As a result, an individual who is engaging in cyberbullying may focus on a specific word or action, and either way, this behavior may affect his or her encoding.
To reconcile this, the item on shifting or deflecting attention away from an aggressive circumstance was modified from “how much do you shift your attention away from the situation where that person’s words/actions hurt your feelings?” to “in your last online aggressive engagement, to what extent did your mind get stuck on the situation?”
Interpretation. The second section consisted of 14 items before the experts’ evaluation. According to the experts’ feedback, item 1 should be more specific. One of the experts stated that “a situation when a person makes you angry…[should be added to this item].” This point implies that this item should focus on a specific circumstance. The other expert pointed out that “these [sub-items, for example, a guess based on a lack of the information] seem like responses – the end result of SIP, and not specific to the interpretation step.” Although these sub-items can be used in enactment, individuals can also use these approaches to understand the circumstance. As such, the items were retained in this section. The experts’ feedback about item 2 related to the SIP construct. One of the experts stated, “this feels to me to be more about rumination, and not interpretation.” However, the author believes that it is not only rumination but also interpretation, given that the purpose of this item is to understand the underlying factors of interpretation. This item was therefore modified from “How much time do you spend thinking about why this social conflict happened to you?” to “In your last online aggressive experience, how often have you spent time thinking about why this happened to you?” The experts reported different feedback about item 3.
One of the experts said, “this is certainly getting closer in tackling the perceived locus of control, which will be part of any interpretive process … but without a clear example called to mind, it will be hard to answer in general.” The other expert mentioned that this item had difficulties in test specification. She stated, “Perhaps clarify what you mean by ‘the issue’ by saying, ‘When something mean/hurtful/embarrassing happens to you, do you think the cause…” To clarify this item, more explanation was added by providing an example.
Goal Clarification. As mentioned above, goal clarification in this study was assessed with five items prior to the expert evaluation. According to the experts’ feedback, the purpose for developing items 4 and 5 was not clear. It was also pointed out that “[you/the author] should have a look at [the] Salmivalli circumplex model and rethink how [to] approach social goals.” Briefly, the Salmivalli circumplex model (Salmivalli, Ojanen, Haanpaa, & Peets, 2005) was developed on the basis of eight social goals: (1) agentic: appearing self-confident and being admired by others; (2) agentic and communal: expressing oneself openly, being heard; (3) communal: feeling close to others and developing true friendships with them; (4) submissive and communal: seeking others’ approval by complying with their opinion; (5) submissive: avoiding making others angry by pleasing them; (6) submissive and separate: avoiding social embarrassment; (7) separate: appearing detached, without revealing one’s thoughts and feelings; and (8) agentic and separate: being in control, having no interest in others’ opinions (Ojanen, Gronroos, & Salmivalli, 2005). This model provides a way to evaluate adolescents’ different social goals. In particular, it distinguishes between two goal domains (agentic and communal) that have been shown to be relevant to aggression and prosocial behaviour.
For example, Salmivalli and Peets (2009) found that agentic goals often develop based on proactive aggression, while communal goals usually form based on prosocial behaviors. Therefore, the current study utilized this model (Salmivalli et al., 2005) as a framework for clarifying students’ goals while they were communicating online. To do this, several items were added to the OSIP:
In your last online aggressive engagement, how often did you purposefully try …?
a) To be respected and admired by the other person (or the person you oppose)
b) To have self-confidence and make an impression on the other person (or the person you oppose)
c) To be viewed as a smart person
d) To say exactly what you wanted
e) To have your opinion heard
f) To state your opinion plainly
g) To be able to tell the other person (or the person you oppose) how you felt
h) To feel close to the other person (or the person you oppose)
i) To feel good (both of you, or you and the person you oppose)
j) To put the other person (or the person you oppose) in a good mood
k) To develop a real friendship between both of you, or you and the person you oppose
l) To be liked by your peers
m) To be accepted by the other person (or the person you oppose)
n) To be invited by the other person (or the person you oppose) to join in a social event
o) To agree with the other person (or the person you oppose) about things
p) To let the other person (or the person you oppose) decide
q) Not to be angry because of the other person (or the person you oppose)
r) Not to make the other person (or the person you oppose) angry
s) To be able to please the other person (or the person you oppose)
t) Not to annoy the other person (or the person you oppose)
u) Not to say stupid things when the other person (or the person you oppose) is listening
v) Not to be laughed at by the other person (or the person you oppose)
w) Not to make a fool of yourself in front of the other person (or the person you oppose)
x)
Not to show your feelings in front of the others (or peers)
y) Not to give away too much about yourself
z) To keep your thoughts to yourself
aa) To keep the others at a suitable distance
bb) Not to let anyone get too close to you
cc) Not to show that you care about them
dd) To get the other person (or the person you oppose) to agree to do what you suggest
ee) To get to decide how to hang out (e.g., playing a game with friends online)
ff) To do what the other person (or the person you oppose) says or suggests
Response construction. As mentioned above, response construction in this study was assessed by five questions prior to the experts’ evaluation. According to the experts, item 3 should be more specific. One of the experts asked, “How do you talk about someone behind their back online?” and then answered, “in private message?” To clarify this item, more explanation was added by including the private-message phrase. The experts’ feedback about item 5 related to domain specification. One of the experts stated that the phrase “because of the mean things that are being said online” should be added to the question. Accordingly, the phrase “because of the person’s saying or doing” was added to this item. At the end of this section, three new items (6 through 8) were added because one of the experts suggested that the items in the decision-making section should also include items describing non-retaliatory responses.
Response evaluation. As mentioned above, response evaluation in this study was assessed by five questions prior to the experts’ evaluation. The experts asked whether “[the items in this section were] exclusively about evaluating retaliatory responses?” As the purpose of this section is to distinguish between adolescents who make aggressive decisions and those who do not, three items (6 to 8) were added to this section to help make this distinction.
4.1.2.1 Summary.
Experts made numerous useful recommendations for the items of the OSIP measure in terms of construct and domain modifications. Based on the expert evaluation, encoding items were deemed the most problematic in terms of test specification or test content. There were few recommendations, mostly in terms of test domain, for the SIP skills of interpretation, goal clarification, and decision construction, and no comments for decision evaluation and enactment. An overview of congruence (e.g., no changes suggested by experts) is provided in Table 4.4.

Table 4.4
Expert Evaluation of OSIP Items in Terms of Alignment with Content and Domain

                                       Content             Domain
Skills                 Item  Sub-item  Exp. 1   Exp. 2     Exp. 1   Exp. 2
Encoding               1               .        No         No       .
                       2               .        No         No       .
                       3               .        No         No       .
                       4               No       Yes        No       .
                       5               .        .          .        .
                       6               .        .          .        .
                       7               .        .          No       .
Interpretation         1     a         .        .          No       .
                             b         .        .          No       .
                             c         .        .          No       .
                       2                        No         .
                       3     a         .        Yes        .        Yes
                             b         .        Yes        .        Yes
                             c         .        Yes        .        Yes
                             d         .        Yes        .        Yes
                       4               .        .          No       Yes
                       5               .        .          .        Yes
                       6               .        .          .        Yes
                       7               .        .          .        Yes
                       8               .        .          .        Yes
                       9               .        .          .        Yes
                       10              .        .          .        Yes
                       11              .        .          .        Yes
                       12              .        .          .        Yes
                       13              .        .          .        Yes
                       14              .        .          .        Yes
Goal clarification     1               .        .          No       .
                       2               .        .          .        .
                       3               No       Yes        .        Yes
                       4               .        .          No       .
                       5               .        .          No       .
Decision construction  1               .        Yes        .        Yes
                       2               .        .          .        .
                       3               .        Yes        No       Yes
                       4               .        Yes        .        Yes
                       5               .        .          No       .
Decision evaluation    1               .        Yes        .        Yes
                       2               .        Yes        .        Yes
                       3               .        Yes        .        Yes
                       4               .        Yes        .        Yes
                       5               .        Yes        .        Yes
Enactment              1               .        Yes        .        Yes
                       2               .        Yes        .        Yes
                       3               .        Yes        .        Yes
                       4               .        .          .        .
                       5                        Yes                 Yes
Note. “.” = missing data

4.2 Is the Language of the Measure Appropriate for the Targeted Adolescent Group, and Does It Accurately Measure SIP in an Online Context?
To support research question 2, I utilized evidence from the item development process and expert reviews.
4.2.1 Evidence from Expert Evaluation. The experts’ evaluation of the language appropriateness of the OSIP items is described below.
Encoding. According to the experts’ feedback, item 1 needed language revisions. One of the two experts identified language difficulties in two words/phrases of this item: “how much” and “action.” The expert believed that the phrase “how much” was not clear as to whether it referred to degree or amount. She also suggested that the word “action” should be well defined in this context or replaced with another word such as “doing.” I therefore modified item 1 based on this feedback, changing “how much” and “action” to “to what extent” and “doing,” respectively. I received similar feedback from the experts about items 2 and 3; one of the experts identified language difficulties in these items in terms of the words “pleasant” and “unpleasant.” The expert felt that the “pleasant” and “unpleasant” words and actions needed to be defined, especially in the encoding phase: “online, even pleasant words can be interpreted as sarcastic and mean.” I therefore changed “pleasant” and “unpleasant” to “socially acceptable” and “socially unacceptable,” respectively. This solution maintains the intended meaning but avoids the possibility that the words could be interpreted as sarcastic or mean. For item 4, “how much” was revised in a similar fashion as in item 1. For item 5, one of the experts added that the word “interaction” should be clarified; based on this feedback, I clarified the item. The experts’ feedback about item 6 also concerned language difficulties. One of the experts stated that the author should specify “in what context” this item can be used.
The other expert pointed out that ““heavily focus” [looks] to be vague.” He implied that the word “heavily” made the sentence ambiguous. As a result, this item was revised and the word “heavily” was removed.
Interpretation. The experts’ feedback about item 1 concerned language difficulties. One of the experts suggested that “[the author] may want to add ‘other’ as there are many other ways that someone might react to a situation.” This implied that the item needed an “other” option to capture the other ways a person may behave in such a situation. The item was revised accordingly. The experts’ feedback about item 3 also concerned language difficulties. One of the experts stated, “perhaps clarify what you mean by ‘the issue’ by saying ‘when something mean/hurtful/embarrassing happens to you, do you think the cause…’”. According to this feedback, the item was revised. Both experts pointed out that item 5 had language difficulties as well. One of them noted, “This question is confusing because “communicating” could be interpreted in many ways. For example, someone might say, ‘yes, they are communicating that they hate me’ – or someone might interpret it as ‘yes, they are trying to get to know me better but don’t know how to communicate it properly’”. The other expert stated, “[I am] not sure what you’re aiming at with [saying ‘interested’] one.” According to the experts’ feedback, this item was modified to address these language difficulties. One of the experts also implied that it would be better “to split [a few questions] up a bit” because participants may interpret differently what another person in an altercation might say or do and what other people (e.g., peers in the class) might say or do. Accordingly, items 4 through 14 were modified to distinguish between the participant’s, the other person’s, and other people’s perspectives.
Goal Clarification.
One of the experts mentioned that item 1 needed language revision and more clarification of the phrase “what happened to you.” The experts mentioned that item 2 needed language revision of three words and phrases: “the other person,” “goal,” and “sufficient social privileges.” They implied that these words and phrases should be defined; the items were modified accordingly. Regarding item 3, the experts mentioned that the item needed language revision; according to their feedback, “rewording this [item] could work.” This item was therefore modified. One of the experts mentioned that items 4 and 5 should be reworded because the meaning of the word “goal” in these questions was not clear. As above, these items were reworded. Also, as noted above, as per the Salmivalli circumplex model (Salmivalli, Ojanen, Haanpaa, & Peets, 2005), a subsection of 32 items was added to understand how goals are linked with aggressive motivation.
Response construction. The experts’ feedback about items 1 to 5 concerned language difficulties. One of the experts stated that the author should replace the word “provocateur” with the phrase “person being mean to you” because this phrase is “more age appropriate to your participants.” The same feedback was received for item 2; these items were revised to address such difficulties. The experts’ feedback about item 5 also concerned language difficulties. One of the experts stated that the phrase “because of the mean things that are being said online” should be added to the question, and the other expert preferred the phrase “other way” to “other day.” Item 5 was therefore modified to make it clearer.
Response evaluation. The experts’ feedback about items 1 to 5 concerned language difficulties (such as the word “provocateur”). One of the experts indicated that “what person” in item 1 should be explained.
Based on the experts’ feedback, item 1 was split into two sub-questions in order to define the “person” in the item. The other items (2, 3, & 4) were changed accordingly. Item 5 received the same feedback from the experts and was changed accordingly. In addition, one of the experts stated that the phrase “had their feelings hurt” could be used instead of the phrase “got hurt”; item 5 was modified accordingly.
Enactment. The experts’ feedback about items 2 to 4 concerned language difficulties. One of the experts stated that items 2 and 4 should be reworded, that item 3 should use a better word than “appropriate,” and that item 5 involved a leading question. The other expert explained that item 4 was “a bit confusing in its wording.” As a result, items 1 to 4 were modified in terms of language difficulties. At the end of the scale, there was a box for experts to note overall omissions. One of the experts asked, “what happens if no conflict happened?” To resolve this problem, one more question was added at the beginning of the scale to account for situations where no conflict happened. Finally, one of the experts noted that clarifying who “that person” was would be important in a few questions. To address this problem, in such questions participants are asked to report what they think about (1) themselves, (2) the other person (the person you oppose or support), or (3) other people (e.g., friends or peers).
4.2.1.1 Summary. The expert evaluation results are provided in Table 4.5. As can be seen in the table, the experts commented that most items needed revision in terms of language appropriateness. This evidence confirms that addressing the language difficulties of the OSIP items was necessary for several sections. In addition, the table shows that interpretation items had the highest language appropriateness after enactment items; in contrast, encoding items had the lowest language appropriateness.
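The per-skill percentages reported in Table 4.5 can be reproduced with a short sketch. This is purely illustrative (it is not code used in the study) and assumes the decision rule implied by the table: an item’s language counts as appropriate only when both experts endorsed it.

```python
# Illustrative sketch: reproducing the "mean of language appropriateness"
# for one skill in Table 4.5.  An item counts as appropriate only when
# BOTH experts rated its language as appropriate ("Yes").
ratings = {  # encoding items from Table 4.5: (expert 1, expert 2)
    1: ("Yes", "Yes"), 2: ("No", "Yes"), 3: ("No", "Yes"), 4: ("No", "Yes"),
    5: ("No", "Yes"), 6: ("Yes", "No"), 7: ("No", "Yes"),
}

appropriate = [item for item, (e1, e2) in ratings.items() if e1 == e2 == "Yes"]
percent = 100 * len(appropriate) / len(ratings)
print(f"{len(appropriate)}/{len(ratings)} items -> {percent:.0f}%")  # 1/7 items -> 14%
```

Applied to each skill in turn, this rule yields the reported values (e.g., 1/7 = 14% for encoding and 3/5 = 60% for enactment).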
Table 4.5
Experts’ Evaluation of the Language Appropriateness of the OSIP

                                       Language appropriateness
Skills                 Item  Sub-item  Exp. 1   Exp. 2   Overall   Mean of language appropriateness
Encoding                                                           (1/7)100 = 14%
                       1               Yes      Yes      Yes
                       2               No       Yes      No
                       3               No       Yes      No
                       4               No       Yes      No
                       5               No       Yes      No
                       6               Yes      No       No
                       7               No       Yes      No
Interpretation                                                     (10/19)100 = 53%
                       1     a         Yes      Yes      Yes
                             b         Yes      Yes      Yes
                             c         Yes      Yes      Yes
                       2               Yes      Yes      Yes
                       3               No       Yes      No
                             a         Yes      Yes      Yes
                             b         Yes      Yes      Yes
                             c         Yes      Yes      Yes
                             d         Yes      Yes      Yes
                       4               No       No       No
                       5               No       No       No
                       6               Yes      Yes      Yes
                       7               Yes      Yes      Yes
                       8               No       Yes      No
                       9               No       Yes      No
                       10              No       Yes      No
                       11              No       Yes      No
                       12              No       Yes      No
                       13              No       Yes      No
                       14              No       No       No
Goal clarification                                                 (1/5)100 = 20%
                       1               No       Yes      No
                       2               No       No       No
                       3               No       Yes      No
                       4               No       Yes      No
                       5               Yes      Yes      Yes
Decision construction                                              (1/5)100 = 20%
                       1               No       Yes      No
                       2               No       Yes      No
                       3               No       Yes      No
                       4               Yes      Yes      Yes
                       5               No       No       No
Decision evaluation                                                (1/5)100 = 20%
                       1               No       Yes      No
                       2               Yes      Yes      Yes
                       3               No       Yes      No
                       4               No       Yes      No
                       5               No       Yes      No
Enactment                                                          (3/5)100 = 60%
                       1               Yes      Yes      Yes
                       2               No       Yes      No
                       3               Yes      Yes      Yes
                       4               No       Yes      No
                       5               Yes      Yes      Yes

4.2.2 Evidence from TAP Participants. In terms of the OSIP language difficulties, the results of the participants’ evaluation focused on clarity, complexity, and offensiveness. The adolescents’ evaluation of the OSIP scale covered the introductory section as well as all six sections (encoding, interpretation, goal clarification, decision construction, decision evaluation, and enactment or final response). The introductory questions ask whether participants have any experience with online aggression. Participants’ feedback was that these items were clear and neither complex nor offensive. However, two participants stated that the introductory questions could be shorter.
Given the importance of setting the stage for the questionnaire, it was not obvious how to make this section shorter; instead, it is recommended that the researcher administering the OSIP go over the introductory section carefully with the participants (in either a group or individual setting).
Encoding. The first section in the OSIP measure includes 9 items. Item 1 was identified as the “paying full attention” item within the memory and attention domain of the OSIP construct. As can be seen in the literature and the experts’ evaluation, this item is one of the most difficult to assess within the SIP model. A close examination showed that 86% of students rated item 1 as neither complex nor offensive, but according to participants’ comments, the item required minor clarification in terms of language difficulties. Two of the participants had difficulty understanding the meaning of “attention” in this context. One of the students said, “attention… I don’t know…I honestly don’t…ohm, but I think it is about how attentive you were during the online aggressive engagement”. As a result, the question was slightly reworded to: In your real or imagined online fight, how attentive were you? Item 2 (a new item added after the experts’ evaluation; see Appendix B), which focused on physical change (eyes were frozen and stopped moving), was deemed slightly complex. Ninety percent of participants rated this item as clear and neither complex nor offensive, but one participant had difficulty understanding the words relevant to “physical change” in the item. Item 3 needed minor revisions, given that one of the participants mentioned that “socially acceptable words” in this item could be replaced by a “positive” word.
Another participant said the phrase “online slang” could also be used instead of the phrase “socially acceptable saying or doing,” along with more updated examples such as “Instagram,” not “Facebook.” This item was adjusted to address all of these concerns. Items 5 and 6 were identified as clear and neither complex nor offensive; however, similar to the previous item, participants encouraged the inclusion of updated examples, and these items were revised accordingly. Item 7, which focused on participants’ memories of the aggression, was deemed unclear and hard to understand. For example, one of the participants said, “[it is] like thinking back on it in my mind.” The other participant stated, “[it is like] what was going along and what happened.” This item was therefore changed in an attempt to clarify it. In item 8, participants had difficulty understanding what was meant by the ‘last thing that was said or done.’ They commented that the author should make the item clear by specifying which person she was talking about (e.g., who is doing or saying something). This item, as a result, was slightly changed to address this clarification.
Interpretation. The second section in the OSIP measure is interpretation, and it includes 13 items (see Appendix B). Item 1 focused on the method of interpretation, and although this item was clear and neither complex nor offensive, participants suggested the addition of two other methods: ‘asking someone else to interfere in order to have another person’s perspective on the story’ and ‘jumping to conclusions.’ Item 3 focused on the cause of the problem, and participants’ feedback suggested that this item was slightly too complex. For example, one of the participants mentioned that the phrase ‘other person’ in this context should be clarified. Another participant explained that if the author used the phrase ‘as a result of’ instead of ‘due to,’ the item would be easier to understand.
The wording was changed accordingly.
Goal clarification. Items 1 to 5 in goal clarification focused on the purpose of the interaction. Based on the participants’ evaluation, all items were clear, not complex, and not offensive. However, participants indicated that item 5 needed minor revisions; item 5 was therefore made more concise. There was no feedback regarding the other sections of the OSIP in terms of language difficulties.
4.2.2.1 Summary. Table 4.6 summarizes the findings across the items and the participants’ responses. The outcomes showed that all encoding items were mostly clear, slightly complex, and not offensive. Similarly, all interpretation items were mostly clear, slightly complex, and not offensive. Note that participants supported all goal clarification, response construction, response evaluation, and enactment items (100%) as clear and neither complex nor offensive. As such, they are not included in Table 4.6.
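The per-item percentages reported in Table 4.6 follow the formula given in its footnote, (((A + B + C)/3)/7) × 100, where A, B, and C are the numbers of participants (out of seven) who rated an item clear, not complex, and not offensive, respectively. A minimal illustrative sketch (not code used in the study):

```python
# Illustrative sketch of footnote (a) in Table 4.6: the per-item mean of
# language appropriateness, computed from the three counts A, B, and C.
def mean_language_appropriateness(a, b, c, n_participants=7):
    """Average the three counts and express the result as a percentage of n."""
    return ((a + b + c) / 3) / n_participants * 100

# Encoding item 1 from Table 4.6: clarity 5, not complex 6, not offensive 7.
print(round(mean_language_appropriateness(5, 6, 7)))  # 86
```

The skill-level values in the table (e.g., 87% for encoding) are simply the averages of these per-item percentages.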
Table 4.6
Percent of Students who Provided Evidence about OSIP Items in terms of Clarity, Complexity, and Offensiveness

                                       Clarity    Not complex  Not offensive  Mean of language
Skills                 Item  Sub-item  (A, of 7)  (B, of 7)    (C, of 7)      appropriateness (a)
Encoding                                                                      87% (skill mean)
                       1               5          6            7              86%
                       2               6          6            7              90%
                       3               6          6            7              90%
                       4               6          6            7              90%
                       5               6          6            6              86%
                       6               5          6            7              86%
                       7               6          7            7              95%
                       8               4          7            7              86%
                       9               6          6            7              90%
Interpretation                                                                96% (skill mean)
                       1     a         7          7            7              100%
                             b         7          5            7              90%
                             c         7          5            7              90%
                             d         7          5            7              90%
                       2               7          6            7              95%
                       3     a–d       5          6            7              86% (each)
                       4     a         5          7            7              90%
                             b         5          7            7              90%
                             c         6          6            7              90%
                       5               7          7            7              100%
                       6     a         6          6            7              90%
                             b         6          6            7              90%
                             c         5          6            7              86%
                       7     a         7          7            7              100%
                             b         7          7            7              100%
                             c         6          7            7              95%
                       8–14  a–c       7          7            7              100% (each)
Goal clarification                                                            99% (skill mean)
                       1–4             7          7            7              100% (each)
                       5               6          6            7              90%
                       6     a–aa      7          7            7              100% (each)
(Table 4.6, continued)
                       6     bb–ff     7          7            7              100% (each)
Decision construction                                                         100% (skill mean)
                       1               7          7            7              100%
                       2               7          7            7              100%
                       3               7          7            7              100%
                       5               7          7            7              100%
                       6               7          7            7              100%
                       7               7          7            7              100%
                       8               7          7            7              100%
Decision evaluation                                                           99% (skill mean)
                       1     a, b      7          7            7              100% (each)
                       2     a, b      7          7            7              100% (each)
                       3     a         6          7            7              95%
                             b         6          7            7              95%
                       4     a         6          7            7              95%
                             b         7          7            7              100%
                       5     a, b      7          7            7              100% (each)
                       6     a, b      7          7            7              100% (each)
                       7     a, b      7          7            7              100% (each)
                       8     a, b      7          7            7              100% (each)
Enactment                                                                     100% (skill mean)
                       1               7          7            7              100%
                       2               7          7            7              100%
                       3               7          7            7              100%
                       4               7          7            7              100%
                       5               7          7            7              100%
a. (((A + B + C)/3)/7)*100

4.3 Do Items Engage Participants to Respond Accurately about Their Social Information Processes?
To support research question 3, the main source of evidence was obtained through participants’ assessment of the OSIP measure, using a TAP.
4.3.1 Evidence from TAP Participants. When participants were asked to respond to the introductory question – to think about their last cyberbullying experience – most of the seven participants responded that they did not have any cyberbullying experience (although they all understood the construct). For example, one of the participants interpreted such an experience as harmful and mean behavior that may lead an adolescent to commit suicide or to kill someone else. Another participant stated that joking around might hurt a person’s feelings as well.
One of the participants, after reading the question, stated:
No, I haven’t [experienced being involved in online aggression], lots of my friends aren’t very like very aggressive, like they don’t fight a lot…Maybe [I have] a tiny bit, but nothing like significant that hurts….Like saying you hurt someone to the point that they commit suicide…but I do [make fun] a lot with my friends, like joke around and everything.
The other participant, in response to this question, said:
Well I don’t know because I’ve joked with friend but I know it’s out there, like Amanda Todd and stuff.
However, when I provided a broader definition of online aggression or cyberbullying – behavior intended to harm or hurt someone else using communication technologies – they were much more likely to agree that they had engaged in it (but did not see the joking behavior described above as part of this). In addition, they acknowledged that they had witnessed online aggression, for example, seeing one of their friends post a mean comment online. They had also heard about online aggression, especially extreme cases that were in the media, such as the cyberbullying incident involving Amanda Todd. Based on these results, the introduction section was revamped. First, given the lack of a common understanding of online aggression, the OSIP now begins with a definition of online aggression in plain language. In addition, participants who have not been directly involved in online aggression are asked to think of an event that they have only witnessed or heard about and imagine it as though it happened to them. They are also asked to indicate how they will respond to the questionnaire (based on a real event they participated in or an imagined event). In this way, I will maximize the number of individuals who can use the OSIP, and I can also explore differences in SIP for imagined versus real events.
Because of the change to the introduction, the item stem for each question also had to change. Instead of each item starting with the stem ‘In your last online aggression experience,’ it now reads ‘In your real or imagined online fight,’ to capture the participants who will be responding to the OSIP based on an imagined event. Participants’ interpretation and reporting of the online social information processing items showed that, while answering the items, they were engaged in following the multiple social information processing skills involved in aggressive online contexts. For example, when participants were answering the questions for each section, I asked them, “What are the question/s about?” They then explained the content of each item or section. More specifically, one of the students explained the items for each section like this:
This section (encoding) is about how you feel,… paying attention to something,… how much you remember something,… focus on something,… and how engaged in the circumstance, ….this section (interpretation) is about understanding the situation,….asking about the other person,…about peers,… it is about why such a thing happened,….the cause of a problem,…wanting to know the other person,… intended mean things,…being excluded,… thinking about getting you to do something,…..this section (goal clarification) is about purposely try to do something,….this section (decision construction) is about deciding what to do, I guess,….this is about what the other person thinking [to do]….this section (decision evaluation) is about yourself…this section (enactment) is about what decision you make….
In addition, another student explained his understanding of the online social information processing skills and elaborated his conception of these cognitive skills/tasks in the following way:
This section (encoding) is about how like you hold sort of like a grudge,…what details do you remember, …focus on something, …like writing something on my girlfriend’s wall,…this section (interpretation) is about understanding and interpretation of online aggressive behavior, …about the cause of such a problem, about fight,…about intended mean things, rumor or gossip, …about the other person, …peers, …about getting you to do something, …about your saying or doing, …about the other person, …this section (goal clarification) is about thinking about such circumstance and to clarify your purpose, …is about purposefully try to say exactly what you want, …this section (construct decision) is about your decision be about to [say/do] …this section (decision evaluation) is like evaluation,…about the right, …or to be bad, …hard or easy, …for you…for the other person, …this section (enactment) is about how you act or behave….
The other student also explained what she understood about the OSIP items and skills when I asked her to talk about the questions. She clearly stated:
This section (encoding) is about memory, …not directly,…this section (interpretation) is about how I interpret the event, …this section (goal clarification) is about thinking about the purpose of having this type of aggressive behavior, …this section (construct decision) is about the decision-making process, …this section (decision evaluation) is about evaluation of the various options before the decision is actually made,…this section (enactment) is about my response to online aggressive behavior….
Participants’ statements provided evidence about how they were engaged in the SIP cognitive skills/tasks and how they understood each skill when they were answering the OSIP questions.
4.3.1.1 Summary.
The average number of participants' responses for each point on each skill of the OSIP scale was calculated. An overview of the proportion of participants who responded to each point of the OSIP response scale (compiled for each OSIP skill) is presented in Table 4.7. As can be seen, the variety in responses (i.e., that, for the most part, the full range of the scale was used) suggests that participants were actively thinking about and responding to the items in the intended way. This provides evidence that they understood the questions, as well as the meaning and implications of the different response options. Further support for this comes from the scatter patterns of participants' answers for each response option for each skill of the OSIP scale, illustrated in Figure 4.1. A roughly normal curve pattern was notable for the Encoding, Interpretation, Goal Clarification, Decision Evaluation, and Enactment skills.

Table 4.7 Overview of the Proportion of Participants Responding to Each Point on the Scale, Averaged for Each Skill

                          Response Options (0 = never; 6 = all the time)
Skills                    0     1     2     3     4     5     6
Encoding                  —     1/7   2/7   1/7   1/7   1/7   1/7
Interpretation            —     1/7   2/7   1/7   1/7   1/7   1/7
Goal Clarification        1/7   —     1/7   1/7   1/7   2/7   1/7
Decision Construction     2/7   1/7   1/7   1/7   1/7   1/7   —
Decision Evaluation       —     —     1/7   2/7   3/7   1/7   0
Enactment                 —     1/7   1/7   2/7   1/7   1/7   1/7

Figure 4.1. Scatter plot of the participants' responses to each point on the OSIP response scale for each skill. [Figure not reproduced: the x-axis shows the response options (0 = never to 6 = all the time), and the plotted series are Encoding, Interpretation, Goal Clarification, Decision Construction, Decision Evaluation, and Enactment.]

Chapter 5: Discussion

This research investigated how three evidence-based validity methods (the item development process, expert reviews, and TAPs) contribute to the validity investigation of the online social information processing measure (OSIP).
In this chapter, the research findings are compared to the literature presented in Chapter 2. Specifically, the current study established three main points: (a) the OSIP has content validity, supported by evidence from the item development process, including content definition of the tasks/skills, test specification, and item editing; (b) the construct validity of the OSIP was supported by expert review; and (c) the OSIP's validity was also supported by TAP evidence. In particular, this research examined how the item development process, expert reviews, and TAPs provide evidence that goes above and beyond traditional validity assessment. This chapter begins with a summary of the major findings for each research question and discusses the interpretation of those findings, then explains the strengths and limitations of the current study and its implications for the online aggression framework. The chapter concludes with possible future directions for related research.

5.1 Major Findings of the Research

Online aggression is considered a damaging, hurtful behavior (Tokunaga, 2010). It has been argued that adolescents who engage in cyberbullying are more likely to behave aggressively when they have deficits in encoding social cues, biases in interpreting social situations, a willingness to focus on instrumental goals, inclinations toward evaluating more aggressive behaviors favorably, and tendencies to respond aggressively (Runions, Shapka, Dooley, & Modecki, 2012). While research has shown that online aggressive behaviour is common among adolescents, the literature is limited (Tokunaga, 2010). Indeed, a lack of agreement among scholars about the definition, motivations, characteristics, domains, and underlying factors of online aggression makes it difficult to develop a psychometrically sound assessment tool based on a social information processing model (SIP; Runions et al., 2012).
Understanding how online aggression affects adolescents' behavior and actions is important for researchers and practitioners. This study therefore aimed to examine the construct validity of the OSIP by providing evidence from the OSIP item development process, experts' evaluation, and TAPs.

5.1.1 Outcomes of the OSIP item development process. The item development process activities for the OSIP provided satisfactory construct validity evidence. This process helped the author define the OSIP's purposes and the uses and misuses of the test, determine the number of skills and the domain of the test, and edit the items for alignment with the targeted construct of the test. However, the findings also suggest that more evidence is required regarding structural, internal, and external construct validity. This study examined three segments (content definition, test specification, and item editing) of the item development process for the OSIP measure.

5.1.1.1 Content Definition as Validity Evidence. During this study, a clear statement about the purpose of the OSIP measure was provided. This statement addressed what the test is intended to measure. Articulating the purpose of the OSIP test with relevant interpretations provided rich evidence for validating the content of the OSIP test. Moreover, the potential uses and misuses of the OSIP were addressed as sources of validity and invalidity (AERA et al., 1999; Sireci, 2013).

5.1.1.2 Test Specification as Validity Evidence. The current study created the OSIP test specification table based on detailed information about the different social information processing skills. This procedure provided evidence of closer alignment with the targeted construct of the OSIP test.

5.1.1.3 Item Editing as Validity Evidence. The OSIP item editing stage mostly involved looking at item formats, acceptable acronyms, proofreading, clarity, and the appearance of the items.
Throughout this stage, in accordance with Baranowski (2006), effort was made to maintain the exact meaning of each item. During the editing of the OSIP items, documentation was kept about who edited the items, how items were edited, what changes were made, what questions were raised, and how the editing process was followed. Such documentation covered the OSIP texts, items, and response options and provided further content validity evidence for the OSIP.

5.1.2 Outcomes of evaluating the OSIP items and their language appropriateness through expert review. In the current study, in accordance with Berk (1990), an expert evaluation method was used in the initial stages of the OSIP construction to assess the underlying construct of the OSIP items. Experts evaluated the OSIP items, response options, and language appropriateness by providing indications and suggestions for change. In this way, experts provided documentation about the OSIP domain specification and the accuracy of the content domain structure, the item format, the appropriateness of the content of the test, and the representativeness of the content coverage in relation to the OSIP domain (Berk, 1990). In addition, experts identified problematic items and suggested clarifications for some subscales and items, as well as some explanations about the clarity, complexity, and offensiveness of the language used in the items. Overall, this procedure provided further construct validity evidence for the OSIP, and the outcomes of the expert evaluation resulted in modifying the OSIP measure.

5.1.3 Outcomes of the OSIP participants by TAPs. A TAP was used in this study during the last stage of the OSIP construction. The purpose of using TAPs was to assess whether the OSIP was aligned with its construct and use (see Ericsson & Simon, 1984, 1993; van Someren, Barnard, & Sandberg, 1994).
TAPs served to explore the ongoing social cognitive processes of adolescents (Ericsson & Simon, 1993) while they were thinking aloud and responding to the OSIP items (Ericsson, 2003; Ericsson & Simon, 1993). This method provided extensive validity evidence about how the OSIP assessment was perceived (Bernardini, 2002; Ercikan et al., 2010). The outcome of the TAPs showed how adolescents thought about the OSIP measure, processed the OSIP stages and their items, and responded to the items. Evidence collected through this method supported the construct validity of the OSIP, and its outcome resulted in the final version of the OSIP.

5.2 Implications

The results of this study have several implications for the use of the item development process, expert reviews, and TAPs in validity investigations, particularly for assessments of social information processing skills in online aggressive contexts. With respect to the first research question, the results showed that the item development process provided content validity evidence: by analyzing the test content definition, test specification, and item editing, the OSIP was shown to align with the theoretical definition of the construct. With respect to the second research question, the results showed that expert evaluation provided content and response process validity evidence for the OSIP measure by collecting data on the domain specification, content structure, item format, appropriateness of the content, representativeness of the content, and problematic items of the OSIP. Experts also assessed difficulties in the response options of the OSIP measure. Overall, the experts in the current study were considered valid sources for the OSIP score interpretation and provided content and response process evidence for the OSIP. However, evidence from expert reviews cannot be considered sufficient evidence of the construct validity of the OSIP (Ercikan et al., 2010).
As a result, in this study, TAPs were used as another source of validity evidence for the OSIP, to answer the third research question. With this method, participants were asked to verbalize their thoughts while completing the OSIP. As such, the TAPs showed that the OSIP assessed, in situ, individuals' thinking about the various SIP skills within online aggressive contexts. In general, the TAP was considered a complementary source of validity evidence that could speak to the response processes of the OSIP.

5.3 Contribution

This research has shown that the item development process, expert review, and TAPs can be used as sources of validity evidence that complement other types of evidence, including psychometric information. Such evidence contributes to the overall validity argument with respect to test content and response processes. The item development process, expert review, and TAPs contribute to the interpretive argument about the inferences of an assessment by showing that adolescents would engage the skills targeted by the test items in online aggressive contexts. This research makes an important contribution to the field of measurement because it shows that the item development process, expert review, and TAPs can serve as significant sources of validity evidence specific to social information processing in online contexts. These contributions are substantial for validity research because there is a growing need for educational assessments that accurately measure the social information processing skills of adolescents. As with any assessment, the interpretations of scores need to be validated. The item development process, expert review, and TAPs can help the researcher validate the OSIP assessment, in conjunction with other validation techniques.

5.4 Limitations

There were a number of limitations to this study. First, the results cannot be generalized beyond the given population.
Having two experts and seven participants was not adequate to provide evidence that strongly or clearly supported (or refuted) the propositions I studied. An additional limitation is that this study focused on the cognitive aspects of processing social information. However, emotional information is associated with this process as well. Because emotional control skills were excluded as beyond the scope of this work, the question arises whether this scale can be used to measure other types of skills. Finally, this work focused on adolescents in grades 9 to 12. Although previous studies have included children and young adolescents, high school students represent one particular developmental stage, so, given the small sample size, the question remains whether these results generalize to children and younger adolescents.

5.5 Future Directions

Continuing to explore the evidence-based construct validity of the OSIP items requires a number of additional research studies. The current study shed light on only one of the many aspects of construct validity, in terms of content and response processes. The results of this study confirmed that the validity of a new measure can be assessed from the initial steps onward through evidence-based validity methods, including the item development process, expert review, and TAPs. However, an important future direction for this work will be to investigate emotional aspects of social information processing skills. Likewise, future research should investigate how the item development process, expert review, and TAPs can be used to validate assessments of social information processing skills for children and adolescents in other developmental stages.
5.6 Summary

In this study, a new measure, the OSIP, was developed, and evidence of its validity was collected from the item development process, expert evaluation, and participant assessment methods. Researchers can similarly use the three evidence-based validity methods of this research to collect evidence while developing a new measure. They can also use this measure to learn more about the underlying factors of online aggression, especially among adolescents.

References

AERA, APA, & NCME (1999). Standards for educational and psychological testing. Washington, DC: Authors.

Akhtar, N., & Bradley, J. (1991). Social information processing deficits of aggressive children: Present findings and implications for social skills training. Clinical Psychology Review, 11, 621-644.

Anderson, N. J., Bachman, L., Perkins, K., & Cohen, A. (1991). An exploratory study into the construct validity of a reading comprehension test: Triangulation of data sources. Language Testing, 8, 41-66. doi:10.1177/026553229100800104

Anderson, C. A., & Huesmann, L. R. (2003). Human aggression: A social-cognitive view. London: Sage Publications.

Ang, R. P., & Goh, D. H. (2010). Cyberbullying among adolescents: The role of affective and cognitive empathy, and gender. Child Psychiatry and Human Development, 41, 387-397. doi:10.1007/s10578-010-0176-3

Anestis, M. D., Anestis, J. C., Selby, E. A., & Joiner, T. E. (2009). Anger rumination across forms of aggression. Personality and Individual Differences, 46, 192-196.

Arsenio, W. F., Adams, E., & Gold, J. (2009). Social information processing, moral reasoning, and emotion attributions: Relations with adolescents' reactive and proactive aggression. Child Development, 80, 1739-1755.

Babcock, J. C., Green, C. E., & Webb, S. A. (2008). Decoding deficits of different types of batterers during presentation of facial affect slides. Journal of Family Violence, 23. doi:10.1007/s10896-008-9151-1

Bartlett, J.
E., Kotrlik, J. W., & Higgins, C. C. (2001). Organizational research: Determining appropriate sample size in survey research. Information Technology, Learning, and Performance Journal, 19, 43-50.

Bandura, A. (1973). Aggression: A social learning analysis. Englewood Cliffs, NJ: Prentice-Hall.

Bandura, A. (1986). Social foundations of thought and action: A social cognitive theory. Englewood Cliffs, NJ: Prentice-Hall.

Bandura, A. (1994). Self-efficacy. In V. S. Ramachaudran (Ed.), Encyclopedia of human behavior (pp. 71-81). New York: Academic Press.

Basquill, M., Nezu, C. M., Nezu, A. M., & Klein, T. L. (2004). Aggression-related hostility bias and social problem-solving deficits in adult males with mental retardation. American Journal on Mental Retardation, 109, 255-263.

Bell, D. J., Luebbe, A. M., Swenson, L. P., & Allwood, M. A. (2009). The children's evaluation of everyday social encounters questionnaire: Comprehensive assessment of children's social information processing and its relation to internalizing problems. Journal of Clinical Child & Adolescent Psychology, 38, 705-720.

Benson, B. A. (1994). Anger management training: A self control program for persons with mild mental retardation. In N. Bouras (Ed.), Mental health in mental retardation (pp. 224-232). Cambridge: Cambridge University Press.

Bentler, P. M., & Chou, C. P. (1987). Practical issues in structural modeling. Sociological Methods and Research, 16, 78-117.

Bernardini, S. (2002). Think-aloud protocols in translation research: Achievements, limits, future prospects. Target, 13, 231-263.

Berk, R. A. (1990). Importance of expert judgment in content-related validity evidence. Western Journal of Nursing Research, 12, 659-671.

Boelen, P. A., Hout, M. A., & Bout, J. (2008). The factor structure of posttraumatic stress disorder symptoms among bereaved individuals: A confirmatory factor analysis study. Journal of Anxiety Disorders, 22, 1377-1383.

Bru, E., Murberg, T. A., & Stephens, P. (2001).
Social support, negative life events and pupil misbehavior among young Norwegian adolescents. Journal of Adolescence, 24, 715-727.

Brown, T. (2010). Construct validation: A unitary concept for occupational therapy assessment and measurement. Hong Kong Journal of Occupational Therapy, 20, 30-42.

Browne, M. W., & Cudeck, R. (1993). Alternative ways of assessing model fit. In K. A. Bollen & J. S. Long (Eds.), Testing structural equation models (pp. 136-162). Newbury Park, CA: Sage.

Brown, T. A. (2006). Confirmatory factor analysis for applied research. New York, NY: Guilford.

Bryan, T., Sullivan-Burstein, K., & Mathur, S. (1998). The influence of affect on social-information processing. Journal of Learning Disabilities, 31, 418-426.

Camerer, C. F., & Johnson, E. J. (1991). The process-performance paradox in expert judgment: How can experts know so much and predict so badly? In K. A. Ericsson & J. Smith (Eds.), Toward a general theory of expertise (pp. 195-217). Cambridge, England: Cambridge University Press.

Campbell, M. A. (2005). Cyber bullying: An old problem in a new guise? Australian Journal of Guidance and Counseling, 15, 68-76.

Campbell, M. A., Slee, P. T., Spears, B., Butler, D., & Kift, S. (2013). Do cyberbullies suffer too? Cyberbullies' perceptions of the harm they cause to others and to their own mental health. School Psychology International, 1-17. doi:10.1177/0143034313479698

Cantrell, V. L., & Prinz, R. J. (1985). Multiple perspectives of rejected, neglected, and accepted children: Relations between sociometric status and behavioral characteristics. Journal of Consulting and Clinical Psychology, 53, 884-889.

Cizek, G. J., Rosenberg, S. L., & Koons, H. H. (2008). Sources of validity evidence for educational and psychological tests. Educational and Psychological Measurement, 68, 397-412. doi:10.1177/0013164407310130

Churchill, G. A. (1979). A paradigm for developing better measures of marketing constructs. Journal of Marketing Research, 16, 64-73.
Coie, J. D., & Dodge, K. A. (1988). Multiple sources of data on social behavior and social status. Child Development, 59, 815-829.

Crick, N. R. (1995). Relational aggression: The role of intent attributions, feelings of distress, and provocation type. Development and Psychopathology, 7, 313-322.

Crick, N. R., & Dodge, K. A. (1989). Children's perceptions of peer entry and conflict situations: Social strategies, goals, and outcome expectations. In B. Schneider, J. Nadel, G. Attili, & R. Weissberg (Eds.), Social competence in developmental perspective (pp. 396-399). Boston: Kluwer-Nijhoff.

Crick, N. R., & Dodge, K. A. (1994). A review and reformulation of social information-processing mechanisms in children's social adjustment. Psychological Bulletin, 115, 74-101.

Crick, N. R., & Dodge, K. A. (1996). Social information-processing mechanisms in reactive and proactive aggression. Child Development, 67, 993.

Crick, N. R., Grotpeter, J. K., & Bigbee, M. (2002). Relationally and physically aggressive children's intent attributions and feelings of distress for relational and instrumental peer provocations. Child Development, 73, 1134-1142.

Crozier, J. C., Dodge, K. A., Fontaine, R. G., Lansford, J. E., Bates, J. E., Pettit, G. S., & Levenson, R. W. (2008). Social information processing and cardiac predictors of adolescent antisocial behavior. Journal of Abnormal Psychology, 117, 253-267.

Collins, K., & Bell, R. (1997). Personality and aggression: The dissipation-rumination scale. Personality and Individual Differences, 22, 751-755.

Coccaro, E. F., Noblett, K. L., & McCloskey, M. S. (2009). Attributional and emotional responses to socially ambiguous cues: Validation of a new assessment of social/emotional information processing in healthy adults and impulsive aggressive patients. Journal of Psychiatric Research, 43, 915-925. doi:10.1016/j.jpsychires.2009.01.012

Cook, D. A., & Beckman, T. J. (2006).
Current concepts in validity and reliability for psychometric instruments: Theory and application. The American Journal of Medicine, 119, 166.e7-166.e16.

DeCoster, J. (1998). Overview of factor analysis. Retrieved March 2, 2014, from www.stat-help.com/notes.html

Derks, D., Bos, A. E. R., & Grumbkow, J. (2008). Emoticons and online message interpretation. Social Science Computer Review, 26, 3. doi:10.1177/0894439307311611

Dodge, K. A. (1980). Social cognition and children's aggressive behavior. Child Development, 51, 162-170.

Dodge, K. A. (1986). A social information processing model of social competence in children. In M. Perlmutter (Ed.), Minnesota symposium on child psychology (pp. 77-125). Hillsdale, NJ: Erlbaum.

Dodge, K. A. (1993). Social-cognitive mechanisms in the development of conduct disorder and depression. Annual Review of Psychology, 44, 559-584. doi:10.1146/annurev.psych.44.1.559

Dodge, K. A. (2003). Do social information-processing patterns mediate aggressive behavior? In B. B. Lahey, T. E. Moffitt, & A. Caspi (Eds.), Causes of conduct disorder and juvenile delinquency (pp. 254-274). New York: The Guilford Press.

Dodge, K. A., Bates, J., & Pettit, G. (1990). Mechanisms in the cycle of violence. Science, 250, 1678-1683.

Dodge, K. A., & Coie, J. D. (1987). Social-information-processing factors in reactive and proactive aggression in children's peer groups. Journal of Personality and Social Psychology, 53, 1146-1158.

Dodge, K. A., Coie, J. D., & Lynam, D. R. (2006). Aggression and antisocial behavior in youth. In W. Damon & N. Eisenberg (Eds.), Handbook of child psychology: Social, emotional, and personality development (pp. 719-788). New York: Wiley.

Dodge, K. A., & Frame, C. L. (1982). Social cognitive biases and deficits in aggressive boys. Child Development, 53, 620-635.

Dodge, K. A., Godwin, J., & The Conduct Problems Prevention Research Group. (2013).
Social-information-processing patterns mediate the impact of preventive intervention on adolescent antisocial behavior. Psychological Science, 24, 456-465. doi:10.1177/0956797612457394

Dodge, K. A., Laird, R., Lochman, J. E., & Zelli, A. (2002). Multidimensional latent-construct analysis of children's social information processing patterns: Correlations with aggressive behavior problems. Psychological Assessment, 14, 60-73.

Dodge, K. A., & Newman, J. P. (1981). Biased decision-making processes in aggressive boys. Journal of Abnormal Psychology, 90, 375-379.

Dodge, K. A., Murphy, R. R., & Buchsbaum, K. (1984). The assessment of intention-cue detection skills in children: Implications for developmental psychopathology. Child Development, 55, 163-173.

Dodge, K. A., & Price, J. M. (1994). On the relation between social information processing and socially competent behavior in early school-aged children. Child Development, 65, 1379-1385.

Dodge, K. A., Price, J. M., Bachorowski, J., & Newman, J. P. (1990). Hostile attributional biases in severely aggressive adolescents. Journal of Abnormal Psychology, 99, 385-392.

Dodge, K. A., & Rabiner, D. L. (2004). Returning to roots: On social information processing and moral development. Child Development, 75, 1003-1008.

Dodge, K. A., & Schwartz, D. (1997). Social information processing mechanisms in aggressive behavior. In D. M. Stoff, J. Breiling, & J. D. Maser (Eds.), Handbook of antisocial behavior (pp. 171-180). New York: Wiley.

Dodge, K. A., & Somberg, D. (1987). Hostile attributional biases are exacerbated under conditions of threat to the self. Child Development, 58, 213-224.

Dodge, K. A., & Tomlin, A. (1987). Cue utilization as a mechanism of attributional bias in aggressive children. Social Cognition, 5, 280-300.

Dooley, J., Pyzalski, J., & Cross, D. (2009). Cyberbullying versus face-to-face bullying: A theoretical and conceptual review. Journal of Psychology, 217, 182-188. doi:10.1027/0044-3409.217.4.182
Downing, S. M. (2003). Validity: On the meaningful interpretation of assessment data. Medical Education, 37, 830-837.

Downing, S. M. (2006). Twelve steps for effective test development. In S. M. Downing & T. M. Haladyna (Eds.), Handbook of test development (pp. 3-25). Mahwah, NJ: Lawrence Erlbaum Associates.

Downing, S. M., & Haladyna, T. M. (1997). Test item development: Validity evidence from quality assurance procedures. Applied Measurement in Education, 10, 61-82.

Ercikan, K., Arim, R., Law, D., Domene, J., Gagnon, F., & Lacroix, S. (2010). Application of think aloud protocols for examining and confirming sources of differential item functioning identified by expert reviews. Educational Measurement: Issues and Practice, 29, 24-35.

Ericsson, K. A. (2006). Protocol analysis and expert thought: Concurrent verbalizations of thinking during experts' performance on representative tasks. In K. A. Ericsson, N. Charness, P. J. Feltovich, & R. Hoffman (Eds.), Handbook of expertise and expert performance (pp. 223-241). New York: Cambridge University Press.

Ericsson, K. A., & Simon, H. A. (1984). Protocol analysis. Cambridge, MA: MIT Press.

Ericsson, K. A., & Simon, H. A. (1998). How to study thinking in everyday life: Contrasting think-aloud protocols with descriptions and explanations of thinking. Mind, Culture, and Activity, 5, 178-186.

Finney, S. J., & DiStefano, C. (2006). Non-normal and categorical data in structural equation modeling. In G. R. Hancock & R. O. Mueller (Eds.), Structural equation modeling: A second course (pp. 269-314). Greenwich, CT: Information Age Publishing.

Flora, D. B., Labrish, C., & Chalmers, R. P. (2012).
Old and new ideas for data screening and assumption testing for exploratory and confirmatory factor analysis. Frontiers in Psychology, 3, 55.

Fontaine, R. G., Burks, V. S., & Dodge, K. A. (2002). Response decision processes and externalizing behavior problems in adolescents. Development and Psychopathology, 14, 107-122.

Fontaine, R. G., & Dodge, K. A. (2006). Real-time decision making and aggressive behavior in youth: A heuristic model of response evaluation and decision (RED). Aggressive Behavior, 32, 604-624.

Fontaine, R. G., Tanha, M., Yang, C., Dodge, K. A., Bates, J. E., & Pettit, G. S. (2010). Does response evaluation and decision (RED) mediate the relation between hostile attributional style and antisocial behavior in adolescence? Journal of Abnormal Child Psychology, 38, 615-626.

Fontaine, R. G., Yang, C., Dodge, K. A., Pettit, G. S., & Bates, J. E. (2009). Development of response evaluation and decision (RED) and antisocial behavior in childhood and adolescence. Developmental Psychology, 45, 447-459.

Goodwin, L., & Leech, N. (2003). The meaning of validity in the new standards for educational and psychological testing: Implications for measurement courses. Measurement and Evaluation in Counseling and Development, 36, 181-191.

Gini, G., Pozzoli, T., & Hymel, S. (2013). Moral disengagement among children and youth: A meta-analytic review of links to aggressive behavior. Aggressive Behavior, 40, 56-68. doi:10.1002/ab.21502

Gross, E. F., Juvonen, J., & Gable, S. L. (2002). Internet use and well-being in adolescence. Journal of Social Issues, 58, 75-90.

Haynes, S. N., Richards, D. C. S., & Kubany, E. S. (1995). Content validity in psychological assessment: A functional approach to concepts and methods. Psychological Assessment, 7, 238-247.

Halligan, S. L., & Philips, K. J. (2010). Are you thinking what I'm thinking? Peer group similarities in adolescent hostile attribution tendencies. Developmental Psychology, 46, 1385-1388.
doi:10.1037/a0020383

Hambleton, R. K., & Rogers, H. J. (1989). Detecting potentially biased test items: Comparison of IRT area and Mantel-Haenszel methods. Applied Measurement in Education, 2(4), 313-334.

Hessels, C., Hanenberg, D., Orobio de Castro, B., & Aken, M. A. G. (2014). Relationships: Empirical contribution: Understanding personality pathology in adolescents: The five factor model of personality and social information processing. Journal of Personality Disorders, 28, 121-142.

Hancock, G., & Freeman, M. J. (2001). Power and sample size for the root mean square error of approximation test of not close fit in structural equation modeling. Educational and Psychological Measurement, 61, 741-758. doi:10.1177/00131640121971491

Hawkins, J. D., & Lishner, D. M. (1987). Schooling and delinquency. In E. H. Johnson (Ed.), Handbook on crime and delinquency prevention (pp. 179-221). New York: Greenwood Press.

Harper, B. D., Lemerise, E. A., & Caverly, S. L. (2010). The effect of induced mood on children's social information processing: Goal clarification and response decision. Journal of Abnormal Child Psychology, 57, 575-586.

Hochlehnert, A., Brass, K., Möltner, A., Schultz, J. H., Norcini, J., Tekian, A., et al. (2012). Good exams made easy: The item management system for multiple examination formats. BMC Medical Education, 12, 63.

Holgado-Tello, F., Chacón-Moscoso, S., Barbero-García, I., & Vila-Abad, E. (2010). Polychoric versus Pearson correlations in exploratory and confirmatory factor analysis of ordinal variables. Quality & Quantity, 44, 153-166.

Hoyle, R. H., & Duvall, J. L. (2004). Determining the number of factors in exploratory and confirmatory factor analysis. In D. Kaplan (Ed.), Handbook of quantitative methodology for the social sciences (pp. 301-315). Thousand Oaks, CA: Sage Publications.

Horsley, T. A., Orobio de Castro, B., & Van der Schoot, M. (2010).
In the eye of the beholder: Eye-tracking assessment of social information processing in aggressive behavior. Journal of Abnormal Child Psychology, 38, 587-599. doi:10.1007/s10802-009-9361-x

Hu, L., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling, 6, 1-55.

Hubley, A. M., & Zumbo, B. D. (1996). A dialectic on validity: Where we have been and where we are going. Journal of General Psychology, 123, 207-215.

Huesmann, L. R. (1988). An information-processing model for the development of aggression. Aggressive Behavior, 14, 13-24.

Huesmann, L. R., & Guerra, N. G. (1997). Children's normative beliefs about aggression and aggressive behavior. Journal of Personality and Social Psychology, 72, 408-419.

Hughes, J. N., Hart, M. T., & Grossman, P. B. (1993, August). Development and validation of an interview measure of social cognitive skills. Paper presented at the annual meeting of the American Psychological Association, Toronto.

Hughes, J. N., Meehan, B., & Cavell, T. (2004). Development and validation of a gender-balanced measure of aggression-relevant social cognition. Journal of Clinical Child and Adolescent Psychology, 33, 292-302. doi:10.1207/s15374424jccp3302_11

Jahoda, A., Pert, C., & Trower, P. (2006). Frequent aggression and attribution of hostile intent in people with mild to moderate intellectual disabilities: An empirical investigation. American Journal on Mental Retardation, 111, 90-99.

Juvonen, J., & Gross, E. F. (2008). Extending the school grounds? Bullying experiences in cyberspace. The Journal of School Health, 78, 496-505.

Kane, M. T. (1992). An argument-based approach to validity. Psychological Bulletin, 112, 527-535.

Kane, M. T. (2001). Current concerns in validity theory. Journal of Educational Measurement, 38, 319-342. doi:10.1111/j.1745-3984.2001.tb01130.x

Kane, M. T. (2006). Validation. In R. L.
Brennan (Ed.), Educational measurement (4th ed., pp. 17-64). Washington, DC: American Council on Education.
Keil, V., & Price, J. M. (2009). Social information-processing patterns of maltreated children in two social domains. Journal of Applied Developmental Psychology, 30, 43-52.
Kuhn, D. (2004). Adolescent thinking. In R. M. Lerner & L. Steinberg (Eds.), Handbook of adolescent development (pp. 152-188). Hoboken, NJ: Wiley.
Kupersmidt, J., Stelter, R., & Dodge, K. A. (2011). Development and validation of the social information processing application: A Web-based measure of social information processing patterns in elementary school-age boys. Psychological Assessment, 23, 834-847.
Larkin, P., Jahoda, A., & MacMahon, K. (2013). The social information processing model as a framework for explaining frequent aggression in adults with mild to moderate intellectual disabilities: A systematic review of the evidence. Journal of Applied Research in Intellectual Disabilities, 26, 447-465. doi:10.1111/jar.12031
Lansford, J. E., Malone, P. S., Dodge, K. A., Crozier, J. C., Pettit, G. S., & Bates, J. E. (2006). A 12-year prospective study of patterns of social information processing problems and externalizing behaviors. Journal of Abnormal Child Psychology, 34, 715-724.
Laverty, S. M. (2003). Hermeneutic phenomenology and phenomenology: A comparison of historical and methodological considerations. International Journal of Qualitative Methods, 2(3), Article 3. Retrieved March 30, 2006, from http://www.ualberta.ca/~iiqm/backissues/2_3final/pdf/laverty.pdf
Law, D., Shapka, J., Domene, J., & Gagne, M. (2012a). Are cyberbullies really bullies? An investigation of reactive and proactive online aggression. Computers in Human Behavior, 28, 664-672. doi:10.1016/j.chb.2011.11.013
Law, D., Shapka, J., Hymel, S., Olson, B., & Waterhouse, T. (2012b). The changing face of bullying: An empirical comparison between traditional and Internet bullying and victimization.
Computers in Human Behavior, 28, 226-232. doi:10.1016/j.chb.2011.09.004
Lenhart, A., Ling, R., Campbell, S., & Purcell, K. (2010a). Teens and mobile phones: Text messaging explodes as teens embrace it as the centerpiece of their communication strategies with friends. Retrieved from http://pewinternetgroup.org/Reports/2010/Teens-and-Mobile-Phones.aspx
Lenhart, A., Purcell, K., Smith, A., & Zickuhr, K. (2010b). Social media & mobile internet use among teens and young adults. Retrieved from http://pewinternetgroup.org/Reports/2010/Social-Media-and-Young-Adults.aspx
Loeber, R., & Dishion, T. J. (1983). Early predictors of male delinquency: A review. Psychological Bulletin, 94, 68-99.
Lohmeier, J. H., & Lee, S. W. (2011). A school connectedness scale for use with adolescents. Educational Research and Evaluation, 17, 85-95.
MacMahon, K. M. A., Jahoda, A., Espie, C. A., & Broomfield, N. A. (2006). The influence of anger arousal level on attribution of hostile intent and problem solving capability of an individual with mild intellectual disability and a history of difficulties with aggression. Journal of Applied Research in Intellectual Disability, 19, 99-108.
Messick, S. (1989). Validity. In R. L. Linn (Ed.), Educational measurement (3rd ed., pp. 13-103). New York, NY: Macmillan.
Messick, S. (1990). Validity of test interpretation and use. Princeton, NJ: Educational Testing Service.
Messick, S. (1995). Validity of psychological assessment: Validation of inferences from person’s responses and performance as scientific inquiry into score meaning. American Psychologist, 50, 741-749.
Messick, S. (1998). Test validity: A matter of consequences. Social Indicators Research, 45, 35-44.
McArdle, J. J. (1996). Current directions in structural factor analysis. Current Directions in Psychological Science, 5, 11-18.
Nas, C. N., De Castro, B. O., & Koops, W. (2007). Social information processing in adolescents. Psychology, Crime, and Law, 11, 363-375.
Nelson, D. A., & Crick, N. R. (1999). Rose-colored glasses: Examining the social information-processing of prosocial young adolescents. The Journal of Early Adolescence, 19, 17-38.
Nelson, E., Leibenluft, E., McClure, E. B., & Pine, D. S. (2005). The social re-orientation of adolescence: A neuroscience perspective on the process and its relation to psychopathology. Psychological Medicine, 35, 163-174. doi:10.1017/S0033291704003915
Nigoff, A. (2008). Social information processing and aggression in understanding school violence: An application of Crick and Dodge's model. In T. W. Miller (Ed.), School violence and primary prevention (pp. 79-93). New York, NY: Springer New York. doi:10.1007/978-0-387-77119-9_5
Ojanen, T., Grönroos, M., & Salmivalli, C. (2005). An interpersonal circumplex model of children's social goals: Links with peer-reported behavior and sociometric status. Developmental Psychology, 41, 699.
Ojanen, T., Aunola, K., & Salmivalli, C. (2007). Changing goals according to changing situations? Connections between children's situation-specific goals and their social adjustment. International Journal of Behavioral Development, 31, 232-241.
Orobio de Castro, B., Veerman, J. W., Koops, W., Vosch, J. D., & Monshouwer, H. J. (2002). Hostile attribution of intent and aggressive behavior: A meta-analysis. Child Development, 73, 916-934.
Pert, C., Jahoda, A., & Squire, J. (1999). Attribution on intent and role-taking: Cognitive factors as mediators of aggression with people who have mental retardation. American Journal of Mental Retardation, 104, 399-409.
Pettit, G. S., Lansford, J. E., Malone, P. S., Dodge, K. A., & Bates, J. E. (2010). Domain specificity in relationship history, social-information processing, and violent behavior in early adulthood. Journal of Personality and Social Psychology, 98, 190.
Polit, D. F., & Beck, C. T. (2006). The content validity index: Are you sure you know what's being reported? Critique and recommendations.
Research in Nursing and Health, 29, 489-497. doi: 10.1002/nur.20147
Raine, A., Dodge, K., Loeber, R., Gatzke-Kopp, L., Lynam, D., Reynolds, C., . . . Liu, J. (2006). The Reactive-Proactive Aggression (RPQ) Questionnaire: Differential correlates of reactive and proactive aggression in adolescent boys. Aggressive Behavior, 32, 159-171.
Randall, P. (1997). Adult bullying: Perpetrators and victims. London: Routledge.
Renshaw, P., & Asher, S. (1983). Children's goals and strategies for social interaction. Journal of Experimental Psychology, 29, 353-374.
Reynolds, C. R., & Kamphaus, R. W. (2004). Behavior Assessment System for Children (2nd ed.). Circle Pines, MN: American Guidance Service.
Ribordy, S., Camras, L., Stefani, R., & Spaccarelli, S. (1988). Vignettes for emotion recognition research and affect education programs with children. Journal of Clinical Child Psychology, 17, 322-325.
Rico, E. D., Dios, H. C., & Ruch, W. (2012). Content validity evidences in test development: An applied perspective. International Journal of Clinical and Health Psychology, 12, 449-460.
Rojahn, J., Lederer, M., & Tasse, M. J. (1995). Facial emotion recognition by persons with mental retardation: A review of literature. Research in Developmental Disabilities, 16, 393-414.
Rubin, K. H., & Krasnor, L. R. (1986). Social-cognitive and social-behavioral perspectives on problem solving. In M. Perimutter (Ed.), Cognitive perspectives on children’s social behavioral development, Minnesota Symposia on Child Psychology (pp. 1-68). Hillsdale, NJ: Erlbaum.
Runions, K., Shapka, J., Dooley, J., & Modecki, K. (2013). Cyberaggression and victimization and social information processing: Integrating the medium and the message. Psychology of Violence, 3, 9-26. doi:10.1037/a0030511
Salmivalli, C., Ojanen, T., Haanpää, J., & Peets, K. (2005). "I'm OK but you're not" and other peer-relational schemas: Explaining individual differences in children's social goals. Developmental Psychology, 41, 363.
Salmivalli, C., & Peets, K. (2009). Pre-adolescents’ peer-relational schemas and social goals across relational contexts. Social Development, 18, 817-832.
Schippell, P. L., Vasey, M. W., Cravens-Brown, L. M., & Bretveld, R. A. (2003). Suppressed attention to rejection, ridicule, and failure cues: A unique correlate of reactive but not proactive aggression in youth. Journal of Clinical Child and Adolescent Psychology, 32, 40-45.
Schultz, D., Ambike, A., Logie, S. K., Bohner, K. E., Stapleton, L. M., VanderWalde, H., Min, C. B., & Betkowski, J. A. (2010). Assessment of social information processing in early childhood: Development and initial validation of the Schultz test of emotion processing – preliminary version. Journal of Abnormal Child Psychology, 38, 601-613.
Schultz, D., & Shaw, D. S. (2003). Boys’ maladaptive social information processing, family emotional climate, and pathways to early conduct problems. Social Development, 12, 440-460.
Shapka, J. D., & Law, D. M. (2013). Does one size fit all? Ethnic differences in parenting behaviors and motivations for adolescent engagement in cyberbullying. Journal of Youth and Adolescence, 42, 723-738.
Shahinfar, A., Kupersmidt, J. B., & Matza, L. S. (2001). The relation between exposure to violence and social information processing among incarcerated adolescents. Journal of Abnormal Psychology, 110, 136-141.
Shea, V. (1994). Netiquette. San Francisco, CA: Albion Books.
Sireci, S. G. (2013). Agreeing on validity arguments. Journal of Educational Measurement, 50, 99-104.
Slonje, R., & Smith, P. K. (2008). Cyberbullying: Another main type of bullying? Scandinavian Journal of Psychology, 49, 147-154.
Spear, L. P. (2000). The adolescent brain and age-related behavioral manifestations. Neuroscience and Biobehavioral Reviews, 24(4), 417-463.
Steinberg, M. S., & Dodge, K. A. (1983). Attributional bias in aggressive adolescent boys and girls. Journal of Social and Clinical Psychology, 1, 312-321.
Stickle, T. R., Kirkpatrick, N. M., & Brush, L. N. (2009). Callous-unemotional traits and social information processing: Multiple risk-factor models for understanding aggressive behavior in antisocial youth. Law and Human Behavior, 33, 515-529. doi: 10.1007/s10979-008-9171
Tabachnick, B. G., & Fidell, L. S. (2007). Using multivariate statistics. Boston: Pearson.
Tan, C. S. (2007). Test review: Reynolds, C. R., & Kamphaus, R. W. (2004). Behavior Assessment System for Children (2nd ed.). Circle Pines, MN: American Guidance Service. Assessment of Effective Intervention, 32, 121-124.
Tokunaga, R. S. (2010). Following you home from school: A critical review and synthesis of research on cyberbullying victimization. Computers in Human Behavior, 26, 277-287. doi:10.1016/j.chb.2009.11.014
VanOostrum, N., & Horvath, P. (1997). The effects of hostile attribution on adolescents’ aggressive responses to social situations. Canadian Journal of School Psychology, 13, 48-59.
Wade, N. G., Vogel, D. L., Liao, K. Y., & Goldman, D. B. (2008). Measuring state-specific rumination: Development of the rumination about an interpersonal offense scale. Journal of Counseling Psychology, 55, 419-426.
Wallace, P. (1999). The psychology of the Internet. New York: Cambridge University Press.
Weiss, B., Dodge, K. A., Bates, J. E., & Pettit, G. A. (1992). Some consequences of early harsh discipline: Child aggression and a maladaptive social information processing style. Child Development, 63, 1321-1335.
Ybarra, M., & Mitchell, K. J. (2004). Online aggressor/targets, aggressors, and targets: A comparison of associated youth characteristics. Journal of Child Psychology and Psychiatry, 45, 1308-1316.
Yoon, J., Hughes, J., Cavell, T. A., & Thompson, B. (2000). Social cognitive differences between aggressive-rejected and aggressive-nonrejected children. Journal of School Psychology, 38, 551-570.
Yoon, J., Hughes, J., Gaur, A., & Thompson, B. (1999).
Social cognition in aggressive children: A meta-analytic review. Cognitive and Behavioral Practice, 6, 320-331.
Zelli, A., Dodge, K. A., Lochman, J. E., Laird, R. D., & Conduct Problems Prevention Research Group. (1999). The distinction between beliefs legitimizing aggression and deviant processing of social cues: Testing measurement validity and the hypothesis that biased processing mediates the effects of beliefs on aggression. Journal of Personality and Social Psychology, 77, 150-166.
Ziv, Y., & Sorongon, A. (2011). Social information processing in preschool children: Relations to sociodemographic risk and problem behavior. Journal of Experimental Child Psychology, 109, 412-429.
Zumbo, B. D. (1999). A handbook on the theory and method of differential item functioning (DIF): Logistic regression modeling as a unitary framework for binary and Likert-type (ordinal) item scores. Ottawa, ON: Directorate of Human Resources Research and Evaluation, Department of National Defense.
Zumbo, B. D. (2009). Validity as contextualized and pragmatic explanation, and its implications for validation practice. In R. W. Lissitz (Ed.), The concept of validity: Revisions, new directions and applications (pp. 65-82). Charlotte, NC: Information Age Publishing.

Appendices

Appendix A: OSIP Assessment Tool (Original)

Teenagers are currently the largest users of the internet, and spend much of their time online socializing with friends. Given the unique aspects of socializing online (the ability to be anonymous, not having direct contact with people, etc.), we want to know more about how socializing online is related to adolescents’ social relationships and cognitive abilities. The information you will provide is very important for us. This survey is voluntary and your responses are confidential. That is, no one will know your name. If you are unsure about any question, please ask for help. There is no right or wrong answer.
Please read all sections and answer as honestly as you can.

This section is about how socializing online affects adolescents’ social relationships. Please answer all questions imagining that you have become very angry after communicating with a person online (e.g., you have had a fight).

Unless otherwise noted, each item is answered on the following scale: Never (0); Rarely (~10% of the time); Occasionally (~30% of the time); Sometimes (~50% of the time); Frequently (~70% of the time); Usually (~90% of the time); Every time (~100% of the time).

Step 1
1. How much do you pay attention to the words/actions of the person who made you angry online?
2. How much do you pay attention to the unpleasant words/actions of the person who made you angry online?
3. How much do you pay attention to the pleasant words/actions of the person who made you angry online?
4. How much do you remember the person’s words/actions?
5. How often do you go over the interaction in your mind in the same sequence or order that it actually happened (e.g., from beginning to end)?
6. How often do you heavily focus on the last words/actions of the person?
7. How much do you shift your attention away from the situation when the person’s words/actions hurt your feelings?

Step 2
1. How much do you use the following methods to understand a situation when the person makes you angry and hurts your feelings online?
   a. guess and provide relevant information
   b. ask again for clarification
   c. ignore it
2. How much time do you spend thinking about why this social conflict happened to you?
3.
How often do you think the cause of the issue is due to the following factors?
   a. You
   b. The other person
   c. Both of you
   d. Neither of you
4. Do you think that the person’s words/actions were intended to be mean, when his/her words and actions repeatedly hurt your feelings and make you angry online?
5. Do you think that the person who made you angry online is interested in communicating with you?
6. How disliked or rejected would you feel if that person’s words and actions repeatedly hurt your feelings?
7. How disrespected would you feel if the person’s words and actions repeatedly hurt your feelings?
8. Do you think the person who made you angry online is scared or intimidated by you?
9. Do you think the person who made you angry online is annoyed with you?
10. Do you think the person who made you angry online wants to make you look bad?
11. Do you think the person who made you angry online wants to get you to do something for him/her?
12. Do you think the person who made you angry online needs some space?
13. Do you think the person who made you angry online is worried when his/her words and actions repeatedly hurt your feelings?
14. Do you think the person who made you angry online feels bad about you?

Step 3
1. Would you set a goal to get back at the person or get the person in trouble if this happened to you?
2.
Would you set a goal in such a way as to make sure the person knows that you have sufficient social privileges and s/he can’t push you around?
3. Would you set a goal in such a way as to get along with the person?
4. Would you need more information to set your goal?
5. How much time do you need to set your goal?

Step 4
1. Would you call names, insult the provocateur, or try to hurt the person in some other way online?
2. Would you threaten the provocateur, order the provocateur around, or let the provocateur know you are the boss in some other way online?
3. Would you post or talk about the provocateur behind his/her back online, or try to get other people to not make any connection online with him/her?
4. If the provocateur apologized, would you forgive the provocateur for what he/she did to you?
5. Would you harm or try to hurt yourself in some other way?

Step 5
1. How right or wrong would it be to get back at the person?
2. If you got back at the provocateur, how much would other people like you if they saw you acting like this?
3. If you got back at the provocateur, would things turn out to be good or bad for you?
4. How easy or hard would it be for you to get back at the provocateur?
5. If you got back at the provocateur, how much would you care if the person got hurt?

Step 6
1. How likely are you to stop communicating because you are too upset/frozen?
2. How likely are you to communicate less, in a cold or distant way?
3. How likely are you to communicate in appropriate ways, as if nothing has happened?
4. How likely are you to communicate more than normal (e.g., treat someone with too much kindness)?
5. How likely are you to act or behave in the same way that the other person (or the person you oppose) treated you?

Appendix B: OSIP Assessment Tool (Based on Expert Evaluation)

Teenagers are currently the largest users of the internet, and spend much of their time online socializing with friends. Given the unique aspects of socializing online (the ability to be anonymous, not having direct contact with people, etc.), we want to know more about how socializing online is related to adolescents’ social relationships and cognitive abilities. The information you will provide is very important for us. This survey is voluntary and your responses are confidential. That is, no one will know your name. If you are unsure about any question, please ask for help. There is no right or wrong answer. Please read all sections and answer as honestly as you can.

This section is about how socializing online affects adolescents’ social relationships. Please answer all questions imagining that you have become very angry after communicating with a person online (e.g., you have had a fight).

Introductory Question
Have you ever heard of, seen (as a witness), or personally experienced an individual repeatedly and intentionally harming or hurting another person’s feelings using the internet, email, or cell phone text messaging?
   a) If no, please go to the next section.
   b) If yes, please read the following.

As you know, students can be engaged in online aggression in different ways. How were you involved in such an event (if applicable, please choose more than one option)?
   a) Listening to the person (e.g., a friend or peer) who was involved
   b) Seeing the other person (e.g., a friend or peer) who was involved
   c) Personally experiencing online aggression:
      a. As a victim of another person’s behavior that was hurtful or harmful
      b. As a person who said things to hurt another person’s feelings

Unless otherwise noted, each item is answered on the following scale: Never (0); Rarely (~10% of the time); Occasionally (~30% of the time); Sometimes (~50% of the time); Frequently (~70% of the time); Usually (~90% of the time); Every time (~100% of the time).

Step 1
1. In your last online aggressive engagement, to what extent did you give it your full attention?
2. In your last online aggressive engagement, to what extent did you experience any physical change to your face or body (e.g., your eyes were frozen or stopped moving)?
3. In your last online aggressive engagement, to what extent did you focus on one specific thing that was said or done for a long period of time and forget other things?
4. In your last online aggressive engagement, to what extent did you focus on the socially acceptable things that were said or done (e.g., “LOL” in text messaging, or “Like” on Facebook)?
5. In your last online aggressive engagement, to what extent did you focus on socially unacceptable things that were said or done (e.g., calling someone names on Facebook)?
6. In your last online aggressive engagement, to what extent do you remember the details of what was said or done to you (e.g., all of the emotions and thoughts you had)?
7. In your last online aggressive engagement, to what extent did you remember the whole interaction of the situation in your mind in a logical sequence or the actual order of events?
8.
In your last online aggressive engagement, to what extent did you focus on the last thing that was said or done?
9. In your last online aggressive engagement, to what extent did you get stuck thinking about the situation?

Step 2
1. In your last online aggressive experience, how often have you used the following methods to understand such a circumstance?
   a. Guess without sufficient information
   b. Ask again for clarification
   c. Ignore it
   d. Other methods, please write ___
2. In your last online aggressive experience, how often did you spend time thinking about why this happened to you?
3. In your last online aggressive experience, how often did you think about who caused the problem?
   a. You (e.g., I am the cause of the problem because I said….)
   b. The other person (e.g., she is the cause of the problem because she said…)
   c. Both of you
   d. Neither of you
4. In your real or imagined online fight, how often did you think about the mean things, rumors, or gossip being said or done to…?
   a. You
   b. The other person (the person you oppose)
   c. Others (friends or peers)
5. In your last online aggressive experience, how often were you interested in getting to know the other person better?
6. In your real or imagined online fight, how often do you think … have (has) been socially excluded?
   a. You
   b. The other person (the person you oppose)
   c. Others (friends or peers)
7. In your real or imagined online fight, how often do you think … had been disrespected or dissed?
   a. You
   b. The other person (the person you oppose)
   c. Others (friends or peers)
8. In your last online aggressive experience, how often did you think … had been intimidated or scared?
   a. You
   b. The other person (the person you oppose)
   c. Others (friends or peers)
9. In your last online aggressive experience, how often did you think … had been annoyed?
   a. You
   b. The other person (the person you oppose)
   c. Others (friends or peers)
10. In your last online aggressive experience, how often did you think that … try (tries) to make … look(s) bad?
   a. You … the other person (the person you oppose)
   b. The other person (the person you oppose) … you
   c. Others (friends or peers) … you or the other person (the person you oppose)
11. In your last online aggressive experience, how often did you think about getting … to do something for …?
   a. You … the other person (the person you oppose)
   b. The other person (the person you oppose) … you
   c.
Others (friends or peers) … you or the other person (the person you oppose)
12. In your last online aggressive experience, how often did you think about trying to create some space or distance from the other person?
   a. You … the other person (the person you oppose)
   b. The other person (the person you oppose) … you
   c. Others (friends or peers) … you or the other person (the person you oppose)
13. In your last online aggressive experience, how often have (has) … been worried about what the other person said or did?
   a. You
   b. The other person (the person you oppose)
   c. Others (friends or peers)
14. In your last online aggressive experience, how often have you felt bad about …?
   a. You
   b. The other person (the person you oppose)
   c. Others (friends or peers)

Step 3
1. In your last online aggressive experience, how often did you purposefully try to get back at that person (or the person you oppose) or get the person in trouble?
2.
In your last online aggressive experience, how often did you purposefully try to make sure that person (or the person you oppose) knows that you have sufficient social privileges (e.g., more people like you) and he or she can’t push you around?
3. In your last online aggressive experience, how often did you purposefully try to be nice and get along with that person (or the person you oppose)?
4. In your last online aggressive experience, how often did you purposefully try to hear what that person (or the person you oppose) was saying before giving a response?
5. In your last online aggressive experience, how often did you purposefully try to extend your time to think about the circumstance, to clarify your purpose in having this type of social networking, before giving a response?
6. In your last online aggressive engagement, how often did you purposefully try …?
   a. To be respected and admired by the other person (or the person you oppose)
   b. To have self-confidence and make an impression on the other person (or the person you oppose)
   c. To be viewed as a smart person
   d. To say exactly what you want
   e. To have your opinion heard
   f. To state your opinion plainly
   g. To be able to tell the other person (or the person you oppose) how you feel
   h.
To feel close to the other person (or the person you oppose)
   i. To feel good (both of you, or you and the person you oppose)
   j. To put the other person (or the person you oppose) in a good mood
   k. To develop a real friendship between the two of you (you and the person you oppose)
   l. To be liked by your peers
   m. To be accepted by the other person (or the person you oppose)
   n. To be invited by the other person (or the person you oppose) to join in a social event
   o. To agree with the other person (or the person you oppose) about things
   p. To let the other person (or the person you oppose) decide
   q. Not to be angry because of the other person (or the person you oppose)
   r. Not to make the other person (or the person you oppose) angry
   s. To be able to please the other person (or the person you oppose)
   t. Not to annoy the other person (or the person you oppose)
   u. Not to say stupid things when the other person (or the person you oppose) is listening
   v. Not to be laughed at by the other person (or the person you oppose)
   w. Not to make a fool of yourself in front of the other person (or the person you oppose)
   x. Not to show your feelings in front of others (or peers)
   y. Not to give away too much about yourself
   z.
To keep your thoughts to yourself
aa. To keep others at a suitable distance
bb. Not to let anyone get too close to you
cc. Not to show that you care about them
dd. To get the other person (or the person you oppose) to agree to do what you suggest
ee. To get to decide how to hang out (e.g., playing a game with friends online)
ff. To do what the person (or the person you oppose) says or suggests
Step 4
1. In your last online aggressive engagement, how likely would you be to try to hurt that person (or the person you oppose) in some other way because of what he or she said or did?
2. In your last online aggressive engagement, how likely would you be to try to threaten, order, or let the other person (or the person you oppose) know you are the boss in some other way?
3. In your last online aggressive engagement, how likely would you be to say or do something about that person (or the person you oppose) behind his/her back (e.g., in a private message) to get other people to not make any connection with him/her online?
4. In your last online aggressive engagement, how likely would you be to forgive the person (or the person you oppose) for what s/he did to you, if the person apologized?
5. In your last online aggressive engagement, how likely would you be to hurt yourself in some other way because of what that person (or the person you oppose) said or did?
6.
In your last online aggressive engagement, how likely would you be to decide to stop talking to that person (or the person you oppose)?
7. In your last online aggressive engagement, how easy or hard would it be for you to talk with your parents to get help?
Response scale: Extremely hard / Very hard / Slightly hard / Neutral / Slightly easy / Very easy / Extremely easy
8. In your last online aggressive engagement, how easy or hard would it be for you to talk with your teacher to fix the problem?
Step 5
1. In your last online aggressive engagement, how right or wrong would it be if you got back at the other person (or the person you oppose)?
Response scale: Extremely wrong / Very wrong / Slightly wrong / Neutral / Slightly right / Very right / Extremely right
2. In your last online aggressive engagement, how much would other people like you if you were acting like that person (or the person you oppose)?
Response scale: Far too little / Too little / A bit little / About right / A bit much / Too much / Far too much
3. In your last online aggressive engagement, how likely was it that things would turn out good or bad for you?
Response scale: Extremely bad / Very bad / Slightly bad / Neutral / Slightly good / Very good / Extremely good
4. In your last online aggressive engagement, how easy or hard would it be for you to get back at the other person (or the person you oppose)?
Response scale: Extremely hard / Very hard / Slightly hard / Neutral / Slightly easy / Very easy / Extremely easy
5. In your last online aggressive engagement, how much would you care if the other person's (or the person you oppose) feelings had been hurt?
Response scale: Never (0) / Rarely (~10% of the time) / Occasionally (~30%) / Sometimes (~50%) / Frequently (~70%) / Usually (~90%) / Every time (~100%)
6.
In your last online aggressive engagement, how right or wrong would it be for you to gossip to your peers (in a private chat) to make the other person (the person you oppose) look bad?
Response scale: Extremely wrong / Very wrong / Slightly wrong / Neutral / Slightly right / Very right / Extremely right
7. In your last online aggressive engagement, how easy or hard would it be for you to talk with your parents to get help?
Response scale: Extremely hard / Very hard / Slightly hard / Neutral / Slightly easy / Very easy / Extremely easy
8. In your last online aggressive engagement, how easy or hard would it be for you to talk with your teacher to fix the problem?
Step 6
1. In your last online aggressive engagement, how likely were you to act or behave in a way to stop communicating with that person (or the person you oppose)?
Response scale: Never (0) / Rarely (~10% of the time) / Occasionally (~30%) / Sometimes (~50%) / Frequently (~70%) / Usually (~90%) / Every time (~100%)
2. In your last online aggressive engagement, how likely were you to act or behave in a cold or distant manner with that person (or the person you oppose)?
3. In your last online aggressive engagement, how likely were you to act or behave as though nothing had happened between the two of you (or you and the person you oppose)?
4. In your last online aggressive engagement, how likely were you to continue talking with that person (or the person you oppose) with too much kindness?
5. In your last online aggressive engagement, how likely were you to act or behave in the same way that the other person (or the person you oppose) treated you?
Appendix C: OSIP Assessment Tool (Based on Participant Evaluation, Using TAPs)
Online aggression involves an individual being harmed or hurt by another person using the internet, email, or cell phone text messaging. The incident can be minor or major. For the questions below, if you have been involved in an online fight, answer the questions based on that experience. If you haven't been involved in online aggression, imagine that you have been. Try to make it as real as possible by imagining you were part of a fight that you witnessed or heard about.
Please indicate below what you will be focusing on when answering the questions below:
I. An online fight/cyberbullying incident that I was personally involved in:
a. As the recipient
b. As the provocateur
c. As both
II. An online fight/cyberbullying incident that I witnessed, but was not involved in
III. An online fight/cyberbullying incident that I heard about, but didn't see and wasn't involved with
Step 1
1. In your real or imagined online fight, how attentive were you?
Response scale: Never (0) / Rarely (~10% of the time) / Occasionally (~30%) / Sometimes (~50%) / Frequently (~70%) / Usually (~90%) / Every time (~100%)
2. In your real or imagined online fight, to what extent did you experience any physical change to your face or body (e.g., your eyes were frozen or stopped moving)?
3. In your real or imagined online fight, to what extent were you focused on one specific thing for a long period of time and forgot other things?
4. In your real or imagined online fight, to what extent were you focused on positive online slang (e.g., "LOL" in text messaging, "Like" on Facebook, or "Cute" on Instagram)?
5.
In your real or imagined online fight, to what extent were you focused on negative things that were said or done?
6. In your real or imagined online fight, what level of detail do you remember about what was said or done to you?
7. In your real or imagined online fight, to what extent would you remember what was going on at those moments in the actual order it happened?
8. In your real or imagined online fight, to what extent were you focused on the last thing that was said or done to you?
9. In your real or imagined online fight, to what extent was your mind stuck on the situation?
Step 2
1. In your real or imagined online fight, how often did you use the following methods to understand what happened?
a. Guess without sufficient information
b. Ask for clarification
c. Ignore it
d. Ask other people for clarification (or get other people's perspectives)
e. Jump to conclusions
f. Other methods, please write: ___
2. In your real or imagined online fight, how often did you spend time thinking about why this happened to you?
3. In your real or imagined online fight, how often did you think about whether the following were the cause of the problem?
a. You (e.g., I am the cause of the problem because I said …)
b. The other person (e.g., she is the cause of the problem because she said …)
c. Both of you
d. Neither of you
4.
In your real or imagined online fight, how often did you think about the mean things, rumors, or gossip being said or done to …?
a. You
b. The other person (the person you oppose)
c. Others (friends or peers)
5. In your real or imagined online fight, how often were you interested in getting to know the other person better?
6.
a. In your real or imagined online fight, how often did you think you had been socially excluded?
b. In your real or imagined online fight, how often did you think the other person had been socially excluded?
c. In your real or imagined online fight, how often did you think other friends or peers had been socially excluded?
7.
a. In your real or imagined online fight, how often did you think you had been disrespected (dissed)?
b. In your real or imagined online fight, how often did you think the other person had been disrespected (dissed)?
c. In your real or imagined online fight, how often did you think other friends or peers had been disrespected (dissed)?
8.
a. In your real or imagined online fight, how often did you think you had been intimidated or scared?
b. In your real or imagined online fight, how often did you think the other person had been intimidated or scared?
c. In your real or imagined online fight, how often did you think other friends or peers had been intimidated or scared?
9.
a. In your real or imagined online fight, how often did you think you had been annoyed?
b. In your real or imagined online fight, how often did you think the other person had been annoyed?
c. In your real or imagined online fight, how often did you think other friends and peers had been annoyed?
10.
a. In your real or imagined online fight, how often did you try to make the other person look bad?
b. In your real or imagined online fight, how often did the other person try to make you look bad?
c. In your real or imagined online fight, how often did other friends or peers try to make you or the other person look bad?
11.
a. In your real or imagined online fight, how often did you think about doing something for the other person?
b. In your real or imagined online fight, how often did you think about the other person doing something for you?
c. In your real or imagined online fight, how often did you think about other friends or peers doing something for you or the other person?
12.
a. In your real or imagined online fight, how often did you try to create some distance from the other person?
b. In your real or imagined online fight, how often did the other person try to create some distance from you?
c. In your real or imagined online fight, how often did friends or peers try to create some distance from you or the other person?
13.
a. In your real or imagined online fight, how often were you worried about what the other person was saying or doing?
b. In your real or imagined online fight, how often do you think the other person was worried about what you were saying or doing?
c. In your real or imagined online fight, how often do you think other friends or peers were worried about what you were saying or doing?
14.
a. In your real or imagined online fight, how often did you feel bad?
b. In your real or imagined online fight, how often do you think the other person felt bad?
c. In your real or imagined online fight, how often do you think the other friends and peers felt bad?
Step 3
1. In your real or imagined online fight, how often did you purposefully try to get back at that person or get the person in trouble?
2. In your real or imagined online fight, how often did you purposefully try to make sure that person knew that you have sufficient social privileges (e.g., more people like you) and he or she can't push you around?
3. In your real or imagined online fight, how often did you purposefully try to be nice and get along with that person?
4. In your real or imagined online fight, how often did you purposefully try to hear and understand that person before giving a response?
5. In your real or imagined online fight, how often did you purposefully try to think about the purpose of having this type of social networking?
6. In your real or imagined online fight, how often did you purposefully try …?
a. To be respected and admired by the other person
b.
To have self-confidence and make an impression on the other person
c. To be viewed as a smart person
d. To say exactly what you want
e. To have your opinion heard
f. To state your opinion plainly
g. To be able to tell the other person how you feel
h. To feel close to the other person
i. To feel good about yourself
j. To put the other person in a good mood
k. To develop a real friendship between the two of you
l. To be liked by your peers
m. To be accepted by the other person
n. To be invited by the other person to join in a social event
o. To agree with the other person about things
p. To let the other person decide things
q. Not to be angry because of the other person
r. Not to make the other person angry
s. To be able to please the other person
t. Not to annoy the other person
u. Not to say stupid things when the other person is listening
v. Not to be laughed at by the other person
w. Not to make a fool of yourself in front of the other person
x. Not to show your feelings in front of other peers
y. Not to give away too much about yourself
z.
To keep your thoughts to yourself
aa. To keep others at a suitable distance
bb. Not to let anyone get too close to you
cc. Not to show that you care about others
dd. To get the other person to do what you suggest
ee. To get to decide how to hang out (e.g., playing a game with friends online)
ff. To do what the person says or suggests
Step 4
1. In your real or imagined online fight, how much did you try to hurt that person because he or she said or did something you didn't like?
2. In your real or imagined online fight, how much did you try to threaten the other person, or let him or her know that you are the boss?
3. In your real or imagined online fight, how much did you try to say or do something about that person behind his/her back (e.g., in a private message) to get other friends or peers to not make any connection with him/her online?
4. In your real or imagined online fight, how much did you try to forgive the person for what s/he did to you?
5. In your real or imagined online fight, how much did you try to hurt yourself in some other way because of what the other person said or did?
6. In your real or imagined online fight, how much did you try to stop talking to the other person?
7. In your real or imagined online fight, how much did you try to talk with your parents to get help?
Response scale: Never (0) / Rarely (~10% of the time) / Occasionally (~30%) / Sometimes (~50%) / Frequently (~70%) / Usually (~90%) / Every time (~100%)
8. In your real or imagined online fight, how much did you try to talk with your teacher(s) to get help?
Step 5
1. (Response scale: Extremely wrong / Very wrong / Slightly wrong / Neutral / Slightly right / Very right / Extremely right)
a. In your real or imagined online fight, how right would it be for you to get back at the other person?
b. In your real or imagined online fight, how right would it be for the other person to get back at you?
2. (Response scale: Far too little / Too little / A bit little / About right / A bit much / Too much / Far too much)
a. In your real or imagined online fight, how much would the other person like you if they saw you acting like them?
b. In your real or imagined online fight, how much would you like the other person if you saw them acting like you?
3. (Response scale: Never (0) / Rarely (~10% of the time) / Occasionally (~30%) / Sometimes (~50%) / Frequently (~70%) / Usually (~90%) / Every time (~100%))
a. In your real or imagined online fight, how likely was it that things would turn out well for you?
b. In your real or imagined online fight, how likely was it that things would turn out well for the other person?
4. (Response scale: Extremely hard / Very hard / Slightly hard / Neutral / Slightly easy / Very easy / Extremely easy)
a. In your real or imagined online fight, how easy would it be for you to get back at the other person?
b. In your real or imagined online fight, how easy would it be for the other person to get back at you?
5.
(Response scale: Never care (0) / Rarely (~10% of the time) / Occasionally (~30%) / Sometimes (~50%) / Frequently (~70%) / Usually (~90%) / Always care (~100%))
a. In your real or imagined online fight, how much did you care if the other person's feelings had been hurt?
b. In your real or imagined online fight, how much do you think the other person cared if your feelings had been hurt?
6. (Response scale: Extremely wrong / Very wrong / Slightly wrong / Neutral / Slightly right / Very right / Extremely right)
a. In your real or imagined online fight, how wrong would it have been for you to gossip to your peers about the other person?
b. In your real or imagined online fight, how wrong would it have been for the other person to gossip to other peers about you?
7. (Response scale: Extremely hard / Very hard / Slightly hard / Neutral / Slightly easy / Very easy / Extremely easy)
a. In your real or imagined online fight, how hard was it for you to talk with your parents to get help?
b. In your real or imagined online fight, how hard do you think it was for the other person to talk with his or her parents to get help?
8.
a. In your real or imagined online fight, how hard was it for you to talk with your teacher(s) to get help?
b. In your real or imagined online fight, how hard do you think it was for the other person to talk with his or her teacher(s) to get help?
Step 6
(Response scale: Never (0) / Rarely (~10% of the time) / Occasionally (~30%) / Sometimes (~50%) / Frequently (~70%) / Usually (~90%) / Every time (~100%))
1. In your real or imagined online fight, how likely were you to stop communicating with that person?
2. In your real or imagined online fight, how likely were you to act or behave in a cold or distant manner towards that person?
3.
In your real or imagined online fight, how likely were you to act as though nothing had happened between the two of you?
4. In your real or imagined online fight, how likely were you to continue being kind to that person?
5. In your real or imagined online fight, how likely were you to act or behave in the same way that the other person treated you?
Appendix D: OSIP Expert Invitation Letter
Dear Expert Reviewer,
As a graduate student conducting a thesis entitled "Development and Validity Assessment of a Measure to Explore Social Information Processing within an Online Context among Adolescents" under the supervision of Dr. Kadriye Ercikan and Dr. Jennifer Shapka at the University of British Columbia, I would like to ask your assistance in serving as a construct validity reviewer for the development of a measure of Social Information Processing in an online context. The goal of my study is to examine the psychometric properties of the Online Social Information Processing (OSIP) measure in relation to social information processing theory, proposed by Crick and Dodge (1994).
By conducting this research project, I hope to refine the instrument items and gather evidence of the instrument's psychometric properties. In order to do that, I will evaluate the validity of interpreting scores as indicators of individuals' social information processing, which involves evaluating whether items in a scale are relevant and measure all aspects of the construct. To this end, expert reviewers are a powerful resource for ensuring validity. If you agree to take part, I will send you an email containing a brief rating form.
The form asks you to rate each item based on its relation to the underlying construct of social information processing patterns, to rate each item's level of clarity, and to provide suggestions for revisions. The form also asks about potential omissions from the measure. In total, the form will take about 15-20 minutes to complete. Once you have completed the form, I will ask you to return it to me electronically. Upon the completion of my thesis project, and if you are interested, I will be happy to provide you with a summary of the results and a copy of the final instrument.
Thank you so much for considering my request for assistance with my thesis project. Please let me know your decision to participate at your earliest convenience. Please feel free to contact me should you have any questions regarding the validity evaluation or my project.
Sincerely,
Rose Maghsoudi
MA candidate, Department of Educational and Counselling Psychology, and Special Education, Faculty of Education, University of British Columbia
Department of Educational and Counselling Psychology, and Special Education
The University of British Columbia, Faculty of Education
2125 Main Mall, Vancouver BC Canada V6T 1Z4
Tel 604-822-0242 Fax 604-822-3302
www.ecps.educ.ubc.ca
Appendix E: OSIP Expert Evaluation Form
INSTRUCTION: This form is designed to evaluate the construct validity of an instrument measuring Online Social Information Processing (OSIP).
The OSIP is a self-report scale that measures how adolescents process social information in an online context, using a 7-point Likert scale, with a particular focus on aggressive communication.
There are six subsections of the OSIP, pertaining to each of the six steps of the Social Information Processing (SIP) Model (Encoding, Interpretation, Goal Clarification, Response Construction, Response Evaluation, and Enactment).
In the space provided by each item, please provide any feedback you have about the wording of the item and whether it fits within the SIP model. At the end of each section, there is a space for additional comments about any omissions that you notice.
Thank you for your time.
Evaluation table columns: # / Encoding / Feedback about wording and theoretical fit
Appendix F: OSIP Parent Information Letter and Consent Form
Dear Parent/Guardian,
Your child has been invited to participate in a research study conducted by the University of British Columbia that will be occurring in the next couple of days. Please take some time to read more about this study.
Principal Investigators: Dr. Kadriye Ercikan, Professor, and Dr. Jennifer Shapka, Associate Professor, in Educational and Counselling Psychology, and Special Education at UBC, are in charge of this project.
Purpose: Teenagers are currently the largest users of the Internet, and spend much of their time online socializing with friends. Given the unique aspects of socializing online (the ability to be anonymous, not having direct contact with people, etc.), we want to know more about how socializing online is related to adolescents' social relationships and cognitive abilities. This pilot study involves having your child complete a questionnaire during class time about his or her computer and Internet use. After completing the questionnaire, your child will be asked a few questions, for example, to rate the clarity of each question. The questionnaire, in general, will take about 30-45 minutes to complete.
Questionnaires will be completed during class time, and there are no known risks associated with this study; however, should your child feel uncomfortable, he/she has the right to withdraw from the study at any time without any penalty.
Confidentiality: Every effort will be made to ensure the confidentiality of the participants. It is important to note that no identifying information will be collected and that all data collected will be kept securely. In addition, all files will be password protected and will be accessible only to the core research team.
Contact for information about the study: If you have any questions or desire further information with respect to this study, or to obtain a copy of the questionnaire, please contact Dr. Jennifer Shapka (email: jennifer.shapka@ubc.ca; phone: 604-822-5253), Dr. Kadriye Ercikan (email: kadriye.ercikan@ubc.ca; phone: 604-822-8953), or Rose Maghsoudi at 604-822-3000, or visit our website: http://www.educ.ubc.ca/faculty/shapka/TeenTech.
Contact for concerns about the rights of research subjects: If you have any concerns about your child's treatment or rights as a research subject, you may contact the Research Subject Information Line in the UBC Office of Research Services at 604-822-8598.
Consent: Please return this page to school with your child only if you DO NOT wish your child to participate in any part of this study; otherwise, we will assume you consent to having your child participate. Your child's participation in this study is entirely voluntary, and you may refuse to have him or her participate in this study by returning this form, or have him or her withdraw from the study at any point, without jeopardy to his or her class standing.
Your signature below indicates that you have received a copy of this consent form for your own records and that you DO NOT consent to your child's participation in the study.
I DO NOT consent to my child's participation.
Name of Child (please print): ______________________________________________________
Your Name (please print): ______________________________________________________
Your Signature                                                                   Date
Appendix G: OSIP Student Assent Form
Dear Student,
You are being invited to help us at the University of British Columbia (UBC). Please read the information below to learn more about our study.
Persons in charge of this project: Rose Maghsoudi, graduate student, Dr. Kadriye Ercikan, Professor, and Dr. Jennifer Shapka, Associate Professor, in Educational and Counselling Psychology, and Special Education at UBC, are in charge of this project. This project is part of the Master's thesis of Rose Maghsoudi. If you have any questions, please contact Rose Maghsoudi at 604-822-3000, Dr. Jennifer Shapka (email: jennifer.shapka@ubc.ca; phone: 604-822-5253), or Dr. Kadriye Ercikan (email: kadriye.ercikan@ubc.ca; phone: 604-822-8953), or visit our website: http://www.educ.ubc.ca/faculty/shapka/TeenTech.
Why we are doing this project: We want to learn more about how socializing online is related to adolescents' social relationships and cognitive abilities. To help us learn more, we will be asking you questions about your communications that occur on the internet.
What this project means for you: If you want to take part in this project, then after a few days, we will ask you to fill out a 30-minute questionnaire at school.
Your answers will be kept safe and private: We want to make sure you feel safe answering questions as honestly as you can, so we will be doing all we can to make sure your answers remain anonymous.
The researchers will be the only people who have access to the data. All your answers will be kept private and will not be shared with your parents, friends, or teachers.

Questions or concerns: If you have any concerns about your treatment or rights as a research participant, you may contact the Research Subject Information Line in the UBC Office of Research Services at 604-822-8598.

Consent: Your decision to take part in this study is completely up to you and your parents. This means that if you do not want to take part, or if you change your mind in the middle, you can stop at any time and it will not affect your grades in any way. If you want to help us out and take part in this project, please fill out the form below:

Printed Name of Participant: _________________________________  Grade: ______________
Email address: _______________________________  Cell number: ____________________
Participant Signature: _________________________________  Date: __________________

Appendix H: OSIP Participant Assessment Form

INSTRUCTION: This form is designed to evaluate the construct validity of an instrument measuring Online Social Information Processing (OSIP).

Construct of interest: The OSIP is a self-report scale that measures how adolescents process social information when they encounter aggressive interactions through Information and Communication Technologies (ICTs) and formulate a reaction or response.
Please rate each item as follows:

Column 1: Please indicate the level of clarity of each item on a 4-point scale, where 1 = "Item is not clear", 2 = "Item needs major revisions to be clear", 3 = "Item needs minor revisions to be clear", 4 = "Item is clear".

Column 2: Please rate the level of each item's complexity on a 4-point scale, where 1 = "Not complex at all", 2 = "Slightly complex", 3 = "Somewhat complex", 4 = "Complex", as it relates to the construct of OSIP.

Column 3: Please rate the level of offensiveness of each item in terms of your culture, religion, and ethnic background on a 4-point scale, where 1 = "Not offensive at all", 2 = "Slightly offensive", 3 = "Somewhat offensive", 4 = "Offensive".

Thank you for your time.

Item #    Item Description    Column 1: Clarity (1-4)    Column 2: Complexity (1-4)    Column 3: Offensiveness (1-4)
______    ________________    _______________________    __________________________    _____________________________
