UBC Theses and Dissertations


Single versus optional topics in ESL writing tests Sŏ, Nam-wŏn 1999


SINGLE VERSUS OPTIONAL TOPICS IN ESL WRITING TESTS

by

NAM WON SO

B.A. in Education, Sung Kyun Kwan University, Seoul, Korea, 1987

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF ARTS in THE FACULTY OF GRADUATE STUDIES (Department of Language Education)

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
September 1999
© Nam Won So, 1999

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Language Education
The University of British Columbia
Vancouver, Canada

ABSTRACT

There is a growing number of ESL learners taking writing examinations to enter English universities every year. The purpose of these writing tests is to measure general writing competence rather than content knowledge. To address this purpose, some tests offer ESL students multiple topics to choose from while others include only one topic. To date, research on the effects of topics on student writing has focused primarily on performance, but no hard evidence has been provided in support of one test condition over the other. Moreover, there is less research investigating the process of writing under the two testing conditions. Research on the writing process in a timed-test condition is an important area because it can provide background information about how ESL students reach certain scores.
The present study used a qualitative approach with 22 ESL students to explore how they chose and/or wrote under multiple topic versus single topic test conditions and how they felt about each test condition. Videotapes of the testing sessions and transcripts of the participants' interviews were analyzed to address the research questions, which focused on the time needed for prewriting, the topic selection process, the criteria for topic choice, and attitudes toward each test condition. The findings suggest that while most participants spent more time prewriting for the multiple topic test, they did not consider this a waste of time. They believed that their use of time was productive in that it allowed them to take a great leap forward. Under the multiple topic test condition, the ESL students appeared to read all the topics before choosing. Topic familiarity was the criterion they referred to most often in making a choice. The majority of the ESL students in this study stated that they preferred to have options in a real test situation. These results suggest that offering options in a timed-test situation may help many ESL learners feel more comfortable about taking a writing test, and this comfort may in turn help them display a good representation of their general writing ability.

TABLE OF CONTENTS

Abstract ii
Table of contents iv
List of tables vii
Acknowledgments viii

CHAPTER ONE Introduction 1
1.1 Background of the study 1
1.2 The research problem 2
1.3 Purpose of the study and research questions 5
1.4 Significance of the study 6
1.5 Limitations 7

CHAPTER TWO Literature review 9
2.1 Introduction 9
2.2 Can a timed-writing test be a research subject of the writing process? 10
2.3 Section One: Generic factors affecting student writing 11
2.3.1 Writer characteristics 11
2.3.2 Essay characteristics 12
2.3.3 Rater characteristics 14
2.3.4 Test conditions 15
2.3.5 Topic characteristics 16
2.4 Section Two: Does the topic make a difference? 18
2.4.1 Effects of the nature and type of topic on student writing 18
2.4.1.1 Mode of discourse 18
2.4.1.2 Rhetorical statement 19
2.4.1.3 Content knowledge 20
2.4.1.4 Cognitive demand 22
2.4.2 Effects of a choice of topics versus a single topic on student writing 24
2.4.2.1 Should not offer options 25
2.4.2.2 Should offer options 27
2.4.2.3 Comparison studies between both testing conditions 29
2.4.2.4 Topic choice with options 31
2.5 Summary and need for the present research 32

CHAPTER THREE Methodology 33
3.1 Research design 33
3.2 Research methods 34
3.2.1 Research participants 35
3.2.1.1 High-intermediate level students 36
3.2.1.2 Advanced level students 36
3.2.2 Topics 37
3.2.3 Methods 38
3.2.3.1 Questionnaires 38
3.2.3.2 Observations 39
3.2.3.3 Interviews 39
3.3 Procedures 40

CHAPTER FOUR Results 44
4.1 Prewriting stage 44
4.1.1 Topic selection process 44
4.1.2 Time for prewriting under the two different test conditions 46
4.2 Responses to each test condition 48
4.2.1 Criteria for choice 49
4.2.2 Waste time? 53
4.2.3 Prefer a choice of topics to a single topic or vice versa 55
4.3 Process of writing 61
4.4 Other issues 65
4.5 Summary 67

CHAPTER FIVE Discussion 70
5.1 How did the ESL students manage time in a test situation? 70
5.2 What made the ESL students choose a certain topic? 72
5.3 What were the ESL students' attitudes toward each test condition? 74
5.4 What was the pattern for topic choice? 77

CHAPTER SIX Implications 80
6.1 Cautions 80
6.2 Implications for ESL composition teachers 82
6.3 Implications for test-givers 83
6.4 Implications for researchers 84
6.5 Reflections 84

Bibliography 87
Appendix A: Annual TWE examinees 95
Appendix B: Topics 96
Appendix C: Letter of permission for topics 97
Appendix D: Questionnaires 100
Appendix E: Students' topic selection behaviors 104
Appendix F: Interview questions 113

LIST OF TABLES

Table 4.1. Students' topic selection behaviors 45
Table 4.2. Prewriting time (High-intermediate writing class) 47
Table 4.3. Prewriting time (Advanced writing class) 48
Table 4.4. Criteria for topic choice in the mid-term examination (High-intermediate level) 50
Table 4.5. Criteria for topic choice in the present study 51
Table 4.6. Reasons for not choosing a topic 52
Table 4.7. Topics with students who chose each topic 53
Table 4.8. Number of topics 59
Table 4.9. Test scores (High-intermediate class) 60
Table 4.10. Test scores (Advanced class) 61
Table 4.11. Writing process (High-intermediate class) 63
Table 4.12. Writing process (Advanced class) 64
Table 4.13. Fluency 66

ACKNOWLEDGMENTS

Looking back at the long journey of this project, I see how God has helped me get through every single moment. He has prepared many people to help me accomplish this journey, and without their encouragement and assistance, I would not have been able to finish this project. I would like to give special thanks to Dr. Gunderson and Dr. Belanger, both for their warm support and guidance in helping me find directions for the present research and refine my thoughts about the ESL writing process in a timed-test situation. Many thanks also to the LERC staff, Keith McPherson, Yan Guo, and Mario Lopez for their encouragement and love, and to Ms. Siennicki and Ms. Deupree and their students for their voluntary participation in this project. I would also like to offer my special thanks to Tammy Slater, George Hann, and Chin-Hyun Kim for sharing their expertise and helping me in many ways. Finally, I would like to extend my biggest thanks to my parents and parents-in-law and brothers and sisters for their prayers, and to my beloved wife Hyeon Ju Kim and baby Soo-Min So for sharing every moment of life and believing in me.

CHAPTER ONE
Introduction

1.1 Background of the study

Several years ago, I was asked to write on a single topic for half an hour as part of the TOEFL Test of Written English (hereafter TWE).
Upon opening the TWE test sheet, examinees around me started writing on the assigned topic incredibly quickly, and some of them were already filling a second page with small letters while I was still in the middle of the first. I was very embarrassed. In those days, I had just started learning to write in a second language. Though I was interested in expressing my ideas and concerns in writing in a second language, I lacked confidence in writing English. At the time I took the writing examination, I had only a few ideas on the topic through which I needed to prove my writing ability. As a result, I failed to achieve the satisfactory score I had hoped to send to the department to which I was applying.

Approximately two months after the TWE examination, I took a writing test which was the final exit examination of the ESL course I was taking in Montreal. On that examination, I was given several writing topics with the direction, 'Choose one of the topics below and write a 300-500 word essay in one hour.' More than two of the four topics looked very familiar, so I chose one of them, wrote what I believed was a well-organized essay, and obtained a very good mark.

Reflecting on my experience of writing English essays under these two different conditions, I believe that the latter writing task was relatively better for displaying my writing ability, although there are many variables to be considered. Since that experience, I have always preferred having options on writing tests. It seems too hasty, however, to apply my personal experience and preference to other ESL learners, because researchers have presented different arguments with different findings, and therefore the question of whether or not students should be given a choice of topics in a writing test has not yet been answered.

1.2 The research problem

There are several purposes for implementing language tests, and educational tests in general, in classrooms, schools, districts, and provinces.
Henning (1987) groups the most common purposes of language tests into four categories. First, they diagnose strengths and weaknesses in the learned abilities of the student and provide critical information to the student, teacher, and administrator that should make the learning process more efficient (diagnosis and feedback). Second, they assist in deciding who should be allowed to participate in a particular instructional program (screening and selection). A third use of language tests is to identify a particular performance level in order to place students at an appropriate level of instruction (placement). The last is to provide information about the effectiveness of programs of instruction (program evaluation). The writing test, one of the language tests that has been increasingly adopted in academic institutions and ESL schools (Engelhard, Gordon, & Gabrielson, 1992; Spaan, 1993), is also often given to students with these purposes, which direct instructors, administrators, and curriculum designers to review and decide upon their actions.

Unlike class writing assignments, however, the writing test has a different context. Ruth and Murphy (1988) draw a distinction between class writing assignments and writing tests. In classrooms, instructors often allow students to write about what they are interested in, give examples of appropriate strategies for structuring the essential material of the composition, and sometimes help students revise their essays. Writing tests, on the other hand, often start with the command, 'Write, as directed.' Students are often not allowed to choose their own topic. They must write on an assigned topic and, in some cases, are given a choice of topics. No help is provided for revision or writing strategies. In addition, a strict time restriction is applied so that students must write within a given time; otherwise, they fail the tests.
This type of timed writing test may be used more frequently for placing, screening, and selecting prospective students than for diagnosing and giving feedback. Interestingly, however, academic institutions that have similar purposes for a writing test, that is, screening and placing, tend to manage writing assessment programs differently from each other based on different beliefs. For example, some large scale tests (e.g., the TWE) do not offer students a choice of topics whereas many other tests do, especially university placement tests or entrance tests (e.g., the Language Proficiency Index examination[1]).

With the increased demand for writing tests (Engelhard, Gordon, & Gabrielson, 1992; Hoetker, 1982; Leonhardt, 1985; Spaan, 1993), research has examined many of the factors that can affect students' writing performance, including writers, topics, raters, essay characteristics, and test conditions. Of the variables measured, the topic factor has been the most controversial. In a review of second language assessment, Hamp-Lyons (1990) notes that many researchers have claimed the topic of the question significantly affects content quality and quantity in an essay, while some have found no significant differences among a large number of topics.

[1] The LPI examination is held in British Columbia, Canada, to provide post-secondary institutions with information about an individual student's competency in English. It consists of four parts. One part is an expository essay test which offers six options.
While the research on the effect of topic on writing performance has focused heavily on the many variables that make up the topic or prompt itself, not many studies have examined whether students should be offered a single topic or a choice of topics to help them produce their best writing samples (Chiste & O'Shea, 1988; Ducette & Wolk, cited in Hoetker, 1979; Gabrielson, Gordon, & Engelhard, 1995; Hoetker, 1979; Leonhardt, 1985; Mehrens, 1990, 1976; Polio & Glew, 1996; Ruth & Murphy, 1988; Stiggins, 1982). Some have studied student writers' topic selection processes with options, that is, what they choose, how they choose, and why (Chiste & O'Shea, 1988; Polio & Glew, 1996). The burning issue of providing one topic or optional topics has been vigorously discussed among researchers and professional testers with seemingly no consensus yet, some favoring the single topic (Ducette & Wolk, cited in Hoetker, 1979; Heaton, 1975; Mehrens, 1990, 1976; Ruth & Murphy, 1984; Stiggins, 1982) and others the optional topics (Atwell, 1987; Leonhardt, 1985; Polio & Glew, 1996; White, 1995). Some (Gabrielson, Gordon, & Engelhard, 1995) have reported how teachers see this issue. They state that composition teachers favor providing many options for the sake of fairness to the examinees, as this offers the examinees more opportunities to choose a better topic for themselves. Carlman (1986) indicates how these contradictory perspectives on choice and non-choice conditions become a problem:

This discrepancy between professional testers' practice and teachers' assumptions becomes a problem when individuals fail writing tests that are important for their futures, when they are not admitted to post-secondary institutions or when they are denied scholarships, advanced placements, or even degrees. (p. 40)

As for topic selection with options, research has found no significant relationship between which topic students choose and the scores they receive.
However, as Polio and Glew (1996) and Chiste and O'Shea (1988) point out, most previous attempts using quantitative methods to examine factors affecting student choice of topics provide only statistical figures and often fail to explain why such results occur. Moreover, most research on the effect of topics on writing has neglected to explore the opinions and feelings of student writers themselves toward a single topic versus optional topics, or the process of interaction between topics and writers during writing test sessions.

1.3 Purpose of the study and research questions

The present study is designed to explore how ESL students write under two different test conditions (a single topic versus a choice of writing topics) and their attitudes toward each test condition. Specifically, under the optional topic test condition, it examines what kinds of topics ESL students who have at least a high school education in their home countries choose from among several topics offered, how they choose a topic, and what criteria govern their choices. The study also aims to uncover how ESL students feel when given either a single topic or a choice of topics and how they respond to each test condition in a timed-test context. Because the study explores the opinions of ESL students taking writing tests and the interactions between these students and writing topics, it regards the ESL students' opinions as key elements of evidence. Although the purpose of the present study is not to rigorously investigate the effect of topic on the ESL students' writing performance under the two test conditions, performance will be mentioned in part to complement the research findings. Therefore, the questions that guide this study are:

1. How much time do ESL students spend choosing a topic under the optional topic test condition?
2. How much time do the ESL students spend generating ideas before they start writing an essay, after choosing a topic?
Under the assigned topic condition, how long does it take the ESL students to generate ideas before starting to write an essay?
3. Do the ESL students change their minds after choosing a topic? Do they believe that they waste time?
4. What criteria do they use when choosing or not choosing?
5. What are the ESL students' responses to each test condition? Do they feel the single topic test condition is easier to write under than the optional topic test condition? Why or why not? Or do they not mind either type of test condition?
6. Which test condition do they believe helps them produce a better essay?
7. Do their attitudes influence their writing processes?

1.4 Significance of the study

Since the population of ESL students who take a variety of writing tests (e.g., the TWE) in order to enter academic institutions in North America has been growing gradually[2], and many of these students have different knowledge backgrounds, interests, and cultural experiences, it is of great importance to examine the effect of topic, a critical element in writing tests. Do ESL students believe that they waste time choosing the right topic from among options? Do they read all the topics under the optional topic test condition? Do they choose a short, easy topic, or a more complicated topic that motivates them to write? What criteria do they use when choosing or not choosing a topic? Do they like the topic they choose, or do they regret it at some moment during the testing period? Do their feelings toward each test condition affect their writing process? Do they believe that having options helps them produce a better writing sample or hinders them in presenting their writing ability?

[2] For example, the number of TWE examinees has gradually increased. See Appendix A.
Answers to these questions are important to professional test givers and administrators as well as to ESL composition teachers, because knowing how ESL students go through each step of the writing process gives these stakeholders background information relating to the students' writing scores. It may help professional test givers and administrators offer ESL applicants a better opportunity to display their writing ability. As well, composition teachers can help ESL students reduce their negative attitudes toward either test condition and prepare for each. Insights into what these students think about each test condition and how they go through the writing process are also important to existing knowledge in the field of writing assessment because of their contribution to an understanding of the effect of topic on writing as witnessed by test takers.

1.5 Limitations

To conduct the present study, the researcher set three conditions for the participants: they had to be college-level ESL learners; they had to have oral communication abilities in English; and they had to have academic writing experience in English. First, since the present research explored college-level ESL students' writing under two different test conditions, the participants had to come from an ESL population. Second, due to the importance of the interview sessions, which were expected to provide the most critical information for the study, the students' spoken proficiency had to be good enough to communicate with the researcher in English. Third, the ESL students had to have experience writing academic essays in English so that the researcher could be assured that they already knew how to write an academic essay and how to generate and organize essay ideas before writing on a topic. Therefore, any generalizations should be limited to college-level ESL learners who have academic writing experience and high-intermediate or advanced spoken proficiency.
CHAPTER TWO
Literature review

[W]e know almost nothing about topic variables because the attention of researchers has been devoted almost entirely to issues of rater reliability, while issues of validity have been ignored, as have the other two sources of variation in essay examination results: the students and the topics. (Hoetker, 1982, p. 38; emphasis added)

2.1 Introduction

Research has investigated student writing from very young children to adults in a variety of contexts and has reported many factors affecting writing process and performance. Of these factors, relatively little research has been done on the effects of topic and on students' perceptions of writing topics in a timed-test situation. To provide a basis for understanding the present study and to position it within the world of writing research, the researcher presents the issue of the writing process in a timed-test situation and reviews two bodies of literature[1], relevant first to generic affecting factors and then to a detailed discussion of the topic variables.

This chapter consists of two major sections. The first section looks at the research on generic factors affecting student writing, which include writer, essay, rater, and topic characteristics, and test conditions. Section Two reviews the research on the effects of the nature and type of topic on student writing, to provide detailed information on the topic variable, and on the effects of optional topics on writing process and performance in comparison with those of a single topic. The latter part of Section Two, along with other literature, leads the researcher to the question of whether ESL students should be offered a choice in a timed-test condition.

[1] Since the present study was conducted in a 'test' situation and the 'test' is often aimed at assessing performance, this chapter presents the review of literature in relation to the writing process as well as the writing performance.
2.2 Can a timed-writing test be a research subject of the writing process?

Research on the process of writing has tended to examine student writing over a long period of time, emphasizing multiple revisions. It has reported that students need time to plan, write, revise, and edit to produce a better essay (Horowitz, 1986; Polio & Glew, 1996), and that writing is a way to discover meaning, that is, a process of exploring one's thoughts (Murray, 1980; Zamel, 1982). In raising problematic issues related to the process-oriented approach, Horowitz (1986) reports that some people who adhere to the value of the process approach in teaching writing do not consider examination writing to be real writing. Horowitz (1986) and Hamp-Lyons (1986), however, dismiss this claim and contend that the writing examination is an essential part of student life at university. Hamp-Lyons (1986) goes further, arguing that writing an essay examination, a timed-writing test, is as much a process as any writing task in the writing classroom. To date, however, researchers have not examined this issue much. Considering the little knowledge available on the writing process in timed contexts, Hamp-Lyons (1986) suggests that writing examinations offer the writing researcher rich opportunities to investigate composing processes.

2.3 Section One: Generic factors affecting student writing

Hoetker (1979) argues that there are many sources of influence on students' essay examinations. These sources are often interrelated and therefore make it difficult for researchers to determine which most affects students' writing.
2.3.1 Writer characteristics

Many researchers have examined the student writer as a variable and claimed that a student's writing performance, as well as the writing process, can be affected by the writer himself or herself (Atwell, 1987; Baker & Quellmalz, 1981; Brossell, 1986; Chase, 1968; Faigley, Daly, & Witte, 1981; Fishman, 1984; Freedman, 1983, 1981; Hamp-Lyons, 1990; Hilgers, 1982; Hoetker, 1979; Keech, 1982; O'Shea, 1987; Powers, Cook, & Meyer, 1979; Selfe, 1985, 1984; Shaughnessy, 1977). Some researchers (Baker & Quellmalz, 1981; Freedman, 1981) have hypothesized and found that attitudes toward particular topics can cause students to perform better on some topics than on others. Keech (1982) takes this hypothesis further, saying that similar topics make different cognitive demands on students and that the same topic might be interpreted differently by different students. Keech explains that if a student finds a topic too difficult or boring, the student may not write as well as when fascinated by the topic. Hoetker (1982) reports on Rushton and Young's (1974) study, which shows complicated interactions among essay content, mode, and the social class of the student writer. Rushton and Young (1974) investigate differences between the elaborated and restricted language codes of different social groups (public school students versus factory workers of the same age) and find that the advantage in language codes that public school pupils hold over factory workers on academic topics disappears when they write on technical subjects.

According to Selfe (1984, 1985) and O'Shea (1987), student writers' attitudes toward writing tasks in a timed-test condition can affect the process of writing. They find that apprehensive writers have difficulty extracting ideas about audience and organization from their readings of the topic. This may cause them to read the question carelessly and write off topic, or to skip the prewriting stage because they are afraid of wasting time.
During the drafting stage as well, apprehensive writers take longer to complete writing tasks, pausing often and spending less time on the actual writing (Hayes, 1981). They also concentrate on superficial matters such as spelling and minor sentence changes rather than on organizational considerations. Every writer's performance also varies from occasion to occasion (Hamp-Lyons, 1990). Brossell (1986) provides a good explanation of the effect of the writer on performance when he states that "all writers are influenced in writing assessment by innumerable factors related to background and personality. Elements of culture, gender, ethnicity, language, psychology and experience all bear upon the way different people respond to a writing task" (p. 175). Because of the vast influence of the writer on the writing process and performance, researchers (Chiste & O'Shea, 1988; Kennedy, 1994; Polio & Glew, 1996) have called for adopting qualitative methods to explain how student writers reach certain scores.

2.3.2 Essay characteristics

Hoetker (1979) states that research on essay characteristics (Chase, 1966; Henderson, 1977; Klein & Hart, 1968; Marshall, 1967, 1972; Marshall & Powers, 1969, all cited in Hoetker, 1979) has tended to place heavy emphasis on student writers' poor handwriting and/or the frequency and seriousness of mechanical and usage errors in relation to raters' judgments. These types of essay characteristics have led some researchers to keep investigating whether raters' judgments are influenced by those factors, along with other variables such as topics and the rating environment. Kinzer (1987) examines tenth-grade students' essays on two topics (topic 1, description; topic 2, argumentation) to specify factors which can influence holistic scores.
He divides the affecting factors into two categories, response-based variables (legibility, amount of writing, cohesion, and grammatical and usage errors) and topic-based variables (the task demands of each topic), and finds a statistically significant difference in mean holistic score across the two topics, but somewhat contradictory results for the response-based variables. On topic 1, no statistically significant relationship is found between score and any of the variables (e.g., legibility, proportions of errors). On topic 2, however, all of the variables, especially legibility, are found to have a statistically significant relationship to score. Kinzer concludes that the task demands of the topics are more influential on score than topic-external variables such as legibility.

Freedman (1981), in a study with 64 college students, examines several types of variables, such as the essay itself, the rater, and the environment of the rating, to provide information about essay scoring. Of the three variables, she finds that the essay characteristics contribute most significantly to the scores, and that the raters and the rating environment do not affect the variance in the scores.

Since the inevitable effects of handwriting and neatness can be confounding issues in scoring, some researchers (Markham, 1976; McColly, 1970) recommend typing students' essays before they are rated.

2.3.3 Rater characteristics

Whether or not the raters are qualified and come from similar backgrounds (Charney, 1984; Cooper, 1977; McColly, 1970), whether they are adequately trained to use the criteria employed for scoring rather than their own (e.g., Charney, 1984; Hamp-Lyons, 1990), and whether they become tired, bored, or inattentive (Charney, 1984; Hoetker, 1979) are the main issues researchers have examined when considering the effects of the rater on student writing, mainly in relation to the issue of reliability.
McColly (1970) emphasizes the importance of raters' competence in evaluating student writing, defining competence as scholarship or knowledgeability. McColly comments on raters' competence in relation to validity by arguing that "the more competent the judges of essays are, the more they will agree and the more valid will be their judgments" (p. 150). This competence, McColly says, can be increased through proper training and practice regardless of how knowledgeable readers are. Hamp-Lyons (1990) and Newcomb (1977) also stress the importance of rater training, as raters are likely to respond differently to student essays because of their experiential and cultural backgrounds, discipline, gender, and race, even when they are competent and come from a homogeneous group. Attention to rater training is now a naturally accepted part of any rigorous research on writing assessment or any professional writing assessment. Freedman (1981) emphasizes the importance of rater training by saying that "training changes readers' rating behavior" (p. 253). In an excellent review of the validity of using holistic scoring to evaluate writing, Charney (1984) contends that the raters must be trained to use whatever criteria are chosen for the ratings: "Readers must be trained not to apply their own criteria and must be monitored while they rate the essays because their adherence to the new criteria might slip over time; they may forget the criteria if they become tired" (p. 74). However, Charney does not forget to point out a dilemma: even when raters have been trained, their judgments are strongly affected by superficial characteristics of the student essays, such as handwriting and neatness, and whether raters actually adhere to the criteria established for the judgment is not confirmed by the available evidence.
2.3.4 Test conditions

Test conditions, such as the length of testing time, day-to-day variation, and the testing place, are another strong factor affecting the process and performance of student writing. How much time students should be given to write an essay is a question frequently asked by professional test-givers as well as by composition teachers and academic administrators. Testing time varies with the purposes and types of each test (e.g., 30 minutes, Kroll, 1991; 45 minutes, Freedman, 1981; one hour, Freedman & Robinson, 1982). A Georgia study (cited in Hoetker, 1979) compares essays written under four contrasting conditions: a 30-minute time limit with versus without a choice of topic, and a 45-minute time limit with versus without a choice of topic. It concludes that 30 minutes is too short for anything except showing off, an hour is just about right, and 45 minutes is minimally acceptable. The time limit, however, often causes problems. O'Shea (1987) cites a student's complaint: "Time is probably the biggest obstacle for me to overcome. If I cannot think of anything to write, then I begin looking at the clock. This makes me very tense and blocks my thoughts... I start to run out of time" (p. 291). Regarding the quality of general test conditions, Hoetker (1979) declares that the testing place must be well-lighted, quiet, comfortable, and free from distracting movements, and that the psychological environment must be positive and non-threatening. Clark (cited in Hoetker, 1979) also argues that the psychological environment can influence a writer's anxiety level and willingness to compose. Kincaid (1963) investigates the quality of college freshmen's writing to determine whether a single paper written by a student on a given topic at a particular time constitutes a valid basis for evaluation.
Kincaid finds significant differences between scores on papers by the same student and concludes that variations in the quality of a single student's writing result from variations in the student's day-to-day efficiency.

2.3.5 Topic characteristics

Research on topic effects in writing assessment has concentrated heavily on the quality and difficulty of essay topics and on rater reliabilities, most topics being designed to give students a chance to demonstrate their writing ability (Brown, Hilgers, & Marsella, 1991; Carlman, 1986; Freedman & Robinson, 1982; Gordon, 1986; Hamp-Lyons, 1990; Hoetker, 1979; O'Donnell, 1984; O'Shea, 1987; Spaan, 1993; Troyka, 1984; White, 1988). In order to explore the complex nature of the topic variable in writing assessment, researchers have also examined different aspects of topic characteristics, including mode of discourse, rhetorical statement, content knowledge, and cognitive demand. As many studies indicate, however, determining the effect of topics on writing quality is complicated (Greenberg, 1981; Hoetker, 1982; Huot, 1990; Keech, 1982; Mellon, 1976; Witte, 1992; Witte & Faigley, 1981). Meanwhile, whether students should be given a single topic or optional topics on writing examinations has not been explored much, and no concrete conclusions have yet been produced. Answers might give professional testers and academic administrators greater freedom to implement writing assessment programs based on their experience or on administrative convenience. Since the writing topic is the main focus of the present study, more detailed explanations are given in the following section. It is worth noting, as Hoetker (1979) observes, that all of these factors are interrelated in writing assessment: It's obvious that none of these... factors really operates independently. The writer's attitude and emotional state may be in part a function of the task he finds himself asked to perform.
The quality of a topic cannot be established apart from the characteristics of the examinees who are asked to respond to it. The effectiveness of the training given to the raters is put to the test by essays that are written in a slovenly hand or that deal controversially with matters dear to the rater's heart. And so forth (p. 21). The following section describes the research on the influence of topics on writing, with particular emphasis on the effect of a choice of topics.

2.4 Section Two: Does the topic make a difference?

2.4.1 Effects of the nature and type of topic on student writing

A great number of studies have investigated the effects of topics on the writing performance of mostly native English speakers and some non-native speakers of English. Most researchers have been concerned with the mode of discourse called for by the topic and the nature or extent of the rhetorical context it provides, and some with content knowledge and the cognitive demand of the topic.

2.4.1.1 Mode of discourse

According to Hillocks (1986), the majority of research claims that different modes of discourse entail different degrees of syntactic complexity, with argument and exposition generally involving greater complexity than narrative and expressive writing. Other researchers (Carlman, 1986; Crowhurst & Piche, 1979; Hennig, 1980; Martinez San Jose, 1972; Nietzke, 1972; Watson, 1980) have also argued that argumentative/expository writing produces considerably more mature syntax than narrative/descriptive writing. With regard to the quality of student writing, on the contrary, researchers such as Engelhard, Gordon, and Gabrielson (1992), Kegley (1986), Prater (1985), and Freedman and Pringle (1984) have reported lower ratings for expository than for narrative writing tasks. In a study of a statewide assessment of writing during 1989 and 1990, Engelhard, Gordon, and Gabrielson (1991) examined the influence of mode of discourse on the writing of eighth-grade students.
The results suggested that mode of discourse was a significant predictor of writing quality: narrative writing tasks elicited the highest ratings, followed by descriptive and expository writing tasks, in that order. In a case study with six college-aged ESL students, So (1997) examines the relationship between syntactic maturity and the quality of student writing in different modes of discourse. So finds that mode of discourse has an effect on both syntactic complexity and the quality of student writing. The ESL students received lower ratings in syntactic complexity for a narrative than for an expository writing task, whereas for the quality of their writing the results were the opposite. These contradictory results are in accordance with those for native English students (Crowhurst & Piche, 1979; Engelhard, Gordon, & Gabrielson, 1992; Freedman & Pringle, 1984; Hennig, 1980; Kegley, 1986; Martinez San Jose, 1972; Nietzke, 1972; Prater, 1985; Stewart & Grobe, 1979; Watson, 1980).

2.4.1.2 Rhetorical statement

Many researchers have suggested that rhetorical specifications in essay topics influence students' writing performance (Brossell, 1983; Engelhard, Gordon, & Gabrielson, 1992; Hoetker, 1982; Hoetker & Brossell, 1986; Murphy & Ruth, 1993; O'Donnell, 1984; Oliver, 1995; Prater & Padia, 1983; Quellmalz, Capell, & Chou, 1982; Ruth & Murphy, 1984). In his 1982 paper reviewing the literature on the effects of essay topics on student writing, Hoetker states that students are more likely to produce creative misreadings of a topic that has a lengthy specification, because the skill being tested can become reading comprehension rather than writing ability. Brossell (1983) examined three levels of rhetorical specification of topic: at level one, the topic was described as a briefly stated subject; at level two, as a statement about a subject followed by a short instruction; at level three, as a full scenario.
Brossell found that students produced the highest-rated essays with the level-two topic statement and the lowest with the level-three statement, although the differences were not statistically significant. He concluded that too much or too little specification of rhetorical context can weaken writing quality, and that full rhetorical specification may hinder rather than help writers in an examination setting by inducing them to needlessly repeat the text of the topic and thus waste time. With 624 essays written under different rhetorical specifications by four age levels (seventh-, ninth-, and eleventh-grade students, and college freshmen), Oliver (1995) examined whether rhetorical specification in writing prompts makes a difference and found that while seventh graders tended to respond to simpler topic specifications, ninth graders reacted strongly to more elaborated topics. Eleventh graders utilized rhetorical specification more frequently, while college writers relied on it less. Oliver suggested that specific rhetorical information may be important to students at certain ages for pedagogical as well as assessment reasons.

2.4.1.3 Content knowledge

During the past decades, studies of essay topics have begun to examine whether variations in the content of a topic affect what and how well students write. The common finding is that students with more topic-specific knowledge produce better quality writing than those with less topic-specific knowledge. Most of the research on the effects of content knowledge on student writing has been done with L1 (first language) students (elementary: DeGroff, 1987; McCutcheon, 1986; secondary: Chesky & Hiebert, 1987; Davis & Winek, 1989; Langer, 1984; Newell, 1984; Squire, 1983; college writers: Hilgers, 1982). The most comprehensive studies among these have been conducted by Langer (1984) and Chesky and Hiebert (1987).
Chesky and Hiebert (1987) examined the effects of high and low prior knowledge and of peer and teacher audiences on high school students' writing. In their study, Chesky and Hiebert specifically took into account the relationship between prior knowledge and six writing measures—holistic scores, context-creating statements, essay length, cohesion, T-unit length, and error analysis—to discern which aspects of writing are more influenced by differences in prior knowledge. Overall, the results of the writing measures showed that students with high prior knowledge outperformed those with low prior knowledge, except on T-unit length and error analysis, which revealed no significant difference between the high- and low-prior-knowledge groups. In addition, Chesky and Hiebert administered an affect survey to assess the relationship between students' attitudes toward the writing assignment and the quality of the composition. Not surprisingly, the students in the high-prior-knowledge groups reported that they were more involved in their writing, liked their writing better, and found the writing task much easier than did those in the low-prior-knowledge groups. The L2 (second language) studies to date on the effects of content knowledge on writing have been conducted by Tedick (1990) and Winfield and Barnes-Felfeli (1982). Tedick (1990) investigated the effects of topic variables on writing performance, and the impact of subject-matter knowledge on that performance, with three levels of ESL graduate students: beginning, intermediate, and advanced. Tedick used two writing prompts—one a general topic, the other a field-related topic—to elicit written responses from the students, and found that L2 writers produced qualitatively better writing when provided with a topic that allows them to make use of their prior knowledge.
However, if L2 writers have a limited amount of linguistic knowledge in the L2, their familiarity with the subject matter of a writing task does not provide them with the linguistic knowledge also required to produce quality writing. On the other hand, if L2 writers are capable of producing syntactically complex utterances with fewer errors, their familiarity with the subject matter allows them to demonstrate this capability. (p. 136) In order to understand how students feel about the effects of topic-specific knowledge on their writing, it is worth noting a student's complaint about having to write on unfamiliar topics: If the topic is totally new to the students, they do not have sufficient information. Without sufficient information about the topic, they can hardly write anything on the paper. They begin to be panicky. Since the panic in their minds grows higher and higher, their writing becomes more and more disorganized. Then I am made to write on a topic I could care less about. There is nothing to take our minds off the pressure. When under pressure it is really difficult to try to write about something that is not interesting or important. (O'Shea, 1987, p. 289)

2.4.1.4 Cognitive demand

Many researchers have also investigated how the rhetorical information, mode, and subject matter of writing topics impose cognitive demands on writers, and have found that different types of topics demand different types of responses and thus may result in different scores (Carlson & Bridgeman, 1986; Crowhurst, 1990; Crowhurst & Piche, 1979; Freedman & Pringle, 1980; Greenberg, 1981; Hoetker & Brossell, 1986; Mohan & Lo, 1985; Ruth & Murphy, 1984; Spaan, 1993; Tedick, 1990; Witte, 1987).
In explaining the different results in syntactic complexity and writing quality between an argumentative and a narrative topic, Crowhurst (1990) suggests that the answers may be found in cognitive difficulties and in difficulties associated with insufficient experience and knowledge: The writing of formal argument places heavy cognitive demands on the writer. The organization of argument is more difficult than the chronological order typical of narrative. Generating content is also more difficult. Story writers have great freedom to select appropriate content from an extensive body of experiential knowledge. Relevant content for making an argument is more restricted and less accessible because stored in scattered nodes in memory. Generating content is especially difficult for universal topics or issues of public policy which require specialized knowledge and vocabulary removed from students' usual experience. Finally, argument requires an ability to abstract and to generalize, particularly for universal topics and general audiences. (p. 355) As seen in the previous section, Tedick (1990) also reports that ESL graduate students obtain higher scores on field-related topics than on general ones when allowed to make use of their prior knowledge of the subject matter. In a study of topic difference in ESL writing performance, Spaan (1993) provided 88 college-level ESL students with two sets of four writing prompts consisting of two personal and two impersonal topics and analyzed those prompts in terms of cognitive demand, rhetorical specification, purpose, role, and audience. Spaan found that impersonal topics demanded more cognitive effort on the part of the students, who were required to invent, generate, and evaluate ideas and facts. However, no significant differences in the scores on these two types of topic were found.
Greenberg (1981) combined elements representing high and low cognitive-eliciting content, drawn from psychological research, with high and low experiential content in four different pairings. For example, the instruction "agree or disagree" is considered high cognitive, whereas "describe" is considered low. Greenberg, however, did not find statistically significant differences between topics. It seems safe to say that most researchers contend that different types of topics demand different cognitive efforts from student writers, but score results vary from case to case.

2.4.2 Effects of a choice of topics versus a single topic on student writing

Compared to the research on writing tests that focuses on the nature or type of topic, relatively little research has determined how the number of topics influences the writing process and performance. So far, the question of whether or not students should be offered a choice of writing topics has not been answered. It is interesting, however, to look at the current situation of writing assessment programs. Some large-scale tests (e.g., TWE, the B.C. Provincial Examination, which offers students only one topic to write on in 55 minutes) provide a single assigned topic with no choice, whereas statewide writing tests and many university placement tests in North America (e.g., the Language Proficiency Index test) offer multiple topics to choose from. There has been no clear evidence that either of the two positions is better, especially for ESL students. Moreover, the picture is even bleaker when it comes to investigation of the process of student writing under the two testing conditions. A review of the literature on offering students a choice of topics reveals two groups of researchers holding very different perspectives or beliefs on this issue.

Among the researchers favoring a single topic, there has been a strong argument that "the provision of options may penalize the brighter students, who may choose the more difficult and complex topic and not be able to treat it adequately in the available time" (Mehrens, cited in Hoetker, 1979, p. 41). These researchers claim that students should not be given options because of inter-rater reliability, loss of time, and topic equivalency. In contrast, other researchers believe that students bring different experiences, background knowledge, interests, beliefs, and cultures into testing situations, and that the provision of options would therefore help them display their best writing ability.

2.4.2.1 Should not offer options

Since Meyer (cited in Hoetker, 1979) argued that students should not be allowed options because they lack the ability to choose the option on which they would perform best, this claim has been echoed and further developed by many researchers in the area of writing testing and has become part of the conventional wisdom on the issue. The arguments are generally categorized into three parts. The primary reason for not offering students options in a writing test is reliability. Jacobs, Zinkgraf, Wormuth, Hartfiel, and Hughey (1981) assert: There is no completely reliable basis for comparison of scores on a test unless all of the students have performed the same writing task(s); moreover, reader consistency or reliability in evaluating the test may be reduced if all of the papers read at a single scoring session are not on the same topic. (p. 16) Against Jacobs et al.'s claim, however, Hoetker and Brossell (1986) contend that a variety of writing topics would draw readers' attention to the student essays rather than boring them over a long evaluation period, which results in higher reader reliability.
Another reason is that it is extremely hard to construct optional topics that are equivalent in comprehensibility, difficulty, time requirements, and the like. As Mehrens (1976) points out, this may lead student writers to choose a topic which at first seems to appeal to their interest but is actually more complex and difficult. Such students spend a good amount of time trying to generate ideas because they cannot find suitable arguments, and they may therefore fail examinations that are important for their futures and careers (Freedman, 1983; Kroll, 1991; Ruth & Murphy, 1984, 1988). Hoetker (1979) pinpoints a problem with a single topic, too: there is no basis for assuming that it is any easier to produce a single topic that will be fair to a large and diverse population of examinees. French (1962) indicated this problematic characteristic of a single topic earlier: The composition test is almost like a one-item test. Some students may happen to enjoy the topic assigned, while others may find it difficult and unstimulating; this results in error. (cited in McColly, 1970, p. 152) Since there is often a time limit in a writing examination, time is also an important element in considering whether or not students should be offered options. Kennedy (1994) suggests that one must consider how much time students will give to choosing the topic rather than to actually writing. If it takes students a long time to select a topic, they may not be able to revise or even complete their essays within the time limit. Polio and Glew (1996) report that some researchers (e.g., Heaton, 1975) claim that "forcing students to make a choice simply wastes their time" (p. 38).
However, none of these researchers compares student writing under the single-topic and optional-topic test conditions, and, except for Polio and Glew's work reviewed later in this study, no one has yet published students' opinions on whether choosing a topic in a timed-test condition is a waste of time. All of these claims seem to indicate that all students should write on the same topic so that variance in assessment is due to writing ability rather than to topic. There has been, however, little research done to reveal the effect of options in a writing test.

2.4.2.2 Should offer options

According to Polio and Glew (1996), "the primary reason for offering students a choice of prompts is the belief that students should be allowed to choose a prompt that will enable them to display their best writing" (p. 37). This argument is based on the assumption that a single writing topic will likely force a number of students to scramble for something to say about a subject that is unfamiliar and uninteresting to them. Optional topics, in turn, increase the chance that each student will find a topic on which he or she has something to write (Hoetker, 1979). Gabrielson, Gordon, and Engelhard (1995) provide an explanation: Practitioners who favor introducing examinee choice into testing situations maintain that, due to wide disparities in the background knowledge of students, certain individuals would invariably be penalized by requiring all examinees to respond to the same task... Allowing students to choose tasks would reduce the possibility that they would be required to write on a topic that is unfamiliar. (p. 274) Brossell and Ash (cited in O'Donnell, 1984) found that students writing on the topic of violence in the schools produced essays that were more organized, more sharply focused, and more fluent.
In other words, students living in violent neighborhoods are more likely to produce focused papers about that subject than students of similar language ability who have not experienced that lifestyle. This may explain why the majority of students in Polio and Glew's (1996) study mentioned topic familiarity or background knowledge as the biggest criterion for their choice. Arguments for presenting a choice of topics also come from composition teachers (Hoetker, 1979). These teachers favor providing many options for the sake of fairness to the examinees, and as long as test-givers control for mode, that is, offer a choice of topics within the same mode, such choices would permit a fair test of students' writing ability (Carlman, 1986). Arguing against the timed impromptu essay test for evaluating students' writing ability, White (1995) also points out the problem of a single-topic test: "if a student cannot answer one question on a test with many questions, that is no problem; but if he or she is stumped (or delighted) by the only question available, scores will plummet (or soar)" (p. 41). Among other researchers presenting arguments in favor of optional topics in writing tests, Atwell (1987) contends that students develop a sense of ownership through the process of choice and that this feeling encourages them to write more comfortably under positive testing conditions.

2.4.2.3 Comparison studies between the two test conditions

Given these contrasting perspectives on the two types of testing conditions, many researchers have argued, on behalf of students, for presenting a choice of topics or for assigning only one topic in writing tests. However, very little research has compared one testing condition with the other to find which helps students produce their best writing (Gabrielson, Gordon, & Engelhard, 1995; Hoetker, 1979; Leonhardt, 1985).
In a statewide test examining the effect of task choice on writing scores and its possible interactions with the student characteristics of gender and race, Gabrielson, Gordon, and Engelhard (1995) have eleventh-grade students write a persuasive essay in 90 minutes under two different conditions: an assigned-task condition and a choice-task condition. The authors find that the student attributes of gender (female over male) and race (white over black) have a much greater effect on writing scores than either the choice condition or the single-task variable; the scores of the assigned group appear slightly higher than those of the choice group, but the differences are not significant. Hoetker (1979) reports a Georgia study (Rentz, 1977) comparing essays written under four contrasting test conditions: (1) a 30-minute time limit with no choice of topic; (2) a 30-minute time limit with a choice of topic; (3) a 45-minute time limit with no choice of topic; and (4) a 45-minute time limit with a choice of topic. This study found no significant difference between the test conditions except for the thirty-minute choice condition, under which students obtained significantly lower scores. The results suggest that offering options presumably hinders students when only a short time is given to write, because they spend too much time considering what to write about. Du Cette and Wolk (1976), also mentioned in Hoetker (1979), compared 187 college students' performance on a single topic versus an optional topic in a university course mid-term examination. They found that students given one topic had significantly higher scores than those given options. In an experimental study with college-level ESL students, Leonhardt (1985) investigates the effects of assigning one topic on an essay writing test as opposed to providing an open topic with which students may choose any subject to write on.
Leonhardt finds no difference in the students' mean scores on the two writing tasks and dismisses the negative claims about optional topics as irrelevant, insisting that the topics make no difference. Concurring with Leonhardt, Spaan (1993) also argues that offering a choice of topics is not detrimental to students. After examining two pairs of optional topics with 88 university ESL students in a 30-minute impromptu essay test, Spaan finds no significant difference in the mean scores on the writing tasks and claims that performance does not differ when the topics are different. Spaan also reports that the majority of the students (85%) wished to have a choice of topics, 11% did not want a choice, and 4% had no preference. Summing up the comparison studies, most have found almost no differences in performance between the two test conditions, and there is no solid ground for preferring the single-topic test condition over options, or vice versa. A few researchers have tried to further explore topic selection processes and the criteria for a topic choice.

2.4.2.4 Topic choice with options

Chiste and O'Shea (1988) investigate university-level essays written in a test condition by nonnative and native speakers of English to find patterns of choice by question position and by question length. Students were given four options for a two-hour essay. Chiste and O'Shea find that ESL students heavily favor the first and second questions in each set of four, as well as the shorter questions containing fewer words. In comparison, native English students choose topics spread across the whole set and favor questions of middle length. However, the ESL students' topic choice does not correlate with success on the test: whether or not students prefer specific question types, each topic produces similar essay scores.
Chiste and O'Shea's study is meaningful in providing a foundation for the study of the writing process with options in a timed-test situation—that is, what kind of topic ESL students tend to choose and how their choice is related to their performance. Their study, however, does not offer any explanation of how ESL students choose a feasible topic, even if it is not the best one, or why they choose it. These gaps are addressed by Polio and Glew (1996) in their exploration of ESL students' writing processes. Unlike other studies, Polio and Glew's (1996) research on ESL writing topics provides a qualitative look at the process of task selection in a timed-test condition. The authors ask 26 college-level ESL students to write on one of three topics in half an hour and observe with video cameras the process the students go through. Polio and Glew also hold post-test interviews to explore how the students felt about the prompts and how they made their final decisions during the test sessions. After switching back and forth to choose an appropriate topic, the students chose questions that allowed them to display their best writing ability and claimed that they would have been hindered had they been forced to choose a different topic. The students believed that it was better to have a choice of writing topics (preferably three) on a timed writing test.

2.5 Summary and need for the present research

Studies of factors affecting the writing process and performance, such as the writer, rater, essay, topic, and test conditions, were described in Section One of this chapter. Among these major factors, research has paid little attention to the effects of topic, and to students' perceptions of the topic, in a timed-test condition. Section Two is divided into two subsections. The first dealt with the effects of the nature and type of topic on student writing, to provide grounds for the topic selection in this project and for discussion of the results.
The second half of Section Two reviewed the ongoing arguments among researchers, educators, and teachers about the effects of a choice of topics versus a single topic on writing, and found almost no difference in students' performance between the two topic conditions. At the end of this section, two studies (Chiste & O'Shea, 1988; Polio & Glew, 1996) examining patterns of topic choice, topic selection processes, and criteria for choice were reviewed. These studies provide invaluable explanations of how students view the topics given, choose one of them, and write an essay on the chosen topic, and they may provide some answers to the controversy between proponents and opponents of optional topics in writing tests. The authors, however, investigated this issue only under the optional-topic condition, in which students might not be able to figure out which condition would help them more. The present research therefore aims to provide ESL students with both test conditions, to look into how the students go through the process, and to let them reflect on how they feel about the two different test conditions.

CHAPTER THREE
Methodology

Classical statistical methods have typically been used, but are unable to provide sufficiently detailed information about the complex interactions and behaviors that underlie writing ability. (Hamp-Lyons & Mathias, 1994, p. 50)

In this chapter, the researcher describes the research design and methods chosen for the study. A case study method was adopted to explore the writing process of two groups of ESL students and their opinions of writing tasks in a timed-test situation. The population and the data collection methods, such as interviews, observations, and questionnaires, are also described in detail, followed by the research procedures.
3.1 Research design

Since this study aims to explore individual ESL student writers' writing processes in a timed-test condition and the interactions between their attitudes and practices under each test condition (single versus optional), qualitative research methods were used to look at individual writers in depth, supplemented by quantitative data to provide a broader picture of this issue. Qualitative research is not a new methodology in the field of educational studies and is often used to help understand and explain the meaning people have constructed in context, that is, how they make sense of their world and the experiences they have in the world (Merriam, 1998). The exact use and definition of qualitative research, however, varies from user to user and from time to time (Bogdan & Biklen, 1998). In this study, the researcher understands it as an umbrella term covering a variety of inquiries that help to understand the phenomenon of interest from the research participants' perspectives, not the researcher's, with as little disruption of the natural setting as possible (Merriam, 1998). In this light, this study adopts a qualitative case study approach, which helps the researcher uncover how ESL students go through each step of the writing process, how they interpret each test condition, what goes on in their minds during the testing time, and how their attitudes and activities interact under the two different test situations. In order to analyze the data, this study adopts the inductive analysis method (McMillan & Schumacher, 1993), which is prevalent in qualitative research. In addition, descriptive statistics are used to show how long the ESL students spent choosing and/or planning a topic before starting to write. These statistics lend insight into whether or not offering a choice of topics wastes the students' time, and they also summarize the ESL students' responses to the main questions during the interview sessions.
3.2 Research methods

The researcher conducted a pilot study to develop a clearer understanding of how students choose a topic in a writing test with options and how they feel about the single and optional topic test conditions, to articulate the research questions more clearly, to develop research methods, and to set up effective data collection strategies. Four college-level Korean EFL (English as a Foreign Language) students who were taking intensive English courses in a large urban city in Korea volunteered for the study to examine how well they could perform in written English. They were asked to take two types of writing tests (single and optional topics), each on a different day within a one-week period, and to attend interview sessions immediately following each test. The EFL students felt they spent more time prewriting on the optional topic than on the single topic because of the topic selection process, but most of them believed that this was not a waste of time. Rather, they felt that the optional topic test helped them write a better essay. Topic familiarity was the most popular criterion for a choice, and it appeared that the students preferred a choice condition to a single topic condition. After reflecting on the process and findings of the pilot study, the researcher implemented the present study with ESL students in an English environment between February and March, 1999.

3.2.1 Research participants

The research participants were from two ESL composition classes (high-intermediate and advanced) in an English language institute at a large urban university in British Columbia, Canada. The students' written proficiency levels were determined either by their scores on the institute's placement test or by promotion from lower levels into the advanced or high-intermediate level based on teachers' evaluations. The criteria for selecting students were as follows.
First, because the interview sessions were expected to provide the most critical information for the study, the students' spoken proficiency had to be good enough to communicate with the researcher in English. Second, the ESL students had to have experience writing academic essays so that the researcher could be assured that they already knew how to write an academic essay and how to generate and organize essay ideas before writing on a topic. The ESL students were from varied language and educational backgrounds.

3.2.1.1 High-intermediate level students

Of the 12 students at this level, there were five Spanish speakers, three native speakers of Japanese, two of Taiwanese, and one each of Portuguese and Swiss-German. The students ranged in age from 20 to 37 years old, but the majority were in their late twenties. They had all studied English in their home countries as well as after arriving in Canada. Their length of English study ranged from one to twelve years, along with two months to seven years of English writing experience. At the beginning of this study, the length of the students' exposure to a predominantly English-speaking society ranged from two to six months. The number of other languages they spoke, in addition to English and their native language, ranged from zero to two. Only four students had experience taking a formal writing test (three the TWE and one the Cambridge English Proficiency Test), but all of them had taken at least one in-class writing examination for the purpose of course grading before participating in this study. Their educational backgrounds also varied from high school diplomas to master's degrees in such fields as psychology, language and literature, law, business administration, and engineering. There were seven female and five male students.
The main goals of the high-intermediate writing course are to teach writing skills, mainly the academic writing required for university and college; to practice process writing, including planning, drafting, and revising; and to complete a portfolio of short essays.

3.2.1.2 Advanced level students

There were six females and four males in this class, including four Korean speakers, three Spanish speakers, and one each of Japanese, French, and Mandarin speakers. The age of the students ranged from 22 to 29 years old, with an average age of 25. They had all studied English in their home countries as well as after arriving in Canada. Their length of English study ranged from one to ten years. At the beginning of the study, they had been exposed to an English environment for between two and ten months and had been learning academic English composition for two to four months. The number of languages they spoke, in addition to English and their native language, ranged from zero to one. Of these students, three had taken a formal writing test (TWE), but all of them had experienced in-class writing examinations several times. The main goal of this level is to practise the advanced essay writing required for university and college, focusing on organizing thoughts and doing timed writing.

3.2.2 Topics¹

Writing examination topics were selected from the 112 TWE practice topics prepared by Educational Testing Service (1998). TWE topics were chosen for this study for the following reasons. First, the TWE is an internationally well-known examination for measuring non-native English speakers' writing proficiency, so many international students and professionals who plan to pursue academic studies are generally aware of its importance. Furthermore, the TWE score has been incorporated into the TOEFL score as of 1998. Second, TWE topics call for academic, expository essays that are not content-based (Kroll, 1991).
In other words, they do not require examinees to have any specific content knowledge. Kroll (1991) addresses it as follows:

Hamp-Lyons (1989:6) calls the topics for the TWE "invention-based" in contrast to the term "fact-based" writing. She notes that the former requires writers to create or invent their material by searching their minds and imaginations for ideas, responses, and opinions, while the latter type of writing evolves as a response to such external stimuli as reading, lectures, experiments, discussions, and so on. This latter type of writing is not solicited on the TWE. (p. 22)

¹ See Appendix B for the complete topics used in this study.

Third, TWE topics are constructed with careful consideration of accessibility, the actual or potential sensitivity of the subject matter, and the prevention of misunderstanding of the wording (Kroll, 1991). Last, using TWE topics also benefits the ESL students because practicing the TWE examination in an actual test format assists their plans to proceed to studies at North American universities.

3.2.3 Methods

Observations and interviews were adopted to follow the ESL students' writing processes and to collect information about writing tests from the test takers' perspectives (Murphy & Ruth, 1993). The researcher also used questionnaires to collect background information from the ESL students.

3.2.3.1 Questionnaires

A questionnaire was developed for the general survey (see Appendix D). Besides asking for information about the ESL students such as birthdate, sex, school education, and English learning experience in their home countries and in Canada, it collected their academic writing experience in the L2 and their attitudes toward L1 and L2 writing (whether they liked to write or tried to avoid it whenever possible). It provided the researcher with background information on the research participants, which helped him understand more about their perspectives and attitudes toward writing topics and tests.
3.2.3.2 Observations

Sessions were recorded by two video cameras located in each front corner of the classroom. These video cameras were used to capture the students' behaviors during the writing tests. Heavy emphasis was given to the beginning of each test session because one of the main issues of the present research was to explore whether having a choice of topics causes students to waste more time than having only a single topic. The rest of each test session was also videotaped to see if the students chose a new topic after starting to write on the original choice or spent time moving from topic to topic during the writing tests. The researcher did not try to double-check how much time each student spent choosing a topic. Instead, he tried to keep the classroom as quiet and comfortable as possible (e.g., blocking noise from outside the classroom and answering students' questions) and to note small behavioral differences between students in order to elaborate some interview questions (e.g., what went on in a student's mind when s/he finished writing an essay ten minutes before the ending time?).

3.2.3.3 Interviews

Since this study dealt with students' opinions about the two test conditions and how they went through each writing process, follow-up interviews were of great importance. Twenty-eight semi-structured interview questions (McMillan & Schumacher, 1993) were developed for the interview sessions after each writing test. These covered students' attitudes (1) toward the writing test per se, (2) toward optional writing topics, (3) toward a single writing topic, and (4) toward both test conditions (comparison), and how their attitudes interacted with the process of writing. The interview sessions were held on a one-on-one basis in an attempt to prevent students from being affected by their peers.
In order to avoid influencing the students, the researcher asked the interview questions in the same order as much as possible, but he skipped some questions or added others in cases where he felt it necessary to omit or pursue further questions or to cast ice-breaking questions. Each interview, lasting about fifteen minutes and held in English, was recorded with an audio-cassette recorder.

3.3 Procedures

The research site was chosen based on two criteria. First, the language institution in which this study would be conducted had to have an academic English composition course where ESL students learn the writing conventions required by academic institutions in North America and practice the process of writing to produce better essays. Second, there had to be a sufficient number of students taking a composition course because subject attrition often hinders the quality of research (McMillan & Schumacher, 1993). The researcher intended to hold in-class writing examinations on two separate days to avoid wearying the research participants, and to hold the follow-up interviews after each test session. A student's absence from either of the two tests would hinder the researcher in comparing individual students' actions and attitudes under each test condition. In order to secure enough students, therefore, the researcher was advised by the program coordinator of the language institution to use either the high-intermediate or the advanced writing class, or both classes, in which there were more than ten registered students. Moreover, the institution was a short distance from the researcher's office, which allowed him to travel there often to interview students. Keeping these criteria in mind, the researcher contacted the director of the language institution in British Columbia, Canada, and was given ready consent to undertake the research.
Two composition course teachers volunteered for the research and were willing to allot class time to it. They seemed excited to participate in the research on behalf of their students, many of whom wanted to practice TWE topics in a timed-test context. Since these two teachers taught their classes at the same time, the advanced composition class was arranged to take part in the research first. On the first test day, students were informed of the research and were asked if they would be willing to participate in two practice examinations in a format similar to that of the actual TWE examination. All of the students agreed since the researcher promised to give them feedback upon completion. Once they agreed, the students were informed of their rights as research participants, assured that their real names and personal information would not be revealed for any purpose and that their information would be accessed only by the researcher and the research committee, and asked to read and sign a consent form. While the students were reading the consent form, the researcher handed out the test papers and set up two video cameras, one at each front corner of the classroom, focusing on the students and the test papers. The students were then given final instructions for the writing examination and told to turn over the test paper and start writing. The test paper prepared for the first examination had an instruction sheet and four topics. Each topic was typed on a separate piece of paper, and the pages were stapled together so that the students would read only one question at a time, allowing the researcher to count the time the students spent on each. The examination lasted 30 minutes, as the actual TWE test does. The number of students who chose each topic is reported in Chapter 4. After the first test, the students were interviewed on an individual basis in a separate room just outside the classroom. The interviews were audio-taped and held over a two-day period.
All of the ten students participated in the follow-up interviews, which lasted about 10 to 20 minutes. As the language used in the interview sessions was English, the researcher considered this a factor that could limit the students to very short responses to each question; therefore, he prepared a copy of the pre-established interview questions for the students so that they would avoid misunderstanding the questions and feel more comfortable. The copy was given out at the interview session. Whenever the researcher found it necessary, he skipped or added questions to further explore students' opinions. After completing the first interview, the students were asked to fill out a questionnaire form by the next testing date. The second examination was held on the following day because of the class schedule. This time the students were given an assigned topic. The test and interview procedures were the same as on the previous day. One week after the first research session, the second one, with the other class, a high-intermediate composition class, was held in their classroom. There were originally 12 students willing to participate in the study, but 3 of them did not show up on the second test date. Instead, two new students joined the second test, but one of them did not attend the follow-up interview. In the data analysis, therefore, the data of those who missed any test or interview session were used only to supplement the research findings. This time the interval between the two writing examinations was one week. The rest of the procedures were the same. At the end, the essays were holistically scored by two raters on a six-point scale adapted from the TWE Writing Scoring Guide (1987). The evaluation of the student essays was conducted for the benefit of the students, but the scores were also used to supplement the findings of the present study.
The raters were current graduate students in Teaching English as a Second Language (TESL) who had experience teaching ESL composition and assessing student writing. They had taken training programs in different places where they learned the same holistic scoring techniques and practiced evaluating sample essays using the Writing Scoring Guide by ETS. In order to avoid rater biases arising from impressions of the students' handwriting (Markham, 1976; McColly, 1970), all of the essays were typed, and the authors' names were removed. After the two raters marked each essay individually, the two scores were averaged. That is, the final score of an essay which received scores of 4 and 5, respectively, was 4.5. A third rater was prepared to evaluate the essays in any case where the scores differed by more than one point (e.g., a score of 5 and a score of 3), but no such discrepancies appeared. Rater reliabilities were computed using Cronbach's (1970) alpha coefficient. The inter-rater reliability was .94 for the optional topic test and .90 for the single topic test².

² According to Hamp-Lyons (1990), the score reliability usually achieved on a direct test of writing has been raised to around .80, which is commonly regarded as a satisfactory level for decision-making purposes. Rater reliabilities in this study were higher than that satisfactory level and than those reported in Kroll (1991), .86 to .88. The explanation may lie in the fact that this study examined only a small number of students' essays. If a significantly larger number of students had participated, inter-rater reliabilities might have decreased.

CHAPTER FOUR
Results

The findings of the study are presented in this chapter. First, the ESL students' prewriting behaviors under the two different test conditions are compared.
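As a rough illustration of the scoring procedure described above, the sketch below computes Cronbach's alpha for two raters and applies the averaging and third-rater rules. The function names and the sample scores are our own invention for illustration, not data from the study:

```python
def cronbach_alpha(ratings):
    """Cronbach's alpha for k raters; `ratings` is one list of scores per rater."""
    k, n = len(ratings), len(ratings[0])
    mean = lambda xs: sum(xs) / len(xs)
    # Sample variance (n - 1 denominator), as is conventional for alpha.
    var = lambda xs: sum((x - mean(xs)) ** 2 for x in xs) / (len(xs) - 1)
    totals = [sum(r[i] for r in ratings) for i in range(n)]  # per-essay score sums
    return k / (k - 1) * (1 - sum(var(r) for r in ratings) / var(totals))

def final_score(r1, r2):
    """Average two holistic scores; flag a >1-point discrepancy for a third rater."""
    if abs(r1 - r2) > 1:
        return None  # e.g., scores of 5 and 3 would be referred to a third rater
    return (r1 + r2) / 2

# Hypothetical scores on the six-point scale, one list per rater.
rater_a = [4, 5, 3, 6]
rater_b = [5, 5, 3, 5]
alpha = cronbach_alpha([rater_a, rater_b])
```

An essay scored 4 and 5 averages to 4.5, as in the text, while a 5/3 split would go to the third rater.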
The findings shed light on the issue of time by showing whether students wasted time just reading and choosing topics (objectivity), along with the students' perceptions of whether they wasted time (subjectivity). The ESL students' responses to each test condition are then reported. Finally, their writing processes and other issues that arose are presented.

4.1 Prewriting stage

In this section, the researcher describes the behaviors of the ESL students as they went through the process of topic selection and compares the times for prewriting under the two test conditions. Polio and Glew (1996) argue that examining the amount of prewriting time the students spend before drafting an essay is of great importance, and therefore this comparison is aimed at providing information to judge whether or not time is wasted in choosing a topic compared to a non-choice condition.

4.1.1 Topic selection process

The ESL students were given four topics in the first test. It is interesting to look at how each student went through each writing topic and finally made a choice. There are two typical examples of the prewriting process that the students in this study displayed. The researcher adopts Polio and Glew's table, which clearly shows students' prewriting actions.

Table 4.1. Students' topic selection behaviors

1. Student #1 (High-intermediate writing class)
   Instruction   5 seconds
   Topic #1      6
   Topic #2      7
   Topic #3      10
   Topic #4      15*
   Total         43 seconds

2. Student #5 (Advanced writing class)
   Instruction   10 seconds
   Topic #1      10, 12
   Topic #2      16, 12, 15*
   Topic #3      14, 16
   Topic #4      10
   Total         115 seconds

* See Appendix E for the rest of the students' behaviors.

The numbers in the right column show how much time (in seconds) the students spent on each reading of a topic; a topic with several numbers was read more than once as the student moved back and forth, and the number marked with an asterisk indicates the reading after which that topic was chosen.
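The totals in Table 4.1 are simply the sum of every reading, including re-readings of the same prompt. A minimal sketch, with the reading times transcribed from the table and the variable names our own:

```python
# Reading times in seconds, transcribed from Table 4.1; a topic with several
# entries was read more than once, and the last (starred) reading is the choice.
student_1 = {"instruction": [5], "topic 1": [6], "topic 2": [7],
             "topic 3": [10], "topic 4": [15]}
student_5 = {"instruction": [10], "topic 1": [10, 12], "topic 2": [16, 12, 15],
             "topic 3": [14, 16], "topic 4": [10]}

def selection_time(readings):
    """Total topic selection time: every reading of every prompt, summed."""
    return sum(t for times in readings.values() for t in times)
```

This reproduces the reported totals of 43 and 115 seconds for students #1 and #5.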
In order to calculate topic selection time, the researcher timed the students' reading of each topic and stopped counting at the point at which they actually started writing ideas (the students called them "idea banks") on the test paper, stopped to plan without making notes, or began to write their essays with no planning time. The two students' topic-selecting behaviors presented above were typical of the students in this study: student #1's pattern represented five of the students in the high-intermediate writing class and five in the advanced writing class; student #5's pattern represented seven in the high-intermediate level and five in the advanced one.

4.1.2 Time for prewriting under the two different test conditions

There was no clear line between the time for topic selection and the time for prewriting on the first test with options. That is, in order to choose the best topic, the students had to look at each topic, but while reading each one, they might have tentatively chosen it and planned some ideas in their minds while continuing to compare it with the others until they decided upon the right one. They may have abandoned the original topic and gone on to the next one if a new topic looked more feasible to write on, or confirmed the original choice after reading all the topics. The researcher, however, could not separate these hidden activities from the topic selection time and include them in the prewriting time. In order to compare the length of prewriting time under the two test conditions, therefore, the researcher added up all the time the students spent choosing and prewriting a topic before they started writing an essay (drafting) and regarded it as the entire prewriting time. Table 4.2 shows the comparison. Of the twelve students from the high-intermediate level participating in the first test, three did not take the second test; therefore, these three students' data were not used for comparing prewriting time between the two tests.
Among the nine students taking both tests, three spent more time prewriting on the second test, while six spent more time on the first test. When the topic selection time was eliminated from the total prewriting time, however, the results were the opposite: three spent more time on the first test and six on the second. Overall, the majority of the students at this level used more time to select and prewrite an optional topic.

Table 4.2. Prewriting time (High-intermediate writing class)

Student   Optional topic test                                     Single topic test
          Topic selection   Prewriting        Total              Prewriting
#1        43 sec.           2 min. 14 sec.    2 min. 57 sec.     2 min. 35 sec.
#2        59 sec.           4 min. 29 sec.    5 min. 28 sec.     4 min. 19 sec.
#3        1 min. 42 sec.    4 min. 42 sec.    6 min. 24 sec.     6 min. 41 sec.
#4        1 min. 47 sec.    2 min. 43 sec.    4 min. 30 sec.     6 min. 01 sec.
#5        53 sec.           43 sec.           1 min. 36 sec.     Not available
#6        2 min. 10 sec.    8 min. 50 sec.    11 min.            7 min. 18 sec.
#7        1 min. 21 sec.    8 min. 40 sec.    10 min. 01 sec.    5 min. 33 sec.
#8        1 min. 28 sec.    6 min. 25 sec.    7 min. 53 sec.     Not available
#9        2 min. 02 sec.    1 min. 13 sec.    3 min. 15 sec.     1 min. 15 sec.
#10       1 min. 39 sec.    9 min. 14 sec.    10 min. 53 sec.    Not available
#11       1 min. 14 sec.    4 min. 18 sec.    5 min. 32 sec.     7 min. 01 sec.
#12       53 sec.           4 min. 58 sec.    5 min. 51 sec.     1 min. 42 sec.

None of the students from the advanced level missed either of the two tests. Almost all of these students took longer to prewrite the optional topic. When the time for topic selection was not included, however, half the students spent more time on the single topic test, while the other half spent more time on the optional topic test. These results are similar to those from the high-intermediate writing class.

Table 4.3. Prewriting time (Advanced writing class)

Student   Optional topic test                                     Single topic test
          Topic selection   Prewriting        Total              Prewriting
#1        44 sec.           57 sec.           1 min. 41 sec.     53 sec.
#2        1 min. 24 sec.    18 sec.           1 min. 42 sec.     21 sec.
#3        1 min. 36 sec.    54 sec.           2 min. 30 sec.     40 sec.
#4        21 sec.           28 sec.           49 sec.            1 min. 10 sec.
#5        1 min. 55 sec.    3 min. 41 sec.    5 min. 36 sec.     4 min. 19 sec.
#6        1 min. 23 sec.    58 sec.           2 min. 21 sec.     1 min. 34 sec.
#7        1 min. 24 sec.    4 min. 28 sec.    5 min. 52 sec.     3 min. 28 sec.
#8        36 sec.           45 sec.           1 min. 21 sec.     50 sec.
#9        1 min. 25 sec.    2 min. 43 sec.    4 min. 08 sec.     1 min. 02 sec.
#10       1 min. 37 sec.    3 min. 14 sec.    4 min. 51 sec.     57 sec.

4.2 Responses to each test condition

All of the students said that they were used to having a writing task at least once a week. The students from the high-intermediate level had taken an in-class writing examination once, while the advanced level students had done so several times before participating in the present study. Four students from each of the two classes had experience taking a formal writing test such as the TWE or the Cambridge Proficiency Examination. Therefore, the researcher regarded the students as having had experience writing essays in a timed-test condition. In order to explore whether the students had writing apprehension, which is often claimed to influence students' writing (e.g., Hillocks, 1986), the researcher asked how they felt about writing tests in general. In the high-intermediate writing class, half of the students said they liked writing and felt fine about it, while the other half felt pressure about writing in English. Only three students from the advanced writing class claimed that they felt pressure in writing tests. One student responded, "If it is an exam, I'm nervous and can't concentrate." When the students in both classes were asked about the present test, which was similar to the TWE test, more than half of the high-intermediate level students said that they felt nervous and anxious while taking the test, mainly due to lack of time. One student who felt comfortable with a general writing task said that she was a bit nervous taking this test because she needed a TWE score. She seemed to feel anxious, as if it were a real test.
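The comparison just described (optional-test total = topic selection + prewriting, set against single-test prewriting) can be sketched as follows. The helper names are our own, and the sample rows are transcribed from Table 4.3:

```python
def to_seconds(minutes, seconds):
    """Convert an 'X min. Y sec.' table entry to seconds."""
    return 60 * minutes + seconds

# Three advanced-class rows transcribed from Table 4.3:
# student -> (topic selection, prewriting, single-test prewriting), in seconds.
rows = {
    "#1": (to_seconds(0, 44), to_seconds(0, 57), to_seconds(0, 53)),
    "#2": (to_seconds(1, 24), to_seconds(0, 18), to_seconds(0, 21)),
    "#5": (to_seconds(1, 55), to_seconds(3, 41), to_seconds(4, 19)),
}

def longer_under_optional(student, include_selection=True):
    """Did the student spend more time before drafting under the optional condition?"""
    selection, prewriting, single = rows[student]
    optional_total = selection + prewriting if include_selection else prewriting
    return optional_total > single
```

For student #2, for example, the optional condition takes longer only when selection time is counted (102 versus 21 seconds), mirroring the pattern reported above.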
This student's response was echoed by another student, who felt less pressure precisely because the present test was not the real TWE test. He planned to take the official TWE test in a few months. Many of the students from the advanced writing class did not feel pressured. They said that it was similar to the normal writing tests in class.

4.2.1 Criteria for choice

Before exploring the reasons why the students chose a certain topic on the optional topic test, the researcher inquired into their general behaviors in selecting a topic to examine whether these behaviors were also found in this specific test situation. All of the students in the high-intermediate class had had a mid-term writing examination with options (three topics) a week before the research began, so they seemed to recall their choices easily, along with the criteria, which are listed in Table 4.4.

Table 4.4. Criteria for topic choice in the mid-term examination (High-intermediate level)

Criteria                               Students
Familiarity or background knowledge    9
Interest in topic                      2
Knowledge of appropriate vocabulary    2
Feeling easy                           2
Related to score                       2
Having ideas                           1

* The students were allowed to claim more than one criterion for topic choice if they so desired.

Most of the students acknowledged that topic familiarity or background knowledge was the biggest reason for their topic choice on the mid-term test. Interestingly, among the remaining criteria, two students said that they chose a topic in order to get a better score, no matter whether the topic was familiar, interesting, or easy. This was also claimed by some EFL students in the pilot study. One student commented, "When I take a test, I always think about score. It means I want to choose a topic to get a high score." That students chose a topic depending on their impressions was another reason.
The advanced class students' criteria for topic choice were more or less similar to those found in the high-intermediate writing class: background knowledge or familiarity (four students), feeling easy (two), and interest and vocabulary (one each). However, four students did not have experience taking an optional topic test and therefore could not comment on the issue. When asked about the criteria by which they chose a topic in the present study, eight of the high-intermediate level students pointed to topic familiarity or background knowledge as the most frequent reason (see Table 4.5).

Table 4.5. Criteria for topic choice in the present study

Criteria                               High-intermediate class   Advanced class
Familiarity or background knowledge    8                         5
Interest in topic                      5                         2
Feeling easy                           3                         3
Having ideas                           2                         2
Knowledge of appropriate vocabulary    3                         -

For example, one student chose topic #4 (First impression about a person's character). She said, "Because I am a psychologist and had working experience, I thought I could deal with this topic well. I could easily support evidence from my knowledge and experience." Students' interest in a specific topic also appeared to be a strong factor in the high-intermediate class. In the advanced writing class, the students chose a topic for similar reasons. Topic familiarity was the most important reason they referred to, followed by feeling easy, interest, and ideas, but vocabulary, which was mentioned in the high-intermediate class, was not among the advanced level students' primary concerns in topic selection. While the researcher was conducting the pilot study in January 1999, he found it necessary to ask students why they did not choose the other topics. This question is of great importance in exploring students' minds in selecting a topic. The students can reflect upon their behaviors, and possibly their criteria, and they might be able to clarify the actual reasons for their choices while explaining the reasons that they did not choose the others.
Furthermore, it might help professional testers, administrators, and educators understand what types of topics hinder ESL students from displaying their writing abilities in a test condition. The reasons for not choosing a topic are listed in Table 4.6.

Table 4.6. Reasons for not choosing a topic

Reasons              High-intermediate class   Advanced class
Time pressure        12                        2
Feeling difficult    9                         6
No interest          2                         8
No (or few) ideas    4                         8
Hard to develop      5                         2
Lack of vocabulary   5                         1
Other                1                         2

As reviewed in Chapter Two, the majority of students at the high-intermediate level named time pressure as the biggest obstacle. They seemed to recognize the importance of appropriate time management in a timed-test situation because they did not choose topics which they felt would take them too long to finish. One student commented on this issue, "I didn't choose topic #1 because I had very few ideas...It would take me more time to think and generate ideas, so I chose another one." As well, students' impressions of each topic while reading obviously influenced them, in that they did not choose a topic which they felt to be difficult or complicated to deal with. Vocabulary, idea, and development problems were also frequently reported. The students at the advanced level, however, had different priorities in avoiding certain topics. The most frequently cited reasons were that a specific topic was not interesting enough for them to choose and that they had few or no ideas about a topic. For example, one student mentioned, "I didn't have specific ideas about topic #1 (Parents as the best teacher). It was too vague and too general, so couldn't narrow it down." Feeling that a topic was too difficult was another big obstacle reported by many students. Not surprisingly, as seen in the topic selection criteria, only one student commented that she did not choose a topic because of her lack of vocabulary.
In the advanced level, however, students did not report time pressure as a big issue. Looking at which topics the students chose most often may make the picture of the selection criteria a bit clearer.

Table 4.7. Topics with the students who chose each topic

#1: Parents as the best teacher (wordage: 23)
    High-intermediate: 5 students (Familiarity, Vocabulary, Feeling easy)
    Advanced: 4 students (Feeling easy, Having ideas, Familiarity)
#2: Leader or member (wordage: 36)
    High-intermediate: none
    Advanced: 3 students (Familiarity, Interest)
#3: Education for all or selected (wordage: 38)
    High-intermediate: 5 students (Interest, Familiarity, Feeling easy, Vocabulary)
    Advanced: 3 students (Familiarity, Interest)
#4: First impression about a person's character (wordage: 51)
    High-intermediate: 2 students (Interest, Having ideas, Familiarity)
    Advanced: none

The majority of the students in the high-intermediate class chose topic #1 or #3, and the main reasons were topic familiarity and interest. Most of the students said that they did not choose a topic based on whether the topic was short or easy. In the advanced level, students' topic choice was spread over topics #1, #2, and #3. Topic familiarity was still the biggest reason for choice, but their feelings toward each topic also influenced their selection.

4.2.2 A waste of time?

In order to deal with the time issue, which is often raised by many researchers (Heaton, 1975; Hoetker, 1979; Kennedy, 1994; O'Shea, 1987; Polio & Glew, 1996), several questions were developed. First, the students were asked whether they read all the questions in the optional topic test to choose the right topic or skipped any topics to save time. Other questions were also prepared to compare the actual time the students spent on the choice with their feelings about the time issue. All of the students said that they chose a topic after looking at each topic, which was easily confirmed by observing the videotapes. When they were asked if they chose a topic immediately, nine high-intermediate and nine advanced level students stated that they did so quickly.
Observing the videotapes, however, the researcher found that only five students from each level chose a topic right after reading all the topics, and four from each level read the topics more than once before deciding. The other three in the high-intermediate class said that they changed topics before starting to write. One student said that she selected topic #1 right away, but as she read each topic, she found herself more interested in topic #3, which she felt was easier. For her, topic #1 seemed, she said, to be personal, which would take her more time to think about. The other two first chose topic #2, which was familiar to them, but they found it difficult to develop. One student commented, "People in my country always do group actions. I was a leader of a group in my country and always want to be a leader... I can write from only one side and cannot develop more than one paragraph, so I did not choose #2." One advanced class student who did not choose a topic quickly stated that optional topic tests were always difficult to write. She said that offering a choice of topics made her hesitate and wonder why she had to choose a specific one, and then she always felt regretful about her choice. Overall, in this study, most of the students appeared to believe that they chose their topics very quickly, although many of them obviously spent some time reading back and forth. The researcher asked further if the students felt it was a waste of time to be given a choice of topics. Six students (two high-intermediate and four advanced class students) mentioned that it was a bit of a waste of time, but five of them still preferred having the option. Most of the students did not feel they wasted time because they could select a good topic for themselves. This sentiment is clarified by one student: "It was not a waste of time. You can find a familiar topic. You may lose some time, but after choosing, you can develop much faster and easier.
If you have a single topic, you may have to spend more time what to write about." The time issue is also discussed later in this chapter.

4.2.3 Prefer a choice of topics to a single topic or vice versa

In order to explore the ESL students' reactions to each test condition, the researcher first asked about the first test, the optional topic test. The majority of the students said that they felt topic #1 was the easiest (seven high-intermediate and four advanced level students) and topic #4 the most difficult (seven high-intermediate and five advanced). The major reasons for the former were that it felt easier because it was simple, clear, and more familiar than the others, while the latter was felt to be difficult because the students did not have proper knowledge of the topic or the vocabulary. In selecting a topic, two of the students who regarded #1 as the easiest did, however, choose another topic. They said that #1 was easier, but they argued that they were not the type of people who chose an easy topic; in fact, they chose #3, which they said was more familiar and interesting to write about. Not surprisingly, none of the students chose the topic that they claimed was the most difficult. It was also interesting to look into whether the students liked what they chose or wished they had chosen another one during the process of writing. Almost all of the students readily commented that they did not want to change; they thought they had made a good decision. Some students mentioned that they thought about changing topics while writing: "Even though I realize it were not a good one, I wouldn't be able to change it because of time restriction. I just had to concentrate on the topic." Asked whether they had confidence in writing an essay on the topic they chose, most of them responded positively, with some commenting that the topic chosen was the best among the four, but many of them argued that there would be no difference in their writing performance with any of the topics.
One of the students said, "I like this topic and I didn't change my mind at any point, but I don't think I could produce my best writing with this among the four topics on the test. I can write a similar quality of writing with any topics." When the students were asked how they would have done if they had been assigned only the one topic which they felt to be the most difficult, the responses differed. In the high-intermediate class, ten students agreed that they would have produced lower quality writing within the same time limit; only two students said that the result would be similar. Meanwhile, seven of the ten advanced class students seemed to believe that they would have achieved a similar mark with the hardest topic as with the easiest one, although it might have taken them a longer time to think. This last question was developed to find out whether the students had different perspectives on their writing performance between the right topic they chose and the most difficult one they tried to avoid. If they did not perceive any difference in their writing performance with the two different types of topics, it might not be necessary to offer them a choice of topics. Surprisingly, it appeared that each level of students had a different perception of their writing performance between the easiest and the hardest topics. The researcher then asked about the second test, the single topic test. Questions for the second test were asked after both the optional and single topic tests; therefore, it was expected that the students would compare these two different test conditions. The first question was about the comparison between the single topic and the choice of topics. None of the students¹ from the high-intermediate level rated the test with optional topics as more difficult than the test with a single topic. Half of them felt difficulty in writing on the single topic, which they considered was mainly caused by the no-choice condition.
Many students said that they felt more comfortable when given options because they had more opportunities to choose the right one, but when there was no choice, they felt great pressure: they simply had to write on the topic whether or not they liked it. The other half commented that they did not find any difference between the two test conditions. Four of them said that the single topic (Advertising) was relevant to their majors: two actually had working experience at an advertising company, and another one had practiced the topic in his country. Half of the advanced level students felt the optional topic test was easier than the single topic test, while four students favored the single topic test. These four students found the single topic easier because they knew about the topic and could concentrate only on it. The remaining one said that he found every test to be the same and therefore did not see any difference between the two conditions. The students were then asked if they wished they had had another topic during the second test. Six students from each of the two classes did not want to change the assigned topic, while the others wished they had had a choice, mainly because they were not interested in the topic. Following this series of questions about each test condition, the researcher led the students to compare one condition with the other. All of the students except one in the high-intermediate class preferred to have a choice rather than a single topic if they were able to choose a test condition in the future. Although many of them liked the single topic in this study, they still wanted to have the opportunity for a choice. One of the students claimed, "This time I was lucky, but other cases, I am not sure whether there will be good topics. It's better to have options."

¹ Two of the high-intermediate class students did not attend the second interview session; therefore, there were ten students from each of the two classes for the second interview.
The one student who preferred a single topic test said that he had to think a lot about the topics while reading each one and that he found choosing one a waste of time. Six of the ten advanced class students also wanted to have a choice rather than an assigned topic, for more or less the same reasons as those in the high-intermediate class. Two other students argued that they would prefer a single topic because an optional condition disturbed their concentration. The remaining two students showed no preference between the two test conditions, saying it would depend on the topic. Since many students may have different perspectives on a real versus an experimental test situation, a similar question was designed to find out whether they would like to have optional topics or an assigned topic if it were a real test. The answers from the high-intermediate class were the same as those to the previous question, while more students from the advanced class said they would like to have optional topics on a real test: eight students for the optional topic test, one for the single topic test, and one with no preference. Another question was then put to the students who stated they would like options: how many topics should they have on the test? (See Table 4.8.)

Table 4.8. Number of topics

Number of topics   High-intermediate class   Advanced class
2-3                2                         1
3                  3                         1
3-4                3                         3
4-5                1                         -
5                  -                         1

Most said within the range of two to five topics, but three or four in general. They said that it would be risky to have only two topics, but over five topics would be too many because of the time restriction. In the final question of the comparison between the two tests, the researcher inquired whether there was any relationship between the students' preference (single over optional or vice versa) and their expectations for writing performance.
Seven of the nine high-intermediate level students who liked the optional topic test used in this study said that they would get a better mark on the optional topic test. The other two differed from one another: one said, "I will get a better mark on the single topic because I didn't write well the first one (the optional topic). If I have another test, I will get a better mark on an optional topic test than a single topic test." The other argued that it would depend on the topic; he liked each test topic, so he was not sure on which test he would obtain a better score. Among the six students in the advanced level who preferred an option, one student stated that she would score better on the single topic test although she still wanted an optional topic test: "Because the single topic was related to my major, I had confidence in the subject. But, I think this time I was just lucky." The other students, who chose the single topic condition or showed no preference, expected that they would get a higher score under the condition they preferred. Although evaluating the students' essays was not the primary concern of the present study, the researcher had all of the essays scored, both as compensation to the ESL students for their participation in the study and because the scores might help readers understand more about the relationship between preference and performance. The scores are listed in Table 4.9 and Table 4.10.

Table 4.9. Test scores (High-intermediate class)

                                        Optional topic test          Single topic test
Student   Preference (Expectation      Rater 1  Rater 2  Average    Rater 1  Rater 2  Average
          for better score)
St. #1    Option (Option)              4        4        [4]        5        4        [4.5]
St. #2    Option (Option)              4        3        [3.5]      3        3        [3]
St. #3    Option (Option)              3        3        [3]        4        3        [3.5]
St. #4    Option (Option)              3        3        [3]        3        3        [3]
St. #5    -                            4        3        [3.5]      -        -        -
St. #6    Option (Option)              2        2        [2]        2        3        [2.5]
St. #7    Single (Single)              3        3        [3]        4        4        [4]
St. #8    -                            2        2        [2]        -        -        -
St. #9    Option (No expectation)      3        3        [3]        3        3        [3]
St. #10   Option                       3        3        [3]        -        -        -
St. #11   Option (Option)              3        3        [3]        3        3        [3]
St. #12   Option (Option)              3        3        [3]        3        3        [3]

* Students #5, #8, and #10 did not take the second test, but Student #10 attended the second interview.

Table 4.10. Test scores (Advanced class)

                                        Optional topic test          Single topic test
Student   Preference (Expectation      Rater 1  Rater 2  Average    Rater 1  Rater 2  Average
          for better score)
St. #1    Option (Option)              3        3        [3]        3        3        [3]
St. #2    Option (Option)              4        4        [4]        4        4        [4]
St. #3    Option (Option)              4        4        [4]        3        3        [3]
St. #4    No preference (No pre.)      2        2        [2]        3        3        [3]
St. #5    Single (Single)              4        4        [4]        4        4        [4]
St. #6    Option (Option)              3        3        [3]        3        3        [3]
St. #7    No preference (No pre.)      3        3        [3]        3        3        [3]
St. #8    Option (Option)              4        3        [3.5]      4        3        [3.5]
St. #9    Single (Single)              3        3        [3]        2        2        [2]
St. #10   Option (Single)              5        5        [5]        5        5        [5]

Overall, there appeared to be no significant score discrepancies between the two tests. Except for student #7 of the high-intermediate class, who obtained 3 on the optional topic test and 4 on the single topic test, and students #3, #4, and #9 of the advanced class, who had a one-point score discrepancy between the two tests, all of the students who took both tests achieved very similar scores on each. In this study, therefore, the results did not seem to support the claim that there might be a relationship between the students' preference for one test condition over the other and their performance.

4.3 Process of writing

The writing process generally includes planning or prewriting, drafting, revising, and/or editing. As Polio and Glew (1996) point out, however, writing is not a discrete activity but a recursive process. In this light, it is hard to determine at which moment a writer has planned, drafted, revised, and/or edited. In the first section of this chapter, the topic selection process and time for planning were examined closely to discuss the issue of time as part of the whole writing process.
This section broadens the scope from that narrowly focused aspect of the writing process to the overall writing process of the ESL students, and it also discusses how the students' attitudes toward each test condition interacted with the process of writing on the two tests. In the writing class, the students were taught to generate ideas and organize them before beginning to write an essay, and they often practiced this process during in-class and take-home writing activities. Table 4.11 and Table 4.12 show which steps of the process each student went through or skipped on each half-hour test. Interestingly, almost all of the students in the high-intermediate level wrote idea banks either on the test paper or on a separate exercise sheet, although they knew they did not have enough time to write a complete essay in 30 minutes. Some of them mentioned that they were taught to write idea banks before every writing task and found that doing so helped them produce a well-organized essay. Two students (Students #5 and #12) planned in their minds to save time instead of writing notes. Student #12 spent some time writing idea banks on the first test, following the teacher's instruction. After the test, however, she found the half-hour test to be too short and felt she needed a bit more time to read and change her essay, so on the second test she planned in her mind only and started writing. Student #9 claimed that he started writing immediately on the second test, because he not only recognized the importance of time but also had experience writing on the topic. Referring to his prewriting time in Table 4.2, however, it was apparent that he spent at least a certain amount of time planning before starting to write on the topic.

Table 4.11. Writing process (High-intermediate class)

[This table recorded, for each of the twelve high-intermediate students, check marks indicating whether planning, drafting, and revising (editing) were observed on the optional topic test and on the single topic test; the check-mark grid did not survive extraction. Single topic test data were not available for Students #5, #8, and #10.]
V : Students went through each step of the writing process.
* : These students did not make a list of idea banks during the planning stage but planned in their minds.
** : These students did not conclude their essays when the ending time was called.

In contrast, the majority of the advanced level students had planning time but did not make notes. Many of them considered the main reasons to be the time factor and habitual behaviors: "I didn't write down the ideas because it is a waste of time" (Student #6); "I did not use to write planning" (Student #8). In order to see whether the students skipped making notes to offset the time needed for reading all the topics on the optional topic test, the researcher compared the students' planning behaviors and found that those behaviors were consistent on both of the tests. Like student #9 of the high-intermediate class, students #2 and #4 argued that they just started writing with no plan, but they appeared to spend more time than just reading the topics before starting to write.

Table 4.12. Writing process (Advanced class)

[This table recorded the same check-mark grid for the ten advanced students; the grid did not survive extraction.]
V : Students went through each step of the writing process.
* : These students did not make a list of idea banks during the planning stage but planned in their minds.
** : These students did not conclude their essays when the ending time was called.

As seen in the previous section, many of the students preferred to have a choice rather than an assigned topic, and the main reason was that it gave them more opportunities to select what they would like to write about from among the options.
They commented that, in general, they had confidence in their choice and, as a result, it enabled them to write smoothly. Many of them felt more pressure with the single topic, which made them wish they had had a choice. Contrary to their stated feelings, however, the observations of the videotapes and the evaluations of the essays written by the students demonstrated that the ESL students showed almost no difference in behavior between the two tests. Among the two levels of ESL students who liked one test condition better than the other or who had no preference between the two tests, only two students from each writing class failed to write a conclusion to their essay, and two of the high-intermediate level students did not have any time available for editing on the test which they preferred.

4.4 Other issues

After all of the questions had been asked in the second interview session, the students made general comments on these types of timed writing tests. Eight students in the high-intermediate level mentioned that 30 minutes was not enough time for them to think and organize ideas, provide examples, and then revise. In order to write a good essay in a second language, they said, they would need about one hour. Some students acknowledged the importance of practice and of knowledge of appropriate vocabulary and grammar for writing a better essay. Many of the advanced class students were much more concerned about the topic factor than about the time pressure. "It depends on a topic to write a good essay. If topics are difficult, writing would be very difficult," answered one student to the final question. Examining the fluency of the ESL students' essays under the two test conditions was not the main focus of this study. Nonetheless, the researcher found that essay length might provide some useful information, along with the scores of the student essays, about the relationship between the students' performance and their preference for one test condition over the other. Table 4.13.
Fluency

High-intermediate class                    Advanced class
Student   Length (words)                   Student   Length (words)
          Optional test   Single test                Optional test   Single test
#1        364             455              #1        287             290
#2        222             232              #2        369             300
#3        197             230              #3        314             204
#4        196             235              #4        263             256
#5        227             -                #5        406             405
#6        119             123              #6        237             168
#7        170             254              #7        202             234
#8        116             -                #8        372             305
#9        293             296              #9        229             225
#10       193             -                #10       417             392
#11       167             178
#12       174             219

Looking at the results, the ESL students in the high-intermediate class wrote more fluently in the single topic test than in the optional topic test, while the results appeared to be the opposite in the advanced class.

4.5 Summary

ESL students from two academic writing classes (high-intermediate and advanced level) took part in the present study, taking two writing tests (optional vs. single topic) and attending follow-up interview sessions after each test. The findings presented in this chapter suggest that the two levels of ESL students experienced similar writing processes under the two different test conditions and shared similar perspectives on these two types of tests, but they had different beliefs about each test condition and different writing habits (e.g., making prewriting notes). A brief summary of these findings from the two classes follows.

4.5.1 Prewriting stage

In selecting a topic on the test with options, both levels of ESL students went through similar processes. Two types of selection processes were observed: choosing a topic right after reading through all the topics once, and selecting a topic after reading the topics back and forth. Measuring the time for prewriting under the two different test conditions, the researcher found that the students in the two classes spent more time on the optional topic test than on the single topic test.

4.5.2 Responses to each test condition

During the interview sessions, the ESL students were asked about three major issues: criteria for topic choice, the time factor, and preference for one test condition over the other.
When asked about the criteria on which they based their choice of topic on the optional topic test, the high-intermediate level students pointed out topic familiarity as the biggest reason, followed by topic interest, knowledge of appropriate vocabulary, feeling easy, and having ideas. For the advanced level students, topic familiarity was the most frequent criterion they referred to, followed by feeling easy, topic interest, and having ideas. In addition, the students from the high-intermediate class commented that time pressure and feeling difficult were the big obstacles in choosing a topic, while the advanced level students tried to avoid selecting certain topics that did not interest them or that sparked no (or very few) ideas for writing. The students in this study read all the questions offered in the choice condition even though they recognized the importance of time management in a timed-test situation. When asked if they felt that being given a choice of topics was a waste of time, most of them commented that they did not feel it wasted time because they could select a good topic. Comparing the optional topic test with the single topic one, the majority of the students stated that they would like to have a choice rather than a single topic if they were able to choose a test condition in the future. When asked about the number of topics they would like to have in a choice condition, most said three or four topics, explaining that two topics would be too few and over five would be too many. Exploring the relationship between the students' preference for one condition over the other and their expectations for writing performance, the researcher found that although the majority of the students in the two classes claimed they would get a better score under the condition they preferred, the results did not appear to support any significant relationship between the students' preference and performance.
4.5.3 Process of writing

All of the students had learned about the importance of the writing process for producing a good quality essay before participating in this study. It was interesting to look at whether they followed their teachers' instructions in this timed-test situation. The data collected from the two classes suggested that most of the students went through this writing process in a very short time. Many of them, however, failed to manage the test time: they did not have time to revise their essays, and some students did not even write a conclusion. Interestingly, almost all of the students in the high-intermediate class jotted down ideas before starting to write, while only a few students in the advanced class made notes for the essay, citing the time factor and their habitual behaviors as the main reasons. Many students felt more pressure with the single topic, which made them want to have a choice. It turned out, however, that the ESL students showed almost no difference in behavior between the two tests.

CHAPTER FIVE

Discussion

This chapter discusses the main findings of the study in relation to the research questions as well as the issues dealt with in the review of related literature. First, the ESL students' time management under the two test conditions is discussed in order to examine the traditional argument against the optional topic test, namely, that it is a waste of time to offer students a choice. Second, the ESL students' criteria for choosing or avoiding a topic are reviewed to look into what factors led them to reach a decision. Third, the interactions between the ESL writers and the test conditions are discussed in relation to the whole writing process of each particular writing task. A minor issue relevant to the present study, the pattern of topic choice, is also addressed.

5.1 How did the ESL students manage time in a test situation?

The time factor is one of the traditional arguments against offering students a choice of topics.
Heaton (1975), for example, says that forcing students to choose a topic makes them waste time. In considering whether students should be given options, Kennedy (1994) suggests that attention should be paid to the amount of time given to the topic selection process rather than to the actual writing. Polio and Glew (1996) examine ESL students' topic selection processes to deal with the time issue and conclude, based on their observation, that the students did not spend very much time making a choice, but the authors do not provide any explanation of how the students felt about the time. Therefore, the researcher planned to explore the time issue by observing the prewriting processes of the ESL students under the two test conditions and by inquiring into the students' opinions. As described in the previous chapter, the majority of the ESL students in this study spent more time prewriting, which includes topic selection time, on the optional topic test. They read all the optional topics before choosing one, and they argued that they chose a topic immediately, seemingly recognizing the importance of time. Asked a direct question about the choice condition in relation to time, the majority believed that they did not waste time because they were able to select a good topic:

I don't think it was a waste of time. Even though it took me more time to read all the topics, I could write very fast on the topic I chose, because I knew the topic much better than the others. On the single topic test, I spent more time... and did not have any editing time. This time I really did not have any ideas on the topic. While writing, I wished I had been able to choose another one. (Student #1: Advanced writing class)

The ESL students' topic choice was also affected by the time factor. This was most apparent in the high-intermediate class. Most of the students stated that they avoided topics which they felt would take them too long to finish writing.
They felt the time pressure most when they encountered a topic about which they had only a few ideas. In the advanced class, many students did not mention the time factor as a big issue but found the biggest obstacle to be a lack of ideas. This suggests that they tended to choose a topic about which they had many ideas so that they could finish an essay within the time limit. Some students asserted that the time factor influenced their writing process. They had already learned the importance of the writing process for producing a good quality essay and had practiced the writing process regularly in their writing classes. However, to save time in the present research, they did not follow the teachers' instructions and the normal practices of the writing classes. Interestingly, comparing the ESL students' behaviors between the two test conditions, the researcher found that almost all the students who argued that they did not follow the normal practices in this project displayed the same planning behaviors in the two test situations. That is, students who made planning notes in the optional topic test also made planning notes in the single topic test. Looking at these findings, it can be inferred that in a timed-test condition the ESL students' planning behaviors are affected not only by the time pressure but also by their writing habits.

5.2 What made the ESL students choose a certain topic?

As indicated by the majority of the students from both writing classes when asked about their criteria for choice, topic familiarity or background knowledge appeared to be the most popular reason ESL students refer to. Studies (e.g., Chesky & Hiebert, 1987; Tedick, 1990) show that students who are more familiar with the topic tend to like their writing better, to feel that the writing task is much easier, and, as a result, to produce a better quality composition. The findings of the present study are well supported by Polio and Glew's (1996) study as well.
Polio and Glew state that many ESL students point out topic familiarity as the biggest reason for choosing a specific topic. Looking at the research participants' comments on the single topic test in the present study, this argument seems to have some validity: the students who liked the single topic, or who did not want to change it, mentioned that the topic was familiar to them or that their majors were related to it. The criteria for topic choice were found to be consistent, at least in the high-intermediate class. It can be predicted, therefore, that ESL students at the high-intermediate level will choose a familiar topic from among the options in future writing tests as well. The issue of topic choice can be related to the issue of whether students choose a topic that allows them to display their best writing ability. In order to discuss this controversial issue, the researcher reviewed the scores of the student essays and found that there was almost no difference in scores between the two tests. That is, only three students out of 19 had a one-point score discrepancy between the optional topic test and the single topic test, a difference often claimed to be acceptable by researchers and professional writing assessment programs (Kroll, 1991). For this small difference, Gabrielson, Gordon, and Engelhard (1995) provide some explanations: (1) all the writing tasks were equally difficult to respond to, so the choice of task made no difference; (2) students did not have the ability to choose the writing task that would result in the best written essay; (3) students did have the ability to choose the better task, but the necessity of choosing resulted in anxiety or loss of time that interfered with the writing process. Although the scores should be interpreted with caution due to the nature of the research design employed for the present study, the results suggest that presenting the ESL students with options did not appear to lessen the quality of their essays.
It is worth mentioning that some of the students indicated they chose a topic simply to get a better score, regardless of whether the topic was familiar, interesting, or easy. It is hard to conclude that every ESL student in this study had the same opinion. However, when chatting about the writing tests with the research participants after the interviews, the researcher was told that all of them, except those who did not plan to pursue their studies in English-speaking countries, wanted to have good marks in the writing tests of the present study as well as in other writing tests (e.g., the TWE is the most popular one they plan to take). The desire to achieve a good score in a writing test was frequently mentioned by the EFL students in the pilot study, too. It seems safe to infer that most of the ESL students in the two writing classes who planned to take a writing test to enter academic institutions in English-speaking countries participated in the research with a desire to practice writing tests in order to achieve a satisfactory score in future tests. With this desire, many of them chose the topic that was most familiar. If the criteria which the ESL students mentioned can be called the explicit reasons for topic choice, this desire can be called the implicit reason. This implicit desire seems to control ESL students' topic choice, whether or not they like the topic, as witnessed by the ESL students in this study, and it makes the students try harder to get a better mark than they would during normal writing activities in the writing class, activities which are aimed at developing overall writing abilities. In this sense, Zamel's (1982) assertion that writing is the process of discovering meaning might not apply to the test situation.

5.3 What were the ESL students' attitudes toward each test condition?

The rest of the research questions (#5, #6, and #7) were aimed at exploring the ESL students' attitudes toward each test condition.
As reported in the previous chapter, a great majority of the ESL students in the two writing classes felt more comfortable with the optional topic test than with the single topic test. The most crucial point was that the optional topic test provided them with an opportunity to choose a topic that they would like to write about. This opportunity led most of them to choose a familiar topic and seemed to let them write an essay with a sense of ownership of the writing task (Atwell, 1984). Conversely, the non-provision of a choice of topics made many of them feel great pressure. One student argued, "I like the optional topic test. Each person has different knowledge on each topic. Some know more about the topic. Some don't know. If all people have to write only one topic, it is unfair." Students commented that they were not interested in the topic assigned in the single topic test but just had to write. Similar findings were claimed long ago by Hoetker (1979) in his review of the effects of a single versus optional writing topic. Hoetker stated that:

if a single topic is set, at least some of the examinees will find it unstimulating or outside their experience and will not write their best in response to it. . . the provision of optional topics increases the chance each examinee will find one upon which he has something to say. (pp. 42-48)

It is also worthwhile to note several students' comments that a choice of topics disturbed their concentration on the test because they had to read all the topics and move back and forth until they chose one, and this made them waste time. The disparities between these two groups of ESL students after experiencing the same conditions led the researcher not to come to the hasty conclusion that ESL students should be offered a choice in a writing test.
Looking at the relationship between the students' preference for one test condition over the other and their performance may provide a clue to the solution of this complex issue. For example, if no significant relationship is detected between preference and performance, one may be able to argue that students' feelings are probably not a factor affecting scores. Rigorous experimental research would show the correlation between preference and performance and the condition under which students perform better, along with more carefully designed qualitative research aimed at examining ESL students' minds in depth.

The interactions between the ESL students' attitudes toward each test condition and the process of writing in the two tests were also an aim of exploration in this study. As already mentioned, most of the students felt more comfortable with the choice condition than with the single one, which may have allowed them to write more fluidly. Observing their process of writing, however, the researcher found that the ESL students showed very similar behaviors in the two tests. According to Selfe (1984) and O'Shea (1987), apprehensive writers often go through different writing processes: they have difficulty in extracting ideas for the topic and often try to plan the first few sentences of the essay before, or instead of, formulating a general overall plan. In this study, many ESL students made a list of ideas or a few sentences, but not often an overall plan; yet whether or not they felt comfortable with the choice condition, they showed very similar planning behaviors in the two tests. Students who generated a list of ideas in the first test, for example, did the same in the second. Regarding the process of drafting an essay, Hayes (1981) argues that more apprehensive writers take longer to complete writing assignments, while less apprehensive writers write quickly.
The proportion of ESL students in the present study who completed in time a writing task they preferred was almost identical to the proportion of those who finished a writing task they did not like. Selfe (1985) also points out that, when revising, apprehensive writers spend less time editing and concentrate on superficial matters such as spelling and minor sentence changes rather than on organizational considerations. Again, most of the ESL students in this study presented very similar revision behaviors in both of the tests. Students who revised their essays corrected only mechanical errors and, in a few cases, made sentence changes, but none of them changed the essays at an organizational level.

These results may stem from several variables. First, the short time limit could be one of the variables producing results different from the previous findings. Had the test time been long enough, the ESL students might have shown different behaviors following their teachers' instructions, such as more specific prewriting, multiple drafts, and thorough revision. Second, the ESL students' writing ability could be another variable to consider. Selfe's participants (four high and four low writing apprehensives) were native English speakers in college freshman composition courses. Although the ESL students in the present study were taking at least a high-intermediate writing course, their level of writing proficiency was probably lower than, or at best similar to, that of Selfe's low writing apprehensives. That may explain why most of the ESL students showed behaviors somewhat similar to those of the low writing apprehensives in Selfe (1984) by not formulating a general overall plan and focusing on superficial matters.

5.4 What was the pattern for topic choice?
As introduced in Chapter 2, Chiste and O'Shea (1988) studied the pattern of choice by question position in a set (first, second, third, or fourth) and the pattern of choice by question length (shortest, second shortest, second longest, or longest in the set). Chiste and O'Shea found that the ESL writers heavily favored the first or second questions in a set of four and the shortest or second shortest questions in a set. Choosing the shortest or earliest positioned questions in a set, however, did not correlate with success. The present study set up a design similar to Chiste and O'Shea's in terms of offering four expository or argumentative topics varied by topic length in a timed-test situation, except for some conditions such as the length of the test time and the reality of the examination (Chiste and O'Shea used a two-hour compulsory university writing competence test; the present study used a 30-minute experimental writing test). The results of the two studies were, however, very different from each other. The present research showed that most of the ESL students chose topic #1 or topic #3, while topic #2 and topic #4 were the least chosen. The ESL students commented that they had not deliberately skipped any topics to save time. Observing their choice of topic by question length, the researcher found that the ESL students chose the shortest (23 words) or the second longest topic (38 words) most and the second shortest (36 words) or the longest (51 words) least. The results appeared to be different from those in Chiste and O'Shea, showing no special pattern in the topic selection process. In this study, the ESL students' pattern of choice may be better explained by the criteria used for topic choice than by question position or question length. Most of the ESL students chose a topic by such criteria as topic familiarity, interest, and perceived ease. Their choice, however, seemed to make almost no difference in scores between the two tests.
The results suggest some possible explanations: (1) the topics which seemed most accessible to the ESL students were positioned first and third in the set, and (2) the ESL students might not have felt any difficulty in reading topics phrased in around 40 words or fewer. Many students indicated that they did not carefully read topic #4 (the longest topic) because it looked too long and complicated.

CHAPTER SIX

Implications

Whether or not to offer options has been an issue that has long vexed professional test-makers and researchers, and so far the answer has not been clear enough to decide which condition would really help students display their best writing ability in test contexts. The results of the present study are also inconclusive in terms of comprehensiveness and generalizability. This study focused on the interactions between ESL student writers and topics in a timed-test condition, but did not take rater variables into consideration. As well, it was a case study with a small number of ESL students. In spite of the lack of comprehensiveness and generalizability, however, the research findings suggest some implications for ESL composition teachers, test-makers, administrators, and researchers. These implications follow the limitations of the research.

6.1 Cautions

If scores or grades explain what students have achieved in a writing test, the process of writing provides background information for understanding how the students have reached certain scores. Research on process is, therefore, often conducted using qualitative methodology to analyze the complex nature of the interactions between students and inter- and intra-variables.¹

¹ Inter-variables are elements such as the nature of the writing task (e.g., university entrance test, placement test), topic, time limit, audience, and physical and/or psychological environments, which are outside the student writer and affect the writing process and performance. Intra-variables are elements such as background knowledge, experience, culture, interest, feelings, health, and the like, which are inside the student writer and affect the writing process and performance.

The present study was also aimed at exploring part of the nature of student writing, considering especially student writers and topics in a test condition. Therefore, collecting enough information regarding the attitudes and opinions of the student writers was one of the most important tasks in this study. To capture the students' perspectives on each test condition, an interview method was used after each test session. The researcher, however, had great difficulty in securing enough time for the interview sessions. The ESL students wanted to participate in the interviews only during the break time between classes or at lunch time in the language institution where this research was conducted. Since the researcher planned to collect fresh memories about each test from the ESL students, he could not simply put off the interviews. Each student, therefore, met the researcher for only ten to fifteen minutes, and the researcher found that this was not long enough to explore the students' minds in depth. Access to the ESL students after completion of the data collection was another problem. In the language institution, each term of writing classes was scheduled to last twelve weeks. When the researcher started the present study with the ESL students, the two classes had already finished more than two thirds of the term.
While analyzing the data, the researcher sometimes found a lack of information on some important issues, but the term had already finished by the time he planned to revisit, and many of the ESL students had gone back to their countries or to other places in North America.

Another limitation is that this study does not provide an answer to the question of whether or not ESL students should be given a choice in a writing test. The students' attitudes toward and opinions of the test conditions made it obvious that they wanted to have options. However, the students' opinions alone are not sufficient to secure firm ground for any claims. Empirical research is, therefore, required to compare students' performance when they are offered a choice of topics with their performance when they are offered a single topic.

6.2 Implications for ESL composition teachers

Like most of the ESL students in this study, many ESL learners tend to come to academic language institutions to prepare for English proficiency tests (e.g., TOEFL, TWE) in order to enter English-speaking universities. For them, especially those who are taking academic writing courses, practicing writing tests in class is very important. Some ESL composition teachers may believe that offering students a regular writing test with a time limit is not ideal for improving the students' academic writing ability. As Wolcott (1987) argues, the test situation often precludes substantive additions, deletions, and shifts in organizational structure. In this light, their arguments are quite right: ESL students need to improve their writing ability through the whole process of writing. At the same time, however, ESL students also need to practice writing skills in a timed-test situation as if it were a real test, to help them achieve a satisfactory score in a writing test.
Once teachers offer ESL students an in-class writing examination, they need to provide students with both types of tests (single topic and optional topic) because writing assessment programs offer different numbers of topics. As well, teachers should explain to the ESL students the nature of the written competence examination, which is often called a 'non-content-based writing test' (Kroll, 1991). This kind of examination does not require students to have background knowledge to write on the topic, but rather to use their imaginations. Keeping in mind that many ESL students tend to choose a familiar topic in a test condition but seem to produce essays of a quality similar to those written on a single topic, teachers should counsel ESL students who complain about a topic, and encourage and teach them how to show the full strength of their writing skills.

6.3 Implications for test-givers

For the test-givers, the results of the study suggest that providing options may not have a detrimental effect on student writing, and many students want to have a choice even though they recognize that the time given is only 30 minutes. Test-makers may argue that offering a choice of topics causes problems in constructing topics and in evaluating students' essays, and they are right in terms of securing validity and reliability for the writing test. It may mean, however, that they need to make a more serious effort to resolve the problems around this issue for the sake of ESL students as well as native English-speaking students who struggle with an assigned topic that does not really interest them. The goal should be to provide students with an opportunity to display their best writing ability. Test-givers need to pay attention to the students' voices unless rigorous empirical research finds that students write significantly better under a single topic condition than under a choice condition.
The time factor is another issue many ESL students raised after the tests. Many of them asked for more time, 15 to 30 minutes more, to complete their essays in both of the test situations. If the purpose of a writing test is to estimate the examinees' general writing ability in a second language (Hoetker, 1979), and ESL students ask for more time (e.g., one hour) to show it, then test-givers should reexamine the length of the test time to judge whether it is appropriate for ESL students with a variety of writing proficiency levels. Most of the ESL students in this study believed that they needed about one hour to write a good essay in a second language. There are also other factors test-givers should consider when offering options. Most of the ESL students in this study wanted to have three or four options, supporting Polio and Glew's (1996) finding that their students wished they had three. Wording is also problematic, as examined in Brossell (1983). Almost all the ESL students in this project avoided topic #4, which had 51 words, or did not even read it carefully. The study of wording was beyond the scope of the present research, but this issue should be carefully scrutinized when offering options.

6.4 Implications for researchers

As implied throughout this paper, this study does not provide critical information about the ESL students' writing performance. Although the ESL students' essays were evaluated by two trained raters, the results cannot be considered generalizable. ESL students' writing performance, along with their in-depth perspectives, may provide information about which condition would help ESL students display their best writing ability, and this can only be established by further empirical research. In order to conduct experimental research, researchers should consider rater-related issues (qualifications, scoring criteria, training programs, etc.) and test conditions as well as students and topics.
6.5 Reflections

Even though a system is in place, even though it "works," it must be continually reassessed, reevaluated, reexamined, studied, and probed, questioned, and requestioned. (Fishman, 1984, p. 24)

Throughout the long process of the research project, the researcher has gained valuable experience learning from the previous literature, the research participants, and the people around him, and he has become fascinated with being a researcher. He has found that research should be meaningful to the researcher as much as to his readers. He grows through the process, changing or reconfirming his perspectives based on the findings, or finding a new direction for further exploration. The deep love of inquiring, finding, and understanding a situation propels the researcher to go further and to find practical applications for the classroom. Feedback from the classroom in turn inspires the researcher to reinvestigate the situation.

The previous literature helped the researcher understand the complex nature of topic effects on student writing and provided the ground for the present research. The researcher found that studies of whether students should be given a choice of topics in a writing test offered inconclusive explanations, and most of the studies did not even examine how test-takers go through the writing process under choice and non-choice timed-test conditions, or how they feel about each test condition. College-level ESL students from two academic writing classes were observed to examine their writing process, and the findings showed that they went through similar processes of writing. They had already learned the importance of the writing process in writing a good essay, but many students appeared to skip making prewriting notes and revising essays in this research. They seemed to find, as many students stated, that the teachers' instructions did not fit well in a short timed-test condition.
The ESL students spent more time prewriting in the optional topic test than in the single topic test. The results may be natural because, simply speaking, four topics take longer to consider than one. Attention should be given, rather, to the issue of whether reading optional topics was merely a waste of time or a productive step forward. Not surprisingly, most of the students did not feel it was a waste of time and liked their choice. Of the criteria for topic choice, the majority of the ESL students ranked topic familiarity or background knowledge at the top, as many studies (e.g., Polio & Glew, 1996) have reported. Most of the ESL students would rather have a choice than a single topic in a real test condition. The researcher also had the students' essays evaluated by two raters and found that there appeared to be almost no relationship between the students' performance and their preference for one test condition over the other. These results, along with the ESL students' opinions, led the researcher to reflect that offering options in a timed-test situation may help many ESL students feel more comfortable about taking a writing test, and these comfortable feelings may help them display their general writing ability, even if it is not their best. The researcher, however, admits that the present research should be seen as one part of the whole picture of examining the effects of topic choice on student writing. Taking affective factors as a whole into consideration, more comprehensive, empirical research would help define a larger and clearer picture of the issue of whether or not to offer options in a timed-test condition.

Bibliography

Atwell, N. (1987). In the middle: Writing, reading, and learning with adolescents. Montclair, NJ: Boynton/Cook.

Baker, E., & Quellmalz, E. (1981). Effects of visual or written topic information on essay quality. In E. Quellmalz (Ed.), Test design: Annual report (pp. 153-183). (ERIC Document Reproduction Service No.
ED 212 650)

Bogdan, R. C., & Biklen, S. K. (1998). Qualitative research in education: An introduction to theory and methods (3rd ed.). Boston, MA: Allyn and Bacon.

Brossell, G. (1986). Current research and unanswered questions in writing assessment. In K. Greenberg, H. Wiener, & R. Donovan (Eds.), Writing assessment: Issues and strategies (pp. 168-182). New York: Longman.

Brossell, G. (1983). Rhetorical specification in essay examination topics. College English, 45 (2), 165-173.

Brossell, G., & Ash, B. H. (1984). An experiment with the wording of essay topics. College Composition and Communication, 35 (4), 423-425.

Brown, J. D., Hilgers, T., & Marsella, J. (1991). Essay prompts and topics. Written Communication, 8 (4), 533-556.

Carlman, N. (1986). Topic differences on writing tests: How much do they matter? English Quarterly, 19 (1), 39-49.

Carlson, S., & Bridgeman, B. (1986). Testing ESL student writers. In K. Greenberg, H. Wiener, & R. Donovan (Eds.), Writing assessment: Issues and strategies (pp. 126-152). New York: Longman.

Charney, D. (1984). The validity of using holistic scoring to evaluate writing: A critical review. Research in the Teaching of English, 18 (1), 65-81.

Chase, C. (1968). The impact of some obvious variables on essay test scores. Journal of Educational Measurement, 5 (4), 315-361.

Chesky, J., & Hiebert, E. H. (1987). The effects of prior knowledge and audience on high school students' writing. Journal of Educational Research, 80 (5), 304-313.

Chiste, K. B., & O'Shea, J. (1988). Patterns of question selection and writing performance of ESL students. TESOL Quarterly, 22 (4), 681-684.

Cooper, C. (1977). Holistic evaluation of writing. In C. Cooper & L. Odell (Eds.), Evaluating writing: Describing, measuring, judging (pp. 3-31). Urbana, IL: National Council of Teachers of English.

Cronbach, L. J. (1970). Essentials of psychological testing (3rd ed.). New York: Harper & Row.

Crowhurst, M. (1990).
Teaching and learning the writing of persuasive/argumentative discourse. Canadian Journal of Education, 15 (4), 348-359.

Crowhurst, M., & Piche, G. L. (1979). Audience and mode of discourse effects on syntactic complexity in writing at two grade levels. Research in the Teaching of English, 13 (2), 101-109.

Davis, S. J., & Winek, J. (1989). Improving expository writing by increasing background knowledge. Journal of Reading, 33 (3), 178-181.

DeGroff, L. C. (1987). The influence of prior knowledge on writing, conferencing, and revising. The Elementary School Journal, 88 (2), 105-118.

Engelhard, G., Jr., Gordon, B., & Gabrielson, S. (1992). The influences of mode of discourse, experiential demand, and gender on the quality of student writing. Research in the Teaching of English, 26 (3), 315-336.

Faigley, L., Daly, J. A., & Witte, S. (1981). The role of writing apprehension in writing performance and competence. Journal of Educational Research, 75 (1), 16-21.

Fishman, J. (1984). Do you agree or disagree: The epistemology of the CUNY writing assessment test. Writing Program Administration, 8 (1-2), 17-25.

Freedman, A., & Pringle, I. (1984). Why students can't write arguments. English in Education, 18 (2), 73-84.

Freedman, A., & Pringle, I. (1980). Writing in the college years: Some indices of growth. College Composition and Communication, 31 (3), 311-324.

Freedman, S. W. (1983). Student characteristics and essay test writing performance. Research in the Teaching of English, 17 (4), 313-325.

Freedman, S. W. (1981). Influences on evaluators of expository essays: Beyond the text. Research in the Teaching of English, 15 (3), 245-255.

Freedman, S. W., & Robinson, W. S. (1982). Testing proficiency in writing at San Francisco State University. College Composition and Communication, 33 (4), 393-398.

Gabrielson, S., Gordon, B., & Engelhard, G., Jr. (1995). The effects of task choice on the quality of writing obtained in a statewide assessment.
Applied Measurement in Education, 8 (4), 273-290.

Gordon, E. (1986). Students' rationale for topic choice in writing an argumentative essay. (ERIC Document Reproduction Service No. ED 270 786)

Greenberg, K. (1981). The effects of variation in essay questions on the writing performance of CUNY freshmen. New York: City University of New York Instructional Resource Center.

Hamp-Lyons, L. (1990). Second language writing: Assessment issues. In B. Kroll (Ed.), Second language writing: Research insights for the classroom (pp. 69-87). New York, NY: Cambridge University Press.

Hamp-Lyons, L. (1986). No new lamps for old yet, please. TESOL Quarterly, 20 (4), 790-796.

Hamp-Lyons, L., & Mathias, S. P. (1994). Examining expert judgments of task difficulty on essay tests. Journal of Second Language Writing, 3 (1), 49-68.

Hayes, C. (1981). Exploring apprehension: Composing processes of apprehensive and non-apprehensive intermediate freshmen writers. (ERIC Document Reproduction Service No. ED 210 678)

Heaton, J. B. (1975). Writing English language tests. London: Longman.

Henning, G. (1987). A guide to language testing. New York: Newbury House.

Hennig, K. R., Jr. (1980). Composition writing and the functions of language. Dissertation Abstracts International, 41, 2424-A.

Hilgers, T. L. (1982). Experimental control and the writing stimulus: The problem of unequal familiarity with content. Research in the Teaching of English, 16 (4), 381-390.

Hillocks, G., Jr. (1986). Research on written composition: New directions for teaching. Urbana, IL: National Conference on Research in English.

Hoetker, J. (1982). Essay examination topics and students' writing. College Composition and Communication, 33 (4), 377-392.

Hoetker, J. (1979). On writing essay topics for a test of the composition skills of prospective teachers: With a review of literature on the creation, validation, and effects of topics on essay examination. Volume four of five. (ERIC Document Reproduction Service No.
ED 194615)

Hoetker, J., & Brossell, G. (1986). A procedure for writing content-fair essay examination topics for large-scale writing assessments. College Composition and Communication, 37 (3), 328-335.

Horowitz, D. (1986). Process not product: Less than meets the eye. TESOL Quarterly, 20 (1), 141-144.

Huot, B. (1990). Reliability, validity, and holistic scoring: What we know and what we need to know. College Composition and Communication, 41 (2), 201-213.

Jacobs, H., Zingraf, S., Wormuth, D., Hartfiel, V., & Hughey, J. (1981). Testing ESL composition: A practical approach. Rowley, MA: Newbury House.

Keech, C. (1982). Unexpected directions of change in student writing performance. (ERIC Document Reproduction Service No. ED 220 538)

Kegley, P. H. (1986). The effect of mode of discourse on student writing performance: Implications for policy. Educational Evaluation and Policy Analysis, 8 (2), 147-154.

Kennedy, B. L. (1994). The role of topic and the reading/writing connection. TESL-EJ, 1 (1), 1-11.

Kincaid, G. L. (1963). Some factors affecting variations in the quality of students' writing. In R. Braddock, R. Lloyd-Jones, & L. Schoer (Eds.), Research in written composition (pp. 83-95). Urbana, IL: National Council of Teachers of English.

Kinzer, C. (1987). Effects of topic and response variables on holistic score. English Quarterly, 20 (2), 106-120.

Kroll, B. (1991). Understanding TOEFL's Test of Written English. RELC Journal, 22 (1), 20-33.

Langer, J. A. (1984). The effects of available information on responses to school writing tasks. Research in the Teaching of English, 18 (1), 27-44.

Leonhardt, N. L. (1985). The effects of assigned versus open topics on the writing scores of university-level nonnative English speakers. Unpublished doctoral dissertation, The Florida State University, Florida.

McColly, W. (1970). What does educational research say about the judging of writing ability? Journal of Educational Research, 64 (4), 147-156.

McCutchen, D. (1986).
Domain knowledge and linguistic knowledge in the development of writing ability. Journal of Memory and Language, 25 (4), 431-444.

Markham, L. (1976). Influences of handwriting quality on teacher evaluation of written work. American Educational Research Journal, 13 (4), 277-283.

Martinez San Jose, C. P. (1972). Grammatical structures in four modes of writing at fourth-grade level. Dissertation Abstracts International, 33, 5411-A.

McMillan, J. H., & Schumacher, S. (1993). Research in education: A conceptual introduction (3rd ed.). New York, NY: HarperCollins College Publishers.

Mehrens, W. A., & Lehmann, I. J. (1990). Measurement and evaluation in education and psychology (4th ed.). New York, NY: Holt, Rinehart and Winston.

Mellon, J. (1976). Round two of the National Writing Assessment: Interpreting the apparent decline of writing ability: A review. Research in the Teaching of English, 10 (1), 66-74.

Merriam, S. B. (1998). Qualitative research and case study applications in education. San Francisco, CA: Jossey-Bass Publishers.

Mohan, B., & Lo, W. A. (1985). Academic writing and Chinese students: Transfer and developmental factors. TESOL Quarterly, 19 (3), 515-534.

Murphy, S., & Ruth, L. (1993). The field testing of writing prompts reconsidered. In M. Williamson & B. Huot (Eds.), Validating holistic scoring for writing assessment: Theoretical and empirical foundations (pp. 266-302). Cresskill, NJ: Hampton Press.

Murray, D. M. (1980). Writing as process: How writing finds its own meaning. In T. R. Donovan & B. W. McClelland (Eds.), Eight approaches to teaching composition (pp. 3-20). Urbana, IL: National Council of Teachers of English.

Newell, G. E. (1984). Learning from writing in two content areas: A case study/protocol analysis. Research in the Teaching of English, 18 (3), 265-287.

Newcomb, J. S. (1977). The influence of readers on the holistic grading of essays. Unpublished doctoral dissertation, University of Michigan.

Nietzke, D. A. (1972).
The influence of composition assignment upon grammatical structure. Dissertation Abstracts International, 32, 5476-A.
O'Donnell, H. (1984). ERIC/RCS report: The effects of topic on writing performance. English Education, 16(4), 243-249.
Oliver, E. (1995). The writing quality of seventh, ninth, and eleventh graders, and college freshmen: Does rhetorical specification in writing prompts make a difference? Research in the Teaching of English, 29(4), 422-450.
O'Shea, J. (1987). Writing apprehension and university tests of writing competence. English Quarterly, 20(4), 285-295.
Polio, C., & Glew, M. (1996). ESL writing assessment prompts: How students choose. Journal of Second Language Writing, 5(1), 35-49.
Powers, W. G., Cook, J. A., & Meyer, R. (1979). The effect of compulsory writing on writing apprehension. Research in the Teaching of English, 13(3), 225-230.
Prater, D. L. (1985). The effects of modes of discourse, sex of writer, and attitude toward task on writing performance in grade 10. Educational and Psychological Research, 5(3), 241-259.
Prater, D., & Padia, W. (1983). Effects of modes of discourse on writing performance in grades four and six. Research in the Teaching of English, 17(2), 127-134.
Quellmalz, E. S., Capell, F. J., & Chou, C. (1982). Effects of discourse and response mode on the measurement of writing competence. Journal of Educational Measurement, 19(4), 241-258.
Ruth, L., & Murphy, S. (1988). Designing writing tasks for the assessment of writing. Norwood, NJ: Ablex.
Ruth, L., & Murphy, S. (1984). Designing topics for writing assessment: Problems of meaning. College Composition and Communication, 35(4), 410-422.
Selfe, C. L. (1985). An apprehensive writer composes. In M. Rose (Ed.), When a writer can't write: Studies in writer's block and other composing process problems (pp. 83-95). New York: Guilford Press.
Selfe, C. L. (1984). The predrafting processes of four high- and four low-apprehensive writers.
Research in the Teaching of English, 18(1), 45-64.
Shaughnessy, M. P. (1977). Errors and expectations: A guide for the teacher of basic writing. New York: Oxford University Press.
So, N. W. (1997). The influence of mode of discourse on syntactic complexity and the quality of student writing at college ESL levels. Unpublished paper, The University of British Columbia, Canada.
Spaan, M. (1993). The effect of prompt in essay examinations. In D. Douglas & C. Chapelle (Eds.), A new decade of language testing research: Selected papers from the 1990 language testing colloquium (pp. 98-121). Alexandria, VA: TESOL.
Squire, J. R. (1983). Composing and comprehending: Two sides of the same basic process. Language Arts, 60(5), 581-589.
Stewart, M. F., & Grobe, C. H. (1979). Syntactic maturity, mechanics of writing, and teachers' quality ratings. Research in the Teaching of English, 13(3), 207-215.
Stiggins, R. J. (1982). A comparison of direct and indirect writing assessment methods. Research in the Teaching of English, 16(2), 101-114.
Tedick, D. J. (1990). ESL writing assessment: Subject-matter knowledge and its impact on performance. English for Specific Purposes, 9(2), 123-143.
Troyka, L. Q. (1984). The phenomenon of impact: The CUNY writing assessment test. Writing Program Administration, 8(1-2), 27-34.
Watson, C. (1980). The effects of maturity and discourse type on the written syntax of superior high school seniors and upper level college English majors. Dissertation Abstracts International, 41, 141-A.
White, E. M. (1995). An apologia for the timed impromptu essay test. College Composition and Communication, 46(1), 30-45.
White, J. O. (1988). Who writes these questions, anyway? College Composition and Communication, 39(2), 230-235.
Winfield, F. E., & Barnes-Felfeli, P. (1982). The effects of familiar and unfamiliar cultural context on foreign language composition. Modern Language Journal, 66(4), 373-378.
Witte, S. (1992).
Context, text, intertext: Toward a constructivist semiotic of writing. Written Communication, 9(2), 237-308.
Witte, S. (1987). Pre-text and composing. College Composition and Communication, 38(4), 397-425.
Witte, S., & Faigley, L. (1981). Coherence, cohesion and writing quality. College Composition and Communication, 32(2), 189-204.
Wolcott, W. (1987). Writing instruction and assessment: The need for interplay between process and product. College Composition and Communication, 38(1), 40-46.
Zamel, V. (1982). Writing: The process of discovering meaning. TESOL Quarterly, 16(2), 195-209.

Appendix A: Annual TWE examinees

Period                    Administrations    Number of TWE examinees
'93 Aug. - '94 May        5                  295,473
'94 Aug. - '95 May        5                  311,410
'95 Jul. - '96 Jun.       5                  325,125
'96 Jul. - '97 Jun.       5                  338,832
'97 Jul. - '98 Jun.       5                  348,063

(Source: Annual TOEFL Test and Score Data Summary, Educational Testing Service.)

Appendix B: Topics

[TEST A]
Instructions: There are four topics attached to these instructions. Choose one of the four topics to write your best essay on. You will have 30 minutes to write as much as you can.

1. Do you agree or disagree with the following statement? Parents are the best teachers. Use specific reasons and examples to support your answer.

2. Do you agree or disagree with the following statement? It is better to be a member of a group than to be the leader of a group. Use specific reasons and examples to support your answer.

3. Some people believe that a college or university education should be available to all students. Others believe that higher education should be available only to good students. Discuss these views. Which view do you agree with? Explain why.

4. Some people trust their first impressions about a person's character because they believe these judgments are generally correct. Other people do not judge a person's character quickly because they believe first impressions are often wrong.
Compare these two attitudes. Which attitude do you agree with? Support your choice with specific examples.

[TEST B]
Instructions: Write an essay on the following topic. You have only 30 minutes to write on the topic.

Some people say that advertising encourages us to buy things we really do not need. Others say that advertisements tell us about new products that may improve our lives. Which viewpoint do you agree with? Use specific reasons and examples to support your answer.

(Topics reprinted by permission of Educational Testing Service, the copyright owner. Appendix materials covered by this permission: TWE Writing Topics, www.toefl.org)

Appendix D: Questionnaires

Background Information

1. Name: (first name) (surname)   Phone:
2. Year of birth:
3. Gender (circle one): Male / Female
4. Occupation:

(General Education & Background Knowledge)

School education
5. How many years of school education have you had in Canada? (e.g., 3 years 8 months)   years   months
6. How many years of school education have you had in your country? (e.g., 3 years 8 months)   years   months
7. Which of the following subjects did you study in secondary school? Please check all the appropriate boxes.
   Foreign languages (other than English) [ ]   English [ ]   History [ ]   Religion [ ]
   Literature [ ]   Physics [ ]   Social Studies [ ]   Economics [ ]
   Biology [ ]   Computer Science [ ]   Chemistry [ ]   Geography [ ]
   Mathematics [ ]   Other subjects:
8. Which of the following subjects have you studied in university? Please check all the appropriate boxes.
   Foreign languages (other than English) [ ]   English [ ]   History [ ]   Religion [ ]
   Literature [ ]   Physics [ ]   Social Studies [ ]   Economics [ ]
   Biology [ ]   Computer Science [ ]   Chemistry [ ]   Geography [ ]
   Mathematics [ ]   Other subjects:
9. What is your major (minor) in university? (e.g., engineering, political science, etc.)
10. Level of course (check the appropriate one): Undergraduate [ ]   Graduate (M.A.) [ ]   Graduate (Ph.D.) [ ]

Future course of study
11. What subject(s) will you study next?
(e.g., engineering, political science, etc.)
12. Level of course (check the appropriate one): Undergraduate [ ]   Graduate (M.A.) [ ]   Graduate (Ph.D.) [ ]   Others [ ]

Background knowledge
13. Think about the reading you do for your work or during your spare time. Do you read books, magazines, academic papers, or newspaper articles on any of the following subjects? (Circle each appropriate one.)

                              Often   Sometimes   Never
   Social Science               1         2         3
   Education                    1         2         3
   Business                     1         2         3
   Economics                    1         2         3
   Geography                    1         2         3
   History                      1         2         3
   Literature                   1         2         3
   Languages                    1         2         3
   Computer Science             1         2         3
   Others (please specify)      1         2         3
                                1         2         3

ESL learning experience
14. How many years/months did you study English in your country? (e.g., 3 years 5 months)   years   months
15. How many years/months have you studied English in an English environment? (e.g., North America, England, Australia, etc.)   years   months
16. What languages can you speak? If you speak more than one, circle your first language. If you consider yourself bilingual, circle both.
   1)   2)   3)
17. If you have ever had any native English-speaking teachers, please list the courses, the years, and the length of time:
   Courses (e.g., University English Course)   Years   Length of time
18. If you have ever taken or are currently taking any writing courses, please list the courses, the years, the length of time, and level of courses:
   Courses (e.g., academic writing course)   Years   Length of time   Level (Advanced / Intermediate / Low)
19. Please indicate which of the following types of writing you have done in your first language and English. (Check all the appropriate lines.)

                                              (First language)   (English)
   Written a book or professional article
   Written for a magazine or newspaper
   Written poetry, short stories, or plays
   Written letters to friends or family
   Kept a diary or journal
   Written papers in high school
   Written papers in university
   Others (please explain)

20. Do you like to write in your first language?
(Check one answer)
   1) I love to write
   2) I like to write sometimes
   3) I write only when I have to write
   4) I do not like writing and try to avoid it whenever possible

Do you like to write in English? (Check one answer)
   1) I love to write
   2) I like to write sometimes
   3) I write only when I have to write
   4) I do not like writing and try to avoid it whenever possible

Appendix E: Students' topic selection behaviors

[The per-student timing diagrams could not be reproduced here; only each student's total topic-selection time is listed.]

1. High-intermediate writing class

   Student #1: 43 sec.
   Student #2: 59 sec.
   Student #3: 1 min. 42 sec.
   Student #4: 1 min. 47 sec.
   Student #5: 53 sec.
   Student #6: 2 min. 10 sec.
   Student #7: 1 min. 21 sec.
   Student #8: 1 min. 28 sec.
   Student #9: 2 min. 2 sec.
   Student #10: 1 min. 39 sec.
   Student #11: 1 min. [illegible] sec.
   Student #12: 53 sec.
2. Advanced writing class

   Student #1: 44 sec.
   Student #2: 1 min. 24 sec.
   Student #3: 1 min. 36 sec.
   Student #4: 21 sec.
   Student #5: 1 min. 55 sec.
   Student #6: 1 min. 23 sec.
   Student #7: 1 min. 24 sec.
   Student #8: 36 sec.
   Student #9: 1 min. 25 sec.
   Student #10: 1 min. 37 sec.

Appendix F: Interview questions

[Questions for the optional writing topic test]
1. Do you often take writing tests in class?
2. How often do you take writing tests in class? Do you feel pressure when you are taking writing tests, or do you feel comfortable? ... Compared to normal writing tests, how did you feel during this writing task? Was it similar to or different from other tests? Why?
3. You had only 30 minutes to write an essay. Can you tell me how you went through the steps of writing the essay: what did you do first, and so on? In other words, did you take time to think, plan, go back while writing, and edit at the end?
4. Did you read all the topics? Which topic did you choose?
5. Were you able to choose the topic immediately? Or did you go back and forth to choose the right one?
6.
Why did you choose this one?
7. Why didn't you choose the others?
8. Which was the easiest topic? Why? Which was the hardest topic? Why?
9. When you were writing, were you happy that you had chosen this topic, or did you ever think, "I should have chosen another topic"? Why? What caused you to feel like that?
10. Did you change your mind at any point?
11. Do you think you were able to produce your best writing with the topic you chose?
12. What if you had been forced to choose another topic? Would you have been hindered?
13. How do you usually decide on a topic during a test situation?

[Questions for the single writing prompt]
1. Last time, you took a multiple writing prompt test. Compared to the first test, how did you find the single writing topic test?
2. Was it more difficult than the last one? Why or why not?
3. What caused you to feel like that?
4. What did you think when you first saw the writing topic? Did you wish you could have had another one? Why?
5. How did you go through the process of writing an essay with a single topic?

[Closing questions]
1. Which do you prefer, optional topics or a single one?
[if optional] 2. How many choices would be a good number: 2, 3, 4, or 5?
[if single] 3. Do you prefer to have a single writing topic?
4. Are there any special reasons you prefer one to the other?
5. If this were a real test, which one would you prefer? Why?
6. Which test condition do you think helped you produce a better essay? Why?
7. Did it affect your writing under each condition?
8. In order to choose the right topic, you spent a certain amount of time (e.g., 2 minutes). Was 30 minutes enough for you to finish the writing test? Or did you feel short of time because of the time you spent choosing the right topic?
9. Do you think you wasted time because you had to choose a topic?
10. Do you have any other comments on the writing tests?
