Predicting failure in first-term calculus courses

Kseniya Garaschuk
Science Teaching and Learning Fellow
Carl Wieman Science Education Initiative
University of British Columbia

May 27, 2017

Abstract

In this paper, we describe the setup and administration of a precalculus diagnostic test and an attitudinal survey to first-term calculus students. We then illustrate how these tools can be used to identify groups of students at high risk of failing their respective calculus course. The tools and analysis described here can be easily adapted and implemented at any post-secondary institution. Instructors and academic advisors can use this information to better support students by providing them with appropriate options and resources.

1 Introduction and background

Post-secondary institutions invest a lot of time and effort into their first-year courses. In recent years, we have seen universities take particular interest in improving student success in first-year calculus courses, which are compulsory for many programs yet continue to have some of the highest failure rates. The first step in addressing this problem is being able to predict students' final grades, particularly failures, using available tools and measurable factors.

In-term assessments are generally the best predictors of students' final exam performance (they are clearly the best predictors of the final grade itself because they contribute to it). However, it is desirable to identify students at risk of failing as early in the term as possible. Therefore, numerous studies have examined student success in first-year calculus using factors available at the start of the term, such as standardized tests (SAT and ACT), high school mathematics grades, demographic factors (age, gender, ethnicity, parents' level of education, socioeconomic status), non-cognitive factors (attitudes towards science, motivation, students' opinion of their academic skills) and so on.
See [4, 13] for a good literature review on the topic of predicting student achievement in mathematics. These studies have informed many university and college admission requirements as well as course entrance requirements, which further resulted in various placement mechanisms that aim to filter students into the courses that correspond to their level of calculus readiness. These mechanisms range from simple placement tests to placement interviews to adaptive placement assessments, and have been extensively studied (see [1, 8, 12]).

Since there are contrary findings on the best predictors of calculus success (or failure), institutions often choose to run their own analyses of the correlation between various factors and student success in their courses. One can find many studies, ranging from extensive reports (such as [4]) to theses (for example, [11]), on the best predictors of calculus performance at various schools across the globe. It is not surprising that these findings do not agree: from institution to institution and from course to course, the student demographic differs drastically, and so do the skills that students require to do well. Since institutions tend to know their students and their courses best, one would expect that placement tests designed and administered by the institution itself would serve as the best predictor of student success at that institution. In fact, research supports this statement (see [4] for sources). This is one of the reasons that many institutions use placement tests (see [9] for an outdated yet comprehensive summary of all placement tests held in post-secondary institutions across one province of Canada).

The project described in this paper involves designing and administering a precalculus diagnostic test and an attitudinal survey to incoming first-term calculus students during the first week of classes. The data collected is then correlated with final course grades.
We present the grade distributions in relation to diagnostic test and attitudinal survey results. We then identify the main risk group, that is, the group of students most likely to fail the course.

The research described here took place at a large public comprehensive university located in Canada that has 4 separate calculus streams: science, life science, economics and two-term differential calculus. Two of the streams run as a regular session and a session with a tutorial, which makes for 6 different course numbers. In this paper, we analyze 5 one-term calculus courses. They will be referred to as V, W, X, Y and Z below.

The tools and analysis described here can be easily adapted and implemented at any post-secondary institution. Students, instructors and academic advisors can then use this information to support the risk group by providing them with appropriate options and resources.

2 Tools: composition and implementation

We present analysis based on two tools, namely a precalculus diagnostic test and an attitudinal survey, both developed and validated at the same university. We also ran correlation analyses between final grades and other factors, such as high school grades, gender and country of origin, but the correlations were not as strong as those between the final grades and the diagnostic test. Moreover, we wanted to include only the tools that could be implemented by instructors or course coordinators themselves, so they would not have to depend on obtaining unreliable and potentially incomplete high school grades or standardized testing information.

2.1 Precalculus diagnostic test

We developed the precalculus diagnostic test, or diagnostic test, a year prior to administering it.
The diagnostic test is based on the placement test designed and currently given by the mathematics department to assess the calculus readiness of students whose high school precalculus grade is low, outdated or not recognized by the institution (the latter mainly applies to international students). The placement test is a 60-minute proctored multiple-choice exam that students write in the first week of classes. We carried out two pilot runs of a review assignment based on the placement test and analysed the results. The necessary corrections included eliminating some questions, adjusting the difficulty of some questions and introducing several new questions that tested material not covered by the placement test.

The final diagnostic test consists of 20 multiple-choice questions to be completed in 60 minutes. The test is scored based on 6 sections: numbers, equations, inequalities, functions, graphs and trigonometry. Each question on the test receives a score in one or two of the sections. After the test, students receive a score for each section and are then invited to practice their precalculus background skills using review packages that are organized into the same 6 sections. The review materials are available through the university's main learning management system, and students are particularly encouraged to review the sections they were weakest in. The efficacy and student usage of the review materials are the topic of the author's ongoing research. For the purpose of this paper, we will work with the combined diagnostic test score.

The diagnostic test is administered through an open-source online homework system that students use throughout the term. Students are first given a small homework assignment to familiarize themselves with the system and minimize technical issues. The diagnostic test is then launched at the end of the first week of classes once students have completed their first homework. Due to software limitations, the diagnostic test is run on an honour-based system: students are asked not to use any outside sources and to time themselves for 60 minutes to best assess their precalculus knowledge. Students receive one homework's worth of credit (approximately 1% of the total course grade) for completing the diagnostic test. As the test is marked on completion and not correctness, students have no reason not to honestly self-assess. While the test itself is marked out of 36 points, the analysis below uses a scale from 0 to 100, with a higher score representing better precalculus skills.

2.2 Mathematics Attitudes and Perceptions Survey

Alongside the diagnostic test, students are asked to complete the Mathematics Attitudes and Perceptions Survey (MAPS). This attitudinal survey (see [5]) consists of 30 questions split into 7 factors: confidence, growth mindset, real world applicability, interest in mathematics, persistence, sensemaking and the nature of answers. The survey is administered online with voluntary participation; students who take it earn some bonus homework points. The survey takes 5-10 minutes to complete and is scored on a scale from 0 to 100, with a higher score representing more expert-like attitudes.

3 Results

The results below correspond to the data collected in the fall 2015 semester. The diagnostic test and the MAPS were given to all students in first-term calculus courses, approximately 4700 persons. The majority of students completed the diagnostic test, while participation in MAPS was slightly lower. In our analysis, we include students who participated in both the diagnostic test and MAPS and who completed their respective calculus course to earn a letter grade. This amounts to a total of 3303 students across all 5 one-term calculus courses. Table 1 summarizes the diagnostic test participation by students per course.
The visual representation of the averages in Table 1 is included in Figure 1, where we also include the distributions of the incoming MAPS scores across all 5 one-term calculus courses.

Course | Average | Number of students who took the test | Percentage who took the test
V | 65% | 996 | 90%
W | 65% | 837 | 96%
X | 61% | 859 | 93%
Y | 53% | 360 | 93%
Z | 49% | 649 | 92%

Table 1: Diagnostic test participation by course.

Figure 1: Diagnostic test and MAPS score averages by course.

By the end of the term, MAPS scores fell in all courses, which is consistent with results published on MAPS and other attitudinal studies in mathematics and other disciplines. However, a careful analysis of one of the courses by section (8 sections of the same course, taught by 8 different instructors with common assessments) revealed that the decrease in sections using active learning was significantly smaller than the decrease in other sections, which is consistent with the findings in [14].

The diagnostic test has an above-moderate correlation with final course grades, with Pearson r = 0.53 across all courses, which is statistically significant for the sample size. However, this means that only 28% (r² = 0.28) of the total variation about the mean is explained by the regression line. This indicates that the data is highly non-linear and hence no linear predictor can be reasonably applied in this case.

If we take the diagnostic test average within each final letter grade, we see a clear trend (see Figure 2). Analogous analysis within each course reveals similar trends but with varying numerical values. One has to be careful in correlating any measurable factor with final grades, since the latter are somewhat flexible: the grade bins might vary by course or even by instructor, with true failure cut-offs lying somewhere between 45% and 55%. Furthermore, the borderline pass-fail cases behave somewhat randomly, and students close to the border can end up in either the pass or fail category.
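The correlation figures above (Pearson r = 0.53, r² = 0.28) can be computed in a few lines. The sketch below uses synthetic data, an assumption made only so the snippet is runnable; it is not the study's dataset.

```python
import numpy as np

# Sketch of the correlation computation: Pearson r between diagnostic
# test scores and final grades, and r^2, the fraction of variance about
# the mean explained by a linear fit. Data is synthetic, for illustration.
rng = np.random.default_rng(42)
diagnostic = rng.uniform(0, 100, size=500)              # 0-100 scores
final_grade = 25 + 0.5 * diagnostic + rng.normal(0, 18, size=500)
final_grade = np.clip(final_grade, 0, 100)              # keep grades in range

r = np.corrcoef(diagnostic, final_grade)[0, 1]
print(f"r = {r:.2f}, r^2 = {r * r:.2f}")
```

With real data, the same two lines at the end (the `corrcoef` call and the squaring) would reproduce the r and r² reported above.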
In fact, if resources allow and student numbers are manageable, we recommend treating students in the D range as belonging to the risk group, particularly if they are bound for integral calculus.

Figure 2: Diagnostic test averages by final grade across all one-term courses.

In identifying the main risk group, one needs to decide on a suitable threshold. Figure 3 presents the percentage of failures for a given diagnostic test score in one calculus course. While scoring less than 50% on the diagnostic indicates a higher probability of failure, it might not be possible to reach out to all of those students due to the sheer number of people (indicated by N in Figure 3).

Figure 3: Percentage of students failing the course by diagnostic test score.

Course-specific data provides better insight into the particular student demographics, and so we examine one course in detail below. We consider a subset of 783 students in this course who took both the diagnostic test and the MAPS. We then split the students into two groups (called H-Diag and L-Diag) according to whether they scored above or below the diagnostic class average of 66%. Here, within each course, the threshold used for identifying the risk group is the class's own average. These two groups, H-Diag and L-Diag, are nearly equal in size, with 387 and 396 students respectively. We group D and F students together, since a C or higher is required in order to advance to integral calculus.
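The split just described, dividing a course by its own diagnostic class average and pooling D and F grades, can be sketched as follows. The student records are synthetic, used only to illustrate the computation.

```python
# Sketch of the course-level split: students are divided into H-Diag and
# L-Diag by the class's own diagnostic average, and D and F grades are
# pooled into one fail-like category. Records are (score, grade) pairs
# and are synthetic, for illustration only.
records = [
    (82, "A"), (74, "B"), (70, "C"), (61, "D"),
    (58, "C"), (45, "F"), (40, "D"), (88, "A"),
]

class_avg = sum(score for score, _ in records) / len(records)
h_diag = [grade for score, grade in records if score >= class_avg]
l_diag = [grade for score, grade in records if score < class_avg]

def df_share(grades):
    """Fraction of grades in the pooled D+F category."""
    return sum(g in ("D", "F") for g in grades) / len(grades)

print(f"class average = {class_avg:.1f}")
print(f"H-Diag: n={len(h_diag)}, D+F share = {df_share(h_diag):.0%}")
print(f"L-Diag: n={len(l_diag)}, D+F share = {df_share(l_diag):.0%}")
```

Run on a full course roster, the same computation yields the group sizes and the concentration of Ds and Fs in the L-Diag group discussed below.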
Therefore, from our perspective, a D grade is equivalent to an F, since D students will, in most cases, have to retake differential calculus (for terminal differential calculus courses, which do exist in some institutions, or for programs that only require one calculus course, one might need to adjust this analysis).

Figure 4: Distribution of grades according to the diagnostic test score in terms of number of people (left) and in percentage of total (right).

Figure 4 shows the distribution of grades in terms of number of people and in percentage of total. We can see that there is a clear trend in grades according to diagnostic test performance. In particular, 74% of all As occur in the H-Diag group, whereas 90% of all Ds and Fs occur in the L-Diag group.

We further split the H-Diag and L-Diag groups into two each, according to whether the students scored above or below the MAPS class average of 59% (we label these groups H-Diag/H-MAPS, H-Diag/L-MAPS, L-Diag/H-MAPS and L-Diag/L-MAPS). Figure 5 illustrates the composition of each grade by diagnostic test and MAPS score. Again, the most obvious trends are in the As and in the D+F group, as we see most As occur in the H-Diag/H-MAPS group and most Ds and Fs occur in the L-Diag/L-MAPS group.

Figure 5: Composition of each grade by the diagnostic test score and MAPS score.

Analogous analysis in the other calculus courses produces similar results.

4 Conclusions and discussion

The simple but comprehensive analysis performed in this paper shows that a diagnostic test can be used as a predictor of failure in first-term calculus courses. The diagnostic test prediction can be further sharpened with an attitudinal survey or, potentially, some other non-cognitive tests. The diagnostic test itself would potentially be more reliable if it were administered in a proctored, timed environment in the first week of class [6].
However, the above analysis shows that even if instructors are not willing to sacrifice class time and dedicate marking effort, the test taken in the online homework environment still produces good results.

As a tool, the diagnostic test is uncomplicated to compose, administer and analyze, yet it provides invaluable data to the students and the instructor at the very beginning of the term. The set-up and analysis described above can be adapted and implemented at any post-secondary institution to identify students at risk of failing. Said institution should then establish resources to support students at risk of failing their course.

5 Ongoing research and development

Addressing predicted failures in first-term courses is the topic of the author's ongoing research. As pointed out in Section 2.1, the diagnostic test is scored based on 6 sections and is supplemented by review materials organized into the same 6 topical sections. Figure 6 shows the breakdown of averages by section for one of the courses, which is representative (in relative distribution) of all the other courses.

Figure 6: Averages per section.

The correlations between each of the 6 sections of the diagnostic test and the final grade were similar to the correlation between the combined diagnostic test score and the final grade. For those students who are weak (strong) in all the topics, the combined score was low (high), therefore resulting in a high (low) failure rate. However, a student with low scores in some sections and high scores in others might truly benefit from a targeted review and increase his or her grade. Having detailed data about their precalculus background, students can concentrate their studying efforts and focus on their weakest topics. The author is currently developing a new comprehensive set of review materials to be used in conjunction with the diagnostic test to help students address their precalculus weaknesses.
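The targeted-review idea above can be sketched as a simple ranking: given a student's per-section scores on the 0-100 scale, list the sections weakest first so review effort goes where it matters. The scores and the 60% cutoff below are assumptions for illustration; the section names come from the paper.

```python
# Sketch of targeted review: rank a student's diagnostic sections from
# weakest to strongest and flag those below a cutoff for review.
# The cutoff value (60) and the example scores are assumptions.
def review_priority(section_scores, cutoff=60):
    """Return sections scoring below `cutoff`, weakest first."""
    weak = [(score, name) for name, score in section_scores.items() if score < cutoff]
    return [name for score, name in sorted(weak)]

scores = {"numbers": 85, "equations": 55, "inequalities": 40,
          "functions": 72, "graphs": 58, "trigonometry": 30}
print(review_priority(scores))
# ['trigonometry', 'inequalities', 'equations', 'graphs']
```

A student with this profile would be pointed first at the trigonometry and inequalities review packages rather than at a generic precalculus review.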
The efficacy and student usage of the review materials will then be investigated using data analysis and student interviews.

The author is also currently adapting the diagnostic test to be administered at another institution, where multi-year data analysis will be possible. The plan is to run the diagnostic test alongside the review materials and track the changes in student success year to year.

Another direction for future work is to refine the diagnostic test itself. In November 2015, we surveyed the students to assess how they approached the diagnostic and whether they thought it was helpful. Just over 50% of all first-term calculus students filled in the survey (2384 students). The student survey revealed that the majority of students agree that strong precalculus skills are necessary for calculus success, but a smaller proportion of students agreed that the diagnostic test helped them identify gaps in their precalculus knowledge. Student interviews and focus groups could be held to figure out what the students thought the test was lacking and how it could be improved. Moreover, closer analysis of the separate questions themselves could be pursued further to modify the diagnostic test.

6 Acknowledgements

I would like to thank all the members of the Carl Wieman Science Education Initiative at the University of British Columbia for their support and always helpful advice. My sincere gratitude goes out to Mark Mac Lean, Costanza Piccolo and Eric Cytrynbaum for the numerous useful conversations we had about this project.

References

[1] A. Ahlgren Reddy and M. Harper, (2013). Mathematics placement at the University of Illinois. Problems, Resources, and Issues in Mathematics Undergraduate Studies, 23(8), 683-702. doi:10.1080/10511970.2013.801378.

[2] G. D. Allen, S. Nite, M. S. Pilant and J. Whitfields, (2013).
Using a Math Placement Exam to Develop a Personalized Precalculus Program. Electronic Proceedings of the Twenty-fifth Annual International Conference on Technology in Collegiate Mathematics. Retrieved from http://archives.math.utk.edu/ICTCM/VOL25/S037/paper.pdf.

[3] W. E. Barnes and J. W. Asher, (1962). Predicting students' success in first-year algebra. The Mathematics Teacher, 55(8), 651-654.

[4] R. Benford and J. Gess-Newsome, (2006). Factors Affecting Student Academic Success in Gateway Courses at Northern Arizona University. Retrieved from http://www2.nau.edu/~facdev-p/TR/Factors.pdf.

[5] W. Code, S. Merchant, W. Maciejewski, M. Thomas and J. Lo, (2016). The Mathematics Attitudes and Perceptions Survey: an instrument to assess expert-like views and dispositions among undergraduate mathematics students. International Journal of Mathematical Education in Science and Technology, 47(6), 917-937. doi:10.1080/0020739X.2015.1133854.

[6] S. Drake, (2010). Placement into First College Mathematics Course: A Comparison of the Results of the Michigan State University Proctored Mathematics Placement Examination and the Unproctored Mathematics Placement Examination. PhD thesis.

[7] S. Fitchett, K. King and J. Champion, (2011). Outcomes of mathematics placement: An analysis of advising and enrollment data. Problems, Resources, and Issues in Mathematics Undergraduate Studies, 21(7), 577-591. doi:10.1080/10511970903515323.

[8] N. M. Kingston and G. Anderson, (2013). Using State Assessments for Predicting Student Success in Dual-Enrollment College Classes. Educational Measurement: Issues and Practice, 32(3), 3-10.

[9] D. Leeming, (2001). A summary of the results of a survey of BC's post-secondary educational institutions on their use of Placement Tests for Calculus I. Retrieved from http://www.bccupms.ca/Documents/Project_Documents/PlacementTestSurvey.pdf.

[10] B. L. Madison, C. S. Linde, B. R. Decker, E. M. Rigsby, S. W. Dingman and C. E. Stegman, (2015).
A study of placement and grade prediction in first college mathematics courses. Problems, Resources, and Issues in Mathematics Undergraduate Studies, 25(2), 131-157. doi:10.1080/10511970.2014.921653.

[11] S. S. Mayo, (2012). Predicting academic success in first-year mathematics courses using ACT mathematics scores and high school grade point average. PhD thesis.

[12] K. W. Norman, A. G. Medhanie, M. R. Harwell, E. Anderson and T. R. Post, (2011). High School Mathematics Curricula, University Mathematics Placement Recommendations, and Student University Mathematics Performance. Problems, Resources, and Issues in Mathematics Undergraduate Studies, 21(5), 434-455. doi:10.1080/10511970903261902.

[13] L. J. Pyzdrowski, Y. Sun, R. Curtis, D. Miller, G. Winn and R. A. M. Hensel, (2013). Readiness and Attitudes as Indicators for Success in College Calculus. International Journal of Science and Mathematics Education, 11(3), 529-554. doi:10.1007/s10763-012-9352-1.

[14] W. Maciejewski, (2016). Flipping the calculus classroom: an evaluative study. Teaching Mathematics and its Applications, 35(4), 187-201. doi:10.1093/teamat/hrv019.