UBC Faculty Research and Publications

The Potential for Automated Text Evaluation to Improve the Technical Adequacy of Written Expression Curriculum-Based Measurement

Mercer, Sterett H.; Keller-Margulis, Milena A.; Faith, Erin L.; Reid, Erin K.; Ochs, Sarah

Abstract

Written expression curriculum-based measurement (WE-CBM) is used to screen and monitor the progress of students with, or at risk of, learning disabilities (LD) for academic supports; however, the technical adequacy, construct representation, and scoring feasibility of WE-CBM decline as grade level increases. The purpose of this study was to examine the structural and external validity of automated text evaluation with Coh-Metrix versus traditional WE-CBM scoring for narrative writing samples (7-min duration) collected in fall and winter from 144 second- through fifth-grade students. Seven algorithms were applied to train models that predict the holistic quality of the writing samples from Coh-Metrix and traditional WE-CBM scores, as evidence of structural validity; external validity was then evaluated via correlations with rated quality on other writing samples. Key findings were that (a) structural validity coefficients were higher for Coh-Metrix than for traditional WE-CBM but similar in the external validity analyses, (b) external validity coefficients were higher than those reported in prior WE-CBM studies that used holistic or analytic ratings as a criterion measure, and (c) there were few differences in performance across the predictive algorithms. Overall, the results highlight the potential of automated text evaluation for WE-CBM scoring. Implications for screening and progress monitoring are discussed.
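
To make the modeling workflow concrete, the sketch below shows one way such an analysis could be set up in Python with scikit-learn. It is a minimal illustration, not the authors' code: the feature matrix, quality ratings, and the particular set of regressors are hypothetical placeholders (the abstract does not name the seven algorithms used), and the cross-validated correlation stands in for the structural validity coefficient.

import numpy as np
from scipy.stats import pearsonr
from sklearn.model_selection import cross_val_predict
from sklearn.linear_model import LinearRegression, Ridge, Lasso, ElasticNet
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.svm import SVR

# Hypothetical data: 144 students, 20 automated text-evaluation features
# (e.g., Coh-Metrix indices) per writing sample, plus holistic quality ratings.
rng = np.random.default_rng(0)
X = rng.normal(size=(144, 20))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(size=144)  # placeholder ratings

# A set of candidate regressors, echoing the study's comparison of multiple
# predictive algorithms (the exact seven are not listed in the abstract).
models = {
    "ols": LinearRegression(),
    "ridge": Ridge(),
    "lasso": Lasso(),
    "elastic_net": ElasticNet(),
    "svr": SVR(),
    "random_forest": RandomForestRegressor(random_state=0),
    "gbm": GradientBoostingRegressor(random_state=0),
}

for name, model in models.items():
    # Cross-validated predictions avoid overfitting when estimating how well
    # the features reproduce rated holistic quality (structural validity).
    preds = cross_val_predict(model, X, y, cv=5)
    r, _ = pearsonr(preds, y)
    print(f"{name}: r = {r:.2f}")

External validity would then be assessed analogously, by correlating each model's predicted quality scores with rated quality on a separate set of writing samples.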

Rights

Attribution-NonCommercial-NoDerivatives 4.0 International