Validation methods for aggregate-level test scale linking: A case study mapping school district test score distributions to a common scale

Authors: Sean F. Reardon, Andrew D. Ho, Demetra Kalogrides

Year of Publication: 2019

Linking score scales across different tests is considered speculative and fraught, even at the aggregate level (Feuer et al., 1999; Thissen, 2007). We introduce and illustrate validation methods for aggregate linkages, using the challenge of linking U.S. school district average test scores across states as a motivating example. We show that aggregate linkages can be validated both directly and indirectly under certain conditions, such as when scores for at least some target units (districts) are available on a common test (e.g., the National Assessment of Educational Progress). We introduce precision-adjusted random effects models to estimate linking error, for populations and for subpopulations, for averages and for progress over time. These models allow us to distinguish linking error from sampling variability and illustrate how linking error plays a larger role in aggregates with smaller sample sizes. Assuming that target districts generalize to the full population of districts, we show that standard errors for district means are generally less than 0.2 standard deviation units, leading to reliabilities above 0.7 for roughly 90% of districts. We also show how sources of imprecision and linking error contribute to both within- and between-state district comparisons. This approach is applicable whenever the essential counterfactual question can be answered directly for at least some of the units: what would the means, variances, and progress of the aggregate units be, had students taken the other test?
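The paper's precision-adjusted random effects models are not reproduced here, but the core idea, separating linking error from sampling variability using districts observed on both scales, can be sketched with a simple moment estimator. Below is a minimal Python sketch assuming simulated linked-versus-common-test discrepancies and a DerSimonian-Laird-style variance decomposition; all names, numbers, and the estimator choice are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(7)

# --- Simulated validation data (hypothetical; for illustration only) ---
# For J "target" districts we observe the discrepancy d_j between the
# linked district mean and the mean on the common test (e.g., NAEP),
# along with the known sampling variance v_j of that discrepancy.
J = 400
tau2_true = 0.05**2                  # assumed true linking-error variance
v = rng.uniform(0.02, 0.15, J) ** 2  # sampling variances; small districts noisier
d = rng.normal(0.0, np.sqrt(tau2_true), J) + rng.normal(0.0, np.sqrt(v))

# --- Precision-weighted moment estimate of linking-error variance ---
# The excess of the precision-weighted heterogeneity statistic Q over its
# expectation under "sampling noise only" is attributed to linking error.
w = 1.0 / v
d_bar = np.sum(w * d) / np.sum(w)          # precision-weighted mean discrepancy
Q = np.sum(w * (d - d_bar) ** 2)           # heterogeneity statistic
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2_hat = max(0.0, (Q - (J - 1)) / c)     # estimated linking-error variance

# --- Reliability of each linked district mean ---
# reliability = true between-district variance / (true variance + total error),
# where total error combines sampling variance and linking-error variance.
var_true = 0.30**2                         # assumed variance of true district means
rel = var_true / (var_true + v + tau2_hat)

print(f"estimated linking-error SD: {np.sqrt(tau2_hat):.3f}")
print(f"share of districts with reliability > 0.7: {np.mean(rel > 0.7):.2f}")
```

The sketch illustrates why linking error matters more for small aggregates: the sampling variance v_j shrinks with district size, so for large districts the fixed linking-error component tau2 dominates the error term in the reliability denominator.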

APA Citation

Reardon, S. F., Ho, A. D., & Kalogrides, D. (2019). Validation methods for aggregate-level test scale linking: A case study mapping school district test score distributions to a common scale.
