Professor of Education, University of California, Berkeley
Constructed-response versus selected-response items: Seeking a new perspective

In this study, we investigated the equivalence, or otherwise, of constructed-response (CR) and selected-response (SR) item types where the assessment targets higher-order thinking, specifically the practice of argumentation in the context of science. We analyzed data from 303 middle school and high school students who were randomly assigned to assessment conditions using matched CR and SR items. Our findings indicate that (a) results from the SR items are highly consistent with those from the CR items, both qualitatively and quantitatively, but (b) the CR items were notably harder than the SR items, by the equivalent of about one grade level. We interpret this finding to mean that, on CR items, students are hampered by the requirement to express their higher-level thinking in written sentences; their facility with written expression thus becomes a confound when constructed-response items alone are used to assess student knowledge. We note that this finding should concern those who see machine-learning-based scoring of constructed responses as a panacea for the assessment enterprise. We conclude with some practical suggestions for resolving this issue.