Associate Professor, Graduate School of Education
Member, Bio-X
Phone: 720-984-1702
Email: bdomingue@stanford.edu
Address: 520 Galvez Mall, #510 CERAS Building, Stanford, CA 94305-3084
Ben Domingue is an associate professor at the Graduate School of Education at Stanford University with research interests in psychometrics and quantitative methods. He is interested in how statistical tools can be used to better understand psychological and educational outcomes—e.g., what is this child’s reading ability?—that are challenging to measure and yet ubiquitous in education as well as the social and biomedical sciences more generally. The following are examples of the kinds of questions that he gets excited about studying (with the essential help of many collaborators):
--As response time is collected in a broader range of psychometric studies, how should it be used? Response time is a potentially valuable piece of information about the response process, but it is not always clear how it should be interpreted. One issue we have focused on is whether the speed-accuracy tradeoff—the observation that people make more error-prone responses when hurried—arises, absent priming, in observational data (“Speed-accuracy tradeoff? Not so fast: Marginal changes in speed have inconsistent relationships with accuracy in real-world settings”). A related question is whether response time can be used to study group differences on low-stakes examinations (“Differences in time usage as a competing hypothesis for observed group differences in accuracy with an application to observed gender differences in PISA data”).
--How can we better index prediction quality from models of binary outcomes? Our work leverages the fact that prediction quality can be translated into statements about weighted coins, and introduces an index whose interpretation is consistent across a range of outcomes. Alongside developing this idea in a generic setting (“InterModel Vigorish (IMV): A novel approach for quantifying predictive accuracy when outcomes are binary”), we have also shown how it can be used in both item response theory (“The InterModel Vigorish as a lens for understanding (and quantifying) the value of item response models for dichotomously coded items”) and structural equation models (“The InterModel Vigorish for Model Comparison in Confirmatory Factor Analysis with Binary Outcomes”).
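The weighted-coin idea can be sketched roughly as follows. This is an illustrative reading of the published definition, not the authors' code: each model's mean log-likelihood on the binary outcomes is matched to the weight w of a Bernoulli "coin" with the same expected log-likelihood, and the IMV is taken as the relative gain (w1 - w0)/w0 in moving from a baseline to an enhanced model. The helper names (`coin_weight`, `imv`) and the toy data are hypothetical.

```python
import math

def mean_loglik(y, p):
    """Mean Bernoulli log-likelihood of binary outcomes y under predicted probabilities p."""
    return sum(yi * math.log(pi) + (1 - yi) * math.log(1 - pi)
               for yi, pi in zip(y, p)) / len(y)

def coin_weight(ll):
    """Weight w in [0.5, 1) of the weighted coin whose expected log-likelihood,
    w*log(w) + (1-w)*log(1-w), equals ll. Found by bisection; assumes
    ll >= log(0.5), i.e. the model predicts no worse than a fair coin."""
    lo, hi = 0.5, 1.0 - 1e-12
    for _ in range(200):
        mid = (lo + hi) / 2
        val = mid * math.log(mid) + (1 - mid) * math.log(1 - mid)
        if val <= ll:          # val increases with w on [0.5, 1), so move up
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def imv(y, p_baseline, p_enhanced):
    """Relative improvement in coin weight from baseline to enhanced model."""
    w0 = coin_weight(mean_loglik(y, p_baseline))
    w1 = coin_weight(mean_loglik(y, p_enhanced))
    return (w1 - w0) / w0

# Toy illustration: a fair-coin baseline (w0 = 0.5) versus a model that
# assigns probability 0.8 to each observed outcome.
y = [1, 1, 1, 0, 0, 1, 0, 1]
p0 = [0.5] * len(y)
p1 = [0.8 if yi == 1 else 0.2 for yi in y]
print(round(imv(y, p0, p1), 3))
```

Anchoring the index to a coin weight is what gives it the same reading regardless of the outcome being predicted: an IMV of 0.1 always means a 10% relative improvement in the equivalent coin's weight.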
--Can we use psychometric approaches to further our understanding of what we are learning from large experiments in education? Conventional approaches to understanding treatment effects frequently focus on outcomes that are composites of individual items. We ask whether we can identify and observe item-level variation in treatment sensitivity (“Heterogeneity of item-treatment interactions masks complexity and generalizability in randomized controlled trials”). Such variation may offer useful information about the nature of the intervention and the skills it is affecting.
Information on the above titles, along with a full list of publications, can be found in his CV: https://www.dropbox.com/scl/fi/58b9y4j6pb0n1uiiho6sw/BDomingue_CV.pdf?rl...