Associate Professor and Chancellor's Fellow, University of California, Davis
Michal Kurlaender and Scott Carrell, University of California, Davis
Jacob Jackson, Public Policy Institute of California
In this paper we explore institutional effects on student outcomes in the nation's largest public two-year higher education system, the California Community College system. We investigate whether there are significant differences in student outcomes across community college campuses after adjusting for observed student differences and potential unobserved determinants that drive selection. Additionally, we ask whether college rankings based on unadjusted mean differences across campuses provide meaningful information. To do so, we leverage a unique administrative dataset that links community college students to their K-12 records in order to control for key student inputs. Results show that there are meaningful differences in student outcomes across the 108 California Community Colleges in our sample after adjusting for differences in student inputs. For example, our lower-bound estimates show that moving from the 10th to the 90th percentile of campus quality is associated with a 3.32-unit (34.3 percent) increase in transfer units earned, a 0.07 (9.6 percent) increase in the probability of persisting, a 0.09 (40.7 percent) increase in the probability of transferring to a four-year college, and a 0.08 (27.1 percent) increase in the probability of completion. We also show that college rankings based on unadjusted mean differences can be quite misleading: after adjusting for differences across campuses, the average school rank changed by more than 30 positions. As such, our results suggest that policymakers wishing to rank schools by quality should adjust such rankings for differences in student-level inputs across campuses.