What PISA Can't Teach Us

February 10, 2016

By Martin Carnoy, Emma Garcia, & Tatiana Khavenson

The results of the Program for International Student Assessment, or PISA, the test used to rank students' academic performance internationally, will be trumpeted later this year. The responses will undoubtedly echo those of previous years: U.S. officials will wring their hands and lament that American student achievement is stagnant, that it lags woefully behind our economic competitors', and that we therefore need to import features of schooling from higher-scoring countries.

This edu-masochism—a distinctly American way of focusing on our educational shortcomings—can be traced at least back to the late 1950s and the Soviet Union's launch of Sputnik. Perhaps the nation's early trailblazing successes in establishing mass schooling and developing a uniquely excellent system of higher education have left U.S. educators particularly vulnerable to charges that other nations are surpassing us.

These fears have unfortunately led education policymakers astray. They fixate on what other nations are achieving, and they fall prey to those urging us to copy the schooling practices of Finland, Singapore, or South Korea.

But why compare national student performance in the United States with average scores in other countries, when U.S. students attend schools in 51 separate education systems answerable to the states and the District of Columbia, not the federal government? The "U.S. education system" is a construct that does not exist operationally. What's more, a well-respected test, the National Assessment of Educational Progress (NAEP), provides a state-by-state picture of our schools that is far more relevant than either PISA or other major international tests, such as the Trends in International Mathematics and Science Study (TIMSS).