
Volatility happens: Understanding variation in schools’ GCSE results

Changes in results are not necessarily a reliable indicator of school effectiveness.

A report out earlier this month from Cambridge Assessment argues that volatility in schools' GCSE exam results is normal, quantifiable and predictable. The research builds on an earlier study which ruled out exam grade boundaries and marking as major components of volatility.

The researchers say fluctuations are to be expected and can largely be explained by a change in the students between years or even just simple chance. The researchers stress that this is important because if most of the changes in results can be predicted without any information about schools, then changes in test scores are not necessarily a reliable indicator of school effectiveness.

The study used data about GCSE performance between 2011 and 2015 from the Department for Education's National Pupil Database. The researchers then used statistical models to estimate how much of the national variation in students' results was explained by differences between schools, differences between years (cohorts) at the same schools and differences between pupils within a cohort.
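A minimal sketch of this kind of variance decomposition, using simulated data rather than the National Pupil Database (all sizes and spreads below are hypothetical, and the report's actual statistical models may differ): scores are generated from a school effect, a cohort effect and pupil-level noise, and the share of variance at each level is then recovered from the data.

```python
# Illustrative simulation (not the report's model): decompose score variance
# into between-school, between-cohort and within-cohort components.
import random
import statistics as st

random.seed(1)

N_SCHOOLS, N_YEARS, N_PUPILS = 200, 5, 100        # hypothetical sizes
SD_SCHOOL, SD_COHORT, SD_PUPIL = 6.0, 3.0, 12.0   # assumed spreads

# (school, year) -> list of pupil scores
scores = {}
for s in range(N_SCHOOLS):
    school_eff = random.gauss(0, SD_SCHOOL)       # stable school effect
    for y in range(N_YEARS):
        cohort_eff = random.gauss(0, SD_COHORT)   # year-to-year cohort effect
        scores[(s, y)] = [
            50 + school_eff + cohort_eff + random.gauss(0, SD_PUPIL)
            for _ in range(N_PUPILS)
        ]

# Crude method-of-moments estimates of the three variance components.
pupil_var = st.mean(st.pvariance(v) for v in scores.values())  # within cohort

school_means, cohort_vars = [], []
for s in range(N_SCHOOLS):
    cohort_means = [st.mean(scores[(s, y)]) for y in range(N_YEARS)]
    school_means.append(st.mean(cohort_means))
    cohort_vars.append(st.pvariance(cohort_means))
cohort_var = st.mean(cohort_vars)          # between years, within a school
school_var = st.pvariance(school_means)    # between schools

total = pupil_var + cohort_var + school_var
for name, v in [("school", school_var), ("cohort", cohort_var),
                ("pupil", pupil_var)]:
    print(f"{name:>6}: {100 * v / total:.1f}% of total variance")
```

With these assumed spreads, most of the variance sits between pupils within a cohort, which mirrors the report's point that school-level differences account for only part of the national variation in results.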

Key Points:

  • Once student ability is taken into account, the likely performance of a cohort in a particular year at one school is no more or less predictable from that school's past performance than it is from other similar schools' past or current performance.
  • The observed volatility in cohort attainment is not only due to the variation in students within cohorts, but also to the inherent uncertainty in the outcome of any individual pupil on a specific exam - in this way, uncertainty, and hence volatility, in schools' results is a direct consequence of uncertainty for individual students. 
  • Students who have a roughly equal likelihood of ending up on either side of a grade boundary (e.g. C or above) are most likely to be affected by small changes in any test-related variable. For example, a small change in the test conditions (e.g., the temperature of the exam room) might cause a candidate who would have achieved a C to fall just below the C boundary and get a D instead.
  • The effect of increasing cohort size on the uncertainty in the group's outcomes differs with the ability levels of the students: the uncertainty in a C-level student's outcome does not disappear just because there are many such students. This means that even in a big school with very large and stable cohorts, depending on the ability level of typical students, there may still be substantial volatility in school performance simply because of uncertainty in students' outcomes.
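The last two points can be sketched with a small simulation (hypothetical cohort sizes and pass probabilities, not the report's data): if each pupil independently reaches the C boundary with some probability, the cohort's pass rate still fluctuates from year to year, and the fluctuation is largest when pupils are borderline.

```python
# Sketch with made-up probabilities: uncertainty in individual outcomes
# does not vanish when aggregated over a large cohort.
import random
import statistics as st

random.seed(2)

def pass_rate(p_pass, n_pupils):
    """One year's proportion at C or above, where each pupil independently
    has probability p_pass of reaching the boundary."""
    return sum(random.random() < p_pass for _ in range(n_pupils)) / n_pupils

def volatility(p_pass, n_pupils=200, n_years=1000):
    """Standard deviation of the cohort pass rate across simulated years."""
    return st.pstdev(pass_rate(p_pass, n_pupils) for _ in range(n_years))

# Borderline pupils (p ~ 0.5) drive the largest year-to-year swings in the
# cohort's results; pupils far from the boundary (p ~ 0.95) drive much less.
print(f"borderline cohort:  SD = {volatility(0.50):.4f}")
print(f"secure-pass cohort: SD = {volatility(0.95):.4f}")
```

Even with 200 pupils per year, the all-borderline cohort's pass rate swings noticeably between simulated years, which is the report's point that volatility in schools' results follows directly from uncertainty in individual students' outcomes.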

The report concludes by saying that "Because there is chance involved, and because schools and their students change over time, it is natural that schools would see different results from one year to the next. It would be extremely worrisome if schools' results were too stable, because it would mean one of two things: either that the tests are not sensitive enough to differences in ability to tell us anything meaningful, or worse, that the relationship between a student's performance and his or her grade on an assessment are not as tightly correlated as they should be."

The full report can be read here.

First published 02 January 2018