One of the most beguiling assumptions in teaching is that children succeed in school because of what schools and teachers do. We feel this to be true because we’re acutely aware of all the things we’ve done: all the hours of teaching, marking, planning, pastoral support and everything else. We know these things are what make the difference to young people’s lives.

But how do we know? It would obviously be unethical to test this assumption using a randomised controlled trial, with some children assigned to a control group in which they experience none of the things schools do, but is there any way to test the beliefs we would like to be true?

Typically, schools spend a good deal of time looking at data. Our assumption that we’re making a positive difference rests on the idea that if the numbers go up, we’re having more of an impact, and if they go down, we’re having less. This is very intuitive, but also highly problematic. The concept of ‘natural volatility’ is well established but is, all too often, unknown in schools. Cara Crawford and Tom Benton’s 2017 paper for Cambridge Assessment, Volatility happens: Understanding variation in schools’ GCSE results, should be required reading.

That results are subject to natural volatility essentially means that results can go up or down (sometimes by as much as 19%) for reasons that have absolutely nothing to do with actions taken by teachers or school leaders. Crawford and Benton explain that there is always uncertainty about three things: 1) how individual students will perform in examinations, 2) how particular cohorts are composed, and 3) how school populations change over time. Because of this, we can never be sure that results going up or down can be attributed to school or teacher effects. Sometimes – often – stuff just happens. Natural volatility is both well known and predictably unpredictable.

Unless a school consistently records 100%, there is never a reliable pattern in historical data. This is because the data is based on children’s results; children are complicated individuals, and the population of any given school is too small to support meaningful generalisations. Long-term trends in something as complex as educational outcomes are, to a great extent, random.
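To get a feel for how much movement chance alone can produce, here is a minimal simulation sketch (the cohort size and pass rate are illustrative assumptions, not figures from Crawford and Benton): every student in every year has exactly the same underlying chance of passing, so any year-to-year change in the headline figure is pure noise.

```python
import random

random.seed(42)

COHORT_SIZE = 150      # assumed size of a single exam cohort
TRUE_PASS_RATE = 0.6   # assumed fixed underlying chance of a 'good pass'

# Ten years in which nothing about the school changes: no new initiatives,
# no better teaching, no worse teaching. Identical students, identical odds.
for year in range(2015, 2025):
    passes = sum(random.random() < TRUE_PASS_RATE for _ in range(COHORT_SIZE))
    print(f"{year}: {100 * passes / COHORT_SIZE:.1f}% passed")
```

Run this and the ‘results’ swing by several percentage points from year to year with no change in what anyone did; shrink the cohort and the swings get wilder. A school leader who attributed those movements to interventions would be fooling themselves.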

So, what does your data tell you? Not as much as you think it does. Now ask yourself this: could students be achieving successful outcomes despite what you’re doing? Very possibly. Rather than looking to the false reassurance of positive data trends, we should think about other sources of information. For instance, everyone knows there is a gap in performance between students from different socio-economic backgrounds. According to the OECD, a student’s socio-economic profile is the factor most likely to affect outcomes: the wealthier a student’s background, the more likely they are to do well. Now look again at your exam data: how are the very poorest students doing? The likelihood is that most of your positive results derive from the students most likely to do well regardless of anything you did. Equally, the positive performance of the least advantaged is more likely to be due to teacher and school effects – left to their own devices, these students are much more likely to fail.
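One way to put that question to your own data is to disaggregate results rather than relying on the headline figure. A minimal sketch of the idea (the column names, and the use of free school meals eligibility as the marker of disadvantage, are my assumptions rather than a prescribed method):

```python
import pandas as pd

# Illustrative records: one row per student, with free school meals (FSM)
# eligibility standing in as a crude proxy for socio-economic disadvantage.
results = pd.DataFrame({
    "student_id":   [1, 2, 3, 4, 5, 6, 7, 8],
    "fsm_eligible": [True, True, True, False, False, False, False, False],
    "passed":       [False, True, False, True, True, True, True, False],
})

# The headline figure can look respectable...
print(f"Overall pass rate: {results['passed'].mean():.0%}")

# ...but breaking it down by disadvantage tells a different story.
print(results.groupby("fsm_eligible")["passed"].agg(["mean", "count"]))
```

If the FSM-eligible rows lag well behind the rest, the headline pass rate is being carried by the students who were most likely to succeed anyway.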

Now, how confident are you that you’re doing the best for these students?

I think we might improve what we do in schools if we make the following working assumptions:

  1. The factor most likely to influence children’s educational attainment is their socio-economic profile
  2. Schools systematically privilege the most privileged and disadvantage the least advantaged
  3. The success of children from more advantaged backgrounds is just as likely to be despite, not because of, the actions we’ve taken.

These assumptions may not be true – and they’re certainly not fated to be – but they are healthy ones to make.

If the performance of the children from the poorest backgrounds seems to be steadily improving, then maybe we can infer that what we’re doing really is making a genuine difference. If their performance is static or on a downward trend, perhaps we ought to assume that we’re not serving any of our students well.

A student’s socio-economic profile is just a proxy. How much your parents earn does not directly cause you to do better in examinations. To really think about how we might better serve students, we need to consider what the proxy represents. In my view, the most significant difference between students is the quality and quantity of what they know. Children from wealthier backgrounds are statistically more likely to have greater knowledge of the world, and it is this background knowledge that is strongly predictive of GCSE results.

My contention is that if what we’re doing is not aligned with the aim of increasing the quantity and quality of what students know, then it’s likely to disproportionately benefit wealthy students. Those students who come into school with a fairly broad knowledge of the world are far less disadvantaged by misguided attempts to teach ‘skills’ or by focussing on depth before breadth.

It’s far easier to teach kids with greater reserves of background knowledge than it is to teach those without, which makes it very easy to fool ourselves that what we’re doing is ‘working’ when some students benefit. A better question to ask is: who is it working for?