“Just give it a try,” was the advice I received when I started teaching. I wanted to innovate, as though I were the first teacher on earth, and those words gave me licence to do my own thing. On reflection now, I see they put ego before students.
One of the most positive things to happen in teaching in the decades since is the hugely increased reverence for research and the desire to be informed by ‘what works’. In further education, you can scarcely find a teacher who hasn’t been involved in some kind of practitioner research.
Yet we still seem to be in the dark when it comes to what has been proven to work when replicated and scaled up.
A good example of this is 16-19 English and maths. They’re FE’s two biggest qualifications by far. They are vital for progression in education or employment. And they have been a focus of colleges’ Herculean efforts for almost a decade now. Yet where things are piloted or trialled, we don’t seem to have much luck.
Using text messages to nudge learners? No impact.
Contextualisation? Apparently difficult to assess “whether the intervention had an impact”.
The evaluation of ‘Assess for Success’ sounded more like my school report card: “some evidence of promise”. However, there are no plans to trial it further.
And a year on, we’re still waiting to hear about the impact of ‘5Rs’.
Part of the problem is applying school methodologies to colleges that can have over ten thousand students and turnovers in nine figures. The above studies hint at that, with comments noting the need for “more clarity about the expectations of participating staff” or observing that “less than half of teachers attended both of the first two training sessions”.
College English and maths teams are commonly bigger than the entire staff of a school, yet their teachers receive fewer contracted hours. And when things get busy, dropping a burden that comes without sufficient incentive is the obvious choice.
We must make informed choices to ‘adopt/iterate/avoid’
I know from managing the DfE’s Centres for Excellence in Maths (CfEM) project how tricky it is to retain participants. For instance, it was a sad irony to see a large ‘Engagement & Resilience’ project scuppered by a 79% student drop-out rate.
It’s also daft to try anything other than recruiting whole teams within colleges. Lone teachers, or even pairs, can rarely swim against the tide for the duration of a trial. Besides, working with individuals means tracking individual students’ outcomes rather than provider-level data, which is far more onerous.
There are some green shoots, though. In addition to the groundbreaking CfEM mastery trial that led to improved GCSE maths scores, there were enormously encouraging results from college-led projects on bar modelling and ratio tables.
On the oft-neglected English side of things, my own creative-writing resits intervention, funded by SHINE, retained almost 500 student participants across three colleges. It not only demonstrated improved confidence in the high-tariff GCSE writing tasks, but also clear mental health benefits.
It’s also important to acknowledge that finding out something doesn’t work is valuable. I’ve seen the zero-impact ‘Lesson Study’ approach rearing its head again recently, alongside long-debunked ‘discovery learning’.
It’s our duty as professionals to learn from what’s already been tested, so that we can make an informed choice to ‘adopt/iterate/avoid’.
The best challenge I heard when I inherited the CfEM project was: “How do we know it wouldn’t have had more impact to just give the students the £250?” (the £30 million cost divided by the c.120,000 students it reached).
My answer (which I thought of too late) is that the £30 million wasn’t just for those students participating between 2018 and 2023. It means we know – forever – that mastery teaching will improve maths exam scores.
Every teacher who takes that into the classroom, year on year, brings down that per-student cost while improving outcomes for learners.
So when an enthusiastic new teacher asks for your advice on trying something out, don’t send them out to ‘mess around and find out’. Instead, ask them: “What does the evidence say?”