A few years ago, I read a short but incisive opinion piece in this publication by Jill Whittaker (executive chair of HIT Training) that left me asking: Why does our sector still accept achievement rate over pass rate as the norm?
Every year, the Department for Education (DfE) releases its apprenticeship achievement rate tables, and the sector reacts like it is results day for grown‑ups. The “winners” get applause, the “losers” get a little cosier with their quality team, and Ofsted sharpens its inspection pencil.
But here is the awkward truth: achievement rate is not the flawless measure of quality that we pretend it is.
For years, this single composite figure has been treated as our gold standard. It appears in performance tables, drives inspection narratives and shapes public perception. Governors scrutinise it, providers defend it, and Ofsted uses it as a primary judgment tool (whatever they say!).
It fails, however, to isolate the question that matters most to parents, learners, and employers alike: How good is the teaching, training and assessment the apprentice receives?
Flaw in the metric
Achievement rate blends two things – retention and pass rate – into one number. In effect, it multiplies them: if 80 of 100 apprentices stay to the end and 72 of those 80 pass, the achievement rate is 72 per cent even though the pass rate is 90 per cent. That means it is influenced not only by the quality of delivery, but also by whether apprentices remain on the programme until the end.
Retention is shaped by a host of factors outside a provider’s control, including redundancy, relocation, caring responsibilities, ill health, or shifts in business priorities.
A provider can deliver outstanding training and offer excellent pastoral support, then still see its achievement rate dragged down by circumstances entirely beyond its control and unrelated to quality.
Why pass rate is fairer
Pass rate – the proportion of apprentices who reach their end-point assessment and pass – is a far clearer and fairer indicator of quality. It focuses on what happens when apprentices are themselves engaged, supported and given the opportunity to demonstrate their competence.
A high pass rate points to:
- Effective curriculum planning
- Skilled teachers, trainers and assessors
- Robust preparation for end-point assessment
- Strong employer engagement
It reflects what providers can directly influence, which is the learning experience and the outcomes it produces.
Retention still matters, but it is not the headline
Of course, we want apprentices to stay the course. But retention is a multi-stakeholder challenge, shaped as much by employer commitment, job security and increasingly difficult economic conditions as by training quality. Folding it into the headline measure risks penalising providers who take on the most challenging – and often the most rewarding – work.
The unintended consequence is that providers may feel pressured to protect the metric rather than the mission. That can mean avoiding high-risk sectors, steering employers away from demanding standards, or recruiting only “safe-bet” apprentices.
None of this serves the wider purpose of apprenticeships, which is to open doors and develop skills across the local and national economies.
A better signal to the sector
Making pass rate the primary measure would send a clear message that what matters most is the quality of training, learning and assessment for those who complete. Achievement rate, retention, progression and destinations should obviously still be tracked, but we must stop pretending that a single composite number can capture the full story of apprenticeship quality.
Apprenticeships are about opportunity, transformation and the belief that skills training can change lives and strengthen industries. If we continue to judge providers primarily on achievement rate, we risk punishing those who serve the apprentices and employers who need us most.
Pass rate is not perfect, but it is the clearest, fairest and most direct measure of what apprenticeship providers do best, which is enabling apprentices to succeed when given the chance. We know what great delivery looks like; now let’s make sure the measure matches the mission.
It is time to put quality front and centre, and for Ofsted to lead that change.
I think it’s reasonable to say that achievement rates in apprenticeships are different from achievement rates in non-work-based programmes, because of the three-way relationship between apprentice, employer and provider. By attributing a performance percentage solely to providers, even though some non-achievement is wholly outside their control, the measure can be wielded a little unfairly. For that reason, I cringe when I see commentators or the media compare achievement rates for apprenticeships alongside rates for things like A levels – it’s like comparing apples to orangutans.
I agree that a pass rate is fairer than an achievement rate for measuring provider performance. However, making pass rate the primary measure risks missing that providers can be solely or partially responsible for apprentices withdrawing, and you wouldn’t want a measure that drives perverse behaviours for the sake of better optics. A pass rate of 97-99% each year might look great, but it would be a largely meaningless gauge of quality.
Personally, I’d take it a step further and suggest performance measures for apprenticeships should be flipped so the primary focus is on identifying and removing the barriers to achievement. That would stop measures like achievement rates being used clumsily as a finger-pointing accountability tool, and move us toward an approach that requires a collaborative effort to minimise non-achievement.
I’m somewhat concerned about using pass rate because of the current definition of withdrawal: if someone does half the EPA and then drops out, they count as a withdrawal rather than a completion and fail (mainly because a completion releases the last 20% of the funding, and you can’t have that without attempting all elements of the EPA).
So it’s kind of no wonder that pass rates hover around the 90% mark, also because a not insignificant number of learners will be retaking elements (sometimes multiple times) as well…
If most providers are in the 90% plus range then it doesn’t really tell us *that* much about the provision in comparison to others which, for all their flaws, Achievement Rates are really good at (which is kind of why Charles Clarke introduced the measure in the first place!).
I agree that it’s daft to compare apprenticeship rates to classroom rates, and I think we could do a lot more to tease out what’s going on in a 65% achievement rate. BUT I worry that any major fiddling with it (some kind of “employer’s fault” withdrawal reason that gets zapped from the stats) would be either ripe for exploitation or come with evidence requirements the size of the moon…