13 May 2026

We don’t need new apprenticeship metrics, we need to use the ones we have

Completion data already tells us which providers deliver. The problem is, employers can’t easily see or use it
Harry Hobbs, Guest Contributor

Head of business intelligence, Baltic Apprenticeships

The release of the latest apprenticeship performance data should underline a straightforward point for both the sector and employers: training outcomes still vary widely between providers.

Yet much of the conversation in the sector focuses on how we should define and measure apprenticeship success, especially as we expand into areas like AI. Those conversations should continue. But before we redesign how we measure success, we should probably start by making consistent use of the performance data we already publish.

Completion data is one of the clearest indicators of whether apprenticeship training has actually delivered in practice. When learners reach the end of a programme, employers are far more likely to see the skills, retention and long-term workforce value they invested in. In that sense, completion is not just an education outcome; it is a return-on-investment indicator for employers.

In the education and training sector, qualification achievement rates (QAR) remain an industry-standard measure of apprenticeship quality because they answer a basic but important question: how many learners actually complete the programme they started?

But when you step outside the echo chamber of our industry, awareness of QAR remains limited. Many employers don’t know where to find QAR data, how to compare it across providers, or how much weight to give it when choosing a training partner. As such, it remains far less visible and usable in the world where apprenticeship decisions are actually being made.

This becomes even more critical against the backdrop of a slower hiring market where businesses often become more selective. In response, apprenticeships are increasingly treated as long-term workforce investments; employers want reliable delivery and to know that the training they back will result in completed programmes, skills and real return.

Completion sits at the heart of that. Enrolments and course starts matter, but if a learner does not reach the end of the programme, the value of the investment looks very different.

This is where the sector needs to be more direct. We already have a baseline measure of quality, and we should be making far better use of it before we rush to dilute the conversation with any new metrics.

DfE already publishes detailed figures each year and has taken welcome steps to improve accessibility through its dashboard. But to date, there has been no simple route for an employer trying to answer an entirely practical and totally understandable question: ‘Which providers consistently get learners through to completion in the area I want to invest in?’

Instead, employers are often left navigating large data tables, interpreting dense terminology and trying to build their own comparisons from raw information. For an SME owner with limited resource trying to make a critical hiring decision, or for an already stretched HR team managing multiple recruitment and retention programmes, that is more friction than there should be around such a basic question, and it has consequences.

This is probably why the conversation in the sector keeps returning to new ways of measuring apprenticeship success. When the most established performance data is difficult for employers to access, interpret and compare, it’s easy to assume the problem is the metric itself rather than how visible and usable it is. But new measures will not help employers make better decisions if the existing ones remain hard to use.

Poor usability makes weak outcomes easier to miss. Employers can choose providers without a clear view of delivery performance. Learners can enter programmes with lower chances of completion. Levy-funded investment can flow without enough practical visibility of likely results.

In response to this problem, we’ve built additional public resources to make this data much easier to compare. The intention is not to create new league tables or to reduce apprenticeship quality to a single number, but to make the information that already exists on QAR more visible and more usable for the employers who are expected to rely on it.

QAR remains one of the few measures that shows, at scale and objectively, whether learners complete the programmes they begin. Before the sector gets too eager to embrace newer or broader measures, it should first make sure that the most established one is visible and usable to the employers who rely on it. Apprenticeship quality data is not the problem; leaving already stretched employers to do too much of the work is.

3 Comments

  1. Anon

    The main danger with QAR (along with other stats) is the ever present risk that figures are misrepresented or misunderstood.

    For example, in the ‘About us’ section of the Baltic website, they state that “97% of our learners successfully pass their apprenticeship” – yet in another part of the website, they say they have an “80% Qualification Achievement Rate”.

    They also state “we are the largest independent training provider in England”.

    Only one of those three stats is factually accurate, and although an 80% achievement rate is pretty good, it’s worth remembering that no two providers or sectors are the same. There are differences between levels, disadvantage, non-levy prevalence, age profile, industrial and economic factors, etc – all of which can combine to impact a QAR. It’s entirely possible for a provider with a lower QAR to actually be having a more beneficial impact than one with a higher QAR…

    1. Harry Hobbs

      I think that is a fair point to a degree. QAR can absolutely be misunderstood if it is presented without context, and I do not think providers operating in very different sectors, learner groups or economic conditions should be compared simplistically on a single headline figure. My point in the article was not that QAR is the only measure that matters, or that it removes the need for judgement about context, but that completion data remains one of the clearest and most established indicators of whether apprenticeship delivery has translated into a finished outcome for the learner and the employer. That matters especially where employers are making practical choices between providers delivering at meaningful scale in the same or similar standards, because in that context enrolments tell one story, but completions tell another, and they are often closer to the question an employer is actually trying to answer. So yes, a provider with a lower QAR may still be delivering strong impact in a harder context, but that does not make QAR less important. It makes it more important that the measure is visible, usable and understood properly.

  2. Anon

    QAR shows how many of the learners who the provider said were going to complete in that academic year successfully attempted all elements of EPA (retention) or passed (overall achievement). It doesn’t show the number who completed learning but then drifted off because the EPAO wasn’t able to keep them engaged over a six-month EPA window. Or the people who, having failed one part of EPA, the EPAO will not advance to the next part. Or the people who decided EPA just wasn’t for them. Or the people whose employer decided they didn’t need EPA! It places all the risk with the training provider.

    QAR was written for a qualification system that hasn’t existed since 2017, when we moved away from frameworks. When apprenticeships were assessed during the apprenticeship it made more sense, as the providers were responsible for the whole process. Now it’s a shared responsibility; however, all the quality analysis sits entirely with the provider.

    It’s so complicated that when we went through Ofsted last year and they asked to see our retention figure, we gave them our retention figure as calculated by the QAR AAF, which they didn’t seem to understand or have heard of. When we explained it to them, they thought we were talking nonsense and gave us a different figure to calculate. This methodology was similar to the recently removed Withdrawal AAF analysis, but in reverse.
