In the three years since generative AI entered the public consciousness, it has moved faster than any technological shift in our history. In 2025 the skills and education sector had to grapple with how to equip people for that shift, leading to a year of profound tension.
We were caught between an economy sprinting toward an AI-driven future and a regulatory system that moves at a more measured pace.
Skills England, in its AI skills for the UK workforce report, characterised this as “slow curriculum responsiveness to emerging AI tools and sector-specific needs”.
The Department for Education and Skills England have now made admirable progress. By launching a dedicated level 4 AI apprenticeship standard, committing to a faster approvals process and signalling the start of shorter ‘apprenticeship units’ from next month, the government is paving the path that employers and providers have been walking for months.
The key question now is how to measure the quality and impact of these skills programmes.
Existing measures that merely tell us whether a learner passes a programme do not adequately capture the value delivered. They do not tell us whether the government’s aims on AI skills have been met – nor do they demonstrate to employers the return on their investment.
AI’s rapid growth means we must keep pace with best practice in measuring successful outcomes, just as we’ve broadened the scope of what an apprenticeship can be.
Bridging the innovation-regulation gap
By the time the dedicated AI and automation apprenticeship standard fully enters the market, it will have been 3.5 years since the launch of ChatGPT.
UK businesses couldn’t wait that long. So providers innovated within the system we had. At Multiverse we integrated AI training into relevant existing standards, like business analyst.
Broadly, it worked: we’ve equipped thousands of people with the skills to harness this powerful technology. Those skills have had real-world impact: bringing down waiting lists in hospitals; offering charity support services to more people; and enabling small businesses to innovate at a fraction of the typical cost.
But businesses didn’t yet know exactly what AI skills they required, or for whom; and not all of our assumptions about what would work proved right.
Measuring what matters
Apprenticeships by nature require skills to be applied on the job. It’s not easy to capture the success of that only through an assessment at the end.
That’s why we measure success in other ways too: things like costs avoided, revenue generated, issues solved for local residents, and better patient outcomes. And at a learner level, we track promotions and pay rises; nearly half of our apprentices secure a promotion.
Yet the Qualification Achievement Rate (QAR) captures none of it. The primary measure of apprenticeship quality is still whether a learner crossed a finish line – not what they built along the way.
QAR is a lagging indicator. It measures against decisions made up to two years ago or more. In AI, two years may as well be 20.
If a learner gains the skills they need to secure a promotion and then moves into a new role before reaching an end-point assessment, the system records that as a failure of retention. But in reality it’s a triumph of social mobility and economic impact.
Better success metrics exist in other areas of education. The Higher Education Statistics Agency’s graduate outcomes survey, tracking salaries and career paths, is a great example: has your study enabled you to advance in your career and earn more?
We know training pays dividends. The Learning and Work Institute found that those who access training see a 15 per cent salary uplift across their lifetime compared to those who don’t. Why not measure the size of that prize?
The UK has the potential to lead the world in AI adoption, not least because of its world-class education systems. Our regulatory frameworks should incentivise innovation and impact.
Only then will we move from surviving the AI transition to truly leading it.