While ministers are right to consider reforms of end-point assessments (EPAs) in apprenticeships, the independence of the assessments themselves is the cornerstone of employer confidence in the system and must be maintained.
EPAs validate an apprentice’s hard-earned knowledge, skills and behaviours, boosting their confidence and providing a clear pathway into future employment.
For employers, it is crucial that these assessments are fair and rigorous. Without an independent assessment system, the trust employers have in the competence of employees – and in the apprenticeship system as a whole – would be severely undermined.
We recognise that the system is not without its challenges. Many apprenticeship providers have reported difficulties with EPAs. These include the availability of assessors, the consistency of end-point assessment organisations (EPAOs), the complexity of the process and the inclusion of mandatory qualifications within certain apprenticeship standards.
However, EPAs have provided a rigorous, outcome-based method of assessment since their introduction in 2017. As the DfE begins to roll out its pilots, here are some of the suggestions making the rounds – and why each involves substantial risks.
Pilot 1: Provider-led EPAs
We understand that one of the options under consideration is for apprenticeship providers to carry out their own EPAs. The Association of Colleges argues that colleges could maintain the necessary rigour and independence, but this approach poses a number of challenges.
First, if providers are required to gain Ofqual recognition as EPAOs, this doesn’t necessarily streamline the process and could add more bureaucracy. There’s also potential for confusion if colleges and independent training providers (ITPs) administer EPAs alongside existing EPAOs.
Moreover, colleges deliver less than one-fifth of apprenticeships (17.4 per cent in 2022/23), and for smaller ITPs, setting up as an EPAO may not be feasible at all.
Then, there’s the significant risk of conflict of interest. With growing pressure to boost achievement rates, how can we ensure objectivity if those training apprentices are also responsible for assessing their competence?
And finally, we already have a shortage of assessors with the appropriate level of technical expertise, and provider pay lags behind the private sector. Providers are therefore unlikely to recruit and retain the staff they need to assess competence, particularly in emerging sectors such as technology and green energy.
Pilot 2: Employer-supported assessment
We understand that the pilots may also include the option to transfer assessment of behaviours from EPAOs to employers. This has some intuitive appeal; after all, employers see their apprentices in action daily and could be well-placed to judge workplace behaviours.
However, employers may lack the expertise required to measure these behaviours against apprenticeship standards, or would at least need significant support to carry out the role effectively.
More critically, placing this responsibility on employers risks introducing inconsistency across the system. It opens up the possibility of businesses applying varying standards, which may call into question the credibility of apprenticeship assessment.
Pilot 3: Simpler KSB assessments
One further option under consideration is to reduce the number of knowledge, skills and behaviours (KSBs) that must be directly assessed. But while streamlining assessments could make the process more efficient (and should be considered), a one-size-fits-all approach risks undermining quality.
This is especially true for high-risk sectors such as science, technology, and engineering, where competence is critical to safety and performance. Employers in the science and technology industries have in fact argued for more – not fewer – assessment methods.
In short, simplifying the assessment process must not come at the expense of safety and competence.
The EPA system has evolved significantly since its introduction and, if reforms allow, will continue to do so. Proportionality and flexibility are key – streamlining where possible without sacrificing quality.
Testing new proposals with employers and sector skills bodies is also essential to ensure that changes work across their sectors.
But above all, preserving the independence of assessment is crucial. Independence ensures objectivity and upholds high standards; without it, we risk losing the confidence that makes apprenticeships such a vital part of our skills landscape.