Judging an apprentice’s skill and knowledge through an end-point assessment is an approach fraught with issues, explains Simon Reddy.
There has been much talk about quality in apprenticeships and the urgent need to improve training and assessment in vocational qualifications.
To address these quality issues, end-tests are being introduced on the back of the Richard Review, which was scathing in its appraisal of the existing ‘tick box’ approaches.
Trailblazer employer groups have been charged with drawing up standards and assessment plans for the end-tests.
However, in my opinion, this move towards end-testing increases the likelihood of ‘teaching to the test’, which, combined with the Richard Review’s problematic approach to knowledge transfer, stands to undermine rather than improve quality in the training process.
One major weakness of the Richard Review is that it is not based on empirical evidence. Instead, the author used the driving test to substantiate the benefits of end-testing.
Doug Richard’s argument was that it does not matter whether a person has taught themselves to drive, whether they have completed an intensive course or whether they have been taking driving lessons for five years — all that matters is that they can drive.
“And it is this which makes passing a driving test a transferable qualification, trusted and recognised. The same is true for apprenticeships,” he said.
While the driving test may present a logical example of a transferable qualification, it is a mistake to think “the same is true for apprenticeships”.
This is particularly true when the bulk of apprenticeship assessments are carried out in simulated college environments.
In his report, Mr Richard used terms like “real world context” and “real world based”, seemingly trying to avoid any mention of assessments taking place in the reality of the workplace. Why? Because end-tests are most likely to be delivered in purpose-made assessment centres.
The empirical findings in my study of full-time courses and apprenticeships in plumbing revealed the complexities associated with knowledge transfer [see feweek.co.uk for link to study and findings].
The study highlighted the problems of low-fidelity assessment simulations, which neither replicated the reality of the workplace nor created the conditions in which students could be adequately supported in their learning and in transferring that knowledge and skill over to the work context.
The research also revealed the nature of the ‘unforeseen’ in the workplace, which included students having to deal with routine workplace problems that could not be replicated in college simulations.
For example, college simulations consisted of dry systems with new pipes and fittings, while in the workplace apprentices had to deal with pressurised plumbing connected to electrical and fuel systems that were often corroded.
Furthermore, the study found a total lack of synchrony in the plumbing curriculum between college and work, so the full-time students and some of the apprentices rarely had the opportunity to embody their knowledge in practical activities.
It is clear Mr Richard took a simplistic approach to knowledge transfer and did not consider the variations in performance requirements faced by apprentices at work in comparison with their experience of poorly simulated assessments in college.
Perhaps the most telling statement in his report was the assertion that “someone already doing the job for a significant period of time, should, by definition, already be at the standard required to do the job”.
This directly contradicts his position on end-tests. The apprentices in my study were taking up to four years to qualify, so it is likely they were already at the required standard without needing to demonstrate their competence through an end-test.
These end-tests therefore serve mainly to create a proxy for skill: a qualification for unapprenticed students on full-time college courses. This does not do justice to the skills, or the depth and breadth of knowledge, that apprentices learn at work. Such inequality in training and assessment leads to Trailblazer double standards and undermines quality in both the process and the outcomes of apprenticeships.