With Destination Data fast becoming the new FE show in town, Lynne Sedgmore looks at the issues that may have held it back — and may continue to do so.
The Skills Funding Statement hints, not very subtly, once again that ‘payment by results’ is on the way.
That is, payment by ‘real’ results — or rather, what learners actually go on to do after they leave their programme.
It seems that this is likely to happen as soon as Destination Data, currently being collated by the Department for Education and the Department for Business, Innovation and Skills and published in ‘pilot’ format, is deemed ‘robust’.
Privately, many admit this may not be for quite some time, but the principle and the direction of travel are clear.
For the purposes of this discussion, we have to take for granted the rather peculiar belief among many politicians that the only way to get people to worry about something is to tie funding to it. Leaving that aside for a moment, let’s think about what we are trying to achieve for our learners.
We cannot argue, as we often do, that qualifications are something of a straitjacket and then suggest that they are the main measure by which we should be judged and funded.
We find ourselves in a strange coalition with many employer groups in pointing out that assessment is not the same thing as a qualification, and that, in turn, having a qualification does not necessarily indicate how well-prepared for work any learner actually is.
The All Party Parliamentary Group on Social Mobility recently published its Manifesto for Character and Resilience. It is one of a growing number of documents aimed at the Westminster village which argue the case for outcomes to be about more than just cognitive ability — and, dare I say it, practical skill.
What better demonstration of whether or not education has succeeded in its goal than to look at where the learner ends up?
Tied to funding or not, it is hard to argue that learner destination is not among the best, if not the best, indicators of our impact.
Some universities would certainly be forced to re-examine their curriculum if they were judged on the basis of graduate unemployment after six months. But here is where we hit an obstacle.
Assessing whether a destination is a positive one is not easy. There is a degree of subjectivity involved. Positive for whom? The learner? The local economy? Society as a whole?
The ‘hairdresser debate’ has been much rehearsed, but it is perhaps the clearest example of how difficult this question is. If a student studies hairdressing for three years and then goes on to get a job in retail management, how do we link the impact of the education to the ultimate outcome?
Many in education will argue that it is the development of broad skills during the hairdressing course which has enabled that student to get the job, but how can we prove it?
And, in any case, do all jobs count as ‘successful’ outcomes? Do they have to pay well? Is it not enough for someone who was not in education, employment or training, for example, and comes from generations of unemployment, simply to have a job?
Such social factors will be impossible to build into a measurable scale, as will whether the student themselves is happy with the outcome, or feels better prepared to take on the world.
And recent history is littered with attempts to measure the unmeasurable in order that the bureaucracy of a funding machine can be made to work efficiently.
So, student destinations are absolutely the right way to determine the impact of FE, but they are troublesome to analyse.
Most colleges, though, already carry out detailed assessments of their own student destinations. They take account of many of the subtleties, in a way that a national database cannot.
Perhaps, in a truly professional world, these assessments should be the key to unlocking funding. Surely we can work with policy makers to find a way to hold colleges accountable for those assessments.
Lynne Sedgmore, executive director of the 157 Group