The recent FE Week headline about the leaked Tenon report once again brings to the fore accusations of manipulating data in order to boost success rates.
Originally highlighted in December 2009 by Geoff Russell’s infamous ‘cease and desist’ letter, written while he was Chief Executive of the LSC, the issue is also the reason we developed our ADaM software, which allows providers themselves to quickly and simply identify any data management issues and take remedial action where necessary. More than two years on, it seems the issue just won’t go away.
Just this week, the latest FE Week survey shows over 80% of respondents still think there’s a problem, with the majority seeing it as widespread amongst colleges – seemingly reinforcing the accusations.
So, are the dramatic headlines true? Are you being cheated or are you a cheat? Perhaps a good question to ask is why the number of providers using ADaM continues to grow (it’s currently well over 100).
If they thought, or even knew, they were manipulating their data, why would they continue to pay for a piece of software whose specific purpose is to bring any bad practice into the light? In our experience, yes, there is a problem, but the truth behind it is normally far more ordinary and less clear-cut than feared or portrayed.
That’s not to say data is never wrong (keeping it right is a next to impossible task with large datasets, a highly flexible curriculum and complicated rules), but the jump from bad data to cheating is a huge one, and somewhat simplistic.
What we’ve consistently found since 2010 is that, where they do arise, most data issues that could be interpreted as impacting on success rates are due to inadequate data management practices or processes. That’s to say, the significant number of in-year data changes which we do see are largely representative of corrections to data which shouldn’t have been wrong in the first place.
The report makes reference to a number of data issues which certainly accord with our own findings, including: enrolments being removed from later ILRs; enrolments becoming non-funded; learners being enrolled late; and planned end dates being changed. However, whilst it seems to imply underhand motives behind these in-year changes, our extensive experience is very different – generally, the data was wrong, so providers corrected it.
That said, it’s certainly not unusual to come across isolated pockets of inappropriate behaviour, but even these tend to reflect plain ignorance of the rules rather than a deliberate attempt to flout them. And because they are isolated, they’re typically not large enough to have a significant impact on a provider’s success rates.
We look at provider data every day and rarely (if ever) come across a provider whose data doesn’t have issues, but I can count on one hand the number of instances we have encountered that could be termed systematic abuse. Whilst that doesn’t mean the rest are acceptable, what I can say with confidence is that there is quite obviously a great desire across the sector to self-improve, and the year-on-year data we see bears this out.
So where does all this leave us regarding the latest headlines? Do we agree that data management practices still leave plenty of room for improvement? Yes.
Do we understand that the scale of in-year data changes could be interpreted as smelling fishy? Yes. Do a small number of providers occasionally do stupid things which tarnish the reputation of the many? Yes. Do we see any widespread evidence of systematic abuse? Very little.
In our experience, most people are just trying to ‘do the right thing’ and are using tools like ADaM to help them identify data issues and go on to improve procedures, staff training, workflow and efficiency. Ultimately it’s in their best interest, so why wouldn’t they?
Looking at the raw data only raises questions; it doesn’t provide answers. Only by working hand-in-hand with providers, examining their practices and the motives behind them, can one hope to jump to the right conclusions.
Mark Smith, Development Director, Drake Lane Associates