The government must address ‘inexplicable variations’ in sixth form colleges’ A-level results

13 Aug 2020, 7:00

Sixth form colleges are reporting that their A-level results reflect neither the grades they submitted nor their three-year trends. Here, Sixth Form Colleges Association chief executive Bill Watkin explains why government action is needed.


There is a huge discrepancy between the teacher-led centre-assessment grades (CAGs) and the exam board algorithm-led calculated grades. Just 4 per cent of the members who responded to an SFCA survey yesterday felt that their CAGs were in line with their calculated grades.

The survey also revealed a huge disparity between this year’s results and colleges’ three-year trends. Not because the teaching was worse, not because the students were worse, not because the exams were more difficult. But because the lockdown algorithm has got it wrong.

This is not a sector lacking in experience or expertise. It is a sector that has consistently been among the highest performing in the country. This is not a problem of over-grading or wholesale blips. This is a clear indication that the statisticians have got it wrong this year. As one college leader quoted in the survey said, “We deliver over 30 different A-level subjects, and we are below the three year average by some distance in every single one”.

The model has not worked

Back in late March, when it was first clear that the summer exams in 2020 could not proceed as scheduled, Ofqual and the Department for Education immediately set out their position: exams are the most reliable form of assessment of a young person’s ability and potential, but in their absence, the system would rely on teachers’ professional judgement and every effort would be made to reduce bureaucracy.

Centres were asked to submit, using all the available evidence (including mock exams), CAGs based on their professional expertise to the exam boards. There would also be a statistical standardisation exercise, to be carried out by exam boards, to ensure that this year’s results were in line with the last three years. This would help preserve the integrity of the exam system and ensure that this year’s students were neither advantaged nor disadvantaged. The model was designed to ensure that there was comparability in outcomes, and no grade inflation. This year-on-year consistency became the holy grail at Ofqual and DfE.

But this was always going to be difficult, with no external assessment in a year when schools and colleges had only just switched to brand-new exams – new content (more rigorous and extensive), new structure (linear), new assessment (terminal exams, not modules).

At the same time, teachers were also asked to rank order their students – a much more difficult exercise and one which pitted students against their classmates. But it became clear that the rank order was going to be critical, especially for those clustered around grade boundaries, who would be most at risk of dropping a grade if boundaries were moved by exam boards.

Last week saw two important changes:

1. Appeals were to be allowed where the centre’s student population had changed, where there had been a change of leadership, or where evidence showed that a recent blip was not reflective of a trend.

2. The exam boards would now discount CAGs in classes of 15 or more students; for small classes they would use only CAGs, and for class sizes in between they would take CAGs into consideration.

The latest decision – to allow mock exams to be used as part of the appeals process – is less worrying than first thought. Indeed, mock exams have already been used this summer as a vital piece of evidence in arriving at the CAGs, though the CAG is a much more sophisticated measure than the mock exam alone.

Mock exams are sometimes made deliberately difficult; they may address only a proportion of the syllabus; they may be taken six months before the actual exams; and they are not subject to moderation. And – as our survey highlights – many centres did not hold mock exams this year, because they were scheduled to take place after lockdown started.

So in a large class that did not take mocks, the rank order and the boards’ standardisation will still determine the grades. In a small class whose students did sit mocks, the CAGs – and, in the event of an appeal, the mocks – will be the determining factors.

The national picture will show no significant overall change in grades awarded this year, but the national picture is like an average; it masks huge variations. Sixth form colleges, in which one in four of all A-levels is taken, have experienced inexplicable variations. The solution is to shift the focus away from year-on-year comparability and use CAGs as the truest measure, even if it means accepting some grade inflation this year.

It is imperative that we revert to CAGs as the sole determinant for this year’s cohort. Not just those whose grades are lower than their CAGs. But every student. In this way, there will be some winners – those whose teachers were generous in the CAGs – but there will be no losers. All will get the grades their teachers said they would get if they had taken the exams this summer.
