Opinion

Next year’s exams could be statistically contentious if we don’t act now

11 Aug 2021, 14:17



Clear conversations about next year’s exams are needed right away, writes Sharon Witherspoon

The last two years have been a rollercoaster when it comes to exams.

While last year the government abandoned its poorly considered plan to use an algorithm to award individual grades, this year it took a different approach, delegating responsibility to teachers and exam boards.

But it then took until late February to decide how grades would be assessed.

Earlier guidance on assessment would have helped ensure more consistency in the evidence available for grading, and saved both teachers and students a lot of stress and uncertainty.

As it was, teachers were sometimes scrambling to identify pieces of evidence from each student’s work, and there was no way to ensure that students at different colleges and schools were being assessed on the same types of work, or in the same way.

Had the government set out in early autumn what would happen if exams had to be cancelled again, it could have encouraged more consistency in the work to be assessed, and teachers would have had a wider range of evidence to use.

Having a “plan B” would also have meant there were more “data points” for teachers to use in their assessments, and there might have been more consistency between colleges and between schools.

This would have created more of a level playing field in assessment.

‘Attainment gap has widened’

The importance of consistency is highlighted by the fact that this year the attainment gap has widened for students on free school meals, for those from areas of high deprivation, and for Black candidates.

Results rose more in private schools than in other types of education institutions.

Some of this will reflect real differences in learning loss, but in the absence of consistency in assessment processes and evidence between colleges and between schools, we cannot be sure.

This underlines the real need to start a proper conversation about next year’s exams now.

There are contentious statistical and wider issues to consider. It is important that there is broad agreement about how best to proceed.

I see two clear priorities for the future.

‘Full transparency from exam boards’

First, there should be a full and transparent account of what statistical evidence was used by exam boards to select which colleges and schools would be subject to further scrutiny.

Though Ofqual has provided some information on this, further details are promised for “later this year”.

There were also several news stories in the run-up to results day about parents – especially at private schools – pressurising teachers over grades.

So it is important to understand how results were queried at private and state schools and colleges, to see the extent to which the scrutiny process tackled this issue.

‘Open discussion needed for next year’

Second – and even more important – there needs to be open discussion about how exams and assessment will work next year.

The return to exams will help ensure consistency of grading between colleges and between schools, but it will not address the hours of education lost due to the pandemic, and how this varies between students and different education institutions.

The government has announced steps to ensure exams take some account of learning loss (for instance by giving a choice of topics to answer).

These steps are welcome. But exams are statistically ‘norm-referenced’, which means the cut-offs between grades are partly set by prior decisions about what the overall distribution of results should look like.
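To see what norm-referencing means in practice, here is a minimal sketch, not any exam board’s actual methodology: given a cohort’s raw marks and a hypothetical target grade profile (the shares and mark distribution below are invented for illustration), grade boundaries are placed at the mark percentiles that make roughly the target share of candidates fall in each band.

```python
import numpy as np

# Hypothetical cohort of raw marks (illustrative only).
rng = np.random.default_rng(0)
raw_scores = rng.normal(loc=60, scale=12, size=10_000)

# Hypothetical target grade profile: share of the cohort per grade,
# listed from the top grade down.
profile = {"A*": 0.08, "A": 0.17, "B": 0.25, "C": 0.25, "D": 0.15, "E": 0.10}

# Each boundary is the mark at the percentile where the cumulative
# share of candidates (counted from the top) reaches the target.
cum_share = 0.0
boundaries = {}
for grade, share in profile.items():
    cum_share += share
    boundaries[grade] = np.percentile(raw_scores, 100 * (1 - cum_share))

for grade, cut in boundaries.items():
    print(f"{grade}: minimum mark {cut:.1f}")
```

Under this kind of scheme, changing the target profile (for example, reverting towards 2019 shares) moves every boundary, and hence every candidate’s grade, even if no individual script changes.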

Is the plan to revert in one fell swoop to 2019 grade profiles, or to adjust them more slowly? Public debate about this should start now.

The cohort of 18-year-olds is growing. That, combined with aspirations for more students to attend university (particularly given the small number of available degree-level apprenticeships), means that decisions will need to be taken that intersect statistics and policy.

The public – including teachers – needs to be part of that conversation.



Your thoughts


One comment

  1. It should have all been avoided by issuing provisional grades and giving the opportunity for students to take exams to get a confirmed grade when it was safe to do so.

    That would have negated the need for algorithms, perhaps knocked the sharp edges off well-meaning unconscious bias, and yielded useful data on outliers between provisional and confirmed grades.

    But that horse has bolted and now we’re considering some sort of rebasing (pressing the reset button slowly?). Presumably we’ll now end up with good reasons to question the credibility of all results in a scramble to avoid questioning just some.

    Imagine if this situation could be recreated in laboratory conditions: the same person with the same knowledge taking the same assessment in 2019, 2020, 2021, 2022 and 2023, and in all likelihood getting a different score each time!

    Don’t even get me started on University grade inflation which has been happening for years – after all, you need a nice shiny product to flog for £9,000 a year…