Ofsted will not assess FE providers’ use of AI “as a stand-alone part” of inspections – but the tech’s impact on outcomes for learners will be looked at, and could even “inform” enforcement action.
Guidance published by the watchdog this morning reveals how it will evaluate FE providers’ use of such tools during inspections.
The inspectorate has also released new research into so-called AI "early adopter" schools and FE colleges, which revealed that some are developing their own chatbots.
Ofsted chief Sir Martyn Oliver said: “As the use of AI in education increases, we need to better understand how schools and colleges are using this technology to take advantage of its potential, as well as manage the risks it poses for pupils, learners and staff.”
AI not standalone part of Ofsted checks
The new guidance states inspectors will not look at providers’ use of the technology “as a stand-alone part” of their assessments and won’t “directly evaluate” its use.
Part of the reason for this is that Ofsted does "not have the evidence we would need to define good use of AI for the purposes of inspection or regulation".
But Oliver stressed the watchdog “can consider the impact a provider’s use has on the outcomes and experiences of children and learners”.
This may “help inform any enforcement action”, the guidance added. Ofsted has been asked to clarify what action this relates to.
Any evaluation “of the use of AI will ask whether the provider has made sensible decisions”.
As part of this, inspectors could ask how leaders ensure any use “supports the best interests of children and learners”.
Assessing AI’s risks
When any AI is used by learners at an FE provider, inspectors will assess if this is being done in their “best interests”.
If learners are deemed to be using it “inappropriately, inspectors may evaluate how the provider has responded and addressed the impact of this”.
The guidance says that while the risks associated with the tech “will not be evaluated separately in our inspections, they will be addressed when they have implications for areas that are already considered”.
These can include data protection, safeguarding, and bias and discrimination.
“Any evaluation that Ofsted makes is about the provider’s decision-making, what they have considered, and the impacts on children and learners, not about the tool itself,” the guidance says.
“Inspectors only need to consider AI when it is relevant to something specific in their evaluations.”
‘Early adopter’ research
The guidance was informed by research, also released this morning, into how 21 “early adopter” schools, colleges and MATs are integrating AI into teaching, learning and admin.
Research suggested that FE colleges were “more likely” to permit learners to use AI unsupervised due to their age.
In total, four senior college staff spoke to researchers earlier this year – they had all started their "AI journey" in 2022 or 2023.
Most settings had an “AI champion” charged with getting staff to “embrace” the tech. They typically created a “buzz” and “played a vital role in demystifying” it to address “anxieties and build confidence”.
In larger settings champions would bring “together their data management teams, IT systems managers and curriculum leads” as AI “requires skills and knowledge across more than one department”.
One champion, an FE college’s director of digital transformation, said a demonstration session was a “turning point” that convinced staff of ChatGPT’s significance in terms of both teacher workload and learning.
He told Ofsted: "One of the first things the principal said was, 'This is an employability skill I need students to have.'"
A critical lesson for staff was effective use of prompts. He added: "If you put junk in, you'll get junk out."
AI’s use was usually divided between those wanting to cut workload and those who wanted it to “directly” support learning. But the researchers found this “often shifted with time”.
A few of the leaders had already developed and tested "their own AI chatbot, while others were in the process of doing so".
Some also highlighted how tools “allowed teachers to personalise and adapt resources, activities and teaching for different groups of pupils”, including young carers and refugees.
A ‘wild west’
However, most were “at the early stages of developing a longer-term strategy” on how to integrate AI into their curriculums as they had not yet considered how to combine it with pedagogy.
One of the reasons for this is there are “not many… tools tailored to individual school or college contexts”. Some bosses also had not thought “strategically about what success with AI looked like or how to evaluate its impact”.
One college deputy principal said it was difficult to envision a wider strategy as they were also learning “day by day”.
A principal said: “It’s the Wild West and all we are at the minute is the sheriff. What comes in and what goes out of the town is what we’re managing to deal with at the minute.”
Safe use
Leaders were said to be “clear about the risks of AI around bias, personal data, misinformation and safety”.
Some had a separate AI policy, while others added it to “relevant existing policies including those for safeguarding, data protection, staff conduct, and teaching and learning”.
But “the pace of change meant that many leaders were updating their AI policies as often as monthly”.
Earlier this month, the government published toolkits and guidance on how colleges should plan to use AI.