Colleges could use AI to help monitor attendance patterns, generate tender documents and come up with ideas for lessons, new government toolkits have said.
The guidance, published today and drawn up by the Chiltern Learning Trust and Chartered College of Teaching, also says colleges should plan for “wider use” of AI – including to analyse budgets and help plan CPD.
The government said the toolkits are part of a new “innovation drive”, which also includes investment to “accelerate development” of AI marking and feedback tools.
A new pilot has also been launched today to trial tools in “testbed” FE providers.
The government has also previously produced guidance on “safety expectations” for the use of generative AI – artificial intelligence that creates content – in education, along with policy papers and research on the subject.
Education secretary Bridget Phillipson said: “By harnessing AI’s power to cut workloads, we’re revolutionising classrooms and driving high standards everywhere – breaking down barriers to opportunity so every child can achieve and thrive.”
Here’s what you need to know on the new toolkits (which can be viewed in full here)…
1. Marking, feedback and ideas for lessons
For teaching and learning, the documents state generative AI may be able to support ideas for lesson content and structure, formative assessments, analysis of marking data and the creation of “text in a specific style, length or reading age”.
On assessments, the guidance says this could include quiz generation from specific content or offering feedback on errors. AI could also “support with data analysis of marking”.
It can also produce “images to support understanding of a concept or as an exemplar”, exam-style questions from set texts, and visual resources, like “slide decks, knowledge organisers and infographics”, a slide in one of the toolkits adds.
2. Draw up an AI ‘vision’
The guidance stressed “it’s essential” colleges “are clear with staff around what tools are safe to use and how they can use them”. Those included on the list should “have been assessed” and allow colleges “control over” them.
It recommended college leaders lead by example, using AI tools responsibly themselves, and set boundaries for AI use so users can safely play around with tools.
When exploring AI use, the guidance encouraged colleges to invest in staff training, collaborate with industry and create an AI culture within the college community.
Chris Loveday, vice principal at Barton Peveril Sixth Form College, said his college used INSET days to train staff in AI.
He said: “The public large language models were available and I think if we didn’t have clear guidelines to support staff, it would have been easy for them to think it would be okay to put a class set of data into the open source models without truly understanding that that was training the large language model that it was available in the public domain. So the first INSET was focused on AI safety.”
As part of this, the report also warned about two issues “inherent” in AI systems: hallucinations and bias.
The former are “inaccuracies in an otherwise factual output”. Meanwhile, bias can occur if “there was bias in the data that it was trained on, or the developer could have intentionally or unintentionally introduced bias or censorship into the model”.
It recommended always having a human in the loop to double-check what AI systems produce.
3. Reducing administrative burden
The toolkits also say technology could support cutting down time spent on admin, like email and letter writing, data analysis and long-term planning.
One example given for school leaders was producing a letter home for parents about an outbreak of head lice.
The toolkit also said policy writing, timetabling, trip planning and staff CPD were other areas in which it could be used.
A 2024 user research report by the DfE said teachers were most keen on using time-saving AI tools for marking, data entry and analysis of pupil progress or attainment.
Colleges can also reduce the administrative burden by using AI to analyse attendance patterns and support home communications, whilst “bearing in mind that all outputs need to be checked for accuracy”.
4. Plan for ‘wider use’, like budget planning and tenders
But leaders have also been told to plan for AI’s “wider use”.
The writers of the report said some “finance teams [are] using safe and approved” tools to analyse budgets and support planning. Business managers are also using it to generate “tender documents based on a survey of requirements”.
The toolkit added: “By involving all school or college staff in CPD on AI, you can help improve efficiency and effectiveness across operations – ultimately having a positive impact on pupil and student outcomes.”
The guidance suggested “integrating AI into management information systems”. This “can give insights that may not otherwise be possible, and these insights could support interventions around behaviour, attendance and progress”.
5. Adapt materials for pupils with SEND
According to the DfE, the technology “offers valuable tools to support learners with SEND by adapting materials to individual learning needs and providing personalised instruction and feedback”.
For example, it can “take a scene and describe it in detail to those who are visually impaired”.
But specialists and education, health and care plans (EHCPs) should be consulted to “help identify specific needs and consider carefully whether an AI tool is the most appropriate solution on a case-by-case basis”.
Meanwhile, many programmes are multilingual and “could be used with pupils, students and families who have English as an additional language”.
6. Critical thinking lessons, mending digital divide
As the technology becomes more prevalent, “integrating AI literacy and critical thinking into existing lessons and activities should be considered”. For example, AI ethics and digital citizenship could be incorporated into PSHE or computing curriculums.
Some schools and colleges have promoted “AI literacy within their curricula, including through the use of resources provided by the National Centre for Computing Education”.
This ensures young people understand how systems work, their limitations and potential biases. Approaches to homework may also have to be considered, focusing on “tasks that can’t be easily completed by AI”.
The guidance added many systems “will simply provide an answer rather than explain the process and so do not contribute to the learning process”.
Loveday added that Barton Peveril is piloting its own bespoke large language model, which has “enhanced safeguards” built in so that it will not answer questions on misogyny or violence.
He said that provided the pilot is successful, it will be rolled out to all 5,000 students free of charge, so that every student has equal access to the same model.
“If you give that same student access to a premium large language model, that’s no longer a digital divide, that’s a digital chasm, and we’re trying to make sure that we can help our students bridge that,” he added.
7. Transparency and human oversight ‘essential’
Colleges should also “consider factors such as inclusivity, accessibility, cost-effectiveness” and compliance with internal privacy and security policies.
A “key consideration” listed in the guidance is whether a tool’s “output has a clear, positive impact on staff workload and/or the learning environment”.
It is also “essential that no decision that could adversely impact a student’s outcomes is based purely [on] AI without human review and oversight”.
An example of this is “generating a student’s final mark or declining their admission based on an AI-generated decision”.
The guidance said: “Transparency and human oversight are essential to ensure AI systems assist, but do not replace, human decision-making.”
The toolkits also warned over mental health apps, which they said “must be regulated” by the Medicines and Healthcare products Regulatory Agency.
8. Beware AI risks: IP, safeguarding and privacy
There were also broader warnings about using AI.
The guidance notes that learners’ “work may be protected under intellectual property laws even if it does not contain personal data”.
To protect learners’ work, colleges should be certain AI marking tools do not “train on the work that we enter”.
Copyright breaches can also happen if the systems are “trained on unlicensed material and the outputs are then used in educational settings or published more widely”.
Colleges should ensure AI systems comply with UK GDPR rules before using them. If it “stores, learns from, or shares the data, staff could be breaching data protection law”.
Any AI use must also be in line with the keeping children and young people safe in education guidance.
Most free sites “will not be suitable for student use as they will not have the appropriate safeguards in place and the AI tool or model may learn on the prompts and information that is input”.
Child protection policies, including online safety and behaviour policies, should “be updated to reflect the rapidly changing risks from AI use” as well.
The guidance also said newsletters and school websites could “provide regular updates on AI and online safety guidelines”. Parental workshops “can extend the online safety net beyond school or college boundaries”.
9. Be ‘proactive’ to educate young people on deep-fakes
The “increasing accessibility of AI image generation tools” also presents new challenges to schools, the guidance added.
“Proactive measures”, like initiatives to educate students, staff and parents about this risk, have been identified as “essential to minimise [this] potential harm”.
Colleges have also been told to conduct regular staff training “on identifying and responding to online risks, including AI-generated sexual extortion”. These sessions should be recurring “to address emerging threats”.
“Government guidance for frontline staff on how to respond to incidents where nudes and semi-nudes have been shared also applies to incidents where sexualised deep-fakes (computer-generated images) have been created and shared,” the guidance continued.