AI is game-changing when it’s human, ethical and equitable

AI will be used in harmful ways if leadership teams don’t immerse themselves in understanding its risks and opportunities

15 Aug 2025, 5:37

The decisions taken on AI by colleges and training providers in the next year will fundamentally alter the lives of learners and teachers over the next decade. There is already enough evidence of both the enormous benefits that well-designed AI systems can offer and the harm that poorly implemented AI can cause. If AI literacy is not treated as a priority within leadership teams, the risk of harmful AI increases.

You will notice that I haven’t explored the possibility of no AI here. Consumer-accessible generative AI applications are already widely used by learners and tutors, and will continue to be, above or under the radar. They may be used in ways that harm learning or are unethical.

There are many entirely reasonable objections to the current wave of big tech-led AI innovation: it is driven by profit rather than purpose, it replicates societal racial and gender biases, and its environmental consequences are highly negative and clearly on the visible horizon. So surely avoidance is the ethical choice?

The consequence of leadership teams not deeply immersing themselves in both the risks and the opportunities will be that AI is used in harmful ways: bypassing cognitive development, deskilling professionals, creating unfair advantages for those with AI skills, and contracting out critical thinking to technologies with undoubted flaws.

You are an AI organisation already – whether that’s acknowledged or not

If you haven’t developed a systematic approach, you may already be facing the harm that unethical, inequitable and dehumanising AI can cause, because these technologies are both readily available and widely accessed. You are an AI organisation already – whether that is acknowledged or not.

Ignoring AI may make us feel ethically better, but we can shape a better future by using it in a mindful way, cognisant of environmental harms; in a human way, crafted to improve the knowledge and skills of learners and tutors; and in an equitable way, aware of inequalities and poor representation. Some colleges and training providers are doing this now.

It is vital to look at the range of evidence when designing AI systems: to help learners develop their skills, to support tutors in designing personalised and engaging programmes of tuition whilst helping them manage their workload, and to give support staff richer data insights and better processes. A recent MIT study shows the cognitive deficit that results when students outsource their learning to AI. It also shows that a “brain first, AI later” approach, using AI to help review work, is an effective combination.

An experiment on the impact of using ChatGPT in lesson planning showed a 30 per cent saving in preparation time, with no impact on lesson quality as assessed by an expert panel. All this emphasises the importance of reviewing the available evidence systematically.

We are seeing some institutions adopt and even develop AI systems that are strongly focused on humanity, ethics and equity. Ofsted has reviewed some of the best practice in its paper “The biggest risk is doing nothing”. Activate Learning has implemented a suite of AI tools, early-stage evaluation of which has shown improved outcomes and wellbeing. Windsor Forest Colleges Group has developed a teacher-support AI, “Winnie”. Basingstoke College of Technology has taken a whole-college approach to upskilling staff and students in AI and giving them a licence to innovate responsibly.

Deliberately designing AI systems to stretch learners rather than bypass their learning is key. Developing datasets with fewer systemic biases and training AI models, including available open-source models, on them can help reduce bias.

And we need to widen access to the development of critical thinking and communication skills that enable individuals to adapt to future AI innovations.

Data-safe environments are essential to protect private data. Whilst the actions of one individual or college will not significantly dampen environmental impacts, we should be as mindful of the carbon impact of using AI as we are of driving our cars.

The Finnish government has committed to pursuing human-centred and ethical AI whilst supporting its integration into education. Estonia has encouraged a similar approach whilst leaving education institutions free to innovate; safe, ethical and responsible use of AI is in its national curriculum.

Our DfE has recently issued a policy paper on generative AI in education, and appears determined to see AI adoption spread.

We will be working through our partnerships with sector bodies to see wider adoption of responsible AI. The whole skills community needs to get this right – at a whole-system level. There is much that is encouraging in both policy and practice. There now needs to be collective action to make positive, human-centred tech happen.
