The decisions taken on AI by colleges and training providers in the next year will fundamentally alter the lives of learners and teachers over the next decade. There is already ample evidence of both the enormous benefits that well-designed AI systems can offer and the harm that poorly implemented AI can cause. If AI literacy is not treated as a priority within leadership teams, the risk of harmful AI increases.
You will notice that I haven’t explored the possibility of no AI here. Consumer-accessible generative AI applications are already widely used by learners and tutors, and will continue to be, whether above or under the radar. They may be used in ways that harm learning or are unethical.
There are many entirely reasonable objections to the current wave of big tech-led AI innovation: it is driven by profit rather than purpose, it replicates societal racial and gender biases, and its environmental consequences on the visible horizon are highly negative. So surely avoidance is the ethical choice?
If leadership teams do not deeply immerse themselves in both the risks and the opportunities, the consequence will be that AI is used in harmful ways: bypassing cognitive development, deskilling professionals, creating unfair advantages for those with AI skills, and contracting out critical thinking to technologies with undoubted flaws.
If you haven’t developed a systematic approach, you may already be facing the harm that unethical, inequitable and dehumanising AI can cause, because these technologies are both readily available and widely accessed. You are an AI organisation already – whether that is acknowledged or not.
Ignoring AI may make us feel ethically better, but we can shape a better future by using it mindfully, cognisant of environmental harms; humanely, crafted to improve the knowledge and skills of learners and tutors; and equitably, aware of inequalities and poor representation. Some colleges and training providers are doing this now.
It is vital to look at the range of evidence when designing AI systems: to help learners develop their skills, to support tutors in designing personalised and engaging programmes of tuition whilst managing their workload, and to give support staff richer data insights and better processes. A recent MIT study shows the cognitive deficit that arises when students outsource their learning to AI. It also shows that “brain first, AI later” – thinking for yourself first, then using AI to review the work – is a good combination.
An experiment on the impact of using ChatGPT in lesson planning showed that it cut preparation time by 30 per cent with no impact on lesson quality, as assessed by an expert panel. All this emphasises the importance of reviewing the available evidence systematically.
We are seeing some institutions adopt and even develop AI systems that are heavily human, ethics and equity focused. Ofsted has reviewed some of the best practice in its paper “The biggest risk is doing nothing”. Activate Learning has implemented a suite of AI tools, early-stage evaluation of which has shown improved outcomes and well-being. Windsor Forest Colleges Group has developed a teacher support AI, “Winnie”. Basingstoke College of Technology has taken a whole-college approach to upskilling staff and students in AI and giving them a licence to innovate responsibly.
Deliberately designing AI systems to stretch learners rather than bypass their learning is key. Developing datasets with fewer systemic biases and training AI on them – including available open-source models – can help reduce bias.
And we need to widen access to the development of critical thinking and communication skills that enable individuals to adapt to future AI innovations.
Data-safe environments are essential to protect private data. Whilst the actions of one individual or college will not significantly dampen environmental impacts, we should be as mindful of the carbon impact of using AI as we are when driving our cars.
The Finnish government has committed to pursuing human-centred and ethical AI whilst supporting its integration into education. Estonia has encouraged a similar approach whilst leaving education institutions free to innovate; safe, ethical and responsible use is part of its national curriculum.
Our DfE has recently issued a policy paper on generative AI in education and appears determined to see AI use spread.
We will be working through our partnerships with sector bodies to drive wider adoption of responsible AI. The whole skills community needs to get this right – at a whole-system level. There is much that is encouraging in both policy and practice. What is needed now is collective action to make positive, human-centred tech happen.