Artificial intelligence is transforming how we work by offering opportunities to enhance productivity, improve service delivery and streamline processes. But with these opportunities comes a growing, often invisible risk: shadow AI.
Shadow AI refers to the use of artificial intelligence tools, applications or models within an organisation without formal approval, oversight or governance from IT, data protection or risk management teams.
Three-quarters of knowledge workers are using AI tools at work, according to the 2024 Work Trend Index annual report by Microsoft and LinkedIn.
That may look like good news for AI adoption and efficiency, but the more concerning statistic is that 78 per cent of those workers are doing so without their employer’s knowledge. For apprenticeship providers and their employer customers, this presents a significant risk.
Apprenticeship providers and colleges hold large volumes of sensitive learner, employer and funding data – from individualised learner record (ILR) and Learner Records Service (LRS) data to Ofqual-regulated qualifications. Shadow AI use within these organisations introduces several risks:
- Data privacy and GDPR breaches: Unregulated AI tools may process personal or sensitive data without consent or safeguards, breaching UK GDPR and the Data Protection Act 2018.
- Information security and data leakage: Shadow AI can transmit sensitive organisational data to external servers in unknown locations, increasing the risk of data exposure, intellectual property theft and security breaches.
- Non-compliant use of publicly funded data: Mishandling sensitive apprenticeship and funding data through unapproved AI tools could breach strict Department for Education and Information Commissioner’s Office compliance rules.
- Academic integrity: Unmonitored AI use in assessment processes can undermine academic standards, devalue qualifications and complicate appeals processes.
- Bias and fairness: Without human oversight, AI-driven assessment and decision-making risk embedding bias, potentially breaching equality legislation.
- Damage to public trust and sector reputation: As education providers hold a position of public trust, any scandal arising from shadow AI can severely damage both institutional and sector-wide reputations.
Providers must protect employer data too. If a tutor pastes a transcript from a progress review into ChatGPT to generate a summary, it may well contain information their employer partner wouldn’t want exposed – for example, details from a leadership programme covering specific internal challenges and how apprentices have applied their learning to them, or a project management apprentice discussing a sensitive project that isn’t yet in the public domain.
To address these risks, apprenticeship providers need AI tools that are built for their specific context – with data protection, compliance and academic standards at their core.
Aptem works with providers to put in place secure, auditable AI solutions designed specifically for apprenticeship delivery. This partnership approach ensures:
- Secure AI solutions to prevent data and security breaches
- Audit trails to demonstrate compliance and transparency
- Human-in-the-loop solutions to prevent bias and uphold fairness
- In-built compliance with regulatory requirements.
In this way, we can guarantee the compliant handling of publicly funded data, while AI tools designed for the apprenticeship sector maintain academic integrity and quality standards.
Providers need the confidence to use AI in the right way. The conversation should be one of opportunity: there is significant potential to deliver efficiency gains and raise quality standards. But responsibility matters just as much.
At present, neither Ofsted nor Ofqual has taken an overly prescriptive approach to the use of AI, but that may change if audits reveal widespread misuse.
Both bodies are balancing the need to embrace innovation with the equally important need to protect learners and preserve academic standards. Their regulatory principles offer providers a clear framework – one that demonstrates why shadow AI presents such a risk.
Providers that understand the dangers of shadow AI can proactively implement IT policies supporting the proportionate use of AI. Such policies encourage the adoption of secure, compliant solutions, mitigating the risk of shadow usage.
The right policies and solutions allow apprenticeship providers to protect their data, reputation and academic standards while making the most of AI’s potential.