A copy-and-paste into AI opens up a new shadowy world of risk

For providers handling sensitive funding and learner data, invisible ‘shadow AI’ poses GDPR, security and reputational risks that can’t be ignored

18 Jul 2025, 5:55

Artificial intelligence is transforming how we work by offering opportunities to enhance productivity, improve service delivery and streamline processes. But with these opportunities comes a growing, often invisible risk: shadow AI.

Shadow AI refers to the use of artificial intelligence tools, applications or models within an organisation without formal approval, oversight or governance from IT, data protection or risk management teams.

Three-quarters of knowledge workers are using AI tools at work, according to the 2024 Work Trend Index annual report by Microsoft and LinkedIn.

That may look like good news for AI adoption and efficiency, but a more concerning statistic is that 78 per cent of those workers are doing so without their employer’s knowledge. For apprenticeship providers and their employer customers, this presents a significant risk.

Apprenticeship providers and colleges hold large volumes of sensitive learner, employer and funding data – from ILR and LRS records to Ofqual-regulated qualifications. Shadow AI use within these organisations introduces several risks:

  • Data privacy and GDPR breaches: Unregulated AI tools may process personal or sensitive data without consent or safeguards, breaching UK GDPR and the Data Protection Act 2018.
  • Information security and data leakage: Shadow AI can transmit sensitive organisational data to external servers in unknown locations, increasing the risk of data exposure, intellectual property theft and security breaches.
  • Non-compliant use of publicly funded data: The mishandling of sensitive apprenticeship and funding data through unapproved AI tools could violate strict Department for Education/Information Commissioner’s Office compliance rules.
  • Academic integrity: Unmonitored AI use in assessment processes can undermine academic standards, devalue qualifications and complicate appeals processes.
  • Bias and fairness: Without human oversight, AI-driven assessment and decision-making risks embedding unconscious bias, potentially breaching equality legislation.
  • Damage to public trust and sector reputation: As education providers hold a position of public trust, any scandal arising from shadow AI can severely damage both institutional and sector-wide reputations.

Providers must protect employer data too. If a tutor pastes a transcript from a progress review into ChatGPT to generate a summary, it may well contain information the employer partner would not want exposed: details from a leadership programme about specific internal challenges and how apprentices have applied their learning to them, for example, or a project management apprentice discussing a sensitive project that is not yet in the public domain.
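
To make the risk concrete, here is a minimal, purely illustrative sketch of the kind of redaction step a provider could apply before a transcript ever reaches an external AI service. The patterns and function below are hypothetical assumptions for illustration, not a description of any specific product:

```python
import re

# Hypothetical patterns for obvious identifiers. A real deployment would need
# far broader coverage (named entities, employer-specific project names) and,
# ideally, an approved tool rather than a public chatbot.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "UK_PHONE": re.compile(r"\b(?:\+44\s?|0)\d{4}\s?\d{6}\b"),
    "ULN": re.compile(r"\b\d{10}\b"),  # Unique Learner Numbers are ten digits
}

def redact(transcript: str) -> str:
    """Replace matched identifiers with placeholder tags before any AI call."""
    for label, pattern in PATTERNS.items():
        transcript = pattern.sub(f"[{label} REDACTED]", transcript)
    return transcript

print(redact("Contact Priya on 07700 900123 or priya@example.com re ULN 1234567890."))
```

Even then, redaction is only a mitigation; it is no substitute for an approved, governed tool.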

To address these risks, apprenticeship providers need AI tools that are built for their specific context – with data protection, compliance and academic standards at their core.

Aptem works with providers to put in place secure, auditable AI solutions designed specifically for apprenticeship delivery. This partnership approach ensures:

  • Secure AI solutions to prevent data and security breaches
  • Audit trails to demonstrate compliance and transparency (sketched below)
  • Human-in-the-loop solutions to prevent bias and uphold fairness
  • In-built compliance with regulatory requirements.
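
What an audit trail captures is easy to sketch. The record below is a hypothetical illustration of the kind of append-only entry a provider might log for every AI interaction; the field names are assumptions, not any product’s actual schema:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class AIAuditRecord:
    """One append-only log entry per AI interaction.
    Field names are illustrative assumptions, not a real product schema."""
    user_id: str            # who invoked the tool
    tool: str               # which approved AI tool was used
    purpose: str            # e.g. "progress review summary"
    data_category: str      # e.g. "anonymised transcript"
    human_reviewed: bool    # was the output checked before use?
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = AIAuditRecord(
    user_id="tutor-042",
    tool="approved-assistant",
    purpose="progress review summary",
    data_category="anonymised transcript",
    human_reviewed=True,
)
print(json.dumps(asdict(record), indent=2))
```

Logging who used which tool, on what data, and whether a human reviewed the output is what turns AI use from invisible to auditable.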

In this way, we can guarantee the compliant handling of publicly funded data, while AI tools designed for the apprenticeship sector maintain academic integrity and quality standards.

Providers need the confidence to use AI in the right way. The conversation should be one of opportunity, because there is significant potential to deliver efficiency gains and higher quality standards. But responsibility matters just as much.

At present, neither Ofsted nor Ofqual has taken an overly prescriptive approach to the use of AI, but that may change if audits reveal widespread misuse.

Both bodies are balancing the need to embrace innovation with the equally important need to protect learners and preserve academic standards. Their regulatory principles give providers a clear framework, and one that demonstrates why shadow AI presents such a risk.

Providers who understand the dangers of shadow AI can proactively implement IT policies that enable proportionate use of AI. Such policies encourage the adoption of secure, compliant solutions and reduce the incentive to reach for unapproved tools.
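
As a purely illustrative example of what such a policy can look like in code (the tool names and data categories below are invented), an IT team might enforce a simple allow-list of approved tools and the data each is cleared to handle:

```python
# Minimal sketch of an AI tool allow-list, assuming a provider maintains a
# register of approved tools and the data categories each may process.
APPROVED_TOOLS = {
    # tool name          -> data categories it is cleared to process
    "approved-assistant": {"anonymised", "internal"},
    "lesson-planner":     {"anonymised"},
}

def is_permitted(tool: str, data_category: str) -> bool:
    """Return True only if the tool is registered and cleared for this data."""
    return data_category in APPROVED_TOOLS.get(tool, set())

assert is_permitted("lesson-planner", "anonymised")
assert not is_permitted("lesson-planner", "learner-record")  # not cleared
assert not is_permitted("chat-tool-xyz", "anonymised")       # shadow AI: unregistered
```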

The right policies and solutions allow apprenticeship providers to protect their data, reputation and academic standards while making the most of AI’s potential.
