DfE’s skills boss moves to schools brief

The Department for Education’s most senior skills official is set to leave the role to take charge of schools and SEND reform on an interim basis, FE Week can reveal.

Julia Kinniburgh, who has been the DfE’s director general for skills since December 2022, will step in as interim director general for the schools group, replacing Juliet Chua, who has a new role as director general for the economic and domestic secretariat in the Cabinet Office.

Sinead O’Sullivan, the DfE’s current director for labour market skills and funding, will temporarily take on the director general for skills role.

It’s not clear when the top officials will start their new roles.

The shuffle follows a major machinery of government change for skills last year which saw responsibility for adult education and apprenticeships move to the Department for Work and Pensions (DWP). The lead official at DWP is Katherine Green, director general for labour market and skills.

The Department for Education remains responsible for 16-19 education and higher education, including a raft of reforms set out in the October 2025 post-16 education and skills white paper, which introduced new V Level qualifications, a 16-19 funding formula review and new level 1 “stepping stone” English and maths qualifications.

Kinniburgh’s appointment to the schools brief comes as ministers are poised to unveil another white paper covering schools and the SEND system.

Susan Acland-Hood, the DfE’s permanent secretary, said: “As interim director general, Julia Kinniburgh will bring a breadth of knowledge of the business of the department and strong leadership at a crucial time, ensuring continuity and momentum in the delivery of our reforms.”

O’Sullivan was director of delivery and accounting officer of the National College for Teaching and Leadership before it was abolished in 2018. Some of its functions were transferred to the Teaching Regulation Agency (TRA), which is set to have its remit expanded to include further education teachers later this year. 

She then became the DfE’s director for career learning, analysis and skills and then in 2021 was appointed director for labour market skills and funding, overseeing adult skills programmes like skills bootcamps and Multiply.

Young coders lack the experience to spot when AI’s getting it wrong

Software testing and quality engineering have evolved dramatically over two decades. But nothing compares to the pace of change generative AI is driving across business workflows and in the skills needed to deliver quality software.

The Government Digital Service’s recent blog post illustrates how AI is rapidly speeding up development and engineering lifecycles.

Meanwhile, 49 per cent of engineering and technology businesses report recruitment difficulties due to skills shortages, a problem estimated to cost the UK £1.5 billion annually.

This tension between increasingly efficient pipelines and a shrinking talent pool raises the question: are training and education systems aligning with what modern engineering actually demands?

In the rush to encourage young people to use AI tools, further and higher education providers and businesses should not overlook the foundational understanding young people need to spot when and why those tools are wrong.

Experienced engineers have the knowledge and contextual understanding to spot flawed AI outputs almost immediately. However, employers are finding that though their graduate candidates can swiftly prompt AI models, they struggle with debugging and differentiating good test coverage from poorly written output.

This is because the foundational exposure to coding and test design that once came naturally through education and early projects is being skipped in favour of vibe coding and an increasing dependence on large language models (LLMs).

Without learning how to validate outputs and understand the principles behind the code, graduates struggle to rely on judgment when automation falls short.

Strong foundations in coding, data literacy, testing principles and system design remain essential to understanding how data quality and manual oversight influence every step of a project.

For higher-level apprenticeships and technical programmes, this means rethinking curriculum design.

It’s not enough to bolt an AI module onto existing content or to teach case studies without practical exposure. Rather, programmes need to ensure apprentices still get meaningful hands-on experience with coding, debugging and test design principles, even as they learn generative AI tools. They need to learn not just how to write tests, but how to evaluate them, so they can spot when something is over-engineered or likely to raise issues down the pipeline, as the hypothetical sketch below illustrates.
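
To make that judgment concrete, here is a minimal, hypothetical sketch in Python. The apply_discount function and both tests are invented for illustration; the contrast is between the happy-path-only style of test that generative tools often produce and a test that probes boundaries and the error path, which is where defects tend to hide.

```python
# Hypothetical example: judging test quality, not just test presence.
import pytest

def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent. Invented function for illustration."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Weak: the happy-path-only test an LLM will often generate.
def test_discount_happy_path():
    assert apply_discount(100.0, 10) == 90.0

# Stronger: exercises the boundaries and the error path.
def test_discount_boundaries():
    assert apply_discount(100.0, 0) == 100.0    # no discount
    assert apply_discount(100.0, 100) == 0.0    # full discount
    with pytest.raises(ValueError):
        apply_discount(100.0, 101)              # out-of-range input rejected
```

Both tests pass, but only the second would catch a refactor that silently dropped the range check; that is the kind of evaluation graduates need to be able to make.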

Employers, too, need to create environments where early-career engineers are mentored properly, exposed to complex problems gradually, and encouraged to question outputs rather than accept them at face value.

Take the importance of embedding quality at every stage of the engineering pipeline, for instance. Businesses increasingly demand a holistic approach, so graduates don’t have the luxury of applying judgment in some areas while relying solely on AI for the rest.

Training young talent to build quality into every stage of delivery is essential, and this is where higher education providers need to play a vital role.

Through carefully designed apprenticeship schemes and on-the-job placements that incorporate low-risk projects, learners can experiment in fail-safe environments.

These structured experiences allow them to make mistakes, understand consequences, and refine their critical skills and practice without high-stakes pressure.

It’s true that the challenge cuts both ways. Apprentices and graduates struggle to find roles that match their skills because employers are looking for capabilities that traditional education has yet to prioritise effectively.

Similarly, employers struggle to fill positions because the talent pipeline hasn’t adapted to how rapidly the work itself has changed.

Caught in the middle, education providers must rapidly adapt their curricula for a job market that is more brutal than ever for young workers.

The industry is evidently in an experimental phase, figuring out where AI adds genuine value and where it introduces risk. But education and training cannot afford the same luxury of trial and error.

Intentional curriculum design and business partnerships that give the next generation the foundations of balancing cutting-edge tools with timeless principles are essential.

That means ensuring speed never comes at the expense of understanding, and that apprentices leave programmes equipped not just to use AI, but to work alongside it.

In the AI arms race, teachers can use detection tools to stay ahead

AI is transforming education faster than expected, and its impact is impossible to ignore. While 92 per cent of UK students report using generative AI tools, educators are struggling to adapt.

Confidence in spotting AI-written work has plummeted, with a survey by online training provider Coursera claiming just 26 per cent of teachers feel capable, down from 42 per cent in 2023.

And AI tools like ChatGPT are evolving fast: in November, a ChatGPT update enabled users to prevent its notorious overuse of the em dash (—), making AI-generated content even harder for people to detect.

AI versus human writing

AI-generated work typically appears polished and technically correct, but lacks the depth, intent and personal perspective that only human writing can bring.

Human writers draw from lived experiences, vocational knowledge and individual reasoning to shape meaning and purpose, elements AI cannot replicate.

Human writing is intentional. It starts with an idea, and each word is chosen to shape a story, build flow and add meaning. From sentence structure to grammar, human writing shows purpose.

AI writing, by contrast, operates on probability, not perspective. Large language models do not think or understand; they rely on predictions and statistical patterns.

They imitate human language, but lack genuine comprehension and creativity, the subjectivity and insight that define authentic writing.

These limitations show up in the text. AI-generated content often features repetitive phrases, uniform sentences, probabilistic word chains and overused clichés.

Over time, it reads more like a predictable pattern than something created with intent. While these traits can help identify AI writing, they’re not always obvious at first glance.

Advanced detection technology provides a more systematic way to surface these subtle patterns and give teachers confidence in their assessments.

Using AI detection tools

AI detection tools analyse the underlying structures of a written piece. Rather than relying on quirky punctuation, such as em dashes, for clues, they examine how sentences are formed, how often certain words appear, both independently and in phrases, and whether the text feels more mechanical than deliberate.

Detection tools typically break text down into sections and compare each sentence for the tell-tale signs, assigning probabilities rather than binary judgements to indicate whether it was likely written by a human or generated by AI.
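
As a toy illustration of that idea, the sketch below scores two of the surface signals described above, uniform sentence lengths and repeated word chains. It is not any vendor’s actual algorithm; commercial detectors rely on trained language models rather than hand-written heuristics like these.

```python
# Toy sketch of surface signals a detector might score; real products use
# trained language models, not hand-written heuristics like these.
import re
from collections import Counter

def uniformity_score(text: str) -> float:
    """Rough 0-1 score: higher means the text looks more mechanical."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    mean = sum(lengths) / len(lengths)
    variance = sum((n - mean) ** 2 for n in lengths) / len(lengths)
    length_signal = 1 / (1 + variance)  # low variance = uniform sentences

    words = text.lower().split()
    trigrams = Counter(zip(words, words[1:], words[2:]))
    repeats = sum(c - 1 for c in trigrams.values() if c > 1)
    repeat_signal = min(1.0, repeats / max(1, len(words) // 10))
    # repeated three-word chains stand in for "probabilistic word chains"

    return 0.5 * length_signal + 0.5 * repeat_signal

# Varied, human-like prose scores lower than clockwork-regular sentences.
print(uniformity_score("The cat sat. Then, without warning, everything changed."))
print(uniformity_score("The cat sat. The dog sat. The bird sat. The fox sat."))
```

Real systems layer many such signals and calibrate them into the probabilities, rather than binary judgements, described above.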

Teachers need advanced tools to tackle the challenge of AI paraphrasing tools or “bypassers”, which allow students to rephrase AI-generated content to make it sound more human-like.

Some detection tools also allow the user to see the entire writing process and understand how a piece has evolved through its revision history.

This deeper layer of analysis provides granular insights, which are helpful when marking assignments for large groups or assessing extended written tasks.

It gives a clearer sense of when writing may have been shaped by AI, and provides context to support informed conversations with students about thoughtful and honest use of digital tools.

Responsible AI use

While detection software can help flag potential issues, it is only part of the solution.

What matters most is creating a culture where students feel confident talking about how they use AI and understand the importance of learning integrity.

When students understand that responsible use is about building skills for the workplace, like problem-solving and critical analysis, rather than a false sense of competency, they are more likely to make considered choices.

Academic integrity is built on partnership, not just policing. AI detection is a valuable starting point, and teachers can use it as a bridge to an honest conversation, helping students navigate AI as a tool for the future, rather than a shortcut for the present.

By combining clear guardrails with professional insight, teachers can turn the challenge of AI into an opportunity for growth – preparing students for the modern workplace through meaningful dialogue rather than just detection.

Students must be more Kendrick, less Drake with their learning

With the arrival of gen AI, our relationship with the written word requires an urgent re-evaluation.

We are understandably worried about academic integrity, but the solution isn’t found in stricter policing. Maybe the answer lies in the 1500s, with the man who defined the form that many are struggling to protect: Michel de Montaigne.

He didn’t view the essay as a polished, static product. He called his writings essais – from the French verb essayer, “to try.”

For him, writing was a process of radical self-examination and learning through trial and error. His famous question, Que sais-je? (“What do I know?”), embodied an intellectual humility that is missing from the high-stakes, “one-shot” assessments that many learners now dread.

In FE over the last decade, the essay has lost this utility. It has become a generic proxy for learning, a stand-in that transmutes a student’s unique thinking into a computable sameness that a chatbot mimics in seconds.

When we set tasks that machines can finish instantly, we aren’t testing skill; we are testing the ability to follow a template.

I say this as a sceptic. Despite my job title, I am as addicted to these technologies as anyone else. At 43, I remember life before algorithms were designed to fragment our focus. These platforms are built for engagement and data extraction, not necessarily for learning.

Without intentional scaffolding from educational experts, technology becomes a crutch. We must choose depth over a superficiality designed to keep us scrolling rather than growing.

I am inspired by artists like Rosalía and Kendrick Lamar, who embody Montaigne’s essai – the long, reflective attempt.

For her album Lux, Rosalía described using tools such as Google Translate to explore phrasing across languages, before spending a year testing meaning, rhythm, and tone with human collaborators.

The technology was the sparring partner, not the substitute. The skill and knowledge were applied through rehearsal and then in the studio.

In 2024, the music world was transfixed by a renewed “rap battle” between Kendrick Lamar and Drake. Beyond the feud, it surfaced a deeper cultural tension about effort and authenticity.

Lamar – the first hip-hop artist to win a Pulitzer Prize – represents the “long game”; extended periods away from the spotlight to prioritise craft, community investment and local legacy.

By contrast, critics argue that Drake’s recent output reflects a high-volume, streaming-era strategy – a form of “cultural strip-mining” that personifies a dopamine-driven, instant-gratification model. It is a montage of fashion trends and phoney quick-fix shortcuts designed for algorithmic visibility.

In our classrooms, the same tension exists: the “Drake” method of fast, AI-generated shortcuts versus the “Kendrick” method of slow, transformative and authentic craft.

At BCoT, we are choosing the latter. In our travel and tourism department, students are moving away from extractive, generic tasks toward creating digital legacies.

They leverage vlogs to practise customer service and use podcasts to develop the skills of cooperation, problem-solving and vocal cadence required in the industry.

By building multimedia portfolios on Google Sites, they align their evidence with professional expectations.

Evidence of their skills and knowledge is captured through witness statements and recorded simulations, creating tapestries of proof that include annotated whiteboards, sketches and iterative blogs.

If an assessment is simple enough for a machine to do in seconds, it isn’t the learner who needs to change – it’s the assessment.

Wrestling with this article has reminded me why the written word still matters. I spent hours agonising over these paragraphs to clarify my thinking for an upcoming training session – voice-typing into Google Docs, editing and using AI to align my thoughts with formal conventions and my notes.

It was a challenging, deeply human process of clarification that no prompt could replace.

The essay as ‘default’ assessment in FE may be dead, but writing remains vital when it is used to shape the thoughts we are willing to be accountable for. By reclaiming the essai – the brave, high-effort attempt – we aren’t just making our assessments AI-proof; we are making them more human.

Accrediting TQs will strengthen ‘long-term credibility’ of T Levels, Ofqual proposes

Awarding organisations should once again be required to have their T Level technical qualifications accredited by Ofqual, reversing reforms made by the previous government.

The exams regulator has today launched a consultation on proposals to reintroduce an accreditation requirement for technical qualifications (TQs) within T Levels, arguing the move would help strengthen the “long-term credibility” of the flagship technical courses.

If approved, it would mean TQs taught from September 2028 must meet Ofqual’s accreditation standards, giving the regulator the power to block qualifications that do not meet those standards.

TQs are a mandatory element of T Levels alongside the industry placement. They deliver the core theory and concepts for the subject as well as specialist skills.

Each TQ is delivered by a single awarding organisation contracted by the Department for Education through a competitive procurement.

Regulatory return

Ofqual initially had accreditation powers over TQs when T Levels were first introduced in 2020. A year later, then education secretary Gavin Williamson made the Institute for Apprenticeships and Technical Education (IfATE) responsible for approving and regulating the qualifications, reducing Ofqual’s role.

The Federation of Awarding Bodies warned at the time of a “muddled and cumbersome” two-tier qualifications regulation system. When the skills and post-16 education act was enacted in 2023, Ofqual lost its TQ accreditation role entirely and has since provided only feedback as part of the approval process.

IfATE was abolished last year and its powers transferred to the Department for Education. Education secretary Bridget Phillipson notified Ofqual in October that it could “make a determination” to subject TQs to accreditation requirements once again.

Proposals

Ofqual said its accreditation proposals provide an “important additional safeguard for quality and consistency” and ensure T Levels are delivered to a high standard.

“We are confident this change for T Levels will strengthen the long-term credibility of these qualifications,” it said, adding that the proposals align T Levels with its approach to A Levels.

If agreed, the change means awarding organisations will not be able to award a TQ unless it has met Ofqual’s accreditation standards.

Ofqual’s proposed accreditation criterion for TQs is: “An awarding organisation must demonstrate to Ofqual’s satisfaction that it is capable of complying, on an ongoing basis, with all of the general conditions of recognition that apply in respect of the qualification for which it is seeking accreditation, including all relevant qualification level conditions and subject level conditions.”

Additional “burdens” placed on awarding organisations as a result of having to go through the accreditation process will be “manageable and proportionate”, Ofqual said.

Awarding organisations, providers and representative organisations have until March 4, 2026, to respond to the consultation.

What some of our students overcome is without compare

When it came time to undertake the results review for one of the classes I taught last year, things looked a little bleak.

Such things don’t really hit me on results day when I’m too busy exulting in students’ success to pay close attention to the data. But the day for data soon comes around when the new term dawns.

By all value-added measures, the students in one class had not done as well as they should have; other comparable students elsewhere in the country apparently outperformed them. A case for the defence would have to be made in the boardroom.  

I sliced and diced figures, analysed entry interviews, reviewed initial advice and guidance and trawled pastoral logs. But my eye kept drifting back to my class list names. My argument would have to be data-driven, but my teaching practice is instinctively personalist.

I looked at M’s name. M’s father spent much of the last two years very ill indeed and he succumbed to his condition halfway through her last year on the course.

He was insistent she should not stop studying. But M has younger siblings so she spent much of the last few bereaved months caring for them, supporting her mother and carrying her own crippling sadness too.

Alongside this she had her own congenital health issues. So yes, hers is a grade that was lower than comparable students elsewhere. But how many students elsewhere are comparable? 

Then I saw C’s name. C came to our college as a school refuser with no qualifications. Over three years, I nurtured her through the process and gave her a firm foundation.

There were breakdowns, walkouts, givings-up, outbursts and surrenders. Even up to the exams themselves, we didn’t know what would happen.

I stood with C outside every one of the exams she sat for me, encouraging, cajoling, enthusing to get her into the hall. The invigilator even reached the point of meeting me at the door as a kind of handover so C could be escorted to her desk.

But C underperformed against comparable students elsewhere. Again, I’m dubious how many students there are who would even be comparable. Anywhere.  

Of all my students in this group – some of whom did exceedingly well comparably – M and C stand out for me because they are my success stories. Their attainment was not necessarily high, but their achievement was enormous.

But no grade is given for tenacity. For overcoming. For dealing with life, with its vast complexities. There are no papers in handling family responsibilities, grief, self-doubt, or fear.  

I received emails after the results from both M and C. They have progressed on to their desired destinations and are proud of their success. And I am too.  

I turned back to the numbers. When it came to my data-driven deep-dive defence of the grades, I had the idea of stripping out my lowest performers and seeing how the group overall then compared.

Value-added rocketed. But that would mean denying and discounting M and C’s results, explaining them away, excluding them.

The essence of their success was the sheer strength of character that gave them the ability to cling on by the tips of their fingers, and to fight for their own inclusion. I could not bring myself to exclude them now.  

So I changed my mind. I took the hit. The responsibility for the results was mine. But privately I know how incomparable M and C both were.

Now I have new students, measured again against their own cohorts to judge their success. I continue to keep an eye out for the incomparable ones.

Keeping some of those new students within the class, urging them over the finishing line and even shepherding them into the exam hall will still be the same, no doubt.

That much, maybe, can be compared. I just wish we could have a new data category for resilience. I know who I would place right at the top of that list.   

FE leaders must work together on AI to deliver what students need

Across and beyond FE, there is a mixture of excitement, cautious optimism and nervousness around artificial intelligence (AI).  

Colleges are increasingly aware of the significant opportunities AI presents, but also the challenges it brings for leaders, staff, students and parents alike.  

AI represents a fundamental shift in expectations – for learners, employers and for the education system itself. The question is not whether FE engages with AI, but how we do so in a way that is strategic, inclusive and, most importantly, true to our values.

There is cause for optimism. AI is already reshaping the skills employers need and the way students expect to learn and be supported. 

But there are barriers to providing the tools employers see emerging, and which their future employees will be required to use. In FE, funding, time and capacity all hinder early AI adoption.

As a sector, we have a collective responsibility to ensure FE plays its part in leading this change.  

That is why we have formed a community of leaders focused on the adoption and advancement of AI in FE.

This is a community built for the sector, by the sector. A space where leaders can come together to define priorities and formulate roadmaps to tackle them head on. 

Partnership working enables more shared resources, funding and opportunities.

The American computer scientist Fei-Fei Li said “AI is not a substitute for human intelligence, it is a tool to amplify human creativity and ingenuity”.

This should resonate deeply with FE. Our mission has always been about investing in people – developing skills, confidence and opportunity so learners can progress into further study, meaningful work and fulfilling careers. AI should enhance that mission, not distract from it. 

However, there is a risk that the sector approaches AI as a technology purchasing challenge.

If AI becomes about buying more tools, without clarity of purpose or alignment to strategy, we will fail to realise its potential.

Some colleges appear to be moving quickly, while others are struggling to do anything at all. That gap matters. It means that, as a system, we are not yet leading from the front. The government expects FE to be proactive on AI – not lagging behind.

Without coordinated leadership and planning, we risk fragmented progress, wasted investment and, most importantly, learners and communities being left behind.

One of the reasons progress can feel uneven is that many FE leaders feel vulnerable talking openly about AI. There can be a fear of saying the wrong thing, or of appearing behind the curve.

That is precisely why safe, trusted spaces for collaboration are essential. Our new community of leaders is designed to provide that space – enabling honest conversations, shared learning and collective problem-solving. 

Through this community, we aim to explore practical themes such as developing AI strategies for colleges, leading people and teams through technological change, improving efficiency through AI, governance and ethical leadership, and the potential of smart campuses.

Just as importantly, there is a significant opportunity for colleges to agree, collectively, how we share and jointly commission AI tools and solutions. 

We are already working with partners including the Association of Colleges, Amazon Web Services and others to create the conditions for meaningful strategic collaboration, and we look forward to sharing more.

HRUC’s wider partnerships, including with Amazon, underline our commitment to learning from global expertise while keeping FE values at the centre. 

In the coming months, we will also be bringing leaders together at a dedicated event to continue this conversation.

AI is moving fast. But with proper system-wide planning, collaboration and leadership, FE can move with confidence. 

Now is the time for leaders to step forward, work together and shape an AI-enabled future that delivers for all. 

£23m expansion of edtech and AI pilot

The government is investing £23 million in a four-year pilot to trial artificial intelligence and edtech tools in schools and colleges, the education secretary has announced.

Opening the BETT UK conference, Bridget Phillipson said the scheme will “put the latest tech and AI tools through their paces in the cut and thrust of classrooms across the country”.

It is an expansion of a previous nine-month pilot in which schools and colleges trialled “innovative” edtech tools. It is not clear how many schools took part.

More than 1,000 schools and colleges will be involved in the new project, which will begin in September, the Department for Education confirmed.

A DfE spokesperson said it “will recruit schools and colleges to put the latest edtech to the test in classrooms, analysing their impact on pupil outcomes, including those with SEND, and on teacher workload”.

Pilot will track impact on staff and students

Phillipson told the BETT conference: “We’re investing an additional £23 million to expand our edtech testbed pilot into a four-year programme.

“It recruits schools and colleges to put the latest tech and AI tools through their paces in the cut and thrust of classrooms across the country.

“We’ll track how these tools perform, the difference they make for teachers and, above all, the difference they can make for children.”

She said the pilot will gather “genuine evidence about what’s working, the cream of education tech and AI rising to the top so that we can spread that transformative potential far and wide”.

Phillipson added that she is “so excited about AI because it means that we can have the chance to make the education system work better for every single learner”.

The DfE said it has already received more than 280 expressions of interest from edtech companies wanting to be involved in the scheme.

IFS: 16-18 funding needs extra £150m by 2028 to maintain real-terms spending

Funding for 16 to 18 education will need to rise by a further £150 million by 2028-29 simply to maintain spending per student in real terms, according to the Institute for Fiscal Studies.

The warning comes in the IFS’s annual report on education spending in England, which highlights growing demographic pressure across further education and skills that is “absorbing” recent funding commitments.

Economists at the think tank urged further real-terms increases “beyond” those already announced to account for growth of around 70,000 (3 per cent) in the number of 16- to 18-year-olds by 2028.

The report, supported by the Nuffield Foundation, also highlighted concerns about “narrower” apprenticeship flexibility under the new growth and skills levy, uncertain take-up of modular Lifelong Learning Entitlement courses and deficit pressures on colleges.

‘Limited detail’ on funding boosts and allocations 

The report’s authors cast doubt on the commitments made in the Spending Review last June, which indicated that the overall FE and skills budget would rise by just over £300 million in real terms between 2025-26 and 2028-29.

Altogether the skills budget, made up of 16 to 19 education, adult skills and apprenticeships, accounts for £14 billion of public spending in 2025-26.

Chancellor Rachel Reeves pledged an additional £1.2 billion per year in day-to-day funding for FE by 2028-29, in cash terms.

But the Treasury did not publish a detailed breakdown of how the increased funding would be doled out across the different parts of the further education and skills system, which the IFS said leaves “scope for it to be distributed in different ways”.

The Department for Education’s post-16 white paper published in October confirmed a real-terms boost of £450 million to the 16 to 19 budget between this academic year and 2026-27, which the authors said will reverse some of the real-terms funding decline seen since the 2010s.

“This would lead to a 2.5 per cent real-terms increase in spending per student aged 16-19 over this period. This would return funding per student in colleges to around its 2012-13 level and funding in school sixth forms to levels last seen in the mid 2010s,” the report said.

It added that after the increase, per student funding in FE would remain 6 per cent below 2010-11 levels.

“Although the white paper set out a broad range of policies, there was limited detail on how different strands are intended to fit together or how key trade-offs within the skills system would be addressed,” economists added.

More cash needed to meet demographic surge

The post-16 white paper indicated a desire to at least hold spending per student constant in real terms. 

IFS economists said that between 2018 and 2025, the number of 16- to 18-year-olds in England grew by around 300,000, or 16 per cent. Population projections suggest a further increase of around 70,000 (3 per cent) by 2028, when the number of 16- to 18-year-olds is expected to peak.

The authors concluded that based on these population projections, maintaining spending per student at its 2026-27 level in real terms would require total funding to increase by a further £150 million (in today’s prices) by 2028-29.
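
The arithmetic behind that conclusion is simple to sketch: holding real per-student funding constant means the total budget must grow in line with student numbers. The snippet below is a back-of-envelope illustration only; the baseline figure is a hypothetical placeholder chosen so the simplest reading reproduces the IFS’s £150 million, not a number taken from the report.

```python
# Back-of-envelope sketch: constant real per-student funding implies the
# total budget grows with student numbers. The baseline is a hypothetical
# placeholder, not an IFS figure.
baseline_budget_m = 5_000   # £m, placeholder 2026-27 16-19 budget
student_growth = 0.03       # ~70,000 more 16- to 18-year-olds by 2028

extra_needed_m = baseline_budget_m * student_growth
print(f"Extra real-terms funding needed by 2028-29: £{extra_needed_m:.0f}m")
# -> £150m
```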

SEND white paper will be ‘pivotal’ to addressing pressures

The most “acute” financial pressure is in special educational needs and disabilities (SEND), with the IFS warning that the government’s upcoming white paper on SEND reform will be “pivotal” to addressing soaring council deficits and provision demand.

The Office for Budget Responsibility has forecast a £6 billion gap between expected spending on SEND provision and the funding available by 2028-29.

The IFS expects SEND spending to more than double in real terms between 2015 and 2028. The think tank said SEND has accounted for over half of the increase in total school funding over the past decade.

But local councils have been forced to spend more than their funding allows, building up deficits expected to reach up to £14 billion by 2027-28.

IFS report showing spending and funding for SEND from 2015-2028

Additionally, SEND transport spending almost doubled to £2 billion between 2018-19 and 2024-25, though a breakdown on post-16 transport was not provided. 

Luke Sibieta, IFS research fellow and report author, said: “The current system is increasingly costly and failing to deliver for everyone. Whether the government can both put the system on a stronger long-term footing, and manage to generate shorter-term savings, will be a crucial test for the forthcoming schools white paper.”

Nearly 3 in 10 colleges in deficit

The report also found almost three in 10 colleges are in deficit, according to analysis of the latest 2023-24 figures.

In 2010-11, just 16 per cent of colleges (weighted by income) were operating in deficit. This more than tripled to 54 per cent by 2015-16, when almost one in five colleges reported deficits exceeding 5 per cent of their income.

While financial performance has improved since deficits peaked in 2017, 28 per cent of colleges were still in deficit by 2023-24. Additionally, 16 per cent of those colleges had been in deficit for at least three consecutive years.

The think tank’s authors said the “persistent” deficits frame the cost pressures colleges are likely to come up against in the next several years.

“Colleges have limited scope to absorb these pressures through adjustments to their largest area of spending, staff costs.”

Staff costs account for around 70 per cent of spending in English colleges, but staff pay has seen substantial real-terms cuts since 2010.

“With recruitment and retention challenges evident, colleges are likely to have limited scope to reduce staff costs further without risking additional strain on staffing capacity and provision,” the report added.

It added that the 2022 reclassification of colleges as public sector bodies has limited colleges’ ability to manage pressures through commercial borrowing or short-term financial adjustments.

Apprenticeship units could be ‘narrower’ model than promised

The report criticised the lack of clarity over how new short, flexible courses will be delivered under the evolved growth and skills levy.

DfE’s post-16 white paper last year indicated that employers will be able to use the levy on “apprenticeship” units, but gave no “significant detail” about how the units will be defined, approved or delivered.

Economists said this was “concerning” given that they are planned to be rolled out in April 2026. 

The units will only be available in a set of “designated critical skills areas”, which the report said would restrict eligibility to areas of labour market need and limit the extent to which the levy supports wider workforce development.

“The government appears to be moving towards a narrower model than the broader flexibility proposed prior to the election,” the report said.

“The challenge for the government will be to balance flexibility, cost control and the targeting of public funds towards training with demonstrable economic value.”

Uncertainty around Lifelong Learning Entitlement modular take-up

The report’s authors also said they will be examining the take-up of Lifelong Learning Entitlement (LLE) funded short courses in the coming year.

The first LLE-funded courses are expected to open for applications in September 2026. Applicants will also be able to take out loans for individual modules or short courses rather than full qualifications, initially only for subject areas outlined in the government’s industrial strategy.

The IFS cast doubt on the value of short courses, pointing to a possible reluctance among learners to take on debt for modular provision and the “uncertain” incentives facing providers.