The new EIF will be wholly fit for purpose when it comes into effect in September, says Paul Joyce
One of Ofsted’s core strategy promises is to continually improve the validity of inspection. As part of meeting that commitment, last autumn we carried out research on lesson visits and work scrutiny. Our aim was to test whether inspectors reliably assess the right things when they observe lessons and look at learners’ work.
Unfortunately, when the research was published last week, there was some misunderstanding about its findings. This was particularly the case with regard to the validity of our methods for observing lessons or training in further education and skills (FES) providers.
I want to clarify those findings and hopefully provide reassurance that our new education inspection framework (EIF) will absolutely be looking at the right things when it comes into effect in September.
Lesson visits and work scrutiny are just two important methods in our inspection toolkit. They help inspectors gather evidence and reach a judgement about the quality of education in a school or college. But we have not judged or graded individual lessons for some years now, and we will not do so under the EIF.
For the lesson visit research, inspectors were given a set of 18 measurable indicators and asked to evaluate independently the training or lessons they saw against a five-point scale. The indicators covered three areas of interest: curriculum, teaching and behaviour.
It’s important to clarify that these indicators were developed for the research study only. They will not be used on actual inspections under the new framework.
The education inspection framework will be the most tested that Ofsted has ever introduced
Our findings are encouraging. They show that inspectors are able to assess behaviour, teaching and the curriculum separately when they observe lessons or training. This has given us greater confidence about the validity of our new inspection framework, which has separate judgements for the quality of education and behaviour and attitudes.
But we also found that the research model did not quite fit the FES context. This meant the likelihood of two inspectors rating any of the indicators exactly the same was lower in colleges than it was in schools. This does not mean that the way we gather evidence from lesson visits in FE colleges is flawed, or that it leads to unreliable judgements. It means only that our research model was not designed to take account of the complexity of further education providers.
With that in mind, we will be conducting a further research project later this year, specifically designed to test the validity and reliability of further education lesson and training observations. We have also established a research group of academics with expertise in this area and will be publishing a series of blogs and a literature review in due course.
But that doesn’t change where we are now with the EIF and shouldn’t cast any doubt on its suitability for further education and skills contexts.
We need to make sure our inspection tools are suited to the unique context of each provider, as well as sufficiently reliable. That’s why we developed a flexible inspection methodology, which we tested in a variety of FE providers as part of our piloting work and which is set out in our further education and skills inspection handbook.
Under the EIF, evidence from a number of different observation activities will be drawn together alongside meaningful discussions with leaders, governors, trustees, staff and students, as well as scrutiny of curriculum documents, learner records and published national data. We are confident this will enable inspectors to collect a secure evidence base on which to judge a provider.
The EIF will be the most tested inspection framework Ofsted has ever introduced. We recognise there is more to do, but that should not stop us from doing what we know is right – shifting our focus to the real substance of education and training, for the benefit of all children, young people and learners.