Pupil Referral Units are supposedly judged by Ofsted against the same criteria used to judge schools. In practice this has meant that previous inspections have begun with half a day spent going through exam results and attendance figures. I’ll use an approximation of this year’s results at my PRU to illustrate why this is problematic.
At the start of the Spring term, January 2013, we had eleven Year 11 students. Of these, all but one achieved five or more GCSE passes (grade G or above): a success rate of 91%. Only two of them got five or more A-Cs (18%). Is this good or bad? Well, the best national figures I have for PRUs are 18.4% of students gaining five or more A-Gs and 2.1% gaining five or more A*-Cs. Our results look pretty good against those.
Unfortunately the national figures are from two different years (2009 and 2011) because there is no complete set of figures for 2011. It’s also not entirely clear where these numbers come from but presumably they’re an average of all maintained PRUs. So they’re an average of schools for excluded students, school phobics, young people in hospital, students attending both a school and a PRU and many other kinds of setting. Flattering as our exam results look against these national figures, the figures themselves are virtually meaningless.
It gets worse. We are dealing with such small numbers of students that one student leaving or another one arriving can make a huge difference. Between January and May we had six new students start. This meant our final results were 71% 5+ A-G. (Coincidentally, the A-C figure remained the same.)
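The swing above is nothing more than small-sample arithmetic. A minimal sketch, using the cohort numbers from this post, shows how much weight a single student carries in a group this size:

```python
def pass_rate(passes: int, cohort: int) -> int:
    """Percentage of the cohort achieving five or more GCSE passes, rounded."""
    return round(100 * passes / cohort)

# January cohort: 11 students, all but one passing
print(pass_rate(10, 11))  # 91

# Six new starters by May: 17 students, 12 passing
print(pass_rate(12, 17))  # 71

# In a cohort of 11, one student is worth roughly nine percentage points
print(pass_rate(10, 11) - pass_rate(9, 11))  # 9
```

With a mainstream school's cohort of 200, that same single student would move the figure by half a point; in a PRU, one arrival or departure redraws the headline statistic.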
A fall from 91% to 71%. Did we have a terrible couple of terms, full of inadequate teaching? Of course not. In fact we, teachers and new students alike, did very well to get the grades we did in such a short time.
Most PRUs’ rolls are so volatile that it’s meaningless to set percentage targets at the start of the year, it’s meaningless to judge overall effectiveness from percentages of exam results and it’s meaningless to judge year-on-year improvement (or otherwise) by exam results. Yet these judgments are the basis upon which inspections are built[^1].
So what can we do instead?
At my PRU we’ve always used five or more A-Gs as our starting point, rather than five or more A-Cs. This is because, no matter who is referred to us, they are all capable of passing GCSEs. Schools have always referred significantly fewer A-C students to us than D-G students, so our A-C percentage has always depended at least as much on who is referred to us as on the quality of our teaching.
Using this as our benchmark, we have seen a huge and sustained improvement in our exam results. This is a good thing but it tells a very incomplete story. This figure alone doesn’t give any indication of how many students got Gs when they could have got Es; it doesn’t say how many students got Ds when they might only have been predicted Fs; it doesn’t say how many students ought to have got Cs or better but didn’t.
The only way of assessing success in terms of examination results is to look at individual students: look at each student’s starting point, look at his or her results and make a judgment. There is no alternative that makes any sense, but even this approach is fraught with difficulties.
Which starting point? Should we use end-of-KS2 results, or the level at which the student is working when they arrive at the PRU (a baseline assessment)? A lot will have happened between the end of KS2 and the student’s arrival at a PRU, and it won’t have done his or her education much good, or they would likely still be in school. So baseline assessments often show that students have made no progress or have even gone backwards. However, education in a PRU costs a lot more than education in a mainstream school and, at least in part, that is because we are expected to make up this lost ground. At my PRU we use both starting points. We aim for students to achieve what they would have done if all had gone well, based on their end-of-KS2 results, but we accept that for many students there is too much ground to make up. Baseline assessments feed into our planning so we know where to start.
Are exam results what we should be measuring anyway? I remember years ago that there was a common perception in many PRUs that PRU students shouldn’t be made to take exams at all. If they were going to be successful in exams, the argument went, they wouldn’t be out of school at all. It was cruel to put them through something that would damage their self-esteem still further. What they needed instead was care and understanding and to be helped to feel better about themselves.
The message students got from this was that they were right to think of themselves as educational failures and right to think that education had nothing to offer. It’s absolutely right that students in PRUs study for exams and leave with good, nationally recognised qualifications. It’s absolutely right that the focus of a PRU should be on teaching young people and giving them the knowledge, skills and qualifications they need to progress in life. It is also perfectly possible to teach subjects and put students in for exams whilst also boosting their confidence, caring for them and showing them understanding. In fact it all goes hand in hand.
But… many, perhaps the majority, of students who come to a PRU need a lot of help before they are ready to learn in a meaningful way. I firmly believe that this work can and should take place alongside (or even in) normal PRU lessons. This work can take months: often more time than we have with them. This means success for some students does not mean a C in English but instead means staying out of prison, staying out of care, getting his or her head straight or, sometimes, things just not getting any worse. This can’t be measured in terms of exam results. It can’t always be measured in any meaningful quantitative way at all.
What all this means is that it’s very easy for PRUs to make excuses. There is always a reason why so-and-so didn’t get the grades they might have been expected to get given their end-of-KS2 results. It also means that Ofsted and the DfE always have a stick to beat PRUs with. "Children in PRUs never get good qualifications." We’ve seen the headlines.
Where does this leave us? It means that heads of PRUs need to analyse effectiveness with honesty and integrity. Aim high and be realistic but don’t look for excuses. Have lots of detailed data but use it in the wider context. It means that Ofsted need to do the same. And that comes down to the individual inspector’s knowledge and expertise.
[^1]: The latest Ofsted guidance now specifically states that inspectors should take account of individual student circumstances in PRUs. They still seem to start with the overall figures, though, and the process is still worryingly reliant on your particular inspector’s whims.