‘Below the Floor’, ‘Coasting’… whatever next?

Last May, the Department for Education announced its intention to scrap the floor standard and coasting measure and replace them with a “single, transparent data standard” that will “trigger an offer of support”. The announcement also stated that the DfE would consult on what this new standard should be.

The consultation is likely to take place in the coming Autumn term, so it is not entirely clear whether the floor standard and coasting measure still apply to the 2018 results. Until we hear otherwise, schools would be wise to assume that these thresholds remain in place, though (regarding the ‘coasting’ measure) it is worth bearing in mind that this DfE document from February 2018 states (page 6) that, of all the schools deemed to be coasting last year, only one was subject to formal action by the Regional Schools Commissioner.

In anticipation of the forthcoming consultation regarding the new ‘single, transparent data standard’, in this blog I consider the questions that I think the DfE should be asking the profession, and offer my own views in response to those questions.

But before the questions, a key principle:

It is my firm opinion that any system designed to identify schools in need of intervention should not be constructed in such a way that some schools must always fall into that category. It should allow for the hypothetical possibility that every school in the land is doing a great job. Being at the bottom of a league table should not, in itself, be an issue if every player in that league is doing really well – just that some are doing better than others.

Whatever the new standard may look like, I believe it is essential that it should adhere to this principle.

The questions that follow, and the answers I have offered, are grounded in this key principle.

Question 1: Should the ‘New Floor’ take into account both attainment and progress?


(NB currently, at KS2, both the floor and the coasting standards are based on thresholds for both attainment and progress. However, at KS4, only the Progress 8 measure is used.)

My view is yes, both attainment and progress should be considered. Focusing on attainment alone disadvantages schools with large numbers of low-attaining pupils who might nonetheless have made very good progress (for example, children with SEND). And progress is, arguably, the stronger indicator of the impact of a school, as it takes into account children’s starting points.

But focusing on progress alone has problems too. In the current system, at both KS2 and KS4, the progress measure is what you might call a ‘zero-sum’ game, i.e. for every winner there exists a loser. For every school with a +1 progress score, there will be another with a -1 score. In other words, the model is norm-referenced, with zero representing the national average rate of progress in each year. Progress could be (and indeed is) getting better and better every year across the country, but this just means the national average progress rates rise each year: the ‘zero’ score in the progress model is a relative standard, not an absolute one. There will always, by definition, be schools at the lower end of the distribution curve of progress scores – but that whole distribution curve could be getting better year on year. To be penalised for being at the bottom end of a distribution in any given year means we have a system that is constructed such that there must always be schools that are deemed to have failed. (See the Key Principle above.)
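To make the zero-sum point concrete, here is a minimal sketch with entirely made-up numbers – it is not the DfE’s actual calculation, which is pupil-weighted and considerably more involved, but it illustrates the norm-referencing problem: because scores are re-centred on each year’s national average, even if every school improves, the bottom half still end up with negative scores.

```python
# Hypothetical illustration of a norm-referenced progress measure:
# each school's score is expressed relative to the cohort mean, so the
# scores always sum to zero regardless of how well everyone is doing.

def centre_on_national_average(raw_scores):
    """Re-express each school's raw progress relative to the cohort mean."""
    mean = sum(raw_scores) / len(raw_scores)
    return [round(s - mean, 2) for s in raw_scores]

year1 = [1.0, 2.0, 3.0, 4.0]        # made-up raw progress for 4 schools
year2 = [s + 1.0 for s in year1]    # every school improves by a full point

print(centre_on_national_average(year1))  # [-1.5, -0.5, 0.5, 1.5]
print(centre_on_national_average(year2))  # identical: [-1.5, -0.5, 0.5, 1.5]
```

Every school made more progress in the second year, yet the published scores are unchanged – the schools at the bottom of the distribution remain at the bottom, however much the whole system improves.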

However, it could perhaps be argued that if a school has occupied the bottom end of the progress distribution curve for 3 years on the trot (or even 2 years?), then there is a problem that needs addressing – see question 3 below.

When thinking about progress scores, we also need to factor in “the Junior School issue”. I won’t dwell on it here, other than to note that it is now widely recognised that junior schools are less likely to achieve good progress scores than primary schools. In fact, every junior school had the following statement on the front page of its Ofsted Inspection Data Summary Report this year:

We know from national data that pupils at junior schools, on average, have higher attainment scores at the end of key stage 2 than pupils at all other primary schools. However, they also have lower progress scores. It is not clear what causes this but inspectors should be aware of this when using and interpreting data for different school types. (Ofsted)

For further thoughts about this particular discrepancy, see this blog by FFT’s Education DataLab.


One possible way around this issue would be to contextualise the progress model to take account of school type (primary or junior) and factor in the national difference.


Question 2: Where should the attainment threshold of the New Floor be set?

A quick bit of background: the original concept of the attainment floor was that it represented a minimum acceptable standard. Schools below that figure were seriously underperforming (unless their progress data showed otherwise). In 2014, the attainment element of the KS2 floor standard rose to require 65% of pupils to attain Level 4+ in both English and maths. For context, at this time, nationally 79% of pupils achieved this standard, so the floor standard was 14 percentage points below the national figure.

This concept seemed to change radically in 2016, when the attainment ‘floor’ became far more ambitious.

When the new post-levels assessment system came into play, the DfE retained the ‘65%’ part of the floor standard, whilst acknowledging that the new ‘expected standard’ was considerably more ambitious than the old one. In 2016, the national proportion of pupils achieving the expected standard in reading, writing and maths was 53%: the country as a whole was 12 percentage points below the ‘floor’. This year, the national figure has hit 64% – tantalisingly close to, yet still below, the ‘floor’.

I would argue that the ‘New Floor’ should return to its original meaning – a minimum acceptable standard. Given that the national figure (at KS2) is 64%, somewhere between 50% and 60% would perhaps be appropriate (assuming progress is also taken into account and that, as before, schools with 10 or fewer pupils in the cohort are not counted, as they are vulnerable to huge swings in data).

Regarding KS4, see question 4 below, as setting an attainment threshold is contingent upon the scope of the assessment.


Question 3: Should the New Floor be based upon a single year of results or a trend across 2 or 3 years?

Currently the floor standard is based upon a single year of results whilst ‘coasting’ depends upon being below a particular standard for 3 consecutive years.

I feel there is something to be said for the 3-year trend approach – or even a 2-year trend – to avoid the possibility that one bad year of results can spell disaster for a school. This is particularly important for smaller schools, e.g. a one-form-entry primary school. Cohorts vary: one group of 30 children can be very different to another. When every child constitutes 3.3% of their cohort, it doesn’t take much for a school’s data to dip dramatically – a few extra children with learning difficulties, a couple of new entrants to the school half-way through Year 6, and so on.
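The small-cohort arithmetic is worth spelling out. This back-of-envelope sketch uses hypothetical figures (a cohort of 30 and a 65% threshold chosen purely for illustration) to show how few children it takes to move a school across a floor:

```python
# Back-of-envelope sketch with hypothetical figures: in a one-form-entry
# cohort of 30, each child is worth 1/30 of the headline percentage,
# so a handful of children can swing a school across a 65% threshold.

cohort = 30
per_child = 100 / cohort                 # percentage points per child
print(round(per_child, 1))               # 3.3

reaching_standard = 21                   # 21 of 30 children reach the standard
print(round(100 * reaching_standard / cohort))        # 70 – above a 65% floor

# Just two more children below the standard the following year:
print(round(100 * (reaching_standard - 2) / cohort))  # 63 – now below 65%
```

Two children – perhaps a mid-year arrival and one child with additional needs – are the entire difference between comfortably above and below the threshold.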

Having said that, I can anticipate the counter-argument that even one bad year of data is one year too many, as far as the children involved are concerned. If they have had a bad deal in terms of quality of teaching or access to the curriculum, they can’t get those years back. It’s of paramount importance therefore to try to understand the reasons behind a particular school’s outcomes – were there factors completely outside of the school’s control that have had a negative impact on results, or are there concerns about teaching or management that need to be addressed sooner rather than later?

Perhaps a 2-year approach strikes a happy medium here. If a school has had one disastrous year, it knows it has precisely one year to address the problems. If the bad year was truly a one-off, a freak result – no problem. But if that bad year of data reveals some systemic failings within the school, action is needed urgently. I appreciate that one year is probably not enough time to turn around some problems, but then it comes down to being able to demonstrate the capacity to improve and that things are moving in the right direction.

(This is all, of course, contingent on the attainment/progress standards being set. When I refer to a school having a ‘disastrous year’, I don’t mean failing to hit an ambitious standard that is set above the national figure…)

Question 4: Which subjects should be taken into account in the New Floor?

Currently, at KS4, the key indicator for floor and coasting is the Progress 8 measure – a measure based on pupils’ performance in 8 subjects, including GCSE English, maths and at least 3 other EBacc subjects. This measure was designed to focus attention on pupils’ performance in key academic subjects, and less so on vocational qualifications or GCSEs in areas such as the arts. Unsurprisingly, not everyone agrees with it, both because of its subject focus and because, arguably, an academic (non-vocational) pathway is not in every student’s best interests.

If the Progress 8 model remains as it is, and remains part of the New Floor, I think there is a strong argument that it should not be the only trigger for secondary schools. For example, perhaps attainment in GCSE English and maths should also be included. Whilst it could be argued that the Progress 8 range of subjects is not right for every student, I think few would disagree that gaining good GCSE passes in English and maths is something we would want every child to achieve, as an essential passport for their future.

At Key Stage 2, the floor and coasting measures focus on reading, writing and maths – where reading and maths are measured by SATs tests, but writing is a teacher assessment.

(NB the Grammar, Punctuation & Spelling test does not form a part of these accountability measures.)

This is a controversial area and one on which I have mixed views.

On the one hand, I would not want to see the focus in primary schools moving away from developing really good quality creative writing (which is assessed by the current Teacher Assessment Framework) and instead focusing simply on grammar and spelling rules devoid of context and ‘real writing’. There is a danger that, if writing were no longer part of this floor standard, in some schools, children’s learning and opportunities to develop their writing would suffer, in favour of spending more time practising things that are tested in Year 6.

On the other hand, there is a seriously uneven playing field: in any given year, the majority of schools will not have been subject to any external moderation of their writing teacher assessments. (Every local authority has a duty to carry out moderation in a minimum of 25% of its schools each year.)

I do not in any way wish to imply that schools that are not moderated in a given year might be deliberately cheating the system and inflating their writing data. However, where schools or teachers misunderstand or misinterpret the expected standards defined in the TAF, they could go unchecked – and consequently report inaccurate writing data – for 3 years before this is discovered when their turn for moderation comes around. So a school that might be below the floor standard could remain undetected for 3 years whilst others are picked up immediately.

Is this fair?

I’m sure many voices out there would say ‘No, this isn’t fair. Focus the New Floor on just reading and maths, or include the GPS test as well’.

As I say, I have mixed views on this – if that approach were adopted, I think we would need to factor in (possibly via Ofsted or LA monitoring) a means of ensuring that standards in writing do not drop and that children’s access to quality teaching of writing is not compromised.

Question 5: Progress – simple model or contextual?

Back in the pre-Gove days, it was accepted that a fair way to evaluate the impact a school has had on children’s learning was to use a progress model that took account of factors outside the school’s control (e.g. socio-economic deprivation, proportions of children with English as an additional language) – ‘contextual value added’ (CVA).

For example, the Ofsted publication ‘Using Data, Improving Schools’ (2008) stated:

Both simple value added (as a standardised measure and as a conversion measure) and CVA data have roles to play in building up an overall picture of a school’s effectiveness, and each can be a corrective for the other. CVA can illustrate the extent to which contextual factors can legitimately be regarded as having influenced the progress that pupils have made in relation to their prior attainment, while simple value added measures can bring a sense of perspective if a school’s CVA measure is particularly high or low. But ‘absolute’ success remains crucial.

(Emphasis added.)

But Mr Gove was of the view that CVA provided an excuse for under-performance by schools in challenging areas, so it was scrapped from RAISEonline and other official measures in 2010. (NB the Fischer Family Trust, which had developed a CVA model before the idea was adopted/adapted in RAISEonline, continues to produce a contextual value added measure for schools in FFT Aspire.)

I have continued to maintain that some sort of contextualised progress model should play a part in evaluating schools’ successes and failures. Where used correctly (i.e. for self-evaluation of the past, but never for target-setting for the future) it should not become a barrier to closing the disadvantaged achievement gap and should never be used as an excuse for low expectations.  But it would be a valuable tool in determining how well schools are doing compared to schools in similar circumstances and as such, as indicated in the 2008 Ofsted quotation above, it should have a role to play as part of a range of indicators.


There may well be other questions worthy of consideration as the DfE works towards its new data standard, but these are the ones that have sprung to my mind. I hope this blog has provided some food for thought for school leaders, whilst they hopefully enjoy a relaxing summer break.

Ben Fuller, AAIA President
