
Warwick Mansell

The former TES journalist writes for NAHT on current education issues. The views expressed do not necessarily reflect those of NAHT.



The “too abstract” levels system and its more abstract proposed replacement

Will the proposed new national assessment system for primary schools be an improvement over the current, quarter-century-old “levels” regime, or a step backwards?

This is the question lingering behind last week’s much-derided, and much-delayed, announcement on the subject. (See consultation paper here: http://bit.ly/1as3iQU).

A lot of the controversy centred on the Government’s proposed move to tell pupils and their parents in which decile band the child finds themselves, based on performance in reading and maths tests. I don’t propose to discuss that too much here, partly as I’m sceptical as to whether it will see the light of day, given the outcry and the DfE’s recent record on unpopular policies, and also because others, including my blogging colleague Susan Young (http://bit.ly/16JNSmR), have already covered it effectively.

But I wanted to concentrate on the broader ideas and detail behind how assessment results are proposed to be reported, given that I think ministers could probably persevere with the overall structure even without the proposals’ most-criticised “deciles” aspect.

The point I want to make here is that, even without “deciles”, it would be hard for the government to make a genuine case that what is on the table now beneficially addresses the problem that scrapping levels – the system which has been in use since the start of the national curriculum in the late 1980s, under a Conservative government – was meant to address.

OK, to put this in context, we need to go back to the origin of ministers’ move to scrap the eight-level (originally 10-level) national curriculum system. This stemmed from the November 2010 paper (http://bit.ly/19fvAA8) written by Tim Oates, who went on to become the government’s leading “expert” adviser to the curriculum review, and then, more substantively, the report (http://bit.ly/16WSaaJ) a year later of the “expert panel” which Mr Oates chaired, which was itself informed by the Bew review of assessment of 2011.

Mr Oates had written: “’Levels’ remain the main reporting mechanism in respect of [the] national curriculum. Yet genuine understanding of the way in which a child can attain a level remains widely misunderstood.”

He added: “The need for more detailed measurement has given us levels 4a, 4b and 4c and so on – yet the actual meaning of these in terms of children’s progression in key concepts and mastery of key knowledge cannot be justified adequately.”

The “expert panel” elaborated on concerns about the use of levels. It said:

“We have concerns…about the ways in which ‘levels’ are currently used to judge pupil progress, and their consequences. Indeed, we believe that this may actually inhibit the overall performance of our system and undermine learning.

“For this reason, we suggest a new approach to judging progression that we believe, in principle, to be more educationally sound. This has some significant implications for assessment and accountability.”

It continued: “We are concerned by the ways in which England’s current assessment system encourages a process of differentiating learners through the award of ‘levels’, to the extent that pupils come to label themselves in these terms.

“Although the system is predicated on a commitment to evaluating individual pupil performance, we believe it actually has a significant effect of exacerbating social differentiation, rather than promoting a more inclusive approach that strives for secure learning of key curricular elements by all.

“It also distorts pupil learning, for instance by creating the tragedy that some pupils become more concerned for what ‘level they are’ than for the substance of what they know, can do and understand.”

The solution put forward by the group, then, was to try to return the measurement system to what Mr Oates had suggested was the original concept behind its introduction: that pupils should be given clear statements of performance linking the assessment to the precise content within the new curriculum which they were supposed to master.

As the expert panel report said, “the focus of ‘standard attained’ should be on these specific elements, rather than a generalised notion of a level. In plain language, all assessment and other processes should bring people back to the content of the curriculum (and the extent to which it has been taught and learned), instead of focusing on abstracted and arbitrary expressions of the curriculum such as ‘levels’.”

It added: “We believe that it is vital for all assessment, up to the point of public examinations, to be focused on which specific elements of the curriculum an individual has deeply understood and which they have not.”

The panel also supportively cited evidence from groups representing the interests of pupils with special educational needs, which had said: “there is a need for something more flexible that recognises and assesses individual progress; that assessment should focus on successes rather than being grounded in failure; and a teacher’s narrative judgement should be used in assessments of a pupil’s progress.”

The core problem with the levels system as it had developed, then, was that it had become too abstract: numerical judgements – and, especially, the sub-levels which had never featured in the official curriculum structure but were used in schools and in statistical analyses – had become almost ends in themselves. Detached from the detailed content of the curriculum, it had become difficult to know from the numerical level what a pupil could understand, and what they could not. By implication, this was a problem for parents as well as for pupils and teachers, as it was not clear to parents what a level meant.

And there was a need to move towards something which could give individual pupils a sense that they were progressing against individual aspects of the curriculum, by implication even when that progress might be less rapid than that of others.

In his letter in response to Mr Oates in June 2012, Mr Gove said: “I have, as the panel recommended, decided that the current system of levels and level descriptors should be removed and not replaced.”

This has remained the policy ever since, with last week’s proposals – which reiterated the above wording of Mr Gove’s letter precisely – representing the first attempt to put flesh on the bones of what might follow levels.

The letter continued: “As you rightly identified, the present system is confusing for parents and restrictive for teachers. I agree with your recommendation that there should be a direct relationship between what children are taught and what is assessed.

“We will therefore describe content in a way which makes clear both [his italics] what should be taught and what pupils should know and be able to do as a result.”

(By the way, I think the Government has not honoured that last pledge: the existing curriculum’s setting-out of both curriculum content and attainment targets – ie what pupils should be able to do following the teaching of the curriculum – has now effectively been replaced by statements of the former alone. This is, then, another backtrack on a seemingly ever-lengthening list in curriculum and qualifications policy under Mr Gove.)

The key point, though, is that it looks as though the new proposals are going even further away from the vision of the expert panel, seemingly backed by Mr Gove, and from the reason for scrapping levels in the first place.

The panel wanted a system which was less abstract than the current levels regime, more closely linked to the content of the national curriculum and with all pupils being given a sense of progress they were making over time on specific content.

The new model proposes to make the national reporting of assessment more abstract, less connected to the content of the curriculum and with pupils not given any sense of progress on specific topics beyond – arguably – the overall subject. This last point comes with – it seems to me – a clear danger that the only information that the national curriculum testing system would hand to many lower-achieving pupils and their parents would be that they are making less progress than others under this new regime. This, I’d guess, might prompt many to give up or switch off as a result.

Looking at the detail, there are three elements to official reporting of pupil achievement in the national tests in reading and maths under the proposed new model (a rough sketch of how the three numbers might be generated follows the list).

- First, a pupil would be given a “scaled score”, possibly ranging between 80 and 130, with a score of 100 set at a level which, for every year, would represent a constant standard a child would have to achieve to be deemed “secondary ready” – a concept which is far from fully defined in these proposals for consultation.

This would give a single mark for the whole subject, and would see raw marks recalibrated onto that 80-to-130 scale along the lines of what happens in international testing studies, such as the OECD’s PISA tests (the calculation of results under which is, by the way, fiendishly complicated, with methodologies which must remain opaque to all but a very select group of experts).

- Second, and most controversially, of course, pupils would be given a ranking in the national cohort by decile: whether the pupil is in the top 10 per cent of performers overall, or the second-top band, and so on, down to the bottom.

- Third, by implication a pupil would get – in a seeming nod towards inclusivity – a sense of whether they have made more or less progress than others nationally on average, given their starting point.
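To make the abstraction concrete, here is a minimal sketch, in Python, of how those three numbers might be generated. It is entirely hypothetical: the consultation specifies neither the scaling model nor the progress calculation, and PISA-style scaling actually rests on item response theory, which is far more complex than the simple linear rescaling assumed below. Every function name and constant here is my invention, not the DfE’s.

```python
# Hypothetical sketch only: the consultation does not specify any of this.

def scaled_score(raw_mark, cut_mark, max_mark, lo=80, hi=130, cut=100):
    """Map a raw mark onto the proposed 80-130 scale, pinning the
    'secondary ready' cut mark to a scaled score of 100 (assumed linear)."""
    if raw_mark >= cut_mark:
        return round(cut + (raw_mark - cut_mark) / (max_mark - cut_mark) * (hi - cut))
    return round(lo + raw_mark / cut_mark * (cut - lo))

def decile_band(raw_mark, cohort_marks):
    """Return the pupil's decile band: 1 = top 10%, 10 = bottom 10%."""
    share_beaten = sum(m < raw_mark for m in cohort_marks) / len(cohort_marks)
    return 10 - min(int(share_beaten * 10), 9)

def progress_flag(score_at_11, average_for_same_start):
    """Crude 'more or less progress than average' judgement."""
    return "above average" if score_at_11 > average_for_same_start else "below average"

# Example: 54 raw marks out of 100, with a (hypothetical) cut mark of 60,
# comes out at 98 -- just short of 'secondary ready' -- and that is all
# the official report would say about the subject.
print(scaled_score(54, cut_mark=60, max_mark=100))  # 98
```

Note what is absent: at no point does any of this arithmetic touch the curriculum content the raw mark is meant to summarise, which is the nub of the complaint that follows.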

There is, then, amazingly, no sense, in the official assessment reporting arrangements, as to which elements of the statutory curriculum a child has mastered, and on which they are less secure, other than a few numerical indicators for an entire subject. A teacher asked by a parent for detailed information on what a child could and could not do would have to rely not on this system, then, but on her or his own in-class assessment. To be fair to the DfE here, the document does say that in-class assessment has to be used to complement the results for each pupil from national tests, so that would be made available. But, given the importance the accountability system is going to place on the test results, there should be no doubt that test results for individual pupils are going to continue to be seen as very important in schools and, again, a teacher wanting to explain the test result to a parent would have no ammunition in terms of the detailed content the test was supposed to be measuring.

Here, it would be useful to remind ourselves, again, of the system that this is supposed to replace.

The levels system comes with an official definition of what a child should be able to do, given their achievement of a particular level. So, a teacher asked by a parent what a “level four” means could rely on how it is defined. In reading, for example, level four is currently defined as follows:

“In responding to a range of texts, pupils show understanding of significant ideas, themes, events and characters, beginning to use inference and deduction….”, while at level three, the definition begins: “Pupils read a range of texts fluently and accurately.”

Now, it is certainly possible to question the degree to which a few marks scored on a one-off test under the current system support the degree of interpretation being put on them under these definitions. Indeed, I have done so myself in the past (eg http://bit.ly/m0Qgiv), wondering about the wisdom of hanging large interpretative judgements on results generated in this way.

But consider again what would happen now. A teacher, questioned by a parent on a pupil’s overall result, would only be able to say, well your child was above, or below, the (abstract, as-yet-largely undefined) point at which the government has chosen to specify “secondary readiness”, by this much, and he or she is in the top x/y per cent of peers and is making faster/slower progress than his or her peers.

There is no reference, then, to what they can actually do within the curriculum other than to the generalised subjects of maths and reading.

The most concerning aspect, though, I would guess, would be the progress measure. Let’s first set aside, again, the question of deciles. As I say, I can imagine the DfE either ditching deciles entirely or, perhaps though probably not very convincingly, releasing decile information only to higher performers, given the criticism from those including Russell Hobby along the lines of the proposals being “a disgrace”.

But leaving out the proposal relating to deciles would still leave the outline of this plan largely intact as it was, giving all pupils a very abstract mark relative to 100 and then a sense of whether they had made more or less progress than average, given their earlier starting point, over the key stage.

So a child with below-average achievement and with below-average progress – there is an example in the DfE document, and many children, of course, will be in both categories – would simply be told that, with no sense of any progress within particular curriculum areas to cling to. That would be the message to parents, too.

The at-first-glance powerful notion, then, that the national curriculum comes with a ladder of understanding which any – or almost any – child could attempt to scale, regardless of how others were doing, and which was behind the introduction of levels in the first place, seems to have been sacrificed under this new plan.

Will its replacement result in improvements, as a generation of pupils are handed new information suggesting they are performing worse than their peers and they then redouble their efforts, as a minister might exhort them to? Or will it demotivate many more than it energises?

Well, I would like to see some research suggesting that it might be the former, with the expert group having stressed that “the way in which the achievement of pupils is assessed and reported has a profound impact on the operation of an education system, with implications for pupil motivation…” Yet no research findings are offered in this document, with its big implications for millions of primary children.

Indeed, Professor Mary James, one of the four members of the expert panel on the curriculum, has just told me that the evidence was that the plans overall – as expressed most strongly in the move to give decile information – would “label a proportion of children who are most vulnerable as failures …it might give some children a kick up the behind, but there will be many others who will respond by saying ‘I’m not going to play this game any more’.”

“The evidence is that this kind of generalised labelling of children does them no good,” she added.

Similarly, does the notion of a teacher having some official ammunition to soften the blow for a parent who now must learn that their child is below expectations – in the form, perhaps, of reminding them what their child can do, and what more they can do now than they used to be able to do – amount to accepting low expectations, or is it a realistic response which might have a better chance of retaining the child’s interest over the long term? Again, we are offered no thoughts here.

There must be other concerns about the technicalities and details of the assessment model. As discussed, the DfE is proposing to hang very serious implications – for the pupil – on the results of performance in these new tests.

So, for example, we are going to get a “higher-than” or “lower-than” average progress measure, given a comparison against other pupils with the same assessment starting points, but this is presumably based on one-off tests at 11 versus those at either five or seven.

But any child can over- or under-perform on the day. Suppose a child happens to do better than expected in that earlier test, so that their underlying mastery of a particular subject domain is actually lower than recorded on the day. They then get compared against children with, perhaps, the same score, but whose performance was a truer reflection of their ability. That first child may be told, at 11, that they have made lower-than-average progress if they go on to do less well than their comparators, when the reality was that the original judgement was not a true reflection.
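This worry – essentially regression to the mean – is easy to illustrate with a toy simulation. All the figures below are invented for the purpose; only the shape of the effect matters.

```python
# Toy illustration of the problem above: two groups of pupils record the
# SAME baseline score, but for group B a lucky day overstated true ability.
# All numbers are invented; only the shape of the effect matters.
import random

random.seed(0)
NOISE = 0.5      # assumed unreliability of a one-off test (std deviation)
TRIALS = 50_000

def test_at_11(true_ability):
    """A noisy one-off test at 11: true ability plus luck on the day."""
    return true_ability + random.gauss(0, NOISE)

# Group A: baseline score of 1.0 fairly reflects true ability of 1.0.
# Group B: true ability 0.5, but luck produced the same baseline of 1.0.
avg_a = sum(test_at_11(1.0) for _ in range(TRIALS)) / TRIALS
avg_b = sum(test_at_11(0.5) for _ in range(TRIALS)) / TRIALS

print(f"Group A average at 11: {avg_a:.2f}")  # ~1.0
print(f"Group B average at 11: {avg_b:.2f}")  # ~0.5

# On paper both groups had identical starting points, yet group B will
# reliably be labelled as making 'below average' progress -- an artefact
# of luck in the baseline test, not of anything that happened since.
```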

Beyond that, will the two measures truly be comparable, given that the baseline assessment – especially if made at five – would probably have to be relatively brief and simple?

In general, we should be very cautious in sending a message to children that they are not making progress – or that they are making lots of progress – when all we are doing is getting a statistical comparison of two inevitably fallible one-off tests a certain distance apart. A smarter system would be looking at much more information – lots of data sources, lots of qualitative information about what a child can do – before making a pronouncement with implications as big as this for the individual.

The hope of those backing this document must be that, facing these multiple potential problems with the official reporting of test results, teachers will step in and use their own assessment systems – or those they buy from commercial/independent providers – to do the detailed work of providing information on what children actually understand. In other words, teachers and schools would almost have to subvert the official information.

But that is rather an odd position for policy-making to reach, and it comes, as mentioned above, with accountability continuing to put a lot of weight on the results of national tests within schools.

All this has left me wondering why, having seemed to endorse a criticism that “levels” were too abstract and removed from the content of the curriculum itself, the DfE has now embraced a vision which seems to make assessment reporting more abstract, and more removed.

Overall, given the above, I suspect both that the deciles plan might go after this consultation period ends and that an incoming Labour government would find it relatively easy to scrap the wider framework behind this plan from 2016 if it arrives in power the year before.

Standing back a bit, it is possible that one aspect of this proposal will be greeted with some relief by school leaders: although the overall proposal is that the expected standard is to rise and that more pupils and schools are meant to clear it, the document envisages that the number of schools failing to hit “floor standards” will stay roughly the same rather than rising. This must come, I think, from allowing schools not to be classed as missing the target so long as they achieve value-added/progress scores, under thresholds which are slightly more lenient than at present.

But that aside, there are lots of other odd things about this proposal. For some detailed further exploration, I’d recommend, as I did in my last piece, reading Tim Dracup’s “Gifted Phoenix” blog (http://bit.ly/1bUcajT).

A couple of other aspects of seeming confusion I’d highlight here. First, although the document does mention that the statutory duty of national assessment remains to provide a judgement on which aspects of the curriculum a particular pupil has mastered – it quotes the 2002 Education Act, which says national assessment must “ascertain what pupils [my italics] have achieved in relation to the attainment targets for that stage” – elsewhere in this paper it seems to see the purpose of national tests as now mainly relating to judgements about schools. As in:

“There will be a clear separation between ongoing, formative assessment (wholly owned by schools) and the statutory summative assessment which the government will prescribe to provide robust external accountability and national benchmarking.”

Ie, the national tests – in contrast to teacher assessment – are mainly about “external accountability”, which can only mean checks on teachers and schools, rather than about individual pupil performance. But the reporting of national test data has big implications, of course, for pupils; perhaps the emphasis on schools may be a reason why the risks of some of the problems documented above, which relate to the meaning given to results at the pupil level, seem not to have been fully thought through.

The second aspect I’d highlight is the curious phrase of “secondary ready” which accompanied this document. As mentioned above, the detailed definition of this is not made clear in the document, with the DfE only saying that it will be more exacting than the present level four performance in English and maths, and that 85 per cent of pupils will be expected to achieve it.

But the section in which it is introduced, looked at in tandem with a previous announcement (http://bit.ly/14tkJkB), would seem to imply that ministers want to move the national expectation of performance up from level four – which in practice translates as level 4c, or the lowest level of performance within level four – to level 4b.

In the proposals, the Government highlights statistics showing that 47 per cent of pupils (deep breath…) achieving a level four in English and maths, but not achieving a level 4b in both of these subjects, went on to achieve five A*-Cs at GCSE, including English and maths. But among those who were adjudged more of a success at key stage 2, the figures for subsequent success were much higher.

Specifically, among those who achieved “at least” a level 4b in both subjects, the proportion going on to get the five A*-Cs including English and maths was a much higher 72 per cent. (Note 1)

Well, I will just briefly register a sense of amazement that this seems to be what the entire purpose of primary education is being reduced to here: boosting a child’s performance on a very complicated statistical indicator to give them, as the correlation quoted here would suggest, a better chance of later success.

Then there is the irony of ironies: while the expert group reserved its strongest criticism for the use of sub-levels - arguing that they have, seemingly, the most limited relation to the underlying detail of national curriculum understanding - now we have a government responding to the group’s plan but doing so by seemingly defining the over-riding purpose of primary teaching in terms of moving pupils up from one sub-level to the next.

But even taking the notion of “secondary ready” on its own terms, this would be a very strange concept to translate into the reality of reporting on the ground, I would suggest.

For are we truly to suggest to pupils who do not attain this new measure that they are not “secondary ready”, as this new measure would do?

Even setting aside some of the reservations implied above about the potential impact on motivation, and even forgetting that there are no proposals in this document to hold pupils back from moving on to secondary school – so what is effectively being proposed is that perhaps 15 per cent of pupils embark on secondary education even though the official assessment system says they are “not ready” for it – even the statistics as quoted don’t really bear out the notion that a child who doesn’t attain this new benchmark measure won’t be “secondary ready”.

For consider, again, the fact that, in 2012, even among this first group of pupils, getting level four but failing to achieve 4b in both subjects at key stage 2, almost half went on to do well at secondary school on the Government’s own terms.

If success on the five A*-C including English and maths measure, then, is seen as the overall goal of secondary education, how could nearly half of these pupils not be deemed to have been “secondary ready”, given that they had emerged from secondary school relatively successfully?

Sorry if I’ve missed something here, but what a dog’s breakfast this is. Perhaps that’s no surprise, given the repeated seeming delays behind this paper in recent months.

Sadly, and regrettably, this document seems a fitting end to what I see as the most chaotic year of policy-making I’ve observed in 16 years as an education journalist.

I do think that the extent of top-down political and politicised control of policy-making, without detailed and, in the end, meaningful input from experts who understand the complex issues behind, for example, assessment and qualifications policy, is really beginning to show in terms of detailed problems and confusion. The lack of a Qualifications and Curriculum Authority – for all its problems in the past – is a weakness here.

After this academic year, perhaps the only hope will be that things cannot get any worse. Can they?


Note 1: I wonder if there might not be a little statistical sleight of hand going on here, in that what we are getting is a comparison between a quite tightly-defined first group (ie pupils achieving level four but not 4b in both their English and maths results), and a more broadly-defined second group, which basically captures all higher-achieving pupils in the subjects. Thus we get, in this second measure, pupils who only just got the level 4b, but also those who were “easy” level fives. I think this probably makes the comparison more dramatic than it needs to be, with a seemingly big gap between later performance among the first and second groups, when what we might want would be a comparison of pupils’ GCSE results by individual national curriculum sub-level, rather than by “sub-level and above” – which I guess would show a smaller, and thus less newsworthy, jump. But anyway…
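To see the shape of the suspected problem, here is a toy calculation, with the two-subject detail collapsed into one subject’s sub-levels for simplicity, and with every percentage invented: pooling “4b and above” drags the second group’s figure up towards the level fives, so the headline gap overstates the step from any one sub-level to the next.

```python
# Invented per-sub-level rates of later GCSE success, and invented cohort
# shares -- none of these figures come from the DfE document.
success_rate = {"4c": 0.47, "4b": 0.62, "4a": 0.72, "5": 0.85}
cohort_share = {"4c": 0.20, "4b": 0.25, "4a": 0.20, "5": 0.15}

# The quoted comparison pools everything at '4b and above' into one group:
pooled = ["4b", "4a", "5"]
weight = sum(cohort_share[s] for s in pooled)
pooled_rate = sum(success_rate[s] * cohort_share[s] for s in pooled) / weight

print(f"4c alone:       {success_rate['4c']:.0%}")  # 47%
print(f"'4b and above': {pooled_rate:.0%}")         # ~71%, pulled up by level 5s

# The step from 4c to 4b alone (47% -> 62% in these invented figures) is
# far smaller than the headline 47% -> ~71% gap, because the pooled group
# includes the 'easy' level fives -- the suspected sleight of hand.
```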


Page published: 24 July 2013