It is, of course, the same every year. Our young students have their faces splashed all over the news annually when it emerges they've achieved better results than any of their predecessors. And when the inevitable cloud of cynicism descends, with old duffers like the MT staff saying ‘well, they're obviously getting easier', the authorities wriggle away from the issue saying ‘don't spoil it for the students, they've done ever so well'.
That may be so but, well, the exams obviously are getting easier. Back in 1997, only 15.7% of A-level entries were awarded A grades. Some of the improvement may well be down to improved ability, but a rise of some ten percentage points in the space of a decade is an incredible leap. And you have to wonder why, for example, universities are having to tutor undergraduates in remedial maths when 43.7% of maths A-level entries are being awarded an A grade.
All of which is academic, really. It's hard to resist the argument that something set up with a particular purpose - i.e. to identify academic potential - isn't actually doing the job. If everyone's getting the top grade, then how do universities and employers differentiate between candidates? The ideas proposed so far, such as further subdividing the A grade into A*, A** - and so on ad infinitum - won't solve the problem, just delay the inevitable day when everyone's getting 99%.
Surely, if we're after better standards of education, we should be raising the bar every year, not lowering it. Unless of course your goal isn't to improve education but to hit the targets that suggest you're improving education. Now where did we learn that?