– Grade inflation is caused by a consumer-based approach to teaching.
– Academic promotions are based on research quality, not teaching.
– Grade inflation is caused by cultural and structural factors.
Noah Carl’s piece today for the Daily Sceptic describes, accurately, the widespread phenomenon of grade inflation at universities – both in the U.S. and the U.K. He reasons, drawing on the work of economist Stuart Rojstaczer, that this is attributable to a “consumer-based approach to teaching” in which academic pay and promotion are linked to student-based course evaluations. As Carl puts it:
Basically: if they’re too stingy with their grades, they’ll receive lousy evaluations, and in addition to the stress of dealing with irate students, they’ll be less likely to advance in their careers.
Now, I don’t know about the situation at American universities. But as regards British ones, this explanation is dead wrong – for two simple reasons, which are themselves highly instructive as regards the state of higher education in the U.K.
The first reason is that academic promotions – certainly from the middle of the university league tables upwards – are almost exclusively based, not on teaching quality, but on research quality. Your average Russell Group Vice-Chancellor couldn’t give two hoots about what happens in the classroom; come rain or shine, they are guaranteed to get all the bums on seats that they need in order to keep the show on the road in terms of student fees. VC pay is linked to performance, and performance generally means improvements in league table rankings, which bring prestige and (usually) an increase in the number of high-paying international students. What improves league table positions? Well, it’s down to a range of factors, but the only one that directly relates to individual staff performance is the quality of their research. (Something called ‘teaching quality’ is also often included, but this, importantly, does not actually mean teaching quality as a lay person would understand it – more on that below.) Naturally, this makes research quality the only really relevant metric in who gets promoted and who doesn’t – although some universities do have promotional pathways for staff who just want to be good teachers, mainly to keep those staff members happy.
The idea, then, that staff are concerned about student evaluations is laughable. The only effort that most research-active academics at U.K. universities put into their teaching consists of finding ways to avoid having to spend time in the classroom – and this is almost entirely the product of how they are incentivised.
The second reason the Rojstaczer/Carl hypothesis is wrong is that, for reasons related to those discussed above, student evaluations generally take place well before students actually get their grades. Students typically evaluate course content in the last session of the semester, and get their exam results months later. Similarly, they fill in the National Student Survey (their single opportunity to assess their university experience in a neutral forum) roughly in the middle of their final year of study – i.e., months before they get their final degree classifications. The idea that staff are worried about student evaluations when they mark exams simply misunderstands the process – at least as far as common practice in the U.K. goes.
What are the real reasons for grade inflation, then? The main one is, in my experience, cultural. There is a kind of mother hen syndrome at work amongst many academic staff – a feeling that one should err on the side of generosity in all things when dealing with students. This is part of a wider cultural malaise, wherein the pursuit of excellence is in itself looked down upon as somehow harsh, patriarchal or ‘toxic’. I have been in staff sessions in which the opinion has been widely aired that it would be a good thing if all students got firsts – a notion that completely misunderstands the purpose of having exams, but which is indicative of the prevailing mood amongst a big cross-section of the academic profession. If one doesn’t particularly care about the pursuit of excellence, or indeed if one thinks it to be ‘problematic’, then the idea that a small number of excellent students should be set apart from their fellows as high-performers is anathema. All must have prizes!
A subsidiary reason, though, is structural, and here we come back to league tables. Compilers of league tables can’t go into university classrooms and observe teaching. They therefore can’t actually assess ‘teaching quality’. But they do seem to feel as though teaching quality ought to be relevant in their rankings. So, what are some easy, rough-and-ready proxies for teaching quality? One is something called ‘continuation’, meaning the percentage of students who progress from one year to the next. If students get bad marks, they tend not to continue – or indeed are unable to continue if they fail. What if students get better marks, then? That’s one way in which universities benefit from grade inflation right there. Another proxy measure for teaching quality used in league table rankings is ‘graduate prospects’, meaning the number of graduates who get jobs or are in further study after graduating. What increases the likelihood that a graduate will get a job after graduating, or go on to postgraduate study? A good degree classification will certainly help. So there’s another way in which universities benefit from the grade inflation game.
Grade inflation, in other words, partly results from the weird obsession with driving down standards which is evident in every aspect of our culture. But it is also strongly linked to the desire of university VCs to ascend league tables by gaming the statistics upon which league tables are compiled. And, sure enough, most universities have over the past 10-20 years deliberately incentivised both higher grades (by dumbing down marking criteria) and higher degree classifications (through wheezes such as allowing students to disregard the worst-marked module in their final year when the overall classification is calculated) – with the result that employers now genuinely struggle to know whether somebody they are considering hiring, who has a first class degree, is any good or not.
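The arithmetic of the disregard-the-worst-module wheeze is worth seeing concretely. The sketch below uses hypothetical module marks and an equal weighting across modules (real schemes vary by university); the classification boundaries are the standard U.K. convention of 70+ for a First and 60–69 for a 2:1.

```python
# Illustrative sketch of how discarding the worst-marked final-year module
# can lift a student's average across a classification boundary.
# Module marks below are hypothetical; equal module weighting is assumed.

def classify(average):
    """Map a percentage average to a U.K. degree classification."""
    if average >= 70:
        return "First"
    if average >= 60:
        return "Upper Second (2:1)"
    if average >= 50:
        return "Lower Second (2:2)"
    if average >= 40:
        return "Third"
    return "Fail"

final_year_marks = [72, 74, 68, 71, 55]  # hypothetical marks for five modules

# Straight average over all modules: 340 / 5 = 68.0, i.e., a 2:1.
plain_average = sum(final_year_marks) / len(final_year_marks)

# The wheeze: drop the single worst mark (55) before averaging.
# 285 / 4 = 71.25, i.e., a First.
kept_marks = sorted(final_year_marks)[1:]
adjusted_average = sum(kept_marks) / len(kept_marks)

print(classify(plain_average))     # 2:1 on the straight average
print(classify(adjusted_average))  # First once the worst module is dropped
```

One weak module out of five is all it takes for the rule to flip the headline classification, which is precisely why it feeds through so directly into the league-table proxies described above.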
You might draw the conclusion from this that university league tables are a really stupid and pernicious idea, and that maybe universities would survive perfectly happily without them, much as supermarkets, clothes shops, online retailers and driving schools somehow manage to struggle along and provide a decent service without being comprehensively ranked by the Guardian or the Times every 12 months. And you would be correct to do so.
Busqueros is a pseudonym.