Sunday, February 05, 2017

Guest post by Bahram Bekhradnia

I have just received this reply from Bahram Bekhradnia, President of the Higher Education Policy Institute, in response to my review of his report on global university rankings.

My two main points, which I think are not reflected in your blog – no doubt because I was not sufficiently clear – are:
·       First, the international rankings – with the exception of U-Multirank, which has other issues – almost exclusively reflect research activity and performance. Citations and publications are of course explicitly concerned with research, and, as you say, “International faculty are probably recruited more for their research reputation than for anything else. Income from industry (THE) is of course a measure of reported funding for applied research. The QS academic reputation survey is officially about research and THE’s academic reputation survey of teaching is about postgraduate supervision.” And I add (see below) that faculty-to-student ratios reflect research activity and are not an indicator of a focus on education. There is not much argument that indicators of research dominate the rankings.
Yet although they ignore pretty well all other aspects of universities’ activities, they nevertheless claim to identify the “best universities”. They certainly do not provide information that is useful to undergraduate students, nor indeed to postgraduate students, whose interest will be at the level of the discipline rather than the institution. If they were honest enough to say simply that they identify research performance, there would be rather less objection to the international rankings.

That is why it is so damaging for universities, their governing bodies – and even governments – to pay so much attention to improving their performance in the international rankings. Resources – time and money – are limited, and attaching priority to improving research performance can only be right for a very small number of universities.

·     Second, the data on which they are based are wholly inadequate. 50% of the QS and 30% of the Times Higher rankings are based on nothing more than surveys of “opinion”, including, in the case of QS, the opinions of dead respondents. But no less serious is that the data on which the rankings are based – other than the publications and prize-related data – are supplied by universities themselves and unaudited, or are ‘scraped’ from a variety of other sources, including universities’ websites, and cannot be compared one with another. Those are the reasons for the Trinity College Dublin and Sultan Qaboos fiascos. One UAE university told me recently that it had (mistakenly) submitted information about external income in UAE dirhams instead of US dollars – an inflation of 350% that no one had noticed (a rough illustration of the arithmetic follows below). Who knows what other errors there may be – the ranking bodies certainly don’t.
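To make concrete how large an error of that kind is, here is a minimal sketch in Python. The income figure is invented; the only real number is the dirham’s long-standing peg of roughly 3.67 AED to the US dollar.

```python
# Illustrative sketch only: how a currency mix-up inflates a reported figure.
AED_PER_USD = 3.6725  # the UAE dirham has been pegged at roughly this rate for years

true_income_usd = 10_000_000                  # hypothetical external income, in dollars
amount_in_aed = true_income_usd * AED_PER_USD

# If the dirham amount is entered in a field the ranking body reads as dollars,
# the reported income is overstated by the full conversion factor.
reported_income_usd = amount_in_aed
overstatement = reported_income_usd / true_income_usd

print(f"Reported: {reported_income_usd:,.0f} (should be {true_income_usd:,.0f})")
print(f"Overstated by a factor of about {overstatement:.2f}")  # roughly the scale of inflation described above
```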
In reply to some of the detailed points that you make:
In order to compare institutions you need to be sure that the data relating to each are compiled on a comparable basis, using comparable definitions and so on. That is why the ranking bodies, rightly, have produced their own data definitions, to which they ask institutions to adhere when returning data. The problem, of course, is that there is no audit of the data returned by institutions to ensure that the definitions are adhered to or that the data are accurate. Incidentally, that is also why there is far less objection to national rankings, which can, if there are robust national data collection and audit arrangements, have fewer problems with regard to comparability of data.
But with institution-supplied data there is at least an attempt to ensure that they are on a common basis and comparable. That is not so with data ‘scraped’ from random sources, and that is why I say that data scraping is such a bad practice. It produces data which are not comparable, but which QS nevertheless uses to compare institutions.
You say that THE, at least, omits faculty on research-only contracts when compiling faculty-to-student ratios. But when I say that FSRs are a measure of research activity I am not referring to research-only faculty. What I am pointing out is that the more research a university does, the more academic faculty it is likely to recruit on teaching-and-research contracts. These will inflate the faculty-to-student ratio without necessarily increasing teaching capacity relative to a university that does less research, and consequently has fewer faculty, but whose faculty devote more of their time to teaching (a simple illustration follows below). And of course QS even includes research-contract faculty in its FSR calculations. FSRs are essentially a reflection of research activity.
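As a back-of-the-envelope illustration of that point, here is a minimal sketch in Python. All of the numbers are invented: two hypothetical universities with the same student body, one research-intensive with many teaching-and-research faculty, one teaching-focused with fewer faculty who spend more of their time teaching.

```python
# Illustrative sketch only: hypothetical numbers showing how a faculty-to-student
# ratio (FSR) can reward research intensity rather than teaching capacity.

students = 10_000

# Research-intensive university: more faculty, but much of their time goes to research.
research_u_faculty = 800
research_u_teaching_share = 0.4    # fraction of faculty time spent on teaching

# Teaching-focused university: fewer faculty, most of whose time is spent teaching.
teaching_u_faculty = 500
teaching_u_teaching_share = 0.8

def fsr(faculty: int, students: int) -> float:
    """Faculty-to-student ratio as rankings typically count it: headcount only."""
    return faculty / students

def teaching_fte(faculty: int, teaching_share: float) -> float:
    """A crude proxy for teaching capacity: faculty time actually devoted to teaching."""
    return faculty * teaching_share

print("Research-intensive: FSR =", fsr(research_u_faculty, students),
      " teaching FTE =", teaching_fte(research_u_faculty, research_u_teaching_share))
print("Teaching-focused:   FSR =", fsr(teaching_u_faculty, students),
      " teaching FTE =", teaching_fte(teaching_u_faculty, teaching_u_teaching_share))

# The research-intensive university scores better on FSR (0.08 vs 0.05) even though
# its faculty collectively devote less time to teaching (320 vs 400 FTE).
```

On these invented figures the headcount ratio points the opposite way to actual teaching effort, which is the sense in which FSRs reflect research activity rather than a focus on education.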
