Saturday, September 30, 2006

Another Duke Scandal?

Part 1

This post was inspired by a juxtaposition of two documents. One was an article written for a journal published by Universiti Teknologi MARA (UiTM) in Malaysia. I had to edit this article and it was not always a pleasant experience. It was full of grammatical errors such as omitted articles, sentences without subjects and so on. But at least I could usually understand what the author was talking about.

Taking a break, I skimmed some higher education sites and, via a blog by Professor K.C. Johnson, an historian at Brooklyn College, New York, arrived at a piece by Karla Holloway, a professor at Duke University (North Carolina), in an online journal, the Scholar and Feminist Online, published by the Barnard Center for Research on Women. The center is run by Barnard College, an independent college affiliated to Columbia University.

A bit of background first. In March of this year, an exotic dancer, hired to perform at a party held by Duke University lacrosse players, claimed that she had been raped. Some of the players certainly seem to have been rude and loutish, but the accusation looks more dubious every day. The alleged incident did, however, give rise to some soul-searching by the university administration. Committees were formed, one of which, dealing with race, was headed by Professor Holloway. Here is a link to Professor Holloway's article. Take a look at it for a few minutes.

Professor Holloway, after some remarks about the affair that have been criticized in quite a few places, describes how tired she is after sitting on the committee and says that she is thinking of resigning.

I write these thoughts, considering what it would mean to resign from the committee charged with managing the post culture of the Lacrosse team's assault to the character of the university. My decision is fraught with a personal history that has made me understand the deep ambiguity in loving and caring for someone who has committed an egregious wrong. It is complicated with an administrative history that has made me appreciate the frailties of faculty and students and how a university's conduct toward those who have abused its privileges as well as protected them is burdened with legal residue, as well as personal empathy. My decision has vacillated between the guilt over my worry that if not me, which other body like mine will be pulled into this service? Who do I render vulnerable if I lose my courage to stay this course? On the other side is my increasingly desperate need to run for cover, to vacate the battlefield, and to seek personal shelter. It does feel like a battle. So when asked to provide the labor, once again, for the aftermath of a conduct that visibly associates me, in terms of race and gender, with the imbalance of power, especially without an appreciable notice of this as the contestatory space that women and black folk are asked to inhabit, I find myself preoccupied with a decision on whether or not to demur from this association in an effort, however feeble, to protect the vulnerability that is inherent to this assigned and necessary meditative role.

Until we recognize that sports reinforces exactly those behaviors of entitlement which have been and can be so abusive to women and girls and those "othered" by their sports' history of membership, the bodies who will bear evidence and consequence of the field's conduct will remain, after the fact of the matter, laboring to retrieve the lofty goals of education, to elevate the character of the place, to restore a space where they can do the work they came to the university to accomplish. However, as long as the bodies of women and minorities are evidence as well as restitution, the troubled terrain we labor over is as much a battlefield as it is a sports arena. At this moment, I have little appreciable sense of difference between the requisite conduct and consequence of either space.

Getting to the point, I am fairly confident that no journal published by Universiti Teknologi MARA would ever accept anything as impenetrable as this. Even though most people writing for Malaysian academic journals are not native speakers of English and many do not have doctorates, they do not write stuff as reader-unfriendly as this. I must add that, being allergic to committees, I am much more sympathetic to Professor Holloway than some other commentators.

If Professor Holloway were a graduate student who had been reading too much post-modern criticism and French philosophy, perhaps she could be excused. But she is nothing less than the William R. Kenan Professor of English at Duke. We surely expect clearer and less “reader-othering” writing than this from a professor, especially a professor of English. And what sort of comments does she write on her student essays?

Nor is this a rough draft that could be polished later. The article, we are told, has been read generously and carefully by Robyn Wiegman, the Margaret Taylor Smith Director of Women's Studies and Professor in Women's Studies and Literature at Duke, and William Chafe, the Alice Mary Baldwin Professor and Dean of History at Duke. It also got an "intuitive and tremendously helpful" review from Janet Jakobsen, Full Professor and Director of the Barnard Center for Research on Women.

Duke is, according to the Times Higher Education Supplement (THES), one of the best universities in the world. This is not entirely a result of THES's erratic scoring – a bit more about that later – for the Shanghai Jiao Tong ranking also gives Duke a high rating. UiTM, however, is not in THES's top 200 or Shanghai Jiao Tong's top 500, even though it tries to maintain a certain minimal standard of communicative competence in the academic journals it publishes.

So how can Duke give professorships to people who write like that, and how can Barnard College publish that sort of journal?

Is it possible to introduce a ranking system that would give some credit to universities that refrain from publishing stuff like this? I wonder whether somebody could do something like Alan Sokal's famous Social Text hoax, in which he sent pages of pretentious nonsense to a cultural studies journal that had no qualms about accepting them, but this time sending a piece to journals published by universities at different levels of the global hierarchy. My hypothesis is that universities in countries like Malaysia might be better able to see through this sort of thing than some of the academic superstars. An NDI (nonsense detection index) might then be incorporated into a ranking system and would, I suspect, work to the disadvantage of places like Duke.

Another idea that might be more immediately practical is inspired by Professor Johnson's observation that Professor Holloway has a very light teaching load. She does in fact, according to the Duke website, spend five hours and fifty minutes a week in the classroom and, presumably, an equivalent amount of time marking, counselling and so on. This is about half, maybe even a third or a quarter, of the teaching load of most Malaysian university lecturers. It might be possible to construct an index based on teaching hours per dollar of salary, combined with a score for research articles or citations per dollar. Once again, I suspect that the scores of Duke and similar places might not be quite so spectacular.

Also, one wonders whether Duke really deserves quite such a high THES ranking after all. Looking at the THES rankings for 2004 and 2005, it is clear that Duke has advanced remarkably and perhaps just a little unbelievably. In 2004 Duke was in 52nd place and in 2005 it rose to eleventh, just behind the Ecole Polytechnique in Paris and equal to the London School of Economics.

How did it do that? More in a little while.

Wednesday, September 27, 2006

Undeserved Reputations?

As well as producing an overall ranking of universities last year, the Times Higher Education Supplement (THES) also published disciplinary rankings. These comprised the world's top 50 universities in arts and humanities, social sciences, science, technology and biomedicine.

The publication of the disciplinary rankings was welcomed by some universities that did not score well on the general rankings but were at least able to claim that they had got into the top fifty for something.

But there are some odd things about these lists. They are based exclusively on peer review. For all but one list (arts and humanities), THES provides data on the number of citations per paper, although this is not used to rank the universities. This is a measure of the quality of the papers published, since other researchers would normally only cite interesting research. It is noticeable that the relationship between the peer reviewers' opinions of a university and the quality of its research is not particularly close. For example, in science Cambridge comes top, but its average number of citations per paper is 12.9. This is excellent (I believe that the average number of citations of a scientific paper is just one), but Berkeley, Harvard, MIT, Princeton, Stanford, Caltech, ETH Zurich, Yale, Chicago, UCLA, the University of California at Santa Barbara, Columbia, Johns Hopkins and the University of California at San Diego all do better.

It is, of course, possible that the reputation of Cambridge rests upon the amount of research produced rather than its overall quality or that the overall average disguises the fact that it has a few research superstars who contribute to its reputation and that is reflected in the peer review. But the size of the difference between the subjective score of the peer review and the objective one of the citation count is still a little puzzling.

Another thing is that for many universities there are no scores for citations per paper. Apparently, this is because they did not produce enough papers to be counted although what they did produce might have been of a high quality. But how could they get a reputation that puts them in the top 50 while producing so little research?

There are 45 universities that got into a disciplinary top 50 without a score for citations. Of these, 25 are in countries where QS, THES's consultants, have offices, and ten are located in the very city where QS has an office. Of the 11 universities (the seven Indian Institutes of Technology count as one) that got into more than one top 50 list, no fewer than eight are in countries where QS has an office: Monash, the China University of Science and Technology, Tokyo, the National University of Singapore, Beijing (Peking University), Kyoto, New South Wales and the Australian National University. Four of the eleven are in cities -- Beijing, Tokyo, Singapore and Sydney -- where QS has an office.

So it seems that proximity to a QS office can count as much as the quantity or quality of research. I suspect that QS chose its peer reviewers from people they knew from meetings, seminars or MBA tours, or people who had been personally recommended to them. Whatever happened, this suggests another way to get a boost in the rankings -- start a branch campus in Singapore or Sydney, show up at any event organised by QS, and get on the reviewers' panel.

Tuesday, September 12, 2006

More on the THES Peer Review

There are some odd things about the peer review section of the Times Higher Education Supplement (THES) world universities ranking. If you compare the scores for 2004 and 2005, you will find that there is an extremely high correlation, well over .90, between the two sets of figures. (You can check this simply by typing the data into an SPSS file.) This suggests that the two sets of data might not really be independent.
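
For readers without SPSS, the same check takes only a few lines of Python. The scores below are placeholders, not the real THES figures, which you would have to type in from the published tables.

```python
# Pearson correlation between two years of peer review scores.
# Placeholder numbers only; substitute the published THES figures.
from statistics import correlation  # requires Python 3.10 or later

scores_2004 = [665, 640, 610, 580, 555, 530]  # hypothetical 2004 scores
scores_2005 = [98, 100, 93, 88, 86, 81]       # hypothetical 2005 scores

print(round(correlation(scores_2004, scores_2005), 3))
```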

THES has admitted this. It has said that in 2005 the ratings of the 2004 reviewers were combined with those of an additional and larger set of reviewers. Even so, I am not sure that this is sufficient to explain such a close association.

But there is something else that is, or ought to be, noticeable. If you look at the figures one by one (doing some quick conversions, because in 2004 the top-scoring University of California at Berkeley gets 665 in this category while in 2005 Harvard is top with 100), you will notice that everybody except Berkeley goes up. The biggest improvement is that of the University of Melbourne, but some European and other Australian universities also do much better than average.

How is it possible that all universities can improve compared to the 2004 top scorer, with some places showing a much bigger improvement than others, while the correlation between the two scores remains very high?

I've received information recently about the administration of the THES peer review that might shed some light on this.

First, it looks as though QS, THES's consultants, sent out a list of universities divided into subject and geographical areas from which respondents were invited to choose. One wonders how the original list was chosen.

Next, in the second survey of 2005 those who had done the survey a year earlier received their submitted results and were invited to make additions and subtractions.

So, it looks as if in 2005 those who had been on the panel in 2004 were given their submissions for 2004 and asked if they wanted to make any changes. What about the additional peers in 2005? I would guess that they were given the original list and asked to make a selection but it would be interesting to find out for certain.

I think this takes us a bit further in explaining why there is such a strong correlation between the two years. The old reviewers for the most part probably returned their lists with a few changes and probably added more than they withdrew. This would help to explain the very close correlation between 2004 and 2005 and the improvements for everyone except Berkeley. Presumably, hardly anybody added Berkeley to their earlier selections, while a few added Harvard and others.

There is still a problem though. The improvement in peer review scores between 2004 and 2005 is much greater for some universities than for others, and it does not appear to be random. Of the 25 universities with the greatest improvements, eight are located in Australia and New Zealand, including Auckland, and seven in Europe, including Lomonosov Moscow State University in Russia. For Melbourne, Sydney, Auckland and the Australian National University there are some truly spectacular improvements. Melbourne goes up from 31 to 66, Sydney from 19 to 53, Auckland from 11 to 45 and the Australian National University from 32 to 64. (Berkeley's score of 665 in 2004 was converted to 100 and the other scores adjusted accordingly.)
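
The conversion is simple proportional rescaling. A minimal sketch of the arithmetic, with raw 2004 scores that are hypothetical and chosen purely so that they rescale to the figures quoted above:

```python
# Rescale raw 2004 peer review scores so that the 2004 top scorer
# (Berkeley, raw score 665) becomes 100, matching the 2005 indexing.
BERKELEY_2004 = 665

def rescale(raw: float) -> float:
    return raw * 100 / BERKELEY_2004

# Hypothetical raw scores, picked to reproduce the rescaled figures above.
for name, raw in [("Melbourne", 206), ("Sydney", 126),
                  ("Auckland", 73), ("ANU", 213)]:
    print(f"{name}: {rescale(raw):.0f}")   # 31, 19, 11, 32
```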

How can this happen? Is it plausible that Australian universities underwent such a dramatic improvement in the space of just one year? Or is it a product of a flawed survey design? Did QS just send out a lot more questionnaires to Australian and European universities in 2005?

One more thing might be noted. I've heard of one case where a respondent passed the message from QS on to others in the same institution, at least one of whom apparently managed to submit a response to the survey. If this sort of thing was common in some places, and if it was accepted by QS, it might explain why certain universities did strikingly better in 2005.

THES will, let's hope, be a lot more transparent about how they do the next ranking.

Friday, September 08, 2006

More on the Rise of Ecole Polytechnique

I have already mentioned the remarkable rise of the Ecole Polytechnique (EP), Paris, in the Times Higher Education Supplement (THES) world university rankings to 10th place in the world and first in Continental Europe. This was largely due to what looked like a massive increase in the number of teaching staff between 2004 and 2005. I speculated that what happened was that QS, THES's consultants, had counted part-time faculty in 2005 but not in 2004.

The likelihood that this is what happened is confirmed by data from QS themselves. Their website provides some basic information about EP. There are two different sets of figures for the numbers of faculty and students on the page for EP. At the top it says the ecole has 2,500 students and 380 faculty members. At the bottom there is a box, DATAFILE, which indicates that the ecole has 1,900 faculty and 2,468 students.

In 2004, the top-scoring university in the faculty-student ratio category was Ecole Normale Superieure (ENS), another French grande ecole. According to QS's current data, ENS has 1,800 students and 900 faculty, or 2 students per faculty member. If the numbers of faculty and students at ENS remained the same between 2004 and 2005, then EP's score for faculty-student ratio would have gone from several times lower than ENS's in 2004 (23 out of 100) to quite a bit higher (100, the new top score) in 2005.

Going back to QS's figures, the first set of data gives us 6.58 students per faculty member and the second 1.30.
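
A quick check of that arithmetic, using the two sets of numbers from the QS page:

```python
# Students per faculty member from QS's two sets of figures for EP.
figures = {
    "top of page": (2500, 380),    # students, faculty
    "DATAFILE box": (2468, 1900),
}
for label, (students, faculty) in figures.items():
    print(f"{label}: {students / faculty:.2f} students per faculty member")
# top of page: 6.58 students per faculty member
# DATAFILE box: 1.30 students per faculty member
```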

EP's dramatic improvement is most probably explained by their using the first set of figures, or something similar, in 2004 and the second set, or something similar, in 2005.

The main difference between the two is the number of faculty, 380 compared to 1,900. Most probably, the 1,500-plus difference represents part-timers. Once again, I would be happy to hear of another explanation. I am sure these part-timers are a lot more distinguished than the adjuncts and graduate assistants who do far too much teaching in American universities, but should they really be counted as equivalent to full-time teaching faculty?

The next question is why nobody else has noticed this.

Tuesday, September 05, 2006

So That's How They Did It

For some time I've been wondering how the panels for the Times Higher Education Supplement (THES) World University Ranking peer review in 2004 and 2005 were chosen. THES have been very coy about this, telling us only how many reviewers were involved, the continents they came from and the broad disciplinary areas. What they have not done is give any information about exactly how these experts were selected, how they were distributed between countries, what the response rate was, exactly what questions were asked, whether respondents were allowed to pick their own universities, how many universities they could pick and so on. In short, we are given none of the information that would be required from even the most lackadaisical writer of a doctoral dissertation.

Something interesting has appeared on websites in Russia and New Zealand. Here are the links. The first is from the Special Astrophysical Observatory of the Russian Academy of Sciences: http://www.sao.ru/lib/news/WScientific/WSci4.htm

The second is from the University of Auckland, New Zealand:
http://www.aus.ac.nz/branches/auckland/akld06/AUS-SP.pdf.

The document is a message from QS, the consultants used by THES for their ranking exercise, soliciting respondents for the 2005 peer review. It begins with a quotation from Richard Sykes, Rector of Imperial College, London: "you need smart people to recognise smart people".

As if being acknowledged as a smart person who can recognise smart people were not enough, anyone spending five minutes filling out an online form will qualify for a bunch of goodies, comprising a discount on attending the Asia Pacific Leaders in Education Conference in Singapore, a one-month trial subscription to the THES, a chance to win a stand at the World Grad School Tour, a chance to qualify for a free exhibition table at "these prestigious events" and a chance to win a BlackBerry personal organiser.

It is quite common in social science research to pay survey participants for their time and trouble, but this might be a bit excessive. It could also lead to a bias in the response rate. After all, not everybody is going to get very excited about going to those prestigious events. But some people might, and they are more likely to be in certain disciplines and in certain places than others.

But the most interesting thing is the bit at the top of the Russian page. The message was addressed not to any particular person but simply to "World Scientific Subscriber". World Scientific is an online collection of scientific journals. One wonders whether QS had any way of checking who they were getting replies from. Was it the head of the Observatory or some exploited graduate student whose job was to check the e-mail? Also, did they send the survey to all World Scientific subscribers, or just to some of them, or only to those in Russia or Eastern Europe?

So now you know what to do if you want to get on the THES panel of peer reviewers. Subscribe to World Scientific and, perhaps, a few other online subscription services, or work for an institution that does. With a bit of luck you will be recognised as a real smart person and get a chance to vote your employer and your alma mater into the Top 300 or 200.

The Fastest Way into the THES Top 200

In a little while the latest edition of the THES rankings will be out. There will be protests from those who fail to make the top 200, 300 or 500 and much self-congratulation from those included. Also, of course, THES and QS, THES’s consultants, directly or indirectly, will make a lot of money from the whole business.

If you search through the web you will find that QS and THES have been quite busy over the last year or so promoting their rankings and giving advice about how to get into the top 200. Some of their advice is not very helpful. Thus, Nunzio Quacquarelli, director of QS, told a seminar in Kuala Lumpur in November 2005 that producing more quality research was one way of moving up in the rankings. This is not necessarily a bad thing, but it will be at least a decade before any quality research can be completed, written up, submitted for publication, revised, finally accepted, published, and then cited by another researcher whose work goes through the same processes. Only then will research start to push a university into the top 200 or 100 by boosting its score for citations per faculty.

Something less advertised is that once a university has got onto the list of 300 universities (so far this has been decided by peer review) there is a very simple way of boosting a university’s position in the rankings. It is also not unlikely that several universities have already realized this.

Pause for a minute and review the THES methodology. They gave a weighting of 40 per cent to a review of universities by other academics, 10 per cent to a rating by employers, 20 per cent to the ratio of faculty to students, 10 per cent to the proportion of international faculty and students, and 20 per cent to the number of citations per faculty. In 2005 the top-scoring institution in each category was given a score of 100 and the scores of the others were calibrated accordingly.
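
As a rough sketch of how the 2005 total is assembled (the component scores here are invented, purely to show the arithmetic):

```python
# 2005 THES weighting scheme as described above; all component scores
# are taken to be already indexed so that the top scorer gets 100.
WEIGHTS = {
    "peer_review": 0.40,
    "employer_rating": 0.10,
    "faculty_student_ratio": 0.20,
    "international_faculty_and_students": 0.10,
    "citations_per_faculty": 0.20,
}

def overall(components):
    """Weighted sum of the indexed (0-100) component scores."""
    return sum(WEIGHTS[k] * components[k] for k in WEIGHTS)

# Invented scores for a hypothetical mid-table university.
example = {
    "peer_review": 60,
    "employer_rating": 50,
    "faculty_student_ratio": 23,
    "international_faculty_and_students": 40,
    "citations_per_faculty": 15,
}
print(round(overall(example), 1))  # 40.6
```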

Getting back to boosting ratings, first take a look at the 2004 and 2005 scores for citations per faculty. Comparison is a bit difficult because the top scorer is given a score of 400 in 2004 and one of 100 in 2005 (it's MIT in both cases). What immediately demands attention is that there are some very dramatic changes between 2004 and 2005.

For example, Ecole Polytechnique in Paris fell from 14.75 (dividing the THES figures by four because top-ranked MIT was given a score of 400 in 2004) to 4, ETH Zurich from 66.5 to 8, and McGill in Canada from 21 to 8.

This at first sight is more than a bit strange. The figures are supposed to refer to ten-year periods, so that in 2005 citations for the earliest year would be dropped and those for another year added. You would not expect very much change from year to year, since the figures for 2004 and 2005 overlap a great deal.

But it is not only citations that we have to consider. The score is actually based on citations per faculty member. So, if the number of faculty goes up and the number of citations remains the same then the score for citations per faculty goes down.
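
A minimal illustration of the mechanism, with an invented citation count attached to the two faculty figures quoted for Ecole Polytechnique above:

```python
# "Faculty inflation": a larger faculty figure lowers citations per faculty
# but improves the faculty-student ratio, even if nothing else changes.
citations = 10000   # invented, held constant
students = 2500     # held constant

for faculty in (380, 1900):   # e.g. full-time only vs. full-time plus part-time
    print(f"faculty = {faculty}: "
          f"{citations / faculty:.1f} citations per faculty, "
          f"{students / faculty:.2f} students per faculty")
# faculty = 380:  26.3 citations per faculty, 6.58 students per faculty
# faculty = 1900:  5.3 citations per faculty, 1.32 students per faculty
```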

This in fact is what happened to a lot of universities. If we look at the score for citations per faculty and then the score for faculty-student ratio there are several cases where they change proportionately but in opposite directions.

So, going back to the three examples given above, between 2004 and 2005 Ecole Polytechnique went up from 23 to 100, becoming the top scorer for faculty-student ratio, ETH Zurich from 4 to 37, and McGill from 23 to 42. Notice that the rise in the faculty-student ratio score is roughly proportionate to the fall in the score for citations per faculty.

I am not the first person to notice the apparent dramatic collapse of research activity at ETH Zurich. Norbert Staub in ETH Life International was puzzled by this. It was not that ETH Zurich stopped doing research; rather, it apparently acquired something like eight times as many teachers.

It seems pretty obvious that what happened to these institutions is that the apparent number of faculty went up between 2004 and 2005. This led to a rise in the score for faculty-student ratio and a fall in the number of citations per faculty.

You might ask, so what? If a university goes up on one measure and goes down on another surely the total score will remain unchanged.

Not always. THES has indexed the scores to the top-scoring university, so that in 2005 the top scorer gets 100 for both faculty-student ratio and citations per faculty. But the gap between the top university for faculty-student ratio and the run-of-the-mill places in, say, the second hundred is much smaller than it is for citations per faculty. For example, take a look at the faculty-student scores of the universities starting at position number 100. We have 15, 4, 13, 10, 23, 16, 13, 29, 12, 23. Then look at the scores for citations per faculty: 7, 1, 8, 6, 0, 12, 9, 14, 12, 7.
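
A quick calculation on those two runs of scores makes the asymmetry concrete:

```python
# Mean scores for the ten universities at positions 100-109, as quoted above.
faculty_student = [15, 4, 13, 10, 23, 16, 13, 29, 12, 23]
citations       = [7, 1, 8, 6, 0, 12, 9, 14, 12, 7]

print(sum(faculty_student) / len(faculty_student))  # 15.8
print(sum(citations) / len(citations))              # 7.6
```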

That means that many universities can, like Ecole Polytechnique, gain much more by increasing their faculty-student ratio than they lose by reducing their citations per faculty. Not all, of course: ETH Zurich suffered badly as a result of this faculty inflation.

So what is going on? Are we really to believe that in 2005 Ecole Polytechnique quadrupled its teaching staff, ETH Zurich increased its staff eightfold and McGill's nearly doubled? This is totally implausible. The only explanation that makes any sort of sense is that either QS or the institutions concerned were counting their teachers differently in 2004 and 2005.

The likeliest explanation for Ecole Polytechnique's remarkable change is simply that in 2004 only full-time staff were counted but in 2005 part-time staff were counted as well. It is well known that many staff of the Grandes Ecoles of France are employed by neighbouring research institutes and universities, although exactly how many is hard to find out. If anyone can suggest any other explanation, please let me know.

Going through the rankings, we find quite a few universities affected by what we might call "faculty inflation": EPF Lausanne goes from 13 to 64, Eindhoven from 11 to 54, the University of California at San Francisco from 39 to 91, Nagoya from 19 to 35, and Hong Kong from 8 to 17.

So, having got through the peer review, this is how to get a boost in the rankings. Just inflate the number of teachers and deflate the number of students.

Here are some ways to do it. Wherever possible, hire part-time teachers but don't differentiate between full- and part-time. Announce that every graduate student is a teaching assistant, even if they just have to do a bit of marking, and count them as teaching staff. Make sure anyone who leaves is designated emeritus or emerita and kept on the books. Never sack anyone; just keep him or her suspended. Count everybody at branch campuses and on off-campus programmes. Classify all administrative appointees as teaching staff.

It will also help to keep the official number of students down. A few possible ways are not counting part-time students, not counting branch campuses, and counting at the end of the semester, when some have dropped out.

Wednesday, August 30, 2006

Comparing the Newsweek and THES Top 100 Universities

It seems to be university ranking season again. Shanghai Jiao Tong University has just come out with their 2006 edition and it looks like there will be another Times Higher Education Supplement (THES) ranking quite soon. Now, Newsweek has joined in with its own list of the world’s top 100 universities.

The Newsweek list is, for the most part, not original but it does show something extremely interesting about the THES rankings.

What Newsweek did was to combine bits of the THES and Shanghai rankings (presumably for 2005, although Newsweek does not say). They took three components from the Shanghai index -- the number of highly cited researchers, the number of articles in Nature and Science, and the number of articles in the ISI Social Sciences and Arts and Humanities Indices (the SJTU ranking actually also includes the Science Citation Index) -- and gave them a weighting of 50 per cent. Then they took four components from the THES rankings: percentage of international faculty, percentage of international students, faculty-student ratio and citations per faculty. They also added a score derived from the number of books in the university library.
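
A sketch of what such a combined score might look like. The 50 per cent weight on the three Shanghai components is from Newsweek; how the remaining 50 per cent is split between the four THES components and library holdings is my assumption, for illustration only.

```python
# Newsweek-style combined score (weights partly assumed, see above).
# Each component score is taken to be indexed 0-100 already.
WEIGHTS = {
    "highly_cited_researchers": 50 / 3,   # three Shanghai components share
    "nature_science_articles":  50 / 3,   # 50 per cent between them
    "isi_indexed_articles":     50 / 3,
    "international_faculty":    10,       # assumed split of the other half:
    "international_students":   10,       # four THES components plus library
    "faculty_student_ratio":    10,       # holdings at 10 per cent each
    "citations_per_faculty":    10,
    "library_volumes":          10,
}

def combined_score(components):
    """Weighted average of the indexed component scores."""
    return sum(w * components.get(k, 0) for k, w in WEIGHTS.items()) / 100
```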

Incidentally, it is a bit irritating that Newsweek, like some other commentators, refers to the THES as The Times of London. The THES has in fact long been a separate publication and is no longer even owned by the same company as the newspaper.

The idea of combining data from different rankings is not bad, although Newsweek does not indicate why they assign the weightings that they do. It is a shame, though, that they keep THES’s data on international students and faculty and faculty-student ratio, which do not show very much and are probably easy to manipulate.

Still, it seems that this ranking, as far as it goes, is probably better than either the THES or the Shanghai ones, considered separately. The main problem is that it includes only 100 universities and therefore tells us nothing at all about the thousands of others.

The Newsweek ranking is also notable for what it leaves out. It does not include the THES peer review, which accounted for 50 per cent of the ranking in 2004 and 40 per cent in 2005, or the rating by employers, which contributed 10 per cent in 2005. If we compare the top 100 universities in the THES ranking with Newsweek's top 100, some very interesting patterns emerge. Essentially, the Newsweek ranking tells us what happens if we take the THES peer review out of the equation.

First, a lot of universities have a much lower position in the Newsweek ranking than they do in the THES's, and some even disappear altogether from the former. But the decline is not random by any means. All four French institutions suffer a decline. Of the 14 British universities, 2 go up, 2 stay in the same place and 10 go down. Altogether 26 European universities fall and five (three of them from Switzerland) rise.

The four Chinese (PRC) universities in the THES top 100 disappear altogether from the Newsweek top 100 while most Asian universities decline. Ten Australian universities go down and one goes up.


There are some truly spectacular tumbles. They include Peking University (which THES likes to call Beijing University), the best university in Asia and number 15 in the world according to THES, which drops out altogether. The Indian Institutes of Technology have also gone. Monash falls from 33 to 73, Ecole Polytechnique in Paris from 10 to 43, and Melbourne from 19 to 53.

So what is going on? Basically, it looks as though the function of the THES peer and employer reviews was to allow universities from Australia, from Europe (especially France and the United Kingdom) and from Asia (especially China) to do much better than they would on any other possible measure or combination of measures.

Did THES see something that everybody else was missing? It is unlikely. The THES peer reviewers are described as experts in their fields and as research-active academics. They are not described as experts in teaching methodology or as involved in teaching or curricular reform. So it seems that this is supposed to be a review of the research standing of universities, not of teaching quality or anything else. And for some countries it is quite a good one. For North America, the United Kingdom, Germany, Australia and Japan, there is a high correlation between the scores for citations per faculty and the peer review. For other places it is not so good. There is no correlation between the peer review and citations for Asia overall, China, France or the Netherlands. For the whole of the THES top 200 there is only a weak correlation.

So a high score on the peer review does not necessarily reflect a high research profile and it is hard to see that it reflects anything else.

It appears that the THES peer review, and therefore the ranking as a whole, was basically a kind of ranking gerrymandering in which the results were shaped by the method of sampling. QS took about a third of its peers from each of North America, Europe and Asia and then asked them to name the top universities in their geographic areas. No wonder that we have large numbers of European, Asian and especially Australian universities in the top 200. Had THES surveyed an equal number of reviewers from Latin America and Africa (“major cultural regions”?), the results would have been different. Had they asked reviewers to nominate universities outside their own countries (surely quality means being known in other countries or continents?), they would have been even more different.

Is it entirely a coincidence that the regions that are disproportionately favoured by the peer review -- the UK, France, China and Australia -- are precisely those where QS, the consultants who carried out the survey, have offices, and precisely those regions that are active in the production of MBAs and the lucrative globalised trade in students, teachers and researchers?

Anyway, it will be interesting to see if THES is going to do the same sort of thing this year.