Wednesday, November 01, 2006

THES and QS: Some Remarks on Methodology

The Times Higher Education Supplement (THES) has come out with its third international ranking of universities. The most important part is a peer review, with academics responding to a survey in which they are asked to nominate the top universities in subject areas and geographical regions.

QS Quacquarelli Symonds, THES's consultants, have published a brief description of how they did the peer review.

Here is what they have to say:

Peer Review: Over 190,000 academics were emailed a request to complete our online survey this year. Over 1600 responded - contributing to our response universe of 3,703 unique responses in the last three years. Previous respondents are given the opportunity to update their response.

Respondents are asked to identify both their subject area of expertise and their regional knowledge. They are then asked to select up to 30 institutions from their region(s) that they consider to be the best in their area(s) of expertise. There are at present approximately 540 institutions in the initial list. Responses are weighted by region to generate a peer review score for each of our principal subject areas which are:

Arts & Humanities
Engineering & IT
Life Sciences & Biomedicine
Natural Sciences
Social Sciences

The five scores by subject area are compiled into a single overall peer review score with an equal emphasis placed on each of the five areas.
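QS does not publish the exact formula, but, taken at face value, the description suggests an aggregation along the following lines. This is only an illustrative sketch in Python: the region weights, the normalisation and the function names are my own assumptions, not QS's published method.

# Illustrative sketch only: the region weights and normalisation below are
# assumptions, not QS's published method.

REGION_WEIGHTS = {"North America": 1.0, "Europe": 1.0, "Asia-Pacific": 1.2}  # hypothetical

SUBJECT_AREAS = [
    "Arts & Humanities",
    "Engineering & IT",
    "Life Sciences & Biomedicine",
    "Natural Sciences",
    "Social Sciences",
]

def subject_score(nominations):
    # nominations: list of (region, count) pairs for one university in one subject area.
    # Nomination counts are weighted by the respondent's region.
    return sum(REGION_WEIGHTS.get(region, 1.0) * count for region, count in nominations)

def overall_peer_review_score(scores_by_subject):
    # Equal emphasis on each of the five subject areas: a simple mean.
    return sum(scores_by_subject[s] for s in SUBJECT_AREAS) / len(SUBJECT_AREAS)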

The claim that QS sent e-mails to 190,000 academics is unbelievable and the correct number is surely 1,900. I have queried QS about this but so far there has been no response.

If these numbers are correct then it means that QS have probably achieved the lowest response rate in survey research history - well under one per cent. If they sent e-mails to 1,900 academics and added a couple of zeros by mistake, we have to ask how many more mistakes they have made. Anyway, it will be interesting to see how QS responds to my question, if indeed they ever do.
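The arithmetic is easy enough to check. A trivial calculation using the figures quoted above:

responses = 1600
emailed_as_published = 190000  # the figure QS published
emailed_if_typo = 1900         # the figure I suspect was intended

print(responses / emailed_as_published)  # about 0.0084, i.e. under one per cent
print(responses / emailed_if_typo)       # about 0.84, i.e. roughly 84 per cent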

Combining this with other snippets of information, we can get some sort of picture of how QS proceeded with the peer review.

In 2004 they sent academics emails containing a list of 300 universities, divided into subject and regional areas, and 1,300 replied. Respondents were asked to pick up to 30 universities in the subjects and the geographical areas in which they felt they had expertise. They were allowed to add names to the lists.

In 2005, the 2004 reviewers were asked whether they wanted to add to or subtract from their previous responses. Additional reviewers were sent e-mails, so that the total was now 2,375.

In 2006 the 2004 and 2005 reviewers were asked whether they wanted to make changes. A further 1,900 (surely?) academics were sent forms and 1,600 returned them, making a total of 3,703 reviewers (after, presumably, some of the old reviewers did not reply). With additions made in previous years, QS now has a list of 520 institutions.
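For what it is worth, the running totals can be reconciled roughly as follows; the implied number of drop-outs is my own inference from the figures above, not anything QS has stated.

total_2005 = 2375   # cumulative total of reviewers after 2005
new_2006 = 1600     # new responses reported for 2006
total_2006 = 3703   # cumulative "response universe" reported for 2006

expected = total_2005 + new_2006          # 3975 if every earlier reviewer were retained
implied_dropouts = expected - total_2006  # 272 earlier reviewers apparently not counted
print(expected, implied_dropouts)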

I would like to make three points. Firstly, a lot seems to depend on getting onto the original list of 300 universities in 2004. Once on the list, universities apparently are not removed, while a university that was left off might be very good yet never quite prominent enough to attract a "write-in vote". So how was the original list chosen?

Secondly, the subject areas in three cases are different from those indicated by THES. QS has Natural Sciences, Engineering and IT, and Life Sciences and Biomedicine, while THES has Science, Technology and Biomedicine. This is a bit sloppy and maybe indicative of communication problems between THES and QS.

Thirdly, it is obvious that the review is of research quality -- QS explicitly says so -- and not of other things as some people have assumed.
