Saturday, May 25, 2013

The Efficiency Rankings

Times Higher Education has a story about a study by Dirk Van Damme, head of the Centre for Educational Research and Innovation at the OECD. The study will be presented at the Global University Summit, held in Whitehall, London, from 28 to 30 May.

The Summit "brings an invitation-only audience of leaders from the world’s foremost universities, senior policy-makers and international business executives to London in 2013." It is a "prestigious event" held in a "spectacular setting" and is sponsored by the University of Warwick, Times Higher Education, Thomson Reuters and UK Universities International Unit. Speakers include Vince Cable, Boris Johnson, the Russian ambassador and heads of various universities from around the world.

What Professor Van Damme has done is to treat the THE World University Rankings Research Indicator scores as an input and the Research Influence (Citations) scores as an output. The output scores are divided by the input scores and the result is a measure of the efficiency with which the inputs are turned into citations, which, as we all know, is the main function of the modern university.

According to THE:

"The input indicator takes scaled and normalised measures of research income and volume into account, and also considers reputation, while the output indicator looks at citations to institutional papers in Thomson Reuters’ Web of Science database, normalised for subject differences.
Professor van Damme said that the results - which show that university systems outside the Anglo-American elite are able to realise and increase outputs with much lower levels of input - did not surprise him.
“For example, Switzerland really invests in the right types of research. It has a few universities in which it concentrates resources, and they do very well,” he said.
Previous studies have found the UK to have the most efficient research system on measures of citation per researcher and per unit of spending.
But Professor van Damme explained that under his approach, productivity - output per staff member - was included as an input.
“With efficiency I mean the total research capacity of an institution, including its productivity, divided by its impact. The UK is not doing badly at all, but other countries are doing better, such as Ireland, which has a very low research score but a good citations score,” he said.
Given the severity of the country’s economic crisis, Ireland’s success was particularly impressive, he said.
“I think it is really conscious of the effort it has to make to maintain its position and is doing so.”
Low efficiency scores for China and South Korea reflected the countries’ problems in translating their huge investment into outputs, he added."

One hesitates to be negative about a paper presented at a prestigious event in a spectacular setting to an invitation-only audience, but this is, frankly, rather silly.

I would accept that income can be regarded as an input, but surely not reputation and surely not volume of publications. Also, unless Van Damme's methodology has undisclosed refinements, he is treating research scores as having the same value regardless of whether they are composed mainly of reputation, volume of publications or research income.

Then there is the time period concerned. Research income is income for one year; publications are drawn from a five-year period. These are then compared with citations over a six-year period. So the paper is in effect asking how research income for 2010 produced citations in the years 2006-2011 to papers published in the years 2006-2010. A university is certainly being remarkably efficient if its 2010 income is producing citations in 2006, 2007, 2008 and 2009.

Turning to the citations side of the equation, it should be recalled that the THE citations indicator includes an adjustment by which the citation impact score for a university is divided by the square root of the citation impact score for the country as a whole. In other words, a university located in a country where papers are not cited very much gets a big boost, and the lower the national citation impact score, the bigger the boost. This is why Hong Kong universities suffered reduced scores when Thomson Reuters took them out of China for the purpose of counting citations and put them in their own separate category.
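To see how much difference this makes, here is a minimal sketch of the adjustment as I understand it, with invented numbers; the actual Thomson Reuters normalisation has further steps.

```python
# Sketch of the country adjustment described above, with invented numbers.
# Only the square-root effect is shown; the real calculation has further steps.

def adjusted_impact(university_impact: float, country_impact: float) -> float:
    """Divide a university's citation impact by the square root of its country's impact."""
    return university_impact / country_impact ** 0.5

# Two hypothetical universities with identical raw impact scores
print(adjusted_impact(1.2, country_impact=1.0))   # strong publishing country: 1.2
print(adjusted_impact(1.2, country_impact=0.25))  # weakly cited country: 2.4
```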

So, it is not surprising that universities from outside the Anglo-Saxon elite do well for citations and thus appear to be very efficient. Thomson Reuters' methodology gives such universities a very substantial boost just for being located in countries that are less productive in terms of citations.

None of this is new. In 2010 Van Damme did something similar at a seminar in London.

Van Damme is just analysing the top 200 universities in the THE rankings. It would surely be more interesting to analyse the top 400 whose scores are obtainable from an iPad/iPhone app.

So here are the top ten universities in the world according to the efficiency with which they turn income, reputation and publications into citations. The procedure is simply to divide the citations indicator score in the 2012 THE rankings by the research indicator score; a minimal sketch of the calculation follows the list.

1.  Tokyo Metropolitan University
2.  Moscow State Engineering Physics Institute
3.  Florida Institute of Technology
4.  Southern Methodist University
5.  University of Hertfordshire
6.  University of Portsmouth
7.  King Mongkut's University of Technology
8.  Vigo University
9.  Creighton University
10. Fribourg University
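Here is that sketch, with invented scores showing how a modest research score combined with a big citations score produces a high "efficiency" ratio:

```python
# The "efficiency" measure used here: THE citations indicator score divided by
# THE research indicator score. The scores below are invented for illustration.

sample_scores = {
    # university: (citations_score, research_score)
    "University A": (98.0, 15.0),   # modest research score, huge citations score
    "University B": (60.0, 80.0),   # strong research score, lower citations score
}

efficiency = {name: c / r for name, (c, r) in sample_scores.items()}

for name, ratio in sorted(efficiency.items(), key=lambda x: x[1], reverse=True):
    print(f"{name}: {ratio:.2f}")   # University A: 6.53, University B: 0.75
```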

No doubt the good and the great of the academic world assembled in Whitehall will make a trip to Portsmouth or even to Vigo or Creighton if they can find them on the map.

And now for the hall of shame. Here are the bottom ten of the THE top 400, ranked according to efficiency as measured by citations indicator scores divided by research scores. The heads of these failing institutions will no doubt be packing their bags and looking for jobs as junior administrative assistants at technical colleges in Siberia or the upper Amazon.


391.  Tsinghua University
392.  Chinese University of Hong Kong
393.  National Taiwan University
394.  National Chiao Tung University
395.  Tilburg University
396.  Delft University of Technology
397.  Seoul National University
398.  State University of Campinas
399.  Sao Paulo University
400.  Lomonosov Moscow State University

In a little while I hope to publish the full 400 after I have finished being sarcastic about the QS subject rankings.

Friday, May 24, 2013

Update on IREG Approval 

  • The International Ranking Expert Group has also given its approval to the national ranking produced by the Perspektywy Education Foundation of Poland.

  • The approval given to the QS World, Asian and Latin American Rankings does not apply to the QS Stars.
Details here.


Saturday, May 18, 2013

The First IREG Audit 

QS is the first ranking organisation to get the seal of approval from the International Ranking Expert Group (IREG) for its World, Asian and Latin American rankings.

The IREG audit process would appear on the surface to be quite rigorous. Take a look at the audit manual. There are a number of criteria, some of which sound quite daunting but are not really so. For example, Criterion 8 says:

 "If rankings are using composite indicators the weights of the individual indicators have to be published. Changes in weights over time should be limited and due to methodological or conception-related considerations."

Fair enough, but there is nothing about how weighting should be distributed across the indicators in the first place. Forty per cent for the academic survey in the QS rankings?

Some criteria are obvious -- providing a contact address, for example. Others are so vague that they mean very little -- "organisational measures that enhance the credibility of rankings".

The basic principle of the audit is that ranking organisations are given scores ranging from 1 (not sufficient/not applied) to 6 (distinguished) for the various criteria, with a double weighting for core criteria. The maximum score is 180. According to the manual:

"On the bases of the assessment scale described above, the threshold for a positive audit decision will be 60 per cent of the maximum total score. This means the average score on the individual criteria has to be slightly higher than “adequate”. In order to establish the IREG Ranking Audit as a quality label none of the core criteria must be assessed with a score lower than three."

So a positive result could mean that an organisation is distinguished in everything. It could also mean that it is on average slightly higher than adequate. It would be interesting to know which applies to QS.
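For concreteness, here is a sketch of the pass rule as I read the manual, with invented criterion scores; the real number of criteria and their classification are set out in the manual itself.

```python
# Sketch of the IREG decision rule described above, with invented criterion scores:
# each criterion is scored 1-6, core criteria count double, the weighted total must
# reach 60 per cent of the maximum, and no core criterion may fall below 3.

def audit_passes(core_scores, other_scores):
    total = 2 * sum(core_scores) + sum(other_scores)
    maximum = 2 * 6 * len(core_scores) + 6 * len(other_scores)
    return total >= 0.6 * maximum and min(core_scores) >= 3

print(audit_passes(core_scores=[4, 4, 3, 5], other_scores=[3, 4, 3, 3, 4]))  # True: just over the line
print(audit_passes(core_scores=[2, 5, 5, 5], other_scores=[6, 6, 6, 6, 6]))  # False: one core criterion below 3
```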

I do not know whether the auditors had any criticisms to make. If not it is difficult to see the point of the exercise. If they did it would be nice to know what they were.

QS are to be commended for submitting to the audit, even though it was probably not very searching. It still seems that the ranking world needs more and better monitoring and observation.

Wednesday, May 15, 2013

QS Rankings by Subject

QS have produced their annual subject rankings. At the top there are no real surprises and, while there is certainly room for argument, I do not think that anyone will be shocked by the top ten or twenty in each subject.

The university with the most number ones is Harvard, with ten:

Medicine
Biology
Psychology
Pharmacy and Pharmacology
Earth and Marine Sciences
Politics and International Studies
Law
Economics and Econometrics
Accounting and Finance
Education

MIT has seven:
Computer Science
Chemical Engineering
Electrical Engineering
Mechanical Engineering
Physics and Astronomy
Chemistry
Materials Science

Then there is Berkeley with exactly the four you would expect:
Environmental Science
Statistics and Operational Research
Sociology
Communication and Media Studies

Oxford has three:

Philosophy
Modern Languages
Geography

Cambridge another three:
History
Linguistics
Mathematics


Imperial College London is top for Civil Engineering and University of California, Davis for Agriculture and Forestry.


These rankings are based on the academic opinion survey, the employer survey, citations per paper and the h-index (a measure of both output and influence that reduces the effect of outliers), in proportions that vary from subject to subject. They are very research-focused, which is unfortunate since there seems to be a consensus emerging at conferences and seminars that the THE-TR rankings are for policy makers, the Shanghai ARWU for researchers and the QS rankings for undergraduate students.
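Mechanically, each subject score is a weighted sum of those components. A minimal sketch, with hypothetical weights (QS publishes the actual weights, which differ from subject to subject):

```python
# Sketch of a weighted composite of the four components mentioned above.
# The weights here are hypothetical; QS publishes the real ones per subject.

def subject_score(academic, employer, citations_per_paper, h_index,
                  weights=(0.4, 0.2, 0.2, 0.2)):
    components = (academic, employer, citations_per_paper, h_index)
    return sum(w * c for w, c in zip(weights, components))

# All component scores assumed to be on a 0-100 scale
print(subject_score(academic=85, employer=70, citations_per_paper=60, h_index=75))  # 75.0
```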

Outside the top fifty or top one hundred there are some oddities resulting from the small number of survey responses. I will leave it to specialists to find them.

Tuesday, May 07, 2013

Unsolicited Advice



There has been a lot of debate recently about the reputation survey component in the QS World University Rankings.

The president of University College Cork asked faculty to find friends at other universities who "understand the importance of UCC improving its university world ranking". The reason for the reference to other universities is that the QS survey, very sensibly, does not permit respondents to vote for their own universities, that is, those they list as their affiliation.

This request appears to violate QS's guidelines which permit universities to inform staff about the survey but not to encourage them to nominate or refrain from nominating any particular university. According to an article in Inside Higher Ed QS are considering whether it is necessary to take any action.

This report has given Ben Sowter of QS sufficient concern to argue that it is not possible to manipulate the survey effectively. He has set out a reasonable case why it is unlikely that any institution could succeed in marching graduate students up to their desktops to vote for favoured institutions on pain of being sent to a re-education camp or to teach at a community college.

However, some of his reasons sound a little unconvincing: signing up, screening, an advisory board with years of experience. It would help if he were a little more specific, especially about the sophisticated anomaly detection algorithm, which sounds rather intimidating.

The problem with the academic survey is not that an institution like University College Cork is going to push its way into the global top twenty or top one hundred, but that there could be a systematic bias towards universities that are ambitious or are from certain regions. It is noticeable that some universities in East and Southeast Asia do very much better on the academic survey than on other indicators.

The QS academic survey is getting overly complicated and incoherent. It began as a fairly simple exercise. Its respondents were at first drawn from the subscription lists of World Scientific, an academic publishing company based in Singapore. Not surprisingly, the first academic survey produced a strong, perhaps too strong, showing for Southeast and East Asia and for Berkeley.

The survey turned out to be unsatisfactory, not least because of an extremely small response rate. In succeeding years QS has added respondents drawn from the subscription lists of Mardev, an academic database (largely replacing those from World Scientific), from lists supplied by universities, from academics nominated by respondents to the survey and from those joining the online sign-up facility. It is not clear how many academics are included in these groups or what the various response rates are. In addition, counting responses for three years unless overwritten by the respondent might enhance the stability of the indicator, but it also means that some of the responses might be from people who have died or retired.

The reputation survey does not have a good reputation and it is time for QS to think about revamping the methodology. But changing the methodology means that rankings cannot be used to chart the progress or decline of universities over time. The solution to this dilemma might be to launch a new ranking and keep the old one, perhaps issuing it later in the year or giving it less prominence.

My suggestion to QS is that they keep the current methodology but call it the Original QS Rankings or the QS Classic Rankings. Then they could introduce the QS Plus or New QS Rankings or something similar, which would address the issues about the academic survey and introduce some other changes. Since QS are now offering a wide range of products -- Latin American rankings, Asian rankings, subject rankings, best student cities and probably more to come -- this should not impose an undue burden.

First, starting with the academic survey: 40 per cent is too much for any indicator. It should be reduced to 20 per cent.

Next, the respondents should be divided into clearly defined categories, presented with appropriate questions and appropriately verified.

It should be recognised that subscribing to an online database or being recommended by another faculty member is not really a qualification for judging international research excellence. Neither is getting one's name listed as corresponding author: these days that can have as much to do with faculty politics as with ability. I suggest that the academic survey should be sent to:

(a) highly cited researchers  or those with a high h-index who should be asked about international research excellence;
(b) researchers drawn from the Scopus database who should be asked to rate the regional or national research standing of universities.

Responses should be weighted according to the number of researchers per country.
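A minimal sketch of the sort of reweighting I have in mind, with invented figures:

```python
# Sketch of the reweighting suggested above: scale each country's responses so that
# its share of the weighted total matches its share of the world's researchers.
# All figures are invented.

researchers_by_country = {"Country A": 200_000, "Country B": 50_000}
responses_by_country   = {"Country A": 1_000,   "Country B": 2_000}

total_researchers = sum(researchers_by_country.values())
total_responses = sum(responses_by_country.values())

weights = {
    country: (researchers_by_country[country] / total_researchers)
             / (responses_by_country[country] / total_responses)
    for country in researchers_by_country
}

print(weights)  # Country B is over-represented, so its responses are scaled down
```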

This could be supplemented with a survey of student satisfaction with teaching based on a student version of the sign up facility and requiring a valid academic address with verification.

Also, a sign-up facility could be established for anyone interested, asking a question about general perceived quality.

If QS ever do change the academic survey they might as well review the other indicators. Starting with the employer review, this should be kept since, whatever its flaws, it is an external check on universities. But it might be easier to manipulate than the academic survey. Something was clearly going on in the 2011 ranking when there appeared to be a disproportionate number of respondents from some Latin American countries, leading QS to impose caps on universities exceeding the national average by a significant amount. 

"QS received a dramatic level of response from Latin America in 2011, these counts and all subsequent analysis have been adjusted by applying a weighting to responses from countries with a distinctly disproportionate level of response."

It seems that this problem was sorted out in 2012. Even so, QS might consider giving half the weighting for this survey to an invited panel of employers. Perhaps they could also broaden their database by asking NGOs and non-profit groups about their preferences.

There is little evidence that, overall, the number of international students has anything to do with any measure of quality, and it may also have undesirable backwash effects as universities import large numbers of less able students. The problem is that QS are doing a good business moving graduate students across international borders, so it is unlikely that they will ever consider doing away with this indicator.

Staff-student ratio is by all accounts a very crude indicator of teaching quality. Unfortunately, at the moment there does not appear to be any practical alternative.

One thing that QS could do is to remove research staff from the faculty side of the equation. At the moment a university that hires an army of underpaid research assistants and sacks a few teaching staff, or packs them off to a branch campus, would be recorded as having brought about a great improvement in teaching quality.
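A back-of-the-envelope illustration, with invented numbers, of why this matters:

```python
# Invented numbers showing how counting research-only staff flatters the ratio.

students = 20_000
teaching_staff = 800
research_only_staff = 700

print(students / (teaching_staff + research_only_staff))  # about 13.3 students per "faculty" member
print(students / teaching_staff)                          # 25.0 students per member of teaching staff
```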

Citations are a notoriously problematical way of measuring research influence or quality. The Leiden Ranking shows that there are many ways of measuring research output and influence, and it would be a good idea to combine several different ways of counting citations. QS have already started to use the h-index in their subject rankings this year and have used citations per paper in the Asian University Rankings.

With the 20 per cent left over from reducing the weighting for the academic survey, QS might consider introducing a measure of research output rather than quality, since this would help distinguish among universities outside the elite, and perhaps use internet data from Webometrics as in the Latin American rankings.

Friday, April 26, 2013

New Report Out

Global University Rankings and their Impact: Report II

Andrejs Rauhvargers


Thursday, April 25, 2013


Asian higher education revolution a long way off

My article in University World News can be accessed here.

Saturday, April 20, 2013

The Leiden Ranking

The Leiden ranking for 2013 is out. This is produced by the Centre for Science and Technology Studies (CWTS) at Leiden University and represents pretty much the state of the art in assessing research publications and citations.

A variety of indicators are presented with several different settings, but no overall winner is declared, which means that these rankings are not going to get the publicity given to QS and Times Higher Education.

Here are the top universities, using the default settings provided by CWTS.

Total Publications: Harvard
Citations per Paper: MIT
Normalised Citations per Paper: MIT
Quality of Publications: MIT

There are also indicators for international and industrial collaboration that I hope to discuss later.

It is also noticeable that high flyers in the Times Higher Education citations indicator -- Alexandria University, Moscow Engineering Physics Institute (MEPhI), Hong Kong Baptist University, Royal Holloway -- do not figure at all in the Leiden Ranking. What happened to them?

How could MEPhI, equal first in the world for research influence according to THE and Thomson Reuters, fail to even show up in the normalised citation indicator in the Leiden Ranking?

Firstly, Leiden have collected data for the top 500 universities in the world according to number of publications in the Web of Science. That would have been sufficient to keep these institutions out of the rankings.

In addition, Leiden use fractionalised counting as a default setting, so that the impact of multiple-author publications is divided by the number of university addresses. This would drastically reduce the impact of publications like the Review of Particle Physics.
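A minimal sketch of the difference, with invented numbers; Leiden's actual calculation involves further field normalisation.

```python
# Full versus fractional counting for one heavily cited multi-author paper.
# Numbers are invented.

citations = 1_200
collaborating_institutions = 150

full_credit_per_institution = citations                      # full counting: every institution gets 1200
fractional_credit = citations / collaborating_institutions   # fractional counting: 8 each

print(full_credit_per_institution, fractional_credit)
```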

Also, by "field" Leiden mean five broad subject groups, whereas Thomson Reuters appear to use a larger number (21, if they use the same system as they do for highly cited researchers). There is accordingly more chance of anomalous cases having a great influence in the THE rankings.

THE and Thomson Reuters would do well to look at the multi-authored, and most probably soon to be multi-cited, papers published in 2012 and consider which universities could do well in 2014 if the methodology remains unchanged.


Tuesday, April 02, 2013

Combining Rankings

Meta University Ranking has combined the latest ARWU, QS and THE World Rankings. Universities are ordered by average place, so Harvard comes top with the lowest average of 2.67 (1st in ARWU, 3rd in QS and 4th in THE).

After that there is MIT, Cambridge, Caltech and Oxford.

Tuesday, March 19, 2013

Some More on the THE World University Rankings 2


I have calculated the mean scores for the indicator groups in the 2012-13 Times Higher Education World University Rankings. The mean scores for the 400 universities included in the published 2012-13 rankings are:

Teaching   41.67
International Outlook 52.35
Industry Income 50.74
Research 40.84
Citations 65.25

For Industry Income, N is 363 since 37 universities, mainly in the US, did not submit data. This might be a smart move if the universities realized that they were likely to receive a low score. N is 400 for the others.

There are considerable differences between the indicators, which are probably due to Thomson Reuters' methodology. Although THE publishes data for 200 universities on its website and another 200 on an iPad/iPhone app, there are in fact several hundred more universities that are not included in the published rankings but whose scores are used to calculate the overall mean from which scores for the ranked universities are derived.

A higher score on an indicator means a greater distance above the average of all the institutions in the Thomson Reuters database.

The high scores for citations mean that there is a large gap between the top 400 and the lesser places outside the top 400.

I suspect that the low scores for teaching and research are due to the influence of the academic survey which contributes to both indicator clusters. We have already seen that after the top six, the curve for the survey is relatively flat.

The citations indicator already has a disproportionate influence, contributing 30 per cent to the overall weighting. That 30 per cent is of course the nominal weighting. Since universities on average are getting more for citations than for the other indicators, it has in practice a correspondingly greater weighting.
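A rough illustration of what this means in practice, using the mean scores listed above and the published THE weights as I understand them (30 per cent each for teaching, research and citations, 7.5 per cent for international outlook, 2.5 per cent for industry income):

```python
# Rough illustration: share of the average overall score contributed by each
# indicator, given its nominal weight and the mean scores listed above.

nominal_weights = {"Teaching": 0.30, "International Outlook": 0.075,
                   "Industry Income": 0.025, "Research": 0.30, "Citations": 0.30}
mean_scores = {"Teaching": 41.67, "International Outlook": 52.35,
               "Industry Income": 50.74, "Research": 40.84, "Citations": 65.25}

contributions = {k: nominal_weights[k] * mean_scores[k] for k in mean_scores}
total = sum(contributions.values())

for name, value in contributions.items():
    print(f"{name}: {value / total:.1%} of the average overall score")
# Citations end up contributing roughly 40 per cent, well above the nominal 30 per cent.
```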

Friday, March 15, 2013

Some More on the THE World University Rankings 2012-13

Here are some observations based on a simple analysis of the Times Higher Education World University Rankings of 2012-13.

First, calculating the Pearson correlations between the indicator groups produces some interesting points. If a ranking is valid we would expect the correlations between indicators to be fairly high but not too high. If the correlations between indicators are above .800, this suggests that they are basically measuring the same thing and that there is no point in having more than one indicator. On the other hand, it is safe to assume that if an indicator does measure quality or desired characteristics in some way, it will have a positive relationship with other valid indicators.
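The calculation itself is straightforward; a sketch, assuming the indicator scores have been put into a CSV file (the file name and column names below are hypothetical):

```python
# Pairwise Pearson correlations between the indicator group scores of the ranked
# universities. The file name and column names are hypothetical.

import pandas as pd

scores = pd.read_csv("the_wur_2012_13.csv")   # one row per ranked university
indicators = ["Teaching", "International Outlook", "Industry Income",
              "Research", "Citations"]

print(scores[indicators].corr(method="pearson").round(3))
```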

One thing about the 2012-13 rankings is that the relationship between international outlook (international faculty, students and research collaboration) and the other indicators is negative or very slight. With teaching it is .025 (not significant), industry income .003 (not significant), research .156 and citations .158. This adds to my suspicion that internationalisation, at least among those universities that get into the world rankings, does not per se say very much about quality.

Industry income correlates modestly with teaching (.350) and research (.396), insignificantly with international outlook (.003) and negatively and insignificantly with citations (-.008).

The correlation between research and teaching is very high at .905. This may well be because the survey of academic opinion contributes to the teaching and the research indicators. There are different questions -- one about research and one about postgraduate supervision -- but the difference between the responses is probably quite small.

It is also very interesting that the correlation between scores for research and citations is rather modest at .410. Since volume of publications, funding and reputation should contribute to research influence, which is what citations are supposed to measure, this suggests that the citations indicator needs a careful review.

Teaching, research and international outlook are composites of several indicators. It would be very helpful if THE or Thomson Reuters released the scores for the separate indicators.

Sunday, March 10, 2013


The California Paradox

Looking at the Times Higher Education reputation rankings, I noticed that there were two Californian universities in the superbrand six and seven in the top 50. This is not an anomaly. A slightly different seven can be found in the THE World University Rankings. California does even better in the Shanghai ARWU, with three in the top six and 11 in the top 50. This is a slight improvement on 2003, when there were ten. According to ARWU, California would be the second best country in the world for higher education if it became independent.

California's performance is not so spectacular according to QS, who have just four Californian institutions in their top fifty, a fall from 2004 when they had five (I am not counting the University of California at San Francisco which, being a single-subject medical school, should not have been there). Even so, it is still a creditable performance.

But, if we are to believe many commentators, higher education in California, at least public higher education, is dying if not already dead.

According to Andy Kroll in Salon:

"California’s public higher education system is, in other words, dying a slow death. The promise of a cheap, quality education is slipping away for the working and middle classes, for immigrants, for the very people whom the University of California’s creators held in mind when they began their grand experiment 144 years ago. And don’t think the slow rot of public education is unique to California: that state’s woes are the nation’s".

The villains, according to Kroll, are Californian taxpayers who refuse to accept adding to a tax burden that is already among the highest in the world.

It is surprising that the death throes of higher education in California have gone unnoticed by the well known international rankers.

It is also surprising that public and private universities that are still highly productive and by international standards still lavishly funded exist in the same state as secondary and elementary schools that are close to being the worst in the nation in terms of student performance. The relative and absolute decline in educational achievement is matched by a similar decline in the overall economic performance of the state.

It may be just a matter of time and in the coming decades Californian universities will follow primary and secondary education into irreversible decline.

Preserving data

Times Higher and QS have both renovated their ranking pages recently and both seem to have removed access to some data from previous years. THE used to provide links to the Times Higher Education (Supplement) - Quacquarelli Symonds rankings of 2004-2010 but apparently no longer does. QS do not seem to give access to these rankings before 2007. In both cases, I will update if it turns out that there is a way to get to these rankings.


There is, however, a site which has the rankings for the top 200 of the THES-QS Rankings of 2004-2007.

Wednesday, March 06, 2013

The THE Reputation Rankings

Times Higher Education have published their reputation rankings based on data collected from the World University Rankings of 2012.

They are not very interesting. Which is exactly what they should be. When rankings show massive changes from one year to another a certain amount of scepticism is required.

The same six -- Harvard, MIT, Stanford, Berkeley, Oxford and Cambridge -- are well ahead of everybody else, as they were in 2012 and in 2011.

Taking a quick look at the top fifty, there is little movement between 2011 and 2013. Four universities, from the US, Japan, the Netherlands and Germany, have dropped out. In their place there is one more from Korea, one more from the UK and two more from Australia.

I was under the impression that Australian universities were facing savage cuts in research funding and were going to be deserted by international students and researchers.

Maybe it is the other universities that are being cut. Or maybe a bit of bloodletting is good for the health.

I also noticed that the number of respondents went down a bit in 2012. It could be that the academic world is beginning to suffer from ranking fatigue.

Saturday, March 02, 2013

GRE Country Ranking: Verbal Reasoning

Arranging the mean scores for the 2011-12 GRE Verbal Reasoning test, we can see that the bottom looks rather similar to that of the Quantitative Reasoning test: it comprises African and Arab countries. The top is very different, with five out of six places held by countries where English is currently the native language of a majority of the population.




1  Australia  158.40
2  New Zealand  157.30
3=  Singapore  157.10
3=  Ireland  157.10
3=  UK  157.10
6  Canada  156.00
7  Netherlands  155.50
8  Belgium  155.00
9  US White  154.10
10  Switzerland  153.70
11  Romania  153.50
12=  Sweden  153.30
12=  South Africa  153.30
14  Bulgaria  153.20
15  Norway  153.10
16  USA  152.90
17  Argentina  152.80
18  France  152.70
19  US Asian  152.60
20  Austria  152.50
21  Germany  152.30
22  Denmark  152.30
23  Italy  152.20
24  Croatia  151.70
25  Finland  151.70
26  Uruguay  151.60
27  US American Indian  151.50
28=  Czech Republic  151.40
28=  Israel  151.40
28=  Trinidad  151.40
31  Hungary  151.20
32=  Portugal  150.90
32=  Spain  150.90
34  Poland  150.40
35  Lithuania  150.30
36  US Hispanic  150.20
37  Iceland  149.80
38  US Mexican  149.70
39=  Malaysia  149.50
39=  Barbados  149.50
41  Greece  149.40
42=  Costa Rica  149.10
42=  Philippines  149.10
44  Guatemala  149.00
45  Brazil  148.90
46=  Zimbabwe  148.80
46=  Jamaica  148.80
48  US Puerto Rican  148.70
49=  Georgia  148.60
49=  Bosnia-Herzegovina  148.60
51  Guyana  148.60
52  Moldova  148.40
53  Macedonia  148.30
54=  Peru  148.20
54=  Mexico  148.20
54=  Bahamas  148.20
57  Chile  148.00
58  Belarus  147.90
59  Russia  147.80
60=  Latvia  147.70
60=  Colombia  147.70
62  Albania  147.60
63=  Estonia  147.60
63=  Venezuela  147.60
63=  El Salvador  147.60
63=  Cuba  147.60
63=  St Lucia  147.60
68=  Hong Kong  147.50
68=  South Korea  147.50
70=  Ukraine  147.40
70=  Bahrain  147.40
72=  Serbia  147.30
72=  Bolivia  147.30
74=  Eritrea  147.20
74=  Honduras  147.20
76=  Zambia  147.10
76=  Afghanistan  147.10
78  Pakistan  147.00
79  Panama  146.80
80  US Black  146.70
81=  Ecuador  146.50
81=  Kenya  146.50
83  Nigeria  146.40
84=  Morocco  146.30
84=  Senegal  146.30
84=  Nicaragua  146.30
87=  Cyprus  146.10
87=  Dominican Republic  146.10
87=  Sierra Leone  146.10
90  Uzbekistan  146.00
91  China  145.90
92  Mongolia  145.80
93=  Vietnam  145.70
93=  Myanmar  145.70
95  Kazakhstan  145.60
96=  Togo  145.50
96=  Ghana  145.50
98=  Tunisia  145.20
98=  Uganda  145.20
100  Niger  145.10
101=  Burkina Faso  145.00
101=  Malawi  145.00
103  Kyrgyzstan  144.90
104=  India  144.70
104=  Indonesia  144.70
106  Cote d'Ivoire  144.60
107=  Japan  144.50
107=  Nepal  144.50
107=  Haiti  144.40
110=  Taiwan  144.20
110=  Bangladesh  144.20
110=  Ethiopia  144.20
110=  Benin  144.20
110=  Congo DR  144.20
115  Turkey  144.10
116=  Egypt  143.80
116=  Azerbaijan  143.80
118  Armenia  143.70
119=  Turkmenistan  143.50
119=  Cameroon  143.50
121=  Macao  143.40
121=  Sri Lanka  143.40
121=  Tanzania  143.40
124  Thailand  142.80
125=  Syria  142.70
125=  Rwanda  142.70
127  Qatar  142.50
128  Algeria  141.60
129  Jordan  141.40
130  Oman  141.30
131  Yemen  141.00
132  Congo Republic  140.90
133  Kuwait  140.80
134  Sudan  140.60
135=  UAE  140.30
135=  Mali  140.30
137  Namibia  140.20
138  Saudi Arabia  137.40

Wednesday, February 27, 2013

Ranking Countries by GRE Scores



ETS has produced an analysis of the scores for the Graduate Record Exam required for entry into US graduate schools. Among the more interesting tables are the scores by nationality for the general test, composed of verbal reasoning, quantitative reasoning and analytical writing. This could be regarded as a crude measure of a country's undergraduate education system, although clearly there are all sorts of factors that would blur the picture.

Here are the mean scores for Quantitative Reasoning by country.

1  Hong Kong  169.50
2  China  162.90
3  Singapore  160.30
4  Taiwan  159.20
5  Vietnam  158.90
6  Turkey  158.70
7  South Korea  158.20
8  Macao  158.00
9  France  157.50
10  Belgium  157.10
11  Czech Republic  156.90
12  Israel  156.70
13  Switzerland  156.70
14  Netherlands  156.60
15  Greece  156.40
16=  Bulgaria  156.30
16=  Japan  156.30
18  Hungary  156.20
19  Australia  155.70
20  Germany  155.50
21=  Russia  155.30
21=  Thailand  155.30
23=  Belarus  154.80
23=  Romania  154.80
25=  Bangladesh  154.70
25=  Lithuania  154.70
27  Malaysia  154.60
28=  Eritrea  154.50
28=  Iceland  154.50
28=  Tunisia  154.50
31=  New Zealand  154.40
31=  Ukraine  154.40
33  Latvia  154.30
34  Sri Lanka  154.20
35=  Austria  154.10
35=  India  154.10
35=  Italy  154.10
38=  Indonesia  154.00
38=  Moldova  154.00
40=  Armenia  153.80
40=  Ireland  153.80
42=  Cyprus  153.70
42=  Argentina  153.60
44  Canada  153.60
45=  Nepal  153.50
45=  Portugal  153.50
45=  US Asian  153.50
48=  Albania  153.40
48=  Mongolia  153.40
50=  Croatia  153.30
50=  Egypt  153.30
52  Poland  153.20
53=  Norway  153.10
53=  Pakistan  153.10
53=  Spain  153.10
56  UK  152.90
57=  Denmark  152.80
57=  Kazakhstan  152.80
59=  Chile  152.70
59=  Syria  152.70
61=  Azerbaijan  152.60
61=  Macedonia  152.60
61=  Serbia  152.60
61=  Sweden  152.60
65  Myanmar  152.40
66  Peru  152.30
67=  Turkmenistan  152.20
67=  Uzbekistan  152.20
69  Jordan  151.90
70=  Estonia  151.80
70=  Ethiopia  151.80
70=  Morocco  151.80
73  Georgia  151.60
74  Finland  151.50
75=  South Africa  151.30
75=  Uruguay  151.30
77  Bolivia  150.80
78  Brazil  150.50
79  US White  150.40
80  Bosnia-Herzegovina  150.10
81  Venezuela  150.00
82=  Benin  149.70
82=  Costa Rica  149.70
84  USA  149.50
85  Colombia  149.40
86  Mexico  149.30
87=  Bahrain  149.20
87=  Zimbabwe  149.20
89  Philippines  149.10
90  Trinidad  148.80
91=  Ecuador  148.60
91=  Panama  148.60
91=  UAE  148.60
91=  Yemen  148.60
95=  Algeria  148.50
95=  Sudan  148.50
97=  Guatemala  148.30
97=  Kyrgyzstan  148.30
99  Cote d'Ivoire  148.10
100  Togo  148.00
101  Qatar  147.90
102  Barbados  147.80
103  Rwanda  147.60
104=  El Salvador  147.50
104=  Honduras  147.50
106=  Ghana  147.40
106=  Nigeria  147.40
108  Cuba  147.30
109=  Kenya  147.10
109=  US American Indian  147.10
111  US Hispanic  147.00
112  Cameroon  146.90
113=  Burkina Faso  146.80
113=  Niger  146.80
113=  Zambia  146.80
116=  Dominican Republic  146.50
116=  Guyana  146.50
116=  Kuwait  146.50
116=  Tanzania  146.50
116=  US Mexican  146.50
121=  Uganda  145.90
121=  US Puerto Rican  145.90
123  Jamaica  145.80
124  Oman  145.40
125  Senegal  145.30
126  St Lucia  145.20
127  Congo DR  145.10
128  Nicaragua  144.50
129=  Afghanistan  144.20
129=  Haiti  144.20
131  Mali  144.00
132  Malawi  143.90
133=  Bahamas  143.70
133=  Sierra Leone  143.70
135  US Black  143.10
136  Saudi Arabia  142.80
137  Congo Republic  142.40
138  Namibia  140.20

Friday, February 22, 2013

More Rankings on the Way

Soon it will be springtime in the Northern hemisphere and spring would not be complete without a few more rankings.

The Times Higher Education reputation rankings will be launched in early March at the British Council's Going Global conference in Dubai.


“Almost 50,000 academics have provided their expert insight over just three short annual rounds of the survey, providing a serious worldwide audit of an increasingly important but little-understood aspect of global higher education – a university’s academic brand.”
This year’s reputation rankings will be based on the 16,639 responses, from 144 countries, to Thomson Reuters’ 2012 Academic Reputation Survey, which was carried out during March and April 2012. The 2011 survey attracted 17,554 responses, and 2010’s survey attracted 13,388 respondents.

The survey is by invitation only and academics are selected to be statistically representative of their geographical region and discipline. All are published scholars, questioned about their experiences in the field in which they work. The average time this year’s respondents spent working in the sector was 17 years.


Meanwhile, the QS ranking of 30 subjects is coming soon. Until now these have been based on varying combinations of employer opinion, academic opinion and citations. This year they will be adding an indicator based on the h-index.

Here is a definition from Wikipedia:

"The index is based on the distribution of citations received by a given researcher's publications. Hirsch writes:
A scientist has index h if h of his/her Np papers have at least h citations each, and the other (Np − h) papers have no more than h citations each.
In other words, a scholar with an index of h has published h papers each of which has been cited in other papers at least h times. Thus, the h-index reflects both the number of publications and the number of citations per publication. The index is designed to improve upon simpler measures such as the total number of citations or publications. The index works properly only for comparing scientists working in the same field; citation conventions differ widely among different fields.
The h-index serves as an alternative to more traditional journal impact factor metrics in the evaluation of the impact of the work of a particular researcher. Because only the most highly cited articles contribute to the h-index, its determination is a relatively simpler process. Hirsch has demonstrated that h has high predictive value for whether a scientist has won honors like National Academy membership or the Nobel Prize. "


This means that one paper cited once produces an index of 1, 20 papers cited 20 times an index of 20, 100 papers cited 100 times an index of 100 and so on.
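For anyone who wants to check, the definition translates directly into code; a minimal sketch:

```python
# Direct implementation of the definition quoted above: h is the largest number
# such that h of the papers have at least h citations each.

def h_index(citation_counts):
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

print(h_index([1]))                 # 1
print(h_index([20] * 20))           # 20
print(h_index([100, 50, 5, 4, 1]))  # 4: outliers add little
```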

The point of this is that it combines productivity and quality as measured by citations and reduces the effect of extreme outliers. This is definitely an improvement for QS.





Tuesday, February 19, 2013

Freedom Indicator?

It is often argued that the quality of a university has something to do with academic freedom. Some Western academics have become noticeably self-righteous about respect for human rights in other countries. There have been criticisms of Yale University's links with Singapore, where gay rights are restricted.

One wonders whether Western campuses should talk so loudly about freedom. A recent incident at Carleton University in Canada suggests that when it comes to human rights some humans are much more equal than others.

Carleton has a freedom wall where students can write thoughts that are forbidden in the rest of the campus, probably even in much or most of Canada. Even this was too much for Arun Smith, a seventh year (yes, that's right) human rights student. From the Macleans On Campus blog:

"Seventh-year Carleton University human rights [apparently human rights and political science with a minor in sexuality studies] student Arun Smith has apparently not been in school long enough to learn that other people have rights to opinions that differ from his. After the “free speech wall” on campus was torn down, he posted a message to his Facebook wall claiming responsibility. “If everyone speaks freely we end up simply reinforcing the hierarchies that are created in our society,” it read. The display had been erected by campus club Carleton Students for Liberty and students were encouraged to write anything they wanted on the paper. Someone wrote “abortion is murder” and “traditional marriage is awesome.” GBLTQ Centre volunteer Riley Evans took offense, telling The Charlatan student newspaper that the wall was attacking those who have had abortions and those in same-sex relationships."


It appears that Arun Smith has been widely condemned and that he will be punished. What seems to have been passed over is that it is apparently necessary to have a wall where mainstream religious opinions can be expressed. Yes, I know that "abortion is murder" is a gross simplification of a complex philosophical issue but whose fault is it that it has to be expressed in three words?

The Justice Centre for Constitutional Freedoms has issued a Campus Freedom Index for Canadian universities. Unsurprisingly, Carleton gets a C and 3 Fs. The best appears to be St Thomas, with one A and three Bs.

What about an international edition?





The Commission Strikes Back

Jordi Curell from the European Commission's Directorate General for Education and Culture has written in defence of the proposed U-Multirank university ranking system. He starts:

"Is Times Higher Education worried about competition to its world university ranking from U-Multirank? It looks like it from the tone of its reporting on the new European ranking initiative launched in Dublin at the end of January. "


He concludes:

"However, the EU should not finance U-Multirank forever; this should be limited to the start-up phase. That is why the contract for delivering the ranking includes the design of a self-sustaining business plan and organising the transition to this model.

These are challenging times for higher education in Europe, and the purpose behind U-Multirank could not be clearer. Our objective is improving the performance of Europe's higher education systems – not just selling newspapers."

 
 By the way, THE is a magazine now, not a newspaper.

Tuesday, February 12, 2013



Wasting Money

The League of European Research Universities claims to be upset about the 2 million Euros that the European Union is spending on its proposed multi-dimensional university ranking. What do they or their American counterparts think about things like this?

"The president [of the US] will invest $55 million in a new First in the World competition, to support the public and private colleges and non-profit organizations as they work to develop and test the next breakthrough strategy that will boost higher education attainment and student outcomes. The new program will also help scale-up those innovative and effective practices that have been proven to boost productivity and enhance teaching and learning on college campuses."

Monday, February 11, 2013

Update on U-Multirank

Using data supplied by institutions is not a good idea for any international ranking. Apart from questions of reliability and objectivity, there is always the possibility of "conscientious objectors" disrupting the ranking process by refusing to take part.

The League of European Research Universities has just announced that it will not participate in the European Union's proposed multi-dimensional ranking project.

Membership of the League is by invitation only and "is periodically evaluated against a broad set of quantitative and qualitative criteria, such as research volume, impact and funding, strengths in PhD training, size and disciplinary breadth, and peer-recognised academic excellence." At the moment, it includes Oxford, Cambridge, Heidelberg, Geneva and Strasbourg universities.

According to Times Higher Education

'Kurt Deketelaere, secretary-general of Leru, said the organisation, whose members include the universities of Oxford, Cambridge and Edinburgh, believes the project is ill-conceived and poorly designed.

"We consider U-Multirank at best an unjustifiable use of taxpayers' money and at worst a serious threat to a healthy higher education system," he said. "Leru has serious concerns about the lack of reliable, solid and valid data for the chosen indicators in U-Multirank, about the comparability between countries, about the burden put upon universities to collect data and about the lack of 'reality-checks' in the process thus far."'

Considering the sort of thing that European universities spend taxpayers' money on, 2 million Euros seems comparatively trivial. There are no doubt genuine concerns about the reliability of data produced by institutions and comparability between countries, but if you can swallow the camel of Rice University and Moscow Engineering Physics Institute as the best in the world for research influence according to Times Higher and Thomson Reuters, then why strain at U-Multirank's gnats?

And as for a serious threat to higher education, I think someone should sit down for a few minutes and have a cup of tea before making any more statements.

Saturday, February 09, 2013


Another Ranking on the Way

The European Union has just launched its U-Multirank ranking system. Data will be collected during 2013 and the results will be out in 2014.

According to the European Commissioner for Education, the aim is to provide a multi-dimensional analysis of institutions rather than one that emphasises research excellence.

It is certainly true that the prominent international rankings focus largely or almost entirely on research. The Shanghai rankings are all about research, except perhaps the 10 per cent for Nobel and Fields awards given to alumni. The QS rankings have a weighting of at least 60 per cent for research (citations per faculty and the academic survey), and maybe more since research-only faculty are counted in the faculty-student ratio. Times Higher Education allocates 30 per cent for research influence (citations) and 30 per cent for research (volume, income and reputation). Since the scores for the citations indicator are substantially higher than those for the others, it can carry an even greater weight for many universities. Rankings that measure other significant parts of a university’s mission might therefore fill an obvious gap.
But the new rankings are going to rely on data submitted by universities. What happens if several major institutions, including perhaps many British ones, decline to take part?

 

Sunday, February 03, 2013

Article in the Chronicle of Higher Education

The Chronicle of Higher Education has an article by Debra Houry on university rankings. She makes some pertinent comments although her recommendations at the end are either impractical or likely to make things worse.

She points out that several American colleges have been found to have submitted inflated data to the US News and World Report in order to boost their standing in the rankings and notes that "there is an inherent conflict of interest in asking those who are most invested in the rankings to self-report data."

This is true and is even more true of international rankings. One reason why the Shanghai rankings are more credible than those produced by QS and Times Higher Education is that they rely entirely on reasonably accessible public data. Using information provided by institutions is a risky business which, among other things, could lead to universities refusing to cooperate, something which ended the promising Asiaweek rankings in 2001.

She then argues that measures of student quality such as high school class rank and SAT scores should be abandoned because they "discourage colleges from selecting a diverse student body. An institution that begins accepting more African-American students or students from low-income families—two groups that have among the lowest SAT scores, according to the College Board—might see its ranking drop because the average SAT score of its freshmen has gone down."

True, but on the other hand an institution that puts more emphasis on standardized test scores might rise in the rankings and might also increase its intake of Asian students and so become more diverse. Are Asian students less diverse than African-Americans? They are certainly likely to be far more varied in terms of mother tongue, political opinions or religious affiliation.

She also points out that it is now a bit late to be counting printed books in the law school rankings and wonders about using RateMyProfessors to assess teaching quality.

Then there is a familiar criticism of the QS Stars rating system.

Professor Houry also makes the common complaint that the rankings do not capture unique features of institutions such as "a program called Living-Learning Communities, which gives upperclassmen at Emory incentives to live on campus and participate in residential learning. But you would never learn about that from the ranking formulas."

The problem is that a lot of people are interested in how smart graduates are, or how much research, if any, faculty are doing, or how much money is flowing in. But seriously, what is so interesting about upperclassmen living on campus? In any case, if this is unique, would you expect any measure to "capture" it?

Finally she concludes "ranking organizations should develop more-meaningful measures around diversity of students, job placement, acceptance into professional schools, faculty membership in national academies, and student engagement. Instead of being assigned a numerical rank, institutions should be grouped by tiers and categories of programs. The last thing students want is to be seen as a number. Colleges shouldn't want that, either."

But all of these raise more problems than solutions. If we really want diversity of students, shouldn't we be counting conservative students or evangelical Christians? Job placement raises the possibility, already found in law school rankings, of counting graduates employed in phony temporary jobs or glorified slave labor (internships). Membership in national academies? A bit elitist, perhaps?