International University Rankings - Benefits and limitations


Seminar on international rankings (Seminari sobre rànquings internacionals), Barcelona, 21 June 2011. Gero Federkeil, www.che.de

CHE Centre for Higher Education, Germany

Presentation

The CHE – a short introduction
The rise of international rankings
International rankings – indicators & data sources
International rankings – a critical view
Conclusions


The CHE - Centre for Higher Education

Private, non-profit organisation, founded in 1994 by the Bertelsmann Foundation and the German Rectors' Conference. Goal: to initiate and promote reforms in German higher education.

Activities:
• HE policy issues (e.g. Bologna, funding, …)
• Consulting
• Communication & training
• Ranking


The CHE - Centre for Higher Education

• Ranking of German universities among the founding tasks of CHE; first ranking published in 1998
• Extension of fields and indicators; continuous further development of the methodology
• Internationalisation: extension of the CHE Ranking to Austria, Switzerland and the Netherlands; 2011: ranking in Spain in cooperation with Fundació CYD
• U-Multirank project to "develop the concept and test the feasibility of a global multi-dimensional university ranking"
• Founding member of the IREG Observatory on Academic Rankings and Excellence ("Berlin Principles")



The rise of international rankings

Shanghai Jiao Tong University: Academic Ranking of World Universities (ARWU, 2003– ). Original purpose: comparison of Chinese universities with the rest of the world. http://www.arwu.org/index.jsp

Times Higher Education (THE)/QS World University Rankings (2004–2009); the partners separated in 2010.

QS World University Rankings (2010– ), published by a private consulting company. http://www.topuniversities.com/

THE/Thomson Reuters World University Rankings (2010– ), a co-operation with a leading provider of bibliometric data. http://www.timeshighereducation.co.uk/world-university-rankings/


HEEACT (Taiwan): Performance Ranking of Scientific Papers. Purely bibliometric ranking. http://ranking.heeact.edu.tw/en-us/2010/homepage/

Centre for Science and Technology Studies (CWTS): Leiden Ranking. Purely bibliometric ranking. http://www.cwts.nl/ranking/LeidenRankingWebSite.html

SCImago Institutions Rankings. Purely bibliometric ranking. http://www.scimagoir.com/index.php

École des Mines de Paris ranking: analysis of the universities from which the CEOs of the world's 500 largest companies graduated. http://www.mines-paristech.fr/Actualites/PR/Ranking2011EN-Fortune2010.html

Webometrics: ranking of web presence. http://www.webometrics.info/


The duality of rankings

The emergence of global rankings is a result of growing global competition in higher education; at the same time, the rankings reinforce this competition through their own results.

Global rankings have an impact on national policies (excellence initiatives, scholarships) and on institutional strategies.



World Rankings: Indicators

Shanghai Jiao Tong Ranking (ARWU):
• SCI publications: 20 %
• Publications in Science & Nature: 20 %
• Highly cited authors: 20 %
• Nobel Prizes & Fields Medals: 20 %
• Alumni with Nobel Prizes: 10 %
• Size: 10 %

QS World University Rankings:
• Reputation among scholars: 40 %
• Reputation among employers: 10 %
• Citations: 20 %
• Student–staff ratio: 20 %
• International students: 5 %
• International staff: 5 %

ARWU: a measurement of research and, due to its indicators and data bases, mainly of science and technology. QS: a mixture of different dimensions, mainly reputation. In both cases: what does the total score measure?
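Mechanically, each ranking reduces these indicators to one composite score per university as a weighted sum. A minimal sketch of that score (the normalisation details differ between rankings; ARWU, for instance, scales each indicator so that the best-performing institution scores 100):

$$ S_u = \sum_i w_i \, x_{u,i}, \qquad x_{u,i} = 100 \cdot \frac{v_{u,i}}{\max_{u'} v_{u',i}} $$

Here $v_{u,i}$ is the raw value of indicator $i$ for university $u$, $x_{u,i}$ its normalised score, and $w_i$ the weight from the lists above. The question "what does the total score measure?" is a question about the meaning of this weighted sum.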


World Rankings: Indicators (continued)

THE World Rankings:
• Teaching: 30.0 %
• Research: 30.0 %
• Citations: 32.5 %
• Industrial income: 2.5 %
• International mix: 5 %

HEEACT Ranking:
• Publications 1999–2009: 20 %
• Citations 1999–2009: 10 %
• Research excellence: 50 % (H-index 20 %, highly cited papers 15 %, papers in high-impact journals 15 %)

THE: mainly research; 34.5 % based on reputation. What does the total score measure? HEEACT: research only, built on bibliometric data with a bias towards the sciences, but with a long-term perspective.
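Among the HEEACT excellence indicators, the H-index is the most compact: the largest number h such that h of an institution's papers have at least h citations each. A minimal sketch in Python, with made-up citation counts rather than HEEACT data:

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    h = 0
    for rank, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= rank:
            h = rank   # the top `rank` papers all have >= rank citations
        else:
            break
    return h

# Six papers with these citation counts give an h-index of 3:
# the top three papers have at least 3 citations each,
# but there are no four papers with at least 4 citations each.
print(h_index([10, 8, 5, 3, 2, 0]))  # -> 3
```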


Comparison of Results: Top 20

Position | QS | THE | ARWU
1 | University of Cambridge | Harvard University | Harvard University
2 | Harvard University | California Institute of Technology | University of California, Berkeley
3 | Yale University | Massachusetts Institute of Technology | Stanford University
4 | UCL (University College London) | Stanford University | Massachusetts Institute of Technology (MIT)
5 | Massachusetts Institute of Technology (MIT) | Princeton University | University of Cambridge
6 | University of Oxford | University of Cambridge | California Institute of Technology
7 | Imperial College London | University of Oxford | Princeton University
8 | University of Chicago | University of California, Berkeley | Columbia University
9 | California Institute of Technology (Caltech) | Imperial College London | University of Chicago
10 | Princeton University | Yale University | University of Oxford
11 | Columbia University | University of California, Los Angeles | Yale University
12 | University of Pennsylvania | University of Chicago | Cornell University
13 | Stanford University | Johns Hopkins University | University of California, Los Angeles
14 | Duke University | Cornell University | University of California, San Diego
15 | University of Michigan | Swiss Federal Institute of Technology Zurich | University of Pennsylvania
16 | Cornell University | University of Michigan | University of Washington
17 | Johns Hopkins University | University of Toronto | University of Wisconsin-Madison
18 | ETH Zurich (Swiss Federal Institute of Technology) | Columbia University | The Johns Hopkins University
19 | McGill University | University of Pennsylvania | University of California, San Francisco
20 | Australian National University | Carnegie Mellon University | The University of Tokyo

University of Barcelona | 148 | 142 | 201–300
Autonomous University of Barcelona | 173 | Not among Top 200 | 301–400


Indicators used – a critical assessment

Bibliometric indicators
+ Central element of research
+ Differences in methodological quality (e.g. field-normalised citation rates; see the formula below)
– Publications: lack of control for size
– Field biases (humanities, engineering)
– Language bias
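One common construction of the field-normalised citation rates mentioned above, used for example in the CWTS/Leiden family of indicators, divides each paper's citations by the average citations of papers from the same field, publication year and document type:

$$ \mathrm{MNCS} = \frac{1}{N} \sum_{p=1}^{N} \frac{c_p}{e_p} $$

Here $c_p$ is the citation count of paper $p$, $e_p$ the expected citation count for comparable papers, and $N$ the number of papers considered; values above 1 indicate citation impact above the world average of the respective fields.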

Reputation
+ Reputation is a social reality
– No performance indicator
– Highly dependent on the sample
– Not very reliable in an international perspective


Nobel prizes
+ High-level excellence
– Field biases (only a few fields)
– Time problem / institutional affiliation

"Small indicators"
+ Try to bring in dimensions other than research
– Problems in definition and data collection (e.g. international students)
– Problems in validity (e.g. student–staff ratio)


General approach – a critical assessment

International rankings differ in their indicators. But with regard to the general methodology, there is a ranking orthodoxy, alongside a growing number of alternative approaches.


Ranking orthodoxy I: Institutional ranking

Critique of ranking orthodoxy I

Institutional rankings → multi-level, field-specific rankings

• Most target groups/users (prospective students, academic staff) are interested in information about "their" field.
• Universities are heterogeneous units; fields and faculties differ in their performance.
• Rankings of whole institutions therefore give misleading averages.
• Global rankings have increasingly introduced field-based rankings.

Ranking orthodoxy II: "Composite indicator"

Critique of ranking orthodoxy II

Composite overall indicator → multi-dimensional ranking

• Composite indicators blur profiles and strengths & weaknesses.
• There are neither theoretical nor empirical arguments for assigning specific weights to single indicators.
• Preferences regarding indicators are heterogeneous among stakeholders and users ("quality is in the eye of the beholder"); fixed weights patronise users.
• Rankings should leave the decision about the relevance of indicators to the users, as the sketch below illustrates.
• Global rankings have started to include elements of personalisation.
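How much fixed weights matter can be shown in a few lines. A minimal sketch with two hypothetical universities and made-up indicator scores (none of this is data from any actual ranking): the same pair of profiles swaps rank order as soon as the weights change.

```python
# Hypothetical indicator scores (0-100) for two stylised universities.
universities = {
    "Research-focused U": {"research": 90, "teaching": 60},
    "Teaching-focused U": {"research": 65, "teaching": 95},
}

def composite(scores, weights):
    """Weighted-sum composite score, the core of most league tables."""
    return sum(weights[k] * scores[k] for k in weights)

for weights in ({"research": 0.7, "teaching": 0.3},
                {"research": 0.3, "teaching": 0.7}):
    ranked = sorted(universities,
                    key=lambda u: composite(universities[u], weights),
                    reverse=True)
    print(weights, "->", ranked)
# A 70/30 research weighting puts the research university first;
# 30/70 reverses the order. The "ranking" is an artefact of the weights.
```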


Ranking orthodoxy III: League tables


Conclusions I: Methodology

• Global rankings helped to bring higher education into public debate.
• Their methods are flawed: a field bias in favour of the (biomedical) hard sciences, a language bias against non-English-speaking countries, and problems with the validity and reliability of indicators.
• Despite some recent changes, they still follow the orthodox ranking approach: mainly institutional rankings, use of a composite indicator, and the league-table approach.
• This approach may be good for media interest, but it does not provide meaningful information to important stakeholders and users.


Critique of ranking orthodoxy III

League tables → group approach (top, middle, bottom)

• Small differences in the numerical value of an indicator lead to big differences in league-table positions, ignoring statistical error and uncertainty.
• League tables tend to exaggerate differences between universities ("7th is better than 12th", "326 is better than 341").
• Rankings should refer to groups or clusters rather than to exact league-table positions; a sketch of the group approach follows below.
• Most rankings still stick to the league-table approach; only a few deviate.
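A minimal sketch of the contrast, again with made-up scores (a real group ranking such as the CHE ranking uses statistical criteria, e.g. confidence intervals, rather than the simple tercile cut used here):

```python
# Hypothetical overall scores; note how close the top three are.
scores = {"A": 71.9, "B": 71.4, "C": 70.8, "D": 58.2,
          "E": 57.5, "F": 41.0, "G": 39.7, "H": 38.9}

# League table: tiny score differences still yield distinct positions.
league = sorted(scores, key=scores.get, reverse=True)

# Group approach: top / middle / bottom thirds instead of exact ranks.
ordered = sorted(scores.values(), reverse=True)
k = (len(ordered) + 2) // 3            # size of the top third, rounded up
cut_top, cut_mid = ordered[k - 1], ordered[2 * k - 1]

def group(value):
    if value >= cut_top:
        return "top"
    return "middle" if value >= cut_mid else "bottom"

for position, name in enumerate(league, start=1):
    print(position, name, scores[name], group(scores[name]))
# A, B and C land in one top group instead of positions 1, 2 and 3:
# differences well inside the measurement error stop looking meaningful.
```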



Conclusions II: The politics of ranking

• Due to their indicators and data sources, most global rankings focus more or less exclusively on research.
• In fact, they are rankings of only one particular type of institution: internationally oriented, comprehensive research universities.
• This has led to an obsession with the "world-class university" and a devaluation of institutional profiles that differ from it (specialised, teaching-oriented, regional, …).
• There is a need for an alternative approach that is multi-dimensional and makes different fields of excellence visible.


There might be some limits to rankings in general

Cartoon caption: "You're kidding! You count publications?"

More information:
www.che-ranking.de
www.u-multirank.eu
www.ireg-observatory.org

[email protected]
