Validating research performance metrics against peer rankings

by Stevan Harnad
Ethics in Science and Environmental Politics

Abstract

A rich and diverse set of potential bibliometric and scientometric predictors of research performance quality and importance is emerging today, from the classic metrics (publication counts, journal impact factors and individual article/author citation counts) to promising new online metrics such as download counts, hub/authority scores and growth/decay chronometrics. In and of themselves, however, metrics are circular: they need to be jointly tested and validated against what it is that they purport to measure and predict, with each metric weighted according to its contribution to their joint predictive power. The natural criterion against which to validate metrics is expert evaluation by peers, and a unique opportunity to do this is offered by the 2008 UK Research Assessment Exercise, in which a full spectrum of metrics can be jointly tested, field by field, against peer rankings.
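The validation procedure the abstract describes amounts to a regression problem: treat the peer rankings as the criterion variable, the candidate metrics as predictors, and estimate a weight for each metric from its contribution to the joint fit. The sketch below illustrates one way such a joint test could be set up, using ordinary least squares on synthetic data; the metric names, the sample data, and the choice of plain linear regression are illustrative assumptions, not the paper's specified method.

    import numpy as np

    # Hypothetical data: each row is one research unit, each column a
    # candidate metric (all names are illustrative placeholders).
    metric_names = ["citations", "downloads", "hub_score", "growth_rate"]
    rng = np.random.default_rng(0)
    X = rng.random((50, len(metric_names)))      # 50 units x 4 metrics

    # Synthetic "peer ranking" criterion, generated for demonstration only.
    peer_rank = X @ np.array([0.5, 0.3, 0.15, 0.05]) + rng.normal(0, 0.05, 50)

    # Jointly fit all metrics against the peer rankings (the criterion)
    # via ordinary least squares, with an intercept column.
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, peer_rank, rcond=None)
    intercept, weights = coef[0], coef[1:]

    # R^2: the share of variance in the peer rankings that the metric
    # battery jointly predicts -- its "joint predictive power".
    pred = A @ coef
    r2 = 1 - np.sum((peer_rank - pred) ** 2) / np.sum((peer_rank - peer_rank.mean()) ** 2)

    for name, w in zip(metric_names, weights):
        print(f"{name:12s} weight = {w:+.3f}")
    print(f"joint R^2 = {r2:.3f}")

In the RAE application the abstract proposes, a fit of this kind would be re-estimated field by field, since the weight each metric earns can differ across disciplines.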

Readership Statistics

107 Readers on Mendeley

By discipline:
  • 26% Social Sciences
  • 25% Agricultural and Biological Sciences
  • 20% Computer Science

By academic status:
  • 24% Researcher
  • 14% Student > Ph.D. Student
  • 13% Professor

By country:
  • 7% United Kingdom
  • 7% United States
  • 4% Spain
