Fundamental research is high-payoff, high-risk research aimed at game-changing discoveries. Such research programs have long-term objectives and may not deliver their outcomes for many years, and it has long been understood that small incremental gains are often not a good indicator of whether a program will eventually achieve those objectives. So how can governments and corporations ensure that their investment is funding the right research programs? A significant amount of research has examined what research programs need in order to be successful, and its findings can be summarised in three points: programming (identifying clear research goals that only fundamental research can solve); ring-fencing (defining a contained program of work that is independent of other short-term requirements, parallel income-generating consultancy, or intermediate product development); and monitoring (evaluating the outputs and outcomes of research programs). The first two are relatively easy to achieve; the third can be difficult.

Research programs do require monitoring, both to gauge which research groups are performing well and to inform future funding allocations. Output metrics (often called perverse metrics by frustrated researchers), such as the number of journal publications, the impact factors of published papers, or the number of graduating PhD students, are the typical measures used to assess research performance. However, when funding agencies publish the metrics they will use to determine funding allocations, researchers' need to remain competitive for funding shifts their focus from the primary objective (the research itself) to meeting the short-term output requirements dictated by the funding agency's metrics. A substantial body of research shows the negative effect of such metrics on true, forward-looking fundamental research.

We argue that to stimulate fundamental research, performance metrics need to change their focus from monitoring outputs to monitoring inputs. The purpose of assessment is to fund research that is likely to achieve high-level objectives, but with fundamental research past progress may not indicate future major discoveries. The timing of such discoveries cannot be controlled, and therefore performance assessments should not focus on outputs. What can be controlled is the environment that enables fundamental research; it is these inputs that should be assessed in order to support future fundamental discoveries, because an environment that fosters fundamental research is what ultimately leads to it. In this paper we present four categories of input measures, relating to the research environment, on which assessments should focus. These categories cover characteristics of the research environment including the right people, adequate resourcing, well-defined goals, allocated time, processes that encourage the research, and a culture of collaboration. For each category we provide several examples of metrics that could be used.
Tomecko, N., & Bilusich, D. (2017). The value of input metrics for assessing fundamental research. In Proceedings - 22nd International Congress on Modelling and Simulation, MODSIM 2017 (pp. 1503–1509). Modelling and Simulation Society of Australia and New Zealand Inc. (MSSANZ). https://doi.org/10.36334/modsim.2017.k7.tomecko