
Researchers under the magnifying glass
The work of researchers is evaluated at every stage of their career. Within the scientific community, efforts are under way to develop responsible researcher evaluation. On what criteria should researchers be evaluated?
Text: Anssi Bwalya | Images: Getty Images | Translation: Marko Saajanaho
Recruitment at higher education institutions and research centres, promotions, research funding… the work of researchers is evaluated at just about every turn, and the evaluation criteria guide the direction of research.
”The evaluation affects how different fields progress, what kind of data is produced, who produces it, and for what purpose”, says Maria Pietilä, a postdoctoral researcher at the University of Eastern Finland who specialises in researcher evaluation.
Even academic working hours have their limits, and early-career researchers in particular must weigh their choices against career prospects: what kind of research attracts funding, and which achievements are emphasised in recruitment.
The yardsticks of responsible researcher evaluation
Some researchers think researcher evaluation focuses too heavily on peer-reviewed publications in esteemed scientific journals and other quantifiable metrics. This criticism has already resulted in changes. These days, we talk about responsible evaluation, which aims to strike a balance between metrics and more comprehensive qualitative evaluation.
Pietilä says that, according to studies, researchers consider scientific publications an essential part of how their work is evaluated. However, many find publication metrics alone to provide an overly narrow view of a researcher’s work. The number of peer-reviewed articles also depends partly on the field: in the humanities, monographs can serve as key research outputs, while in the technical sciences, data and software created through research may play an important role in the field’s development. At the same time, there are increasing calls for evaluations to consider a researcher’s commitment to promoting open science.
”Many find publication metrics alone to provide an overly narrow view of a researcher’s work.”
Furthermore, a researcher’s work is often not limited to research. Mira Söderman, coordinator of the Federation of Finnish Learned Societies’ Publication Forum, explains that evaluation today seeks to recognise this diversity of work more fully than before.
”Researchers do not only publish but also engage in many other important activities: they teach and participate in societal interaction.”
The work of researchers is international, so a significant part of the development of researcher evaluation is also carried out through international cooperation. For example, the European university alliance YUFE has created a portfolio for identifying a researcher’s merits in four categories: research, teaching and supervision, societal interaction, and teamwork and leadership.
Pietilä, who participated in developing the portfolio, says the idea is that a recruiting organisation can select the parts of the portfolio relevant to each position’s requirements. The portfolio resembles a narrative CV: instead of writing a list-like CV, researchers are encouraged to describe their background, merits and career vision in their own words.

Will multifaceted evaluation lead out of the frying pan and into the fire?
A more multifaceted evaluation process is not without its challenges. Tampere University Professor of International Relations Tuomas Forsberg notes that the evaluation process should be predictable. This may not be the case if many different categories are included without clear prioritisation.
In Forsberg’s opinion, it is perfectly sensible for university researcher recruitment to focus primarily on research merits. Publication metrics are ill-suited to comparisons between fields, but within a given field, research merits measured in publication counts predict future performance relatively well.
Societal interaction, for example, should not be weighted too heavily, according to Forsberg. Each researcher can influence their own published research, but opportunities for societal interaction depend largely on external factors – for example, which topics happen to be prominent in public debate at a given time.
Each researcher can influence their own published research, but opportunities for societal interaction depend largely on external factors.
”Societal interaction is driven much more by demand than by supply”, Forsberg states.
A comprehensive qualitative evaluation of every applicant is not always realistic either, if a large number of researchers apply for a single position. The most promising candidates must be screened from the flood of applications, and quantitative metrics are often helpful for such shortlisting.
Pietilä does not think metrics should be abandoned entirely, either. However, studies indicate that a bare list of publications is often not seen as a sufficient source of information. In interviews conducted by Pietilä and her colleagues, those responsible for researcher recruitment wished for additional information on matters such as applicants’ motivations, career plans and ideas for developing research.
Another problem is that a systematic evaluation process is not always in place at all. Many researcher recruitments take place ”behind closed doors” as direct appointments.
”Every now and then, we should raise the question of when these searches ought to be open and why some are not”, Pietilä notes.
Pietilä, Söderman and Forsberg all note the importance of open, research-based discussion about evaluation. Söderman emphasises that responsible researcher evaluation is not simply about swapping one set of criteria for another.
”At the end of the day, this is primarily about a cultural shift”, Söderman says.
This means considering what we actually value in researchers’ work. Guiding evaluation in a genuinely responsible direction therefore requires broad participation from the research community.