Difficult to measure
Researchers cannot be given a “grade” for social relevance
Scientific research is at times beneficial to society as a whole, and at times the results disappear in a filing cabinet. Some Dutch politicians would like to measure the social impact of academia: is it going well, or could it be better?
Every six years, universities and institutes assess their research groups against the standards set out in the standard evaluation protocol (SEP), which also addresses social relevance.
ChatGPT
At the request of the Ministry of Education, Culture and Science, research agency Technopolis reviewed the SEP documents of 146 research units. In other words, Technopolis fed those documents into ChatGPT and asked what they said about social relevance.
The outcome: social relevance takes many forms. Researchers collaborate with governments, businesses, social organisations and citizens. They influence new technological developments, the energy transition and public debates on education. Many research groups also engage in public communication or publish in freely accessible journals.
Limited
According to the chatbot, these are just some of the things that come up in the evaluations, and the picture remains incomplete. "The results can therefore not be considered a complete overview of the social relevance of research within research units," the report emphasises.
And that is the main takeaway from this "evaluation": social relevance cannot be reduced to a "score". The ministry would like to publish the results on OCWinCijfers.nl, but the figures are of little use.
Or, as the report puts it, "The richness of the results indicates that a qualitative, descriptive indicator does most justice to the diversity of societal relevance and to the differences in research practices and target groups."
Funding
The evaluation does not mention it, but some political parties' underlying goal is to force academia to be economically and socially useful — through financial incentives, for example. But how?
Reports on the subject suggest it is all but impossible. In 2011, the Rathenau Institute wrote: "There are appropriate forms of valorisation for every discipline: from patents and spin-offs, through advice on new legislation, to compiling an exhibition catalogue."
For a technical university, the number of patents may be a good indicator, as the Rathenau Institute noted at the time. But even then, counting makes little sense: are two patents better than one? What happens with those patents makes a big difference.
Everything of value
In 2018, the Royal Netherlands Academy of Arts and Sciences issued an advisory report: counting and tallying are pointless. Instead, one could work with "narratives": scientists explaining in advance for whom their research could be relevant.
The same problem affects education, by the way: how can you demonstrate that spending on education is worthwhile? Two years ago, the Education Council warned that education is at risk of budget cuts because politics is focused on outcomes. But those outcomes cannot always be captured because “not everything of value can be captured in indicators, calculations or broad prosperity analyses”.