Too risky

NWO employees not allowed to use ChatGPT

Illustration: Pixabay

NWO already drew a line in the sand last year, saying that juries, referees and committee members were not allowed to use generative AI. This ban is now permanent.

Scientists spend a lot of time applying for NWO grants, even though their chances of getting one are slim. This is reportedly one of the reasons why the workload at universities is so high.

To assess applications, NWO asks a committee of researchers to rank them based on quality, with the help of expert referees from around the world. The whole process takes a lot of time and effort, so it is probably tempting to cut corners. 

Risks
Though generative AI can produce meaningful-sounding texts by recombining large amounts of data, the technology has several snags. Hence NWO's decision to draw up guidelines for its use.

According to these guidelines, what applicants do is up to them, though they should be aware of the risks, such as fabricated footnotes, ingrained biases and plagiarism. The organisation advises applicants to be transparent about their AI use.

However, for grant application reviewers, the use of ChatGPT and similar programmes is absolutely out of the question. Uploading an application to a generative AI platform is in itself considered a breach of confidentiality. Moreover, NWO says reviewers cannot know how carefully the AI will assess applications on their behalf.

Approved
NWO employees (lawyers, policy officers, etc) are only allowed to use approved AI programmes. This means no ChatGPT, Gemini or Midjourney.

Even approved AI programmes can only be used in a limited way. For example, employees are not allowed to use them to help draft justifications or defence statements in appeal proceedings.

Hardly any AI programmes have been approved yet. According to the NWO website, only one has received the green light so far: "Currently, NWO staff are allowed to use the DeepL translation app."
