Dealing with practical and ethical concerns

Should we treat AI in research the same way as animal testing?

Laboratory animal. Photo: 123rf

Admittedly, the title of this article strays into the realm of clickbait. However, on closer inspection this question might not be as ridiculous as it seems. In fact, it might form the starting point for a useful thought experiment about the benefits and harms of using AI in research and education. 

At first glance, the difference between inanimate computer models in the form of artificial intelligence (AI) and very much alive laboratory animals seems rather stark. However, when viewed in the context of their application in research, similarities appear.

The rapid increase in the accessibility and capabilities of AI has sparked discoveries and innovation in fields like medicine, environmental sciences and engineering, and has opened the door to analytical methods that were previously deemed impossible.

Similarly, the use of animal testing has been (and continues to be) fundamental to furthering our understanding of the natural world, human biology, medicine and a range of other fields. This contribution is so great that it warranted the creation of a monument dedicated to laboratory mice in the Russian city of Novosibirsk.

Strict requirements 
Of course, the above cannot be stated without recognizing the great cruelty and harm to animals associated with these achievements, and the ethical debates about the use of laboratory animals that followed.

Today, animal testing is strictly regulated. In the Netherlands, permits for animal testing are only granted if no non-animal alternative to the experiment is available and if the potential societal gains outweigh the harm to the animals.

Any experiments that do occur should also limit the exposure of animals to harm as much as reasonably possible. At the UU and UMCU, the task of ensuring compliance with this legislation falls upon a dedicated Animal Welfare Body, which also provides information and support for researchers working with animals. 

We do not aspire to contribute to the discussion of whether these regulations are adequate or just, as this deserves greater attention than we can give it here. We do, however, aim to use the current approach to animal testing to make an (imperfect) comparison with the role of AI in research and education.

Great risks
As with animal testing, the scientific potential of AI cannot be seen separately from a range of practical and ethical concerns associated with its use. The training and use of AI models are energy-intensive, leading to substantial carbon emissions if renewable sources of energy are not used.

Furthermore, the data collection process associated with training AI models is riddled with concerns regarding privacy and intellectual property, and is vulnerable to bias.

Moreover, some uses of AI conflict with good scientific practice, obscuring processing steps and complicating reproducibility, not to mention the various moral and ethical issues that arise from giving AI a central role in any decision-making or recommendation process. However, in contrast to animal testing, broadly accepted principles and procedures for addressing these concerns are currently lacking.

Of course, it can be questioned whether these AI-related concerns are comparable to the harm associated with animal testing. This is, however, not the point we want to make. Instead, we wish to introduce the idea that in a time of largely unregulated use of AI, the approach taken in approving the use of animal testing might form a source of inspiration for the development of future regulations. 

Weighing advantages and disadvantages
Carefully weighing the societal benefits of an AI research application against its potential harm could help determine whether that application is truly beneficial or whether better alternatives are available that can fulfil the same purpose.

Similarly, looking for ways to minimize harm when AI is used could be a worthwhile pursuit, for example by prioritizing energy-efficient models. It goes without saying that determining what is or is not a valid use of AI will be a delicate and complicated process, tapping into ongoing discussions of how (and whether) the “value” of science should be assessed.

Perhaps these tasks and questions could be entrusted to an institution similar to the aforementioned Animal Welfare Body, though adding such layers of bureaucracy might be excessive. Alternatively, attention could be focussed on adapting existing procedures and checks concerning research integrity to better incorporate AI. For now, simply reporting on the main considerations surrounding the implementation of AI could already instil a minimal degree of transparency into a process where this is currently often lacking.


We certainly do not have all the answers to the questions posed here, nor are we under the illusion that we are the first to conceive of these principles. However, we do hope that the above comparison between AI and animal testing sheds some light on the questions that can be asked surrounding the “responsible” implementation of AI.

Questions that perhaps, however unlikely, will return to you the next time you are about to press “run” on your AI model of choice. Note: this article was written solely by old-fashioned human intelligence.
