But there is a lot of work in store for examination committees

Use of ChatGPT has not led to a rise in fraud cases


Earlier this year, the Geosciences Examination Committee was at its wit’s end. Its members had hardly heard of ChatGPT when a teacher suspected that several groups had committed fraud in a course offered by the Sustainable Development department. The students were suspected of having written short essays with the help of artificial intelligence.

The committee does not want to go into details but it does reveal its conclusion: after several hearings and advice from both an expert and a lawyer, there was "not enough evidence" of fraud, so the students were not sanctioned.

So far, this has been the only case involving AI tools that the Examination Committee has had to deal with.

No boom
UU has classified the “improper” use of ChatGPT and related bots as fraud or plagiarism. The latter is described in the Education and Examination Regulations (OER) as: “ ... The acts or omissions of students that cause a misrepresentation of their own performance in terms of knowledge, skills and understanding, which may result in the examiner no longer being able to assess students’ knowledge or abilities correctly and fairly”.

But those who expected that the advent of AI bots would lead to a boom in fraud cases are wrong. When DUB asked the Examination Committees about it, they revealed that only a few cases have been reported by teachers so far.

According to the committees, the number of students caught using artificial intelligence is limited, and suspicions of fraud were investigated in only a small number of cases. Furthermore, none of the cases reported to DUB had the scope of the one at the Faculty of Geosciences.

Reprimand
“There are currently no official reports of fraud or plagiarism regarding the improper use of ChatGPT”, reports the Examination Committee of the Faculty of Social Sciences, for example. The Chemistry committee has no current cases, either.

The Economics committee says it hasn't issued any rulings either, although its members were twice asked to help a teacher who was suspicious of a student. “In both cases, its use was not sufficiently demonstrable to justify starting a case."

Computer Science and Law each received one notification. At the graduate school of Computer Science and Information Science, a student was reprimanded for having ChatGPT write a literature review for a Bachelor’s thesis. “That one was easy to detect because most of the references were fake.”

At Law, a student’s paper was invalidated because it contained texts generated by ChatGPT based on non-existent sources. A teacher asked the committee to verify whether a suspicion of ChatGPT use could be established, but there were not enough leads.

The Examination Committee of the Faculty of Humanities prefers not to say how many cases of fraud or suspected fraud it has dealt with, but states that ”the number is small".

All of the cases involved the use of ChatGPT for writing papers. Regarding the penalties imposed, the committee only declares that the papers in question were invalidated and that students were expelled from the course “depending on the extent of the fraud”. Furthermore, the Humanities committee underscores that teachers have been advised to first talk to students if they suspect that they have used ChatGPT.

An annoying limbo
According to Antal van den Bosch, Professor of Language, Communication & Computation, the burden of proof remains a problem. In fact, that's what he told the Faculty of Geosciences when he was consulted as an expert in the Sustainable Development case. “There is simply no reliable way to detect the use of ChatGPT at this time.”

He does not rule out that this might also be a reason for teachers to avoid reporting their suspicions to the Examination Committee. “We’re in an annoying limbo.”

In one of the episodes of DUB's Student Podcast, Van den Bosch called on students to familiarise themselves with the possibilities offered by AI as soon as possible. At the same time, he acknowledges that the risk of abuse when it comes to examinations has increased.

“But I wouldn’t dare to say that there is much more fraud going on now than before. Possibilities to evade plagiarism detection have always existed. It is often enough to pass an English text through Google Translate and then submit it in Dutch.”

According to Van den Bosch, students should keep in mind that plagiarism with the help of AI bots may be hard to catch right now, but things may change in the near future. For example, there are rumours that the "watermarks" of older versions of these bots may be released, which would make it easier to detect plagiarism. This means that students could be caught retroactively.

Broad role
The Examination Committee of the Faculty of Humanities admits that the risk of fraud seems to have increased but notes that lecturers are aware of this as well. “To a large extent, the solution lies with adjusting examination methods in order to eliminate the unauthorised use of ChatGPT.”

The committee also emphasises that Examination Committees are not only there to “combat fraud” but also to monitor the quality of examinations. Its members have been holding plenty of conversations about how AI may influence that quality. One of the questions being debated is whether or not students' AI skills should be tested. “It is obvious that we will debate many issues in this area over the next few years.”

The Examination Committee of Social Sciences emphasises how broad its role is as well. “We assume that AI is only going to develop more and more. Taking that into consideration, the Examination Committee mainly considers how students and teachers should be allowed to deal with ChatGPT.”

DUB also asked the members of the DUB panel about their experiences with the use and abuse of ChatGPT. Innovation scientist Frank van Rijnsoever is certain that students have been using (and perhaps also abusing) the possibilities offered by the AI bot, but he points out that abuse is hard to prove.

He has already adjusted his assessment methods, opting for “supervised tests written on Remindo, papers on fairly specific topics, and oral explanations of a submitted product.”

Remarkably, there are students on our panel who say they haven't noticed much of a change since ChatGPT came along. For example, Psychology student Levi Bierhuizen wrote to us in an e-mail: "Students in my programme occasionally joke about using ChatGPT, but I haven't heard of anyone who has actually done it. Maybe this is due to the type of papers and analyses we have to do. We're also aware that the abuse of ChatGPT is considered fraud and plagiarism by the university.”

Other panelists argue that ChatGPT should not be seen as a threat. Medicine student Thomas Visser states: “I think the main added value of ChatGPT lies in the fact that it can help you look in the right direction. That leaves enough room for the student to learn. And if people did cross the line using the current version, based on my experience, it cannot have been with assignments of a very high level.”

Marte Vroom, a Master’s student in Urban and Economic Geography, writes in an e-mail: “It is important to thoroughly analyse ChatGPT and properly record what is and what is not seen as 'fraud'. However, I would also like to inform the university that ChatGPT can be a fantastic support tool for both students and staff.”
