Analysis
Teachers still unsure about what to do with ChatGPT
After talking to nine lecturers, DUB concludes that teachers at UU are struggling with the use of ChatGPT in their courses. It is difficult for them to determine whether students have committed fraud using ChatGPT. Other uses of the AI app, such as brainstorming and checking spelling and grammar, are acceptable as long as teachers explicitly allow them. However, there is still a considerable grey area when it comes to defining which applications are justified.
In addition, it is practically impossible to check whether students are complying with the rules, and there are virtually no consequences for those who use ChatGPT in ways that are not allowed but don't copy the texts generated by the app word for word. Teachers also see major risks in the new technology. The university has no clear rules, which means each lecturer has to weigh up the pros and cons of generative AI on their own. This lack of regulation creates ambiguity among teachers.
Fraud
After the launch of ChatGPT at the end of 2022, concerns quickly arose about the impact the new technology would have on higher education. At the time, academics feared that students would let the app do their assignments for them, which would enable them to obtain a diploma without actually being tested on the necessary knowledge.
Soon after the launch, the Executive Board emphasised that copying texts from ChatGPT without proper attribution would be considered plagiarism. Moreover, a sentence was added to the Education and Examination Regulations (OER) stating that a student commits fraud when an assignment made wholly or partly by software is presented as the student's own work (a Solis ID is required to access the link, Ed.). Students may use ChatGPT only if "expressly permitted in the course in question".
Board of Examiners
A year and a half later, it is proving difficult to check whether texts have been copied from generative AI chatbots, especially when students rewrite ChatGPT's output in their own words. In that case, ChatGPT use is hard to prove and often remains pure conjecture. In addition, teachers are reluctant to confront students and accuse them of plagiarism because they fear that doing so would breed mutual distrust. They are also afraid of falsely accusing students.
Nevertheless, Karin van Es, Associate Professor at the Faculty of Humanities, advises teachers to talk to students when fraud is suspected. Van Es is involved in a project on generative AI funded by the Utrecht Education Incentive Fund (USO) and has researched how UU students use generative AI.
“In this conversation, teachers can ask why the student made certain choices in the assignment. For example, why they used that specific literature and how they connected the readings. If they notice that the student's story doesn't hold up, they can then turn to the Board of Examiners.”
In the Humanities faculty council, Philosophy lecturer Maarten van Houte was surprised to learn that lecturers are advised to talk to students first and only then contact the Board of Examiners.
"The essential problem is that such an invitation is not neutral as it implies that you don't trust the student in question. That fundamentally changes the relationship between teachers and students," says Van Houte.
“In the past, teachers would go to the Board of Examiners, who then made an objective assessment. I think that’s the right way to go. You don’t want to get into a discussion with a student, and I think a conversation probably doesn't help much. If the student admits it, you still have to go to the Board of Examiners, and if they deny it, you are left wondering whether you should take the case to the Board at all.”
According to Education Policy Officer Yannick Markus, the Boards of Examiners are aware of cases where teachers suspect students of copying texts generated by ChatGPT without correct referencing, but such cases are difficult to prove. The Board of Examiners often cannot establish plagiarism or fraud of this kind with complete certainty.
Brainstorming
Even though ChatGPT use is difficult to control, the rules are clear: students who copy a text and pretend it is their own know that what they are doing is not allowed. It is much harder, however, to determine what is allowed. After all, numerous generative AI applications do not involve copying texts word for word.
For example, students can ask ChatGPT questions about the study materials and then use the results to brainstorm. They can also use it as a spelling and grammar checker or as a helping hand when programming. In theory, a student can have many parts of an assignment done by ChatGPT without committing plagiarism. However, the student will learn less from the assignment, making it undesirable to use generative AI in this way.
It is up to lecturers to indicate in which ways students are allowed to use ChatGPT. So far, the university has left this decision to each lecturer: they know their subject best, so they are best equipped to judge what place ChatGPT can occupy in their course. But that is hard to do because teachers can neither monitor ChatGPT use properly nor draw a clear line between desirable and undesirable uses.
Guidelines
No university-wide educational policy on generative AI has been drawn up yet because the influence of generative AI on education can vary greatly from one faculty to another, and even between courses within a study programme.
ChatGPT is not able to do all course assignments well. When an assignment requires students to reflect or to apply their knowledge, ChatGPT usually has a hard time coming up with a good answer. Assignments that focus on knowledge reproduction or test basic skills, however, are easier to cheat on.
The university has general guidelines for generative AI use, but the recommendations are brief. In the absence of centralised or decentralised regulations, the handling of generative AI can vary greatly between teachers. One teacher may instruct students to use ChatGPT in a course, while another may completely ban it. As a result, students aren't sure what constitutes a good use of generative AI either.
Talking to the nine lecturers, DUB learned that most of them have made adjustments to their teaching and assessments. Some are making adjustments primarily to prevent fraud; others are looking for forms in which generative AI can have a useful place in education. They have added a passage on the use of generative AI to the course manual, for example. Some also discussed the topic with students in class.
Some teachers also checked whether ChatGPT could produce a good answer to certain assignments and then adjusted those assignments so that it no longer could. A few of the teachers consulted have even removed assignments from their courses altogether because they could no longer guarantee that students would complete them independently.
When ChatGPT came out, some suggested that teachers could use the chatbot as part of an assignment. For example, students would ask ChatGPT several questions and then be asked to reflect on the answers. However, this use of ChatGPT doesn't seem widespread at UU. None of the teachers DUB spoke to has given such an assignment so far.
Race
Lecturer Jeroen Fokker teaches a first-year programming course in the Bachelor’s programme in Computer Science, in which students learn basic programming skills. At the beginning of the course, he advises students not to use ChatGPT, for the sake of their own learning. The course concludes with a practical assignment that can be done at home or at the university. If a student does it at home, Fokker has no way of knowing whether they did the work themselves, because generative AI tools are capable of doing the assignment satisfactorily.
Fokker cannot and does not want to police his students. “I don’t want to get into an arms race with ever-better systems, which we then try to outsmart with ever-improving plagiarism software. What are we doing? I’m not even going to try. You have to look each other in the eye and say: 'Are you sure you want to do this?' Maybe it’s an admission of weakness or naivety, but it’s the only option I have. I can only appeal to my students not to use it.”
According to the lecturer, students who let generative AI complete assignments on their behalf are, more than anything, harming themselves. “If they do that, they don’t learn the basics well enough. So, later in the programme, when assignments get harder, they won’t be able to do them. If students don’t get a good grasp of routine programming work, it’s more difficult for them to deviate from it later on, if necessary.” In addition, the course concludes with a classic exam, in which students also have to program independently.
Fokker thinks there is no sense in adapting assignments to ChatGPT. “Maybe things would be better for a while if I came up with a new assignment, but then ChatGPT would know the answer one year from now. What my students write also ends up on the Internet, which means that ChatGPT would learn from the new assignment I would come up with.”
He is also critical of other uses of generative AI in education. “Brainstorming with ChatGPT is very different from brainstorming in class. In a seminar, we try to encourage students to think about the content by talking about the assignment and working together. That’s different from brainstorming with generative AI, which provides a solution right away.”
Boundaries
Maarten van Houte, a Philosophy lecturer at the Faculty of Humanities, is also completely opposed to the use of generative AI in education. He makes it clear at the beginning of his course that no form of generative AI is allowed, even though he knows he cannot control students.
Van Houte does not see any value in his students using AI. According to the lecturer, without an explicit ban, it is hard to indicate what is and is not allowed. “If a student is allowed to brainstorm with AI, then the lecturer must state that the results may not be used in the final product. We lack the means to draw the boundary between what is acceptable and what isn't. Should we let students decide for themselves?”
Van Houte is looking for forms of examination that don't tempt students as easily to use ChatGPT. “For example, I explicitly instructed students to work with texts from the lectures during a take-home exam. They have to demonstrate that they engaged with the texts. But, to be honest, that is a desperate attempt. I don’t know if it helps at all; maybe generative AI has a solution that also works for such an exam assignment.” Van Houte continues: “I’m now considering, for example, renting a computer room without Internet access and giving students four hours to write an essay using the course materials. But that’s different from working on a long essay for a few days and thinking it over.”
Ethical objections
When it comes to principles, the lecturers consulted also appear to be divided. For some of them, including Van Houte and Fokker, the ethical objections to the new technology weigh heavily. They are concerned about the spread of misinformation and disinformation, as well as the absence of guarantees in terms of privacy, and the impact of generative AI on the climate due to its high energy and water consumption.
The other teachers identify these risks as well, but find the use of generative AI for educational purposes less problematic, provided it is used in a good way. They are in favour of dialogue rather than a ban. In such a dialogue, students should be educated about the proper use of generative AI and its risks. Some also feel that certain uses of generative AI can be useful for education.
Some of the teachers don’t see any point in banning ChatGPT because students already use it a lot. A university-wide study into the use of generative AI by Van Es and Dennis Nguyen showed that UU students and lecturers use generative AI frequently: more than 80 percent of respondents had used it at some point. Nearly one in four students do so often, several times a week or even several times a day, and so do one in three teachers. However, there is still a group that rarely uses ChatGPT: 28 percent of students and 38 percent of teachers. A total of 1,981 people participated in the study, of whom 1,633 were students and 348 were lecturers.
Conversation
Lecturer Frans Prins teaches the course Assessment and Evaluation to second-year Educational Sciences students and third-year students of the academic teacher training programme Alpo. He is also the scientific director of Educational Advice & Training (O&T) and chair of the assessment committee of the Faculty of Social and Behavioural Sciences.
During his seminars, he engages in conversation with his students, showing them how he himself uses ChatGPT. He also promises students that he will not use generative AI to provide feedback on their work. “I ask students if they use generative AI and what they think of it. Then I discuss several points that are important to think about. Traceability is crucial in science, and that's not possible with AI-generated texts, which is problematic. That’s what I try to get students to reflect on.”
In his course, students have to work in groups to design an instrument, about which they write a research report. “Students are allowed to use ChatGPT for the introduction, but not for the rest. I tell them that every week and then we talk about the group assignment. That’s how I can tell if they understood. In general, they know what they are talking about. What matters to me is whether they have acquired the knowledge. If ChatGPT can help them with that, that’s fine by me."
According to the lecturer, it is necessary to think about how generative AI can help students and which uses are not beneficial. It must be clear to students what is allowed and what is not. The lecturer also believes that it is up to students to use generative AI in a useful way. “Otherwise, they are denying themselves a learning opportunity.”
Furthermore, he believes that the discussion about generative AI is also about the quality of education. “One student said he was especially inclined to use generative AI when the assignment was boring. We, teachers, also have to think about whether the assignments we give are interesting enough to learn from.”
Development
The arrival of ChatGPT has also exposed vulnerabilities in our education system, states Karin van Es. An assignment's goal is not to test the quality of the end product but rather what students have learned during the course. If teachers focused on assessing students' development, the influence of generative AI on education would be reduced.
“At the Faculty of Humanities, we tend to conclude a course with a final paper. We need to make sure that lecturers gain insight into a student’s thought process and the intermediate steps taken to arrive at that paper.” Van Es continues: “It’s not a good idea to have students hand in an outline at the first assessment moment of a course and then, out of the blue, an entire paper at the second. That’s not good education anyway, but with the advent of generative AI, we can’t get away with it at all.”
“We need to rethink what we want to teach students and to what extent certain tools, such as grammar and spelling checking, fit in with that. Teachers' main task is ensuring that the learning objectives and tests are well put together. If that is the case, then a large part of the problem should already be solved.”
Philosophy lecturer Maarten van Houte notes that these recommendations bring about a much higher workload for lecturers. “Teachers don’t have enough time to assess each student’s outline, first chapter and first draft of a paper. I don’t see any foolproof solution.”