When the two hottest topics in academia intersect

Uncanny English

Over the past year, two controversial topics have frequently come up in conversations with my colleagues. The first is the use of English at Dutch universities and, more broadly, their internationalisation, which has become politicised following last year's national elections. The second is the role of AI in education, particularly among students who rely on it to "assist" with their writing and other assessments. While these two issues do not seem related, to me they show how language use can elicit strong reactions across our community and highlight grievances about what is considered appropriate and useful.

On the first topic, English use in Dutch academia, a debate convened by the Utrecht Young Academy showed how polarising it can be. Advocates for keeping Dutch universities international by continuing English-language programmes argue that multilingualism improves career prospects and that English remains the dominant language of research and science. Detractors support Dutch-language instruction as a way to maintain the language in academic discourse, as well as to provide more opportunities to students from disadvantaged backgrounds, who may have had less preparation in English.

On the second topic, AI (and specifically ChatGPT), there is a lot of pearl-clutching from purists who insist that students should not use it because it is unreliable. Well, yes and no. Yes, ChatGPT can hallucinate (meaning it presents fabricated information as fact), but this is a consequence of how the software is built, not of any intention. Since the large language models behind ChatGPT work by predicting which words are likely to follow one another, and which words appear frequently given a prompt, their output is only as good as the input they receive. This means ChatGPT will probably hallucinate less as it trains on more examples of language (unless it trains on its own output, which would worsen the problem and is something we should be aware of). And no, chatbots like ChatGPT hallucinate less than 30 percent of the time (and as little as 3 percent). If students performed as well as ChatGPT on assessments, they would pass all their courses. A more reasonable justification for discouraging student use is that students are then assessed not on their own effort and intelligence, but on an artificial one (in other words, plagiarism). If we want students to think for themselves, maybe we should be worried. But then again, how many of us remember each other’s phone numbers now that our smartphones do this for us?
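For readers curious what "predicting likely words" means in practice, here is a minimal sketch of the idea at its crudest: a toy model that only counts which words followed which in its training text. The mini-corpus and all names here are invented for illustration, and real systems like ChatGPT use neural networks over vast datasets rather than simple counts, but the underlying point is the same: the output can only mirror the statistics of the input.

```python
# Toy illustration (NOT ChatGPT's actual architecture): a bigram model
# that "predicts" the next word purely from which words followed which
# in its training text. The corpus below is invented.
from collections import Counter, defaultdict
import random

corpus = (
    "students delve into the literature and delve into the data "
    "so the report can delve into the findings"
).split()

# Count, for each word, how often every other word follows it.
followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def next_word(word):
    """Sample the next word in proportion to how often it followed `word`."""
    options = followers[word]
    if not options:
        return None  # the model has nothing to say: it only knows its input
    words, counts = zip(*options.items())
    return random.choices(words, weights=counts)[0]

# Generate a short continuation. Because "delve" is overrepresented in
# the training text, it will be overrepresented in the output too.
word, output = "students", ["students"]
for _ in range(8):
    word = next_word(word)
    if word is None:
        break
    output.append(word)
print(" ".join(output))
```

Run it a few times and you will see the same tics recur, which is exactly why an overused training-data word like "delve" keeps surfacing in generated prose.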

These two issues came together for me in some assignments I recently graded. One of my Dutch students had been submitting reports on their internship, and I noticed that the language was both well-written and oddly unnatural. I cannot think of a single time I have used the word “delve” in my own writing, nor do I regularly come across it in English texts. Yet the word appeared across multiple reports, and I wondered whether my student was using ChatGPT. When I asked them directly, they admitted to it. Since these reports were not formal writing assignments, there was no penalty; that said, I asked that in future the student use their own words and acknowledge ChatGPT whenever they used it.

Perhaps my student (and other non-native English users of ChatGPT) did not realise how peculiar their language sounded to me. It also occurred to me that if non-native English-speaking university students never had the opportunity to use English in class and receive feedback on it, they would never know. So perhaps English-language courses and degree programmes, especially when taken by Dutch students wanting to practise their language skills, are an underestimated benefit of an internationalised university. Those students would sound less uncanny, even if they made more mistakes. Isn’t that the point of education, to learn from the mistakes we (and not computers) make?
