Interview with Internet pioneer Marleen Stikker

‘The education sector should demand compensation from AI companies’

Marleen Stikker giving an inaugural lecture as part of the ceremony to open the academic year at Erasmus University, in Rotterdam. Photo: Arie Kers

When talking about artificial intelligence (AI), expert Marleen Stikker chooses her words carefully as she certainly does not want to give the impression that she is "against" it. She does come to a radical conclusion, however: that higher education institutions should file damage claims against big tech companies.


Who is Marleen Stikker?
Internet pioneer Marleen Stikker (aged 61) spoke about the future of education at the opening of the academic year at Erasmus University Rotterdam. 

She is the director of the Amsterdam-based Waag Futurelab, a design and research centre focusing on technology and society. She was also one of the founders of De Digitale Stad (The Digital City, Ed.), an online community and social media platform. Currently, she is a member of the board of the Advisory Council for Science, Technology and Innovation (AWTI).

Her latest book, Het internet is stuk – maar we kunnen het repareren (The internet is broken – but we can fix it, Ed.) was published in 2019 and discusses how the Internet is influenced by big companies. She is a Professor of Practice at Amsterdam University of Applied Sciences and holds an honorary doctorate from VU Amsterdam.


According to Stikker, there are two movements when it comes to AI criticism. “The first one talks of existential risks, claiming that AI will subjugate humans in the future. The members of this movement mostly come from the tech industry itself, such as Sam Altman of OpenAI, who travels all over Europe proclaiming that humanity is at stake.”

Stikker disagrees with such ideas because she finds them unscientific. “They believe that AI surpasses human intelligence and pretend it is a power in its own right. Human intelligence is many times more complex and ingenious than computing.” Moreover, Stikker feels that these arguments distract people from what is actually going on. “People like Altman are publicly calling for regulations, while his company is lobbying against AI legislation in Europe.” In her view, this "critical" movement is mostly a form of advertising. “The AI industry is promoting the mystification of AI and instilling fear. Society no longer knows how to act, so it is seeking help from the same prophets of doom who unleashed AI on us.”

Major issues
As for the second movement, Stikker says that “it focuses on the major issues and problems that emerge from the use of AI in practice. They argue that we must deal with these issues now before AI undermines our democracy and self-determination. There is no time to waste. We must not allow ourselves to be frightened. Instead, we should ask questions such as who owns the data AI is learning from, as that is a power issue. Scientific and cultural productions have been purloined on improper grounds. There are only a few parties with enough capital to do that, and they are all private parties with specific interests.”

Stikker warns that this new form of AI (generative AI) is already being built into social media, work processes and educational programmes. “Many people act as if it’s all inevitable, saying that we can’t afford to ignore AI because that would be unfair to students. But is that really true? Have we investigated this? Why are we allowing this technology to be thrown into the world without any kind of procedures or monitored trials?”

The problem is not that technology catalyses significant changes, explains Stikker. “Of course, we know that and we know it always will [catalyse changes, Ed.]. Nor is it a problem that AI presents us with all kinds of fundamental questions. After all, those are interesting questions: What is knowledge? What is correlation? What bias can be found in the data? If you ask ChatGPT a question about a subject you know a great deal about, you’ll see what kind of nonsense can be generated.”

According to Stikker, the problem is that the power lies with parties that, up until now, have shown that they cannot handle such power well. “They have appropriated data that doesn’t belong to them. That's the reason why they are getting all those lawsuits. Everyone is going to court because those companies have collected all kinds of data from websites and they are not bothered about intellectual property or the public domain. Basically, they are privatising data that doesn’t belong to them. They looted Waag Futurelab’s website, for instance, even though we have a Creative Commons licence, which prohibits commercial use. Media companies such as The Guardian are now blocking companies like OpenAI because they steal their content. Universities should do the same.”

Food and pharmaceuticals
People can find all that information themselves, but when an AI programme processes it, Stikker says the effects are of a different order. “Just think of the effects of disinformation. It’s one problem on top of another. We would never accept such a thing if it concerned innovations in the pharmaceutical industry, for example. After all, this industry has deployed certain protection mechanisms: they conduct an extensive trial, carefully monitor the effects, and there is an ethics committee. The pharmaceutical industry also has supervisory bodies like the Medicines Evaluation Board (Dutch acronym: CBG) and the European Medicines Agency (EMA). So, when it comes to innovations in the pharmaceutical industry, we do not say: 'We have to go along with them right away, otherwise we will miss the boat.'”

According to Stikker, students occasionally using ChatGPT to write their essays are not the biggest threat posed by AI to the education sector. In her view, this is a secondary problem. “I wish the education sector would say: 'Whose technology is it anyway?' The education sector should help determine what is and isn’t possible with ChatGPT. Our current reflex is the wrong one: 'Oh dear, how is this affecting our scientific methodology and our tests?' We are far too compliant.”

In fact, Stikker believes that the scientific community and the education sector should sue the tech companies. “They have absorbed scientific knowledge on improper grounds and are putting a flawed product on the market that is damaging higher education.”

In addition, Stikker feels that the problem was foreseeable but companies did not take heed of this. “All the work that we carry out in the education and research sectors has been affected. How do you maintain the integrity of science? The costs are huge. I think it would be interesting to explore whether a claim can be made.”

No ban
Stikker emphasises that this doesn't mean that she is against this new form of AI. “I am not advocating a ban on AI. But we do need to get a grip on the process by which we introduce new technology into society.” She also acknowledges that AI raises all sorts of interesting questions. “That’s exactly what I like about it. We need to discuss this together in the education sector; perhaps we should start testing things in a different way. Now, students are often writing essays along the lines of ‘A says this, B says that, and my conclusion is’. Maybe we could do this differently, putting more emphasis on creative writing rather than academic writing. Besides, are we asking the right questions? Wouldn’t it be better to ask students to design something rather than write an essay? Doing so is fairly common in technical studies but it could also be done in the social sciences and humanities.”

Data and Truth
This new technology also raises interesting questions about facts and the truth. “It involves complex issues that we could discuss with students. It is crucial that they understand that data and algorithms are not neutral. We often skip these questions, even though they are fundamental to understanding what we are working with. Other fields of study, such as economics, do that too. They ask themselves how one measures prosperity, for instance.”

According to Stikker, another important question is what we define as intelligence. “It is called artificial intelligence but it is actually computing, i.e. data processing. That’s actually what I find most exciting about AI: the assumptions about intelligence and awareness. Why do we think we can reduce reality to data? Programmes always include the warning that they could be wrong. This way, they are evading their responsibility, which is something you can’t do as a human being.”

She thinks that different scientific fields should be cooperating to better understand all this. “At the end of the day, this is not a question of classifying disciplines like: 'This is natural sciences, this is humanities and this is social sciences.' Instead, we should look at how issues are linked with one another, from a range of disciplines. Students have to learn to question those algorithms together.”

Moreover, Stikker proposes that funding for AI research should be revamped. “Right now, we can only conduct large-scale research if the business community helps to fund it. A trade union does not have the resources to finance this type of research, for example, which affects the question posed. The victims of the Dutch childcare benefits scandal cannot access data or AI systems to monitor the government.”

Stikker is a member of the board of the Advisory Council for Science, Technology and Innovation (AWTI). Asked whether this advisory body could deal with this topic, Stikker stresses that she is speaking in a personal capacity and not on behalf of the AWTI. “But several recommendations are certainly in line with what I've said.”

She is referring to a recent report on "recognition and rewards" and the quality of science. Furthermore, the AWTI is working on an advisory report on innovation in the social sciences and humanities, set to be published early next year. “These disciplines cannot ignore AI, either.”

Language model
Last but not least, as far as Stikker is concerned, academia needs to explain to students that generative AI is a language model. “It generates text and images based on computing, but there is no real meaning. The models produce a sort of Reader’s Digest of various data sources, not the truth.”

Article by Bas Belleman, from the Higher Education News Agency (Dutch acronym: HOP).