Written by: Femke van der Bij

What is artificial intelligence? Where do you encounter it and how smart is a computer really? These questions take centre stage during an interactive lesson by UMCG researcher Mirjam Plantinga, who visits primary school De Mienskip in Buitenpost together with Bas Altenburg from 8D Research + Design = Impact. Together they introduce students from years 6, 7 and 8 to the world of artificial intelligence (AI).

‘I am a researcher at UMCG. Perhaps not in the way you might expect, because I don’t research diseases or the human body’, Plantinga begins the lesson. ‘I mainly look at how we can make good use of smart technology in the hospital.’ She explains what artificial intelligence is: computers that learn from data and recognise patterns. ‘In healthcare it can help with things that normally take a lot of time,’ she says. ‘Think of reviewing X-rays or sorting through large amounts of patient information. This gives doctors a clearer overview more quickly, so they can focus on what really matters: helping patients.’

Together with the 8D team, Plantinga developed a serious game for primary schools about the use of AI in healthcare. Altenburg kicks off the game with a clear explanation. ‘Okay, listen up. You are going to help a hospital. That hospital is called VITAI and all sorts of things are going wrong there. Computers are giving incorrect advice, information is not always accurate and doctors are unsure what to do. It is up to you to solve that.’ The first assignment appears on the screen.

‘You will work in teams. Half of you will be on the laptop, the other half will receive paper assignments. But pay attention: nobody has all the information. Only by working well together and sharing everything with each other will you get further.’ The students immediately pull their chairs closer together. There is whispering, pointing and eager discussion.

During the escape room, students work together to crack codes and help hospital VITAI
Which photo does not belong?

Four photos of pandas appear on the screen. Well… three pandas. One image shows a person in a panda costume. ‘That one does not belong!’ a student calls out immediately. But as it turns out, an AI system sometimes recognises the person in costume as a panda too. Plantinga explains: ‘AI learns from examples. If a system has mainly seen images with black-and-white shapes and round eyes, it might think: this looks enough like what I know. But it does not know what a panda really is.’ ‘Compare it to a very fast sorting machine,’ Altenburg adds. ‘It only looks at similarities. It does not think the way you do.’

During one of the assignments, students are shown various examples on the screen. Their task: figure out whether AI is behind each one or not. Pointing at a thermometer, a student calls out: ‘That is definitely not AI, we just have that at home!’ ‘But the checkout scanner really does use AI, it does not just recognise products by itself, does it?’ ‘Facial recognition on phones? Yes, that must be clever!’ ‘And what about the self-driving car?’

What first seemed simple turns out to be not so straightforward at all. It leads to many questions and surprises: how do you actually know whether something is ‘smart’? They discover that AI is often hidden in devices you use every day. At home, in the supermarket or on the street, sometimes in ways you would not have expected straight away.

Why is it important to explain this to children? ‘Everyone is a patient at some point,’ says Plantinga. ‘And these students may well be the doctors of the future. Besides, they often already use AI tools themselves. It is important to understand how that works.’

She notices that many people think AI always tells the truth, or that it is something frightening that will take over everything. Talking about it together builds understanding and encourages more critical questions. When it comes to difficult decisions in healthcare, about treatments or quality of life, human judgment remains essential. Moreover, a doctor must always be able to explain how a system reaches a recommendation; this is called ‘explainable AI’. ‘A doctor must also understand how a model was built,’ she says. ‘With what data? According to what rules? And can I explain this to my patient as well?’

During the game, students call out indignantly: ‘See, that is wrong!’ But a classmate says calmly: ‘We all make mistakes sometimes, that is perfectly fine.’ That is perhaps the most important lesson of the afternoon. People make mistakes, but we also understand why something went wrong: we can look back, adjust course and make a different choice. We think, weigh things up and make conscious decisions. AI can make mistakes too. The difference is that a computer does not understand what it is doing and cannot judge for itself whether an outcome is correct. As one student puts it aptly: you need to look carefully at how you use AI and at whether what the system says is actually right.

At the end of the lesson, the students are asked what grade they would give this AI lesson. Hands shoot up: ‘A nine!’ ‘A ten!’ A fine report card score, but the real message is clear: technology is powerful and useful, but thinking critically and evaluating outcomes, that remains human work. AI is a tool. We humans are the real thinkers.
