AI Perspectives: When do we blindly trust AI?

🎤 DESY CONNECT EVENTS | 14 June 2023

With the big hype about ChatGPT, artificial intelligence (AI) has arrived in our everyday lives and at DESY, too, AI is used in research. AI scares many people, others can't get enough of it - but very few actually understand it. It is about time we shed some light on the broad field of AI. We have therefore dedicated a new event series to this exciting topic, because "we want to make AI understandable," says Katja Kroschewski, Head of Relation Management at DESY. The kick-off event for "AI Perspectives" took place on 14 June 2023 on the occasion of the Helmholtz AI Conference and the Helmholtz Imaging Conference in Hamburg.


AI Perspectives provides an interdisciplinary forum to explore aspects of AI. Image: DALL·E 2 and DESY-PR


The illustration for AI Perspectives was created with the artificial intelligence DALL·E 2, an algorithm that generates realistic images and artwork from descriptions in natural language.

We are on the threshold of the next IT evolution: the use of artificial intelligence

AI not only affects the work at DESY and other research institutions of the Helmholtz Association; it will also have a lasting impact on our society. Together with experts from the natural sciences, technology, philosophy, music and art, we therefore wanted to explore the question: When can we trust AI? How does an AI actually arrive at a decision? On what mechanisms is that decision based? What happens when the machine is trained with unnoticed biases? Two event formats were created: one for AI experts and one for the public.

 

Circle of chairs on a bumper-car track: the first AI Perspectives exchange of experts was held in an unusual location

Two conferences, one joint day: that was the start of the combined "Unconference" of the Helmholtz AI Conference and the Helmholtz Imaging Conference on 14 June 2023 on the grounds of the Kuppel Hamburg. Just a stone's throw from DESY's side entrance, a spacious event area stretches out, reminiscent of something between a circus and a fairground. In the midst of old carousels, our interdisciplinary experts' workshop took place on the track of a bumper-car ride.
For those who might not know: an "unconference" is a conference format in which the participants, not the organisers, set the topics they want to talk about. For our interdisciplinary workshop on the topic of Explainable AI (XAI), this was the perfect setting, and the special location inspired our discussion.


Discourse between bumper cars: The interdisciplinary workshop on AI brought together participants from physics, information, music, philosophy, art, theatre, biology and mathematics. Image: © Helmholtz Imaging, Jörg Modrow


Discussing in unusual places at the "Unconference". Here on a roundabout. Image: © Helmholtz Imaging, Jörg Modrow.

"Here I hear the subject of fear for the first time. That doesn't exist in my science bubble."

For the initiators of AI Perspectives (Katja Kroschewski, Miriam Huckschlag and Klaus Ehret from DESY, Knut Sander and Katharina Kriegel from Helmholtz Imaging, and theatre director Jari Niesner), it was important to bring together different ways of thinking and working in order to approach the topic of XAI from as many angles as possible, to inspire one another and to learn from each other.

What challenges does the creative sector face when AI now paints pictures and composes songs? How can XAI help create transparency? Two of the workshop participants, musician Gregory Beller and artist Anissa Tavara, use AI in their artistic work. And yet there is also fear of the unknown consequences of AI. How does AI change the perception of art and artists, of the creative process and the concept of ownership?

Another attendee, software engineer Lukas Klein from the German Cancer Research Centre in Heidelberg, commented: "Here I hear the subject of fear for the first time. That doesn't exist in my science bubble. Sure, the big debates like the Statement on AI Risk are familiar, but researchers are currently buoyed by the enthusiasm that so many helpful applications are possible, e.g. in radiology or in weather and climate prediction."

 

Does an AI need values? Interdisciplinary cooperation as the key to a safer future with AI

It was clear to the workshop participants that the greatest challenge is to make AI programmes explainable, transparent and safe from misuse. This requires cooperation between various disciplines, such as philosophy, mathematics, neuroscience and psychology. A real danger arises, for example, when AI makes decisions on its own without questioning the consequences. "There are entire research areas that deal with instilling values in AI," says philosopher Timo Freiesleben from the University of Tübingen. There is still a long way to go, and many questions remain unanswered.


Participants of our workshop at the "Unconference" in conversation. Image: © Helmholtz Imaging, Jörg Modrow

So what is Explainable AI (XAI) exactly?

Explainable AI, or XAI for short, refers to a set of methods and techniques developed to make machine learning models more understandable and explainable. In traditional AI, the decisions and predictions of models are often a "black box": you don't understand why the model arrived at a particular decision. Explainable AI, on the other hand, aims to bring more transparency and explainability into the decision-making process, ultimately generating trust in the decision.

XAI provides tools and methods to explain the inner workings of AI systems, identify factors that led to certain predictions, and understand the impact of input variables on model predictions. This allows users and developers to check the reliability and fairness of models, identify possible biases and improve the performance of models.

XAI is particularly important in areas such as health, finance and law, where transparent and accountable decisions are very important.
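One widely used XAI technique of the kind described above is permutation feature importance: shuffle one input variable at a time and measure how much the model's performance drops. A minimal sketch in Python, assuming scikit-learn is available (the model and data here are illustrative, not taken from the event):

```python
# Permutation feature importance: a simple XAI method that reveals
# which input variables a trained model actually relies on.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic data: only the first 3 of 6 features carry signal
# (shuffle=False keeps the informative features in columns 0-2).
X, y = make_classification(n_samples=500, n_features=6,
                           n_informative=3, n_redundant=0,
                           shuffle=False, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much accuracy drops:
# a large drop means the model depended on that feature.
result = permutation_importance(model, X, y, n_repeats=10,
                                random_state=0)

for i, importance in enumerate(result.importances_mean):
    print(f"feature {i}: {importance:.3f}")
```

Running this shows high importance for the three informative features and near-zero importance for the noise features, making the model's "black box" behaviour at least partially inspectable.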

"How much responsibility am I giving away?"

After a discussion-filled afternoon in summery temperatures, the evening continued with an interdisciplinary panel discussion for the public in the DESY lecture hall. In his welcoming speech, Edgar Weckert, DESY Director in charge of Photon Science, raised the question: "How much responsibility am I giving away?" In her opening remarks, DESYan Judith Katzy of the Helmholtz AI Steering Committee pointed to the fantastic machine learning methods "that were simply unthinkable in the past. We can address problems much faster with AI in research." Theatre director Jari Niesner moderated the evening. Joining him on the panel were musician Gregory Beller, philosopher Timo Freiesleben, software expert Lukas Klein, DESY physicist Isabell Melzer-Pellmann and IT specialist Peter Steinbach.


"AI is good at remembering - humans are good at generalising"

Lukas Klein from the German Cancer Research Centre in Heidelberg summarised one of his findings as follows: "AI is strong in applications that require a lot of knowledge and memory. AI doesn't forget anything and can recall everything quickly. So AI is good at remembering - humans are good at generalising. Because if I see an unknown animal with sabre teeth, I know: danger! But we don't yet know exactly how the brain does that." All participants in the discussion considered the democratisation of AI a decisive factor, so that as many people as possible can work on explainable, comprehensible AI software and participate in it. We shouldn't have to rely on a company saying, "Yes, it has all been properly trained." "Fortunately, open source beats the corporations anyway. The corporations had to find that out for themselves; more people are simply faster," Peter Steinbach summed up, finding successful closing words to the applause of the audience.

You can also find the recording of the evening on the DESY YouTube channel.

Roundtable on the topic of Explainable AI
During the roundtable, LAION was mentioned, an AI initiative that is 100% open source.

Info on the participants of the AI panel

Gregory Beller: musician, composer, lecturer, scientist and sound artist, University of Music and Theatre, Hamburg
Timo Freiesleben: postdoc in Machine Learning in the Science Cluster of the University of Tübingen
Lukas Klein: doctoral fellow in the Interactive Machine Learning Group, German Cancer Research Center, Heidelberg
Isabell Melzer-Pellmann: leader of the CMS-DESY Group, Deutsches Elektronen-Synchrotron DESY, Hamburg
Jari Niesner: theatre director, dramaturge, author and artist, Hamburg
Peter Steinbach: Team Lead AI Consulting, Helmholtz-Centre Dresden-Rossendorf, Dresden

DESY CONNECT always on board: We were delighted to be part of the Helmholtz AI Conference with a stand drawing attention to our alumni network. We are also excited to host event formats on current topics, such as this discussion evening on the use of AI. Here in action: our students Ridha Mohamed Abdallahi and Vladislav Litau. Image: © Miriam Huckschlag