"We need to build trust in artificial intelligence"
Artificial intelligence (AI) gives rise to both euphoria and fear. Why this is so, how important AI is today and will be in the future, and whether AI hasn’t in fact existed for many years already: Norbert Robers discussed these questions and more with Heike Trautmann, Professor of Business Information Systems.
For many people, AI is probably something very modern. But isn’t it already old hat, when we think back for example to the so-called Turing machine or the chess computers of the 1970s?
It’s true that AI today builds on such methods – and the term itself was coined in the mid-1950s. But research in this field has progressed enormously over the past few decades, in both basic and applications-oriented research. Currently we speak of a ‘third wave of AI’, which focuses, for example, on self-learning systems, explainability, the ability to communicate, and methods of automated reasoning.
Parallel to AI, there are expressions being used such as ‘machine learning’ or ‘intelligent systems’. What exactly is special about AI that distinguishes it from other phenomena?
There is a great deal of confusion about what AI actually is. People’s ideas range from pure automation and routine processes – or simply equating AI with machine learning or even just deep learning – to, at the other extreme, the vision of an all-powerful technology surpassing human intelligence. This is why the European AI initiative CLAIRE works to develop a common understanding of AI, with the aim of advancing people-centred AI in Europe beyond research networks alone. For me, AI is the interplay of several areas, just as human intelligence rests on the interaction of various processes and competences. Alongside machine learning, its central components are the optimisation of frequently conflicting objectives, logical reasoning, language processing and robotics.
In connection with AI, some people have hopes regarding efficiency and digitalisation; others, in contrast, fear a devastation of the labour market. Why is it that AI polarises opinions so much?
The main reason is the patchy knowledge in circulation about AI. Uncertainty about what AI entails, together with media reporting that frequently polarises opinion, helps nourish fears of an alleged super-intelligence threatening to achieve world domination. What’s important, therefore, is transferring scientific findings to society: information, education and maximum transparency. At Münster University, for example, we are researching methods of making the decisions taken by AI systems comprehensible, and thereby strengthening trust in AI …
… which can, however, be quickly destroyed again when for example there are accidents involving self-driving cars.
That’s right. So for this reason it’s important to promote high-quality research in order to avoid these weaknesses and to increase trust in artificial intelligence.
In connection with AI, some people say it’s a curse, others say it’s a blessing. But isn’t this way of thinking – in terms of black and white – counterproductive?
If we succeed globally in developing AI technologies geared towards ethical principles – technologies which complement human intelligence in a purposeful way – then AI will bring great advances for society, business and, in particular, climate research. To that extent, it will be a blessing.
AI is not of course dangerous in itself. But isn’t it true that people will try to get the maximum out of this technology – and, in doing so, pave the way for the threat of the “dominance of machines”?
Every tool can be used or abused. What’s important are initiatives which promote a worldwide understanding of ethical guidelines which have to be accepted and implemented. For example, the development of autonomous weapons systems is highly problematic – which is why the purposeful regulation of AI technologies, both in research and in applications, is so important, taking into account standards relating to ethics, data protection and IT security.
In what areas of society will AI (presumably) develop the greatest benefits?
The rapid development of a vaccine to combat the Covid pandemic would not have been possible without the use of AI methods. Learning from enormous quantities of data in medicine – in order to make diagnoses, develop medicines and prevent diseases – has tremendous potential. Another key area is climate research: without the insights AI provides, we would be faced with a task – saving our planet – which we would not be able to solve at all.
And what will artificial intelligence never be capable of, in contrast to human intelligence?
Artificial intelligence will not be able to reproduce human abilities such as empathy, intuition and the capacity to make decisions based on experience and on complex sets of facts. The question is also whether we would want it to. In automated language processing and generation, for example, irony and sarcasm are currently difficult to reproduce.
Do you have a kind of favourite example of a successful application of AI?
What impresses me are applications relevant to society, for example in climate or medical research. In my group, we’re working with communication specialists at Münster University to develop methods for detecting deliberate disinformation and propaganda campaigns in online media such as Twitter, embedded in the University’s Topical Program ‘Algorithmisation and Social Interaction’. I’m also impressed by facial recognition in smartphones, translators such as DeepL, and language-processing models such as GPT which, on the basis of deep learning, automatically generate, summarise and meaningfully supplement texts in their factual context.
What’s your prediction? Will AI fundamentally dominate our lives – and, if so, when will that be?
It’s already happening. Just think what would happen if AI technology ceased to exist from one day to the next. There is hardly any area of our social lives that would not be massively affected – and that would be anywhere near as able to function as it does today.
This interview was first published in the University newspaper wissen|leben, No. 7, 10 November 2021.