Münster (upm)
Dr. Christian Grimme (© private)

A vision for the future: automatic recognition of fake news

"Facebook deploys a host of checkers to detect fake news." / A guest commentary by Dr. Christian Grimme

Fake news – or, to put it simply, lies – is certainly not a phenomenon peculiar to the modern, digital world. Nor is using it to achieve ideological, political or economic aims anything new. Nevertheless, the issue of fake news is at the centre of public debate – at the latest since the election of the current President of the USA, Donald Trump – and it is associated directly with digitalisation and social media. Social media are usually the first channels on which rumours, lies and fake news appear; from there they are distributed widely and find their way into public debate and awareness. You will no doubt have heard how, in 2015, Chancellor Angela Merkel supposedly had her photo taken with the Brussels assassin, whom she had let into the country as a refugee. This is, of course, an absurd piece of fake news. But everyone who saw it remembers the accompanying photo. Worse still, in many cases the photo brings back the memory of the fake news item even when it appears outside that false context.

The strategy behind this story is nothing new. Much of it would, with a great deal of effort, have been possible even 200 years ago. What is new is that far less organisation, technology and manpower are needed today. And it is precisely this that makes fake news seem so dangerous to us in the “Digital Age”.

The openness and anonymity provided by social networks make possible a great diversity and freedom of opinion, as well as protection wherever the free expression of opinion is dangerous in the “analogue” world. But, to the same extent, they offer opportunities for abuse. The abstract nature of a simple user account and, at the same time, the availability of programming interfaces to social networks make it possible to spread and duplicate all types of content in huge amounts – sometimes even automatically. Unlike the (likewise automated) distribution of spam mails, the aim is not to reach as many – mostly uninterested – users as possible by means of massive replication. Rather, the user metadata freely available on social networks, together with the networking of interest groups, permit content to be distributed in a highly targeted way. If a (usually ideologically motivated) piece of fake news is placed in a suitable environment, it is often forwarded unchecked – or even quite deliberately.
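How little effort such automated distribution takes can be shown in a few lines of Python. The sketch below is purely illustrative – the endpoint and access tokens are hypothetical placeholders, not a real network’s API – but the pattern is exactly what programming interfaces make possible: one message, replicated across many accounts by a simple loop.

    # Illustrative sketch only: the URL and tokens are hypothetical
    # placeholders, not a real social network's API.
    import requests

    API_URL = "https://api.example-network.test/v1/posts"  # hypothetical
    ACCOUNT_TOKENS = ["token-account-1", "token-account-2", "token-account-3"]

    message = "A piece of content to be amplified"

    # One loop is enough to replicate the same message across accounts.
    for token in ACCOUNT_TOKENS:
        requests.post(
            API_URL,
            json={"text": message},
            headers={"Authorization": f"Bearer {token}"},
            timeout=10,
        )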

© fotolia.com/Coloures Pic
The path taken by fake news does not, however, end there. It really becomes important when it makes the jump from social media to those media which work with editorial staff, are often considered trustworthy and have a large reach. This jump succeeds because topics from social media increasingly serve as triggers for journalistic stories. And it is not rare for social media communities to be seen as a group representing society.

The question remains: can we not protect ourselves from fake news by using technology? Automatic recognition of fake news would mean being able to state mechanically whether the content of a news item is true or false. At present there is no method of doing this reliably – nor is one in sight. Not for nothing does Facebook deploy a host of checkers to detect fake news. Demonstrating the existence of so-called social bots is not always effective either. Even when automated profiles are discovered, it is, as a rule, not clear whether they are part of a campaign or simply utility software. And in any case, not all campaigns are carried out in a fully automated way.
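To give an idea of why bot detection remains ambiguous, here is a deliberately simplified heuristic – an illustration under simple assumptions, not Facebook’s method or a production detector. It flags accounts whose posting intervals are machine-like in their regularity; the catch, as described above, is that a perfectly regular account may just as well be a harmless utility, such as a news-feed bot.

    # Simplified illustration: regular posting intervals suggest
    # automation, but cannot distinguish a propaganda bot from a
    # harmless utility such as a news-feed bot.
    from statistics import mean, stdev

    def regularity_score(post_times):
        """Coefficient of variation of the gaps between posts.
        Values near 0 mean machine-like regularity; human posting
        tends to be bursty and therefore scores much higher."""
        gaps = [b - a for a, b in zip(post_times, post_times[1:])]
        m = mean(gaps)
        return stdev(gaps) / m if m > 0 else 0.0

    # Hypothetical timestamps in seconds:
    scheduler = [0, 600, 1201, 1799, 2400, 3002]  # posts every ~10 min
    human = [0, 40, 2500, 2550, 9000, 20000]      # bursty

    print(regularity_score(scheduler))  # close to 0 -> suspiciously regular
    print(regularity_score(human))      # large -> human-like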

One universally applicable approach to identifying automation and fake news would appear to be the detection of campaigns themselves. If the existence of a campaign is proven, both its content and the players involved can easily be extracted and checked. So far, this approach has hardly been researched – which makes it a highly interesting subject for future work.
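A minimal sketch of this idea might look as follows – an illustration under simple assumptions, not the method developed in PropStop. It treats many distinct accounts publishing near-identical text within a short time window as a candidate campaign, whose content and accounts could then be extracted and checked.

    # Illustrative sketch: group near-identical messages; if many
    # distinct accounts post the same text within a short window,
    # report them as a candidate campaign for closer inspection.
    from collections import defaultdict

    def candidate_campaigns(posts, window=3600, min_accounts=3):
        """posts: iterable of (account, timestamp_in_seconds, text)."""
        groups = defaultdict(list)
        for account, ts, text in posts:
            key = " ".join(text.lower().split())  # crude normalisation
            groups[key].append((account, ts))

        campaigns = []
        for text, items in groups.items():
            items.sort(key=lambda item: item[1])
            accounts = {account for account, _ in items}
            span = items[-1][1] - items[0][1]
            if len(accounts) >= min_accounts and span <= window:
                campaigns.append((text, sorted(accounts)))
        return campaigns

    # Hypothetical example: three accounts spreading the same text
    posts = [
        ("acct_a", 100, "Shocking claim X!"),
        ("acct_b", 160, "shocking   claim x!"),
        ("acct_c", 400, "Shocking claim X!"),
        ("acct_d", 50000, "Unrelated post"),
    ]
    print(candidate_campaigns(posts))
    # [('shocking claim x!', ['acct_a', 'acct_b', 'acct_c'])]

Real campaigns are, of course, noisier – texts are paraphrased and timing is staggered – which is precisely why their detection remains an open research question.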


Dr. Christian Grimme is a post-doc at the Department of Information Systems at Münster University. He works on the sub-project “The Coordination of Simulation, Recognition and Repulsion of Hidden Propaganda Attacks” within the collaborative project “PropStop”, funded by the German Federal Ministry of Education and Research, which deals with recognizing, proving and combating hidden propaganda attacks through new online media. Christian Grimme is the principal investigator of “PropStop”.

