Risks from AI systems

When it comes to artificial intelligence (AI), many people think of the possibility of a "superintelligence" that controls humans. This scenario is rather unlikely. However, there are real risks associated with the increasing use of artificial intelligence. For example, algorithms may influence our perception of content and thus also the basis on which we form our opinions. Moreover, AI systems can only ever be as good as the quality of their training data.

Discrimination

An AI model learns from training and test data. If the data already contains a distorted representation of reality (a so-called "bias"), a distorted model is created as well. For example, if the data contains racist, homophobic or sexist content, this will be reflected in the AI system. When processing new data, the model then falls back on these learned biases and derives its decisions from them.

Example

The company Amazon used an artificial intelligence to sift through application documents and invite suitable applicants. To train the AI, it was given applications from previous years along with information about which applicants had been invited. The result: the AI only suggested men for an invitation. How could this happen? The AI had learned from the training data that men had been invited particularly frequently in the past, and so it drew the seemingly logical conclusion that men are the better candidates. Due to the sexist selection of applicants in the past, the training data was "contaminated", so to speak. An AI that is trained with such data inevitably reproduces the same sexist behavior.
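The mechanism behind the Amazon example can be illustrated with a deliberately simplified sketch. The data, variable names and decision rule below are entirely hypothetical and not Amazon's actual system; the point is only to show how a model trained on biased past decisions reproduces that bias while ignoring actual qualifications.

```python
from collections import Counter

# Hypothetical historical hiring data: (gender, skill_score, invited).
# In this invented past, men were invited regardless of skill.
history = [
    ("m", 7, True), ("m", 5, True), ("m", 6, True),
    ("f", 9, False), ("f", 8, False), ("m", 4, True),
    ("f", 7, False), ("m", 6, True),
]

def train(data):
    """Learn, per gender, how often applicants were invited in the past."""
    invited, total = Counter(), Counter()
    for gender, _skill, was_invited in data:
        total[gender] += 1
        if was_invited:
            invited[gender] += 1
    return {g: invited[g] / total[g] for g in total}

def predict(model, gender):
    """Invite whenever the historical invitation rate exceeds 50%."""
    return model.get(gender, 0.0) > 0.5

model = train(history)
# The model never looks at skill at all; it simply mirrors the
# discriminatory pattern baked into the training data:
print(predict(model, "m"))  # True  - always invited
print(predict(model, "f"))  # False - never invited, despite higher scores
```

Real AI systems are far more complex, but the failure mode is the same: the model optimizes for matching past decisions, and if those decisions were biased, the bias becomes part of the learned behavior.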

Disinformation

If content is deliberately distorted, reinterpreted or invented and disseminated in a targeted manner, this is known as disinformation. The aim is to influence public opinion on certain topics. In contrast to misinformation, it involves a deliberate intention to deceive in order to mislead and manipulate people. Disinformation deliberately stirs up sentiment, polarizes and can push society toward tipping points that threaten social cohesion.

Disinformation has been around on the Internet for a long time - so what's new?

For one thing, the quantity of disinformation has changed. With the help of generative AI tools, fake news articles and social media posts can be created en masse. Social bots can be used to artificially generate reach by making such content visible and creating the impression of popularity and support. Social bots are automated computer programs that pretend to be real people, distributing likes, commenting on posts or sharing them.

For another, the quality of disinformation has changed. This is partly because the disinformation circulating on the web can feed back into the AI models that then reproduce such content, and partly because so-called deepfakes are becoming more and more convincing. Using generative tools, deceptively realistic photos, videos or audio files can be created. This content makes it appear that people did or said things that never really happened. Such content can be extremely manipulative and misleading.

Why is this dangerous?

The foundation of a strong and vibrant democracy is an informed population. Disinformation purposefully destroys trust in credible information. This is dangerous because people may form their opinions based on false information and lose trust in traditional media and democratic institutions. As a result, there is also a lack of orientation as to which information can actually still be believed and which cannot. This can unsettle people, increase fears and provide a breeding ground for conspiracy narratives.