Why We Click on What Makes Us Angry
Which mechanisms of human cognition do algorithms exploit most effectively?
Ada Rhode: Algorithms leverage well-known mechanisms of human perception. People react quite strongly to negative emotions such as anger or outrage. Furthermore, we are more likely to believe information if we hear it repeatedly, even if the facts contradict it.
Social media amplifies these effects by prioritizing content that triggers strong emotions. As a result, such content receives more attention and is shared more frequently.
How does "confirmation bias" affect this?
Ada Rhode: Confirmation bias describes the tendency to believe information that confirms our existing beliefs. We tend to ignore evidence that contradicts our assumptions or dismiss it as unreliable.
Algorithms amplify this effect because they prioritize displaying content that aligns with our past behaviour and interests.
How would you assess the extent to which algorithms drive the spread of disinformation?
Ada Rhode: Several factors contribute to the spread of disinformation. The sheer volume of digital information is growing rapidly, while our attention is limited. That’s why algorithms curate content for us.
They do this by analyzing our behaviour: clicks, comments, or the time we spend on a certain post. Content that triggers strong reactions is displayed more frequently.
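The engagement-driven ranking described here can be sketched in a few lines. This is a deliberately simplified illustration, not any platform's actual algorithm: the signal weights, field names, and example posts are all invented. It only shows the core dynamic from the interview, namely that scoring posts by behavioural signals pushes content that provokes strong reactions to the top of the feed.

```python
# Hypothetical sketch of engagement-based feed ranking. Posts are scored
# by behavioural signals (clicks, comments, dwell time); content that
# triggers strong reactions rises to the top. All weights are invented.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    clicks: int
    comments: int
    dwell_seconds: float  # time users spend on the post

def engagement_score(post: Post) -> float:
    # Comments signal a stronger reaction than clicks, so they weigh more.
    return 1.0 * post.clicks + 3.0 * post.comments + 0.1 * post.dwell_seconds

def rank_feed(posts: list[Post]) -> list[Post]:
    # Highest-engagement content is displayed first, and thus seen more often.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("calm-explainer", clicks=40, comments=2, dwell_seconds=300.0),
    Post("outrage-bait", clicks=35, comments=30, dwell_seconds=200.0),
])
print([p.post_id for p in feed])  # the outrage post ranks first
```

Note the feedback loop this creates: because the outrage post is shown first, it collects even more clicks and comments, which raises its score further.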
In addition, political or economic actors deliberately spread polarizing messages or misinformation. People are less likely to question such statements if they are repeated frequently or align with their existing beliefs.
We often don’t have the time to calmly reflect or verify information. We tend to make most of our decisions intuitively.
You describe yourself as a future designer. In your opinion, how would algorithms need to change to turn today’s social media applications into media applications that are constructive and genuinely diverse?
Ada Rhode: The term "future designer" comes from my studies. In 2024, I completed the part-time master’s program in Future Design at Coburg University. It describes an approach in which you analyze social and technological developments and derive design options for the future from them.
During my studies, I learned to systematically analyze problems and ask about their underlying causes, rather than just treating symptoms. An important approach in this context is human-centered design: First, you ask what needs people actually have and how technologies can be designed to help them in their daily lives.
Algorithms that follow a constructive principle could be designed not only to maximize attention but also to improve the quality of public discussions. This could involve platforms placing greater emphasis on content that describes solutions, presents different perspectives, or fosters objective discussions.
In addition, they could more often show users posts that do not fully align with their previous views. People would then encounter different perspectives more frequently, which is necessary if we, as a society, want to negotiate how we live together.
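The reranking the interviewee proposes could look roughly like this. Again a hypothetical sketch, not a real system: the viewpoint labels, the blend factor, and the familiarity measure are all invented. The idea is simply to blend an engagement score with a diversity bonus, so that posts from viewpoints the user has rarely seen are boosted rather than buried.

```python
# Hypothetical sketch of diversity-aware reranking: each post's score
# blends engagement with a bonus for viewpoints the user has seen least.
# Labels, weights, and the 50/50 blend are illustrative assumptions.
from collections import Counter

def rerank(posts, user_history, diversity_weight=0.5):
    """posts: list of (post_id, viewpoint, engagement) tuples.
    user_history: viewpoint labels the user previously engaged with."""
    seen = Counter(user_history)
    total = sum(seen.values()) or 1  # avoid division by zero for new users

    def score(post):
        _, viewpoint, engagement = post
        # Fraction of the user's history sharing this viewpoint;
        # unfamiliar viewpoints get a larger diversity bonus.
        familiarity = seen[viewpoint] / total
        return (1 - diversity_weight) * engagement + diversity_weight * (1 - familiarity)

    return sorted(posts, key=score, reverse=True)

history = ["A", "A", "A", "B"]  # this user mostly sees viewpoint A
feed = rerank([("p1", "A", 0.9), ("p2", "B", 0.8), ("p3", "C", 0.7)], history)
print([p[0] for p in feed])  # the unfamiliar viewpoints now rank ahead of A
```

With `diversity_weight=0.0` this collapses back to pure engagement ranking; the single parameter is the design lever the interview argues platforms should turn.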