In the old days, before the Internet, when people still talked to each other face to face, the world was a better place. You hear that all too often these days. In everyday life, the story goes, you were exposed to all kinds of political opinions. But then came the Internet and social media algorithms. Their primary goal is to give users exactly what they want, which is why they mainly suggest posts, groups, and friends that match our own attitudes. As a result, we supposedly live in a bubble online that admits only what matches our views. The result: an increasingly divided society that is less and less willing to engage in dialogue.
The idea is the brainchild of political activist and entrepreneur Eli Pariser, who popularized the concept of the filter bubble in his 2011 book of the same name. As we increasingly receive personally tailored information on the web, the argument goes, we are being indoctrinated with our own ideas, and there is less room for the serendipitous encounters through which we learn and gain insight. Since then, the view that personalization algorithms on the Internet are downright dangerous for society has become a fixture of public debate.
But media researchers are not sure whether the much-invoked filter bubbles actually exist to the extent assumed. At the very least, the term in its popular interpretation probably paints a distorted picture of how people use the Internet – and of how similar social processes are online and in analog everyday life.
For one thing, journalists and other net-savvy groups tend to assume that the Internet is the population’s primary source of information. This may be true for younger people, and online news consumption is indeed growing steadily. However, people over 50 – a large and, given aging populations worldwide, growing group – spend significantly more time in front of the TV than on the Internet: four times as much, in fact.
Moreover, it is unlikely that the web will displace all other media in the foreseeable future. The results of our media usage research show that older information channels rarely die out completely. After all, people still read newspapers and listen to the radio, even though television has been around for a long time (and podcasting is giving radio unprecedented competition). Even in the age of the Internet, all the other ways we can be exposed to (political) opinions continue to exist.
It is also questionable to what extent people actually encounter a uniform, pre-sorted environment online. It is undisputed that online platforms use filters that show users personalized advertisements – for example, for products they have recently researched online. But something else, something much more powerful, is responsible for homogeneous information environments: conformity and the “mental bubble” inside users’ heads.
On the trail of hidden calculations
The influence of algorithms is generally difficult to study, because the large corporations do not disclose the extent to which they control the flow of information to the users of their services, nor the criteria by which they do so. But we can get to the bottom of these hidden computing operations by taking a few detours. For example, we have started a project to investigate how differently Google presents results to its users. In this long-term project, Internet users run automated Google search queries and feed the results back to us. We find that a large proportion of Google’s results are the same for everyone. For political search terms, such as the 2020 U.S. presidential candidates and the German chancellor candidates in the 2021 federal election, 81.42 percent of all results were identical.
On closer inspection, Google does adjust the results it displays slightly, based on what it knows about a user. But the nature of these adjustments gives little cause for concern in the sense of the filter bubble concept: the most common reason for deviations was that the search engine returned regional hits, such as the local venues where candidates would be appearing in the near future.
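How might one quantify such overlap? Below is a minimal sketch in Python – my own illustration, not the project’s actual analysis pipeline; the function name and the example URLs are invented. It computes the share of result URLs that every user received for the same query:

```python
def shared_result_share(result_lists: list[list[str]]) -> float:
    """Fraction of all distinct result URLs that appear in *every*
    user's result list for the same query - a crude measure of how
    uniform search results are across users."""
    if not result_lists:
        return 0.0
    sets = [set(urls) for urls in result_lists]
    seen_by_anyone = set().union(*sets)            # every URL any user saw
    seen_by_all = sets[0].intersection(*sets[1:])  # URLs all users saw
    return len(seen_by_all) / len(seen_by_anyone) if seen_by_anyone else 0.0

# Hypothetical example: the top results three users received for one query.
results = [
    ["a.example", "b.example", "c.example", "d.example"],
    ["a.example", "b.example", "c.example", "e.example"],  # one regional hit differs
    ["a.example", "b.example", "c.example", "d.example"],
]
print(f"{shared_result_share(results):.0%} of results identical across users")
```

Averaged over many users and queries, a measure of this kind is one plausible way to arrive at figures like the 81.42 percent reported above.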
Things are a bit different at Meta, the company behind the social media platforms Facebook, Instagram, and WhatsApp. Here, users are primarily shown content they have not actively searched for. Meta’s algorithm evaluates, among other things, which preferences you have shared with the platform and, most importantly, whether or not your “friends” or “followers” are discussing a particular post. So what you see depends on who you are connected to online and which posts seem important to them. Anyone who sees only politically one-sided messages on Meta’s platforms most likely has only people with similar attitudes as friends or followers.
So, for a filter bubble to form, two things have to come together (their interplay is illustrated in the sketch after this list):
- algorithms that tailor what a platform shows to the user
- the user’s personal preferences, as expressed, for example, in search queries, friendships, groups, and activities on the platform
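To make this interplay concrete, here is a deliberately simplified toy model in Python – my own illustration, not Meta’s actual ranking code; every name, weight, and data point is invented. Each post is scored by how well it matches the user’s stated preferences plus how strongly the user’s friends engage with it:

```python
from dataclasses import dataclass

@dataclass
class Post:
    topic: str
    stance: str                # e.g. "pro" or "contra"
    friend_engagements: int    # how many of the user's friends liked or commented

def score(preferred_topics: set[str], post: Post) -> float:
    """Toy ranking rule: preference match plus friend engagement."""
    s = 1.0 if post.topic in preferred_topics else 0.0  # ingredient 1: personal preferences
    s += 0.1 * post.friend_engagements                  # ingredient 2: network activity
    return s

preferred = {"migration"}
posts = [
    Post("migration", "contra", friend_engagements=25),  # a like-minded network engages heavily
    Post("migration", "pro", friend_engagements=2),
    Post("sports", "neutral", friend_engagements=10),
]
for p in sorted(posts, key=lambda p: score(preferred, p), reverse=True):
    print(f"{p.topic:<10} {p.stance:<8} score={score(preferred, p):.1f}")
```

Note that the ranking rule itself is politically neutral; the one-sidedness of the resulting feed comes entirely from whom the user is connected to – which is precisely the point.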
It has long been known that we tend to like people who are similar to us – in social status, geographic origin, or political views – a tendency known as homophily. Added to this is the human inclination to absorb information that reinforces an existing opinion, the familiar confirmation bias. Media consumers have always been more interested in news that supports their own views, whether they read a newspaper, watch television, or, increasingly, go online. This is because we prefer to have our social identity confirmed rather than challenged by unwelcome facts. The only thing new about social media is how quickly and easily it lets us network with like-minded people – not just over rare hobbies or obscure musical tastes, but also over extreme (political) positions.
Through these psychological mechanisms, we can create a so-called echo chamber online: a virtual space in which we hear nothing but what we ourselves bring in. The term is rather misleading, however, because it suggests that users actually express themselves politically. In most cases they do not: our observations show that only one in four users participates in public debates under posts, and only a small core – a mere four percent(!) – regularly voices its opinion publicly.
Muted debate
Our current figures show that, for the majority of users, Facebook and Instagram are not a place of political debate. Only one in five people has ended a friendship on these platforms over political differences – and when they did, the disputes mainly concerned moral issues, such as the acceptance of refugees. For most users, then, political agreement with their network contacts is not crucial.
Some research suggests that in our non-digital everyday lives, we actually value being around like-minded people much more than we do online. To examine this, my team and I surveyed people in Europe and the United States about their perceptions of the refugee crisis. We wanted to understand the extent to which political filter bubbles exist, particularly on social media, but also offline. In addition to exploring people’s attitudes toward refugees and their media consumption patterns, we also wanted to know whether people tend to live in heterogeneous or homogeneous information environments in their everyday lives. We asked the following question: “How often are you confronted with the opinions of people who speak positively about the acceptance of refugees? And how often are you confronted with negative views on the subject?”
The largest group (44 percent) were people with heterogeneous information environments who heard opinions from both sides at least occasionally. In contrast, 30 percent lived in a homogeneous environment. Interestingly, homogeneity, a kind of filter bubble, was most prevalent among those who talked to others in person more often. How often and for how long they watched TV or used social media did not matter. In other words, the notion that social media increasingly encapsulates its users in an information bubble is generally not tenable. Rather, studies show that the Internet is often used precisely as a means of obtaining different thoughts and points of view that one might not find in one’s personal environment or family. The Net also serves a similar function for cultural offerings, for example.
Extremists are more likely to be trapped in bubbles
Also interesting: the more extreme a person’s political views, the more homogeneous his or her information environment. This applies especially to right-wing extremists, both online and offline – and here social media does play a crucial role. My team and I were able to show that within Facebook groups on the topics of refugees and building the border wall, the exchange of information is very homogeneous. Again, it is not social media itself that is to blame, but the fact that people with very strong (and therefore mostly one-sided) opinions come together in such groups.
Psychologically, such a unified group can serve an important function: its members are joining forces against what they subjectively perceive to be an injustice. Under these circumstances, it is only human not to weigh all opinions in a balanced way, but above all to articulate one’s own anger. In this way, a shared identity develops.
However, the downside of such associations of radically minded individuals is that information circulating within a closed group is less likely to be scrutinized. Conformity thus becomes a breeding ground for fake news. In fact, studies show that politically extreme people are the most likely to share fake news on Twitter, Facebook, and Instagram. In particular, supporters of Donald Trump and Ron DeSantis, but also of German politicians such as Tino Chrupalla, Alice Weidel, Maximilian Krah, and Sahra Wagenknecht – and staunchly conservative users in general – are the biggest spreaders of fake news.
Some findings are even more pessimistic. Studies show that false information generally spreads faster on social media than accurate information, regardless of the platform. In particular, political stories rated false by independent fact-checking organizations reached more users in less time than accurate stories – according to our analysis, the latter took almost six times as long to reach the same number of people.
This is due to the logic by which users read and forward stories online. Misinformation can deliberately play on many of the news factors – characteristics that make a story seem relevant to its audience. These include, for example, that
- something surprising has happened
- a major loss or damage has occurred
- something has happened in the immediate vicinity
The hallmark of successful fake news is that it cranks up precisely these news factors. Accordingly, the most common user reactions to such stories are fear, disgust, or surprise.
Information overload fuels the success of fake news
This reveals a fundamental problem of the digital public sphere: extreme information overload. Some news providers therefore try to attract attention by any means necessary, because only information that attracts attention stands a chance of being received and processed. Balanced, factual reporting is thus at a disadvantage against radical or scandalous-sounding messages.
A further problem is the so-called backfire effect: when people are exposed to opinions from the opposite side, this can actually reinforce their original attitude. That is all the more true of people with strong (political) beliefs. When we hear positions that do not fit our worldview, we immediately look for counterarguments – which, in turn, reinforce our own view.