Biases make people vulnerable to misinformation spread by social media

The following essay is reprinted with permission from The Conversation, an online publication covering the latest research.

Social media are one of the primary sources of news in the United States and across the world. Yet users are exposed to content of questionable accuracy, including conspiracy theories, clickbait, hyperpartisan content, pseudoscience and even fabricated “fake news” reports.

It’s no wonder there’s so much misinformation published: spam and online fraud are lucrative for criminals, and government and political propaganda yield both partisan and financial benefits. But the fact that low-credibility content spreads so quickly and easily suggests that the people and the algorithms behind social media platforms are vulnerable to manipulation.


Our research has identified three types of bias that make the social media ecosystem vulnerable to both intentional and accidental misinformation. That is why our Observatory on Social Media at Indiana University is building tools to help people become aware of these biases and protect themselves from outside attempts designed to exploit them.

Bias in the brain

Cognitive biases originate in the way the brain processes information that each person encounters every day. The brain can only process a finite amount of information, and too many incoming stimuli can cause information overload. This in itself has serious implications for the quality of information on social media. We have found that high competition for limited user attention means that some ideas go viral despite their low quality—even when people prefer to share high-quality content.*

To avoid becoming overwhelmed, the brain uses a number of shortcuts. These methods are usually effective, but they can also become biases when applied in the wrong contexts.

One cognitive shortcut kicks in when a person is deciding whether to share a story that appears on their social media feed. People are heavily affected by the emotional connotations of a headline, even though that is not a good indicator of an article’s accuracy. Much more important is who wrote the piece.

To counter this bias and help people pay more attention to the source of a claim before sharing it, we developed Fakey, a mobile news literacy game (free on Android and iOS) that simulates a typical social media news feed, with a mix of news articles from mainstream and low-credibility sources. Players earn more points for sharing news from reliable sources and for flagging suspicious content for fact-checking. In the process, they learn to recognize signals of source credibility, such as hyperpartisan claims and emotionally charged headlines.
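As a rough illustration of that incentive structure, the sketch below scores a player’s choices with hypothetical point values; the real app’s rules are more involved, and the score_action helper is invented for this example.

```python
# Hypothetical scoring logic for a game of this kind, not Fakey's actual rules:
# reward sharing from credible sources and flagging dubious ones, penalize the
# opposite. The point values and the score_action helper are invented.
def score_action(source_is_credible: bool, action: str) -> int:
    """Return points earned for one action on one news item."""
    if action == "share":
        return 10 if source_is_credible else -10
    if action == "fact_check":
        return 10 if not source_is_credible else -5
    return 0  # skipping an item earns nothing

print(score_action(True, "share"))        # +10: shared a trustworthy article
print(score_action(False, "fact_check"))  # +10: flagged a low-credibility one
print(score_action(False, "share"))       # -10: spread questionable content
```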

Bias in society

Another source of bias comes from society. When people connect directly with their peers, the social biases that guide their selection of friends come to influence the information they see.

In fact, in our research we have found that it is possible to determine a Twitter user’s political leanings by simply looking at the partisan preferences of their friends. Our analysis of the structure of these partisan communication networks found that social networks are particularly efficient at disseminating information – accurate or not – when they are closely tied together and disconnected from other parts of society.
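A minimal sketch of the intuition, not the published analysis, is to represent each friend’s leaning as a number and predict the user’s leaning as the average; all scores and account names below are invented.

```python
# Minimal sketch: friends' leanings run from -1 (left) to +1 (right);
# a user's leaning is guessed as the mean of their friends' scores.
from statistics import mean

friend_leanings = {
    "user_a": [-0.8, -0.6, -0.9, -0.7],   # mostly left-leaning friends
    "user_b": [0.7, 0.9, 0.6, 0.8, 0.5],  # mostly right-leaning friends
}

def estimate_leaning(scores):
    """Guess a user's leaning from the average leaning of their friends."""
    return mean(scores)

for user, scores in friend_leanings.items():
    print(user, round(estimate_leaning(scores), 2))
# user_a -0.75   user_b 0.7
```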

The tendency to evaluate information more favorably if it comes from within one’s own social circle creates “echo chambers” that are ripe for manipulation, either consciously or unintentionally. This helps explain why so many online conversations devolve into “us versus them” confrontations.

To investigate how the structure of online social networks makes users vulnerable to misinformation, we built Hoaxy, a system that tracks and visualizes the spread of content from low-credibility sources, and how it competes with fact-checking content. Our analysis of the data collected by Hoaxy during the 2016 US presidential elections shows that Twitter accounts that shared misinformation were almost completely cut off from the corrections made by fact-checkers.

When we dug deeper into accounts spreading misinformation, we found a very dense core of accounts retweeting each other almost exclusively – including several bots. The only times fact-checking organizations were cited or mentioned by users in the misinformed group were when they questioned their legitimacy or claimed the opposite of what they wrote.
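One way to surface such a dense core, shown here as an illustrative sketch rather than Hoaxy’s actual pipeline, is a k-core decomposition of the retweet network; the account names and edges below are made up.

```python
# Illustrative sketch only: use a k-core decomposition of a retweet network
# to find a densely interconnected cluster of accounts.
import networkx as nx

# Directed retweet edges: (retweeter, account being retweeted)
retweets = [
    ("bot_1", "bot_2"), ("bot_2", "bot_1"),
    ("bot_1", "bot_3"), ("bot_3", "bot_1"),
    ("bot_2", "bot_3"), ("bot_3", "bot_2"),
    ("casual_user", "bot_1"),
]

G = nx.DiGraph(retweets)

# The k-core is defined on undirected graphs, so collapse edge direction first.
core_numbers = nx.core_number(G.to_undirected())
dense_core = sorted(node for node, k in core_numbers.items() if k >= 2)
print("Densely retweeting core:", dense_core)  # the three bot accounts
```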

Bias in the machine

The third group of biases arises directly from the algorithms used to determine what people see online. Both social media platforms and search engines employ them. These personalization technologies are designed to select only the most engaging and relevant content for each individual user. But in doing so, they can end up reinforcing users’ cognitive and social biases, making them even more vulnerable to manipulation.

For example, the detailed advertising tools built into many social media platforms let disinformation campaigners exploit confirmation bias by tailoring messages to people who are already inclined to believe them.

Additionally, if a user often clicks on Facebook links from a particular news source, Facebook will tend to show that person more of that site’s content. This so-called “filter bubble” effect may isolate people from diverse perspectives, strengthening confirmation bias.
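A toy simulation can make this feedback loop concrete; the ranking rule, weights and source names below are invented and are not Facebook’s actual algorithm.

```python
# Toy feedback loop: every click on a source raises the weight used to pick
# the next story, so the feed drifts toward what the user already engages with.
import random

weights = {"siteA": 1.0, "siteB": 1.0, "siteC": 1.0}

def pick_story(weights):
    sources, w = zip(*weights.items())
    return random.choices(sources, weights=w)[0]

random.seed(42)
for _ in range(200):
    shown = pick_story(weights)
    if shown == "siteA":        # this user only ever clicks siteA stories
        weights[shown] += 0.5   # the ranker rewards the engagement

print(weights)  # siteA's weight now dwarfs the other two sources
```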

Our own research shows that social media platforms expose users to a less diverse set of sources than non-social-media sites like Wikipedia. Because this occurs at the level of a whole platform, not of a single user, we call it the homogeneity bias.
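One simple way to quantify this kind of diversity, offered only as an illustrative sketch rather than the measure used in our study, is the Shannon entropy of the sources a user is exposed to; the exposure lists below are made up.

```python
# Sketch: quantify the diversity of a news diet as the Shannon entropy of the
# sources it contains. Lower entropy means a more homogeneous mix.
from collections import Counter
from math import log2

def source_entropy(exposures):
    """Shannon entropy (in bits) of a list of source names."""
    counts = Counter(exposures)
    total = sum(counts.values())
    return -sum(c / total * log2(c / total) for c in counts.values())

social_feed = ["siteA"] * 8 + ["siteB"] * 2               # dominated by one outlet
broad_sample = ["siteA", "siteB", "siteC", "siteD", "siteE"] * 2

print(round(source_entropy(social_feed), 2))   # about 0.72 bits
print(round(source_entropy(broad_sample), 2))  # about 2.32 bits
```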

Another important ingredient of social media is information that is trending on the platform, according to what is getting the most clicks. We call this popularity bias, because we have found that an algorithm designed to promote popular content may negatively affect the overall quality of information on the platform. This also feeds into existing cognitive biases, reinforcing what appears to be popular irrespective of its quality.
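The sketch below shows the mechanism in its simplest possible form: a hypothetical feed ranked purely by clicks, with an invented “quality” field the ranker never sees.

```python
# Hypothetical feed, not any platform's real algorithm: rank items purely by
# clicks. The "quality" field is invented and invisible to the ranker.
posts = [
    {"title": "Careful investigative report", "clicks": 120,  "quality": 0.9},
    {"title": "Outrage-bait rumor",           "clicks": 5400, "quality": 0.1},
    {"title": "Balanced explainer",           "clicks": 300,  "quality": 0.8},
]

trending = sorted(posts, key=lambda p: p["clicks"], reverse=True)
for post in trending:
    print(post["clicks"], post["title"])
# The low-quality rumor tops the feed, which in turn earns it still more clicks.
```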

All of these algorithmic biases can be manipulated by social bots, computer programs that interact with humans through social media accounts. Most social bots, like Twitter’s Big Ben, are harmless. However, some conceal their real nature and are used for malicious purposes, such as boosting disinformation or falsely creating the appearance of a grassroots movement, also called “astroturfing.” We have found evidence of this type of manipulation in the run-up to the 2010 US midterm elections.

To study these manipulation strategies, we developed a social bot detection tool called Botometer. Botometer uses machine learning to detect bot accounts, inspecting thousands of different features of a Twitter account, such as the times of its posts, how often it tweets, and the accounts it follows and retweets. It is not perfect, but it has revealed that as many as 15 percent of Twitter accounts show signs of being bots.
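As a hedged illustration of the general approach, rather than Botometer’s actual feature set or model, the sketch below trains a small classifier on a handful of synthetic account features.

```python
# Illustration only: train a classifier on a few invented account features.
from sklearn.ensemble import RandomForestClassifier

# Per-account features (all values synthetic): [tweets per day,
# followers-to-friends ratio, share of overnight posts, fraction of retweets]
X = [
    [3,   1.2, 0.10, 0.30],   # typical human account
    [5,   0.8, 0.15, 0.40],   # typical human account
    [400, 0.1, 0.50, 0.95],   # high-volume, retweet-heavy account
    [250, 0.2, 0.45, 0.90],   # high-volume, retweet-heavy account
]
y = [0, 0, 1, 1]              # 0 = human, 1 = bot (synthetic labels)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

suspect = [[300, 0.15, 0.48, 0.92]]
print("Estimated bot probability:", model.predict_proba(suspect)[0][1])
```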

Using Botometer in conjunction with Hoaxy, we analyzed the core of the misinformation network during the 2016 US presidential campaign. We found many bots exploiting both the cognitive, confirmation and popularity biases of their victims and Twitter’s algorithmic biases.

These bots are able to build filter bubbles around vulnerable users, feeding them false claims and misinformation. First, they can grab the attention of human users who support a particular candidate by tweeting that candidate’s hashtags or mentioning and retweeting the person. Then the bots can amplify false claims smearing opponents by retweeting articles from low-credibility sources that match certain keywords. This activity also makes the algorithm highlight for other users false stories that are being shared widely.

Understanding complex vulnerabilities

Although our research, and that of others, shows how individuals, institutions and even whole societies can be manipulated on social media, many questions remain to be answered. It is especially important to discover how these different biases interact with each other, potentially creating more complex vulnerabilities.

Tools like ours offer internet users more information about misinformation, and therefore some degree of protection from its harms. The solutions will likely not be only technological, though there will probably be some technical aspects to them. They must also take into account the cognitive and social aspects of the problem.

*Editor’s note: This article was updated on January 10, 2019, to remove a link to a study that has been retracted. The text of the article remains accurate and unchanged.

This article was originally published on The Conversation. Read the original article.