There is broad bipartisan support among self-identified liberals and conservatives for social media companies adding warning labels to posts that contain misleading information, or that could lead to the spread of disinformation, according to a new study by researchers from Northeastern's College of Arts, Media and Design.
Much of the polling on content labeling has been conducted around the US presidential election. But the results of the national survey, released on Wednesday, could speak to new concerns about misinformation during the COVID-19 pandemic, particularly regarding vaccines and other health protocols, according to John Wihbey, associate professor of journalism and media innovation at Northeastern and co-author of the study.
“We’re in a new moment, in a new phase of the pandemic – a time when we can get a little more pure idea of what the public thinks about these issues,” Wihbey said.
In recent years, social media companies such as Twitter and Facebook have started labeling millions of posts as disinformation, including some from former President Donald Trump, who was permanently suspended from the platforms following the January 6 attack on the United States Capitol by his supporters.
Trump’s claims of widespread electoral fraud in last year’s presidential election, which were debunked, and the insurrection that followed sparked a heated debate over the responsibility of tech companies in policing the kinds of information users can share, including limiting or removing so-called fake news, hate speech and other content considered problematic.
Over the summer, the Northeastern research team surveyed more than 1,400 people in the United States through Prolific, an academic survey platform. Half of respondents said they use Twitter occasionally or more frequently, and 68% said they use Facebook occasionally or more frequently.
The survey was published in conjunction with Northeastern's Ethics Institute as part of a larger effort to examine potential new approaches to labeling content on social media platforms. The study's co-authors include Garrett Morrow, a doctoral student studying political science; Myojung Chung, assistant professor of journalism and media advocacy; and Mike Peacey, associate professor of economics.
The study found that 92.1% of liberals, 60.1% of conservatives and 78.4% of moderates “strongly or somewhat agree” that social media platforms should use labels to notify users of posts containing misleading information. Such labels are used both to flag erroneous information, as with Twitter’s “fact-checking” labels, and to warn users about potentially graphic or harmful posts, as with sensitive-media warnings.
Participants also reported that they often encounter “problematic content” — misleading or incorrect information and hate speech — while using social platforms. The researchers did not attempt to define misinformation or problematic content in the study, says Wihbey, preferring to rely on participants’ own perceptions of these issues in answering the survey questions.
The researchers also note that participants showed a high degree of “overconfidence bias”: they said they trusted their own ability to discern misleading statements and disinformation online, but expressed distrust of others’ ability to do the same.
The broad bipartisan agreement on labeling was somewhat surprising, says Wihbey, given how polarizing the question of content moderation was in the days following the election. Many conservatives opposed Trump’s ban, and bans in general, arguing that they amount to censorship.
But the study also confirmed some of these partisan differences of opinion on how best to approach content moderation, with 63.2% of conservatives saying that labeling Trump’s posts, rather than banning him, was enough to deal with his “violent messages.” That compares with more than 80% of liberals who thought tougher action was needed.
The study comes as governments try to exercise control over the moderation policies of tech giants. Just this week, Texas Governor Greg Abbott, a Republican, signed a bill requiring social media companies to disclose their content moderation policies and create an appeal process for banned users. Under the new law, users could sue companies to get their accounts reinstated. Florida approved a similar law earlier this year.
Democrats have also tried to influence corporate policies. Over the summer, President Joe Biden urged Facebook to take faster action against posts spreading misinformation about COVID-19, saying that the bad information circulating on the platform regarding the safety and effectiveness of vaccines “is killing people.”
“It is absolutely necessary to try to determine what tools and methods we need to use to combat misinformation and disinformation,” said Wihbey, summing up the sentiment of the survey. “At the same time, I think people don’t believe that closing accounts and disabling share buttons is the [only] path to follow.”
Wihbey says the study may indicate that the public is arriving at some kind of middle ground.
“We find that people want labels to link them to credible sources for verification, prepare them to think critically about disinformation, and slow the spread of disinformation by warning people about content they are trying to share,” the authors write.
For media inquiries, please contact [email protected]