Can trust in social media be improved?

UNIVERSITY PARK, Pennsylvania — More than two-thirds of Americans get their news from social media sites, according to a 2018 Pew Research Center study. But more than half of those who read information on social media expect it to be inaccurate.

Penn State researchers are working to improve the prediction of people’s trust in what they read online. In a new project, they will advance state-of-the-art machine learning methods to model the psychological phenomenon known as memory illusion, the errors people make when remembering, interpreting and drawing inferences from past experiences and knowledge. The researchers will use these patterns to determine why some people fall for false information.

“When we read a news story, we recode it to understand it based on our previous experience, which can activate other elements associated with the news,” said Aiping Xiong, an assistant professor in the College of Information Sciences and Technology and the project’s principal investigator. “Later, when people see other news that is false but presented in a way consistent with what they previously inferred, they will more easily believe that information is true. This is the illusion that we’re talking about here.”

“The memory illusion has been demonstrated in small-scale psychology labs, but few have studied it in the real world,” added Dongwon Lee, associate professor in the College of Information Sciences and Technology and a collaborator on the project. “We realized that the computational approach to this issue of trust is actively studied by many data scientists, but the human side is less studied. Given the accessibility of social media data, we thought we could model the phenomenon of memory illusion, specifically associative inferences, and then see if this phenomenon can help explain how some people become gullible to what they read on social media.”

The researchers will use data from Twitter and data-driven machine learning models to characterize the conclusions people reach and to understand how social media posts contribute to people’s trust in what they read. Next, they plan to conduct both lab and online studies with users to determine whether there is a causal relationship between those inferences and trust in false information.

Ultimately, they hope to incorporate associative inferences into existing machine learning approaches, adding a human information processing perspective to better measure trust in information.

“The computational solution is basically that, given the news in question, you collect many clues, and a mathematical model will collectively tell you whether it’s likely to be false,” Lee said. “People use a variety of approaches to collect these clues: they look at the content, they look at who wrote it, or they look at how it reached them. But none of them use this particular index of associative inferences.”

He added: “So if this memory illusion holds true, we can incorporate it into existing computational models and help better detect false information.”
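To make that idea concrete, here is a minimal sketch, in Python with scikit-learn, of the kind of feature-based model Lee describes, with an associative-inference score appended as one additional clue. All feature names, values and labels below are hypothetical illustrations, not the project’s actual data or model.

```python
# Minimal sketch of the feature-based detection approach Lee describes:
# each news item is summarized as a vector of "clues" (content, source,
# propagation), and a classifier combines them into one falsehood score.
# Every feature name, value and label here is a hypothetical illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training set: one row per news item.
# Columns: [content_sensationalism, source_credibility, share_velocity]
X_baseline = np.array([
    [0.9, 0.1, 0.8],  # sensational, low-credibility source, fast-spreading
    [0.2, 0.9, 0.3],  # measured, credible source, slow-spreading
    [0.8, 0.2, 0.9],
    [0.1, 0.8, 0.2],
])
y = np.array([1, 0, 1, 0])  # 1 = false, 0 = true (made-up labels)

# The proposed extension: append an associative-inference score, e.g. how
# strongly the item matches conclusions readers previously inferred from
# related posts. Computing this score is the project's open question.
assoc_inference = np.array([[0.7], [0.1], [0.9], [0.2]])
X_extended = np.hstack([X_baseline, assoc_inference])

model = LogisticRegression().fit(X_extended, y)

# Score a new, unseen item (again with made-up feature values).
new_item = np.array([[0.6, 0.4, 0.7, 0.8]])
print("Estimated probability the item is false:",
      model.predict_proba(new_item)[0, 1])
```

The design point is simply that the memory-illusion signal would enter as one more column alongside the content, source and propagation clues that existing detectors already weigh.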

The researchers bring complementary, interdisciplinary expertise to the project: Lee has a background in data science, while Xiong will draw on her background in human factors and psychology.

“We want to combine both sides to solve this problem,” Xiong said. “That distinguishes our project from work that focuses solely on data science and automatically detecting misinformation on social media platforms.”

“We are at the forefront of using this collaborative, interdisciplinary method to address a very complicated but socially impactful phenomenon,” Lee added.

The researchers hope to help counter the widespread misinformation on social media that has affected conversations around the globe, from the 2016 U.S. presidential election and the Brexit referendum to climate change and vaccines.

“The impacts span the spectrum of people’s daily lives,” Xiong concluded. “We want to invest the time and effort to hopefully make progress on this issue.”

Their work is funded by recent grants from the Penn State Institute for Social Science Research and the National Science Foundation.