AI Tool Predicts Online Health Misinformation Impact

The deluge of online misinformation can lead to negative public health outcomes. The connection between exposure to health-conspiracy posts and subsequent public behavior is apparent but scientifically difficult to quantify. A new AI-driven analytical method has shown that certain language patterns in misinformation posts on Reddit can predict people's rejection of COVID vaccinations.
Establishing clear linkages between misinformation and adverse outcomes has proven difficult due to the complexities of public health systems and the limited access to data from social media companies. However, Reddit stands out as an exception, allowing independent researchers to analyze its data. This openness is helping scientists get closer to identifying the missing link between misinformation and its real-world effects.
A new analytical framework that merges social psychology with the computational power of a large language model (LLM) aims to bridge the gap between online rhetoric and real-world behavior. These findings were recently shared on the preprint server arXiv.org and presented at the Association for Computing Machinery CHI Conference on Human Factors in Computing Systems in Hawaii.
How the New AI Tool Works
Eugenia Rho, a computer scientist at Virginia Tech and the senior author of the new study, aimed to determine if there is a connection between people's behavior and the type of language they encounter on platforms like Reddit.
Together with her Ph.D. student Xiaohan Ding and their colleagues, Rho began by collecting thousands of Reddit posts from banned forums that opposed vaccines and COVID prevention measures. They then trained an LLM to grasp the gist of each post, capturing the underlying meaning rather than just the literal words. "That's sort of the secret sauce here," says Valerie Reyna, a psychologist at Cornell University and a co-author of the study.
"Fuzzy-trace theory" posits that people focus more on the implications of information than on its literal meaning. This explains why anecdotal stories about crime are more memorable than dry statistics. "People are more moved by certain kinds of messages than others," says Reyna, who helped pioneer fuzzy-trace theory in the 1990s.
Such strategic wording enhances persuasiveness. "Over and over and over, studies show that language in the form of a gist is stickier," Rho says. Her team's analysis revealed that on social media, this is particularly true for causal gists, which imply a direct link between events.
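To make the idea of a "causal gist" concrete, here is a toy keyword heuristic. It is only an illustrative stand-in: the study trained an LLM to infer causal meaning, and the cue words below are hypothetical examples, not the researchers' criteria.

```python
import re

# Hypothetical surface cues of a causal claim ("X happened because of Y").
# A real system would infer the implied cause-and-effect link, not match keywords.
CAUSAL_CUES = re.compile(
    r"\b(because|since|caused|led to|after (?:my|the)|made me)\b",
    re.IGNORECASE,
)

def has_causal_gist(post: str) -> bool:
    """Return True if the post contains surface cues of a causal claim."""
    return bool(CAUSAL_CUES.search(post))

posts = [
    "Had my Pfizer jab last Wednesday and have felt like death since",
    "Vaccination sites open downtown this weekend",
]
flags = [has_causal_gist(p) for p in posts]  # first post flagged, second not
```

The gap between this keyword matcher and what the study actually did is the point of using an LLM: many causal gists ("felt like death since") carry their implication in context rather than in any single trigger word.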
For instance, a Reddit post stating, "Had my Pfizer jab last [Wednesday] and have felt like death since," uses language that has a strong rhetorical impact. Rho's team discovered that stronger causal gists in anti-COVID posts correlated with spikes in COVID hospitalizations and deaths nationwide, even after the Reddit forums were banned. Their data was drawn from nearly 80,000 posts across 20 subreddits active between May 2020 and October 2021.
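The reported relationship between causal-gist strength and health outcomes can be sketched as a simple correlation over time series. The numbers below are invented for illustration; they are not the study's data, and the study's actual analysis was more sophisticated than a single Pearson coefficient.

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical weekly series (made-up numbers, for illustration only):
gist_strength = [0.2, 0.5, 0.9, 0.7, 0.3]     # share of posts with strong causal gists
hospitalizations = [120, 180, 300, 260, 150]  # COVID hospital admissions, lagged

r = pearson(gist_strength, hospitalizations)  # strongly positive in this toy data
```

In practice the team worked with far larger series drawn from roughly 80,000 posts, and a correlation like this cannot by itself establish which way the influence runs, a limitation discussed below.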
By employing this newly developed framework to monitor social media activity, scientists could potentially predict real-world health outcomes during future pandemics or other significant events, such as elections. "In principle, it can be applied to any context in which decisions are made," says Reyna.
Psychologists Praise the Study’s Innovative Analytical Approach
However, this framework might not be equally effective in all scenarios. "When there is no discernible gist, the approach might be less successful," notes Christopher Wolfe, a cognitive psychologist at Miami University in Ohio, who was not involved in the study. This limitation could apply to contexts like understanding the behavior of people seeking treatment for common health issues, such as breast cancer, or studying sporadic, ephemeral events like auroras.
The approach does not necessarily specify the exact nature of the cause-and-effect relationship. “It seems that gists from social media may predict health decisions and outcomes, but the reverse is true as well,” says Rebecca Weldon, a cognitive psychologist at SUNY Polytechnic Institute, who did not contribute to the new research. This suggests that the relationship between social media rhetoric and real-world behavior might be more of a feedback loop, with each one reinforcing the other.
Both Wolfe and Weldon commended the authors for their innovative analytical approach. Wolfe describes the framework as a potential “game changer” for navigating complex online information ecosystems. Rho’s team hopes their framework can help social media companies and public health officials collaborate on more effective content moderation strategies. Identifying the types of misinformation most likely to influence behavior is a crucial first step in combating it.
Read More
The Lucrative Business of COVID-19 Misinformation Uncovered by Tax Records
Misleading Information Deprived Millions of Children of Vaccination