05.02.2025 Technology

Artificial intelligence reduces beliefs in conspiracy theories

ChatGPT-style system created at MIT diminishes conspiratorial beliefs for at least two months after interaction

DebunkBot engages users in personalized conversations, challenging conspiratorial beliefs with facts and scientific evidence | Image: Shutterstock

Even though they often seem difficult to believe, many conspiracy theories attract (and convince) a large number of people. These types of beliefs, a major concern of our time, have been the focus of various academic studies due to their potential to undermine efforts such as vaccination and the fight against climate change.

It is often said that people adopt these beliefs to satisfy underlying needs or motivations, and that their minds therefore cannot be changed by facts or evidence.

Researchers at the Massachusetts Institute of Technology (MIT) and Cornell University in the USA decided to challenge this assumption using a ChatGPT-style tool dubbed DebunkBot.

The aim was to use an automated system to present evidence that would convince people that their beliefs were unfounded. The results of the study were published in the journal Science.

“We hypothesize that fact-based interventions may appear to fall short due to a lack of depth and personalization,” the authors wrote.

To test this hypothesis, they made use of advances in large language models (LLMs), a form of artificial intelligence (AI) used in applications such as ChatGPT.

With access to large volumes of information, the technology is also capable of generating tailored arguments for each user.

The tool can even directly refute evidence cited by study participants to support their conspiratorial beliefs.
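To make the mechanics concrete, here is a minimal sketch of how a DebunkBot-style exchange could be wired up with a general-purpose chat-completion API. This is an illustration only, not the researchers’ code: the client library, the placeholder model name, and the prompt wording are all assumptions.

```python
# Illustrative sketch of a personalized debunking conversation (not the study's code).
# Assumes the OpenAI Python client (>=1.0); the model name is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "The user believes a conspiracy theory. Respond politely and factually to the "
    "specific claims and evidence they cite, and try to reduce their confidence "
    "in the theory without ridiculing them."
)

def debunking_round(history: list[dict], user_message: str) -> str:
    """Run one conversational turn, keeping earlier turns as context for personalization."""
    history.append({"role": "user", "content": user_message})
    reply = client.chat.completions.create(
        model="gpt-4o",  # placeholder; the article only describes a ChatGPT-style model
        messages=[{"role": "system", "content": SYSTEM_PROMPT}] + history,
    )
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

# Three rounds of conversation, mirroring the study design described below.
history: list[dict] = []
opening = "The Moon landing was staged; the flag waves even though there is no air."
print(debunking_round(history, opening))
for _ in range(2):
    print(debunking_round(history, input("Your reply: ")))
```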

Personalized chat

To test the chatbot, the researchers recruited 2,190 volunteers, who wrote in their own words about a conspiracy theory they believed in and offered evidence that would support their belief.

They then held three rounds of conversation with the AI tool, which was programmed to respond specifically to the evidence provided by the volunteer while trying to reduce their belief in the theory. Before and after this interaction, the participants answered a questionnaire about how much they believed in the theory on a scale of 0 (definitely false) to 100 (definitely true).

The research team found that the participants’ belief in their chosen conspiracy theory was reduced by an average of 20%. The effect lasted for at least two months.
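As a back-of-the-envelope illustration of how a figure like this can be computed from the 0-to-100 ratings collected before and after the conversation, consider the short sketch below. The numbers are invented and the aggregation is a simplification; the paper’s own analysis may differ.

```python
# Illustrative only: average relative reduction in belief on a 0-100 scale.
# The ratings are made up; the actual study analyzed 2,190 participants.
pre  = [85, 70, 90, 60]   # belief rating before talking to the chatbot
post = [60, 55, 75, 50]   # belief rating after the three conversation rounds

# Percent change per participant, then averaged (sign flipped so a drop is positive).
changes = [(after - before) / before * 100 for before, after in zip(pre, post)]
average_reduction = -sum(changes) / len(changes)

print(f"Average reduction in belief: {average_reduction:.1f}%")
```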

Participants presented a wide range of conspiracy theories. Since the study took place in the US, they ranged from classics involving the assassination of John F. Kennedy and the existence of aliens and the Illuminati, to recent events such as the COVID-19 pandemic and the 2020 presidential elections, in which Donald Trump was defeated by Joe Biden.

The effect was observed even in participants with deep-rooted beliefs deemed important to their identity.

Another significant result was that the tool did not reduce belief in real conspiracies, such as Operation Northwoods, a 1962 plan drawn up by the US Joint Chiefs of Staff but never carried out, and MKUltra, a covert program that the US Central Intelligence Agency (CIA) actually ran from the 1950s to the 1970s.

A human reviewer then analyzed a sample of 128 arguments made by the AI, finding that 99.2% were true, 0.8% could be misleading, and none were false.

The tool went even further, reducing belief in conspiracy theories unrelated to the ones the volunteers presented, “indicating an overall decrease in conspiratorial worldview and an increase in intentions to rebut other conspiracy believers,” the scientists wrote.

“Many people with seemingly fact-resistant conspiracy beliefs can change their minds when presented with compelling evidence,” say the authors.

“From a theoretical perspective, this paints a picture of human reasoning that is surprisingly optimistic: even the deepest of rabbit holes may have an exit,” the authors conclude.

In practical terms, the study emphasizes the potential positive impact of AI when used responsibly, as well as the importance of minimizing opportunities for the technology to be used in harmful ways.

* This article may be republished online under the CC-BY-NC-ND Creative Commons license.
The text must not be edited and the author(s) and source (Science Arena) must be credited.
