Large language models (LLMs), such as the model underpinning the well-known conversational platform ChatGPT, have proved very promising for summarizing and generating written texts. They could also serve as valuable tools for conducting research rooted in psychology, behavioral science and other scientific disciplines.
Researchers at Indiana University have recently used LLMs to study the intricate and nuanced landscape of human beliefs by analyzing debates between internet users on online platforms. Their proposed methodology, outlined in an article in Nature Human Behaviour, allowed them to create a detailed map of human beliefs, unveiling patterns hinting at polarization (i.e., extreme divisions between groups with opposing viewpoints) and cognitive dissonance (i.e., the discomfort felt when exposed to beliefs conflicting with our own).
“My fundamental research goal is to understand why people engage in certain behaviors, utilizing data and AI/NLP (Natural Language Processing),” Jisun An, senior author of the paper, told Phys.org.
“In pursuing this, I came to realize that beliefs lie at the heart of human actions, as they profoundly influence our decision-making and behaviors. Furthermore, I noticed that language embedding spaces effectively preserve semantic meaning, and that recent Large Language Models (LLMs) contain a vast amount of information about language, knowledge, and people.”
Following the release of the first LLMs, An gradually became convinced that these advanced machine learning–based models could be used to study human beliefs and behavior. This was the primary inspiration behind her recent paper, which specifically analyzed beliefs expressed by people online.
“Our research proposes a novel methodology for constructing a ‘belief embedding space’ to understand the complex system of human beliefs,” explained An. “Simply put, this method involves arranging countless individual beliefs on a continuous, high-dimensional map.
“In this space, each belief occupies a unique ‘location,’ with semantically similar or related beliefs positioned closer to each other, and contrasting beliefs placed further apart. Think of it like a map of ideas, where ‘eating healthy is important’ might be close to ‘regular exercise improves well-being,’ while ‘junk food is fine every day’ would be very far away.”
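To make the geometry concrete, the snippet below embeds three such beliefs with the open-source sentence-transformers library and compares their pairwise cosine similarities. It is a minimal sketch: the off-the-shelf all-MiniLM-L6-v2 model stands in for the researchers' fine-tuned model, described next.

```python
# Minimal sketch of a belief embedding space, assuming the open-source
# sentence-transformers library. The base all-MiniLM model is a stand-in;
# the authors fine-tuned S-BERT on debate data rather than using an
# off-the-shelf model.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

beliefs = [
    "Eating healthy is important.",
    "Regular exercise improves well-being.",
    "Junk food is fine every day.",
]

# Each belief becomes a point in a high-dimensional space.
embeddings = model.encode(beliefs, convert_to_tensor=True)

# Cosine similarity: higher values mean beliefs sit closer on the "map".
sim = util.cos_sim(embeddings, embeddings)
print(sim)  # the two health-positive beliefs should score higher together
```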
An and her colleagues created their “human belief map” by fine-tuning S-BERT (Sentence-BERT), a specialized model for generating high-quality sentence embeddings and measuring semantic similarity. Despite being a relatively smaller model, S-BERT is widely adopted for practical applications due to its efficiency and effectiveness.
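How might such fine-tuning look in practice? Below is a hypothetical sketch using the classic sentence-transformers training API with a triplet objective, which pulls an anchor statement toward a belief its holder endorsed and away from one they opposed. The triplet setup and the example data are assumptions for illustration, not the paper's exact training recipe.

```python
# Hypothetical fine-tuning sketch with the classic sentence-transformers
# training API. The triplet construction (anchor, endorsed belief, opposed
# belief) is an illustrative assumption, not the paper's exact recipe.
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("all-MiniLM-L6-v2")

# Hypothetical debate-derived triplet: a stance, a belief its holder
# endorsed, and a belief they argued against.
train_examples = [
    InputExample(texts=[
        "Vaccines save lives.",                # anchor
        "Public immunization programs work.",  # pulled closer
        "Vaccines are a hoax.",                # pushed away
    ]),
]

loader = DataLoader(train_examples, shuffle=True, batch_size=1)
loss = losses.TripletLoss(model=model)

# Pull endorsed beliefs together and push opposed ones apart.
model.fit(train_objectives=[(loader, loss)], epochs=1, warmup_steps=0)
```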
“While previous research often focused on specific topics or a limited number of beliefs, we leveraged the extensive language comprehension and vast knowledge embedded within LLMs to create a comprehensive map encompassing a much wider variety of beliefs,” said An.

“This belief embedding space goes beyond mere classification; it provides a powerful foundation for quantitatively analyzing the complex interplay among beliefs and how individuals accept or reject new information (i.e., the decision-making process).”
Using their LLM-based methods, the researchers were able to analyze a vast and diverse range of beliefs, which were previously difficult to map collectively within a single space. In addition, they could numerically calculate the semantic similarity or distance between specific beliefs, unveiling complex relationships between beliefs that are difficult to uncover using traditional qualitative research methods.
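As an illustration of what mapping many beliefs in a single space can mean operationally, the sketch below projects a handful of belief embeddings onto two dimensions with PCA so they can be plotted. The beliefs and the projection method are illustrative choices; the paper's analyses operate in the full high-dimensional embedding space.

```python
# Sketch of flattening the high-dimensional belief space into a 2-D "map"
# for inspection, assuming scikit-learn's PCA. Beliefs and projection
# method are illustrative, not the paper's analysis pipeline.
from sentence_transformers import SentenceTransformer
from sklearn.decomposition import PCA

model = SentenceTransformer("all-MiniLM-L6-v2")
beliefs = [
    "Climate change is human-caused.",
    "We must cut carbon emissions.",
    "Global warming is a natural cycle.",
    "Gun ownership is a basic right.",
    "Stricter gun laws reduce violence.",
]

embeddings = model.encode(beliefs)            # one 384-dim vector per belief
coords = PCA(n_components=2).fit_transform(embeddings)

# Related beliefs should land near each other on the 2-D map.
for belief, (x, y) in zip(beliefs, coords):
    print(f"({x:+.2f}, {y:+.2f})  {belief}")
```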
“Even as new beliefs emerge or societal changes occur, our method allows for continuous updating and expansion of the belief map using LLMs, ensuring it reflects evolving societal dynamics,” said An. “We showed that our methodology successfully utilizes LLMs to construct a sophisticated ‘map’ of human beliefs, or ‘belief embedding.’ This opens new avenues for systematically and quantitatively analyzing the intricate system of human beliefs and, in doing so, provides a foundation for qualitatively and systematically studying people’s decision-making processes.”
When they looked at the belief map created using LLMs, the researchers made some interesting discoveries. First, they found that “relative dissonance” significantly influences people’s decision-making. This essentially means that when online users encounter new information or beliefs, they tend to choose or accept those that cause them less “discomfort” or are most aligned with their existing beliefs.
“More importantly, we show that people’s belief choices are shaped not just by how close a belief is to their own, but by how much closer the belief is compared to its competing belief,” explained An. “When two opposing beliefs on a certain issue are equally distant, people are just as likely to choose either one. However, when one belief is clearly closer than the other, people are far more likely to choose it.”
An and her colleagues described the effect they observed when analyzing their belief map as "relative dissonance." The term captures the idea that people's decisions are influenced by the relative gap between beliefs that are closer to and further from their own. Specifically, the researchers found that the greater this gap, the stronger a person's preference for beliefs more aligned with their own.
“In other words, people are not only avoiding disagreement, but actively minimizing the difference in disagreement between available options,” said An. “This finding highlights that decision-making is not driven by absolute distance alone, but by the relative discomfort of accepting a belief that feels much further away, echoing key ideas from cognitive dissonance theory.”
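A toy model helps make "relative dissonance" concrete. In the hypothetical sketch below, the probability of choosing one of two competing beliefs depends on the gap between their distances from a person's current belief, via a logistic rule. The rule and its temperature parameter are assumptions for illustration, not the paper's fitted model.

```python
# Illustrative toy model of "relative dissonance": what matters is not just
# how far a candidate belief is from your own, but how much closer it is
# than its competitor. The logistic rule and temperature are assumptions,
# not the paper's fitted model.
import math

def choice_probability(d_a: float, d_b: float, temperature: float = 1.0) -> float:
    """Probability of choosing belief A over belief B, given the distances
    d_a and d_b from a person's current belief to each option. Driven by
    the relative gap (d_b - d_a), not the absolute distances."""
    gap = d_b - d_a  # positive when A is the closer, less dissonant option
    return 1.0 / (1.0 + math.exp(-gap / temperature))

print(choice_probability(0.5, 0.5))  # equally distant -> 0.5, a coin flip
print(choice_probability(0.2, 0.9))  # A much closer -> strong preference for A
```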
The findings of this recent study could have various implications. First, they provide an explanation for why some information is readily accepted by some people and strongly rejected by others, shedding new light on the processes underpinning the formation and maintenance of social perspectives.

“Our work also offers guidance on how messages should be constructed to be effectively delivered to a target audience, by carefully considering their existing beliefs,” said An. “It could also inform the refined design of policies or campaigns aimed at encouraging behavioral change in various fields, such as health or environmental initiatives, by better understanding the intricate interplay of beliefs.”
The new insights gathered by An and her colleagues could contribute to the development of behavioral science interventions aimed at encouraging people to make more responsible decisions that benefit their health, finances or the environment. Meanwhile, the researchers plan to continue using LLMs to study people's beliefs and online behavior.
“While our current study utilized limited data from Debate.org (DDO) to construct the belief map, our immediate project involves leveraging larger and more diverse social media datasets, such as Reddit, to build an even more detailed, richer, and real-world reflective belief map,” said An. “This will enable us to capture more subtle differences in individual beliefs and analyze belief interactions in various contexts more accurately.”
Once they have created this more refined map of human beliefs, the researchers intend to use it to design new studies and experiments. They would also like to connect their observations with the results of another project carried out at Indiana University, called BRAIN (Belief Resonance and AI Narratives).
“This new project will delve deeper into how an individual’s belief system interacts with new incoming information and the mechanisms by which that information is either accepted or rejected,” said An. “For example, we want to understand why a community might quickly embrace a new sustainable farming practice, while another, with different core beliefs about traditional methods, might strongly resist it.”
So far, An and her colleagues have used LLMs to analyze people’s comments and posts on popular social media platforms. In the future, they would also like to explore the relationship between the beliefs that people express online and their decision to join or leave specific online communities.
“We hope that these further studies will allow us to better understand how beliefs are connected to people’s actual behaviors, including social interactions and decision-making,” added An. “We believe this will make significant contributions to understanding and predicting various social phenomena.”
Written by Ingrid Fadelli, edited by Lisa Lock, and fact-checked and reviewed by Robert Egan.
More information:
Byunghwee Lee et al, A semantic embedding space based on large language models for modelling human beliefs, Nature Human Behaviour (2025). DOI: 10.1038/s41562-025-02228-z