Part of what makes us human is the unique ways we think and solve problems. But using large language models like ChatGPT might be eroding this uniqueness and leading humans to think and communicate the same way, according to a group of scientists and psychologists who have co-authored a new opinion paper.
“Individuals differ in how they write, reason, and view the world,” Zhivar Sourati, a computer scientist at the University of Southern California and first author of the paper, said in a statement.
“When these differences are mediated by the same LLMs, their distinct linguistic style, perspective and reasoning strategies become homogenized, producing standardized expressions and thoughts across users,” Sourati continued.
The paper, published Wednesday in the journal Trends in Cognitive Sciences, examines how hundreds of millions of people worldwide use the same handful of chatbots and what that means for our individuality.
Thinking inside the box
The Pew Research Center found that one-third of all Americans used ChatGPT last year, double the 2023 figure. And chatbot use is much more common among teens: Two-thirds say they use chatbots, and almost a third use them daily.
Businesses are also going all in on artificial intelligence. Stanford found that 78% of organizations reported using AI in 2024, up from 55% in 2023.
So we’re using AI a lot. But the danger is that we could lose the diversity in the ways we think. The team points out that LLMs generate writing that varies less than what people come up with on their own.
Part of the reason LLMs may be pushing homogenized thought, according to the paper’s authors, is the data used to train them.
“Because LLMs are trained to capture and reproduce statistical regularities in their training data, which often overrepresent dominant languages and ideologies, their outputs often mirror a narrow and skewed slice of human experience,” Sourati says.
Why diverse thinking matters
There’s a good reason why the authors warn against this trend. Homogenized thought reduces pluralism, which is essentially the idea that multiple perspectives are good for society as a whole.
“This value of pluralism is rooted in the long-held principle that sound judgment requires exposure to varied thought,” the authors write in the paper. “Unchecked, this homogenization risks flattening the cognitive landscapes that drive collective intelligence and adaptability.”
In other words, diverse ways of thinking help us generate more solutions to a problem. If we lose the ability to think and communicate differently, it could affect how we adapt to new situations.
“The concern is not just that LLMs shape how people write or speak, but that they subtly redefine what counts as credible speech, correct perspective, or even good reasoning,” Sourati says.
The authors also say that this trend even impacts people who don’t use chatbots.
“If a lot of people around me are thinking and speaking in a certain way, and I do things differently, I would feel a pressure to align with them, because it would seem like a more credible or socially acceptable way of expressing my ideas,” Sourati says.
