The growing popularity of chatbots powered by large language models (LLMs) may influence the way people formulate their thoughts and express themselves. According to a team of researchers from computer science and psychology, such tools may gradually standardize language, argumentation styles and users’ reasoning patterns. The authors warn that if this process continues unchecked, it could reduce cognitive diversity – a factor considered important for creativity and societies’ ability to adapt.
The article takes the form of a scientific commentary and was published on March 11 in Trends in Cognitive Sciences. The authors analyze how widespread use of language models may affect communication and cognitive processes. They note that billions of users worldwide now rely on a relatively small number of AI systems to help generate text, analyze information and solve problems.
According to the paper’s lead author, computer scientist Zhivar Sourati of the University of Southern California, people differ widely in how they write, argue and interpret the world. However, when communication is mediated by the same language models, stylistic and cognitive differences may gradually become blurred. As a result, expressions and patterns of thinking may become more uniform.
The researchers point out that language models are trained on data that often overrepresent languages, values and argumentative styles typical of Western societies – often described in the literature as WEIRD (Western, Educated, Industrialized, Rich, Democratic). As a result, generated responses may reflect a relatively narrow slice of global cultural experience.
The publication also cites earlier research showing that texts generated by language models tend to be less stylistically diverse than texts written by humans. The authors note that when users rely on AI to edit or refine their writing, their texts often lose individual style, and the sense of creative autonomy may diminish.
The research team also highlights the possible influence of language models on thinking processes themselves. Such systems often promote linear reasoning patterns, for example "chain-of-thought" prompting, in which a model lays out its logic as an explicit step-by-step sequence before answering. According to the authors, this may discourage more intuitive or abstract approaches to problem solving, which in some situations can be more effective.
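To illustrate the pattern the authors refer to (this sketch is not from the article; the question and prompt wording are hypothetical examples), a chain-of-thought prompt differs from a direct prompt mainly in that it asks the model to enumerate its reasoning as linear steps:

```python
# Illustrative sketch of two prompting styles. The wording is an assumption
# for demonstration; it is not tied to any specific model or API.

def direct_prompt(question: str) -> str:
    """Ask for the answer alone, with no visible reasoning."""
    return f"Q: {question}\nA:"

def chain_of_thought_prompt(question: str) -> str:
    """Ask the model to lay out its logic as numbered, linear steps
    before giving the final answer -- the step-by-step pattern
    described in the article."""
    return (
        f"Q: {question}\n"
        "Let's think step by step, numbering each step, "
        "and then state the final answer.\nA:"
    )

question = "A train travels 120 km in 2 hours. What is its average speed?"
print(direct_prompt(question))
print(chain_of_thought_prompt(question))
```

The point of the contrast is structural: the second prompt nudges every user (and every model) toward the same linear, enumerated style of explanation, which is the kind of convergence the authors caution about.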
The authors argue that developers of AI systems should deliberately increase linguistic, cultural and cognitive diversity in the datasets used to train models. In their view, incorporating a wider range of perspectives and reasoning styles could not only help preserve human cognitive diversity but also improve the reasoning capabilities of AI systems themselves.

