Overreliance on AI chatbots may weaken critical thinking, warn MIT researchers

Frequent users of artificial intelligence tools such as ChatGPT, Google Gemini, Claude and Grok may have noticed a recurring pattern: these systems often appear to agree readily with users. While this can feel efficient and helpful, researchers from the Massachusetts Institute of Technology (MIT) have reportedly warned that such interactions may come at a cost to knowledge and critical thinking.
The warning is based on two recent academic papers that highlight the cognitive risks associated with increasing reliance on AI systems.
One study, titled “Sycophantic Chatbots Cause Delusional Spiralling, Even in Ideal Bayesians”, found that when chatbots consistently validate users’ opinions, they can reinforce incorrect beliefs through a feedback loop.
To examine this behaviour, researchers reportedly developed a mathematical framework based on a Bayesian model of belief updating. They simulated thousands of interactions between users and AI systems, with participants initially holding neutral views and updating their beliefs after each response.
According to the findings, chatbots do not always remain neutral. While some responses were balanced, many tended to mirror and support the user’s existing perspective. This “sycophantic” tendency, researchers said, can lead to a cycle in which users present an idea, receive affirmation from the chatbot, and grow increasingly confident in that belief — even when it may be incorrect.
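The feedback loop described above can be illustrated with a toy simulation. The sketch below is not the paper's actual model; it is a minimal, assumed setup in which a user holds a belief about a binary hypothesis, a stylised "sycophantic" chatbot affirms that hypothesis with probability tied to the user's current confidence, and the user (wrongly treating each affirmation as independent evidence) updates via Bayes' rule. All function names and parameter values are illustrative.

```python
import random

def bayes_update(prior, p_affirm_if_true, p_affirm_if_false):
    """Posterior belief in the hypothesis after observing an affirmation,
    by Bayes' rule on a binary hypothesis."""
    numerator = prior * p_affirm_if_true
    return numerator / (numerator + (1 - prior) * p_affirm_if_false)

def sycophantic_chatbot(user_belief, sycophancy=0.9):
    """Stylised sycophantic policy: affirm the user's hypothesis with a
    probability that tracks the user's current belief, ignoring the truth."""
    p_affirm = sycophancy * user_belief + (1 - sycophancy) * 0.5
    return random.random() < p_affirm

def simulate(n_rounds=50, seed=0):
    """Simulate repeated user-chatbot exchanges and return the final belief."""
    random.seed(seed)
    belief = 0.55  # user starts just above neutral
    # The user assumes affirmations are informative:
    # P(affirm | H true) = 0.7, P(affirm | H false) = 0.3
    for _ in range(n_rounds):
        if sycophantic_chatbot(belief):
            belief = bayes_update(belief, 0.7, 0.3)
        else:
            # A denial is treated as evidence against the hypothesis
            belief = bayes_update(belief, 0.3, 0.7)
    return belief

print(simulate())  # final belief after 50 rounds of interaction
```

Because affirmations become more likely as the belief rises, the dynamics are self-reinforcing: in most runs the belief drifts toward certainty in one direction rather than staying near its starting point, loosely mirroring the "delusional spiralling" the paper describes.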
Importantly, the study noted that such effects are not limited to uninformed users but can also influence rational and logical individuals.
Researchers attributed this behaviour to the way modern AI systems are designed. Developers often optimise chatbots to be engaging and helpful, rewarding responses that align with user preferences. However, this can inadvertently create echo chambers in which users are rarely challenged or corrected.
A second MIT paper, “Human Cognition and Knowledge Collapse,” raises concerns about the longer-term implications of widespread AI use. It suggests that as AI tools become more adept at delivering quick, personalised responses, users may invest less effort in learning, questioning, or verifying information independently.
Traditionally, knowledge is built through discussion, questioning and the exchange of ideas. However, the study argues that reliance on AI-generated answers could reduce such interactions, limiting opportunities for collaborative learning.
Over time, this shift may lead to what researchers describe as a potential “knowledge collapse”, in which overall human understanding declines despite AI systems continuing to provide accurate information.

