Scientists just completed the first brain scan study of ChatGPT users. The results are terrifying.


As generative AI tools like ChatGPT become increasingly integrated into everyday life—from classrooms and college essays to workplace communication—the question isn’t just how they perform, but how we do when we use them. A new study from MIT’s Media Lab, the first to use brain scans to measure the effects of ChatGPT on human cognition, offers early and unsettling insights. Participants who relied on the chatbot to complete writing tasks showed consistently lower levels of brain activity, weaker memory integration, and diminishing creative engagement over time.


The study’s lead researcher, Nataliya Kosmyna, did not wait for peer review to share the findings. Her concern: as society rushes to embrace AI for its convenience, we may be unintentionally trading away the mental processes that make learning, critical thinking, and originality possible. And while the sample was small and the study is still in its early stages, the patterns it revealed raise urgent questions—not only about how we use AI, but also about how it may be reshaping us in return.

The Hidden Costs of Cognitive Offloading

The MIT study doesn’t just raise questions about momentary brain activity—it points toward a deeper and potentially more lasting concern: what happens when people begin to routinely outsource their thinking to AI? In psychology and neuroscience, this phenomenon is known as cognitive offloading—the act of using external tools to reduce the mental burden of processing or storing information. While cognitive offloading isn’t inherently negative (writing things down or using a calculator are common examples), its consequences become more complex when the offloading displaces not just memory, but also the effortful processes of reasoning, judgment, and idea generation.

In the context of the MIT experiment, participants in the ChatGPT group gradually stopped engaging with the writing task in any meaningful way. By their third essay, many had shifted from co-creating with the tool to fully delegating the task—inputting prompts, accepting AI-generated content, and performing only superficial edits. This behavioral drift reflects a broader pattern of disengagement that concerns educators and cognitive scientists alike: when tasks are perceived as “done” with minimal input, the brain has fewer incentives to activate the deeper neural circuits associated with learning and memory consolidation.

The implications for students and young adults are particularly acute. Developing brains are not only more plastic—capable of forming and reshaping connections more readily—but also more vulnerable to shortcuts that bypass those very developmental pathways. As Kosmyna notes, one of her primary motivations for publishing the results early was concern that tools like ChatGPT might be introduced into formal education settings without sufficient scrutiny. “What really motivated me to put it out now before waiting for a full peer review is that I am afraid in 6-8 months, there will be some policymaker who decides, ‘let’s do GPT kindergarten.’ I think that would be absolutely bad and detrimental,” she said in an interview. Her warning reflects a growing unease among experts that exposure to generative AI during critical learning periods could impair not just creativity, but also long-term academic development.

Memory was another area of clear decline. When the ChatGPT group was later asked to revise one of their earlier essays—this time without AI assistance—they struggled to recall the content and structure of their own work. EEG readings showed weaker activity in the alpha and theta bands, both of which are associated with memory recall and integration. This suggests that while the writing task was completed, it left little cognitive trace. In contrast, the group that initially wrote without AI showed a strong uptick in brain activity when allowed to use ChatGPT later—suggesting that when foundational understanding is in place, AI can act as a valuable enhancer rather than a crutch.

Taken together, these findings suggest that over-reliance on AI may dull the very skills it appears to support—writing, reasoning, and reflection. In the absence of deliberate engagement, we risk training ourselves—and especially our youngest learners—to think less deeply, question less often, and retain less over time.

Emotional Disconnection in the Age of AI Companionship

Beyond cognitive performance, emerging research suggests that frequent use of generative AI tools may have psychological implications—particularly in how individuals relate to themselves and others. One of the more striking patterns noted by the MIT Media Lab is that the more time users spend engaging with tools like ChatGPT, the lonelier they tend to feel. While the current brain scan study focused specifically on academic performance, earlier investigations by the same lab point to a growing emotional detachment linked to routine AI interaction.

Part of this emotional impact may stem from the nature of AI conversations themselves. Unlike human dialogue, which is messy, unpredictable, and often emotionally rich, interactions with AI are inherently one-sided. The user inputs a prompt, and the machine delivers an optimized, often polished response—one that feels responsive but lacks genuine reciprocity. Over time, this can create a subtle shift in expectations: conversations become transactional, and users may unconsciously deprioritize deeper, more effortful human engagement. For students or individuals already navigating digital-heavy lifestyles, this shift can amplify feelings of isolation.

This dynamic echoes broader findings in psychology around “social surrogacy”—the tendency for people to replace real social interaction with digital substitutes, such as watching television, scrolling through social media, or, increasingly, interacting with AI. While these surrogates may offer short-term comfort or stimulation, they do not satisfy core social needs. In fact, studies show that relying on digital proxies for connection often leaves users feeling less emotionally fulfilled afterward. A 2023 meta-review in the journal Cyberpsychology, Behavior, and Social Networking highlighted this paradox: while digital interfaces can simulate companionship, they rarely provide the emotional nuance and validation that human relationships offer.

A Tool, Not a Crutch: The Nuanced Role of AI in Learning

Despite the sobering findings of the MIT study, researchers were careful not to frame ChatGPT as inherently harmful. In fact, a crucial part of the experiment revealed that when used strategically—after foundational learning has occurred—AI can enhance cognitive engagement rather than suppress it. When the group that initially wrote their essays without AI was later allowed to revise them using ChatGPT, they demonstrated a significant increase in neural connectivity across all EEG frequency bands. Rather than disengaging, these users appeared to leverage the tool thoughtfully, using it to refine and expand their original ideas rather than replace them.

This contrast illustrates a vital distinction: when AI is used to augment understanding, rather than bypass it, it can amplify creativity, efficiency, and even learning. In such cases, the technology serves as a collaborator, not a substitute. Much like a skilled editor can help a writer polish their prose without undermining their voice, ChatGPT can serve as a scaffold that supports but does not replace critical thinking. This mode of engagement, however, requires users to first do the cognitive lifting—to wrestle with ideas, articulate them, and then use AI as a tool for clarity or improvement.

Educational researchers have echoed this view. Dr. John T. Behrens, a learning sciences expert and former professor at Vanderbilt University, has argued that “AI can help learners test hypotheses, reframe questions, and explore counterarguments—if they already have some grasp of the subject.” The danger, he says, is when students turn to AI before forming their own ideas. In these cases, the tool becomes less of a tutor and more of a shortcut, preventing the brain from doing the work necessary to build understanding.

That’s why context and user intent matter profoundly. The risks of cognitive disengagement and memory loss, as documented in the MIT findings, appear most pronounced when users defer to AI from the outset—skipping over the messy, nonlinear process of thinking. On the other hand, when AI is used after active learning, it has the potential to extend that learning in ways that are faster, more nuanced, and more expressive than what many users might accomplish on their own.
