Is Outsourcing Human Thought to AI Evolution or Devolution?

A critical examination of how reliance on AI tools like ChatGPT impacts human cognitive abilities and critical thinking skills.

A disconcerting finding emerged from a joint study by Microsoft Research and Carnegie Mellon University, which surveyed 319 knowledge workers across a range of fields. It concluded that workers who frequently rely on AI show a lower tendency to engage in critical thinking on the job, a state the researchers described as leaving cognitive faculties "atrophied and unprepared".

This finding is not an outlier; it joins a growing body of similar research signals from the past two years.

From the “Google Effect” to the “ChatGPT Effect”

Outsourcing cognitive tasks to tools is not a phenomenon exclusive to the AI era. In 2011, psychologists coined the term “Google Effect”: people tend to forget facts that can be easily searched because their brains automatically determine that such information is not worth occupying valuable memory space. From an evolutionary perspective, this is quite reasonable; cognitive resources are limited, and outsourcing low-value memories is a smart choice.

However, there is a crucial distinction between search engines and chatbots. When using Google, you still have to read webpages, assess sources, and filter information, all of which exercises your cognitive muscles. Asking ChatGPT, by contrast, yields a structured, fluent answer that bypasses the entire process of judgment.

A notable study from the MIT Media Lab tracked brain activity during writing tasks among three groups: one using ChatGPT, another using search engines, and a control group without any tools. The results showed that the ChatGPT group had the lowest brain engagement, and their content retention was the poorest in subsequent tests. Researchers termed this phenomenon “cognitive debt accumulation”.

Professor Matthias Stadler from the University of Munich conducted a similar experiment with 91 students studying unfamiliar topics using either Google or ChatGPT. While students using ChatGPT found the task easier, their answers were surprisingly poor. Stadler likened it to fast food: occasional consumption is fine, but relying solely on it will eventually lead to problems.

A 2025 study involving over 600 respondents provided a more direct quantitative conclusion: the group that used AI most frequently exhibited the deepest cognitive outsourcing and scored lowest on critical thinking tests.

Worse Outcomes for the “Smart” Individuals

One often-overlooked study may be the most alarming of all. A randomized controlled trial involving 150 business school students had all participants independently complete a case analysis, after which half were allowed to use ChatGPT for a second case. The results revealed an intriguing divide: students who had performed poorly in the first round improved with ChatGPT, while high-performing students saw their scores decline.

This outcome is understandable. For weaker students, ChatGPT filled gaps in knowledge and expression; for those already capable of independent thought, it disrupted their cognitive processes, substituting a ready-made answer for the deep reasoning that should have occurred.

Professor Dirk Lindbaum from the University of Bath attributes this phenomenon to the erosion of "cognitive agency" by large language models. Cognitive agency is a deeper capacity than answering questions: it means being aware of and in control of one's own thought processes, understanding how a conclusion was reached, and evaluating how reliable the reasoning behind it is—what psychology calls "metacognition". Lindbaum's concern is that chatbots short-circuit this metacognitive work that should be done by humans. As a personal commitment, he has even applied to trademark the term "No AI Scholarship".

Not all studies point to the same pessimistic conclusion. Some scholars have found that using AI correctly—encouraging students to ask for explanations, question sources, and challenge conclusions—can enhance motivation and self-efficacy. Research from Virginia Tech and Johns Hopkins University also indicates that the key variable is not whether AI is used, but how it is used.

The problem is that most people do not use AI in that labor-intensive manner. They copy answers directly and move on to the next task.

Another structural issue troubling researchers is the rapid iteration of large language models. By the time a rigorous peer-reviewed study is published, the version of the chatbot studied has often been upgraded several times. The pace of scientific accumulation simply cannot keep up with product release cycles.

Harvard psychologists have noted that technology reshaping cognition is nothing new: writing weakened oral memory, the printing press transformed how information spread, and calculators reduced reliance on mental arithmetic. But each of those transitions gave the education system and social norms time to adapt. The current problem is that AI adoption is outpacing any historical precedent, leaving an extremely narrow window for adaptation.

Cheryl Wakslar, an associate professor at USC, admits that the social sciences were slow to respond to the cognitive impact of social media, missing a critical window. “This time we need to do better,” she says.

Currently, research teams are beginning to use eye-tracking and brain imaging technologies to monitor cognitive load in AI-assisted writing processes in real time, aiming to obtain more reliable objective data than survey questionnaires.

The verdict is not yet in, but the signal is already clear: outsourcing thought is not free of cost.