This might not surprise you. Scientific and anecdotal reports note that many of us leave AI sessions with our minds feeling slack and flaccid, having failed to engage in the task that the chatbot aced.
The MIT study, “Your Brain on ChatGPT”, indicates that this is not just subjective. It asked volunteers to write SAT-style essays, splitting them into three groups: one used ChatGPT, another Google search, and the third used no tools at all. Brain activity was measured by EEG.
The ChatGPT group showed markedly less mental engagement in the tasks, indicated by lower brain activity. In the words of the authors, they “consistently underperformed at neural, linguistic, and behavioral levels.” Their essays were judged to be repetitive, unoriginal, and “soulless.”
And it got worse. As the exercise progressed, the ChatGPT users got lazier. By essay three, they were barely doing anything at all – just pasting the question into ChatGPT and submitting the output.
And then worse still. Three months later, participants were asked to reproduce the same essays. Those in the ChatGPT group had largely forgotten the content. The researchers concluded that using the AI tool had short-circuited the learning process. The information had never made it into long-term memory networks.
This has huge implications for whether kids should use AI at school, as well as how you should use it in your life. The MIT study adds to a fast-growing body of evidence that delegating thinking tasks to machines – “cognitive offloading” – leaves the offloader less mentally capable.
There are some rays of hope. In the MIT study, the no-tech group was later allowed to use ChatGPT to reproduce their essay. In this condition, their neural activity remained high even with the tool, presumably because subjects did the original thinking themselves. This suggests that prior engagement might inoculate the brain against the numbing effects of chatbot use.
Cheat Sheet
Other research points to similar conclusions. AI can support learning – but only if it is calibrated to act like a tutor, not a cheat sheet. When they ask questions, demand justifications, and offer Socratic feedback, chatbots can actually improve learners’ performance.
But let’s be real: chatbots are only used like this in a tiny number of instances. Their default mode is to give you the answer you want straightaway with no effort on your part. That’s what they’re for. And that is the problem.
So what should you do?
Tragically for the human race, the best answer currently is discipline. Wharton Business School professor Ethan Mollick says we must develop “metacognitive awareness”. This is the ability to recognize when you are offloading too much cognitive effort and consciously pull back.
First, don’t prompt ChatGPT cold: think first, drafting an outline or argument before turning to the AI. Second, use AI as a checker, not a generator. Let it challenge your work, not replace it. Third, avoid multitasking with AI – this leads to shallow engagement. Finally, according to experts, we should reflect on what we’ve learned after each session.
As if, right? The extremely low chance that you, I, or anyone will sit and ponder what we have and haven’t learned after a bout with AI shows what a challenge this is. But the stakes couldn’t be higher. When you stop thinking, you’re not just becoming less effective. You’re becoming less human.
Joe Smith is Founder of the AI consultancy 2Sigma Consultants. He studied AI at Imperial College Business School and is researching AI’s effects on cognition at Chulalongkorn University. He is author of The Optimized Marketer, a book on how to use AI to promote your business and yourself. Contact: joe@2sigmaconsultants.com.