Initially, I wasn’t intending to write a post today. That is, until I read this recent Time article.
The article covers an MIT Media Lab study, authored by Nataliya Kosmyna, titled “Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task.”
The study feels reminiscent of past research on the cognitive debt and impact of social media consumption.
The TL;DR
The study split 54 participants into three groups, each tasked with writing several SAT essays.
I’m in the process of reading the full study on arxiv.org, but here are a few points that glaringly stand out:
First group (used ChatGPT only): Lowest brain activity, along with the weakest memory and creativity. Two English teachers called the essays “soulless.”
Second group (used Google Search): High brain engagement and stronger memory. Participants reported higher satisfaction.
Third group (brain only – no AI): Highest brain engagement and best performance in memory, creativity, and problem-solving. Also experienced the highest satisfaction.
What this demonstrates
This signals the market’s strong enticement to build the next “genie in a bottle” AI system without regard for human impact. Just rub the magic ChatGPT genie lamp and, voilà, your output appears.
But this is about so much more than output. It's about the impact on our cognition, creativity, individuality, and critical thinking.
Just because you can build it, doesn’t mean you should.
I believe vehemently that when designing AI systems, human alignment and impact must ALWAYS come first, before anything else, not the other way around.
Sounds great, but how do we start to do this?
A simple place we can all start from when both building and using AI systems is to ask:
Cui bono? (Who benefits?)
It’s a small question with a big impact. It reveals the true intent behind what we build and what has been built.
Sadly, raising red flags doesn’t seem to quench capitalism’s thirst to constantly drive “business value.”
Instead, being the bearer of the red flag brands you an alarmist. Perhaps, but alarms serve a purpose: they preserve the well-being of others and can save lives.
Until we begin prioritizing the human impact of these systems as much as we do profit margins and business KPIs, we’ll continue to see studies like this emerge.
Sources:
https://time.com/7295195/ai-chatgpt-google-learning-school/
https://arxiv.org/pdf/2506.08872v1