AI Is Accelerating Cognitive Inequality
The Results Point to a Slow Collapse of Function
By Stephen Weese.
You may have heard of the recent (not yet peer-reviewed) MIT study which found that students using Large Language Models (LLMs, such as ChatGPT) for writing tasks exhibited reduced brain activity and retained little of the text they produced. This was measured with electroencephalogram (EEG) scans and by polling the test subjects afterward. The EEGs showed less activity and connectivity in the brain, and even signs of cognitive passivity: the students were happy to offload the heavy thinking work to the AI.
As a computer and AI professor, I found this gave me some pause. I have already observed that students are using these tools to complete work for college-level courses, at a real cost to their learning. But one detail of this study stands out to me: the subjects were students. Students are still in the process of developing skills like critical thinking, perseverance, and focus. When I personally use an LLM to research or write (I’m writing this article without one, though it did help with research), I feel a strong amplification of my abilities. You see, I know what questions to ask about my field. I know when the LLM is likely hallucinating or just plain wrong. I can interrogate it, treat it as a partner, and vastly accelerate and improve my results. It seems evident to me that those who are skilled in a field, understand the terminology, and know how to use an LLM properly will be challenged, think more, and produce better results.
Part of this is because a person like me with a graduate degree has already spent years developing:
Writing skills
Critical thinking
Reading comprehension
Long-term perseverance
Focus
Take a look at this list. It doesn’t take long to realize that a regular AI user, a dependent AI user, could get away with never developing any of these. This is the crux of the problem—if today’s students now neglect these skills, they will become dependent on AI for basic life tasks and employment, and never achieve independent mastery of any specific subject.
Individually, I don’t have to worry about this much, at least in the short term. AI tools greatly enhance what I’m capable of. For students who are learning these life skills now, there is a good chance that their development will be greatly hampered. So, in the long term, this is a problem for me, since the current younger generations will eventually be running, well, let’s face it, the planet, while I am at retirement age (or maybe I’ll have an amazing robot body by then, who really knows at this point?). There have always been underachievers in society, but imagine a society where the vast majority of adults are cognitive underperformers.
The Future Slowly Falls Apart
When I was a student, I remember I was happy to take shortcuts so that I could go play video games or go out drinking. I don’t really blame the current generation for taking advantage of these tools; I would have, too. And hey, it’s correct, what, 90% of the time? That’s good enough, isn’t it? Yet these dependent students will not only have underdeveloped minds; they will also be unable to handle breakdowns in the system, since they have never been tested and pushed through problems that were difficult and lengthy to solve.
That means that the few left who do exhibit mastery will bear the burden of keeping society together. When things break, and they will, these masters will have to fix them. Of course, AI will be able to fix some problems, but most likely not the ones it creates itself. The more decision-making we cede to AI, the more chances it has to make mistakes. For a while, the masters in society may do very well by being scarce and therefore greatly remunerated. Continued scarcity, however, is not a favorable outcome. And what will be scarce? Those skills I mentioned, things like critical thinking and focus.
I know, this kind of thing conjures up images of the world of WALL-E, or perhaps even Idiocracy. But it may be worth considering a future in which we are not all slaughtered, Terminator-style, but instead become something like pets for the super-AI intellects that run the world.
Compounded Losses
This trend predicts the loss not only of intelligence and perseverance, but also of art, creativity, and diversity. During the research for this article, ChatGPT helped me find a book that describes this future almost perfectly. Shockingly, this text was written in 1909:
No one confessed the Machine was out of hand. Year by year it was served with increased efficiency and decreased intelligence. The better a man knew his own duties upon it, the less he understood the duties of his neighbor, and in all the world there was not one who understood the monster as a whole. Those master brains had perished.
– E.M. Forster, The Machine Stops
The story presents a world where everything is provided by the Machine, where people are so dependent upon it that even walking around or talking to someone seems like a useless effort. There is no reason to walk or travel anywhere, since whatever you desire can be brought to you. You can talk to anyone on the planet through screens, so there is no need to see anyone in person.
I do not predict an extreme world such as this, but I do predict a lamentable loss of these “master brains” in the world. Masters in sciences, art, engineering, and many other fields. As a computer professor, I am not against AI in general. It likely has the power to help cure diseases and solve several persistent and difficult problems humanity has faced. But there needs to be a humanity left after the problems are solved.
When We Strive, We Thrive
It may be obvious to point out that almost every success story in humanity has been the result of hard work, persistence, and determination. Difficulties and challenges help us grow. What really happens when we live life on “easy mode”? Maybe we should just relax and enjoy the lack of challenge.
This has actually been studied scientifically: the right amount of stress is healthy and helpful for biological organisms, a concept called hormesis. Too much stress, as you might predict, leads to worsening health and mental disruption. Yet too little stress does not challenge the body and mind enough to make them strong. We develop resilience through stress; we gain the ability to handle future challenges and keep our mental and physical strength. This concept is simply demonstrated by things like lifting weights and studying new subjects. Though these activities can feel stressful, they trigger an adaptive response that makes us both stronger and smarter.
A related principle is the Yerkes–Dodson law: performance improves with mental arousal, but only up to a point. The right amount of stimulation results in improved efficiency and performance. Part of this arousal, the readiness to complete tasks, depends on focus. Yet if the AI-dependent generation loses the ability to focus, they may never reach the level of arousal at which peak performance occurs.
The Shift Must Come Soon
We have already seen the effects of the pandemic on the “COVID generation” of students. Test scores declined significantly in English and math across all demographic groups in the United States. Students who had to stay home and switch to online learning, at educational institutions that were unprepared for the shift, lost up to the equivalent of half a year of learning. The effects are still being seen as these students move up through the educational system.
The pandemic shift will be mild compared to what we may soon see from the ChatGPT generation. The problem is not AI tools in and of themselves; it is that they remove the challenge and duration of educational assignments. We must find a way to challenge students continually, to encourage information retention and critical thinking, and to demonstrate the benefits to individuals and society of subject mastery.
Parents will need to get more diligent in observing their children’s education. What tools are they using? Which ones are they allowed to use? What is the purpose of each specific assignment? Educators will need to come up with assignments and strategies that encourage mastery and critical thinking.
Simply delaying the use of AI in some contexts can help. For example: first, write out everything you know, put together key concepts, and then ask AI about your paper. Have it give you an outline instead of writing the paper itself. Use LLMs as an assistant, not a workhorse. Most of all, educators and parents will have to convince students that learning takes effort and that effort brings growth and reward. If there is no motivation for education, then education itself falls apart.
As with many things in life, I am recommending a “middle way.” We don’t, as a species, destroy AI or retreat into the wilderness away from all things technological; instead we identify what is good and fruitful in humanity and strive, at every turning and every change, to preserve those things above all else, no matter what tools or technology we have. In this way, we hold on to a positive future where we retain what is truly noble and helpful to life. A future where society is held together by the power of humanity and technology combined.
Stephen Weese is a computer professor and consultant in CS, IT, and AI. He also works in media and is the CEO of Marvelous Spiral Studios.