Tech & Startup
Views

ChatGPT is making us forget how to think

A recent study from MIT revealed that the use of generative AI tools, particularly large language models like ChatGPT, is linked to a measurable decline in cognitive engagement. Illustration: Zarif Faiaz

There was a time I believed that writing required silence, discomfort, and a slow-burning kind of patience. The kind of patience that teaches you to sit with an idea long enough to watch it fall apart and come back together. That belief feels increasingly quaint now. Lately, I have found myself in conversations where people talk about ChatGPT and the productivity boost it has brought to their lives. I have used these models. And I have felt the shift: not in the speed of my sentences, but in the weight of them. They come faster, cleaner, and somehow emptier.

A recent study from MIT confirmed what many of us, quietly and uneasily, already suspected. It revealed that the use of generative AI tools, particularly large language models like ChatGPT, is linked to a measurable decline in cognitive engagement. Participants in the study who used ChatGPT to complete writing tasks exhibited a 47% collapse in neural connectivity, as measured by EEG scans. This is akin to unplugging half the circuits in your brain just because an app can autocomplete your thoughts. The drop was most significant in brain regions associated with memory retention, critical thinking, and creativity. In contrast, those who relied on Google searches showed a smaller decline, and those who completed tasks unaided maintained far stronger cognitive activation. In essence, the more we delegate thought to machines, the less our brains seem inclined, or able, to participate.

Perhaps most alarming was the study's finding on memory retention. A staggering 83% of participants who used ChatGPT could not recall the content of their own writing only minutes after completing it. In contrast, only 11% of those who worked independently or with traditional search tools experienced similar lapses. This is not simply a case of distraction or fatigue but a byproduct of disengagement. When one does not invest mental effort in crafting a sentence or defending an argument, the information is never encoded meaningfully in memory to begin with. It becomes ephemeral, forgotten as easily as it is produced. The researchers introduced the term "cognitive debt" to describe this phenomenon. Just as financial debt allows individuals to enjoy present benefits while deferring the cost, cognitive debt permits a veneer of intellectual output without the effort required for actual understanding. Over time, this deficit compounds. The result is not merely a loss of memory, but a diminished capacity for original thought, independent analysis, and meaningful reflection.

To be clear, I am not saying that AI is inherently destructive. Its capabilities are extraordinary. What is worrying, however, is the unquestioned integration of these tools into our learning, writing, and thinking processes, often without guardrails or critical interrogation. AI is just a tool, and tools do not write your love letters, defend your beliefs, or interpret your grief. Tools do not express your doubts or distil your convictions. Language is not just a utility, but identity. And when we let AI take over our expression, we let it blur who we are, or worse, overwrite us entirely with templated clarity and algorithmic mimicry.

We have accepted the premise that faster output equals better output, ignoring the deeper truth that genuine insight requires time, discomfort, and struggle. There is a particular kind of thrill in writing something difficult and getting it right: a sentence you have revised eight times, a metaphor that finally lands, a thought that reveals itself only after days of circling. You do not get that thrill when AI does the lifting. You get speed, efficiency, and a clean paragraph that sounds professional, but you do not feel the stretch. You do not remember the climb.

This is where I begin to worry: not just as a writer, but as a person trying to hold on to a mind that feels increasingly tempted to coast. AI has not made me more creative. It has made me more passive. I am not nostalgic for pen and paper. I believe in technology. But I also believe there is a line between a tool and a substitute, and we are crossing it far too casually. Spellcheck supports you. A search engine directs you. A language model replaces you, softly and convincingly. You start by asking it to rephrase a sentence. Soon, you are letting it write your reflections, essays, and statements of purpose. What begins as collaboration morphs into surrender.

The implications for education are profound. Increasingly, students are being encouraged to use AI to generate ideas, structure essays, or polish their writing. What they are not told is that in doing so, they may be forfeiting the very cognitive development that education is meant to cultivate. Writing is not merely about producing readable sentences. It is about organising complex thoughts, wrestling with ambiguity, articulating values, and refining perspective. To outsource this process to a machine is not to enhance one's intellect but to bypass it. And this extends beyond classrooms. In journalism, corporate communication, and academic research, the temptation to rely on AI-generated content is growing. The efficiency is seductive. Why labour over nuance when a machine can produce 800 articulate words in under a minute? Why reflect, rephrase, or revise when an algorithm can simulate all three? Yet with each instance of such delegation, something fundamental is lost, not only in the authenticity of the work, but in the cognitive rigour of its creation.

There is a certain sameness creeping into our language now. Emails, essays, and captions are beginning to sound generically polished, slightly too composed. The edges are gone. The quirks. The voice. Everything is being filtered through the same rhythm, the same structure. It is not surprising: these models are trained on the average of everything. They give us the most statistically likely answer, which means, by definition, the least original one.

None of this is inevitable. But it does demand introspection. We must begin to ask not only what AI can do, but what it should not do. There will always be those who argue that AI merely reflects what we give it and that the fault lies not in the machine, but in its user. That is partially true. But it also misses the point. The long-term risk of AI lies not in its ability to think like us, but in our willingness to stop thinking altogether. When we become comfortable surrendering the difficult parts of cognition: doubt, revision, and contradiction, we become intellectually passive. The very traits that make us human begin to atrophy. So, if we are to preserve the richness of thought, the discipline of writing, and the unpredictability of true creativity, we must be prepared to resist the temptation of cognitive ease. The goal is not to banish AI from our lives, but to ensure it remains a tool of support, not substitution. We cannot afford to let efficiency devour reflection. We cannot afford to become fluent in words we did not wrestle to find. In the age of artificial intelligence, it is not our machines that must prove themselves intelligent. It is us.
