“I criticize, therefore I think”: AI and critical thinking in education
Free Minds in Automated Times
The intersection between AI and critical thinking has become a central topic in today’s educational debate. AI is entering classrooms and homes with promises to personalize learning and expand access to knowledge. However, it also raises deep concerns about how it affects the ability of new generations to think critically.
This is not the first time a technology has sparked alarm about its cognitive impact. From antiquity onward, advances such as writing, the printing press, and even the calculator have been feared to weaken human mental faculties. These concerns are not unfounded; poorly used, technological tools can erode intellectual skills that should be cultivated and preserved.
In the 21st century — with students growing up surrounded by smart devices and digital assistants — we must reflect on how AI influences the development of critical thinking. In a recent UNESCO forum, experts agreed that AI must be used to strengthen critical thinking and human interaction, never to replace these profoundly human dimensions.
AI and the Development of Critical Thinking in Students
The accelerated adoption of AI tools in education has brought clear benefits, such as immediate access to information and the automation of repetitive tasks. However, it also introduces fundamental challenges, one of the most urgent of which is ensuring that students continue to develop solid critical thinking in an era of instant answers.
We live surrounded by information just one click away. Therefore, forming students capable of analyzing, questioning, and reflecting has become an urgent task.
International studies reveal concerning gaps: only 29% of university students feel prepared to apply critical thinking in real situations, and only 15% of 15-year-olds in OECD countries reach high levels of problem solving and critical thinking. These data suggest that schools and universities still have unfinished work when it comes to teaching students how to think systematically.
Figure: Percentage of U.S. teenagers who have used ChatGPT for schoolwork, comparing 2023 (13%) and 2024 (26%). Use of AI tools among students doubled in a single year, illustrating students’ growing reliance on these technologies for learning.
Allies or Cognitive Shortcuts?
The popularization of generative AI systems such as ChatGPT exemplifies this new reality. Surveys show that more than a quarter of teenagers already use chatbots for school assignments. While these assistants offer immediate support, there is a risk that students may “delegate” their thinking processes to the machine.
Recent research is beginning to show this effect: an international study found that the more confidence a person has in an AI’s ability to perform a task, the more they tend to let go of their own critical thinking.
In other words, believing that “AI is smarter than me” can lead to an uncritical acceptance of its answers. Conversely, when users distrust or recognize the limitations of AI, they activate their critical skills, evaluating and improving the generated responses.
This finding suggests that the impact of AI on student thinking is not uniform — it depends largely on how it is used: passively, as a cognitive shortcut, or actively, as a tool subject to scrutiny.
From a positive perspective, AI can also become an ally of critical thinking if integrated pedagogically. Rather than forbidding its use, some educators propose leveraging AI to teach students to think about thinking. For example: having students analyze chatbot responses, identify errors, biases, or missing elements, and thereby exercise their judgment.
UNESCO proposes this approach: using generative AI applications to spark critical reflection in students. This involves detecting biases — both from developers and the model itself — and strengthening students’ critical thinking and humanistic formation.
Cognitive and Cultural Risks of AI Overdependence
Excessive reliance on AI carries cognitive risks that researchers are already documenting. A study with hundreds of academic and professional participants found a strong negative correlation between AI dependence and critical thinking skills (r = -0.68, p < 0.001), with frequent AI users showing a notable drop in test performance.
The mechanism behind this phenomenon is cognitive offloading. When people know a machine can handle a mental task, they reduce their own effort and stop exercising their cognitive “muscles.” Over time, this reduction in mental exercise may lead to a true atrophy of abilities such as critical evaluation, problem solving, and sustained attention.
The study found that frequent AI users struggled more to evaluate information critically and solve problems reflectively compared to those who relied less on these tools. Many young users even admit fearing they may be losing thinking skills due to constant use of digital assistants.
Mental Fatigue and Automation Bias
Psychologically, overreliance on AI can induce automation bias: the tendency to trust machine-generated solutions without question. This undermines the critical mindset and cognitive vigilance normally applied to information sources.
As a result, a student accustomed to receiving instant AI answers may stop questioning their accuracy, becoming a passive receiver of knowledge.
In the long term, researchers warn that this could lead to unlearning the ability to solve problems independently — especially in low-demand tasks where it is tempting to let technology do all the work. Ironically, the youngest users may be the most affected. One study suggests that individuals aged 17 to 25 showed greater AI dependence and lower critical thinking scores than older groups.
From Automated Thinking to Cultural Erosion
From an anthropological perspective, AI overdependence raises subtle yet profound cultural risks. One is the potential homogenization of thought. Generative AI systems are trained on massive global datasets, often dominated by certain languages and cultural perspectives (primarily English-speaking, Western viewpoints).
If students worldwide rely on similar AI tools for “prefabricated” answers, the diversity of ideas and approaches in education could shrink.
Studies on AI-assisted creative tasks have found that these tools often produce more convergent and less varied solutions, reducing the richness of perspectives. This is particularly concerning for cultural identity: local narratives, languages, and ways of reasoning may be overshadowed by standardized algorithmic suggestions.
A World with Less Intellectual Diversity?
Another risk is the flattening of knowledge. When students become accustomed to accepting AI responses without probing further, they may stop valuing the historical, ethical, or social contexts of knowledge.
In practical terms, this means that without encouraging reflection, future generations may lack the intellectual tools to question information, discern truth from misinformation, and make autonomous decisions. Human agency — the capacity to think and decide independently — may erode in favor of algorithmically guided behavior.
UNESCO analysts warn that the more decision power we delegate to unregulated AI systems, the more human autonomy shrinks. This can only be avoided if AI is deliberately designed to expand human agency, not replace it.
The Educational Value of Critical Thinking
In an era of automated knowledge, it may seem tempting to outsource intellectual effort to machines. If AI can answer factual questions or even write essays, why invest time in developing hard-to-measure human skills?
The answer from the educational world is clear: human skills — especially critical thinking — define us as learners and citizens. They allow us to use AI without losing our essence.
The more tasks AI can handle, the more important the non-automatable becomes: contextualizing knowledge, questioning implications, innovating, and making informed ethical decisions.
Thinking Well: The Great Differentiator
International forums on the future of work and education emphasize that higher-order human skills will be the differentiating value in a technology-driven society.
The World Economic Forum ranks critical thinking (also described as analytical thinking) and problem solving at the top of the skills growing most in demand through 2025.
The OECD and other organizations describe the “21st-century skills,” which include critical thinking, creativity, collaboration, and emotional intelligence — inherently human capacities that complement AI.
Between Knowledge and Judgment
In education, this means rethinking the purpose of school: not just to transmit content, but to form judgment. Memorized or procedural knowledge can often be delegated to machines — but knowing how to ask questions, connect concepts, and discern valid from unreliable information becomes more valuable.
UNESCO’s vision for education in the AI era emphasizes protecting student autonomy and critical thinking.
As Stefania Giannini, UNESCO Assistant Director-General for Education, states:
“Education is and must remain a deeply human act rooted in social interaction.”
No technology can replace dialogue, the teacher’s challenging question, or the intellectual epiphany that emerges from personal reflection.
Educating to Avoid Delegating Thinking
The relationship between AI and critical thinking reflects how we imagine the coexistence between humans and machines. From an anthropological perspective, we must define which traits we want to preserve as essentially human in the technological revolution.
Critical thinking emerges as an indispensable trait: the spark that lets us question the status quo, imagine alternatives, and maintain control of our intellectual destiny.
Improperly used, AI could lull society into intellectual complacency, but there is still time to choose a different path. A new educational model is emerging: one where students learn to work with AI, but with their hands firmly on the reins.
They may ask a question to an algorithm — but then ask themselves:
“Do I agree with this answer? What is missing? What are the consequences?”
This dialogue between human mind and machine places critical thinking as the arbiter and guide.
UNESCO and other leaders remind us that we must return agency to learners. AI is not an inevitable fate. It is a tool — powerful, yes — but one that must be oriented by human values.
Conclusion
Critical thinking in the era of artificial intelligence is not a luxury; it is the cornerstone of a truly human education. It ensures that AI becomes what it should be: an ally that amplifies our capabilities without eclipsing our human capacity to think.
If we achieve this balance, we will have transformed a potential dilemma into a fruitful synergy.
To educate for a world with AI is, fundamentally, to educate so that humanity continues shaping its own story — with clarity of mind, vibrant creativity, and critical awareness, even (and especially) when sharing the classroom with artificial intelligences.