Education and personality#

A scholar sharpening his quill, around 1632, Gerrit Dou (public domain)

Is cognitive offloading really the problem?#

The advent of large language models (LLMs) has sparked a significant fear: that the emerging AI-native generation will become cognitively dependent on these tools, unable to function or solve problems without them.

This concern is often encapsulated by the term cognitive offloading: the idea that we delegate our mental effort to an external agent, potentially leading to the atrophy of our innate intellectual skills.

While this concept has been a central point of debate, particularly in discussions about memory and critical thinking, focusing solely on the offloading aspect may be too narrow. The true dynamic at play, especially for complex reasoning tasks, might be less about delegation and more about a fundamental shift in epistemic point of view.

Trust rather than cognitive offloading#

We must reconsider the nature of cognitive offloading when the external agent is a black-box system like an LLM. In essence, using an LLM to generate an answer can be likened to intellectual subcontracting (see BigAI as the universal contractor).

If a person possesses significant expertise in a domain, they have the necessary critical skills to thoroughly evaluate and validate the LLM's output, thus maintaining control and truly offloading a computational task.

However, for a novice, or for the average student, the ability to critically judge the provided answer is absent. In this scenario, they are not genuinely offloading their cognition; instead, they are forced to trust the answer, accepting it as an authoritative truth, because they lack the independent capacity to form an opinion and have no way to verify the information.

This pivot from a skill-based delegation to an authority-based trust introduces a much deeper set of pedagogical and societal displacements.

The risk of personality loss#

If a large portion of the AI-native generation adopts this strategy of uncritical trust, relying on LLMs for core academic or professional output, we face the prospect of pervasive intellectual uniformity.

While the responses generated by sophisticated LLMs may be statistically and factually superior to the average human output today, this efficiency comes at a cost. The real sacrifice may not be the offloading of cognition, but the offloading of personality.

Indeed, personality is the unique voice, perspective, and singular chain of reasoning that defines an individual's representation of the world and way of thinking.

Historically, even when utilizing external sources like encyclopedias or the early Internet, individuals still had the space to inject their own viewpoint, perspective, and unique synthesis. Since most LLMs operate on similar foundational datasets and are governed by shared ethical guardrails, their outputs tend toward a centralized mean, creating a homogenizing effect. This effect only adds to the social conformity of our Western societies.

This leads to a powerful realization circulating among today's educators: the problem isn't just that students are "cheating"; it is that their work now lacks their personality. Submissions are becoming uniform, often better than what humans can produce, but no longer reflecting singular individuals.

We could argue that, even today, with all the resources available, most students make little effort; but that is not quite true, because even compiling various sources is not a neutral exercise.

What if LLMs have good aspects for education?#

To progress, we must engage in a thought experiment and reverse the prevailing negative viewpoint to explore the potential benefits of LLM-powered education.

Memory exercises#

For centuries, education has required students to memorize a vast amount of pure factual knowledge, much of which quickly becomes obsolete or rarely used. LLMs, effectively acting as vastly superior search engines and comprehensive knowledge bases, make this kind of repetitive memorization largely obsolete. This is not necessarily revolutionary, but it can be viewed as an opportunity.

Reasoning#

The true educational opportunity, and the ongoing challenge, lies in reasoning.

As previously noted, the student has two choices:

  • Either to exert the mental effort to understand the LLM's reasoning (perhaps by asking for step-by-step explanations);
  • Or to simply trust the result, mirroring the traditional student who blindly copies a peer's correct exam answer without comprehension.

This dynamic suggests that LLMs will act as powerful difference amplifiers in education:

  • High-performing or motivated students can utilize the LLM as an omniscient personal tutor to deepen their understanding and explore complex concepts faster;
  • Conversely, students predisposed to mental shortcuts will simply substitute peer-copying with output-copying, deepening their reliance on trust without comprehension.

Thus, the tool promises to elevate the already-proficient to new heights while potentially locking the low-performing into a state of intellectual dependence.

First ideas for an LLM-enabled education system#

Context for the student#

In an AI-saturated world, the primary objective for educational reform should be to raise the cognitive bar.

If students possess an incredibly powerful, instantly available omniscient agent, then the nature of the problems they are asked to solve must evolve to match this capability.

Context for the educator#

We should avoid, at all costs, entering the sterile loop:

  1. The teacher uses the LLM to create an assignment;
  2. The students use the LLM to answer it;
  3. The teacher uses the LLM to correct the answers.

In this loop, nobody learns anything.

That's why, when big AI companies are signing deals with universities, educators are bound to ask the fundamental questions of education once again:

  • What should we teach?
  • How should we teach?
  • How can we evaluate pupils/students?

This sterile loop, in which the LLM creates the assignment, generates the answer, and grades the response, is a tangible risk that undermines the entire purpose of education.

To break it and move into an era of LLM-enabled education, a comprehensive manual or philosophical framework is needed. This requires us to return to the core educational questions: what we teach, how we teach, and how we evaluate.

What should be taught?#

The content of education must pivot away from easily searchable, factual knowledge and toward meta-skills that machines cannot replicate, at least not yet.

We can propose the following directions to study:

  • Focus on foundations: Students must learn the core theories that structure a discipline, not just the facts. They must understand the deep motivations behind those facts. LLMs can then be seen as information boosters rather than as primary information providers. This may not be easy.
  • Critique of LLM outputs: We can no longer focus on information acquisition alone, since LLMs provide it instantly. Instead, the focus should shift to information validation and information synthesis. Students should be taught how to critically evaluate LLM output, fact-check it, identify its biases, and integrate it with non-digital sources. Research in physical paper libraries will make a great exercise in the coming years, if only to assess the shift in representations between LLMs and old paper books. Hallucinations can also be used to teach about AI unreliability.
  • Cultivate intellectual personality: Education must explicitly teach students how to inject their unique perspective, ethical lens, and personal values into their work. Students must also understand that LLMs tend to flatten everything into a statistical average, with no personality.

How should we teach?#

The teaching methodology must transform from a delivery system for information into a facilitation system for complex thinking. A fine ambition, but not so easy to realize.

We propose the following directions to study:

  • Inquiry-based learning: Teachers should build more questions into their courses, and more time to challenge assumptions, in order to cultivate a will to inquire. Now that students carry a thinking library in their pocket, can they trace ideas back to their roots? And what would they themselves have concluded?
  • Focus on process: Assignments should require students to submit their prompting process (the history of their queries, the LLM outputs they discarded, and their rationale) alongside the final answer. This makes the methodology the object of evaluation (see below).
  • LLM-Proof Assignments: Design assignments that require real-world engagement, collaboration, ethical judgment, or application to local or current situations that are not yet in the LLM's training data. That's not easy, but imagine situations where the LLM may not have the right answer.

How should evaluation evolve?#

Education should keep both LLM-free evaluation and LLM-assisted evaluation.

We propose the following directions for LLM-assisted evaluation:

  • Evaluation of the prompt/inquiry process: Grade the student's intellectual sophistication as revealed through the quality and complexity of their prompts. A good student asks a sequence of nuanced, specific questions; a poor student asks one vague question. That's one of the key points.
  • Building on this point: standardize the LLM engine during an exam. Every result a student obtains should be reproducible by the teacher, unless the student explicitly declares their work LLM-free (a gamble that carries more risk if it is not).
  • Cross-domain and complexity focus: Using multi-dimensional problems that require students to connect concepts from different disciplines (e.g., combining history, economics, and environmental science to solve a current policy issue) is a sure way to stimulate the student's comprehension and synthesis capabilities.
  • The previous point focuses on increasing problem complexity. LLMs bring a great deal to students: let them face complexity and cross-domain problems in return.
  • To form an accurate picture of a student's abilities, a single exam may not be sufficient: the student should be assessed holistically. That is where AI becomes a teacher's assistant, helping to analyze patterns of responses across a semester and looking for consistent signs of real comprehension rather than isolated instances of correct answers. In a way, the evaluation would resemble a psychological profile, where one can cheat on some questions but not on all of them, and especially not on the underlying model.
  • Don't punish old-school pupils and students: they may not want to use LLMs so often, and they should be rewarded when their LLM-free practice can be verified.

Changing the life of educators#

For sure, this paradigm shift demands many changes in the role of the professor.

Educators will need to move beyond simply assessing today's curriculum and proactively look at the cognitive demands of the next academic levels to effectively challenge their AI-augmented students.

A key strategy for educators will be to collaborate with LLMs themselves to design robust evaluation methods, for instance based on what we've just exposed.

Adaptation no longer seems optional in education. Educators may have to live with it.

Will they be replaced by AI? It is not so simple. AI can be a huge complement for serious students, and its progress is spectacular, but AI-augmented teachers teaching AI-augmented students is far better than no teachers at all.

Conclusion#

The fears surrounding the AI-native generation may be misplaced. The daily integration of LLMs into the student workflow may present not an insurmountable barrier but a unique lever for educational advancement.

This future demands swift and deliberate action. Here are, we think, the main ideas:

  • Standardization: Considering the evaluation of prompts, there is a strong argument for implementing official, designated LLMs in education to ensure a consistent baseline for comparison and assessment across students.
  • Restructuring: Courses must be fundamentally restructured to favor complexity, moving past repetitive knowledge and simple application questions.
  • Complexity: Evaluation should relentlessly favor cross-domain, non-trivial issues that necessitate genuine synthesis and deep critical inquiry, ensuring the student is the master of the tool, not its servant.

(November 15 2025)

