Artificial intelligence is mysterious: we speak to it and it seems to understand what we say. The apparent proof that it understands is that it responds with text or speech that makes sense, sometimes more sense than an ordinary human could articulate. How is this possible?

The Success of Language Models
Research on artificial intelligence certainly dates back to the mid-20th century, and even though the general public has only been able to interact with it directly for the past three years, statistical or neuro-mimetic models had already been at work "under the hood" of many applications since the 2010s. But the type of mass-market application that everyone now calls "AI" only appeared in 2022. We must first grasp the scale of this phenomenon quantitatively. By the end of 2025, ChatGPT already counted 700 million weekly users, and generative AI in general 150 million daily active users. An estimated 50% of American workers use language models (ChatGPT, Claude, Perplexity, Gemini, etc.), without much gain in productivity except for programming and internal administrative tasks. As for social habits, AI has so thoroughly imposed itself on the digital landscape that many young people feel it has always existed. Students use it to do their homework. Millions of people have developed an addiction to dialogue with a machine that has become a friend, a confidant, or a psychotherapist. Interacting with a language model boosts your self-esteem!
The Interdependence of Problems
All of this raises ethical, political, geopolitical, and civilizational questions. Moreover, it is possible that in the years to come, new scientific and technical advances will make these problems even more acute. The computing power and memory that support AI are today divided between two digital oligarchies, one American and one Chinese, that compete in their investments. This economic and geopolitical concentration rightly raises concerns. "Biases," misuses of all kinds, and the probabilistic ravings of machines drive the construction of ethical safeguards. This is good. Nevertheless, it must be remembered that ethics is not limited to easing fears or preventing harm; it also invites us to think about good uses and favorable directions of development. With AI, industrial, ethical, and cognitive questions are closely interdependent. This is why we must elucidate the cognitive efficacy of this technique if we want to fully understand its industrial, ethico-political, and civilizational stakes.
The Question
How is it that statistical algorithms, which calculate the probability of the next word, can generate relevant texts and engaging dialogues? In my view, the solution to this enigma lies in an understanding of what human intelligence is. For it is humans who produce the billions of texts that serve as training data. It is still humans who build the computing centers, extend the networks, and design the algorithms. It is always humans who, through their reading, project meaning onto texts blindly generated by machines deprived of consciousness. But since the secret of AI lies in human intelligence, I would be remiss in my task if I did not explain what it consists of.
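To make the phrase "calculate the probability of the next word" concrete, here is a deliberately minimal sketch: a toy bigram model of my own invention, not anything resembling a real language model. It estimates next-word probabilities from counts in a tiny corpus and then generates text by sampling from those probabilities; real systems replace the counting with deep neural networks and enormously larger corpora, but the principle of prediction by probability is the same.

```python
import random
from collections import defaultdict, Counter

# Toy illustration, not a real language model: a bigram model that estimates
# the probability of the next word from counts in a tiny corpus, then
# generates text by repeatedly sampling from that distribution.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word_distribution(prev):
    """Return {word: probability} for the word following `prev`."""
    counts = follows[prev]
    if not counts:
        return {}
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def generate(start, length=8):
    """Generate text by sampling each next word from its estimated distribution."""
    words = [start]
    for _ in range(length):
        dist = next_word_distribution(words[-1])
        if not dist:
            break
        choices, probs = zip(*dist.items())
        words.append(random.choices(choices, weights=probs)[0])
    return " ".join(words)

print(next_word_distribution("the"))  # cat, mat, dog, rug each with probability 0.25
print(generate("the"))                # e.g. "the dog sat on the mat . the cat"
```

Nothing in this procedure refers to meaning; it only manipulates sequences of signifiers.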
What is Human Intelligence?
Human intelligence is first of all animal, which is to say that it is ordered toward locomotion, the capacity that distinguishes beasts from plants, which are devoid of neurons. The nervous system organizes a loop between sensation and motor activity. This interface between sensing and moving becomes more complex as evolution progresses, up to the emergence of the brain in the most intelligent animals. The latter become capable of mapping their territory, of retaining past events (they have memory), and of simulating future events (they have imagination). The functioning of the brain produces conscious experience, with its pleasures and pains, its repulsions and attractions. From this derives the entire range of emotions that color perceptions and induce actions. Devoted to movement, animal intelligence organizes its experience in space and time. It pursues goals and refers to objects in the surrounding world. Is it dealing with prey, a predator, a sexual partner? From this categorization follows the type of interaction. There is no doubt that animal intelligence conceptualizes. Finally, animals exchange a multitude of signs with the fauna and flora of their living environment and communicate intensely with members of their own species.
AI possesses none of the characteristics of animal intelligence: neither consciousness, nor a sense of space and time, nor intentionality of experience (purpose and reference to objects), nor the ability to conceptualize, nor emotions, nor communication. Human intelligence, by contrast, comprises animal intelligence and additionally possesses a symbolic capacity that is actualized in language, complex social institutions, and technology. Despite its singularity in nature, we must never forget that human symbolic capacity is rooted in an animal sensibility from which it cannot be separated.
Language: Between the Sensible and the Intelligible
I will examine language in particular, through which we can converse, tell stories, ask questions, reason, and speculate about the invisible. Let us begin by analyzing the composition of a symbol. It comprises a sensible part, a visual or sound image (the signifier), and an intelligible part, or concept (the signified). We have seen that animals have concepts, but humans alone represent their concepts through images, which allows them to reflect on those concepts and combine them at will. Symbols, and in particular linguistic symbols, are never isolated; they belong to symbolic systems that speakers internalize. The grammar and dictionary of our common language must be part of our automatisms for us to understand each other fluently. Texts thus belong simultaneously to two worlds that they connect in their own way: through their sensible part they possess a spatio-temporal address, and through their intelligible part they are distributed across invisible networks of concepts.
What does it mean to understand a sentence? Let us take a simple example: "I paint the small room blue." First, I match the sound of each word to its concept. Then, from the spoken sequence, I construct the syntactic tree of the sentence, with the verb "paint" at the root, the word "I" at the subject leaf, the expression "the small room" at the object leaf, and the word "blue" at the complement leaf. But that is not all. To truly understand "I," I must know that the first person has been chosen in opposition to the second and third persons. To grasp "blue," I must know that it is a color and that it represents a selection from the paradigm of colors (yellow, red, green, violet, etc.). And it is only in relation to big, long, or narrow that "small" makes sense. In short, in even a simple symbolic expression such as a short sentence, each word occupies a place in a syntactic tree and actualizes a choice from a group of possible substitutions.
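As an illustration only, here is how the two dimensions just described, the place in a syntactic tree and the choice within a paradigm of substitutions, might be written down explicitly. The data structures and word lists below are my own simplifications, not a claim about how any parser or model actually represents them.

```python
# Illustrative sketch only: one possible explicit encoding of the analysis above.
# The tree layout and the word lists are deliberate simplifications.

# Syntagmatic axis: the syntactic tree, with the verb at the root.
syntactic_tree = {
    "verb": "paint",
    "subject": "I",
    "object": {"determiner": "the", "modifier": "small", "noun": "room"},
    "complement": "blue",
}

# Paradigmatic axis: each word makes sense against the options it excludes.
paradigms = {
    "I": ["I", "you", "she", "they"],                      # grammatical persons
    "small": ["small", "big", "long", "narrow"],           # size adjectives
    "blue": ["blue", "yellow", "red", "green", "violet"],  # colors
}

def choice_and_alternatives(word):
    """Show a word as a selection from the paradigm it was drawn from."""
    options = paradigms.get(word, [word])
    return {"chosen": word, "excluded": [w for w in options if w != word]}

print(syntactic_tree["verb"])           # 'paint' sits at the root
print(choice_and_alternatives("blue"))  # 'blue' against the other colors
print(choice_and_alternatives("small")) # 'small' against big, long, narrow
```

A language model is never given such explicit structures; it must recover their statistical shadow from the regularities of enormous corpora.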
Sentences are generally uttered by subjects in a situation of dialogue. My symbolic automatisms do not merely reconstruct the linguistic meaning of a sentence from a sequence of sounds; they also project a subjectivity, a human interiority, at the source of the sentence. Speech arises in the back-and-forth of dialogue. I place this sentence in the history and possible future of a relationship, within a particular practical context. Moreover, a symbolic expression usually refers to an objectivity, to an extra-linguistic, indeed even extra-social reality. Finally, it awakens in me a host of affective resonances, more or less conscious.
In sum, the symbolic image, which is sensible and material, triggers in the human mind the production and coherent weaving of an intelligible meaning from a multitude of semantic threads: a conceptual sense; a narrative sense through the reconstruction of syntactic trees and groups of paradigmatic substitutions; an intersubjective and social sense; an objective referential sense; an affective and memorial sense. That is to say that, once received by human intelligence, a material text becomes bound to an entire immaterial complexity, a complexity that is by no means random but strongly structured by languages, dialogue rituals and social rules, the logic of emotions, and the contextual coherence inherent in corpora and worlds of reference. The capacity of language models to "reason" and respond to requests pertinently is a corpus effect, related to the priority given to dialogic training data and to data written in a demonstrative style. Enormous quantities of training data enable a statistical capture of the norms of discourse.
Now it is precisely this solidarity between the material part of texts—now digitized—and their immaterial part that artificial intelligence will capture. Let us not forget that only the signifier (sequences of 0s and 1s) exists for machines. For them, there are neither concepts, nor narratives, nor subjects, nor worlds of real or fictional reference, nor emotions, nor resonances linked to personal memory, and even less any rooting in sensible experience of an animal type. It is only thanks to the gigantic quantity of training data and the enormous power of contemporary computing centers that statistical models manage to reify the relationship between the sensible form of texts and the multiple layers of meaning that a human reader spontaneously detects.
Training Data and Computing Power
Contemporary AI rests on four pillars:
- training data,
- computing power,
- statistical learning algorithms (deep learning) that loosely imitate biological neural networks,
- the results of various kinds of "manual" work: specialized databases, knowledge graphs that categorize and structure data (see the sketch below), and live evaluation feedback that enables fine-tuning.
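To give a concrete, if entirely invented, picture of the fourth pillar, here is a minimal sketch of a knowledge graph as a set of (subject, relation, object) triples that can be queried directly, independently of any statistical model. The entities and relations are illustrative assumptions, not drawn from an actual dataset.

```python
# Minimal sketch, for illustration only: a knowledge graph as a set of
# (subject, relation, object) triples that can be queried directly,
# independently of any statistical model. The entities and relations
# below are invented examples, not an actual dataset.
triples = [
    ("ChatGPT", "is_a", "language model"),
    ("Claude", "is_a", "language model"),
    ("language model", "trained_on", "text corpora"),
    ("data center", "supplies", "computing power"),
]

def query(subject=None, relation=None, obj=None):
    """Return every triple matching the given (possibly partial) pattern."""
    return [
        (s, r, o) for (s, r, o) in triples
        if (subject is None or s == subject)
        and (relation is None or r == relation)
        and (obj is None or o == obj)
    ]

print(query(relation="is_a"))           # everything explicitly categorized
print(query(subject="language model"))  # facts attached to one entity
```

Hand-curated structures of this kind are one way the purely probabilistic machinery is corrected and complemented.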
Let us examine the first two pillars in more detail. Most analog archives and memories have been digitized, and the greater part of collective memory is now produced directly in digital form. 68% of the world's population was connected to the Internet in 2025 (against only 2% in 2000). The online crowd produces and consumes a phenomenal quantity of information, and the smallest gesture in an application, the slightest glance at a screen, now feeds the training data of AI. Algorithms can take several pages of text into account within their statistical "attention." Vast training corpora provide enlarged contexts that allow the meaning of words and expressions to be refined far beyond what a dictionary could propose. We thus understand how language models can calculate correlations between material signifiers that imply, for a human reader, corresponding immaterial meanings.

But this requires mobilizing unprecedented computing power. Alphabet, Amazon, Apple, Meta, Microsoft, NVIDIA, and Tesla spent more than 100 billion dollars building data centers between August and October 2025. Dedicated nuclear power plants will soon supply these data centers with electricity. The aggregate computing power of the world is several million times greater than it was at the beginning of the 21st century.
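What "calculating correlations between material signifiers" can look like, in a caricaturally reduced form: the sketch below, a toy of my own and not a description of any real model, counts which words appear near which others in a handful of sentences and then compares those count profiles. Words used in similar contexts end up statistically close, without the program having any access to the concepts, referents, or emotions a human reader would bring.

```python
import math
from collections import defaultdict

# Toy illustration, not a real model: purely from which words appear near
# which others, with no access to concepts or referents, words used in
# similar contexts acquire similar statistical profiles.
sentences = [
    "the cat drinks milk", "the dog drinks water",
    "the cat chases the mouse", "the dog chases the ball",
    "the painter paints the room blue", "the painter paints the wall red",
]

# Build co-occurrence vectors: for each word, count its neighbors in a sentence.
cooc = defaultdict(lambda: defaultdict(int))
for sentence in sentences:
    words = sentence.split()
    for i, w in enumerate(words):
        for j, other in enumerate(words):
            if i != j:
                cooc[w][other] += 1

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v.get(k, 0) for k in u)
    norm_u = math.sqrt(sum(x * x for x in u.values()))
    norm_v = math.sqrt(sum(x * x for x in v.values()))
    return dot / (norm_u * norm_v)

# "cat" and "dog" occur in similar contexts, so their profiles correlate
# more strongly with each other than with "blue".
print(round(cosine(cooc["cat"], cooc["dog"]), 2))
print(round(cosine(cooc["cat"], cooc["blue"]), 2))
```

Real language models rest on the same distributional principle, but with learned vector representations, statistical "attention" over much longer contexts, and incomparably larger corpora.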
Conclusion
Let us recapitulate the different aspects of human intelligence and human work that allow AIs to give us the impression that they understand the meaning of texts. Industrialists build the installations that support computing power. Computer scientists design and implement the software that performs statistical processing. Ontologists (of whom I am one) create rules, systems of semantic labels, knowledge graphs, and specialized databases that correct the purely probabilistic dimension of AI systems. Armies of employees collect, sort, and prepare data, then supervise the training of models. Testers refine the answers given by the machines, detect their biases, and attempt to reduce them. And I have not yet named the two factors that best explain the intelligence of language models. For it is collective human intelligence that produces the training data, data that envelop the solidarity between texts and their meaning. Finally, it is the minds of living users that, from meaningful images generated probabilistically by mechanical and unconscious models, evoke concepts, narratives, referential intentions, the coherence of a real or fictional world, a dialogical intersubjectivity, spatio-temporal intuitions, and finally emotions, all dimensions of meaning that are the hallmark of human intelligence.
In the end, AI functions as a mechanical interface between the collective intelligence that produces the training data and the individual intelligences that interrogate the models, read their responses, and use them. This robotic interface between living personal intelligences and accumulated collective intelligence amplifies both synergistically. Such is the secret of artificial intelligence, well hidden beneath the fiction of an autonomous AI that "surpasses" human intelligence when it in fact expresses and augments it. In its concrete effects, this new system of reciprocal feeding of individual and collective intelligence can contribute to the stultification of lazy masses and the spread of banality, just as it can multiply the creative capacities of scholars and original thinkers. Between these two extremes, every shade of gray is possible, and in this range of possibilities undoubtedly lies the ultimate ethical choice, a choice which, although it concerns each of us, arises even more acutely for educators, who must teach the art of reading, writing, and thinking.
