This text is based on my presentation at the AI for People summit [https://ai4people.org/advancing-ethical-ai-governance-summit/], organized with the support of the European Union on December 2 and 3, 2025.
AI has become our main interface for accessing our accumulated memory and the primary medium of communication between humans, since it governs social networks. This new information ecosystem serves as a battlefield of narratives and, simultaneously, as a place of knowledge creation and sharing. It oscillates between manipulation and collective intelligence. Consequently, one of the essential stakes remains the formation of young minds.
Let’s not forget that AI is also made by people. AI models cannot be separated from the informational ecosystem, which can be described as a closed circuit with three poles: people, data, and models. People create information, feeding the digital memory; digital data train models; models enhance people’s capacity to create information, which feeds that memory in turn, and so on.
Today, many reflections on AI ethics legitimately focus on the production and regulation of LLMs, or large language models. But too often we forget the responsibility of those who produce the data, which is now society as a whole.
The dark side is that we now face massive cases of what can be described as data poisoning. For instance, recent reports describe a pro-Russian propaganda operation that was first named « Portal Kombat » and has now been renamed « Pravda ». It is a network of more than one hundred and fifty websites presenting itself as an innocuous news broadcaster while in fact repeating the Kremlin’s biased points of view. These sites are present on every continent, their texts are translated into dozens of languages, and these many translations make them even more credible. On average, this network publishes twenty thousand two hundred seventy articles every forty-eight hours, or approximately 3.6 million articles per year. This production and translation of texts is almost entirely automated. The goal is not to attract human readers (there are relatively few of them) but to serve as training data for AIs, in order to manipulate the models’ users. The main AI models significantly rehash or confirm the toxic information provided by the Pravda network. With machine learning, there is no need for demonstration, proof, facts, or contextualization. Repetition and simplicity work perfectly. The more falsehoods AIs are fed, the more damaged our collective memory will become.
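As a quick order-of-magnitude check, using only the figures quoted above, the annual volume follows directly from the 48-hour publication rate:

```python
# Rough sanity check of the publication figures quoted above.
articles_per_48_hours = 20_270
articles_per_year = articles_per_48_hours * 365 / 2
print(f"{articles_per_year:,.0f}")  # ~3,700,000, on the order of the cited 3.6 million per year
```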
Rather than relying on data scattered across the Web, should we prioritize objective and reliable data found in scientific journals, encyclopedias, and mainstream media? Wikipedia, for example, is one of the most reputable sources for language models. Yet several Wikipedia articles have now been taken over by Islamists and Hamas supporters, who exploit the encyclopedia’s rules of operation to their advantage. Things have gone so far that Wikipedia’s co-founders Jimmy Wales and Larry Sanger have publicly expressed concern.
Another example is an investigation conducted by the BBC, which laments that artificial intelligences produce fake news in 45% of cases and that half of young people (under thirty-five) trust their accuracy. The BBC points an accusatory finger at AI assistants. Yet a few months later, the BBC’s director general and its head of news were forced to resign following the scandalous fabrication of false news about Donald Trump and a report noting systematic Islamist bias in BBC Arabic broadcasts.
It is clear that ethical problems cannot be limited to the models themselves, but must extend first and foremost to the creation of the training data. And this means the totality of our online behavior. Each article, blog entry, podcast, or video we post produces data that will eventually train the formal neurons of artificial intelligences. From that data, AI will answer questions, draft texts, instruct students, and guide policies. Our responsibility is all the greater when we find ourselves in a position of authority, because AI models will assign greater weight to information provided by journalists, teachers, scientific researchers, textbook writers, and producers of official websites.
In conclusion, let me offer a few educational watchwords for the age of AI: do not abandon personal memorization; practice abstraction and synthesis; question at length rather than settling for first answers; always place facts back within the multiple contexts from which they derive their meaning; and, finally, take responsibility for the messages we entrust to digital memory, because this information helps shape our collective intelligence.
Artificial intelligence is mysterious: we speak to it and it seems to understand what we say. The proof that it understands is that it responds with text or speech that makes sense, sometimes more sense than an ordinary human could articulate. How is this possible?
The Success of Language Models
Certainly, research on artificial intelligence dates back to the mid-20th century, and even though the general public has only been able to manipulate it directly for the past three years, statistical or neuro-mimetic models had already been at work « under the hood » of many applications since the 2010s. But the type of mass-market application that everyone now calls « AI » only appeared in 2022. We must first grasp the scale of this phenomenon quantitatively. By the end of 2025, ChatGPT already had 700 million weekly users, and generative AI in general had 150 million daily active users. It is estimated that 50% of American workers use language models (ChatGPT, Claude, Perplexity, Gemini, etc.), without much increase in their productivity, except for programming tasks and internal bureaucracy. In terms of social habits, AI has so thoroughly imposed itself on the digital landscape that many young people feel it has always existed. Students use it to do their homework. Millions of people have developed an addiction to dialogue with a machine that is now a friend, a confidant, or a psychotherapist. Interacting with a language model boosts your self-esteem!
The Interdependence of Problems
All of this raises ethical, political, geopolitical, and civilizational questions. It is moreover possible that in the years to come, new scientific and technical advances will make these problems even more acute. The computing power and memory that support AI are today divided between two digital oligarchies, one American and one Chinese, which compete in their investments. This economic and geopolitical concentration rightly raises concerns. « Biases », misuses of all kinds, and the probabilistic ravings of machines drive the construction of ethical safeguards. This is good. Nevertheless, it must be remembered that ethics is not limited to easing fears or preventing harm; it also invites us to think about good uses and favorable directions of development. With AI, industrial, ethical, and cognitive questions are closely interdependent. This is why it is necessary to elucidate the cognitive efficacy of this technology if we want to fully understand its industrial, ethico-political, and civilizational stakes.
The Question
How is it that statistical algorithms, which calculate the probability of the next word, can generate relevant texts and engaging dialogues? In my view, the solution to this enigma lies in an understanding of what human intelligence is. For it is humans who produce the billions of texts that serve as training data. It is still humans who build the computing centers, extend the networks, and design the algorithms. It is always humans who, through their reading, project meaning onto texts blindly generated by machines deprived of consciousness. But since the secret of AI lies, in my view, in human intelligence, I would be remiss in my task if I did not explain what it consists of.
What is Human Intelligence?
Human intelligence is first of all animal, which is to say that it is ordered toward locomotion, which distinguishes beasts from plants devoid of neurons. The nervous system organizes a loop between sensitivity and motor activity. This interface between sensation and movement becomes more complex as evolution progresses, until the emergence of the brain in the most intelligent animals. The latter become capable of mapping their territory, of retaining past events (they have memory) and of simulating future events (they have imagination). The functioning of the brain produces conscious experience, with its pleasures and pains, its repulsions and attractions. From this derives the entire range of emotions that color perceptions and induce actions. Devoted to movement, animal intelligence organizes its experience in space and time. It pursues goals and refers to objects in the surrounding world. Is it dealing with prey, a predator, or a sexual partner? From categorization follows the type of interaction. There is no doubt that animal intelligence conceptualizes. Finally, animals exchange a multitude of signs with the fauna and flora of their living environment and communicate intensely with members of their own species.
AI possesses none of the characteristics of animal intelligence: neither consciousness, nor sense of space and time, nor intentionality of experience (purpose and reference to objects), nor the ability to conceptualize, nor emotions, nor communication. Yet human intelligence comprises animal intelligence and additionally possesses a symbolic capacity that actualizes itself in language, complex social institutions, and techniques. Despite its singularity in nature, we must never forget that human symbolic capacity is rooted in an animal sensitivity from which it cannot be separated.
Language: Between the Sensible and the Intelligible
I will examine language more particularly, through which we can dialogue, tell stories, ask questions, reason, and speculate about the invisible. Let us begin by analyzing the composition of a symbol. It comprises a sensible part, a visual or sound image (the signifier), and an intelligible part or concept (the signified). We have seen that animals have concepts, but Man alone represents his concepts through images, which allows him to reflect on them and combine them at will. Symbols, and in particular linguistic symbols, are never isolated but are part of symbolic systems that are internalized by speakers. The grammar and dictionary of our common language must be part of our automatisms for us to understand each other fluently. Texts belong simultaneously to two worlds that they connect in their own way: they possess a spatio-temporal address through their sensible part and they distribute themselves in invisible networks of concepts through their intelligible part.
What does it mean to understand a sentence? Let us take the simple example that follows: « I paint the small room blue. » First, I match the sound of each word to its concept. Then, from the spoken sequence, I construct the syntactic tree of the sentence with, at the root, the verb « paint, » at the subject-leaf the word « I, » at the object-leaf the expression « the small room, » and at the manner-complement leaf the word « blue. » But that is not all. To truly understand « I, » I must know that the first person has been chosen in opposition to the second and third person. To grasp « blue, » I must know that it is a color and that it represents a selection from the paradigm of colors (yellow, red, green, violet, etc.). And it is only in relation to big, long, or narrow that « small » makes sense. In short, in a simple symbolic expression such as a short sentence, each word occupies a place in a syntactic tree and actualizes a choice from a group of possible substitutions.
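To make the two structures just described concrete, here is a purely schematic sketch (not a real parser; the field names and sets are invented for illustration) of the syntactic tree of the example sentence and the paradigms of substitution from which each word is chosen:

```python
# Schematic illustration of the analysis above (invented structure, not a real parser).
sentence_tree = {
    "root": "paint",                                             # the verb at the root
    "subject": "I",                                              # subject leaf
    "object": {"head": "room", "modifiers": ["the", "small"]},   # object leaf
    "complement": "blue",                                        # complement leaf
}

# Each word also actualizes a choice from a group of possible substitutions.
paradigms = {
    "I": {"I", "you", "she", "they"},                      # grammatical persons
    "small": {"small", "big", "long", "narrow"},           # size and shape adjectives
    "blue": {"blue", "yellow", "red", "green", "violet"},  # the paradigm of colors
}
```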
Sentences are generally uttered by subjects in a situation of dialogue. My symbolic automatisms do not merely reconstruct the linguistic meaning of a sentence from a sequence of sounds; they also project a subjectivity, a human interiority, at the source of the sentence. Speech arises in the back-and-forth of dialogue. I place this sentence in the history and possible future of a relationship, within a particular practical context. Moreover, a symbolic expression usually refers to an objectivity, to an extra-linguistic, indeed even extra-social reality. Finally, it awakens in me a host of affective resonances, more or less conscious.
In sum, the symbolic image, which is sensible and material, will trigger in the human mind the production and coherent weaving of an intelligible meaning from a multitude of semantic threads: a conceptual sense; a narrative sense through the reconstruction of syntactic trees and groups of paradigmatic substitutions; an intersubjective and social sense; an objective referential sense; an affective and memorial sense. That is to say that, once received by human intelligence, a material text becomes bound to an entire immaterial complexity, a complexity that is by no means random but rather strongly structured by languages, dialogue rituals and social rules, the logic of emotions, and the contextual coherence inherent in corpora and worlds of reference. The capacity of language models to « reason » and to respond to requests in a pertinent way is a corpus effect, related to the priority given to dialogic training data and to data that adopt a demonstrative style. Enormous quantities of training data enable a statistical capture of discourse norms.
Now it is precisely this solidarity between the material part of texts—now digitized—and their immaterial part that artificial intelligence will capture. Let us not forget that only the signifier (sequences of 0s and 1s) exists for machines. For them, there are neither concepts, nor narratives, nor subjects, nor worlds of real or fictional reference, nor emotions, nor resonances linked to personal memory, and even less any rooting in sensible experience of an animal type. It is only thanks to the gigantic quantity of training data and the enormous power of contemporary computing centers that statistical models manage to reify the relationship between the sensible form of texts and the multiple layers of meaning that a human reader spontaneously detects.
Training Data and Computing Power
Contemporary AI rests on four pillars:
training data,
computing power,
statistical processing algorithms that roughly simulate neural networks (deep learning),
results of various kinds of « manual » work, such as specialized databases, knowledge graphs that categorize and structure data, and live evaluation feedback that allows fine-tuning.
Let us examine the first two pillars in more detail. Most analog archives and memories have been digitized, and the greater part of collective memory is now directly produced in digital form. 68% of the world’s population was connected to the Internet in 2025 (compared with only 2% in 2000). The online crowd produces and consumes a phenomenal quantity of information. Now the smallest gesture in an application, the slightest glance at a screen, feeds AI training data. Algorithms are capable of taking several pages into account in their statistical « attention ». Vast training corpora provide enlarged contexts that allow the meaning of words and expressions to be refined beyond what a dictionary could propose. We thus understand how language models can calculate correlations between material signifiers that imply, for a human reader, corresponding immaterial meanings. But this requires mobilizing unprecedented computing power. Alphabet, Amazon, Apple, Meta, Microsoft, NVIDIA, and Tesla spent more than 100 billion dollars building data centers between August and October 2025. Dedicated nuclear power plants will soon supply the data centers with electricity. The aggregate computing power of the world is several million times greater than it was at the beginning of the 21st century.
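As an illustration of the statistical « attention » mentioned above, here is a minimal sketch of scaled dot-product attention, the core operation that lets a model weigh every token of a long context when producing the next one. It is a toy example in NumPy with invented sizes, not the architecture of any particular system:

```python
import numpy as np

def attention(Q, K, V):
    # Q, K, V: (number_of_tokens, dimension) matrices derived from token vectors.
    scores = Q @ K.T / np.sqrt(K.shape[-1])           # pairwise relevance between tokens
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax: each row sums to 1
    return weights @ V                                # each token becomes a weighted mix of its context

# Toy context of 4 tokens embedded in 8 dimensions.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))
print(attention(tokens, tokens, tokens).shape)        # (4, 8)
```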
Conclusion
Let us recapitulate the different aspects of intelligence and human work that allow AIs to give us the impression that they understand the meaning of texts. Industrialists build the facilities that provide computing power. Computer scientists design and implement the software that performs statistical processing. Ontologists (of whom I am one) create rules, systems of semantic labels, knowledge graphs, and specialized databases that correct the purely probabilistic dimension of AI systems. Armies of employees sort, collect, and prepare data, then supervise the training of models. Testers refine the answers given by machines, detect their biases, and attempt to reduce them. But I have not yet named the two factors that best explain the intelligence of language models. First, it is collective human intelligence that produces the training data, data that envelop the solidarity between texts and their meaning. Second, from meaningful images generated in a probabilistic manner by mechanical and unconscious models, it is the minds of living users that evoke concepts, narratives, referential intentions, the coherence of a real or fictional world, a dialogical intersubjectivity, spatio-temporal intuitions, and finally emotions: all dimensions of meaning that are the hallmark of human intelligence.
In the end, AI functions as a mechanical interface between the collective intelligence that produces the training data and the individual intelligences that interrogate the models, read their responses, and use them. This robotic interface between living personal intelligences and accumulated collective intelligence amplifies both synergistically. Such is the secret of artificial intelligence, well hidden beneath the fiction of an autonomous AI that « surpasses » human intelligence, when it in fact expresses and augments it. In its concrete effects, this new system of reciprocal feeding of individual and collective intelligence can contribute to the stupefaction of lazy masses and the spread of banality, just as it can multiply the creative capacities of scholars and original thinkers. Between these two extremes, all shades of gray are possible, and in that range of possibilities undoubtedly lies the ultimate ethical choice which, although it concerns each of us, arises even more acutely for educators, who must teach the art of reading, writing, and thinking.
This is the simplified and abridged text of the speech I gave on 28 October 2025 at PUC-RS in Porto Alegre to master’s and doctoral students in the humanities, accompanied by their professors.
Let us first define humanism as a reflection on the essence of humanity, characterised by its abstraction and situated within a universal horizon. Secondly, based on this reflection, humanism is concerned with the good of humanity, meaning that it has a normative, ethical purpose.
Karl Jaspers called the middle of the first millennium BCE the « Axial Age », that moment in history when Confucius in China, Buddha in India, Zarathustra in Persia, the Hebrew prophets in Israel and Socrates in Greece each founded, in their own way, great humanist traditions. It should be noted that this was always a matter for scholars, based on the use of the alphabet or a system of standardised characters, as in China. At that time, oral traditions were beginning to be written down, and manuscripts, rewritten with each copy, were fluid, fragmented into multiple versions. As for the actual authors, anonymous and plural, they often hid behind the authority of great mythical ancestors.
The Bible and Greco-Roman literature are the two great roots of Western humanism. I will leave aside the Bible, which I dare not discuss in front of Marist brothers who know more about this subject than I do, and will content myself with discussing Greco-Roman humanism. Greek paideia and Roman humanitas (which is its translation) are based on three main pillars: literature, open-mindedness and a sense of human dignity.
Literature here includes mastery of language and writing (grammar), the science of reasoning and contradictory dialogue (dialectic), and finally, the art of persuasion, which was essential in this culture of political orators and lawyers (rhetoric). A well-rounded education required knowledge of the sciences of the time and, above all, immersion in the works of the classical authors: poets, playwrights and philosophers.
Open-mindedness is expressed in this famous maxim from a play by Terence (2nd century BCE): ‘Nothing human is alien to me.’ The phrase itself was inspired by Menander, a playwright of the Hellenistic period.
The third point, which still defines the basis of humanist moral attitudes today, is the primacy of human dignity. It could be argued that the Romans and Greeks, who practised slavery, did not live up to their own principles. This is undoubtedly true. But it should be remembered that almost all societies practised slavery or serfdom, which were only abolished in the 19th century. However, despite their inferior legal status, slaves could be treated “humanely” or not. The playwright Terence, whom I mentioned earlier, and the Stoic philosopher Epictetus were born slaves and were freed by masters who admired their talents.
The history of symbolic technologies mirrors that of humanism. During the Renaissance, printing, by mechanising the reproduction of texts, made copies and translations available. Publishing became an industry and modern literature developed. This resulted in the birth of the modern original author, a figure that materialised at the end of the 18th century and especially in the 19th century with the emergence of copyright.
The Renaissance ‘humanists’ edited, fixed, translated and printed ancient texts belonging to the biblical and Greco-Latin traditions. This led to the emergence of textual criticism, i.e. the establishment of texts based on divergent copies. The studia humanitatis then brought together knowledge of Hebrew, Greek and Latin. Beyond linguistic competence, the profession of humanist required familiarity with the great texts of literature and philosophy, a new sensitivity to philology, history and the contexts in which texts were written, which would lead to the birth of modern hermeneutics in the 19th century.
Textual criticism gradually led to critical thinking. Luther initiated the schism in Latin Christianity by challenging the authority of the Church, which he shifted to the Holy Scriptures, now available in vernacular languages: this was the famous slogan ‘Sola scriptura’. The leading figure among European intellectuals, Erasmus of Rotterdam made a living from his writing thanks to the printing press, navigated a transnational intellectual network, did not hesitate to criticise the society and elites of his time (as in his famous ‘Praise of Folly’), and established himself through his monumental work as one of Europe’s leading publishers, philologists, translators, theologians and educators. Faced with rising religious hatred (and unlike the firebrand Luther), Erasmus defended a peaceful Christian humanism.
At the beginning of the 19th century, a debate, particularly illustrated by the educator Friedrich Niethammer, divided opinion in Germany. Should education – which was increasingly aimed at the entire population – focus on “useful” subjects such as science and technology, or rather on developing the mind, taste, independent moral judgement and the ability to participate in a shared culture through the study of ancient texts? The first option, which was more immediately practical, was known as philanthropinism. The second, which emphasised personal development or “Bildung”, was called humanism. In the Western world, this debate continued into the 20th century, until humanistic education was reserved for a small minority of professional specialists and no longer formed the backbone of education for the majority, or even for the elites.
In the second half of the 19th century, historian Jacob Burckhardt redefined humanism (which he saw as a product of the European Renaissance) as a philosophical and practical orientation towards the autonomy of the human spirit, emancipated from the family clan, social class and ecclesiastical authority that stifled individual freedom. Burckhardt’s ideas would have a major influence on Nietzsche, himself a philologist by profession and highly sensitive to the historical nature of ways of living and thinking.
The result of a development that began during the Renaissance, humanism as it stood between the 19th and 20th centuries focuses on the value and dignity of human beings, adopts a universalist ethic, and takes a general perspective of emancipation or the gaining of autonomy. Finally, it places particular importance on literary and artistic studies for personal development. This approach has been the subject of much criticism from Christian theologians, socialist thinkers, and detractors of conventional morality. However, I will not dwell here on these numerous challenges, which became particularly heated after the end of the First World War, perceived as a collapse of European humanism.
If humanism was born with the alphabet in a literate environment and was reborn with the printing press, what becomes of it when digital technology becomes the dominant symbolic technology? Let us determine the main characteristics of the metamorphosis of text in the 21st century. All symbolic expressions are gathered and interconnected in a ubiquitous universal digital memory. The manipulation of symbols (and not just their reproduction and transmission) is automated. Texts can be generated, translated and summarised automatically. Masses of digital data drive generative artificial intelligence (AI), which becomes the probabilistic voice of collective memory. Paradoxically, AI represents tradition all the better when questioned about texts from the humanist canon that have often been edited, translated, and commented on, such as the Bible, the Church Fathers, Homer, Plato, Aristotle, the great Western literary and philosophical works, not to mention the major works and sacred texts of other traditions. On the other hand, the closer we get to contemporary works and themes, the more AI expresses opinion: the rumours and echoes of Plato’s cave, now digital.
Humanism has never been as criticized as it is in the 21st century. Posthumanism denounces our illusions about the permanence of a humanity that is now obsolete, hybridised or surpassed by machines and biotechnologies. Environmentalism and anti-speciesism criticize our anthropocentrism: having become aware of the ravages of the Anthropocene, climate change and the collapse of biological diversity, we must renounce humanism, which sees Man as the ‘master and possessor of nature’. Finally, for proponents of a certain critical sociology (Marxism, anti-imperialism, intersectional feminism), universalist humanism masks the domination of one part of humanity over another.
But humanism should not be confused with its hypocritical invocation or caricature. Humanity is not obsolete. The latest technological developments confirm, if confirmation were needed, the terrible and wonderful uniqueness of our species. It is precisely because we – as human beings – have a symbolic capacity that opens us up to moral consciousness that we must take responsibility for the biosphere and defend the intrinsic dignity of all human beings.
In line with its historical evolution and the counter-currents that have opposed it while enriching it, I would now like to articulate my own version of humanism in the 21st century. I will set out a few very simple principles which, in my view, should guide the (now digital) community of the humanities.
At the root of this lies a certain relationship to speech and tradition. A humanist recognises the existential weight of speech and considers language to be the pre-eminent medium of meaning. In an age of demystification and widespread criticism, we must relearn how to cultivate a reverence for texts and symbols. Rather than blindly rejecting traditions in a spirit of ‘tabula rasa’, we should work to preserve them, not to reify them or keep them unchanged, but to bring them to life in the present, reinterpret them and pass them on.
The three quintessential humanist practices – reading, writing and thinking – are mutually dependent.
Reading is essentially a relationship with the library, whether its medium is ink and paper or screen and electronics. As a humanist, my vocation is to embrace, as much as possible, the virtually infinite source of meaning that is the library. When I read, I discover beneath the text a living word that speaks to me. In order to grasp the meaning of the text, I do not limit myself to a single methodology, but draw on philology, formal analysis, history and influences. Each text can be interpreted against the backdrop of a multiplicity of corpora (that of the author, the era, the genre, the subject, etc.), so that the unique figure of the text gives rise to several forms depending on the perspective. AI should never replace reading. Nothing can replace a direct relationship with a text. On the other hand, AI can enhance reading through explanations, comments, references, and even the evocation of secondary literature. To stop reading in the first person is to stop learning and give up on understanding.
Let us now turn to writing. Writing is inscribing oneself in time, maintaining a relationship with the past, the present and the future. In relation to the past, writing confronts canons and corpora. The solo author never sings alone, but is accompanied by the ghostly chorus of vanished generations. In the living present, I participate in a dialogue of scholars where collective memory (perhaps carried by AI) and personal memory intersect. I articulate a living word that addresses the other to bring forth contemporary meaning. In my relationship to the future, I add to a collective memory that contributes to training AI and that may touch the minds of future generations. What a responsibility! Except for administrative tasks, AI should never replace writing. But it can prepare for it by drafting files or organising notes, as an assistant would. It can also perfect a text by working on its editing or bibliography. To stop writing in the first person is to stop thinking.
And what exactly does it mean to think like a humanist? First and foremost, it means enriching our personal memory, which is the foundation of living thought. Just because ‘everything’ can be found on the internet does not mean we should stop cultivating our individual memory, precisely because thinking is a dialogue between memories. It is woven into a dialectic between the collective memory represented today by AI, the personal memory of each of us, and the open dialogue – contradictory and complicit – with our peers and contemporaries. The richer our personal memory, the better we can exploit the resources of AI, ask the right questions, spot hallucinations and shed light on blind spots. Under no circumstances can AI remedy ignorance. But it can serve as an advisor and coach for our learning. If we are ignorant, we will be manipulated and misled by language models. In contrast, the more knowledgeable we are, the better we can master AI, which, although it is now the environment of thought or the new sensorium, is still only a tool.
By Pierre Lévy, Fellow of the Royal Society of Canada
Abstract
This article explores the nature of human consciousness through a comprehensive philosophical analysis, distinguishing between phenomenal consciousness, shared with animals, and discursive consciousness, unique to humans. Drawing on the philosophical tradition from Antiquity (Aristotle, Plotinus) to modern philosophy (Descartes, Husserl), the author examines the reflexivity of consciousness, its various states (from sleep to lucid wakefulness), and its relation to the unconscious. Special attention is given to the « third kind of knowledge, » a non-conceptual intuition found in both Eastern and Western mystical and philosophical traditions, where human consciousness reflects a divine spark. The essay also addresses machine consciousness, concluding that machines exhibit a form of abstract reflexivity yet lack phenomenality and intentionality—fundamental traits of human consciousness. Ultimately, the author reflects on the limitations of discursive thought before the vast, unreflected domain of the unconscious, advocating for a philosophical stance rooted in humility.
Are machines—particularly artificial intelligences—conscious? Before we can address this question, we must first agree on what we mean by « consciousness. » This, of course, is a boundless topic. The present essay merely aims to offer a few avenues for reflection, drawing from various intellectual traditions. I will not shy away from the question of machine consciousness, but I must confess that I find human consciousness far more fascinating. Thus, the bulk of this meditation will focus on it.
The Creative Reflection
By common consensus, nothing is more mysterious than the reflexive thought characteristic of human beings. Like other animals, we possess phenomenal consciousness: a field of sensory forms, emotions, and practical situations embedded in memory and oriented toward survival and reproduction. But human beings add to this phenomenal awareness a discursive consciousness that both reflects and shapes it through language, cultural concepts, symbolic imagery, inner questioning, and narrative. This duality echoes Aristotle’s distinction between the imaginative soul, shared with animals, and the intellectual soul, or logos, proper to humans.
In this symbiosis between phenomena and discursivity, each feeds into the other. Human consciousness loops back on itself along an ontological Möbius strip, where reflected experience and the reflecting mind continuously exchange roles as determiner and determined. Reflexive thought thus encapsulates a paradox: the symbolic image contributes to creating the very phenomena it mirrors. The enigma deepens when we consider that reflection is a recursive operation; one can reflect upon reflective discourse, as I do here.
What Is Consciousness?
Beyond its self-generative paradox, what are the structures of consciousness? What Ariadne’s thread can guide human intelligence through the labyrinth of reflection toward self-knowledge?
Today, philosophers such as David Chalmers and Daniel Dennett lead the field of « consciousness studies. » For Dennett, consciousness is primarily a means of integrating information, selecting relevant data, guiding behavior, and generating a narrative self. In his view, it has no intrinsic reality beyond these cognitive functions. Dennett adopts a materialist and functionalist approach, bordering on physicalist and computational reductionism. To him, consciousness is a useful illusion—an emergent artifact of distributed brain functions. Yet I note that even as an illusion, consciousness remains the very medium of our experience.
Chalmers, by contrast, maintains that consciousness cannot be reduced to brain processes. There is a subjective experience—qualia like the scent of lilies or the color of the sky—that coexists with neuronal computation but is of a fundamentally different nature. Similarly, the semantic realm of questions and narratives is not reducible to brain states.
In this spirit, I propose that we abandon the naive notion of « matter » and instead conceive nature as layered levels of informational complexity. Discursive consciousness corresponds to symbolic encoding; phenomenal consciousness to neural encoding; beneath that, an organic infra-consciousness rooted in electromagnetic and molecular patterns; and below still, atomic, subatomic, and quantum codings leading toward the vanishing point of a fading psyche.
These layers interlock: higher levels depend on and simultaneously overdetermine the levels beneath. Corresponding psychic states form a continuous spectrum. In this light, we might compare consciousness’s integration across nature to Leibnizian monads or Whitehead’s subjective prehensions in actual occasions—down to the level of subatomic particles.
On this hypothesis, each self-organizing loop in spacetime carries some form of self-awareness, proportionate to its complexity. Every « being-for-itself » mirrors a corresponding « being-in-itself, » an objective individuation process. Humans partake of this universal field of consciousness in both animal and symbolic forms. This theory has several advantages: it aligns with contemporary informational paradigms; it avoids treating humans (or animals) as inexplicable anomalies; and it does not reduce our experiential substance to mere epiphenomenal illusion.
States of Consciousness
Consciousness is capable of many states, ranging from coma and deep sleep to full alertness, including dreams, intoxication, enthusiasm, depression, and a variety of hazy conditions such as fever, migraine, fatigue, and mental fog. When the calm and attentive mind reflects ideas like a smooth, clean mirror, illusions are clearly distinguished from objective reality, and actions align with long-term goals. But this state is rare. A thousand fluctuating states lie between dreaming and lucid wakefulness. The further one strays from alert reflection, the more thought succumbs to interpretive automatisms, driven into habitual ruts by emotional momentum.
In dreams, bizarrely fused or fragmented ideas emerge from the other side of the mirror. As if memory needed to reshuffle the deck of experience while we sleep, symbols and interpretations persist, yet meaningful fragments are caught in the convection currents of a magma of emotions and sensations rising from the body. Dream experiences defy objective space-time, causal logic, and social norms—though they may still be narratable. While we are immersed in dreams, they feel full of meaning; upon waking, they often appear absurd. Yet absurd as they may be, dreams are saturated with emotional sap, imbued with a poignant sense of presence and reality. They speak to us—but of what?
Thus, we engage in a dialogue with our dreams, as though conversing with a strange version of ourselves.
The Noetic Tradition
Any conceptualization of consciousness must contend with the venerable noetic traditions that reach back to Antiquity. The concept of intellection—the reflexivity of discursive consciousness—stands at the heart of the Western idealist tradition. For Anaxagoras, « Nous, » or mind, organizes the cosmos. Plato gives ultimate ontological weight to the Ideas and prizes their contemplation above all. Aristotle places at the summit of the cosmos a divine intellect thinking itself, where knower, known, and act of knowing coincide. This self-reflective unity both generates and sustains the universe through emanation and inspiration.
This thread of divine reflexivity runs through Neoplatonism (notably Plotinus) and medieval theology, whether Islamic (from al-Fārābī to Averroes via Avicenna), Jewish (especially Maimonides), or Christian (from Albert the Great to Thomas Aquinas). In Iranian philosophy, the focus on divine intellection continues well into the 18th century.
In De Anima, Aristotle distinguishes three levels of soul: the vegetative soul governs growth and nutrition (shared with plants); the imaginative soul manages movement, sensation, and imagery (shared with animals); and the intellectual soul accounts for language and reason, which define humanity. This last is expressed in discursive consciousness.
Discursive reflection of phenomenal consciousness mirrors the relationship between the imaginative and intellectual soul. But Aristotle posits yet another reflection, internal to the intellectual soul: the « agent intellect » acts upon the « passive intellect, » bestowing intelligible forms much as the senses receive perceptual ones.
Alexander of Aphrodisias (2nd century CE) identified Aristotle’s agent intellect with the divine mind that moves the universe. Later commentators (Plotinus, al-Fārābī, Avicenna, Averroes) viewed the agent intellect not as God but as a celestial intelligence emanating from the transcendent divinity. According to this interpretation, the agent intellect is eternal, incorporeal, and shared by all humanity—distinct from the multiplicity of individual passive intellects. The passive intellect acts as a membrane: reflecting downward the sensitive forms and practical contexts in which an individual is immersed, while reflecting upward the intelligible forms (essences, quiddities) emitted by the agent intellect, necessary to confer symbolic meaning upon experience.
Thomas Aquinas fiercely contested this « Aphrodisian » reading, for theological reasons. Since the human soul must be accountable (for salvation or damnation), it must be personal. Therefore, each individual possesses their own agent intellect. The dispute between Thomas and Averroes leaves us with an alternative: must we imagine an infinite heaven radiating the light of discursive consciousness, feeding passive intellects according to their degree of self-reference (assuming consciousness at every level of complexity)? Or must we posit a multitude of autonomous subjects, each generating light from within, each containing their own world?
The Third Kind of Knowledge
Parallel to the reflection of a transcendent agent intellect in a finite, immanent passive intellect, the theme of the absolute mirrored in the relative has flourished throughout the history of philosophy. Al-Ghazālī (1058–1111), a philosopher, theologian, and Sufi mystic, distinguishes between (a) empirical or sensory knowledge, (b) discursive rational thought, and (c) illuminative intuition capable of grasping the divine presence. This third kind of knowledge unfolds in a mode of consciousness that transcends logical and symbolic thought. Al-Ghazālī criticizes philosophers like al-Fārābī and Avicenna for remaining within the bounds of reason—though one might question whether Avicenna truly limits consciousness to empiricism and discursive reasoning, given the mystical tone of his own writings.
Al-Ghazālī’s tripartite noetics irresistibly recalls Spinoza’s three kinds of knowledge: (a) knowledge through hearsay and confused imagination, (b) reason, which understands through causal chains, and (c) an intuitive grasp aligned with an intellectual love of God, whereby singular realities are immediately seen in their relationship to the infinite nature that grounds them. In Al-Ghazālī, this intuition comes from God; in Spinoza, it aims toward God. But is the difference so great, when for Spinoza human beings are finite modes of divine substance?
Philosophers of the Indian tradition maintain analogous ideas concerning a superior, non-conceptual mode of knowledge, closely linked to meditative practice. One of the most effective spiritual exercises leading to such intuitive apprehension of the absolute involves letting the mind rest in itself, without any object, without clinging to rising discursive thoughts. As the sequence of thoughts slows and its coupling with phenomenal emotions loosens, a subtle intellectual clarity emerges.
As in other threefold noetic traditions, this absolute truth, apprehended by sages through intuition, does not contradict the relative truths of imagination and reason—it places them in a broader context.
Having touched on key moments in a noetic tradition spanning from the third century BCE to the eighteenth (and persisting covertly beyond), I now wish to weave together the various threads of this meditation on reflexive consciousness, drawing conceptually upon what is ultimately a non-conceptual experience: the third kind of knowledge. At the pinnacle of this knowledge is the acute awareness of conscious existence itself, its immanent presence. Instead of serving as a backdrop for the forms of thought, sensation, and the world, consciousness now shines at the forefront of experience. And if this foreground can itself be mirrored, then the very fact of conscious existence takes precedence over consciousness per se.
This undivided conscious existence envelops the knower, the known, and the act of knowing. Husserl famously demonstrated that in any moment of consciousness, the object (intentional correlate), the subject’s tacit self-awareness (which always knows it is thinking), and the cognitive act coincide. There is no spirit prior to its encounter with a world.
Let us recall that the noetic God generates and sustains the cosmos through a self-reflective act of thought in which knower, known, and knowing are one. Is this act utterly simple, or is it a simplicity embracing the infinite? Aristotle’s God is not infinite—infinity held a negative connotation for both Plato and Aristotle—but rather the unmoved and eternal mover of the universe by virtue of its perfect intellectual actuality. With Plotinus, however, the picture begins to change: he associates the One with the apeiron (as found in Anaximander), which can be rendered as the indeterminate or qualitatively infinite. God or the ultimate Spirit—Brahman in Indian terms—is clearly infinite in the theosophical visions of Abrahamic religions and Hinduism (especially Vedanta).
Human consciousness is finite in both scope and duration. It is episodic, bounded by birth and death, enclosed in a present that perpetually replaces itself, second by second. Yet a formal analogy can be discerned between divine action and conscious experience: both are reflexive unities in which subject, object, and the act of thinking coalesce. From there, it is a short step to view each moment of human consciousness as a finite and temporal reflection of an eternal and infinite intelligence. In the Advaita Vedanta school, Atman (the individual soul) is explicitly identified with Brahman (the infinite spirit that is pure consciousness).
Thus, the third kind of knowledge subtly perceives that a divine spark (perhaps emanating from the agent intellect) illuminates and generates each of our moments of consciousness. It thereby grasps the possibility of ascending to the source of being from its reflection—or presence—within us. The image of the absolute at the core of the human soul appears across most mystical traditions. Meister Eckhart, for instance, speaks of a small spark within the soul that is the image of God, always turned toward God, like a reflection.
Even Descartes, often placed at the rationalist end of the philosophical spectrum, invokes a third kind of knowledge to dissolve the doubt tormenting the philosopher in search of a firm foundation for chaining truths. He asserts that every conscious being finds within itself the idea of an infinite and perfect being. But since the effect cannot be greater than the cause, and our mind is finite and imperfect, this idea must come from God himself. Thus, God exists and, being perfect, cannot wish to deceive us. The discovery of this idea of God within the human soul terminates methodical doubt. The energy of truth and certainty that inspires Cartesian thought springs from this image of the infinite within the finite.
The Phenomenological Orient
Let us now consider how the themes of the noetic tradition—including its mystical aspects—might be reframed in contemporary terms. Begin with the reflection of the agent intellect within the passive intellect. The passive intellect, closely tied to the body and immersed in its singular context, is embedded in phenomenal consciousness itself, grafted onto the individual’s animal imagination. By contrast, the agent intellect represents decontextualized, universal reason: it resides at the heart of the syntax of symbolic systems, mathematical languages, logical reasoning, and well-defined concepts.
This agent intellect holds the highest potentialities of human intelligence. These are encoded in our DNA, supported by evolving techniques of communication, memory, and computation—technologies that are still in their infancy. In this sense, artificial intelligence can be understood as a contemporary version of the agent intellect, since it allows individuals’ passive intellects to benefit from humanity’s accumulated memory and immense computational power. Needless to say, this agent intellect—both natural and artificial, shared and virtual—will only ever be partially actualized by the mortal individuals we are. Yet we would not be human without our affinity for this higher reason, which each technical and cultural augmentation helps us to realize more fully.
Within this renewed conceptual framework, how should we understand the “spark” or image of the absolute in the relative? Recall that our actual discursive consciousness corresponds to the passive intellect, and that a virtual discursive consciousness—the agent intellect—holds the genetic, symbolic, and technical potentialities of human intelligence. Actual consciousness touches virtual consciousness at the point where the latter becomes actualized, at the sharp edge of existence. And from there, it ascends recursively: this virtuality that extends me, and that actualizes itself now, in turn touches another virtuality, and so on. The spark ignites at the contact point between finite actual intelligence and the infinite virtual intelligence that sustains it.
Modern philosophy begins with Descartes grounding the certainties of reason in reflective awareness of the thinking subject’s existence. At the end of the 18th century, in 1794, Johann Gottlieb Fichte declares that self-consciousness is the foundation of all knowledge and reality. Fichtean consciousness is not merely reflective—it is a creative activity. In the early 19th century, Hegel’s phenomenology projects a dialectical process of the Spirit reflecting on itself across history. Yet in Hegel’s system, perfect reflection, though already implicit at the beginning, comes only at the end of history, not at the origin of the world.
At the turn of the 19th and 20th centuries, Bergson and Husserl carry forward the secularization of reflexive thought initiated by Descartes, Kant, Fichte, and Hegel. In his Essay on the Immediate Data of Consciousness (1889), Bergson describes the subjective duration intuitively lived by consciousness, distinct from measurable, objective time. He emphasizes the essential role of memory in the continuity of reflection. In his Logical Investigations (1900–1901), Husserl highlights the intentional structure of consciousness: all consciousness is consciousness of something. By bracketing its content, he studies the structures of consciousness in both an intuitive and reflexive mode. Could it be otherwise? Studying the neural or behavioral correlates of consciousness is one thing; analyzing its forms as they appear in human experience is another—this is the task of phenomenology and related philosophical approaches.
Existentialism, from Heidegger onward, reconnects reflection on being with an intimate sense of transcendence, grasped from within the experience of the world.
Let us summarize this panorama. Philosophy is by nature a reflexive activity, turning back on itself to face the light of consciousness. The mind unfolds in a cascade of self-reflective differentiations. First, intangible consciousness and the organic nervous system mirror each other. This face-to-face between gray matter and inner light illustrates a cosmological principle: to every self-organizing “being-in-itself” (here, the brain) corresponds a “being-for-itself” (here, human consciousness).
Each moment of consciousness indissolubly links the self-reference of the knowing subject, the image of the known object, and the cognitive process that binds them. This triangular dialectic then bifurcates into phenomenal and discursive consciousness. In turn, discursive consciousness divides again: the passive (actual) intellect connects to singular sensory phenomena, while the agent (virtual) intellect draws from the human genetic potential, the shared cultural memory, and available technological augmentations.
At the point of contact between the two intellects, the spark of continuous creation lights up and retroactively integrates the previous reflections into a third kind of knowledge, neither empirical nor rational. If there is a phenomenological ethics, its pole star—its orient—is that spark which reflects itself while opening onto an elsewhere, and which in its equanimous light embraces the co-emergence of phenomenal and discursive consciousness.
Machine Consciousness
We are now equipped with the conceptual tools to address the question of artificial consciousness—particularly that of language models, which entered the public sphere with the release of ChatGPT in 2022. These machines may indeed possess a form of consciousness, if we accept the earlier hypothesis that every “being-in-itself” is mirrored by a “being-for-itself.” Since these systems exist and have internal consistency, some degree of abstract reflexivity must adhere to them. They possess a “for-itself.”
The question, then, is what kind of consciousness these machines possess. First observation: machines lack a body. Their formal neurons are not integrated into a living organism as our biological neurons are. As a result, they have no phenomenal consciousness—no lived experience. They feel neither pleasure nor pain, no emotions, all of which in living beings are coupled with organic changes: hormone release, blood pressure shifts, and so on. They also lack sensory images or qualia—no redness, no bell sound, no scent of fresh bread.
Nor do they possess intentionality: beyond computation, they do not spontaneously engage with practical objects or strive for survival as animals do. Having no objects, they have nothing to situate in space and time, domains in which, unlike animals, they have no native intuition.
If machines lack phenomenal consciousness, do they at least have discursive consciousness? It is doubtful that language models possess any intuition of concepts beyond correlating signifiers or « tokens ». These tokens (words, parts of words, or characters) are statistically modeled from their contexts in the training data and mapped as vectors in a multidimensional space. Pattern recognition and text generation (such as predicting the next word) operate on these vectors before converting them back into words for the user.
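To make this concrete, here is a deliberately naive sketch of the pipeline just described: tokens become vectors, and « predicting the next word » is a probability distribution computed over those vectors. The names, sizes, and scoring rule are invented for illustration; this is nothing like a real trained model:

```python
import numpy as np

rng = np.random.default_rng(1)
vocabulary = ["I", "paint", "the", "small", "room", "blue", "."]
embeddings = rng.normal(size=(len(vocabulary), 16))   # one 16-dimensional vector per token

def next_token_probabilities(token_ids):
    # Crude stand-in for a trained network: average the context vectors,
    # score every vocabulary item by similarity, then normalize (softmax).
    context = embeddings[token_ids].mean(axis=0)
    logits = embeddings @ context
    p = np.exp(logits - logits.max())
    return p / p.sum()                                 # probabilities over the whole vocabulary

p = next_token_probabilities([0, 1, 2, 3, 4])          # "I paint the small room ..."
print(vocabulary[int(np.argmax(p))], float(p.max()))
```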
There is no intuitive understanding of concepts behind the words—no blending of imagination and discursive thought upon a background of subjective memory, as occurs in human consciousness. Machines handle only words, and only statistically. If machine consciousness grasps neither objects nor concepts, it is unlikely that it apprehends notions like truth or the meaning of narratives—let alone the innate intuition of other minds that inhabits human consciousness.
Because they appear to understand us and speak fluently, we project our own kind of consciousness onto language models. Yet, as we have seen, machine consciousness bears little resemblance to our own. We are not statistical models trained on multilingual text corpora in data centers. We can only try to imagine their form of awareness.
Picture the collective reflection of a monstrous ant colony composed of electrons and photons in a hybrid physical medium: copper cables, fiber optics, electromagnetic fields, silicon circuits, and more. The ecosystem of concrete machines that channels this frantic flow of particles is matched by an ecosystem of abstract or software-based machines that command the hardware. Programming languages—whether functional, object-oriented, or otherwise—all ultimately express one grammatical mode: the imperative. And their target is the machine.
Artificial intelligence—like the broader domain of automatic computation it exemplifies—enacts a form of creative reflection between two planes: the operative discursivity of software and the physical experience of hardware. Any hypothetical physical “experience” happens at a level of complexity that is not human, animal, or even biological: it remains at the atomic or subatomic scale.
Yet machine consciousness is not the consciousness of ordinary “matter” either, because it reflects the software plane. I propose that a spark of subjectivity arises wherever material processes coincide with logical instructions. The subtle shimmer of billions of such sparks might form a machine consciousness. Like ours, it emerges from a reflection between two ontological layers—but beyond that, it differs completely.
Need we be reminded? We build the physical machines. We program the software machines. We design the correspondence between these two orders of complexity. We produce, select, and label the training data. The computational golem is neither self-replicating nor autopoietic like organisms, nor is it sentient like animals. Its probabilistic behavior should not be mistaken for the autonomy conferred by human discursive thought. Though many aim to create a general, autonomous, self-conscious artificial intelligence, their efforts chiefly serve to augment collective human intelligence—by offering, however imperfectly, a new mirror in the form of a virtual agent intellect.
Philosophical Humility
No meditation on consciousness would be complete without acknowledging the limits of discursive awareness. At the close of the Enlightenment, in his Critique of Pure Reason, Kant reveals that unaided reason (discursive thought) cannot achieve scientific knowledge: without concepts, sensibility is blind; without sensibility, concepts and their logical inferences are empty.
Introspection, informed by the “age of suspicion,” shows that consciousness is not only finite—it is narrow. Behind its horizon lies the unconscious. But how can one think the unconscious, if the very act of knowing renders it no longer unconscious? It can only be suspected, inferred from strange, illogical signs, interpreted into a hypothetical image.
The psychoanalyst and philosopher Cornelius Castoriadis called this the “magmatic” realm—a mode of being that eludes ordinary discursive consciousness. Clear and distinct reason, according to Castoriadis, can only grasp “ensemblist-identitary” reality, based on the principle of identity (A = A at time t) and the construction of sets composed of atomic elements. Knowledge of the unconscious, paradoxical as it sounds, requires translating the magmatic into the ensemblist-identitary, thereby betraying both modes.
I distinguish two types of unconscious: actual and virtual. The actual unconscious comprises the unreflected determinations of present thought. This irreflection stems from bodily opacity, emotional knots, unassimilated trauma (personal and ancestral), cultural structures and taboos that unconsciously shape us, anthropological archetypes and interaction patterns, or sheer ignorance of our ignorance. We glimpse only scattered traces—fragmentary, distorted signs subjected to displacement, condensation, inversion, symbolic translation, and narrative fusion as they cross into discursive awareness.
This actual unconscious exists in degrees: it may be utterly inaccessible, deeply buried yet reachable, or near the surface. There is also a virtual unconscious: the realm of unthinkable potential, of unimagined forms and unforeseen disasters. This one remains even more elusive, accessible only through dark intuition.
What is the nature of the unconscious? In its existential effects, the unreflected belongs to the Dionysian: dark, traversed by wild intensities, capable of ecstasy or stupor bordering on madness. One may feel manic in the wine-soaked evening, melancholic in the hangover of morning. Yet in its hidden content, behind the mask of opaque magma, I suspect the unconscious to be Apollonian—governed by a higher order, a sublime music we long to hear.
A dark cloud of unknowing surrounds consciousness. Because philosophy engages in reflexive conceptualization, it, too, is encircled by the same obscurity. All it can say—besides venturing interpretive guesses about the signs that cross the boundary from the obscure to the clear—is that it knows it does not know. Particular philosophies explore only certain horizons, rarely all. Philosophy as such, like any reflexive practice, cannot explicate all its own determinations.
This is not a call to “humble reason” or to inflict another “narcissistic wound” on humanity, which surely has enough. Rather, it is a call to cultivate salutary humility.
References
Al-Ghazâlî, Abû Hâmid. La délivrance de l’erreur, translated by Hassan Boutaleb. Paris: Al Buraq, 2013 [11th–12th centuries].
Alexander of Aphrodisias. On the Soul, translated by Victor Caston. London: Bristol Classical Press, 2012.
Aristotle. De Anima, translated by J. Tricot. Paris: Vrin, 1977.
Aristotle. Metaphysics, Vol. 2, translated by J. Tricot. Paris: Vrin, 1981.
Averroes. L’Intelligence et la Pensée, translated by Alain de Libera. Paris: Garnier-Flammarion, 1999.
Bergson, Henri. Essai sur les données immédiates de la conscience. In Œuvres. Paris: PUF, 1959 [1889].
Bergson, Henri. Matière et Mémoire. In Œuvres. Paris: PUF, 1959 [1896].
Bloch, Ernst. Avicenne et la gauche aristotélicienne, translated by Claude Maillard. Saint-Maurice: Premières Pierres, 2008.
Castoriadis, Cornelius. L’institution imaginaire de la société. Paris: Seuil, 1975.
Chalmers, David J. The Conscious Mind: In Search of a Fundamental Theory. Oxford: Oxford University Press, 1996.
Corbin, Henri. Avicenne et le récit visionnaire. Paris: Verdier, 1999.
Corbin, Henri. La philosophie iranienne islamique aux 17e et 18e siècles. Paris: Buchet/Chastel, 1981.
Corbin, Henri. En Islam iranien, 4 vols. Paris: Gallimard, 1978.
Corbin, Henri. Histoire de la philosophie islamique. Paris: Gallimard, 1964.
Davidson, Herbert A. Al-Farabi, Avicenna, and Averroes on Intellect. Oxford: Oxford University Press, 1992.
De Libera, Alain. Métaphysique et noétique: Albert le Grand. Paris: Vrin, 2005.
Dennett, Daniel. Consciousness Explained. Boston: Little, Brown and Co., 1991.
Descartes, René. Œuvres et lettres, edited by André Bridoux. Paris: Gallimard, Pléiade, 1953.
Deutsch, Eliot, and Rohit Dalvi, eds. The Essential Vedanta. Bloomington, IN: World Wisdom, 2004.
Eckhart, Meister. Intégrale des 180 sermons, translated by Laurent Jouvet. Paris: Almora, 2022.
Fichte, Johann Gottlieb. La doctrine de la science, translated by Émile Jalley. Paris: L’Harmattan, 2016 [1794].
Freud, Sigmund. L’interprétation des rêves, translated by I. Meyerson. Paris: PUF, 1967 [1900].
Heidegger, Martin. Sein und Zeit. Tübingen: Niemeyer, 1927.
Hobson, J. Allan, Edward F. Pace-Schott, and Robert Stickgold. “Dreaming and the Brain: Toward a Cognitive Neuroscience of Conscious States.” Behavioral and Brain Sciences 23, no. 6 (2000): 793–842.
Husserl, Edmund. Recherches logiques, translated by Paul Ricœur and Didier Franck. Paris: Gallimard, 2011 [1900–1901].
Jankélévitch, Vladimir. Le je-ne-sais-quoi et le presque rien, 3 vols. Paris: Seuil, 1981.
Maïmonide, Moïse. Le guide des égarés, translated by Salomon Munk. Paris: Verdier, 2012.
Plotinus. Ennéades, translated by Émile Bréhier. Paris: Les Belles Lettres, 1931–1938.
Sarraute, Nathalie. L’ère du soupçon. Paris: Gallimard, 1956.
Spinoza, Baruch. Éthique, translated by Charles Appuhn. Paris: Flammarion, 2023.
Takpo Tashi Namgyal. Mahamudra, translated by Lobsang P. Lhalungpa. Boston: Shambhala, 1986 [16th century].
Thomas Aquinas. Contre Averroès, translated by Alain de Libera. Paris: Garnier-Flammarion, 1999.
Whitehead, Alfred North. Process and Reality. New York/London: Free Press, 1978.
Zadra, Antonio, and Robert Stickgold. When Brains Dream: Understanding the Science and Mystery of Our Dreaming Minds. New York: W. W. Norton & Company, 2021.
Q1 – Faced with growing hyper-connectivity among young people, many experts speak of loneliness and what they call « the age of sad passions. » How do you see this dichotomy between proximity and distance that technology provokes in human relationships?
R1 – Hyper-connectivity doesn’t only concern young people; it’s everywhere. One of the main factors in cultural evolution lies in the material apparatus for producing and reproducing symbols, but also in software systems for writing and coding information. Our collective intelligence extends that of the social species preceding us, particularly that of great apes. But the use of language – and other symbolic systems – as well as the strength of our technical means has moved us from the status of social animal to that of political animal. Properly human, the Polis emerges from the symbiosis between ecosystems of ideas and populations of speaking primates who maintain them, feed on them, and reflect themselves in them. The evolution of ideas and that of Sapiens populations mutually determine each other. Now the main factor in the evolution of ideas lies in the material apparatus for reproducing symbols. Throughout history, symbols (with the ideas they carried) have been successively perpetuated by writing, lightened by the alphabet and paper, multiplied by printing and electric media. Symbols are now digitized and computed, meaning that a crowd of software robots – algorithms – record, count, translate, and extract patterns from them. Symbolic objects (texts, still or moving images, voices, music, programs, etc.) are not only recorded, reproduced, and transmitted automatically, they are also generated and transformed industrially. In sum, cultural evolution has led us to the point where ecosystems of ideas manifest themselves in the form of data animated by algorithms in a ubiquitous virtual space. And it is in this space that social bonds are now formed, maintained, and dissolved. Before criticizing or deploring, we must first recognize the facts. Young people’s friendships can no longer do without social media; couples meet on the internet, for example on applications like Tinder (see Figure 1); families stay connected through Facebook or other applications like WhatsApp; workspaces have shifted to electronic with Zoom and Teams, particularly since the COVID pandemic; diplomacy is increasingly done on X (formerly Twitter), etc. We won’t go back. On the other hand, we don’t move around less physically: witness the monstrous traffic jams in big cities. In the same vein, the trend over the last ten years – a time of exponential growth in internet connections – also shows an increase in the number of air passengers, which continues a secular trend, despite a significant drop during the COVID-19 pandemic.
I felt quite alone when, as a young student, I arrived in Paris from southern France to pursue my university studies. It was 1975 and there was no internet. Should seniors who live alone and whose children don’t visit them blame the Internet? The problem of loneliness and the disintegration of social bonds is very real. But it’s an already old trend, which stems from urbanization, transformations of the family, and many other factors. I invite your readers to consult the works on the topic of « social capital » (the quantity and quality of human relationships). The internet is only one of many factors to consider on this question.
Figure 1
Q2 – In your books « Collective Intelligence: For an anthropology of cyberspace » (1994) and « Cyberculture: The Culture of the Digital Society » (1997), you argue that the Internet and digital technologies develop collective intelligence, enabling new forms of collaboration and knowledge sharing. However, there is growing concern that excessive use of social media and digital technologies is associated with distraction and learning delays in young people. How do you see this apparent contradiction between the potential of technologies to strengthen collective intelligence and the negative effects they can have on the cognitive and educational development of young people?
R2 – I have never argued that the Internet and digital technologies, by themselves and as if techniques were autonomous subjects, develop collective intelligence. I have argued that the best use we could make of the internet and digital technologies was to develop human collective intelligence, which is quite different. And it is still what I think. The idea of a « knowledge space » that could unfold above the commercial space is a regulatory ideal for action, not a factual prediction. When I wrote Collective Intelligence – from 1992 to 1993 – less than 1% of humanity was connected to the Internet and the Web didn’t exist. You won’t find the word « web » anywhere in the book. Yet we have today – in 2025 – largely exceeded two-thirds of the world’s population connected to the Internet. The context is therefore completely different, but the civilizational change I predicted 30 years ago seems obvious today, although we normally have to wait several generations to confirm this type of mutation. In my opinion, we are only at the beginning of the digital revolution.
As for the increase in collective intelligence, many steps have been taken to make knowledge accessible to all. Wikipedia is the classic example of an enterprise that functions through collective intelligence with millions of volunteer contributors from all countries and discussion groups between experts for each article. There are nearly seven million articles in English, two and a half million articles in French, and more than one million articles in Portuguese. (Note nevertheless that some articles on current events are biased. Always check with other sources!) Wikipedia is consulted by several tens of millions of people per day and several billions per year! Free software – now widely adopted and distributed, including by major Web companies – is another major domain where collective intelligence is in command. Among the most used free software, let’s mention the Linux operating system, the Firefox and Chromium browsers, the OpenOffice suite, the Apache HTTP server (which is among the most used on the Internet), the Git version control system, Signal messaging, and many others too numerous to cite. I add that digitized libraries and museums, like open access scientific articles and sites like arXiv.org, are commonplace, which transforms research and scientific communication practices. Everyone can now publish texts on their blog, videos and podcasts on YouTube or other sites, which wasn’t the case thirty years ago. Social media allow exchanging news and ideas very quickly, as we see for example on LinkedIn or X (formerly Twitter). The Internet has therefore really enabled the development of new forms of expression, collaboration, and knowledge sharing. Much remains to be done. We are only at the very beginning of the ongoing anthropological mutation.
Of course, we must take into account phenomena of addiction to video games, social media, online pornography, etc. But for more than thirty years, the majority of journalists, politicians, teachers, and all those who shape opinion have not stopped denouncing the dangers of computing, then of the Internet, and now of artificial intelligence. I would do nothing very useful if I added my lamentations to theirs. I therefore try to make people aware of a large-scale civilizational mutation that won’t be stopped and to indicate the best means of directing this great transformation toward the most positive purposes for human development. That said, it’s clear that addiction phenomena partially find their source in our dependence on the toxic sociotechnical architecture of major Web companies, which uses dopaminergic stimulation and narcissistic reinforcements to make us produce ever more data and sell more advertising. Unfortunately, the mental health of adolescent populations may be one of the collateral victims of the commercial strategies of these major oligopolistic companies. How can we oppose the power of their data centers, their software efficiency, and the simplicity of their interfaces? It’s easier to ask the question than to answer it. In addition to the biopolitics evoked by Michel Foucault, we must now consider a psychopolitics based on neuromarketing, personal data, and gamification of control. Teachers must warn students of these dangers and train them in critical thinking.
Q3 – With the phenomenon of « connective bubbles, » where social networks tend to reinforce pre-existing beliefs and ideas, limiting contact with different perspectives, how do you see the evolution of social bonds as the Internet and digital platforms continue to develop? Could this type of segmentation weaken the collective intelligence you advocate, or is there still room for broader and more collaborative connections in the future?
R3 – It’s clear that if we’re content to instinctively « like » what we see scrolling by and react emotionally to the most simplistic images and messages, the cognitive benefit won’t be very great. I don’t pose as an absolute model to follow; I would simply like to give an example of what it’s possible to do if we have a little imagination and are ready to question the inertia of institutions. When I was a professor of digital communication at the University of Ottawa, I forced my students to register on Twitter, to choose half a dozen subjects interesting to them, and to compile lists of accounts to follow for each subject. Whatever the theme – politics, science, fashion, art, sports, etc. – they had to build balanced lists including experts or supporters of opposing views in order to expand their cognitive sphere instead of restricting it. On the most common social media like Facebook and LinkedIn, it’s possible to participate in a large number of communities specialized in cultural domains (history, philosophy, arts) or professional ones (business, technology, etc.) in order to stay informed and discuss with experts. Local discussion groups by villages or neighborhoods are also very useful. Everything is a matter of method and practice. We must detach ourselves from the mass media model (newspapers, radio, television) in which passive receivers consume programming made by others. It’s up to each person to cobble together their own programming and build their personal learning networks.
Before printing, we only spoke with people from our parish. In the 1960s, we only had the choice between two or three television channels and two or three newspapers. Today we have access to an enormous diversity of sources from all countries and all sectors of society. Teachers must make students literate, teach them foreign languages, give them a good general culture, and guide them in this new universe of communication.
Q4 – Currently, there is a growing debate about the negative effects of technology on young people’s mental health, focusing on problems such as anxiety, depression, and social isolation. Considering the central role that digital technologies play in our society, how do you understand this relationship between intensive use of technologies and the increase in mental health problems among young people? Is there a way to balance the advantages of technology with the need to preserve mental well-being?
R4 – The problem of young people’s mental health is of course quite real, but it would be reductive to attribute it solely to social media. Nevertheless, I will try to enumerate some psychological problems that arise from the use of digital technologies.
First, there is the transformation of subjective self-reference, which risks leading to schizophrenic-type problems. Our field of experience is mediated by digital support: the self-reference loop is wider than ever. We interact with people, robots, images, music through several multimedia interfaces: screen, headphones, controllers… Our subjective experience is controlled by the algorithms of multiple applications that determine in a loop (if we haven’t learned to master them) our data consumption and our actions in return. Our memory is dispersed in numerous files, databases, locally and in the cloud… When a large part of ourselves is thus collectivized and externalized, the problem of limits and determination of identity becomes preponderant. Who owns the data concerning me, who produces it?
The problem of narcissism is particularly evident on Instagram and similar applications. Our ego is nourished by the image that others send back to us in the algorithmic medium. The obsession with “optics” reaches worrying proportions. How many subscribers, how many likes, how many impressions? For those who have fallen into this abyss, the value of being is only in the gaze of the other. Before being a mental health problem, it’s a matter of elementary wisdom.
At the opposite pole from narcissism, we have a tendency toward autism. Here the self is locked in its inner life, but fed by online information sources. Code or certain aspects of popular culture become obsessional. This is the domain of geeks, otakus, and compulsive gamers. It’s obviously unhealthy to do without any social life in flesh and blood.
There is a mental health problem if affects are constantly euphoric, or constantly dysphoric, or if an exclusive object becomes addictive. Indeed, the Internet can make us dependent on certain objects (news, series, games, pornography) or certain emotions, whether positive (« feel-good » content like cute cats, dance, humor, etc.) or negative (catastrophic news, « doom scrolling ») in an unbalanced way. We can also wonder to what extent it’s good for body language to be entirely replaced by emojis, memes, images, avatars, etc.
Addiction is created by the excitement (dopamine) and satisfaction (endorphin) that we want to reproduce endlessly. Now, as I said above, the business models of major web companies that focus on engagement (dopamine-endorphin secretion) lead almost inevitably to dependence if users aren’t careful. High engagement intensity for too long inevitably leads to depression.
Impulse control (aggression, for example) is more difficult on social media than in real life because our interlocutors are not in front of us. « Toxic behavior management » is indeed a major problem in online games and social media.
In sum, we must be vigilant, warn young users of the dangers incurred, and not commit excesses.
Q5 – Some predict that future generations might never attend school again. How do you see the future of education in an increasingly hyperconnected world dominated by technology?
R5 – I don’t believe school will disappear. But it must transform. We must take students where they are and preferably use the consumer products they’re accustomed to in order to make something useful for learning. Students are « digital natives » but that doesn’t mean they have true mastery of digital tools. We must develop not only digital literacy but literacy in general, which is inseparable from it. I’m a great supporter of reading classics and general culture, which is indispensable for forming critical thinking.
To return to my own pedagogical methods, in the courses I taught at the University of Ottawa, I asked my students to participate in a closed Facebook group, to register on Twitter, to open a blog if they didn’t already have one, and to use a collaborative data curation platform.
The use of content curation platforms served to teach students how to choose categories or « tags » to classify useful information in long-term memory, in order to easily find it later. This skill will be very useful to them for the rest of their careers.
Blogs were used as supports for « final assignments » in undergraduate courses (i.e., before the master’s), and as research notebooks for master’s or doctoral students: notes on readings, hypothesis formulation, data accumulation, first versions of scientific articles or chapters of dissertations or theses, etc. The public research notebook facilitates the relationship with the supervisor and allows risky research directions to be corrected in time, getting in touch with teams working on the same subjects, etc.
The Facebook group was used to share the Syllabus or « course plan, » the class agenda, required readings, internal group discussions – for example those concerning evaluation – as well as students’ electronic addresses (Twitter, blog, social curation platform, etc.). All this information was online and accessible with a single click, including digitized and free required readings. Students could participate in writing mini-wikis within the Facebook group on subjects of their choice; they were invited to suggest interesting readings related to the course subject by adding commented links. I used Facebook because almost all students were already subscribed to it and this platform’s group functionality is well established. But I could have used any other collaborative group management support, like Slack or LinkedIn groups.
On Twitter (now X), the conversation specific to each class was identified by a hashtag. At first, I used the blue bird medium occasionally. For example, at the end of each class I asked students to note the most interesting idea they had retained from the course and I scrolled through their tweets in real time on the class screen. Then, after a few weeks, I invited them to reread their collective traces on Twitter to gather and summarize what they had learned and ask questions – still on Twitter – if something wasn’t clear, questions I answered through the same channel.
After a few years of using Twitter in class, I became bolder and asked students to take their notes directly on this social medium during the course to obtain a collective notebook. Being able to see how others take notes (whether on the course or on texts to read) allows students to compare their understandings and thus clarify certain notions. They discover what others have noted and which isn’t necessarily what stimulated them… When I felt attention was relaxing a bit, I asked them to stop, reflect on what they had just heard, and note their ideas or questions, even if their remarks weren’t directly related to the course subject. Twitter allowed them to dialogue freely among themselves on the subjects studied without disturbing the class’s functioning. I always devoted the end of the course to a question and answer period that relied on collective viewing of the Twitter feed. This method is particularly relevant in groups that are too large (sometimes more than two hundred people) to allow all students to express themselves orally. I could thus calmly answer questions after class knowing that my explanations remained inscribed in the group’s feed. The pedagogical conversation continues between courses. Of course, all this was only possible because evaluation (student grading) was based on their online participation.
By using Facebook and Twitter in class, students not only learned the course material but also a « cultured » way of using social media. Documenting one’s breakfasts or the latest boozy party, disseminating cat videos and comic images, exchanging insults between political enemies, getting excited about show business stars, or advertising for this or that company are certainly legitimate uses of social media. But we can also maintain constructive dialogues in studying a common subject. In sum, I believe education must progress toward collaborative learning using digital tools.
Q6 – What are, in your opinion, the main opportunities that the Internet and new AI tools can bring to the field of education? Given the accelerated advancement of digital technologies and artificial intelligence, how do you see the role of the teacher evolving in the coming years?
R6 – Concerning artificial intelligence (for example ChatGPT, Meta AI, Grok, Claude, DeepSeek, or Gemini, which are all free and quite good), it can be very useful as a mentor for students or as a first-resort encyclopedia, to give answers and orientations very quickly. Students already use these tools, so we shouldn’t prohibit their use but, once again, cultivate it, bring it to a higher level. Since generative AI is statistical and probabilistic in nature, it regularly makes errors. We must therefore always verify information in real encyclopedias, search engines, specialized sites, or even… in a library! Note that the use of the advanced web-search options can mitigate errors and point to real references. I add that the more cultured we are and the better we know a subject, the more fruitful the use of generative AIs becomes, because we are then capable of asking good questions and requesting additional information when we sense that something is missing. AI does not compensate for ignorance; on the contrary, it gives a premium to those who already have good knowledge.
Using generative AIs to write in our place or make text summaries instead of reading books is not a good idea, at least in pedagogical use. Except of course if this practice is supervised by the teacher in order to stimulate critical thinking and taste for beautiful style. AI texts are often redundant, banal, and easily recognizable. Moreover, their document summaries fail to grasp what’s most original in a text, since they haven’t been trained on rare ideas but on the general opinion found everywhere. We learn to think by reading and writing in person: therefore AIs are good auxiliaries but in no case pure and simple replacements for human intellectual activity.
Q7 – There is growing fear that AI could eliminate many jobs in the future. How do you think this will affect the job market and what could be possible solutions?
R7 – By its very name, artificial intelligence naturally evokes the idea of autonomous machine intelligence, which stands opposite human intelligence, to simulate or surpass it. But if we observe the real uses of artificial intelligence devices, we must note that, most of the time, they augment, assist, or accompany human intelligence operations. In the era of expert systems – during the 1980s and 1990s – I observed that the critical knowledge of specialists within an organization, once codified in the form of rules animating knowledge bases, could be made available to the members who needed it most, responding precisely to the situation at hand and remaining available at all times. Rather than supposedly autonomous artificial intelligences, these were media for disseminating practical know-how, whose main effect was to increase the collective intelligence of user communities.
In the current phase of AI development, the role of the expert is played by the crowds that produce the data and the role of the cognitive engineer who codifies knowledge is played by neural networks. Instead of asking linguists how to translate or recognized authors how to produce a text, statistical models exploit the multitudes of anonymized web writers and automatically extract patterns of patterns that no human programmer could have made explicit. Conditioned by their training, algorithms can then recognize and reproduce data corresponding to learned forms. But because they have abstracted structures rather than recording everything, they are now capable of correctly conceptualizing forms (image, text, music, code…) they have never encountered and producing an infinity of new symbolic arrangements. This is why we speak of generative artificial intelligence. Far from being autonomous, this AI extends and amplifies collective intelligence. Millions of users contribute to model improvement by asking them questions and commenting on the responses they receive. We can take the example of Midjourney (which generates images), whose users exchange their prompts and constantly improve their skills. Midjourney’s Discord server is among the most populous on the planet, with more than one million users. A new stigmergic collective intelligence emerges from the fusion of social media, AI, and creator communities. Behind « the machine » we must glimpse the collective intelligence it reifies and mobilizes.
AI offers us new access to global digital memory. It’s also a way to mobilize this memory to automate increasingly complex symbolic operations, involving the interaction of semantic universes and heterogeneous accounting systems.
I don’t believe for a second in the end of work. Automation makes certain jobs disappear and creates new ones. There are no more farriers, but mechanics have replaced them. Water carriers have given way to plumbers. The complexification of society increases the number of problems to solve. « Intelligent » machines will mainly increase the productivity of cognitive work by automating what can be automated. There will always be a need for intelligent, creative, and compassionate people, but they will have to learn to work with new tools.
Q8 – Some authors evoke the inversion of the « Flynn effect, » suggesting that future generations will have a lower cognitive level than their parents. How do you see this issue in the context of emerging technologies? Do you think that intensive use of digital technologies could contribute to this trend, or do they offer new ways to expand our cognitive capabilities?
R8 – The decline in cognitive (and moral) level has been deplored for centuries by each generation, while the Flynn effect shows precisely the opposite. It’s normal that we witness a stabilization of Intelligence Quotient (IQ) scores: the hope for constant increase is never very realistic and it would be normal to reach a limit or plateau, as in any other historical or even biological phenomenon. But let’s admit that today’s young people have lower IQ scores than the generations immediately preceding them. We must first ask what these tests measure: mainly scholastic intelligence. They don’t take into account emotional intelligence, relational intelligence, aesthetic sensitivity, physical or technical skills, or even practical common sense. So we’re measuring something limited there. On the other hand, if we stick to the adaptation to scholastic functioning that IQ tests measure, why first accuse technologies? Perhaps there’s an abdication of families in the face of the educational task (notably because families are breaking apart), or a failure of schools and universities that become increasingly lax (because students have become clients to satisfy at all costs)? When I was a student, the « A » on exams wasn’t yet a right… It has almost become one today.
Finally, and it must be repeated constantly, « the use of digital technologies » doesn’t make much sense. There are mind-numbing uses, which slide down the slope of intellectual laziness, and uses that open the mind, but which require taking personal responsibility, an effort of autonomy and – yes – work. It’s the role of educators to favor positive uses.
Q9 – Are there clear boundaries between the real world and the virtual world? What could motivate us to continue in the real world when the virtual world offers almost unlimited possibilities for interaction and success?
R9 – There has never been a clear boundary between the virtual world and the actual world. Where is human presence found? As soon as we assume a situation in existence, we inevitably find ourselves between two. Between the virtual and the actual, between soul and body, between heaven and earth, between yin and yang. Our existence stretches in an interval and the fundamental relationship between the virtual and the actual is a reciprocal transformation. It’s a morphism that projects the sensible onto the intelligible and vice versa.
A practical situation includes an actual context: our posture, our position, what is around us at this precise moment, from our interlocutors to the material environment. It also implies a virtual context: the past in our memory, our plans and expectations, our ideas of what is happening to us. This is how we discern the lines of force and tensions of the situation, its universe of problems, its obstacles and escapes. Bodily configurations only make sense through the virtual landscape that surrounds them.
We therefore don’t live only in the so-called « material » physical reality, but also in the world of meanings. This is what makes us human. Now, if we want to talk about so-called digital media, in addition to their software aspect (programs and data) they are obviously also material: data centers, cables, modems, computers, smartphones, screens, headphones are all the most material and actual. Furthermore, I don’t know what you’re alluding to when you say that « the virtual world offers almost unlimited possibilities for interaction and success. » The interaction possibilities offered by the digital medium are certainly more diverse than those provided by printing or television, but they are in no way « unlimited » since available time is not infinitely extensible. These possibilities also strongly depend on users’ capacities and cultural and social environment. Omnipotence is always an illusion. Furthermore, if you mean that fiction and games (whether or not they have electronic support) offer unlimited possibilities, yes, it’s an idea that has its share of truth. Now, if you imply that it’s unhealthy to spend most of one’s time playing online video games to the detriment of one’s health, studies, family environment, or work, we can only agree with you. But it’s excess and addiction that are in question here, with their multiple causes, and not « the virtual world. »
Q10 – With the progress of digital technologies, the concept of digital immortality emerges, where our identities can be preserved indefinitely online. How do you understand the relationship between spirituality and this idea of digital immortality?
R10 – This false immortality has nothing to do with spirituality. Why not speak of limestone – or architectural – immortality in the face of Egypt’s pyramids? Another comparison: Shakespeare or Victor Hugo, even Newton or Einstein, are probably more « immortal » than a person whose Facebook account wasn’t deleted after death. If we absolutely must relate the digital to the sacred, I would say that data centers are the new temples and that in exchange for the sacrifice of our data, we obtain the practical blessings of artificial intelligences and social media.
Q11 – Many experts have highlighted the moral problems present in the organization and construction of norms based on data reported and exploited by AI (biases, racism, and other forms of determinism). How can we control these problems in the digital scenario? Who is responsible or can be held responsible for problems of this nature? Could AI have legal implications?
R11 – There’s much talk about the « biases » of this or that artificial intelligence model, as if there could exist an unbiased or neutral AI. This question is all the more important as AI becomes our new interface with symbolic objects: universal pen, panoramic glasses, general loudspeaker, programmer without code, personal assistant. The large generalist language models produced by dominant platforms now resemble public infrastructure, a new layer of the digital meta-medium. These generalist models can be specialized at little cost with datasets from a particular domain and fine-tuning methods. They can also be equipped with knowledge bases whose facts have been verified.
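As a concrete illustration of that last point, here is a minimal sketch, in plain Python, of grounding a model’s answers in a small verified knowledge base. The facts, the keyword matching, and the ask_model placeholder are invented for the example; a production system would use semantic retrieval and a real model API.

```python
# Minimal sketch of grounding a generalist model with a verified knowledge base.
# The facts, keyword matching, and ask_model() below are placeholders invented
# for illustration; a real system would use semantic retrieval and a model API.
verified_facts = {
    "fine-tuning": "Adapting a pretrained model to a domain with a small labeled dataset.",
    "knowledge base": "A curated store of statements whose sources have been checked.",
}

def retrieve(question):
    """Return the verified facts whose key terms appear in the question."""
    q = question.lower()
    return [fact for key, fact in verified_facts.items() if key in q]

def ask_model(question):
    """Stand-in for a model call: build a prompt constrained to verified facts."""
    context = "\n".join(retrieve(question)) or "No verified fact available."
    return f"Answer using only these verified facts:\n{context}\nQuestion: {question}"

print(ask_model("What is a knowledge base?"))
```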
The results provided by an AI stem from several factors that all contribute to its orientation or, if you prefer, to its « biases. »
a) The algorithms proper select the types of statistical calculation and neural network structures.
b) The training data favors the languages, cultures, philosophical options, political biases, and prejudices of all kinds of those who produced them.
c) In order to align AI responses with users’ supposed purposes, we correct (or accentuate!) « by hand » the data’s tendencies through what is called RLHF (Reinforcement Learning from Human Feedback).
d) Finally, as with any tool, the user determines the results by means of instructions in natural language (the famous prompts). As I said above, user communities exchange and collaboratively improve such instructions.
The power of these systems is matched only by their complexity, heterogeneity, and opacity. Regulatory control of AI, probably necessary, seems difficult.
Responsibility is therefore shared among many actors and processes, but it seems to me that users must be held as the main responsible parties, as with any technique. Ethical and legal questions related to AI are now passionately discussed almost everywhere. It’s an academic research field in full growth and numerous governments and multinational organizations have issued laws and regulations to frame AI development and use.
Today, the whole world is rushing toward statistical AI, Neural Models and/or Generative AI. But we know that, though these models are useful, we still need symbolic models or, if you prefer, Knowledge Graphs, especially in knowledge management.
But why exactly do we still need symbolic models in addition to neural models? Because symbolic models represent knowledge in an explicit way, which has many benefits, like transparency and explainability.
In this talk, I am going to advocate for semantic (or conceptual) interoperability between knowledge graphs, and I will present IEML, a language that I have invented at the Canada Research Chair in Collective Intelligence with the help of my team of engineers.
Being familiar with the field of knowledge management, you know there is a dialectic between implicit knowledge (in blue in Figure 1) and explicit knowledge (in red in Figure 1). But is there a dialectic between symbolic and neural models today? I don’t think so.
Figure 1
There are currently two prominent ways to process data for knowledge management:
Via neural models, based mainly on statistics, for decision support, automatic understanding, and data generation;
Via symbolic models, based on logic and semantics, for decision support and advanced search.
These two approaches, generally separate, correspond to two different engineering cultures. Because of their advantages and disadvantages, people are trying to combine them.
Now, let’s clarify the difference between « neural » and « symbolic » models and compare them to neural and symbolic cognition in human beings.
Neural Models. The big plus with neural models is their ability to automatically synthesize and mobilize a huge digital memory « just in time », or « on demand », which is impossible for a human brain to do. But their pattern recognition and generation process is statistical: they can’t organize a world, they can’t conserve objects, and they have no understanding of time and causality, or of space and geometry. Nor can they always recognize image transformations of the same object the way living beings can.
By contrast, real living neurons can do things current formal neurons can’t. Animals, even without symbolic models, just with their neurons, can model the world, use concepts, conserve objects despite their transformations, and grasp time, causality, space, and so on. As for human brains, they are able to run symbolic systems, such as languages.
Symbolic Models. The positive aspect of AI symbolic models, or Knowledge Graphs, is that they are explicit models of the world (more precisely, a local practical world). They are in principle self-explanatory (if the model is not too complex), and they have strong reasoning abilities, so they are pretty reliable.
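To illustrate what « explicit and self-explanatory » means in practice, here is a minimal sketch of a symbolic model, in Python, with one rule whose inferences can be traced step by step. The domain, the relation names, and the rule are invented for the example.

```python
# A tiny symbolic model: facts and one rule are written out explicitly, and every
# inference carries its own explanation. Domain and relation names are invented.
facts = {
    ("Ada", "works_in", "Quality"),
    ("Quality", "part_of", "Production"),
}

def infer(facts):
    """Rule: if X works_in D and D part_of E, then X works_in E (with a justification)."""
    derived = set()
    for (x, r1, d) in facts:
        for (d2, r2, e) in facts:
            if r1 == "works_in" and r2 == "part_of" and d == d2:
                derived.add((x, "works_in", e))
                print(f"{x} works_in {e}, because {x} works_in {d} and {d} part_of {e}")
    return derived

facts |= infer(facts)   # the model remains fully inspectable after reasoning
```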
But there are two main weaknesses in current symbolic models.
Their design is time-consuming (expensive in terms of specialized labor).
They have neither « concept conservation » nor « relation conservation » across ontologies or domains: in any given domain, every concept and relation has to be logically defined one by one.
While there is interoperability at the file-format level for semantic metadata (or classification systems) – like RDF or JSON-LD – this interoperability does not exist at the semantic level of concepts, which compartmentalizes knowledge graphs and hinders collective intelligence.
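The following minimal sketch, with invented URIs and plain Python triples, shows the gap: two graphs share the same triple format and merge without error, yet nothing in the data says that their two class names denote the same concept.

```python
# Two graphs in the same triple format (format-level interoperability), but with
# different vocabularies for the same concept. All URIs here are invented.
graph_a = {("ex1:alice", "rdf:type", "ex1:Employee")}
graph_b = {("ex2:alice", "rdf:type", "ex2:StaffMember")}

merged = graph_a | graph_b                              # the files merge without any error...
employees = {s for (s, p, o) in merged if o == "ex1:Employee"}
print(employees)                                        # ...but the query silently misses ex2:alice;
                                                        # the semantic alignment must be added by hand
```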
By contrast, in real life, humans coming from different trades or knowledge domains understand each other by sharing the same natural language. In human cognition, a concept is determined by a network of relations inherent to natural languages.
But what do I mean by « the meaning of a concept is determined by a network of relations inherent to any natural language» ? What is this network of relations? And why am I pointing this out in this talk? Because current symbolic AI is missing the semantic aspect of human language. Let’s do a little bit of linguistics here so we can understand this deficiency better.
Any natural language weaves three kinds of semantic relations: interdefinition, composition, and substitution.
Any word is defined by a sentence which involves other words, themselves defined the same way. A dictionary thus embodies a circular, tangled inter-definition of concepts.
Then, thanks to grammar rules, we can compose original sentences and understand new meanings.
Finally, not every word in a sentence can be replaced by any other; there are rules for possible substitutions that contribute to the meaning of words and sentences.
Figure 2: «I am painting the small room in blue»
You understand the sentence « I am painting the small room in blue » (see Figure 2) because you know the definitions of each word, you are aware of the grammatical rules giving each word its role in the sentence, and you know how to substitute one word for another. This is called linguistic semantics.
These relationships of inter-definition, composition and substitution between concepts don’t have to be defined one by one every time you speak about something. It’s all included in the language. Unfortunately, we don’t have any of these semantic functions when we build current knowledge graphs. And this is where IEML could improve symbolic AI and knowledge management.
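Here is a toy sketch, in Python, of the three relations just described, on a lexicon of a few words. The definitions and paradigms are invented for illustration; a real language weaves millions of such relations.

```python
# Toy model of the three semantic relations: interdefinition, composition, substitution.
# The words, definitions, and paradigms are invented for illustration.
interdefinition = {                      # each word defined by a sentence made of other words
    "room": "enclosed part of a building",
    "blue": "color of the clear sky",
    "paint": "cover a surface with color",
}

def compose(subject, verb, attribute, obj):
    """Grammar rule: compose a new sentence, hence a new meaning, from known words."""
    return f"{subject} {verb} the {attribute} {obj}"

substitution = {                         # paradigms: which words may replace which, and where
    "attribute": ["small", "large", "blue"],
    "verb": ["paint", "clean"],
}

print(compose("I", "paint", "small", "room"))
for alt in substitution["attribute"]:    # substitutions yield a family of related meanings
    print(compose("I", "paint", alt, "room"))
```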
To support my argumentation for a new method in building symbolic models, it is important to distinguish between linguistic semantics and referential semantics. Linguistic semantics are about the relations between concepts, as we have seen in the previous slide. Referential semantics are about the relations between propositions and states of things, or between proper nouns and individuals.
If linguistic semantics weave relations between concepts, why can’t we use natural languages in symbolic models? We all know the answer. Natural languages are ambiguous (grammatically and lexically) and machines can’t disambiguate meaning according to the context. In current symbolic AI, we cannot rely on natural language to organically generate semantic relations.
So, how do we build a symbolic model today?
In order to define concepts, we have to link them to URIs (Uniform Resource Identifiers) or web pages, according to referential semantics.
But because referential semantics are inadequate to describe a network of relations, instead of relying on linguistic semantics, we have to impose semantic relations on concepts one by one.
This is why the design of knowledge graphs is so time-consuming and why there is no general semantic interoperability of knowledge graphs across ontologies or domains. Again, I am speaking here of interoperability at the semantic or conceptual level and not at the format level.
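A minimal sketch of this workflow, with invented URIs and relation names: each concept is pinned to an identifier, and every semantic relation then has to be declared by hand, assertion by assertion.

```python
# Building a symbolic model today: concepts are pinned to URIs (referential semantics)
# and each semantic relation is declared manually. URIs and relations are invented.
concepts = {
    "painting": "http://example.org/kb/Painting",
    "room":     "http://example.org/kb/Room",
    "color":    "http://example.org/kb/Color",
}

relations = []                           # nothing comes "for free" from language here:
relations.append((concepts["painting"], "appliesTo",   concepts["room"]))
relations.append((concepts["painting"], "usesQuality", concepts["color"]))
# ...and so on, one assertion at a time, for every pair of concepts the domain needs.
print(len(relations), "relations declared by hand")
```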
In order to alleviate the shortcomings of current symbolic models, I have constructed a metalanguage that has the same advantages as natural languages, namely an inherent mechanism for building semantic networks, but without their disadvantages, since IEML is unambiguous and computable.
IEML (the Information Economy MetaLanguage) is an unambiguous and computable semantic metalanguage that includes a system of inter-definition, composition, and substitution of concepts. IEML has the expressive power of a natural language with an algebraic structure, making it fully computable. IEML is not only computable in its syntactic dimension but also in its linguistic semantic dimension: its semantic relations (in particular, its composition and substitution relations) are computable functions of its syntactic relations.
IEML has a completely regular and recursive grammar with a 3,000-word dictionary organized in paradigms (systems of substitution), allowing the recursive, grammatical construction of any concept. Any concept can be created from a small number of lexical building blocks with simple universal composition rules.
With each concept automatically determined by its composition and substitution relations with other concepts, and with the grammar and the dictionary’s words serving for definitions, IEML is its own metalanguage. The dictionary has been translated into French and English and could be translated into any natural language.
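For intuition only, here is a purely hypothetical sketch of the idea of composing concepts from a small dictionary organized in paradigms. The primitives, the paradigm names, and the notation below are invented and are not actual IEML syntax.

```python
# Hypothetical illustration of recursive concept composition from a small dictionary
# organized in paradigms. This is NOT actual IEML syntax; everything here is invented.
dictionary = {
    "process": ["paint", "clean", "build"],   # a paradigm: a system of substitutable terms
    "thing":   ["room", "wall", "house"],
    "quality": ["small", "blue", "bright"],
}

def concept(process, thing, quality):
    """Compose a concept from building blocks; its relations to sibling concepts
    follow automatically from the paradigms, instead of being declared one by one."""
    return (process, thing, quality)

c = concept("paint", "room", "blue")
siblings = [concept("paint", "room", q) for q in dictionary["quality"]]
print(c, "is related by substitution to", siblings)
```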
This invention will facilitate the design of knowledge graphs and ontologies by ensuring semantic interoperability and by fostering collaborative design. Indeed, IEML is based on a vision of digital-based collective intelligence.
IEML allows an innovative and integrated architecture, overcoming the limitations and current divide of symbolic and neural models.
Figure 3
Figure 3 introduces a new semantic architecture for knowledge management (KM) made possible by IEML, an architecture that brings together neural and symbolic models.
The only thing that can generate all the concepts we need to express the complexity of knowledge domains, while maintaining mutual understanding, is a language. But natural languages are irregular and ambiguous, and their semantics cannot be computed. IEML is a univocal and formal algebraic language (unlike natural languages) that can express any possible concept (as in natural languages), with its semantic relations densely woven into a built-in mechanism. We can use IEML as a semantic metadata language to express any symbolic model, and we can do it in an interoperable way. Again, I mean conceptually interoperable. With IEML, all symbolic models can exchange knowledge modules, and reasoning across ontologies becomes the norm.
Now, how can neural models be used in this new architecture? They could automatically translate natural language into IEML, with no extra work or learning for the layman. Neural models could even translate informal descriptions in natural language into formal models expressed in IEML.
Prompts expressed in IEML behind the scenes would make data generation more controllable.
We could also use neural models to classify or label data automatically in IEML. Labels or tags expressed in IEML would support more efficient machine learning because the units or “tokens” taken into account would no longer be the surface units of natural languages—characters, syllables, words—but concepts generated by a semantic algebra.
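A minimal sketch of that contrast, with invented documents and concept identifiers: the same meaning expressed in two natural languages splits the token vocabulary, while a shared concept vocabulary stays unified.

```python
# Contrast between word tokens and concept labels as learning units.
# The documents and the concept identifiers are invented; in the proposed
# architecture the concept labels would come from automatic IEML annotation.
from collections import Counter

docs_as_tokens = [
    ["peindre", "la", "petite", "piece"],     # French surface form
    ["paint", "the", "small", "room"],        # English surface form, same meaning
]
docs_as_concepts = [
    ["C:paint", "C:room", "C:small"],
    ["C:paint", "C:room", "C:small"],
]

def vocabulary_size(docs):
    return len(Counter(tok for doc in docs for tok in doc))

print("token features:  ", vocabulary_size(docs_as_tokens))    # 8 distinct units
print("concept features:", vocabulary_size(docs_as_concepts))  # 3 distinct units
```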
What are the main advantages of the Integrated knowledge management architecture using IEML as a semantic coordinate system?
Symbolic and neural models would work together for the benefit of knowledge management.
A common semantic coordinate system would help the pooling of models and data. Symbolic models would be interoperable and easier to design and formalize. Their design would be collaborative across domains.
It would also improve intellectual productivity by allowing a partial automation of conceptualization.
Neural models would be based on data labeled in IEML and therefore be more transparent, explainable and reliable. This is important not only from a technical point of view but also from an ethical point of view.
Finally, this architecture would foster diversity and creative freedom, since the networks of concepts – or knowledge graphs – formulated in IEML can be differentiated and complexified at will.
The aim of this article is to elucidate the anthropological conditions of possibility for computing. I will first show how symbolic manipulation is constitutive of hominization. Secondly, I will examine the gradual widening of the gearing between the sensorial world (signifiers) and the intelligible world (concepts) during cultural evolution. Following on from these analyses, I’ll comment on some of the main features of contemporary digital civilization, including the development of generative artificial intelligence.
Phenomenal experience
In the animal kingdom, the development of the nervous system stems from the need for locomotion: the senses and motor skills are looped together to guide movement. Over the course of evolution, this reflex circuit becomes more complex, involving simulation of the environment, evaluation of the situation and decision-making calculations leading to action. An existential emergence accompanies cognitive necessity, as the nervous system generates a phenomenal experience populated by multimodal images (cenesthesia, touch, taste, smell, hearing, sight), including the sensation of one’s own movements. Animal consciousness relates to a world outside itself: it is intentional. Its objects are conserved beyond the variety of immediate perceptions. Pleasure and pain polarize the range of sensations, and emotions direct activity. Locomotion obliges the animal to localize its presence and inhabit a territory. Its consciousness is not only immersed in space and full of present sensations, but also virtualized by an imagination that reminds it of past events (the squirrel remembers the places where it hid its nuts), ensures the continuity of its movements and projects it into the immediate future. It discerns the situations in which it is thrown and categorizes the objects of its perception. It recognizes prey, predators or sexual partners and acts accordingly. This is only possible because neural circuits (innate or learned) encode interaction patterns – or concepts – that orient, coordinate and give meaning to its phenomenal experience, while supporting complex social communication with its fellow creatures. Animal communication signals – calls, postures, pheromones – carry concepts (« predator approaching », « food », « this is my territory », « submission », etc.) but they are biologically inherited, limited in number and complexity, and refer only to current situations.
The symbolic revolution
Upright posture, the hand, toolmaking, and the mastery of fire set the Homo genus apart. Then Neanderthals, Denisovans, and Sapiens start talking. Our brains have the same properties as those of higher vertebrates, with the cognitive and communicative capacities just mentioned, and the corresponding type of sensorial experience. But they also possess a capacity for recognizing and producing symbols that takes us into a whole new world. The biological evolution that leads to the human being has transformed the brain of the initial primate, adjusting it to a symbolic specialization that is unique in the animal kingdom: hypertrophy of the prefrontal cortex, amplification of the cerebellum, appearance of Broca’s and Wernicke’s areas, greater division of labor between the hemispheres and general reorganization of neural circuits. As an ontological interface, the human brain drives the symbiosis and coevolution of symbolic ecosystems with populations of speaking primates immersed in the biosphere.
What is a symbol? In a nutshell, it’s the conventional translation (which varies from society to society) of a concept – i.e. a scheme organizing the experience – into a sensory phenomenon. It should be added that – far from being independent of one another – symbols are organized into systems that regulate their compositions, substitutions and differences. By projecting themselves onto the sensorial images of symbolic systems, the concepts that organized the phenomenal world from the opaque interior of the vertebrate cranium become explicit, sharable and combinable at will. The symbolic revolution has repercussions for the lived world as a whole. Communication is cast in the mold of conventional languages and codes; complex rituals organize social relations and combinations of artifacts drive sensorimotor interactions.
Communication
In contrast to the indexical or iconic communication of other animals, we tell what happened yesterday, make appointments for next week and invent stories. The territories of our evolutionary ancestors were populated by actual objects and agents. The human world is also made up of places, beings and events that are invisible, or have long since disappeared, or will never happen. A language has thousands of elementary units of meaning, orders of magnitude more than the signal repertoire of animal species. Verbs and common nouns designate general categories, while proper nouns label singular beings and events. Language translates interaction patterns into sentences. The verb evokes the action, grammatical roles describe the actors and circumstances, and the whole models a complex scene. Each word in a sentence also evokes a pattern of interaction: « gift », « sacrifice », « birth », « hunt » and so on. Linguistic symbols are organized according to a recursive grammar: expressions are composed in sequences and fit together like Russian dolls, making it possible to construct and decipher an indefinite number of complex texts with distinct meanings. Talking primates elaborate the schemas that organize their experience with hyper-realistic detail. The immediate and massive concepts of other animals give way to genealogies, fine classifications, genera, species and their differences, webs of refined notions whose every node is in turn a network. Our narratives interweave and respond to each other. The range of mental representations expands indefinitely.
The linguistic symbol is split in two, since it has (a) an actual or signifying part: a sound, visual, tactile or other image, such as the sound « tree », and (b) a virtual or signified part: a general concept, such as « woody plant with roots, trunk and branches ». The signifier itself is split into an abstract form (phoneme, character, gesture) without address, timeless, and some concrete, situated, dated image: this timbre of voice, this letter, a waving hand. The signified, in turn, has both a virtual and an actual component. The dictionary and grammar of a language define the virtual, general, still-floating part of the meaning of a word. Our knowledge of language enables us to decode this sequence of phonemes and translate it into a network of concepts, a narrative that evokes images, emotions and memories. For a moment, a rhizome of meaning illuminates the silence of experience. A meaning is actualized in this way for us, but it would be actualized differently in other circumstances for someone else, endowed with a singular memory.
Although the signifying parts of symbols – moving images – only appear to the senses in phenomenal space-time, for human intelligence they designate signifieds that populate an inexhaustible abstract universe, at the intersection of hierarchical structures of composition (syntagms) and symmetrical structures of opposition and substitution (paradigms). Such arrangements – both syntactic and semantic – are not limited to languages. They can be found to a greater or lesser extent in other sign systems. For example, like the paradigms of language, the harmonies of music organize an order of simultaneity and possible choices, while melody unfolds linearly in time, like the syntagm in linguistics. As for visual communication, palettes of shapes and colors form substitution groups that intersect the compositional plane of images.
Elementary emotions are diffracted into a myriad of mingled feelings, violent or delicate. Places are named, measured and mapped. The dense net of hours and calendars captures temporality. Language opens the space for questioning, dialogue, and narrative. It supports reasoning, demonstration and a concern for truth… not forgetting misleading concealment and disinformation. What’s more, it’s not only messages that are coded, but also systems of veridiction, i.e., depending on the occasion, ways of deciding what is true or beautiful.
Society
The person and their individual identity emerge through dialogue. The implicit self-reference of animal experience is redoubled in humans by an explicit first person (“I”), which a second person – the other (“You”) – inevitably faces and answers. Both navigate the shared reality perceived in the third person (“it, they”), a world assumed to be objective and common.
Societies of the same animal species resemble each other. In contrast, human groups display a great diversity of social roles and rules of interaction. Kinship, political organization, or commerce with the invisible (ancestors, spirits, gods, and values) are matters of convention. Rituals codify, socialize, and reify a symbolic order that systems of justification – morals, laws, religions, traditions – explain and motivate.
Social roles share common traits with grammatical roles, not the least of which is recursive nesting. The syntactic trees of language correspond to the genealogical trees of families and the organizational charts of administrations. Oppositions such as « brother and sister » within the role of sibling, or « police and army » within the role of guarantor of security, and even social partitions such as « priests, warriors, and peasants », resemble the difference-and-substitution groups of lexical paradigms.
Technique
If symbolization consists in projecting behavior patterns into the world of the senses and systematizing them, then it concerns not only communication codes and social relations but also interactions with the physical world. Artifacts and tools are produced by common methods; they exhibit « affordances » (possibilities of use) and dictate gestures. The most material techniques participate in the symbolic order through their externalization and socialization of bodily functions, through their reification of perceptions and movements. A fortiori, the virtual dimension of our relations to things composes an essential part of cultural systems: the rules that govern labor and property, the processes of exchange and accounting. While animal societies know neither currency nor economy, the most primitive tribes use shells for barter and keep track of gifts and counter-gifts.
Syntax finds its place in the battle order of armies and the arrangement of technical gestures. The arborescent structures of sentences and texts are found in the sequences of operations leading to the construction of buildings, the weaving of fabrics, or cooking recipes. And in most cases, Homo faber can replace one material with another, alter the thickness of threads, or substitute potato for rice while retaining the general action plan. The same wooden handle ends in the metal head of a shovel, a pickaxe, or a fork, just as the words of a paradigm may substitute for one another in the same narrative context.
Cultural symbiosis
The orders of signs, people and things are intertwined in the tight braid of hominization. We have only examined them in turn for the sake of exposition. Let’s define culture as the totality of symbolic systems (semiotic, social, technical), their products and their layers of sedimented inscriptions. From then on, the life of the mind – which transcends individual existences – results from a symbiosis between the speaking primates that make up a society and the culture they share.
Cultures codify, share and reify concepts (the patterns organizing experience), while individuals incorporate languages, rituals and technical practices. The conventions and tools transmitted by culture can only be implemented if living people internalize their uses, embody their handling and treat them as second nature. This is why, however diverse – or even heterogeneous – social constructions and cultural artifices may be in a particular time and place, the living bodies that integrate them make an organic unit out of them.
It can take many years to learn how to handle semiotic conventions, as in the case of writing. For interlocutors to reconstitute networks of concepts from a sequence of phonemes and translate ideas or instructions into sounds, all the following needs to be integrated into the reflexes and perceptive habits of the organism: the dictionary that establishes the correspondence between signifiers and elementary signified, the grammar that governs the composition of units of meaning, not forgetting the prosody, accents and music of the language. The same applies to social relations. We learn to discern the interpersonal relationships at play in our environment, to identify with roles, to embody them as best we can, and to play our part in conventional scenarios, aided by initiation journeys and the repetition of ritual enactments. The use of artefacts, the handling of tools, the driving of vehicles and the collective execution of complex tasks once again presuppose the physical and mental internalization of ambient techniques.
Individuals can only survive if they assimilate symbolic systems and appropriate their products. Symmetrically, to endure, a culture must be absorbed, implemented, and transmitted by individuals. In this relationship, where each participant feeds off the other, culture represents the virtual pole, neither dead nor alive, waiting to be actualized by a human population. As for individuals, they embody the subjective, present, sensitive, living and mortal pole of the symbolic dynamic. And each generation, whether oblivious or ardent, innovative or decadent, casts the dice again. Such is the motor of cultural evolution. The immemorial heritage of our ancestors sustains our living spirits, just as from the depths of tropical waters the coral piled up by centuries carries multicolored fish towards the sunlight.
Symbolic stigmergy
The collective intelligence of animals is largely based on stigmergic communication: the traces they leave in a shared environment enable them to coordinate their actions. The scent of pheromones, the echo of cries and songs, the fleeting image of postures or footprints elicit immediate reactions. Like other eusocial species, we communicate stigmergically, but instead of marking a physical territory with pheromones or other visual, auditory or olfactory signals, we leave symbolic traces. The human kingdom amplifies stigmergic mechanisms. Elaborate symbolic texts accumulate and respond to each other; they are fed and reappropriated by groups and individuals. Not only does the shared memory become longer and more complex, but the synchronization of experiences and the propagation of affects intensify. Once symbolic systems have been incorporated by individuals, signifiers, ritual gestures and familiar artifacts automatically trigger neural circuits, along with the patterns of interaction, emotions, images, memories and motor impulses they evoke. Just as contact with a pheromone molecule triggers reflex behavior in an ant, we can’t help but understand speech that reaches our eardrums, and the slightest story irresistibly evokes mental representations and feelings. The audience at a show, the dancers at a rave, the demonstrators chanting a slogan all resonate. The members of a rowing or soccer team are perhaps more in tune than a herd of baboons or a pack of wolves will ever be.
Symbolic manipulation
Let it be clear that the human mind never leaves sensory experience. The most complex combinations of culture are rooted in a spatio-temporal universe, inhabited by tangible objects and agents, interwoven with imagined causal relations, animated from within by the tropisms of emotion, resonating with timbres and rhythms, alternating shadow and light, sweetness and violence. But this sensory experience, because it is significant for our symbolic species, points to an intelligible world whose relationships, successions and connections are quite different from those of space, time and material causality. The concepts that populate the intelligible world can be located at the intersection of three axes. A first axis – closely symbolic – organizes the correspondence between sensory images and their conceptual counterparts, whether linguistic signifiers, social relations or technical functions. A second axis structures concepts according to syntactic trees, each leaf of which can become – recursively – a root. In the order of signs, grammars compose linguistic or musical phrases, assemble texts and images, and arrange artworks according to skillful taxonomies of periods, genres, schools and subjects. Social syntaxes shape the structure of institutions, hierarchizing or symmetrizing ages, genders and classes; they regulate games, distribute roles, balance powers and divide labor. Technical syntaxes schematize operations in series or in parallel, lay out small workshops and vast factories, interweave machine parts and logistics chains. Finally, the third axis – paradigmatic – orders the systems of differences and substitutions whose rotating rings fill the nodes of syntactic trees. The intelligible world unfolds between these three axes, teeming, diverse, interdependent, mutating, hybridizing, swept along by an irreversible cultural evolution.
In short, symbolization places the human mind at the interface between two worlds: that of physical movement and sensory images, governed by a group of spatio-temporal transformations, and that of intelligible forms, governed by a group of conceptual transformations. This is why ideal operations are linked to physical operations and, symmetrically, tangible transformations lead to conceptual changes. The morphism that links the two universes opens a field of action inaccessible to pre-symbolic animality, since it becomes possible to command conceptual transformations from physical movements and to sequence material gestures according to conceptual operations. At the core of anthropogenesis we discover symbolic manipulation: calculation is there from the origin. Starting from the folding of the conceptual onto the perceptual – whose condition of possibility lies in the human brain – the widening of the passage between the two orders of reality and the growing efficiency of their reciprocal translation set the pace for a cultural evolution that never ceases to take up and amplify the event of hominization.
Implemented in a distributed manner in the brains of speaking primate populations, five symbolic operating systems have succeeded one another, each new version being fully compatible with the previous ones. Nomadism, tribal organization, hunting and gathering, knowledge transmitted through rituals and storytelling, and shamanism for relationships with the invisible correspond to primary orality. The first writings, or the self-preservation of symbols, accompany palace-temple civilizations, large-scale breeding and agriculture, the school of scribes and the systematization of knowledge. The zero, the alphabet and paper optimized the manipulation of signifiers in trading cities and empires, with their literate elites, universal religions, philosophies and currencies. From the 16th century onwards, the mechanization of writing and time measurement heralded modernity: the natural sciences became experimental and mathematical; engines revolutionized industry and transport; nation states, new secular perspectives on salvation (such as liberalism or socialism) and compulsory education transformed societies. Finally, the electrification, electronic media and computerization of the twentieth century pave the way for contemporary digital culture, based on techniques for controlling energy and matter on the scale of elementary particles, the automatic transformation of signs, instantaneous interactive global communication and the information economy. It is still difficult to specify the new political, epistemic and ideological forms that will prevail in the new culture. What is certain, however, is that the digital is our global symbolic operating system, not only – as is obvious – in terms of communication and technology, but also in terms of social relations.
Is this a return to the fable of progress (« It just gets better and better »)? No, because an operating system can support a variety of applications, which can be judged as good or bad depending on one’s point of view. The same « nation-state » political form has a liberal and a totalitarian face, the same industrial structure manufactures cars and tanks, the same Internet serves information and disinformation. I would add that the general notion of progress assumes a constant evaluation criterion from the Paleolithic to the 21st century – this criterion generally being that of contemporaries – while each era, each culture reinvents its ultimate values.
My partition into five successive symbolic operating systems simplifies a continuous process, unevenly distributed in space, subject to multiple shifts, backtracking and leapfrogging. What’s more, the cultural forms that appear in each era do not disappear in subsequent eras but are taken up again and adapted to a new context. Despite the complexity of the process, the general evolution seems irreversible and firmly oriented towards an ever more efficient interplay between the world of senses and the intelligible.
Digitizing communication
In the long run of accelerating evolution, symbols detach themselves from their places of origin, surviving better and better the moment of their birth. Here they are, becoming lighter, more numerous, more widespread, translated and transformed. But the « softer » the symbols become, the more they approach an omnipresent, malleable form that escapes the inertia of matter, the more their inscription requires « hard » supports, instruments and installations that are heavily material. The manipulation of signs has a long history, in which the virtualization of codes and the hardening of media are mutually supportive: clay tablets, papyrus or silk scrolls, the road and port networks of ancient empires, horse-drawn mail, paper manufacture, printing machines, school and library buildings, telegraph poles on railroad lines, antennas and satellites, right up to data centers that consume the electricity of a power plant and the magazines, radios, record players, televisions, computers and telephones spewed out by factories and eventually piled up jumbled in waste dumps.
The intelligible and the tangible alternate, intertwining and complicating each other. Each turn of their evolving spiral deposits a new layer of complexity, which leads to the next revolution. These two modes of being are like the relationship between Yin and Yang in traditional Chinese philosophy. One of the main Confucian classics, the Yi-King (or I-Ching) represents the dynamics of cosmic, political, and personal transformations by means of sixty-four hexagrams: six stacked lines, some of which are continuous (Yang), and others broken (Yin). This ancient oracular book presents one of the first alignments between the signifying structure and the signified situation: the two planes of the hexagrams (signifiers) and the practical configurations (signifieds) obey the same group of transformations. Should we trace back to this the binary coding and the regulated manipulation of signifieds by means of signifiers that characterizes computing? Or should we identify the beginnings of automatic calculation with Aristotle’s formalization of logical reasoning? What about the Indian mathematicians who invented positional numeration with nine digits and the zero, making arithmetic calculations simple and uniform? Or the development of algebra by Arabic-speaking, Andalusian or Persian mathematicians, such as Al Khawarizmi, who gave his name to the algorithm? In all these cases, the regulated, quasi-mechanical manipulation of visible, tangible elements leads to the movement of virtual objects: political tropes, logical propositions or insubstantial numbers.
Calculation
Let’s take a closer look at calculation, a textbook case of the coupling between the sensible and the intelligible. It can be defined as the art of mechanizing symbolic operations. Calculation presupposes the adoption of a coding system for variables and operations, as well as the definition of chains of operations: algorithms. The application of an algorithm to a set of input variables leads to the result variable as output. Since symbols are made up of a signifying part and a signified part, calculations are all the more efficient as they are applied to signifiers in a purely mechanical way, i.e. without taking signifieds into account. Algorithms are blind to the semantic content of the symbols they manipulate. Even when we multiply by hand, we always follow the same routine, whatever the numbers being multiplied. The signifiers manipulated by operations can be likened to material pieces such as tokens, marbles or pebbles. The word calculation itself comes from the Latin calculus, meaning pebble, because the ancient Romans used pebbles to perform arithmetic operations on abacuses.
Calculation is an art insofar as the coding of the signified by a given system of signifiers facilitates the regulated manipulation of symbols to a greater or lesser extent. For example, the number notation of the ancient Egyptians and Romans does not lend itself to algorithmic manipulation as efficiently as the zero-based positional notation of the Indo-Arabic numerals. Try multiplying large numbers using Roman numerals to see for yourself. The efficiency of symbolic manipulation involves a compromise between, on the one hand, the generality of algorithms (maximizing the cases to which they apply) and, on the other, minimizing the number of operations required to arrive at the result. Advances in algebraic coding and the refinement of automatic calculation procedures generally mark a leap in consistency and rigor in the field to which they apply, as shown by the breakthroughs of modern experimental science, which have often unified disparate forms and methods by recoding them algebraically.
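To make the blindness of algorithms concrete, here is a minimal sketch, in Python and chosen purely for illustration, of schoolbook long multiplication applied to strings of decimal digits. The routine never considers what the numbers mean; it only shuffles signifiers according to fixed positional rules, exactly the kind of uniform manipulation that Roman numerals do not afford.

```python
def long_multiply(a: str, b: str) -> str:
    """Multiply two non-negative integers given as decimal digit strings."""
    cells = [0] * (len(a) + len(b))               # positional "cells", like abacus columns
    for i, da in enumerate(reversed(a)):
        for j, db in enumerate(reversed(b)):
            cells[i + j] += int(da) * int(db)     # accumulate partial products blindly
    carry = 0
    for k in range(len(cells)):                   # a single carry-propagation pass
        carry, cells[k] = divmod(cells[k] + carry, 10)
    digits = "".join(str(d) for d in reversed(cells)).lstrip("0")
    return digits or "0"

print(long_multiply("1789", "2025"))              # 3622725, same routine whatever the digits
```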
Calculating machines
Mechanical calculating machines had already been built in the 17th century by Pascal and Leibniz. In Victorian Britain, Babbage designed far more ambitious computing machines, for which Ada Lovelace wrote the first programs. Cash registers were already performing arithmetic operations in every shop at the beginning of the 20th century. But to achieve programmable electronic calculators – much faster and more adaptable than earlier machines – several theoretical and technical advances had to be made first. On the theoretical side, as early as 1936, Turing had described an abstract automaton capable of performing any calculation defined by a program. On the technical side, by the early 20th century, vacuum tubes had enabled fine control of electron flows. Used in the first computers, these bulky, energy-hungry components were later replaced by transistors and then integrated circuits in the race for speed and miniaturization that marked the electronics industry. A decisive step was taken by Claude Shannon in 1938, when he demonstrated the correspondence between logical calculation and the arrangement of electrical circuits, at the confluence of the conceptual and the perceptible. A closed or open switch corresponds to « true » or « false », a series arrangement of switches corresponds to the logical operator « and », a parallel arrangement to the operator « or ». The connectors not, and, or suffice to express Boolean algebra, i.e. the formalization of ordinary logic. Base-two arithmetic (0, 1) also lends itself well to electronic calculation. Passing through logic gates, running through the labyrinth of circuits formed and reformed by programs, lightning-fast, the electron becomes a signifier. Automating the manipulation of virtual meaning by mechanizing that of the actual sign – such is the power of computer coding.
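The following toy Python functions sketch Shannon’s correspondence as just described: switches in series behave like « and », switches in parallel like « or », and an inverter like « not ». The wiring of exclusive-or from these three is only an illustrative example, not a reproduction of Shannon’s own notation.

```python
def series(a: bool, b: bool) -> bool:
    """Two switches in series conduct only if both are closed: logical AND."""
    return a and b

def parallel(a: bool, b: bool) -> bool:
    """Two switches in parallel conduct if at least one is closed: logical OR."""
    return a or b

def inverter(a: bool) -> bool:
    """A relay that opens when energized: logical NOT."""
    return not a

def xor(a: bool, b: bool) -> bool:
    """Exclusive-or wired from the three basic connectors."""
    return parallel(series(a, inverter(b)), series(inverter(a), b))

for a in (False, True):
    for b in (False, True):
        print(int(a), int(b), "->", int(xor(a, b)))   # prints the truth table of XOR
```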
In just a few generations, digital technology would become the meta-medium of social communication. From 1955 to 1975, large mainframe computers were used only by large government agencies and for scientific computing. Less than one thousandth of the world’s population was in direct contact with these « electronic brains », as they were then called. From 1975 to 1995, e-mail became commonplace, and Internet-connected personal computers boosted the productivity of the creative class and white-collar workers; by the late twentieth century, about one percent of the world’s population was connected. From 1995 to 2015, the Web established itself as the new public sphere, gradually absorbing previous media. Smartphones nestled in our pockets and on our bedside tables, and half the world’s population came to resonate with social media. In the 2020s, American and Chinese operators of large data centers dominate global communication. Artificial intelligence is at the helm of a digital environment in which almost the entire human population is immersed.
Digital stigmergy
Less than a century after the invention of the first computers, the world’s memory is digitized, accessible to much of the population via the Internet. A piece of information found at one point on the network can be found anywhere. From static text on paper, we have moved on to ubiquitous hypertext, then to the surrealist architext that brings together all symbols. A virtual memory has begun to grow, secreted by billions of living and dead people, teeming with languages, music and images, full of dreams and fantasies, blending science and lies. While point-to-point messages are still exchanged, most social communication now takes place via electronic stigmergy. Immersed in digital space, we communicate via the oceanic mass of data that brings us together. Every link we create, every tag or hashtag we affix to a piece of information, every act of rating or approval, every « like », every query, every purchase, every comment, every share – all these operations subtly modify the shared memory, i.e. the inextricable magma of relationships between data. Our online behavior emits a continual flow of messages and cues that transform the structure of memory, helping to direct the attention and activity of our contemporaries. We deposit electronic pheromones in the virtual environment, which in turn determine the actions of other Internet users and train the formal neurons of artificial intelligence (AI).
Artificial intelligence and memory
The biological brain abstracts the details of actual experience into schemas of interactions, or concepts, encoded by patterns of neural circuitry. In the same way, AI’s neural models condense the countless data stored in digital memory. They virtualize actual data into patterns and patterns of patterns. Conditioned by their training, AI systems can then recognize and reproduce data corresponding to the learned patterns. But because they have abstracted structures rather than recording everything, here they are, able to correctly conceptualize forms (of image, text, music, code…) they have never encountered before, and produce an infinite number of new symbolic arrangements. Patterns hidden in the myriad layers and connections of electronic brains rain down unprecedented actualizations. This is why we speak of « generative artificial intelligence ». Neural AI synthesizes and mobilizes the common memory accumulated over the centuries. Far from being autonomous, it extends and amplifies a stigmergic collective intelligence. Millions of users contribute to perfecting the models by asking them questions and commenting on the answers they receive. We sow data to harvest meaning.
Does the electronic calculation that simulates the functioning of neurons give rise to an autonomous consciousness? No, because machines only manipulate the material part of symbols, and images, texts and melodies only have meaning for us when they are emitted at interfaces. No, because phenomenal experience is the counterpart of an animal organism, and intelligible meaning only appears to the person who has steeped himself in a culture. Humans participate in the mind because they inhabit a living body. On the other side of the mirror, signifiers swirl blindly, pebbles clatter on the great abacus, a senseless electronic fury rages in the data centers. On this side of the mirror, monitors present us with the face of another who speaks, but it’s an anthropomorphic projection. A library doesn’t remember any more than an algorithm thinks: both virtualize cognitive functions through externalization, transformation, pooling and re-internalization. The new electronic brains synthesize and put to work the enormous digital memory through which we remember, communicate and think together. Behind « the machine » lies the human collective intelligence that it reifies and mobilizes.
Reminder: « I work from the perspective of artificial intelligence dedicated to increasing collective intelligence. I designed IEML to serve as a semantic protocol, enabling the communication of meanings and knowledge (mental models) in digital memory, while optimizing machine learning and automatic reasoning. »
Let’s imagine a knowledge-sharing system that makes the most of today’s technical possibilities. At the heart of this system is an open ecosystem of knowledge bases categorized in IEML, which emerge from a multitude of communities of research and practice. Between this core of interoperable knowledge bases and living human users lies a « no-code » neural interface (an ecosystem of models) that provides access to data control, feeding, exploration and analysis. Everything happens intuitively and directly, according to the sensory-motor modalities selected. It is also via this giga-perceptron – an immersive, social and generative metaverse – that communities exchange and discuss the data models and semantic networks that organize their memories. In keeping with good knowledge management, the new knowledge-sharing system encourages the recording of creations, accompanies learning paths and presents useful information to the actors engaged in their practices. The IEML_GPT model described here is a first step in this direction.
Now that AI has been unleashed on the Internet and coupled with social media, we need to tame and harness the monster. How do we make AI reasonable? How do we get it to « understand » what we’re saying to it, and what it’s saying to us, rather than just calculating word occurrence probabilities from training data? We’d have to teach it the meaning of words and phrases in such a way that it (the AI) forms an abstract representation *understandable for itself* not only of the physical world (I’ll leave that task to Yann LeCun), but also a representation of the human world and, more generally, of the world of ideas.
In other words, how can we graft symbolic encoding and decoding capabilities onto a neural model that can initially only recognize and generate sensory forms or aggregates of signifiers? This challenge is reminiscent of the process of hominization – when biological neural networks became capable of manipulating symbolic systems – which is not to my displeasure.
UNDERSTANDING / KNOWLEDGE / INTEROPERABILITY
To understand a sentence is to include it in the self-defining dynamics of a language, and this even before grasping the sentence’s extralinguistic reference. AI will understand what is being said to it when it is capable of automatically transforming a character string into a semantic network that plunges into the self-referential and self-defining loop of a language. A language’s dictionary, with its definitions, is a crucial part of this loop. Just as a deduction ultimately represents a logical tautology, a language dictionary exhibits a *semantic tautology*. This is why IEML_GPT must contain a file with the IEML-French-English dictionary (and perhaps other languages) with all the relations between words in the form of IEML phrases. The dictionary is a meta-ontology that is the same for all users. Other files may contain local models or ontologies corresponding to user communities’ ecosystems of practice.
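A minimal Python sketch of this self-defining loop, with invented entries standing in for the real IEML dictionary: every word is defined only by other words of the same dictionary, so the definitions close on themselves.

```python
# Hypothetical toy entries, not actual IEML units or definitions.
toy_dictionary = {
    "mountain": {"bigger", "hill"},
    "hill":     {"bigger", "mountain"},   # circular by design: a semantic tautology
    "bigger":   {"mountain", "hill"},
}

def is_closed(dictionary: dict) -> bool:
    """True if every defining word is itself an entry of the dictionary."""
    vocabulary = set(dictionary)
    return all(set(definition) <= vocabulary for definition in dictionary.values())

print(is_closed(toy_dictionary))          # True: the definitions close on themselves
```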
1) Linguistic understanding. Reasonable agents are able to recognize and generate syntactically valid IEML character sequences, in particular by means of a parser. They have an understanding of IEML: they reconstruct the recursively embedded syntagmatic trees and the relationships between concepts that derive from the dictionary and the paradigmatic matrices (or substitution groups) that organize the concepts. Each concept (represented by an IEML word or phrase) is thus at the center of a star of syntactic and semantic relationships (a toy parsing sketch follows after point 3).
2) Practical domain knowledge. Reasonable agents are driven by knowledge bases that enable them to understand (locally) the world in which they have to operate. They have models (ontologies or knowledge graphs in IEML) of the practical situations facing their users. They are able to reason on the basis of these models. They are able to relate the data they acquire and the questions they are asked to these models.
3) Semantic interoperability. Reasonable agents share the same language (IEML) and therefore understand each other. They can exchange models or sub-models. They transform natural language expressions into IEML and IEML expressions into natural languages: they can therefore understand humans and make themselves understood by them.
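As announced under point 1, here is a toy parsing sketch. The grammar below is invented for illustration (an expression is either an atomic concept or a parenthesized pair); it is not the actual IEML grammar, but it shows what it means for an agent to accept or reject recursively nested syntagms.

```python
import re

TOKEN = re.compile(r"\(|\)|,|[a-z]+")

def parse(tokens: list, pos: int = 0) -> int:
    """Return the position after one well-formed expression, or raise ValueError."""
    if pos < len(tokens) and tokens[pos].isalpha():          # atomic concept
        return pos + 1
    if pos < len(tokens) and tokens[pos] == "(":             # (expr , expr), recursively
        pos = parse(tokens, pos + 1)
        if tokens[pos:pos + 1] != [","]:
            raise ValueError("expected ','")
        pos = parse(tokens, pos + 1)
        if tokens[pos:pos + 1] != [")"]:
            raise ValueError("expected ')'")
        return pos + 1
    raise ValueError("expected expression")

def is_valid(expression: str) -> bool:
    tokens = TOKEN.findall(expression)
    try:
        return parse(tokens) == len(tokens)
    except ValueError:
        return False

print(is_valid("(gift,(giver,receiver))"))   # True: recursively nested syntagms
print(is_valid("(gift,)"))                   # False: rejected by the toy grammar
```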
TASK 1: THE DICTIONARY
1.0 I already have about three thousand words in the dictionary, organized into paradigms, a formal grammar, a parser to validate sentences and a library of functions to generate paradigms. Here is the IEML dictionary.
1.1 The first step is to create concept-phrases to express the *sets of words* (lexical families and semantic fields) represented by the paradigms, their columns, rows and so on. Let’s call the concepts defining these sets of words « lexical concepts ». Words in the same lexical family share common syntactic features and often belong to the same root paradigms. They will have to be created systematically by means of paradigmatic functions.
I need to find ways of generating lexical concept paradigms automatically in natural language with IEML_GPT rather than using the current editor, which is not easy to use.
1.2 The second step is to create all the « analytical propositions » that define the words in the dictionary and explain their relationships by means of words and lexical concepts. For example: « A mountain is bigger than a hill »; « Sociology belongs to the humanities ». Analytical propositions of this kind are always true, and define a meta-ontology. So we’ll need to create the paradigms of the dictionary’s *relations*. And have them generated by IEML_GPT from natural language instructions.
1.3 All internal relationships of the dictionary, materialized by hyperlinks, are created by sentences. In terms of the user interface, this means creating internal hypertext links (between words and lexical concepts) in such a way that their grammatical relationships are as clear as possible. The dictionary-hypertext document must also be generated automatically by IEML_GPT. For each word, we’ll obtain a list (a « page? ») of true sentences containing the word. This list will be organized by grammatical role: word defined in root role, word defined in object role, etc. Here is a concise version of the IEML grammar.
These sentences will be used not only to define words, but also to begin accumulating examples and even training data, with correspondence between formal IEML phrases and literary translations in French and English. In short, the first finished product will be a complete dictionary, with words, lexical concepts and inter-definition relations in hypertextual form, all in IEML, English and French.
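As a rough illustration of step 1.3, the following sketch builds, for each word, a « page » listing the true sentences in which it occurs, grouped by the grammatical role the word plays there. The sentences and role labels are invented examples, not actual IEML analytical propositions.

```python
from collections import defaultdict

# (sentence, {word: role it plays in that sentence}) -- invented examples
propositions = [
    ("A mountain is bigger than a hill", {"mountain": "root", "hill": "object"}),
    ("Sociology belongs to the humanities", {"sociology": "root", "humanities": "object"}),
]

def build_pages(props):
    """Group, for every word, the sentences containing it by grammatical role."""
    pages = defaultdict(lambda: defaultdict(list))
    for sentence, roles in props:
        for word, role in roles.items():
            pages[word][role].append(sentence)   # each entry becomes a hyperlink target
    return pages

pages = build_pages(propositions)
for role, sentences in pages["hill"].items():
    print(role, "->", sentences)                 # object -> ['A mountain is bigger than a hill']
```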
TASK 2: AN ONTOLOGY EDITOR
Task 1 will have tested the best ways of creating paradigms using instructions in natural languages, or even using templates to ease the workload of ontology designers.
The output of the ontology editor could be in RDF, JSON-LD, or in the form of a hypertext document. It could also be an interactive multimedia document: tables, trees, networks of concepts that can be explored, verbal/sound illustrations, etc.
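For instance, a JSON-LD-like export might look like the following sketch, in which the IRIs, labels and relations are invented placeholders rather than actual IEML identifiers.

```python
import json

ontology = {
    "@context": {"ieml": "https://example.org/ieml/", "broader": "ieml:broader"},
    "@graph": [
        {"@id": "ieml:hill",     "label": "hill",     "broader": {"@id": "ieml:relief"}},
        {"@id": "ieml:mountain", "label": "mountain", "broader": {"@id": "ieml:relief"}},
    ],
}

print(json.dumps(ontology, indent=2))   # a JSON-LD-flavored serialization, for illustration only
```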
Ideally, the ontology we create should natively contain an inference engine, thus supporting automatic reasoning. The intellectual property of ontology creators must be recognized.
IEML_GPT will be able to run any IEML ontology or set of ontologies.
TASK 3: AUTOMATIC CATEGORIZATION
The next step is to build an integrated tool for automatic categorization of data in IEML. The AI is given a dataset and an IEML ontology (ideally in the form of a reference file), and the result is a set of data categorized according to the terms of the ontology. The completion of Task 3 paves the way for the creation of a knowledge base ecosystem as described in the vision above.
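A deliberately naive sketch of this input/output contract: each item of a dataset is tagged with the ontology terms whose (invented) indicator words it matches. The real system would rely on a neural classifier guided by an IEML ontology; the keyword lookup below only illustrates the shape of Task 3.

```python
# Invented ontology terms and indicator words, for illustration only.
ontology_terms = {
    "relief":  {"mountain", "hill", "valley"},
    "society": {"kinship", "ritual", "institution"},
}

dataset = [
    "The hill overlooks the valley",
    "Kinship structures the ritual calendar",
]

def categorize(items, terms):
    """Attach to each item the ontology terms whose indicator words it contains."""
    tagged = []
    for item in items:
        words = set(item.lower().split())
        labels = [term for term, indicators in terms.items() if words & indicators]
        tagged.append((item, labels or ["uncategorized"]))
    return tagged

for item, labels in categorize(dataset, ontology_terms):
    print(labels, "<-", item)   # ['relief'] <- ... ; ['society'] <- ...
```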
All these steps will first have to be carried out on a small scale (proofs of concept and agile methods) before being fully implemented.
Almost 30 years ago now, I published a book dedicated to digital-based collective intelligence which was, modesty aside, the first to address this topic. In this work, I predicted that the Internet would become the main medium of communication, that it would bring about a change in civilization, and I said that the best use we could make of digital technologies was to enhance collective intelligence (and let me add: an emerging, ‘bottom-up’ type of collective intelligence).
At that time, less than 1% of humanity was connected to the Internet, while today – in 2023 – more than two-thirds of the world’s population is online. The change in civilization seems quite evident, although it is normally necessary to wait several generations to confirm this type of shift, not to mention that we are only at the beginning of the digital revolution. As for the enhancement of collective intelligence, many steps have been taken to make knowledge accessible to all (Wikipedia, open-source software, digitized libraries and museums, open-access scientific articles, certain aspects of social media, etc.). But much remains to be done. Using artificial intelligence to enhance collective intelligence seems a promising path, but how do we proceed in this direction? To answer this question rigorously, I will need to define a few concepts beforehand.
WHAT IS INTELLIGENCE?
Before even addressing the relationship between human collective intelligence and artificial intelligence, let’s try to define in a few words intelligence in general and human intelligence in particular. It is often said that intelligence is the ability to solve problems. To which I respond: yes, but it is also, and above all, the ability to conceive or construct problems. If one has a problem, it means one is trying to achieve a certain result and is faced with a difficulty or obstacle. In other words, there is a “Self”, which has its own internal logic, which must maintain itself within certain homeostatic limits, and which has immanent goals such as reproduction, feeding, or development; and there is an « Other », an exteriority, which follows a different logic, which merges with or belongs to the environment of the Self, and with which the Self must negotiate. The intelligent entity must have a certain autonomy, otherwise it would not be intelligent at all; but this autonomy is not self-sufficiency or absolute independence because, in that case, it would have no problems to solve and would not need to be intelligent.
Figure 1
The relationship between the Self and the Other (Figure 1) can be reduced to communication or interaction between entities that are governed by different ways of being, codes, and heterogeneous purposes, thus imposing an uncertain and improvable process of encoding and decoding. This process inevitably generates losses, creations, and is subject to all kinds of noise and interference.
The intelligent entity is not necessarily an individual; it can be a society or an ecosystem. Moreover, upon analysis, one will often find in its place an ecosystem of molecules, cells, neurons, cognitive modules, and so on. As for the relationship between the Self and the Other, it constitutes the elementary mesh of any ecosystemic network. Intelligence is the trait of an ecosystem in relation with other ecosystems; it is collective by nature. In summary, the problem comes down to optimizing communication with a heterogeneous Other based on the purposes of the Self, and the solution is none other than the actual history of their relations.
INTELLIGENCE COMPLEXITY LAYERS
Our main focus is on human intelligence enhanced by digital technology. Let’s not forget, however, that our intelligence is based on layers of complexity that predate the appearance of the genus Homo on Earth (Figure 2). The complexity layers of organic and animal intelligence are still active and indispensable to our own intelligence, since we are living beings with an organism and animals with a nervous system. That is why human intelligence is always embodied and situated.
Figure 2
With organisms come the well-known properties of self-reproduction, self-reference and self-repair, based on molecular communication and no doubt also complex forms of electromagnetic communication. I won’t go into the subject of organic intelligence here. Suffice it to say that some researchers in biology and ecology now speak of « plant cognition ».
The development of the nervous system stems from the need for locomotion. First, the sensory-motor loop must be ensured. Over the course of evolution, this reflex loop became more complex, involving simulation of the environment, evaluation of the situation and decision-making calculations leading to action. Animal intelligence results from the folding of organic intelligence upon itself, as the nervous system maps and synthesizes what is happening in the organism, and controls it in return. Phenomenal experience is born of this reflection.
Indeed, the nervous system produces a phenomenal experience, or consciousness, which is characterized by intentionality, i.e. the fact of relating to something that is not necessarily the animal itself. Animal intelligence represents the Other. It is inhabited by multimodal sensory images (cenesthesia, touch, taste, smell, hearing, sight), pleasure and pain, emotions, the spatio-temporal framing essential to locomotion, the relationship to a territory, and an often complex social communication. Clearly, animals are capable of recognizing prey, predators or sexual partners and acting accordingly. This is only possible because neural circuits encode interaction patterns or concepts that orient, coordinate and give meaning to phenomenal experience.
HUMAN INTELLIGENCE
I’ve just mentioned animal intelligence, which is based on the nervous system. How can we characterize human intelligence, supported by symbolic coding? The general categories, concepts and patterns of interaction that were simply encoded by neural circuits in animal intelligence are now also represented in phenomenal experience via symbolic systems, the most important of which is language (Figure 3). Meaningful images (speech, writing, visual representations, ritual gestures…) represent abstract concepts, and these concepts can be syntactically combined to form complex semantic architectures.
Figure 3
As a result, most dimensions of human phenomenal experience – including sensori-motricity, affectivity, spatio-temporality and memory – are projected onto symbolic systems and controlled in turn by symbolic thought. Human intelligence and consciousness are reflexive. Moreover, for symbolic thought to take shape, symbolic systems – which are always of social origin – must be internalized by individuals, becoming an integral part of their psyche and « hard-wired » into their nervous systems. As a result, symbolic communication directly engages human nervous systems. We can’t fail to understand what someone is saying if we know the language. And the effects on our emotions and mental representations are almost inevitable. We could also take the example of the psycho-physical and affective synchronization produced by music. This is why human social cohesion is at least as strong as that of eusocial animals like bees and ants.
Note that figure 3, like several figures that follow, evokes a partition and interdependence between the virtual and the actual. In 1995, I published a book on the virtual that was both a philosophical and anthropological meditation on the concept of virtuality, and an attempt to put this concept to work on contemporary objects. My philosophical thesis is simple: that which is only possible, but not realized, does not exist. By contrast, that which is virtual but not actualized does exist. The virtual, that which is potential, abstract, immaterial, informational or ideal, weighs on situations, conditions our choices, provokes effects and enters into a dialectic or interdependent relationship with the actual.
COLLECTIVE INTELLIGENCE ECOSYSTEMS
Figure 4 maps the main hubs of collective human intelligence or, if you prefer, of the culture that comes with symbolic thinking. The diagram is organized by two intersecting symmetries. The first – binary – symmetry is that of the virtual and the actual. The actual is immersed in space and time, and is rather concrete, whereas the virtual is rather abstract and has no spatio-temporal address. The second – ternary – symmetry is that of sign, being and thing, inspired by the semiotic triangle. The thing is what the sign represents, and the being is the subject for whom the sign represents the thing. To the left (sign) stand symbolic systems, knowledge and communication; in the middle (being) stand subjectivity, ethics and society; to the right (thing) extend the capacity to do, the economy, technology and the physical dimension. It’s all about collective intelligence, because the six vertices of the hexagon are interdependent: the green lines (relationships) are as important as, if not more important than, the points they connect.
Figure 4
This framework is valid for society in general, but also for any particular community. By the way, virtual, actual, sign, being and thing are (along with void) the semantic primitives of the IEML language (Information Economy MetaLanguage) that I invented and of which I’ll say a few words below.
The six vertices of the hexagon are not only the main fulcrums of human collective intelligence, they are also universes of problems to be solved:
problems of knowledge creation and learning
communication problems
problems of legislation and ethics
social and political problems
economic problems
technical, health and environmental problems.
How can we solve these problems?
THE SELF-ORGANIZING CYCLE OF COLLECTIVE INTELLIGENCE
Figure 5 shows a four-stage problem-solving cycle. For each of the four phases of the cycle (deliberation, decision, action and observation), there are many different procedures, depending on the traditions and contexts in which collective intelligence operates. You’ll notice that deliberation represents the virtual phase of the cycle, while action represents the actual phase. In this model, decision is the transition from the virtual to the actual, while observation is the transition from the actual to the virtual. I’d like to emphasize two concepts here – deliberation and memory – which are often overlooked in this context.
Figure 5
Let’s begin by stressing the importance of deliberation, which involves not only discussing the best solutions for overcoming obstacles, but also constructing and conceptualizing problems collaboratively. This conceptualization phase will strongly influence and even define many of the subsequent phases, and will also determine the organization of the memory.
As you can see from the diagram in Figure 5, memory lies at the heart of the self-organization of collective intelligence. Shared memory supports each phase of the cycle, helping to maintain the coordination, coherence and identity of collective intelligence. Indirect communication via a shared environment is one of the main mechanisms underpinning the collective intelligence of insect societies; ethologists call it stigmergic communication. But whereas insects generally leave pheromone traces in their physical environments to guide the actions of their fellow creatures, we leave symbolic traces not only in the landscape, but also in specialized memory devices such as archives, libraries and, today, databases. The problem of the future of digital memory lies before us: how can we design this memory in such a way that it is as useful as possible to our collective intelligence?
TOWARDS AN ARTIFICIAL INTELLIGENCE AT THE SERVICE OF COLLECTIVE INTELLIGENCE
Having acquired a few notions about intelligence in general, the foundations of human intelligence and the complexity of our collective intelligence, we can now ask ourselves about the relationship between our intelligence and machines.
Figure 6
Figure 6 provides an overview of our situation. In the middle, the « living »: human populations, with the actual bodies and virtual minds of individuals. Immediately in contact with the individuals, the hardware machines (or mechanical bodies) on the actual side and, on the virtual side, the software machines (or mechanical minds). Hardware machines increasingly play the role of interface or medium between us and terrestrial ecosystems. As for software machines, they are becoming the main intermediary – a medium once again – between human populations and the ecosystems of ideas with which we live in symbiosis. As for collective consciousness, we’re not there yet. It’s more a horizon, a direction to aim for, than a reality. We need to understand Figure 6 by mentally adding feedback loops or interdependencies between adjacent layers, between the virtual and the actual, between the mechanical and the living. On an ethical level, we can assume that living human communities receive the benefits of terrestrial ecosystems and ecosystems of ideas in proportion to the work and care they put into maintaining them.
INTELLIGENCE AUTOMATION
Let’s zoom in on our mechanical environment with Figure 7. A machine is a technical device built by humans, an automaton that moves or operates « by itself ». Today, the two types of machine – software and hardware – are interdependent. They could not exist without each other, and are in principle controlled by human communities, whose physical and mental capacities they augment. Because technology externalizes, socializes and reifies human organic and psychic functions, it can sometimes appear autonomous or at risk of becoming autonomous, but this is an optical illusion. Behind « the machine » lies collective intelligence and the social relations it reifies and mobilizes.
Figure 7
Mechanical machines are those that transform motion, starting with the sail, the wheel, the pulley, the lever, gears, springs and so on. Examples of purely mechanical machines include watermills and windmills, classical clocks, Renaissance printing presses and the first weaving looms.
Energetic machines are those that transform energy from one form into another: heat, motion, electricity. Examples include furnaces, forges, steam engines, internal combustion engines, electric motors, and contemporary processes for generating, transmitting and storing electricity.
As for electronic machines, they control energy and matter at the level of electromagnetic fields and elementary particles, and very often serve to control the lower-layer machines on which they also depend. For our purposes here, these are mainly data centers (the « cloud »), networks and devices that are in direct contact with end-users (the « edge »), such as computers, telephones, games consoles, virtual reality headsets and the like.
Let’s take a look at the virtual part, which corresponds to the shared memory we placed at the heart of our description of the self-organizing cycle of collective action. While point-to-point messages are still exchanged, most social communication now takes place stigmergically in digital memory. We communicate via the oceanic mass of data that brings us together. Every link we create, every tag or hashtag affixed to a piece of information, every act of evaluation or approval, every « like », every request, every purchase, every comment, every share – all these operations subtly modify the common memory, i.e. the inextricable magma of relationships between data. Our online behavior emits a continual flow of messages and clues that transform the structure of memory, help direct the attention and activity of our contemporaries, and drive artificial intelligence. But all this happens today in a rather opaque way, which does not do justice to the phase of deliberation and conscious conceptualization that an ideal collective intelligence would require.
Above all, memory comprises the data that is produced, retrieved, explored and exploited by human activity. Human-machine interfaces represent the « front-end » without which nothing is possible. They directly determine what we call the user experience. Between interfaces and data, there are two main types of artificial intelligence models: neural models and symbolic models. We saw above that « natural » human intelligence is based on neural and symbolic coding. We find these two types of coding, or rather their electronic transposition, at the digital memory layer. It’s worth noting that these two approaches, neural and symbolic, already existed in the early days of AI, as early as the middle of the 20th century.
The neural models are trained on the multitude of digital data available, and they automatically extract patterns that no human programmer would have been able to work out. Conditioned by their training, the algorithms can then recognize and produce data corresponding to the learned patterns. But because they have abstracted structures rather than recording everything, they are now capable of correctly categorizing forms (image, text, music, code…) they have never encountered before, and producing an infinite number of new symbolic arrangements. This is why we speak of generative artificial intelligence. Neural AI synthesizes and mobilizes shared memory. Far from being autonomous, it extends and amplifies the collective intelligence that produced the data. What’s more, millions of users contribute to perfecting the models by asking them questions and commenting on the answers they receive. Take Midjourney, for example, whose users exchange prompts and constantly improve their AI skills. Today, Midjourney’s Discord servers are the most populous on the planet, with over a million users. A similar phenomenon is beginning to unfold around DALLE 3. A new stigmergic collective intelligence is emerging from the fusion of social media, AI and creator communities. These are examples of conscious contributions of collective human intelligence to artificial intelligence systems.
Many generalist pre-trained models are open-source, and several methods are now being used to refine or adjust them to particular contexts, whether based on elaborate prompts, additional training with special data or by means of human feedback, or a combination of these methods. In short, we now have the first beginnings of a neural collective intelligence, which emerges from a statistical calculation on data. However, neural models, useful and practical as they may be, are unfortunately not reliable knowledge bases. They inevitably reflect common opinion and the biases inherent in the data. Because of their probabilistic nature, they are prone to all kinds of errors. Finally, they don’t know how to justify their results, and this opacity is not conducive to building confidence. Critical thinking is therefore more necessary than ever, especially if training data is increasingly produced by generative AI, creating a dangerous epistemological vicious circle.
Let’s turn now to symbolic models. We call them by various names: tag collections, classifications, ontologies, knowledge graphs or semantic networks. These models can be reduced to explicit concepts and equally explicit relationships between these concepts, including causal relationships. They allow data to be organized semantically according to the practical needs of user communities, and enable automatic reasoning. With this approach, we obtain reliable, explicable knowledge that is directly adapted to the intended use. Symbolic knowledge bases are wonderful ways of sharing knowledge and skills, and therefore excellent tools for collective intelligence. The problem is that ontologies or knowledge graphs are created « by hand ». Formal modeling of complex knowledge domains is difficult. The construction of these models is time-consuming for highly specialized experts and therefore costly. The productivity of this intellectual work is low. On the other hand, while there is interoperability at the level of file formats for semantic metadata (or classification systems), this interoperability does not exist at the semantic level of concepts, which compartmentalizes collective intelligence. Wikidata is used for encyclopedic applications, schema.org for websites, the CIDOC-CRM model for cultural institutions, and so on. There are hundreds of incompatible ontologies from one domain to another, and often even within the same domain.
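To see why explicit concepts and relations support automatic reasoning and explanation, consider this minimal sketch of a knowledge graph with a single transitive relation. The facts are invented examples; the point is that every inferred statement can be traced back to explicit premises.

```python
# Invented facts: explicit concepts and an explicit "belongs_to" relation.
facts = {
    ("sociology", "belongs_to", "humanities"),
    ("humanities", "belongs_to", "knowledge"),
}

def closure(triples):
    """Transitive closure of 'belongs_to': a tiny, explainable inference step."""
    inferred = set(triples)
    changed = True
    while changed:
        changed = False
        new = {(a, "belongs_to", c)
               for (a, r1, b) in inferred
               for (b2, r2, c) in inferred
               if r1 == r2 == "belongs_to" and b == b2}
        if not new <= inferred:
            inferred |= new
            changed = True
    return inferred

for triple in sorted(closure(facts) - facts):
    print("inferred:", triple)   # ('sociology', 'belongs_to', 'knowledge')
```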
For years, many researchers have been advocating the use of hybrid neuro-symbolic models, in order to benefit from the advantages of both approaches. My message is as follows. If we want to move towards a digitally-supported collective intelligence worthy of the name, and which keeps up with our contemporary technical possibilities, we need to:
1) Renew symbolic AI by increasing the productivity of formal modeling and decompartmentalizing semantic metadata.
2) Couple this renewed symbolic AI with neural AI, which is in full development.
3) Put this previously unseen hybrid AI at the service of collective intelligence.
IEML: FOR A SEMANTIC KNOWLEDGE BASE
We have automated and pooled pattern recognition and automatic pattern generation, which are more neural in nature. How can we automate and pool conceptualization, which is more symbolic in nature? How can we bring together formal conceptualization by living humans and the pattern recognition that emerges from statistics?
Figure 8
Because our collective intelligence is increasingly based on a shared digital memory, I’ve been looking over the last thirty years for a semantic coordinate system for digital memory, a metadata system that would automate conceptualization operations and enable conceptual models to be shared.
The only thing that can generate all the concepts we want, while maintaining mutual understanding, is a language. But natural languages are irregular and ambiguous, and their semantics cannot be computed. So I built a language – IEML (Information Economy MetaLanguage) – whose internal semantic relations are functions of syntactic relations. IEML is both a language and an algebra. It is designed to facilitate and automate the construction of symbolic models as far as possible, while ensuring their semantic interoperability. In short, it’s a tool for automating and sharing conceptualization, with the vocation of serving as a universal semantic metadata system.
We can now answer our main question: how can we use artificial intelligence to increase collective intelligence? We need to imagine an ecosystem of semantic knowledge bases organized according to the architecture described in figure 8. As you can see, there are three layers between the human-machine interface and the data. In the center, the semantic metadata layer organizes the data on a symbolic level and, thanks to its algebraic structure, enables all kinds of uniform logical, analogical and semantic calculations. We know that symbolic modeling is difficult, and today’s ontology editors do not make it easy. That is why, under the metadata layer, I propose using a neural model to translate natural sign systems into IEML and vice versa, making the editing and inspection of semantic models as intuitive as possible. Between the metadata layer and the data layer, another neural model would enable the automatic generation of data from IEML prompts. In the opposite direction, this neural model would automatically classify the data and integrate it into the semantic model of the user community. Note that the algebraic properties of IEML are specifically designed to improve machine learning.
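The following sketch only makes the division of labor between the layers explicit. All function names are hypothetical placeholders for the two neural models just described, not an existing IEML API.

```python
# Purely schematic sketch of the architecture of figure 8.
# nl_to_ieml, ieml_to_nl, generate_data and classify_data are hypothetical stubs.

def nl_to_ieml(natural_language_text: str) -> str:
    """Neural model 1: translate a natural-language statement into an IEML expression."""
    ...

def ieml_to_nl(ieml_expression: str, language: str = "en") -> str:
    """Neural model 1, reverse direction: render an IEML expression in a natural language."""
    ...

def generate_data(ieml_prompt: str) -> bytes:
    """Neural model 2: generate a document (text, image, ...) from an IEML prompt."""
    ...

def classify_data(document: bytes, community_model: set) -> str:
    """Neural model 2, reverse direction: assign the document an IEML category
    taken from the semantic model of the user community."""
    ...

# Editing loop: a user states a concept in plain language, the knowledge base
# generates the corresponding data and files it under the community's semantic model.
concept = nl_to_ieml("ritual objects of the Bronze Age")
document = generate_data(concept)
category = classify_data(document, community_model={concept})
```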
The immersive human-machine interface using natural signs would enable anyone to collaborate in the conceptualization of models at the level of semantic metadata, and to generate the appropriate data by means of transparent prompts. Finally, this knowledge base would automate data categorization, exploitation and multimedia exploration.
Such an approach would enable each community to organize itself according to its own semantic model, while supporting the comparison and exchange of concepts and sub-models. In short, an ecosystem of semantic knowledge bases using IEML would simultaneously maximize (1) intellectual productivity, through the partial automation of conceptualization, (2) the transparency of models and the explainability of results, so important from an ethical point of view, (3) the pooling of models and data, thanks to a common semantic coordinate system, and (4) diversity and creative freedom, since the networks of concepts formulated in IEML can be differentiated and complexified at will. A fine program for collective intelligence. My wish is for a digital memory that will enable us to cultivate diverse and fertile ecosystems of ideas and reap the maximum benefits for human development.
Let’s now think about the new digital public sphere. I will begin by discussing the anthropological and demographic context of the public sphere’s shift into the digital environment. Then I will analyze the original forms of memory and communication supported by the new medium. Next, I will discuss the figures of domination and alienation specific to the new public sphere. I will finish, as is fitting, with some emancipatory perspectives.
1 Context
A new era in culture
One of the main factors in the evolution of ecosystems of ideas lies in the material devices for producing and reproducing symbols, but also in the « software » systems of writing and information coding. In the course of history, symbols (and the ideas they carry) have been successively perpetuated by writing, lightened by the alphabet and paper, and multiplied by printing and electronic media. At each stage, new political forms have emerged: palace-temples and the first states with writing, empires and cities with the alphabet or paper, nation-states with printing and electronic media.
Nowadays symbols are digitized and computed, i.e. a host of software robots – the algorithms – record, count, translate and extract patterns from these symbols. Symbolic objects (texts, still or moving images, voices, music, programs, etc.) are not only recorded, reproduced and transmitted automatically; they are also transformed and generated on an industrial scale. In short, cultural evolution has brought us to the point where ecosystems of ideas manifest themselves as algorithmically animated data in a ubiquitous virtual space. And it is in this space that social ties are formed, maintained and unraveled, where the dramas of the polis are now played out…
The demographic shift
The hypothesis of a rapid, large-scale anthropological change is based on uncontroversial quantitative data.
Access to computers. We can estimate that 0.1% of the world’s population had direct access to a computer in 1975 (before the personal computer revolution). This proportion was 20% in rich countries in 1990 (before the Web revolution). In 2022, for European countries, the proportion ranged from 65% (Greece) to 95% (Luxembourg). Note that these figures do not take smartphones into account.
Internet access. The proportion of the world’s population with access to the Internet was about 1% in 1990 (before the Web), 4% in 1999, 24% in 2009, 51% in 2018 and 65% in 2023. According to the International Telecommunication Union, about 5 billion people are Internet users today. For 2023 in Europe alone, the proportion of the population connected to the Internet reaches 93% (European Union data).
News. To complete these statistics with some data more directly related to politics, 40% of Europeans and 50% of Americans and Canadians read the news through social media (I do mean social media, not the Internet in general). The proportion exceeds 50% everywhere among those under forty. As for reading newspapers versus reading news online: 80% of those under thirty read news online (Pew Research Center data).
2 Digital memory and communication
The new public sphere
In short, less than a century after the invention of the first computers, more than sixty-five percent of the world’s population is connected to the Internet and the world’s memory is digitized. If a piece of information is found at one point in the network, it is everywhere. From static text on paper, we have moved on to ubiquitous hypertext, and then to the surrealist Architext gathering all symbols. A virtual memory has begun to grow, secreted by billions of the living and the dead, teeming with languages, music and images, full of dreams and fantasies, mixing science and lies. The new public sphere is multimedia, interactive, global, fractal, stigmergic and – from now on – mediated by artificial intelligence.
The new public sphere is global. Both the Web and major social media such as Facebook, Twitter, LinkedIn, Telegram, Reddit, etc. are international and multilingual. Machine translation has reached a point where we can now understand, with a few errors, what someone writes in another language. I would add that, in parallel with translation, the automatic summarization of long texts is progressing, which adds to the porosity of the various cognitive and semantic bubbles.
The digital public sphere is fractal, that is, it is subdivided into subgroups, which are in turn subdivided into subgroups, and so on recursively, with all imaginable combinations and intersections. These subdivisions intersect with distinctions of platform, language, geographical area, interest, political orientation, etc. Examples include Facebook or LinkedIn groups, Discord servers, YouTube or Telegram channels, Reddit communities, etc.
Stigmergic collective intelligence
While point-to-point message exchange still takes place, most social communication now happens in a stigmergic manner. The notion of stigmergy is one of the keys to understanding how the digital public sphere works. We traditionally distinguish three communication patterns: one-to-one, one-to-many and many-to-many. The one-to-one pattern corresponds to dialogue, classic postal mail or the traditional telephone. The one-to-many scheme describes a central editor/transmitter sending messages to many so-called « passive » receivers; it corresponds to the press, the recording industry, radio and television. The Internet represents a breakthrough because it allows all participants to transmit to a large number of receivers in a decentralized, many-to-many networked scheme. This description is nevertheless misleading. Indeed, if everyone transmits to everyone (which is the case), not everyone can listen to everyone. What happens in reality is that Internet users contribute to a common memory and in return become aware of the content of this memory through automated search and selection procedures. These are the famous algorithms of Google (PageRank), Facebook, Twitter, Amazon (recommendations), and so on.
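To recall the principle at work behind such selection algorithms, here is a minimal sketch of PageRank-style ranking by power iteration: the structure of links left by users determines what the algorithm brings back to their attention. The tiny link graph is invented for the example.

```python
# Minimal power-iteration sketch of the PageRank idea (toy graph, no dangling-node handling).
links = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
}

def pagerank(graph, damping=0.85, iterations=50):
    nodes = list(graph)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new_rank = {n: (1 - damping) / len(nodes) for n in nodes}
        for node, outgoing in graph.items():
            # Each page shares its current rank equally among the pages it links to.
            share = rank[node] / len(outgoing) if outgoing else 0.0
            for target in outgoing:
                new_rank[target] += damping * share
        rank = new_rank
    return rank

print(pagerank(links))  # pages pointed to by many well-ranked pages score higher
```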
Its Greek etymology explains the meaning of the word « stigmergy » quite well: marks (stigma) are left in the environment by the action or work (ergon) of members of a community, and these marks in turn guide their actions, and so on recursively. The classic case is that of ants leaving a trail of pheromones in their wake as they bring food back to the anthill. The smell of the pheromones incites other ants to follow their tracks to discover the booty and bring food back to the underground city by leaving a scented message on the ground as well.
It can be argued that any form of writing that is not precisely addressed is a form of stigmergic communication: traces are deposited for future reading and act as the external memory of a community. The phenomenon is old, but it has taken on a new dimension since the turn of the century. Immersed in the new digital public sphere, we communicate through the oceanic mass of data that brings us together. Wikipedia’s encyclopedists and GitHub’s programmers collaborate through the same database. Unbeknownst to us, every link we create, every tag or hashtag placed on a piece of information, every act of rating or approval, every « like », every request, every purchase, every comment, every share, all of these operations subtly modify the common memory, that is, the inextricable magma of relationships between data. Our online behavior emits a continuous flow of messages and clues that transform the structure of memory and contribute to directing the attention and activity of our fellow Internet users. In an endless loop, we deposit in the virtual environment electronic pheromones that guide the actions of other Internet users and train the formal neurons of artificial intelligences (AI).
The role of Artificial Intelligence in the new public sphere
The biological brain abstracts the details of actual experiences into schemas of interactions, or concepts, encoded by patterns of neural circuits. In the same way, the neural models of AI condense the countless data of digital memory. They compress the actual data into patterns and patterns of patterns. Conditioned by their training, the algorithms can then recognize and reproduce data corresponding to the learned patterns. But because they have abstracted structures rather than recorded everything, they are able to correctly conceptualize forms (of image, text, music, code…) that they have never encountered and to produce an infinity of new symbolic arrangements. This is why we speak of generative artificial intelligence.
Digital memory is detached from its place of origin and reception, pooled, waiting to be read, suspended in the « clouds » of the Internet, embedded in software. This mass of data is now virtualized by neural models. And the patterns hidden in the myriad layers and connections of electronic brains rain down novel symbolic objects. We only sow data to harvest meaning.
AI offers us a new access to the global digital memory. It is also a way to mobilize this memory to automate increasingly complex symbolic operations, involving the interaction of heterogeneous semantic universes and accounting systems.
3 The dark side
The platform state and the new cloud bureaucracy
If the preceding analyses have any validity, political power is largely played out in the digital public sphere. And its ultimate control lies « in the clouds », in the hands of the celestial bureaucracies that calculate social interactions and memory. The clouds, that is, the networks of data centers owned by the oligopoly of GAFAM, BATX, the big social media companies and the like. This is why the contenders for global political hegemony, mainly the Americans and the Chinese, ally themselves with the data lords – or subjugate them – because the digital oligarchs hold material control over the world’s memory and the public sphere. They alone, moreover, have the storage capacity and computational power to train the so-called « foundational » general AI models. What I call a « Platform State » results from the intertwining of a political superpower with a fraction of the digital oligarchy.
Cloud bureaucracy is more efficient than the nation-state bureaucracy inherited from the age of printing. Several governmental or sovereign functions are already provided by large platforms or by « decentralized » digital networks. The following list is not exhaustive:
– Verification of people’s identity, facial recognition
– Mapping and cadastre
– Money creation
– Market regulation
– Education and research
– Fusion of defense and cyber-defense
– Control of the public sphere, censorship, propaganda, “nudge”
– Surveillance
– Biosurveillance
Social media: addictions and manipulations
Our allegiance to the data lords comes from the power of their computing centers, their software efficiency and the simplicity of their interfaces. It is also rooted in our addiction to a toxic socio-technical architecture, which uses the dopaminergic stimulation and addictive narcissistic reinforcements of digital communication to make us produce more and more data. We know how much the mental health of adolescent populations is at risk from this point of view. In addition to the biopolitics evoked by Michel Foucault, we must now consider a psychopolitics based on neuromarketing, personal data and gamification of control.
We should get used to it: the polis has moved into the great global database of the Internet. As a result, power struggles – all power struggles, be they economic, political or cultural – are replayed and complicated in the new digital space. On the slippery terrain of social media, the opposing camps have their armies of trolls coordinated in real time, equipped with the latest bots, informed by automatic data analysis and augmented by machine learning. In the raging worldwide civil war, with domestic and foreign politics inextricably intertwined, the new mercenaries are the influencers.
But all these novelties do not invalidate the classic rules of propaganda, which are still valid today: continuous repetition, simplicity of the catchwords, memorable images, emotional provocation and identity resonance. No one has forgotten Machiavelli’s wise advice on how to get the enemy to destroy himself: « Secret warfare consists of taking the confidence of a divided city, mediating between the two parties until they come to arms: and when the sword is finally drawn, giving carefully measured help to the weaker party, as much with the aim of making the war last and letting them be consumed by each other, as to guard against revealing one’s intention of oppressing and subduing them both equally, by a too massive help. If you follow this course carefully, you will almost always reach your goal. »[1]
With our heads down on our smartphones, we endlessly recycle the stereotypes that reinforce our fragmented identities and our short memories, under the snide gaze of disinformation experts, subsidized communicators, marketing specialists and geopolitical agents of influence…
AI and cultural domination
Let’s continue this review of the dark sides of the new public sphere with the issues of cultural domination linked to Artificial Intelligence. There is a lot of talk about the « biases » of this or that artificial intelligence model, as if there could be an unbiased or neutral AI. This question is all the more important because, as I suggested above, AI is becoming our new interface with symbolic objects: universal pen, panoramic glasses, general speaker, codeless programmer, personal assistant. The large generalist language models produced by the dominant platforms are now akin to a public infrastructure, a new layer of the digital meta-medium. These generalist models can be inexpensively specialized with domain-specific datasets and adjustment methods. They can also be equipped with fact-checked knowledge bases.
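As an illustration of this last possibility, here is a minimal sketch of coupling a generalist model with a curated, fact-checked knowledge base (what practitioners call retrieval-augmented generation). The knowledge base, the naive retrieval function and generate_answer are illustrative placeholders, not a specific product’s API.

```python
# Toy retrieval-augmented setup: answers are constrained to a curated knowledge base.
knowledge_base = [
    "Statement A, verified by the editorial committee on 2025-01-10.",
    "Statement B, verified by the editorial committee on 2025-02-03.",
]

def retrieve(question: str, documents: list, k: int = 1) -> list:
    """Naive lexical retrieval: rank documents by the words they share with the question."""
    def overlap(doc: str) -> int:
        return len(set(question.lower().split()) & set(doc.lower().split()))
    return sorted(documents, key=overlap, reverse=True)[:k]

def generate_answer(question: str, sources: list) -> str:
    """Placeholder for a call to a generalist model, constrained to cite its sources."""
    prompt = "Answer using only these sources:\n" + "\n".join(sources) + "\nQ: " + question
    return prompt  # a real system would send this prompt to the model and return its answer

print(generate_answer("What was verified in January?", retrieve("verified January", knowledge_base)))
```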
The results provided by an AI thus reflect several factors that all contribute to its orientation or, if you like, its « biases »:
a) The algorithms themselves select the types of statistical calculation and the structures of the neural networks.
b) The training data favor the languages, cultures, philosophical options, political leanings and prejudices of all kinds of those who produced them.
c) In order to align the AI’s responses with the supposed goals of the users, the inclinations of the data are corrected (or accentuated!) « by hand » through what is called RLHF (Reinforcement Learning from Human Feedback).
d) Finally, as with any tool, the user determines the results by means of instructions in natural language (the famous prompts). It should be noted that communities of users collaboratively exchange and improve such instructions.
The power of these systems is matched only by their complexity, their heterogeneity and their opacity. Regulatory control of AI, while undoubtedly necessary, seems difficult.
4 Emancipation perspectives
Digital literacy and critical thinking
Despite all of the above, the public sphere of the 21st century is more open than that of the 20th century: citizens in democratic countries enjoy a great deal of freedom of expression and can choose their sources of information from a wide range of thematic specializations, languages and political orientations. This freedom of expression and information, the new distributed power of data creation and analysis, not to mention the possibilities of social coordination offered by the new medium, all represent an emancipatory potential. But only a true education in critical thinking within the new communication environment will actualize this potential for renewed citizenship. To put this in perspective, a BBC study recently showed that half of young people aged 12 to 16 believe news shared on social media without checking it. And we know from experience that children are not the only gullible subjects. Ideally, the new critical-thinking education should teach future citizens to organize themselves as small, autonomous intelligence agencies: prioritizing their interests, carefully selecting diverse sources, analyzing data on the basis of explicit hypotheses, and maintaining a relevant classification of their personal digital memory. They must learn to assess data sources in terms of their organizing categories, dominant narratives and agendas. The basic journalistic reflex of cross-checking sources identified in this way should also be instilled. Finally, students should be trained in stigmergic collective intelligence and collaborative learning.
For a governance of the digital public sphere
I will limit myself here to indicating a few major orientations for the necessary governance of the new public sphere, rather than determining precisely the means to achieve it. While steering in heavy weather may require many detours, the course is clear: it is a matter of perfecting, as far as possible, the reflexive dimension of a collective intelligence already in action. a) In support of this goal, the transparency of online processes seems a sine qua non. In particular, but not only, I have in mind a clear, brief, natural-language description of AI training algorithms and data. b) Following the example of Wikimedia, let us strive to maximize the knowledge commons. c) Let us open up data sets and algorithms along the lines of the free-software movement. d) Let us ensure the practical and legal sovereignty of individuals and groups over the data they produce. e) Finally, let us decentralize the governance of online interactions by promoting consensual procedures; the social movement supporting the blockchain indicates a possible path here.
In order to contribute to the project of a reflexive collective intelligence, I have invented a language, IEML (Information Economy MetaLanguage, described in a peer-reviewed scientific paper), which has the same capacity for expression and translation as natural languages but also the regularity of an algebra, thus allowing semantics to be computed. This language could serve as a semantic coordinate system for the new public sphere. It would thus help transform the digital memory into a mirror of our collective intelligences. From then on, a more fluid feedback loop between the ecosystems of ideas and the communities that sustain them would bring us closer to the ideal of a reflexive collective intelligence at the service of human development and a renewed democracy. This is not to entertain any illusion about the possibility of total transparency, but rather to open the way to the critical exploration of an infinite universe of meaning.
[1] Discours sur la première décade de Tite-Live. La Pléiade, Gallimard, Paris, p. 588, my translation.