See the original publication in Portuguese here: https://www.pucpress.com.br/wp-content/uploads/2025/05/CADERNOS_DO_CONTEMPORANEO_0000_P.pdf
For the French version, go to: https://pierrelevyblog.com/2025/06/19/les-reseaux-sociaux-et-lia-sont-ils-responsables-de-nos-malheurs/

Interview with Prof. Dr. Pierre Lévy
Q1 – Faced with growing hyper-connectivity among young people, many experts speak of loneliness and what they call « the age of sad passions. » How do you see this dichotomy between proximity and distance that technology provokes in human relationships?
A1 – Hyper-connectivity doesn’t only concern young people; it’s everywhere. One of the main factors in cultural evolution lies in the material apparatus for producing and reproducing symbols, but also in the software systems for writing and coding information. Our collective intelligence extends that of the social species that preceded us, particularly the great apes. But the use of language – and of other symbolic systems – as well as the power of our technical means has moved us from the status of social animal to that of political animal. Properly human, the Polis emerges from the symbiosis between ecosystems of ideas and the populations of speaking primates who maintain them, feed on them, and see themselves reflected in them. The evolution of ideas and that of Sapiens populations mutually determine each other. Now, the main factor in the evolution of ideas lies in the material apparatus for reproducing symbols. Throughout history, symbols (with the ideas they carried) have been successively perpetuated by writing, lightened by the alphabet and paper, multiplied by printing and electric media. Symbols are now digitized and computed, meaning that a crowd of software robots – algorithms – record, count, translate, and extract patterns from them. Symbolic objects (texts, still or moving images, voices, music, programs, etc.) are not only recorded, reproduced, and transmitted automatically; they are also generated and transformed industrially. In sum, cultural evolution has led us to the point where ecosystems of ideas manifest themselves as data animated by algorithms in a ubiquitous virtual space. And it is in this space that social bonds are now formed, maintained, and dissolved.

Before criticizing or deploring, we must first recognize the facts. Young people’s friendships can no longer do without social media; couples meet on the internet, for example on applications like Tinder (see Figure 1); families stay connected through Facebook or applications like WhatsApp; workspaces have shifted online with Zoom and Teams, particularly since the COVID pandemic; diplomacy is increasingly conducted on X (formerly Twitter); and so on. We won’t go back. On the other hand, we don’t move around any less physically: witness the monstrous traffic jams in big cities. In the same vein, the last ten years – a time of exponential growth in internet connections – have also seen an increase in the number of air passengers, continuing a century-long trend despite a significant drop during the COVID-19 pandemic.
I felt quite alone when, as a young student, I arrived in Paris from southern France to pursue my university studies. It was 1975 and there was no internet. Should seniors who live alone and whose children don’t visit them blame the Internet? The problem of loneliness and the disintegration of social bonds is very real. But it’s an already old trend, which stems from urbanization, transformations of the family, and many other factors. I invite your readers to consult the works on the topic of « social capital » (the quantity and quality of human relationships). The internet is only one of many factors to consider on this question.
Figure 1
Q2 – In your books « Collective Intelligence: For an anthropology of cyberspace » (1994) and « Cyberculture: The Culture of the Digital Society » (1997), you argue that the Internet and digital technologies develop collective intelligence, enabling new forms of collaboration and knowledge sharing. However, there is growing concern that excessive use of social media and digital technologies is associated with distraction and learning delays in young people. How do you see this apparent contradiction between the potential of technologies to strengthen collective intelligence and the negative effects they can have on the cognitive and educational development of young people?
A2 – I have never argued that the Internet and digital technologies, by themselves and as if techniques were autonomous subjects, develop collective intelligence. I have argued that the best use we could make of the internet and digital technologies was to develop human collective intelligence, which is quite different. And that is still what I think. The idea of a « knowledge space » that could unfold above the commercial space is a regulatory ideal for action, not a factual prediction. When I wrote Collective Intelligence – between 1992 and 1993 – less than 1% of humanity was connected to the Internet and the Web didn’t exist. You won’t find the word « web » anywhere in the book. Yet today – in 2025 – well over two-thirds of the world’s population is connected to the Internet. The context is therefore completely different, but the civilizational change I predicted 30 years ago seems obvious today, even though we normally have to wait several generations to confirm this type of mutation. In my opinion, we are only at the beginning of the digital revolution.
As for the increase in collective intelligence, many steps have been taken to make knowledge accessible to all. Wikipedia is the classic example of an enterprise that functions through collective intelligence, with millions of volunteer contributors from all countries and discussion groups among experts for each article. There are nearly seven million articles in English, two and a half million in French, and more than one million in Portuguese. (Note nevertheless that some articles on current events are biased. Always check against other sources!) Wikipedia is consulted by several tens of millions of people per day and several billion per year! Free software – now widely adopted and distributed, including by major Web companies – is another major domain where collective intelligence is in command. Among the most used free software, let’s mention the Linux operating system, the Mozilla Firefox and Chromium browsers, the OpenOffice suite, the Apache HTTP server (one of the most used on the Internet), the Git version control system, Signal messaging, and many others that would take too long to cite. I would add that digitized libraries and museums, like open-access scientific articles and sites such as arXiv.org, are now commonplace, which transforms research and scientific communication practices. Everyone can now publish texts on their blog, or videos and podcasts on YouTube and other sites, which wasn’t the case thirty years ago. Social media allow news and ideas to be exchanged very quickly, as we see for example on LinkedIn or X (formerly Twitter). The Internet has therefore really enabled the development of new forms of expression, collaboration, and knowledge sharing. Much remains to be done. We are only at the very beginning of the ongoing anthropological mutation.
Of course, we must take into account phenomena of addiction to video games, social media, online pornography, etc. But for more than thirty years, the majority of journalists, politicians, teachers, and all those who shape opinion have not stopped denouncing the dangers of computing, then of the Internet, and now of artificial intelligence. I would do nothing very useful if I added my lamentations to theirs. I therefore try to make people aware of a large-scale civilizational mutation that won’t be stopped and to indicate the best means of directing this great transformation toward the most positive purposes for human development. That said, it’s clear that addiction phenomena partially find their source in our dependence on the toxic sociotechnical architecture of major Web companies, which uses dopaminergic stimulation and narcissistic reinforcements to make us produce ever more data and sell more advertising. Unfortunately, the mental health of adolescent populations may be one of the collateral victims of the commercial strategies of these major oligopolistic companies. How can we oppose the power of their data centers, their software efficiency, and the simplicity of their interfaces? It’s easier to ask the question than to answer it. In addition to the biopolitics evoked by Michel Foucault, we must now consider a psychopolitics based on neuromarketing, personal data, and gamification of control. Teachers must warn students of these dangers and train them in critical thinking.
Q3 – With the phenomenon of « connective bubbles, » where social networks tend to reinforce pre-existing beliefs and ideas, limiting contact with different perspectives, how do you see the evolution of social bonds as the Internet and digital platforms continue to develop? Could this type of segmentation weaken the collective intelligence you advocate, or is there still room for broader and more collaborative connections in the future?
A3 – It’s clear that if we’re content to instinctively « like » whatever scrolls by and to react emotionally to the most simplistic images and messages, the cognitive benefit won’t be very great. I don’t pose as an absolute model to follow; I would simply like to give an example of what it’s possible to do if we have a little imagination and are ready to question the inertia of institutions. When I was a professor of digital communication at the University of Ottawa, I required my students to register on Twitter, to choose half a dozen subjects that interested them, and to compile lists of accounts to follow for each subject. Whatever the theme – politics, science, fashion, art, sports, etc. – they had to build balanced lists including experts or supporters of opposing views in order to expand their cognitive sphere instead of restricting it. On the most common social media, like Facebook and LinkedIn, it’s possible to participate in a large number of communities specialized in cultural domains (history, philosophy, the arts) or professional ones (business, technology, etc.) in order to stay informed and discuss with experts. Local discussion groups for villages or neighborhoods are also very useful. Everything is a matter of method and practice. We must detach ourselves from the mass-media model (newspapers, radio, television) in which passive receivers consume programming made by others. It’s up to each person to cobble together their own programming and build their personal learning networks.
Before printing, we only spoke with people from our parish. In the 1960s, we only had the choice between two or three television channels and two or three newspapers. Today we have access to an enormous diversity of sources from all countries and all sectors of society. Teachers must make students literate, teach them foreign languages, give them a good general culture, and guide them in this new universe of communication.
Q4 – Currently, there is a growing debate about the negative effects of technology on young people’s mental health, focusing on problems such as anxiety, depression, and social isolation. Considering the central role that digital technologies play in our society, how do you understand this relationship between intensive use of technologies and the increase in mental health problems among young people? Is there a way to balance the advantages of technology with the need to preserve mental well-being?
A4 – The problem of young people’s mental health is of course quite real, but it would be reductive to attribute it solely to social media. Nevertheless, I will try to list some of the psychological problems that arise from the use of digital technologies.
First, there is the transformation of subjective self-reference, which risks leading to schizophrenic-type problems. Our field of experience is mediated by digital supports: the loop of self-reference is wider than ever. We interact with people, robots, images, and music through several multimedia interfaces: screen, headphones, controllers… Our subjective experience is steered by the algorithms of multiple applications, which determine in a loop (if we haven’t learned to master them) our data consumption and, in return, our actions. Our memory is dispersed across numerous files and databases, locally and in the cloud… When a large part of ourselves is thus collectivized and externalized, the problem of the limits and determination of identity becomes paramount. Who owns the data concerning me, and who produces it?
The problem of narcissism is particularly evident on Instagram and similar applications. Our ego is nourished by the image that others send back to us in the algorithmic medium. The obsession with « optics » reaches worrying proportions. How many subscribers, how many likes, how many impressions? For those who have fallen into this abyss, the value of being lies only in the gaze of the other. Before being a mental health problem, it’s a matter of elementary wisdom.
At the opposite pole from narcissism, we find a tendency toward autism. Here the self is locked into its inner life, but fed by online information sources. Code, or certain aspects of popular culture, becomes obsessional. This is the domain of geeks, otakus, and compulsive gamers. It’s obviously unhealthy to do without any flesh-and-blood social life.
There is a mental health problem if affects are constantly euphoric, or constantly dysphoric, or if an exclusive object becomes addictive. Indeed, the Internet can make us dependent in an unbalanced way on certain objects (news, series, games, pornography) or certain emotions, whether positive (« feel-good » content like cute cats, dance, humor, etc.) or negative (catastrophic news, « doomscrolling »). We can also wonder to what extent it’s good for body language to be entirely replaced by emojis, memes, images, avatars, etc.
Addiction is created by the excitement (dopamine) and satisfaction (endorphin) that we want to reproduce endlessly. Now, as I said above, the business models of major web companies that focus on engagement (dopamine-endorphin secretion) lead almost inevitably to dependence if users aren’t careful. High engagement intensity for too long inevitably leads to depression.
Impulse control (aggression, for example) is more difficult on social media than in real life because our interlocutors are not in front of us. « Toxic behavior management » is indeed a major problem in online games and social media.
In sum, we must be vigilant and warn young users of the dangers they face, without falling into excess ourselves.
Q5 – Some predict that future generations might never attend school again. How do you see the future of education in an increasingly hyperconnected world dominated by technology?
A5 – I don’t believe school will disappear. But it must transform. We must meet students where they are and, preferably, turn the consumer products they’re accustomed to into something useful for learning. Students are « digital natives », but that doesn’t mean they have true mastery of digital tools. We must develop not only digital literacy but literacy in general, which is inseparable from it. I’m a great supporter of reading the classics and of general culture, which is indispensable for forming critical thinking.
To return to my own pedagogical methods, in the courses I taught at the University of Ottawa, I asked my students to participate in a closed Facebook group, to register on Twitter, to open a blog if they didn’t already have one, and to use a collaborative data curation platform.
The use of content curation platforms served to teach students how to choose categories or « tags » to classify useful information in long-term memory, in order to easily find it later. This skill will be very useful to them for the rest of their careers.
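To make the principle concrete, here is a deliberately minimal sketch – a toy invented for illustration, not any particular curation platform – of what tagging amounts to in practice: maintaining an index from categories to items so that something filed today can be found again by theme years later.

```python
from collections import defaultdict

# A minimal tag index: each tag points to the set of items filed under it.
tag_index = defaultdict(set)

def curate(item, tags):
    """File an item (a link, a note, a reference) under one or more tags."""
    for tag in tags:
        tag_index[tag].add(item)

def retrieve(*tags):
    """Find the items filed under all of the given tags."""
    sets = [tag_index[tag] for tag in tags]
    return set.intersection(*sets) if sets else set()

curate("Preprint on AI tutoring systems", {"AI", "education"})
curate("Notes on Wikipedia governance", {"collective intelligence", "education"})

print(retrieve("education"))          # both items
print(retrieve("AI", "education"))    # only the preprint
```

Real curation platforms add sharing, comments, and recommendations on top of this same basic index.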
Blogs were used as supports for « final assignments » in undergraduate courses (i.e., before the master’s), and as research notebooks for master’s or doctoral students: notes on readings, formulation of hypotheses, accumulation of data, first versions of scientific articles or chapters of dissertations or theses, etc. A public research notebook facilitates the relationship with the supervisor, allows risky research directions to be corrected in time, makes it easier to get in touch with teams working on the same subjects, and so on.
The Facebook group was used to share the syllabus or « course plan », the class agenda, required readings, internal group discussions – for example those concerning evaluation – as well as students’ online addresses (Twitter, blog, social curation platform, etc.). All this information was online and accessible in a single click, including the digitized, freely available required readings. Students could participate in writing mini-wikis within the Facebook group on subjects of their choice; they were invited to suggest interesting readings related to the course subject by adding commented links. I used Facebook because almost all students already had an account and the platform’s group functionality is well established. But I could have used any other collaborative group platform, such as Slack or LinkedIn groups.
On Twitter (now X), the conversation specific to each class was identified by a hashtag. At first, I used the blue bird medium occasionally. For example, at the end of each class I asked students to note the most interesting idea they had retained from the course and I scrolled through their tweets in real time on the class screen. Then, after a few weeks, I invited them to reread their collective traces on Twitter to gather and summarize what they had learned and ask questions – still on Twitter – if something wasn’t clear, questions I answered through the same channel.
After a few years of using Twitter in class, I became bolder and asked students to take their notes directly on this social medium during the course, in order to obtain a collective notebook. Being able to see how others take notes (whether on the course or on the texts to read) allows students to compare their understandings and thus clarify certain notions. They discover that what others have noted is not necessarily what struck them… When I felt attention flagging a bit, I asked them to stop, reflect on what they had just heard, and note their ideas or questions, even if their remarks weren’t directly related to the course subject. Twitter allowed them to dialogue freely among themselves on the subjects studied without disturbing the class’s functioning. I always devoted the end of the course to a question-and-answer period based on a collective viewing of the Twitter feed. This method is particularly relevant for groups that are too large (sometimes more than two hundred people) to allow all students to express themselves orally. I could thus calmly answer questions after class, knowing that my explanations remained inscribed in the group’s feed. The pedagogical conversation continues between courses. Of course, all this was only possible because evaluation (student grading) was based on their online participation.
By using Facebook and Twitter in class, students not only learned the course material but also a « cultured » way of using social media. Documenting one’s breakfasts or the latest boozy party, disseminating cat videos and comic images, exchanging insults between political enemies, getting excited about show business stars, or advertising for this or that company are certainly legitimate uses of social media. But we can also maintain constructive dialogues in studying a common subject. In sum, I believe education must progress toward collaborative learning using digital tools.
Q6 – What are, in your opinion, the main opportunities that the Internet and new AI tools can bring to the field of education? Given the accelerated advancement of digital technologies and artificial intelligence, how do you see the role of the teacher evolving in the coming years?
A6 – Concerning artificial intelligence (for example ChatGPT, Meta AI, Grok, Claude, DeepSeek, or Gemini, which are all free and quite good), it can be very useful as a mentor for students or as a first-resort encyclopedia, giving answers and orientation very quickly. Students already use these tools, so we shouldn’t prohibit their use but, once again, cultivate it and bring it to a higher level. Since generative AI is statistical and probabilistic in nature, it regularly makes errors. We must therefore always verify information in real encyclopedias, search engines, specialized sites, or even… in a library! Note that using the advanced « web search » options can mitigate errors and point to real references. I would add that the more cultured we are and the better we know a subject, the more fruitful the use of generative AIs becomes, because we are then capable of asking good questions and requesting additional information when we sense that something is missing. AI does not compensate for ignorance; on the contrary, it rewards those who already have solid knowledge.
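To illustrate the « statistical and probabilistic » point, here is a minimal sketch – with invented probabilities, not taken from any real model – of how a language model samples its next word from a probability distribution: even when the correct answer is the most likely one, repeated sampling regularly produces confident-sounding errors, which is why verification remains necessary.

```python
import random

# Toy next-word distribution for the prompt "The capital of Australia is".
# The probabilities are invented for illustration; real models compute them
# from billions of parameters, but the sampling principle is the same.
next_word_probs = {
    "Canberra":  0.55,  # correct
    "Sydney":    0.30,  # plausible-sounding error
    "Melbourne": 0.10,  # plausible-sounding error
    "Vienna":    0.05,  # unrelated error
}

def sample_next_word(probs, temperature=1.0):
    """Sample one word; a higher temperature flattens the distribution."""
    words = list(probs)
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(words, weights=weights, k=1)[0]

# Even though the correct answer is the most probable, repeated sampling
# regularly yields wrong completions stated with the same assurance.
for _ in range(5):
    print("The capital of Australia is", sample_next_word(next_word_probs, 1.2))
```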
Using generative AIs to write in our place, or to summarize texts instead of reading books, is not a good idea, at least in a pedagogical setting – except, of course, when the practice is supervised by the teacher in order to stimulate critical thinking and a taste for good style. AI texts are often redundant, banal, and easily recognizable. Moreover, their document summaries fail to grasp what is most original in a text, since the models haven’t been trained on rare ideas but on the general opinion found everywhere. We learn to think by reading and writing ourselves: AIs are therefore good auxiliaries, but by no means outright replacements for human intellectual activity.
Q7 – There is growing fear that AI could eliminate many jobs in the future. How do you think this will affect the job market and what could be possible solutions?
A7 – By its very name, artificial intelligence naturally evokes the idea of an autonomous machine intelligence that stands opposite human intelligence in order to simulate or surpass it. But if we observe the real uses of artificial intelligence devices, we must note that, most of the time, they augment, assist, or accompany the operations of human intelligence. In the era of expert systems – during the 1980s and 1990s – I observed that the critical knowledge of specialists within an organization, once codified as rules animating knowledge bases, could be made available to the members who needed it most, responding precisely to the situation at hand and remaining available at all times. Rather than supposedly autonomous artificial intelligences, these were media for disseminating practical know-how, whose main effect was to increase the collective intelligence of the communities that used them.
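For readers who have never seen that earlier generation of systems, here is a minimal sketch – a toy invented for illustration, not an actual expert system – of the principle: the specialist’s know-how is written down as explicit if-then rules, and a small inference engine applies them to the facts of the current situation.

```python
# Facts describing the current situation, and rules codifying expert know-how.
# Everything here is a toy; real expert systems of the 1980s-90s held
# hundreds or thousands of such rules in a knowledge base.
facts = {"customer_is_new", "order_over_1000"}

rules = [
    ({"customer_is_new"},                               "require_prepayment"),
    ({"order_over_1000"},                               "manager_approval_needed"),
    ({"require_prepayment", "manager_approval_needed"}, "flag_for_review"),
]

def forward_chain(facts, rules):
    """Apply the rules repeatedly until no new conclusion can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived - facts

print(forward_chain(facts, rules))
# -> {'require_prepayment', 'manager_approval_needed', 'flag_for_review'}
```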
In the current phase of AI development, the role of the expert is played by the crowds that produce the data, and the role of the cognitive engineer who codifies knowledge is played by neural networks. Instead of asking linguists how to translate, or recognized authors how to produce a text, statistical models exploit the output of multitudes of anonymous web writers and automatically extract patterns of patterns that no human programmer could have made explicit. Conditioned by their training, the algorithms can then recognize and reproduce data corresponding to the learned forms. But because they have abstracted structures rather than recording everything, they are now capable of correctly handling forms (image, text, music, code…) they have never encountered and of producing an infinity of new symbolic arrangements. This is why we speak of generative artificial intelligence. Far from being autonomous, this AI extends and amplifies collective intelligence. Millions of users contribute to improving the models by asking them questions and commenting on the responses they receive. We can take the example of Midjourney (which generates images), whose users exchange their prompts and constantly improve their skills. Midjourney’s Discord server is one of the most populous on the planet, with more than a million users. A new stigmergic collective intelligence emerges from the fusion of social media, AI, and creator communities. Behind « the machine », we must glimpse the collective intelligence it reifies and mobilizes.
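The same statistical logic can be miniaturized into a toy example. The sketch below – a crude bigram model invented for illustration, nothing like a real neural network in scale or architecture – extracts word-to-word patterns from a few sentences and then generates sequences that appear nowhere verbatim in its « training data ».

```python
import random
from collections import defaultdict

# A handful of "training" sentences standing in for the multitudes of web texts.
corpus = [
    "collective intelligence grows with shared knowledge",
    "shared knowledge grows with open networks",
    "open networks connect communities of creators",
]

# Extract word-to-word patterns (bigram transitions) from the corpus.
transitions = defaultdict(list)
for sentence in corpus:
    words = sentence.split()
    for current, nxt in zip(words, words[1:]):
        transitions[current].append(nxt)

def generate(start, length=7):
    """Produce a new word sequence by following the learned transitions."""
    words = [start]
    for _ in range(length):
        followers = transitions.get(words[-1])
        if not followers:
            break
        words.append(random.choice(followers))
    return " ".join(words)

# The output recombines learned patterns into sentences that appear nowhere
# in the corpus, e.g. "collective intelligence grows with open networks".
print(generate("collective"))
```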
AI offers us new access to global digital memory. It’s also a way to mobilize this memory to automate increasingly complex symbolic operations, involving the interaction of semantic universes and heterogeneous accounting systems.
I don’t believe for a second in the end of work. Automation makes certain jobs disappear and creates new ones. There are no more farriers, but mechanics have replaced them. Water carriers have given way to plumbers. The complexification of society increases the number of problems to solve. « Intelligent » machines will mainly increase the productivity of cognitive work by automating what can be automated. There will always be a need for intelligent, creative, and compassionate people, but they will have to learn to work with new tools.
Q8 – Some authors evoke the inversion of the « Flynn effect, » suggesting that future generations will have a lower cognitive level than their parents. How do you see this issue in the context of emerging technologies? Do you think that intensive use of digital technologies could contribute to this trend, or do they offer new ways to expand our cognitive capabilities?
A8 – The decline in cognitive (and moral) level has been deplored for centuries by each generation, while the Flynn effect shows precisely the opposite. It’s normal that we are witnessing a stabilization of Intelligence Quotient (IQ) scores: the hope of a constant increase was never very realistic, and it would be normal to reach a limit or plateau, as in any other historical or even biological phenomenon. But let’s grant that today’s young people have lower IQ scores than the generations immediately preceding them. We must first ask what these tests measure: mainly scholastic intelligence. They don’t take into account emotional intelligence, relational intelligence, aesthetic sensitivity, physical or technical skills, or even practical common sense. So what is being measured is something quite limited. On the other hand, if we stick to the adaptation to scholastic functioning that IQ tests measure, why first accuse technologies? Perhaps there is an abdication by families in the face of the educational task (notably because families are breaking apart), or a failure of schools and universities that become ever more lax (because students have become clients to be satisfied at all costs)? When I was a student, an « A » on exams wasn’t yet a right… It has almost become one today.
Finally – and it must be repeated constantly – speaking of « the use of digital technologies » in general doesn’t make much sense. There are mind-numbing uses, which slide down the slope of intellectual laziness, and uses that open the mind but require taking personal responsibility, an effort of autonomy and – yes – work. It’s the role of educators to favor the positive uses.
Q9 – Are there clear boundaries between the real world and the virtual world? What could motivate us to continue in the real world when the virtual world offers almost unlimited possibilities for interaction and success?
A9 – There has never been a clear boundary between the virtual world and the actual world. Where is human presence found? As soon as we take on a situation in existence, we inevitably find ourselves in between: between the virtual and the actual, between soul and body, between heaven and earth, between yin and yang. Our existence stretches across an interval, and the fundamental relationship between the virtual and the actual is one of reciprocal transformation. It’s a morphism that projects the sensible onto the intelligible and vice versa.
A practical situation includes an actual context: our posture, our position, what is around us at this precise moment, from our interlocutors to the material environment. It also implies a virtual context: the past in our memory, our plans and expectations, our ideas of what is happening to us. This is how we discern the lines of force and the tensions of the situation, its universe of problems, its obstacles and ways out. Bodily configurations only make sense through the virtual landscape that surrounds them.
We therefore don’t live only in so-called « material » physical reality, but also in the world of meanings. This is what makes us human. Now, if we want to talk about so-called digital media, in addition to their software aspect (programs and data) they are obviously also material: data centers, cables, modems, computers, smartphones, screens, and headphones are all thoroughly material and actual. Furthermore, I don’t know what you’re alluding to when you say that « the virtual world offers almost unlimited possibilities for interaction and success. » The interaction possibilities offered by the digital medium are certainly more diverse than those provided by printing or television, but they are in no way « unlimited », since available time is not infinitely extensible. These possibilities also depend strongly on users’ capacities and on their cultural and social environment. Omnipotence is always an illusion. If you mean that fiction and games (whether or not they have electronic support) offer unlimited possibilities, then yes, that’s an idea with its share of truth. And if you imply that it’s unhealthy to spend most of one’s time playing online video games to the detriment of one’s health, studies, family environment, or work, we can only agree with you. But it’s excess and addiction that are at issue here, with their multiple causes, not « the virtual world. »
Q10 – With the progress of digital technologies, the concept of digital immortality emerges, where our identities can be preserved indefinitely online. How do you understand the relationship between spirituality and this idea of digital immortality?
A10 – This false immortality has nothing to do with spirituality. Why not speak of limestone – or architectural – immortality in the face of Egypt’s pyramids? Another comparison: Shakespeare or Victor Hugo, even Newton or Einstein, are probably more « immortal » than a person whose Facebook account wasn’t deleted after death. If we absolutely must relate the digital to the sacred, I would say that data centers are the new temples and that, in exchange for the sacrifice of our data, we obtain the practical blessings of artificial intelligences and social media.
Q11 – Many experts have highlighted the moral problems present in the organization and construction of norms based on data reported and exploited by AI (biases, racism, and other forms of determinism). How can we control these problems in the digital scenario? Who is responsible or can be held responsible for problems of this nature? Could AI have legal implications?
A11 – There is much talk about the « biases » of this or that artificial intelligence model, as if an unbiased or neutral AI could exist. The question is all the more important as AI is becoming our new interface with symbolic objects: universal pen, panoramic glasses, general loudspeaker, programmer without code, personal assistant. The large generalist language models produced by the dominant platforms now resemble public infrastructure, a new layer of the digital meta-medium. These generalist models can be specialized at little cost with datasets from a particular domain and fine-tuning methods. They can also be equipped with knowledge bases whose facts have been verified.
The results provided by an AI stem from several factors that all contribute to its orientation or, if you prefer, to its « biases » (two of them are illustrated in the sketch after this list).
a) The algorithms themselves determine the types of statistical computation and the structures of the neural networks.
b) The training data favor the languages, cultures, philosophical options, political biases, and prejudices of all kinds of those who produced them.
c) In order to align AI responses with users’ supposed purposes, we correct (or accentuate!) « by hand » the data’s tendencies through what is called RLHF (Reinforcement Learning from Human Feedback).
d) Finally, as with any tool, the user determines the results by means of instructions in natural language (the famous prompts). As I said above, user communities exchange and collaboratively improve such instructions.
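Here is the sketch announced above – a toy bigram completer invented for illustration, in no way comparable to a real language model – showing how factors (b) and (d) play out: the same generation procedure gives different answers depending on which corpus it was trained on and which prompt it receives.

```python
from collections import Counter, defaultdict

def train(corpus):
    """Count which word most often follows each word in the corpus."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for current, nxt in zip(words, words[1:]):
            counts[current][nxt] += 1
    return counts

def complete(model, prompt, length=4):
    """Greedily extend the prompt with the most frequent learned follower."""
    words = prompt.lower().split()
    for _ in range(length):
        followers = model.get(words[-1])
        if not followers:
            break
        words.append(followers.most_common(1)[0][0])
    return " ".join(words)

# Factor (b): two training corpora with different slants.
corpus_a = ["engineers are brilliant innovators", "engineers are brilliant leaders"]
corpus_b = ["engineers are overworked employees", "engineers are overworked contractors"]

model_a, model_b = train(corpus_a), train(corpus_b)
print(complete(model_a, "engineers are"))  # -> engineers are brilliant innovators
print(complete(model_b, "engineers are"))  # -> engineers are overworked employees

# Factor (d): with the same model, the prompt steers the result.
print(complete(model_a, "brilliant"))      # -> brilliant innovators
```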
The power of these systems is matched only by their complexity, heterogeneity, and opacity. Regulatory control of AI, probably necessary, seems difficult.
Responsibility is therefore shared among many actors and processes, but it seems to me that users must be considered the parties chiefly responsible, as with any technique. The ethical and legal questions related to AI are now passionately discussed almost everywhere. It’s an academic research field in full growth, and numerous governments and multinational organizations have issued laws and regulations to govern AI development and use.
