AI is at the Heart of the New Communication Ecosystem

This text is based on my presentation at the AI4People summit [https://ai4people.org/advancing-ethical-ai-governance-summit/], organized with the support of the European Union on December 2 and 3, 2025.

AI has become our main interface for accessing our accumulated memory, and also the first medium of communication between humans, since it governs social networks. This new information ecosystem serves simultaneously as a battlefield of narratives and as a place of knowledge creation and sharing; it oscillates between manipulation and collective intelligence. Consequently, one of the essential stakes remains the formation of young minds.

Let’s not forget that AI is also made by people. AI models cannot be separated from the informational ecosystem, which can be described as a closed circuit with three poles: people, data, and models. People create information, feeding the digital memory; digital data trains models; models enhance people’s information-creation capacities, which feed that memory in turn, and so on.

Today, many reflections on AI ethics legitimately focus on the production and regulation of LLMs, or large language models. But too often, we forget the responsibility of those who produce the data, which is now society as a whole.

The dark side is that we are now faced with large-scale data poisoning. For instance, recent reports describe a pro-Russian propaganda operation first identified as “Portal Kombat” and now known as “Pravda.” It is a network of more than 150 websites presenting themselves as innocuous news outlets while in fact relaying the Kremlin’s biased points of view. These sites target every continent, their texts are translated into dozens of languages, and these many translations make them all the more credible. On average, the network publishes 20,270 articles every 48 hours, or approximately 3.6 million articles per year. This production and translation of texts is almost entirely automated. The goal is not to attract human readers (there are relatively few of them) but to serve as training data for AIs, in order to manipulate the models’ users. The main AI models frequently repeat or confirm the toxic information spread by the Pravda network. With machine learning, there is no need for demonstration, proof, facts, or contextualization: repetition and simplicity work perfectly. The more falsehoods AIs are fed, the more damaged our collective memory will become.
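The publication rate reported above can be sanity-checked with simple arithmetic; a minimal sketch, assuming a 365-day year (the annual figure is as reported by investigators):

```python
# Back-of-the-envelope check of the Pravda network's reported output.
articles_per_48h = 20_270          # figure reported for the network
per_day = articles_per_48h / 2     # 48 hours = 2 days
per_year = per_day * 365           # assuming a 365-day year

print(f"{per_year:,.0f} articles per year")  # prints "3,699,275 articles per year"
```

The result, roughly 3.7 million, matches the order of magnitude of the reported annual figure.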

Rather than relying on data scattered across the Web, should we prioritize objective and reliable data found in scientific journals, encyclopedias, and mainstream media? Wikipedia, for example, is one of the most reputable sources for language models. Yet several Wikipedia articles have recently been taken over by Islamists and Hamas supporters, who exploited the encyclopedia’s operating rules to their advantage. Things have gone so far that Wikipedia co-founders Jimmy Wales and Larry Sanger have publicly expressed concern.

Another example: an investigation conducted by the BBC found that AI assistants distort news content in 45% of cases, and that half of young people (under thirty-five) trust their accuracy. The BBC points an accusing finger at AI assistants. Yet, a few months later, the BBC’s director general and head of news were forced to resign following the misleading editing of a Donald Trump speech and a report noting systematic Islamist bias in BBC Arabic broadcasts.

It is clear that ethical problems cannot be limited to the models themselves; they must extend, first and foremost, to the creation of the training data, and that means the totality of our online behavior. Each article, blog entry, podcast, or video we post produces data that will eventually train the artificial neurons of AIs, which will then answer questions, draft texts, instruct students, and guide policies. Our responsibility is all the greater when we are in a position of authority, because AI models assign greater weight to information provided by journalists, teachers, scientific researchers, textbook writers, and producers of official websites.

To conclude, let me offer a few educational watchwords for the age of AI: do not abandon personal memorization; practice abstraction and synthesis; question at length rather than settling for first answers; always place facts back within the multiple contexts from which they derive their meaning; and, finally, let us take responsibility for the messages we entrust to digital memory, because this information helps shape our collective intelligence.

REFERENCES

The Pravda Network

https://www.newsguardrealitycheck.com/p/a-well-funded-moscow-based-global?
https://www.fdd.org/analysis/policy_briefs/2025/01/31/russian-malign-influence-campaigns-expand-onto-bluesky/
https://www.sgdsn.gouv.fr/files/files/20240212_NP_SGDSN_VIGINUM_PORTAL-KOMBAT-NETWORK_ENG_VF.pdf

Wikipedia

https://www.detroitnews.com/story/business/2025/03/07/wikipedia-roiled-with-internal-strife-overpage-editsabout-the-middle-east/81935309007/
https://www.thejc.com/opinion/how-the-gaza-coverage-hard-wired-anti-israel-into-ai-snmil3i1
https://www.timesofisrael.com/wikipedia-co-founder-locks-edits-on-gaza-genocide-page-citing-anti-israel-bias/
https://www.adl.org/resources/report/editing-hate-how-anti-israel-and-anti-jewish-bias-undermines-wikipedias-neutrality
https://www.piratewires.com/p/how-wikipedia-s-pro-hamas-editors-hijacked-the-israel-palestine-narrative
https://besacenter.org/debunking-the-genocide-allegationsa-reexamination-of-the-israel-hamas-war-2023-2025/

The BBC

https://www.bbc.co.uk/mediacentre/2025/new-ebu-research-ai-assistants-news-content
https://www.nbcnews.com/news/us-news/bbc-director-resigns-criticism-broadcasters-editing-trump-speech-rcna242858
https://lpost.be/2025/11/10/crise-majeure-a-la-bbc-a-londres-le-patron-tim-davie-et-la-cheffe-de-linfo-deborah-turness-demissionnent-apres-un-scandale-de-montage/
https://camera-uk.org/2025/11/06/background-to-the-telegraphs-bbc-bias-report/

Published by Pierre Lévy

Associate Researcher at the University of Montreal (Canada), Fellow of the Royal Society of Canada. Author of Collective Intelligence (1994), Becoming Virtual (1995), Cyberculture (1997), The Semantic Sphere (2011), and several other books translated into numerous languages. CEO of INTLEKT Metadata Inc.
