Semantics, artificial intelligence and collective intelligence

Art: M.C. Escher

INTRODUCING SEMANTIC COMPUTING

Language allows a dynamic coordination between the networks of concepts held by the members of a community, from the smallest scale, such as a family or a team, to the largest political or economic units. It also enables storytelling, dialogue, questioning and reasoning. Language supports not only communication but also thought, as well as the conceptual organization of memory, complementary to its emotional and sensorimotor structure.

But how does language work? On the receiving end, we hear a sequence of sounds that we translate into a network of concepts, bringing meaning to a statement. On the transmitting side, from a network of concepts that we have in mind – a meaning to be conveyed – we generate a sequence of sounds. Language is the interface between sound sequences and concept networks.

Instead of phoneme chains (sounds), there can also be sequences of ideograms, letters, or gestures, as in sign language. What remains constant across all languages and writing systems is this quasi-automatic interfacing between a sequence of perceptible images (auditory, visual, tactile) and a graph of abstract concepts (general categories). Relations between concepts are themselves considered concepts.

This reciprocal translation between a sequence of images (the signifier) and a network of concepts (the signified) suggests that a mathematical category could model language by organizing a correspondence between an algebra and a graph structure. The algebra would regulate reading and writing operations on texts, while the graph structure would organize operations on nodes and oriented links. To each text would correspond a network of concepts, and vice versa. Operations on texts would dynamically reflect operations on conceptual graphs.
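To give a rough idea of this text-graph correspondence (the vocabulary, grammar and data structures below are invented for the illustration and are not IEML's actual algebra), here is a minimal Python sketch in which a toy reading function maps a string of symbols to a small graph of concepts and labeled links, and a writing function maps the graph back to a string:

```python
# Illustrative sketch only: a toy correspondence between texts and concept graphs.
# The grammar and vocabulary below are invented for this example, not IEML.

from dataclasses import dataclass

@dataclass(frozen=True)
class Link:
    source: str      # concept node
    relation: str    # relations are themselves concepts
    target: str      # concept node

def read(text: str) -> set[Link]:
    """Translate a 'subject relation object' text into a graph (a set of links)."""
    links = set()
    for clause in text.split(";"):
        subject, relation, obj = clause.strip().split(" ", 2)
        links.add(Link(subject, relation, obj))
    return links

def write(graph: set[Link]) -> str:
    """Translate a graph back into a text, the inverse operation."""
    return "; ".join(f"{l.source} {l.relation} {l.target}" for l in sorted(graph, key=str))

g = read("i choose vegetarian-menu; vegetarian-menu excludes meat")
assert read(write(g)) == g   # round trip: operations on texts mirror operations on graphs
```

The round trip at the end is the point of the sketch: whatever we do on the side of texts can be mirrored as an operation on the graph of concepts, and vice versa.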

Once we have a regular language to encode strings of signifiers, we can transform sequences of symbols into syntagmatic trees (syntax is the order of syntagms) and vice versa. However, while a sentence's syntagmatic tree – its internal grammatical structure – is indispensable to understanding its meaning, it is not sufficient. Each linguistic expression lies at the intersection of a syntagmatic axis and a paradigmatic axis. The syntagmatic tree represents the internal semantic network of a sentence; the paradigmatic axis represents its external semantic network, in particular its relations with sentences that share the same structure but from which it is distinct. To understand the phrase "I choose the vegetarian menu", it is of course necessary to recognize that the verb is "to choose", the subject "I" and the object "the vegetarian menu", and to know, moreover, that "vegetarian" qualifies "menu". But one must also recognize that vegetarian is opposed to meaty and to vegan, and therefore remember that the language contains systems of semantic oppositions. Establishing semantic relations between concepts thus presupposes that we recognize not only the syntagmatic trees internal to sentences, but also the concepts and their components, which belong to paradigmatic matrices external to the sentence, specific to a language or to particular practical domains.
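To make the two axes concrete, the following sketch (my own toy illustration, not IEML notation) represents the example sentence as a syntagmatic tree and places "vegetarian" in a hypothetical paradigm of opposed terms; substituting along the paradigm generates the neighbouring sentences from which our sentence is distinguished:

```python
# Illustration only: the tree structure and the paradigm below are invented for this example.

# Syntagmatic axis: the internal grammatical structure of the sentence.
syntagmatic_tree = {
    "verb": "choose",
    "subject": "I",
    "object": {"head": "menu", "qualifier": "vegetarian"},
}

# Paradigmatic axis: terms that can replace each other in the same position,
# and whose mutual oppositions give "vegetarian" part of its meaning.
menu_paradigm = {"vegetarian", "meaty", "vegan"}

def variants(tree: dict, paradigm: set) -> list[str]:
    """Generate the sentences that share this tree but differ along the paradigm."""
    sentences = []
    for term in sorted(paradigm):
        obj = tree["object"]
        sentences.append(f'{tree["subject"]} {tree["verb"]} the {term} {obj["head"]}')
    return sentences

print(variants(syntagmatic_tree, menu_paradigm))
# ['I choose the meaty menu', 'I choose the vegan menu', 'I choose the vegetarian menu']
```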

Because natural languages are ambiguous and irregular, I have designed a mathematical language (IEML) translatable into natural languages: a computable language that can algebraically encode not only syntagmatic trees but also the paradigmatic matrices in which words and concepts take on their meaning. Every sentence in the IEML metalanguage is located precisely at the intersection of a syntagmatic tree and paradigmatic matrices.

Based on the regular syntagmatic-paradigmatic grid of IEML, we can generate and recognize semantic relations between concepts in a functional way, and thereby build knowledge graphs, ontologies, and data models. On the AI side, encoding labels – that is, categorizing data – in IEML's algebraic language would certainly facilitate machine learning.
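As a simplified illustration of what "functional" means here (the coordinates and the relation rules below are assumptions made up for the example, not the actual IEML grid), suppose each concept is addressed by explicit coordinates in a paradigmatic grid; relations can then be computed by a function over those coordinates instead of being declared one by one:

```python
# Sketch under simplifying assumptions: each concept is addressed by explicit
# coordinates in a paradigmatic grid, so relations can be computed, not hand-listed.

from itertools import combinations

# Hypothetical grid: (semantic field, row, column) coordinates for a few concepts.
grid = {
    "vegetarian": ("diet", 0, 0),
    "vegan":      ("diet", 0, 1),
    "meaty":      ("diet", 1, 0),
    "menu":       ("meal", 0, 0),
}

def relation(a: str, b: str) -> str:
    """Derive a semantic relation from the concepts' positions in the grid."""
    field_a, row_a, col_a = grid[a]
    field_b, row_b, col_b = grid[b]
    if field_a != field_b:
        return "unrelated fields"
    if row_a == row_b or col_a == col_b:
        return "same paradigm (opposition)"
    return "same field"

# A small knowledge graph generated functionally from the grid.
knowledge_graph = {(a, relation(a, b), b) for a, b in combinations(grid, 2)}
for triple in sorted(knowledge_graph):
    print(triple)
```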

Technically, IEML is a lightweight and decentralized project. It consists of a dictionary linking IEML to natural languages (French and English so far), an open-source parser supporting computable functions on language expressions, and a platform for collaborative editing and sharing of concepts and ontologies. Beyond AI, my vision for IEML is to foster the semantic interoperability of digital memories as well as a synergy between personal cognitive empowerment and the transparency and reflexivity of collective intelligence. The development, maintenance and use of a semantic protocol based on IEML would require ongoing research and training efforts.

For more details, go to: https://intlekt.io/2022/10/02/semantic-computing-with-ieml-3/

Published by Pierre Lévy

Associate Professor at the University of Montreal (Canada), Fellow of the Royal Society of Canada. Author of Collective Intelligence (1994), Becoming Virtual (1995), Cyberculture (1997), The Semantic Sphere (2011) and several other books translated into numerous languages.
