Semantic Computing with IEML
Based on the Information Economy MetaLanguage (IEML), semantic computing brings together several perspectives: an improvement of artificial intelligence, a solution to the problem of semantic interoperability, and an algebraic model of semantic linguistics, all at the service of human collective intelligence.
Neuro-Symbolic Artificial Intelligence
Every animal has a nervous system. Neuronal computing, which is statistical in essence, is the common basis of all animal intelligence. Machine learning, and in particular deep learning, is a partial automation and externalization of this neuronal computing. By contrast, symbolic or logical computing distinguishes human intelligence from other forms of animal intelligence. Language and complex semantic representations are the most obvious manifestations of symbolic computing, which is of course supported by neural networks. After what has been successively labeled automated reasoning, expert systems and the semantic web, knowledge graphs are today's name for the automation and externalization of natural symbolic computing.
The point that we want to make here is that progress in human intelligence – which is what we are looking for – does not necessarily come from an augmentation of neuronal computing power. It may be achieved by the invention and use of new symbolic systems. For example, compared to earlier irregular numbering systems such as the Roman one, the invention of the positional numbering system with a zero markedly improved arithmetical calculation. This example suggests that, of two identical neural networks, one may process much more efficiently than the other simply because of improved data labelling.
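The numbering example above can be made concrete. The sketch below (illustrative helper code, not part of IEML) shows why the positional system improved calculation: with a zero and positional notation, addition reduces to a uniform digit-by-digit procedure with a carry, whereas Roman numerals admit no such digit-wise algorithm and must in practice be re-encoded.

```python
# Roman numeral encoding table: value/symbol pairs in descending order.
ROMAN = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
         (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
         (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]

def add_positional(a: str, b: str) -> str:
    """Digit-wise addition with carry: the uniform algorithm that
    positional notation with a zero makes possible."""
    da = [int(d) for d in reversed(a)]
    db = [int(d) for d in reversed(b)]
    result, carry = [], 0
    for i in range(max(len(da), len(db))):
        s = carry
        if i < len(da): s += da[i]
        if i < len(db): s += db[i]
        result.append(s % 10)
        carry = s // 10
    if carry:
        result.append(carry)
    return "".join(str(d) for d in reversed(result))

def to_roman(n: int) -> str:
    """Roman numerals have no digit-wise procedure: the practical
    strategy is conversion between the two symbolic systems."""
    out = []
    for value, symbol in ROMAN:
        while n >= value:
            out.append(symbol)
            n -= value
    return "".join(out)

print(add_positional("1984", "16"))  # 2000
print(to_roman(2000))                # MM
```

The computation is the same in both systems; only the symbolic coding differs, and the coding determines how efficiently the computation can be carried out.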
In this line of thought, it is only on the basis of an adequately coded symbolic AI that we will be able to effectively exploit our new machine learning capabilities. We stand for a neuro-symbolic perspective on AI, but we think that an improvement of the symbolic part is needed. Symbolic AI was invented before the Internet, when the problem of semantic interoperability did not exist. Because we now have a global memory and because our communication systems are processed by algorithms, natural languages are no longer the right tool for knowledge metadata. Natural languages are multiple, informal, ambiguous, and changing. To make things worse, cultures, trades and disciplines divide reality in different ways. Finally, the numerous metadata systems used to classify data – often inherited from the age of print – are incompatible. The reader may object that the problem of semantic interoperability is solved by the semantic web standard RDF (Resource Description Framework) and other similar standards. It is true that current standards solve the problem of interoperability at a technical – or software – level. However *semantic* interoperability is not about file standards but about categories and architectures of concepts. To learn more on this point, see the following blogpost (a must-read).
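The distinction between technical and semantic interoperability can be sketched in a few lines. Below, two RDF-style datasets are modeled as plain Python sets of subject–predicate–object triples (the URIs are hypothetical, invented for illustration). They share the same triple format, so they merge without any technical problem; but nothing in the format says that their two subject identifiers denote the same concept.

```python
# Two datasets describing the same concept ("physician") with
# unaligned, hypothetical vocabularies. Each triple is
# (subject, predicate, object).
RDFS_LABEL = "http://www.w3.org/2000/01/rdf-schema#label"

dataset_a = {("http://cityA.example/concept/physician", RDFS_LABEL, "physician")}
dataset_b = {("http://cityB.example/terms/medecin", RDFS_LABEL, "médecin")}

# Technical interoperability: both are well-formed triple sets,
# so merging them is trivial.
merged = dataset_a | dataset_b
print(len(merged))  # 2

# Semantic interoperability: the merged graph still contains two
# distinct identifiers for one and the same concept, and the triple
# format itself offers no way to know they should be aligned.
subjects = {s for s, _, _ in merged}
print(len(subjects))  # 2
```

Aligning those two identifiers is precisely the conceptual work that file-format standards do not perform, and that a common semantic coordinate system would.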
Digital computers have existed for less than a century. We still live in the prehistory of automatic computing. Today we enjoy universal coordinate systems for space and time, but no coordinate system for semantics. Public health, demography and economic statistics, training and education resources, talent management, the job market, the Internet of things, smart cities and many other sectors rely on multiple incompatible classification systems and ontologies, both within and across sectors. To take a classic example, disaster management requires intense emergency communication between different services, departments and organizations. But currently these institutions do not share the same metadata system, even inside the same country, the same state or the same administration.
The solution proposed by INTLEKT Metadata to the problem of semantic interoperability is not a universal ontology, not even one standard ontology by domain, which would be a drastic over-simplification and impoverishment of our collective intelligence. We want to promote interoperability and semantic computability while allowing diversity to flourish.
Our solution is rather based on a techno-scientific breakthrough: the invention of a univocal and computable semantic code called IEML (the Information Economy MetaLanguage), which has been specially designed to solve the problem of semantic interoperability while improving the calculability of semantics. In a word, IEML's semantics are optimally computable because they are a function of its syntax. IEML is a programmable language (akin to a computable Esperanto) able to translate any ontology or semantic metadata system and to connect all of their categories. So, if their metadata speak the same metalanguage, a great diversity of classifications and ontologies, reflecting the situations and pragmatic goals of different communities, will be able to connect and exchange concepts.
IEML has a compact dictionary (fewer than 3500 words) that is organized by subject-oriented paradigms and visualized as keyboards. IEML paradigms work as symmetrical, nested and interconnected « micro-ontologies ». This feature enables the weaving of semantic relations between IEML words by means of functions. IEML grammar is completely regular and is embedded in the IEML editor. All IEML texts are produced by the same grammatical operations on the same small dictionary. In brief, a computer only needs a dictionary and a grammar to “understand” an IEML text, which is notoriously not the case for texts in natural languages. Moreover, IEML has the expressive power of a natural language and can therefore translate any language, which makes it an ideal pivot-language.
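The claim that a machine can "understand" a text from a dictionary and a grammar alone can be illustrated with a deliberately simplified toy model. The sketch below is not actual IEML; the words, features and the `+` composition rule are all hypothetical. It shows only the principle that when semantics is a function of syntax, the meaning of any compound expression is computed deterministically from a small dictionary plus regular composition rules, with no ambiguity to resolve.

```python
# Toy dictionary (hypothetical entries, not the IEML dictionary):
# each word maps to a set of semantic features.
DICTIONARY = {
    "water": {"liquid", "natural"},
    "heat":  {"energy", "process"},
    "fall":  {"motion", "process"},
}

def meaning(expr: str) -> set:
    """Compute the meaning of an expression from its syntax alone.
    A word's meaning is its dictionary entry; a compound 'a+b' gets
    its meaning from its parts (here, by set union)."""
    if "+" in expr:
        left, right = expr.split("+", 1)
        return meaning(left) | meaning(right)
    return DICTIONARY[expr]

# Any two machines sharing this dictionary and grammar compute
# identical semantics for every parsable expression.
print(sorted(meaning("water+heat")))  # ['energy', 'liquid', 'natural', 'process']
```

Contrast this with a natural language, where the meaning of a compound cannot in general be computed from its parts ("hot dog" is not a function of "hot" and "dog"): that gap is what a regular grammar over a closed dictionary eliminates.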
IEML can not only improve inter-human communication, but also make inter-machine and human-machine communication more fluid to ensure a collective mastery of the Internet of things, intelligent cities, robots, autonomous vehicles, etc. Contemporary collective intelligence works in a stigmergic way. It is a massively distributed read-write process on our digital memories. By framing our information architecture, we structure our memory, we train our algorithms, we determine our thoughts and influence our actions. Collective intelligence requires metadata intelligence. Everyone should be able to structure her digital information in her own way and, at the same time, be able to exchange it with the utmost precision through the channels of a universal semantic postal service. IEML is the semantic metadata system adapted to the new situation, able to harness our global computing environment for the benefit of human collective intelligence.
A semantic knowledge base organized by an IEML metadata system would play simultaneously three roles:
- an encyclopaedia of a practical domain: a knowledge graph, including causal models à la Judea Pearl;
- a training set (or several training sets) for machine learning;
- a decision support system in real time, able to economize the users' attention while telling them what they need to know.
Today, the encyclopedia part is the work of the knowledge graph people and their RDF triples. Building training sets and their labels is the job of machine learning specialists. Data scientists, for their part, build business intelligence tools. These activities will converge and a new kind of semantic engineer will emerge. Furthermore, communities, institutions and companies of all sorts will be able to share their semantic knowledge in a virtual public database, or to exchange their private data on a new semantic market.
Semantic Interoperability at the Service of Collective Intelligence
At INTLEKT, we create semantic metadata systems fitting your organization’s needs with a focus on complex human systems like software, games, health, the environment and urban phenomena.
Benefiting from our unique patented technology, IEML (the Information Economy MetaLanguage), complex models become explorable and interoperable, and can be translated automatically into several natural languages.
Our lead consultant Pierre Lévy, Ph.D., Fellow of the Royal Society of Canada, has deep experience in knowledge engineering. He has published thirteen books translated into twelve languages, including Collective Intelligence, Becoming Virtual and The Semantic Sphere, which explore the epistemological and anthropological aspects of digital technologies.